
The U.S. Economy in the 1920s

Gene Smiley, Marquette University

Introduction

The interwar period in the United States, and in the rest of the world, is a most interesting era. The decade of the 1930s marked the most severe depression in our history and ushered in sweeping changes in the role of government. Economists and historians have rightly given much attention to that decade. However, with all of this concern about the growing and developing role of government in economic activity in the 1930s, the decade of the 1920s often tends to get overlooked. This is unfortunate because the 1920s were a period of vigorous, vital economic growth. They mark the first truly modern decade, and dramatic economic developments are found in those years. There was a rapid adoption of the automobile to the detriment of passenger rail travel. Though suburbs had been growing since the late nineteenth century, their growth had been tied to rail or trolley access, and this was limited to the largest cities. The flexibility of car access changed this, and the growth of suburbs began to accelerate. The demands of trucks and cars led to a rapid growth in the construction of all-weather surfaced roads to facilitate their movement. The rapidly expanding electric utility networks led to new consumer appliances and new types of lighting and heating for homes and businesses. The introduction of the radio, radio stations, and commercial radio networks began to break up rural isolation, as did the expansion of local and long-distance telephone communications. Recreational activities such as traveling, going to movies, and professional sports became major businesses. The period saw major innovations in business organization and manufacturing technology. The Federal Reserve System first tested its powers, and the United States moved to a dominant position in international trade and global business. These things make the 1920s a period of considerable importance independent of what happened in the 1930s.

National Product and Income and Prices

We begin the survey of the 1920s with an examination of the overall production in the economy, GNP, the most comprehensive measure of aggregate economic activity. Real GNP growth during the 1920s was relatively rapid, 4.2 percent a year from 1920 to 1929 according to the most widely used estimates. (Historical Statistics of the United States, or HSUS, 1976) Real GNP per capita grew 2.7 percent per year between 1920 and 1929. By both nineteenth and twentieth century standards these were relatively rapid rates of real economic growth and they would be considered rapid even today.
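
The 4.2 percent figure is a compound annual growth rate. A minimal sketch of that arithmetic is given below; the index values are invented for illustration and are not the HSUS estimates.

```python
# Compound annual growth rate (CAGR) between two endpoint values.
# The GNP index values below are illustrative placeholders, not the HSUS series.
def cagr(y_start: float, y_end: float, years: int) -> float:
    """Average annual growth rate implied by two endpoint values."""
    return (y_end / y_start) ** (1 / years) - 1

y_1920 = 100.0                  # index value, assumed
y_1929 = y_1920 * 1.042 ** 9    # what nine years of 4.2 percent growth implies
print(f"{cagr(y_1920, y_1929, 9):.1%}")   # 4.2%
```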

There were several interruptions to this growth. In mid-1920 the American economy began to contract and the 1920-1921 depression lasted about a year, but a rapid recovery reestablished full employment by 1923. As will be discussed below, the Federal Reserve System’s monetary policy was a major factor in initiating the 1920-1921 depression. From 1923 through 1929 growth was much smoother. There was a very mild recession in 1924 and another mild recession in 1927, both of which may be related to oil price shocks (McMillin and Parker, 1994). The 1927 recession was also associated with Henry Ford’s shutdown of all his factories for six months in order to change over from the Model T to the new Model A automobile. Though the Model T’s market share was declining after 1924, in 1926 Ford’s Model T still made up nearly 40 percent of all the new cars produced and sold in the United States. The Great Depression began in the summer of 1929, possibly as early as June. The initial downturn was relatively mild but the contraction accelerated after the crash of the stock market at the end of October. Real total GNP fell 10.2 percent from 1929 to 1930 while real GNP per capita fell 11.5 percent from 1929 to 1930.


Price changes during the 1920s are shown in Figure 2. The Consumer Price Index, CPI, is a better measure of changes in the prices of commodities and services that a typical consumer would purchase, while the Wholesale Price Index, WPI, is a better measure of changes in the cost of inputs for businesses. As the figure shows, the 1920-1921 depression was marked by extraordinarily large price decreases. Consumer prices fell 11.3 percent from 1920 to 1921 and fell another 6.6 percent from 1921 to 1922. After that consumer prices were relatively constant and actually fell slightly from 1926 to 1927 and from 1927 to 1928. Wholesale prices show greater variation. The 1920-1921 depression hit farmers very hard. Prices had been bid up with the increasing foreign demand during the First World War. As European production began to recover after the war, prices began to fall. Though the prices of agricultural products fell from 1919 to 1920, the depression brought on dramatic declines in the prices of raw agricultural produce as well as many other inputs that firms employ. In the scramble to beat price increases during 1919, firms had built up large inventories of raw materials and purchased inputs, and this temporary increase in demand led to even larger price increases. With the depression firms began to draw down those inventories. The result was that the prices of raw materials and manufactured inputs fell rapidly along with the prices of agricultural produce—the WPI dropped 45.9 percent between 1920 and 1921. The price changes probably tend to overstate the severity of the 1920-1921 depression. Romer’s work (1988) suggests that prices changed much more easily in that depression, reducing the drop in production and employment. Wholesale prices in the rest of the 1920s were relatively stable, though they were more likely to fall than to rise.

Economic Growth in the 1920s

Despite the 1920-1921 depression and the minor interruptions in 1924 and 1927, the American economy exhibited impressive economic growth during the 1920s. Though some commentators in later years thought that the existence of some slow growing or declining sectors in the twenties suggested weaknesses that might have helped bring on the Great Depression, few now argue this. Economic growth never occurs in all sectors at the same time and at the same rate. Growth reallocates resources from declining or slower growing sectors to the more rapidly expanding sectors in accordance with new technologies, new products and services, and changing consumer tastes.

Economic growth in the 1920s was impressive. Ownership of cars, new household appliances, and housing was spread widely through the population. New products and processes of producing those products drove this growth. The widening use of electricity in production and the growing adoption of the moving assembly line in manufacturing combined to bring on a continuing rise in the productivity of labor and capital. Though the average workweek in most manufacturing remained essentially constant throughout the 1920s, in a few industries, such as railroads and coal production, it declined. (Whaples 2001) New products and services created new markets such as the markets for radios, electric iceboxes, electric irons, fans, electric lighting, vacuum cleaners, and other labor-saving household appliances. This electricity was distributed by the growing electric utilities. The stocks of those companies helped create the stock market boom of the late twenties. RCA, one of the glamour stocks of the era, paid no dividends but its value appreciated because of expectations for the new company. Like the Internet boom of the late 1990s, the electricity boom of the 1920s fed a rapid expansion in the stock market.

Fed by continuing productivity advances and new products and services and facilitated by an environment of stable prices that encouraged production and risk taking, the American economy embarked on a sustained expansion in the 1920s.

Population and Labor in the 1920s

At the same time that overall production was growing, population growth was declining. As can be seen in Figure 3, from an annual rate of increase of 1.85 and 1.93 percent in 1920 and 1921, respectively, population growth rates fell to 1.23 percent in 1928 and 1.04 percent in 1929.

These changes in the overall growth rate were linked to the birth and death rates of the resident population and a decrease in foreign immigration. Though the crude death rate changed little during the period, the crude birth rate fell sharply into the early 1930s. (Figure 4) There are several explanations for the decline in the birth rate during this period. First, there was an accelerated rural-to-urban migration. Urban families have tended to have fewer children than rural families because urban children do not augment family incomes through their work as unpaid workers as rural children do. Second, the period also saw continued improvement in women’s job opportunities and a rise in their labor force participation rates.

Immigration also fell sharply. In 1917 the federal government began to limit immigration and in 1921 an immigration act limited the number of prospective citizens of any nationality entering the United States each year to no more than 3 percent of that nationality’s resident population as of the 1910 census. A new act in 1924 lowered this to 2 percent of the resident population at the 1890 census and more firmly blocked entry for people from central, southern, and eastern European nations. The limits were relaxed slightly in 1929.

The American population also continued to move during the interwar period. Two regions experienced the largest losses in population shares, New England and the Plains. For New England this was a continuation of a long-term trend. The population share for the Plains region had been rising through the nineteenth century. In the interwar period its agricultural base, combined with the continuing shift from agriculture to industry, led to a sharp decline in its share. The regions gaining population were the Southwest and, particularly, the Far West; California began its rapid growth at this time.

During the 1920s the labor force grew at a more rapid rate than population. This somewhat more rapid growth came from the declining share of the population less than 14 years old and therefore not in the labor force. In contrast, the labor force participation rate, or fraction of the population aged 14 and over that was in the labor force, declined during the twenties from 57.7 percent to 56.3 percent. This was entirely due to a fall in the male labor force participation rate from 89.6 percent to 86.8 percent as the female labor force participation rate rose from 24.3 percent to 25.1 percent. The primary source of the fall in male labor force participation rates was a rising retirement rate. Employment rates for males who were 65 or older fell from 60.1 percent in 1920 to 58.0 percent in 1930.
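
The aggregate participation rate is a population-share-weighted average of the male and female rates. The sketch below illustrates that arithmetic; the male and female shares of the population aged 14 and over are assumed round numbers for illustration, not census weights.

```python
# Aggregate labor force participation rate as a weighted average of group rates.
# The male/female population shares are assumed for illustration only.
def participation_rate(rates, shares):
    return sum(rates[group] * shares[group] for group in rates)

rates_1920 = {"male": 89.6, "female": 24.3}   # percent, from the text
rates_1930 = {"male": 86.8, "female": 25.1}
shares = {"male": 0.51, "female": 0.49}       # hypothetical weights

print(round(participation_rate(rates_1920, shares), 1))  # ~57.6, near the 57.7 cited
print(round(participation_rate(rates_1930, shares), 1))  # ~56.6, near the 56.3 cited
```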

With the depression of 1920-1921 the unemployment rate rose rapidly from 5.2 to 8.7 percent. The recovery reduced unemployment to an average rate of 4.8 percent in 1923. The unemployment rate rose to 5.8 percent in the recession of 1924 and to 5.0 percent with the slowdown in 1927. Otherwise unemployment remained relatively low. The onset of the Great Depression from the summer of 1929 on brought the unemployment rate from 4.6 percent in 1929 to 8.9 percent in 1930. (Figure 5)

Earnings for laborers varied during the twenties. Table 1 presents average weekly earnings for 25 manufacturing industries. For these industries male skilled and semi-skilled laborers generally commanded a premium of 35 percent over the earnings of unskilled male laborers in the twenties. Unskilled males received on average 35 percent more than females during the twenties. Real average weekly earnings for these 25 manufacturing industries rose somewhat during the 1920s. For skilled and semi-skilled male workers real average weekly earnings rose 5.3 percent between 1923 and 1929, while real average weekly earnings for unskilled males rose 8.7 percent between 1923 and 1929. Real average weekly earnings for females rose only 1.7 percent between 1923 and 1929. Real weekly earnings for bituminous and lignite coal miners fell as the coal industry encountered difficult times in the late twenties, and the real daily wage rate for farmworkers in the twenties, reflecting the ongoing difficulties in agriculture, fell after the recovery from the 1920-1921 depression.
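
“Real” earnings here means money earnings deflated by a price index. A small sketch of that conversion is given below; the nominal wage and price index values are invented for illustration and are not the Table 1 data.

```python
# Converting nominal weekly earnings to real earnings with a price index.
# Wage and price index figures below are invented, not Table 1 data.
def real_earnings(nominal: float, price_index: float, base: float = 100.0) -> float:
    return nominal * base / price_index

print(round(real_earnings(nominal=25.00, price_index=171.4), 2))  # ~14.59 in base-year dollars
print(round(real_earnings(nominal=26.50, price_index=171.0), 2))  # ~15.50: with prices flat, the gain is real
```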

The 1920s were not kind to labor unions even though the First World War had solidified the dominance of the American Federation of Labor among labor unions in the United States. The rapid growth in union membership fostered by federal government policies during the war ended in 1919. A committee of AFL craft unions undertook a successful membership drive in the steel industry in that year. When U.S. Steel refused to bargain, the committee called a strike, the failure of which was a sharp blow to the unionization drive. (Brody, 1965) In the same year, the United Mine Workers undertook a large strike and also lost. These two lost strikes and the 1920-21 depression took the impetus out of the union movement and led to severe membership losses that continued through the twenties. (Figure 6)

Under Samuel Gompers’s leadership, the AFL’s “business unionism” had attempted to promote the union and collective bargaining as the primary answer to the workers’ concerns with wages, hours, and working conditions. The AFL officially opposed any government actions that would have diminished worker attachment to unions by providing competing benefits, such as government sponsored unemployment insurance, minimum wage proposals, maximum hours proposals and social security programs. As Lloyd Ulman (1961) points out, the AFL, under Gompers’s direction, differentiated between proposed laws on the basis of whether they would or would not aid collective bargaining. After Gompers’s death, William Green led the AFL in a policy change as the AFL promoted the idea of union-management cooperation to improve output and promote greater employer acceptance of unions. But Irving Bernstein (1965) concludes that, on the whole, union-management cooperation in the twenties was a failure.

To combat the appeal of unions in the twenties, firms used the “yellow-dog” contract requiring employees to swear they were not union members and would not join one; the “American Plan” promoting the open shop and contending that the closed shop was un-American; and welfare capitalism. The most common aspects of welfare capitalism included personnel management to handle employment issues and problems, the doctrine of “high wages,” company group life insurance, old-age pension plans, stock-purchase plans, and more. Some firms formed company unions to thwart independent unionization and the number of company-controlled unions grew from 145 to 432 between 1919 and 1926.

Until the late thirties the AFL was a voluntary association of independent national craft unions. Craft unions relied upon the particular skills the workers had acquired (their craft) to distinguish the workers and provide barriers to the entry of other workers. Most craft unions required a period of apprenticeship before a worker was fully accepted as a journeyman worker. The skills, and often lengthy apprenticeship, constituted the entry barrier that gave the union its bargaining power. There were only a few unions that were closer to today’s industrial unions where the required skills were much less (or nonexistent) making the entry of new workers much easier. The most important of these industrial unions was the United Mine Workers, UMW.

The AFL had been created on two principles: the autonomy of the national unions and the exclusive jurisdiction of the national union. Individual union members were not, in fact, members of the AFL; rather, they were members of the local and national union, and the national was a member of the AFL. Representation in the AFL gave dominance to the national unions, and, as a result, the AFL had little effective power over them. The craft lines, however, had never been distinct and increasingly became blurred. The AFL was constantly mediating jurisdictional disputes between member national unions. Because the AFL and its individual unions were not set up to appeal to and work for the relatively less skilled industrial workers, union organizing and growth lagged in the twenties.

Agriculture

The onset of the First World War in Europe brought unprecedented prosperity to American farmers. As agricultural production in Europe declined, the demand for American agricultural exports rose, leading to rising farm product prices and incomes. In response to this, American farmers expanded production by moving onto marginal farmland, such as Wisconsin cutover property on the edge of the woods and hilly terrain in the Ozark and Appalachian regions. They also increased output by purchasing more machinery, such as tractors, plows, mowers, and threshers. The price of farmland, particularly marginal farmland, rose in response to the increased demand, and the debt of American farmers increased substantially.

This expansion of American agriculture continued past the end of the First World War as farm exports to Europe and farm prices initially remained high. However, agricultural production in Europe recovered much faster than most observers had anticipated. Even before the onset of the short depression in 1920, farm exports and farm product prices had begun to fall. During the depression, farm prices virtually collapsed. From 1920 to 1921, the consumer price index fell 11.3 percent, the wholesale price index fell 45.9 percent, and the farm products price index fell 53.3 percent. (HSUS, Series E40, E42, and E135)

Real average net income per farm fell 72.6 percent between 1920 and 1921 and, though rising in the twenties, never recovered the relative levels of 1918 and 1919. (Figure 7) Farm mortgage foreclosures rose and stayed at historically high levels for the entire decade of the 1920s. (Figure 8) The value of farmland and buildings fell throughout the twenties and, for the first time in American history, the number of cultivated acres actually declined as farmers pulled back from the marginal farmland brought into production during the war. Rather than indicators of a general depression in agriculture in the twenties, these were the results of the financial commitments made by overoptimistic American farmers during and directly after the war. The foreclosures were generally on second mortgages rather than on first mortgages as they were in the early 1930s. (Johnson, 1973; Alston, 1983)

A Declining Sector

A major difficulty in analyzing the interwar agricultural sector lies in separating the effects of the 1920-21 and 1929-33 depressions from those that arose because agriculture was declining relative to the other sectors. A very slowly growing demand for basic agricultural products and significant increases in the productivity of labor, land, and machinery in agricultural production, combined with much more rapid extensive economic growth in the nonagricultural sectors of the economy, required a shift of resources, particularly labor, out of agriculture. (Figure 9) The market induces labor to move voluntarily from one sector to another through income differentials, suggesting that even in the absence of the effects of the depressions, farm incomes would have been lower than nonfarm incomes so as to bring about this migration.

The continuous substitution of tractor power for horse and mule power released hay and oats acreage to grow crops for human consumption. Though cotton and tobacco continued as the primary crops in the south, the relative production of cotton continued to shift to the west as production in Arkansas, Missouri, Oklahoma, Texas, New Mexico, Arizona, and California increased. As quotas reduced immigration and incomes rose, the demand for cereal grains grew slowly—more slowly than the supply—and the demand for fruits, vegetables, and dairy products grew. Refrigeration and faster freight shipments expanded the milk sheds further from metropolitan areas. Wisconsin and other North Central states began to ship cream and cheeses to the Atlantic Coast. Due to transportation improvements, specialized truck farms and the citrus industry became more important in California and Florida. (Parker, 1972; Soule, 1947)

The relative decline of the agricultural sector in this period was closely related to the very low income elasticity of demand for many farm products, particularly cereal grains, pork, and cotton. As incomes grew, the demand for these staples grew much more slowly. At the same time, rising land and labor productivity were increasing the supplies of staples, causing real prices to fall.
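
Income elasticity of demand is the percentage change in quantity demanded divided by the percentage change in income; a value well below one means demand grows far more slowly than income. The sketch below uses invented numbers purely to illustrate the point about staples.

```python
# Income elasticity of demand: percent change in quantity demanded
# per one percent change in income. The figures below are illustrative only.
def income_elasticity(pct_change_quantity: float, pct_change_income: float) -> float:
    return pct_change_quantity / pct_change_income

# If incomes rise 10 percent while cereal-grain purchases rise only 2 percent,
# the income elasticity is 0.2: demand for staples lags far behind income growth.
print(income_elasticity(2.0, 10.0))   # 0.2
```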

Table 3 presents selected agricultural productivity statistics for these years. Those data indicate that there were greater gains in labor productivity than in land productivity (or per acre yields). Per acre yields in wheat and hay actually decreased between 1915-19 and 1935-39. These productivity increases, which released resources from the agricultural sector, were the result of technological improvements in agriculture.

Technological Improvements In Agricultural Production

In many ways the adoption of the tractor in the interwar period symbolizes the technological changes that occurred in the agricultural sector. This changeover in the power source that farmers used had far-reaching consequences and altered the organization of the farm and the farmers’ lifestyle. The adoption of the tractor was land saving (by releasing acreage previously used to produce crops for workstock) and labor saving. At the same time it increased the risks of farming because farmers were now much more exposed to the marketplace. They could not produce their own fuel for tractors as they had for the workstock. Rather, this had to be purchased from other suppliers. Repair and replacement parts also had to be purchased, and sometimes the repairs had to be undertaken by specialized mechanics. The purchase of a tractor also commonly required the purchase of new complementary machines; therefore, the decision to purchase a tractor was not an isolated one. (White, 2001; Ankli, 1980; Ankli and Olmstead, 1981; Musoke, 1981; Whatley, 1987). These changes resulted in more and more farmers purchasing and using tractors, but the rate of adoption varied sharply across the United States.

Technological innovations in plants and animals also raised productivity. Hybrid seed corn increased yields from an average of 40 bushels per acre to 100 to 120 bushels per acre. New varieties of wheat were developed from the hardy Russian and Turkish wheat varieties which had been imported. The U.S. Department of Agriculture’s Experiment Stations took the lead in developing wheat varieties for different regions. For example, in the Columbia River Basin new varieties raised yields from an average of 19.1 bushels per acre in 1913-22 to 23.1 bushels per acre in 1933-42. (Shepherd, 1980) New hog breeds produced more meat and new methods of swine sanitation sharply increased the survival rate of piglets. An effective serum for hog cholera was developed, and the federal government led the way in the testing and eradication of bovine tuberculosis and brucellosis. Prior to the Second World War, a number of pesticides to control animal disease were developed, including cattle dips and disinfectants. By the mid-1920s a vaccine for “blackleg,” an infectious, usually fatal disease that particularly struck young cattle, was completed. The cattle tick, which carried Texas Fever, was largely controlled through inspections. (Schlebecker, 1975; Bogue, 1983; Wood, 1980)

Federal Agricultural Programs in the 1920s

Though there was substantial agricultural discontent in the period from the Civil War to the late 1890s, the period from then to the onset of the First World War was relatively free from overt farmers’ complaints. In later years farmers dubbed the 1910-14 period agriculture’s “golden years” and used the prices of farm crops and farm inputs in that period as a standard by which to judge crop and input prices in later years. The problems that arose in the agricultural sector during the twenties once again led to insistent demands by farmers for government to alleviate their distress.

Though there were increasing calls for direct federal government intervention to limit production and raise farm prices, such intervention did not come until Roosevelt took office. Rather, there was a reliance upon tariffs, the traditional method to aid injured groups, and upon the “sanctioning and promotion of cooperative marketing associations.” In 1921, with the Packers and Stockyards Act and the Grain Futures Act, Congress attempted to control the grain exchanges and compel merchants and stockyards to charge “reasonable rates.” In 1922 Congress passed the Capper-Volstead Act to promote agricultural cooperatives and the Fordney-McCumber Tariff to impose high duties on most agricultural imports. The Cooperative Marketing Act of 1924 did not bolster failing cooperatives as it was supposed to do. (Hoffman and Libecap, 1991)

Twice between 1924 and 1928 Congress passed “McNary-Haugen” bills, but President Calvin Coolidge vetoed both. The McNary-Haugen bills proposed to establish “fair” exchange values (based on the 1910-14 period) for each product and to maintain them through tariffs and a private corporation that would be chartered by the government and could buy enough of each commodity to keep its price up to the computed fair level. The revenues were to come from taxes imposed on farmers. The Hoover administration passed the Agricultural Marketing Act in 1929 and the Hawley-Smoot Tariff in 1930. The Agricultural Marketing Act committed the federal government to a policy of stabilizing farm prices through several nongovernment institutions, but these failed during the depression. Federal intervention in the agricultural sector really came of age during the New Deal era of the 1930s.

Manufacturing

Agriculture was not the only sector experiencing difficulties in the twenties. Other industries, such as textiles, boots and shoes, and coal mining, also experienced trying times. However, at the same time that these industries were declining, other industries, such as electrical appliances, automobiles, and construction, were growing rapidly. The simultaneous existence of growing and declining industries has been common to all eras because economic growth and technological progress never affect all sectors in the same way. In general, in manufacturing there was a rapid rate of growth of productivity during the twenties. The rise of real wages due to immigration restrictions and the slower growth of the resident population spurred this. Transportation improvements and communications advances were also responsible. These developments brought about differential growth in the various manufacturing sectors in the United States in the 1920s.

Because of the historic pattern of economic development in the United States, the Northeast was the first area to really develop a manufacturing base. By the mid-nineteenth century the East North Central region was creating a manufacturing base, and the other regions began to create manufacturing bases in the last half of the nineteenth century, resulting in a relative westward and southward shift of manufacturing activity. This trend continued in the 1920s as the New England and Middle Atlantic regions’ shares of manufacturing employment fell while all of the other regions—excluding the West North Central region—gained. There was considerable variation in the growth of the industries and shifts in their ranking during the decade. The largest broadly defined industries were, not surprisingly, food and kindred products; textile mill products; those producing and fabricating primary metals; machinery production; and chemicals. When industries are more narrowly defined, the automobile industry, which ranked third in manufacturing value added in 1919, ranked first by the mid-1920s.

Productivity Developments

Gavin Wright (1990) has argued that one of the underappreciated characteristics of American industrial history has been its reliance on mineral resources. Wright argues that the growing American strength in industrial exports and industrialization in general relied on an increasing intensity in the use of nonreproducible natural resources. The large American market was knit together as one large market without internal barriers through the development of widespread low-cost transportation. Many distinctively American developments, such as continuous-process, mass-production methods, were associated with the “high throughput” of fuel and raw materials relative to labor and capital inputs. As a result the United States became the dominant industrial force in the world during the 1920s and 1930s. According to Wright, after World War II “the process by which the United States became a unified ‘economy’ in the nineteenth century has been extended to the world as a whole. To a degree, natural resources have become commodities rather than part of the ‘factor endowment’ of individual countries.” (Wright, 1990)

In addition to this growing intensity in the use of nonreproducible natural resources as a source of productivity gains in American manufacturing, other technological changes during the twenties and thirties tended to raise the productivity of the existing capital through the replacement of critical types of capital equipment with superior equipment and through changes in management methods. (Soule, 1947; Lorant, 1967; Devine, 1983; Oshima, 1984) Some changes, such as the standardization of parts and processes and the reduction of the number of styles and designs, raised the productivity of both capital and labor. Modern management techniques, first introduced by Frederick W. Taylor, were applied on a wider scale.

One of the important forces contributing to mass production and increased productivity was the shift to electric power. (Devine, 1983) By 1929 about 70 percent of manufacturing activity relied on electricity, compared to roughly 30 percent in 1914. Steam provided 80 percent of the mechanical drive capacity in manufacturing in 1900, but electricity provided over 50 percent by 1920 and 78 percent by 1929. An increasing number of factories were buying their power from electric utilities. In 1909, 64 percent of the electric motor capacity in manufacturing establishments used electricity generated on the factory site; by 1919, 57 percent of the electricity used in manufacturing was purchased from independent electric utilities.

The shift from coal to oil and natural gas and from raw unprocessed energy in the forms of coal and waterpower to processed energy in the form of internal combustion fuel and electricity increased thermal efficiency. After the First World War energy consumption relative to GNP fell, there was a sharp increase in the growth rate of output per labor-hour, and the output per unit of capital input once again began rising. These trends can be seen in the data in Table 3. Labor productivity grew much more rapidly during the 1920s than in the previous or following decade. Capital productivity had declined in the decade previous to the 1920s but increased sharply during the twenties and continued to rise in the following decade. Alexander Field (2003) has argued that the 1930s were the most technologically progressive decade of the twentieth century, basing his argument on the growth of multi-factor productivity as well as the impressive array of technological developments during the thirties. However, the twenties also saw impressive increases in labor and capital productivity as, particularly, developments in energy and transportation accelerated.

[Table: Average Annual Rates of Labor Productivity and Capital Productivity Growth]

Warren Devine, Jr. (1983) reports that in the twenties the most important result of the adoption of electricity was that it served as an indirect “lever to increase production.” There were a number of ways in which this occurred. Electricity brought about an increased flow of production by allowing new flexibility in the design of buildings and the arrangement of machines. In this way it maximized throughput. Electric cranes were an “inestimable boon” to production because with adequate headroom they could operate anywhere in a plant, something that mechanical power transmission to overhead cranes did not allow. Electricity made possible the use of portable power tools that could be taken anywhere in the factory. Electricity brought about improved illumination, ventilation, and cleanliness in the plants, dramatically improving working conditions. It improved the control of machines since there was no longer belt slippage with overhead line shafts and belt transmission, and there were fewer limitations on the operating speeds of machines. Finally, it made plant expansion much easier than when overhead shafts and belts had been relied upon for operating power.

The mechanization of American manufacturing accelerated in the 1920s, and this led to a much more rapid growth of productivity in manufacturing compared to earlier decades and to other sectors at that time. There were several forces that promoted mechanization. One was the rapidly expanding aggregate demand during the prosperous twenties. Another was the technological developments in new machines and processes, of which electrification played an important part. Finally, Harry Jerome (1934) and, later, Harry Oshima (1984) both suggest that the price of unskilled labor began to rise as immigration sharply declined with new immigration laws and falling population growth. This accelerated the mechanization of the nation’s factories.

Technological changes during this period can be documented for a number of individual industries. In bituminous coal mining, labor productivity rose when mechanical loading devices reduced the labor required by 24 to 50 percent. The burst of paved road construction in the twenties led to the development of a finishing machine to smooth the surface of cement highways, and this reduced the labor requirement by 40 to 60 percent. Mechanical pavers that spread centrally mixed materials further increased productivity in road construction. These replaced the roadside dump and wheelbarrow methods of spreading the cement. Jerome (1934) reports that the glass in electric light bulbs was made by new machines that cut the number of labor-hours required for their manufacture by nearly half. New machines to produce cigarettes and cigars, for warp-tying in textile production, and for pressing clothes in clothing shops also cut labor-hours. The Banbury mixer reduced the labor input in the production of automobile tires by half, and output per worker of inner tubes increased about four times with a new production method. However, as Daniel Nelson (1987) points out, the continuing advances were the “cumulative process resulting from a vast number of successive small changes.” Because of these continuing advances in the quality of the tires and in the manufacturing of tires, between 1910 and 1930 “tire costs per thousand miles of driving fell from $9.39 to $0.65.”

John Lorant (1967) has documented other technological advances that occurred in American manufacturing during the twenties. For example, the organic chemical industry developed rapidly due to the introduction of the Weizmann fermentation process. In a similar fashion, nearly half of the productivity advances in the paper industry were due to the “increasingly sophisticated applications of electric power and paper manufacturing processes,” especially the Fourdrinier paper-making machines. As Avi Cohen (1984) has shown, the continuing advances in these machines were the result of evolutionary changes to the basic machine. Mechanization in many types of mass-production industries raised the productivity of labor and capital. In the glass industry, automatic feeding and other types of fully automatic production raised the efficiency of the production of glass containers, window glass, and pressed glass. Giedion (1948) reported that the production of bread was “automatized” in all stages during the 1920s.

Though not directly bringing about productivity increases in manufacturing processes, developments in the management of manufacturing firms, particularly the largest ones, also significantly affected their structure and operation. Alfred D. Chandler, Jr. (1962) has argued that the structure of a firm must follow its strategy. Until the First World War most industrial firms were centralized, single-division firms even when becoming vertically integrated. When this began to change, the management of the large industrial firms had to change accordingly.

Because of these changes in the size and structure of the firm during the First World War, E. I. du Pont de Nemours and Company was led to adopt a strategy of diversifying into the production of largely unrelated product lines. The firm found that the centralized structure that had served it so well was not suited to this strategy, and its poor business performance led its executives to develop between 1919 and 1921 a decentralized, multidivisional structure that boosted it to the first rank among American industrial firms.

General Motors had a somewhat different problem. By 1920 it was already decentralized into separate divisions. In fact, there was so much decentralization that those divisions essentially remained separate companies and there was little coordination between the operating divisions. A financial crisis at the end of 1920 ousted W. C. Durant and brought in the du Ponts and Alfred Sloan. Sloan, who had seen the problems at GM but had been unable to convince Durant to make changes, began reorganizing the management of the company. Over the next several years Sloan and other GM executives developed the general office for a decentralized, multidivisional firm.

Though facing related problems at nearly the same time, GM and du Pont developed their decentralized, multidivisional organizations separately. As other manufacturing firms began to diversify, GM and du Pont became the models for reorganizing the management of the firms. In many industrial firms these reorganizations were not completed until well after the Second World War.

Competition, Monopoly, and the Government

The rise of big businesses, which accelerated in the postbellum period and particularly during the first great turn-of-the-century merger wave, continued in the interwar period. Between 1925 and 1939 the share of manufacturing assets held by the 100 largest corporations rose from 34.5 to 41.9 percent. (Niemi, 1980) As a public policy, the concern with monopolies diminished in the 1920s even though firms were growing larger. But the growing size of businesses was one of the convenient scapegoats upon which to blame the Great Depression.

However, the rise of large manufacturing firms in the interwar period is not so easily interpreted as an attempt to monopolize their industries. Some of the growth came about through vertical integration by the more successful manufacturing firms. Backward integration was generally an attempt to ensure a smooth supply of raw materials where that supply was not plentiful and was dispersed and firms “feared that raw materials might become controlled by competitors or independent suppliers.” (Livesay and Porter, 1969) Forward integration was an offensive tactic employed when manufacturers found that the existing distribution network proved inadequate. Livesay and Porter suggested a number of reasons why firms chose to integrate forward. In some cases they had to provide the mass distribution facilities to handle their much larger outputs, especially when the product was a new one. The complexity of some new products required technical expertise that the existing distribution system could not provide. In other cases “the high unit costs of products required consumer credit which exceeded financial capabilities of independent distributors.” Forward integration into wholesaling was more common than forward integration into retailing. The producers of automobiles, petroleum, typewriters, sewing machines, and harvesters were typical of those manufacturers that integrated all the way into retailing.

In some cases, increases in industry concentration arose as a natural process of industrial maturation. In the automobile industry, Henry Ford’s invention in 1913 of the moving assembly line—a technological innovation that changed most manufacturing—lent itself to larger factories and firms. Of the several thousand companies that had produced cars prior to 1920, 120 were still doing so then, but Ford and General Motors were the clear leaders, together producing nearly 70 percent of the cars. During the twenties, several other companies, such as Durant, Willys, and Studebaker, missed their opportunity to become more important producers, and Chrysler, formed in early 1925, became the third most important producer by 1930. Many went out of business and by 1929 only 44 companies were still producing cars. The Great Depression decimated the industry. Dozens of minor firms went out of business. Ford struggled through by relying on its huge stockpile of cash accumulated prior to the mid-1920s, while Chrysler actually grew. By 1940, only eight companies still produced cars—GM, Ford, and Chrysler had about 85 percent of the market, while Willys, Studebaker, Nash, Hudson, and Packard shared the remainder. The rising concentration in this industry was not due to attempts to monopolize. As the industry matured, growing economies of scale in factory production and vertical integration, as well as the advantages of a widespread dealer network, led to a dramatic decrease in the number of viable firms. (Chandler, 1962 and 1964; Rae, 1984; Bernstein, 1987)

It was a similar story in the tire industry. The increasing concentration and growth of firms was driven by scale economies in production and retailing and by the devastating effects of the depression in the thirties. Although there were 190 firms in 1919, 5 firms dominated the industry—Goodyear, B. F. Goodrich, Firestone, U.S. Rubber, and Fisk, followed by Miller Rubber, General Tire and Rubber, and Kelly-Springfield. During the twenties, 166 firms left the industry while 66 entered. The share of the 5 largest firms rose from 50 percent in 1921 to 75 percent in 1937. During the depressed thirties, there was fierce price competition, and many firms exited the industry. By 1937 there were 30 firms, but the average employment per factory was 4.41 times as large as in 1921, and the average factory produced 6.87 times as many tires as in 1921. (French, 1986 and 1991; Nelson, 1987; Fricke, 1982)

The steel industry was already highly concentrated by 1920 as U.S. Steel had around 50 percent of the market. But U. S. Steel’s market share declined through the twenties and thirties as several smaller firms competed and grew to become known as Little Steel, the next six largest integrated producers after U. S. Steel. Jonathan Baker (1989) has argued that the evidence is consistent with “the assumption that competition was a dominant strategy for steel manufacturers” until the depression. However, the initiation of the National Recovery Administration (NRA) codes in 1933 required the firms to cooperate rather than compete, and Baker argues that this constituted a training period leading firms to cooperate in price and output policies after 1935. (McCraw and Reinhardt, 1989; Weiss, 1980; Adams, 1977)

Mergers

A number of the larger firms grew by merger during this period, and the second great merger wave in American industry occurred during the last half of the 1920s. Figure 10 shows two series on mergers during the interwar period. The FTC series included many of the smaller mergers. The series constructed by Carl Eis (1969) only includes the larger mergers and ends in 1930.

This second great merger wave coincided with the stock market boom of the twenties and has been called “merger for oligopoly” rather than merger for monopoly. (Stigler, 1950) This merger wave created many larger firms that ranked below the industry leaders. Much of the activity occurred in the banking and public utilities industries. (Markham, 1955) In manufacturing and mining, the effects on industrial structure were less striking. Eis (1969) found that while mergers took place in almost all industries, they were concentrated in a smaller number of them, particularly petroleum, primary metals, and food products.

The federal government’s antitrust policies toward business varied sharply during the interwar period. In the 1920s there was relatively little activity by the Justice Department, but with the onset of the Great Depression the New Dealers largely exempted business from the antitrust laws and attempted to cartelize industries under government supervision.

With the passage of the FTC and Clayton Acts in 1914 to supplement the 1890 Sherman Act, the cornerstones of American antitrust law were complete. Though minor amendments were later enacted, the primary changes after that came in the enforcement of the laws and in swings in judicial decisions. Their two primary areas of application were overt behavior, such as horizontal and vertical price-fixing, and market structure, such as mergers and dominant firms. Horizontal price-fixing involves firms that would normally be competitors getting together to agree on stable and higher prices for their products. As long as most of the important competitors agree on the new, higher prices, substitution between products is eliminated and the demand becomes much less elastic. Thus, increasing the price increases the revenues and the profits of the firms who are fixing prices. Vertical price-fixing involves firms setting the prices of intermediate products purchased at different stages of production. It also tends to eliminate substitutes and makes the demand less elastic.
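
The revenue logic in that paragraph can be made concrete with a small sketch: when collusion removes substitutes and makes demand less elastic, the quantity lost from a price increase is small, so revenue rises. The demand responses below are hypothetical numbers chosen only to illustrate the contrast.

```python
# Revenue effect of a 10 percent price increase under different demand elasticities.
# Quantity is normalized to 1.0 before the increase; responses are hypothetical.
def revenue(price: float, quantity: float) -> float:
    return price * quantity

# Elastic demand (close substitutes available): quantity falls 20 percent.
print(revenue(1.10, 0.80))   # ~0.88 -- revenue falls
# Inelastic demand (substitution eliminated by the conspiracy): quantity falls 5 percent.
print(revenue(1.10, 0.95))   # ~1.045 -- revenue rises
```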

Price-fixing continued to be considered illegal throughout the period, but there was no major judicial activity regarding it in the 1920s other than the Trenton Potteries decision in 1927. In that decision 20 individuals and 23 corporations were found guilty of conspiring to fix the prices of bathroom bowls. The evidence in the case suggested that the firms were not very successful at doing so, but the court found that they were guilty nevertheless; their success, or lack thereof, was not held to be a factor in the decision. (Scherer and Ross, 1990) Though criticized by some, the decision was precedent setting in that it prohibited explicit pricing conspiracies per se.

The Justice Department had achieved success in dismantling Standard Oil and American Tobacco in 1911 through decisions that the firms had unreasonably restrained trade. These were essentially the same points used in court decisions against the Powder Trust in 1911, the thread trust in 1913, Eastman Kodak in 1915, the glucose and cornstarch trust in 1916, and the anthracite railroads in 1920. The criterion of an unreasonable restraint of trade was used in the 1916 and 1918 decisions that found the American Can Company and the United Shoe Machinery Company innocent of violating the Sherman Act; it was also clearly enunciated in the 1920 U. S. Steel decision. This became known as the rule of reason standard in antitrust policy.

Merger policy had been defined in the 1914 Clayton Act to prohibit only the acquisition of one corporation’s stock by another corporation. Firms then shifted to the outright purchase of a competitor’s assets. A series of court decisions in the twenties and thirties further reduced the possibilities of Justice Department actions against mergers. “Only fifteen mergers were ordered dissolved through antitrust actions between 1914 and 1950, and ten of the orders were accomplished under the Sherman Act rather than Clayton Act proceedings.”

Energy

The search for energy and new ways to translate it into heat, light, and motion has been one of the unending themes in history. From whale oil to coal oil to kerosene to electricity, the search for better and less costly ways to light our lives, heat our homes, and move our machines has consumed much time and effort. The energy industries responded to those demands and the consumption of energy materials (coal, oil, gas, and fuel wood) as a percent of GNP rose from about 2 percent in the latter part of the nineteenth century to about 3 percent in the twentieth.

Changes in the energy markets that had begun in the nineteenth century continued. Processed energy in the forms of petroleum derivatives and electricity continued to become more important than “raw” energy, such as that available from coal and water. The evolution of energy sources for lighting continued; at the end of the nineteenth century, natural gas and electricity, rather than liquid fuels, began to provide more lighting for streets, businesses, and homes.

In the twentieth century the continuing shift to electricity and internal combustion fuels increased the efficiency with which the American economy used energy. These processed forms of energy resulted in a more rapid increase in the productivity of labor and capital in American manufacturing. From 1899 to 1919, output per labor-hour increased at an average annual rate of 1.2 percent, whereas from 1919 to 1937 the increase was 3.5 percent per year. The productivity of capital had fallen at an average annual rate of 1.8 percent per year in the 20 years prior to 1919, but it rose 3.1 percent a year in the 18 years after 1919. As discussed above, the adoption of electricity in American manufacturing initiated a rapid evolution in the organization of plants and rapid increases in productivity in all types of manufacturing.

The change in transportation was even more remarkable. Internal combustion engines running on gasoline or diesel fuel revolutionized transportation. Cars quickly grabbed the lion’s share of local and regional travel and began to eat into long distance passenger travel, just as the railroads had done to passenger traffic by water in the 1830s. Even before the First World War cities had begun passing laws to regulate and limit “jitney” services and to protect the investments in urban rail mass transit. Trucking began eating into the freight carried by the railroads.

These developments brought about changes in the energy industries. Coal mining became a declining industry. As Figure 11 shows, in 1925 the share of petroleum in the value of coal, gas, and petroleum output exceeded that of bituminous coal, and it continued to rise. Anthracite coal’s share was much smaller and it declined, while natural gas and LP (or liquefied petroleum) gas were relatively unimportant. These changes, especially the declining coal industry, were the source of considerable worry in the twenties.

Coal

One of the industries considered to be “sick” in the twenties was coal, particularly bituminous, or soft, coal. Income in the industry declined, and bankruptcies were frequent. Strikes frequently interrupted production. The majority of the miners “lived in squalid and unsanitary houses, and the incidence of accidents and diseases was high.” (Soule, 1947) The number of operating bituminous coal mines declined sharply from 1923 through 1932. Anthracite (or hard) coal output was much smaller during the twenties. Real coal prices rose from 1919 to 1922, and bituminous coal prices fell sharply from then to 1925. (Figure 12) Coal mining employment plummeted during the twenties. Annual earnings, especially in bituminous coal mining, also fell because of dwindling hourly earnings and, from 1929 on, a shrinking workweek. (Figure 13)

The sources of these changes are to be found in the increasing supply due to productivity advances in coal production and in the decreasing demand for coal. The demand fell as industries began turning from coal to electricity and because of productivity advances in the use of coal to create energy in steel, railroads, and electric utilities. (Keller, 1973) In the generation of electricity, larger steam plants employing higher temperatures and steam pressures continued to reduce coal consumption per kilowatt hour. Similar reductions were found in the production of coke from coal for iron and steel production and in the use of coal by the steam railroad engines. (Rezneck, 1951) All of these factors reduced the demand for coal.

Productivity advances in coal mining tended to be labor saving. Mechanical cutting accounted for 60.7 percent of the coal mined in 1920 and 78.4 percent in 1929. By the middle of the twenties, the mechanical loading of coal began to be introduced. Between 1929 and 1939, output per labor-hour rose nearly one third in bituminous coal mining and nearly four fifths in anthracite as more mines adopted machine mining and mechanical loading and strip mining expanded.

The increasing supply and falling demand for coal led to the closure of mines that were too costly to operate. A mine could simply cease operations, let the equipment stand idle, and lay off employees. When bankruptcies occurred, the mines generally just reopened under new ownership with lower capital charges. When demand increased or strikes reduced the supply of coal, idle mines simply resumed production. As a result, the easily expanded supply largely eliminated economic profits.

The average daily employment in coal mining dropped by over 208,000 from its peak in 1923, but the sharply falling real wages suggest that the supply of labor did not fall as rapidly as the demand for labor. Soule (1947) notes that when employment fell in coal mining, it meant fewer days of work for the same number of men. Social and cultural characteristics tended to tie many to their home region. The local alternatives were few, and ignorance of alternatives outside the Appalachian rural areas, where most bituminous coal was mined, made it very costly to transfer out.

Petroleum

In contrast to the coal industry, the petroleum industry was growing throughout the interwar period. By the thirties, crude petroleum dominated the real value of the production of energy materials. As Figure 14 shows, the production of crude petroleum increased sharply between 1920 and 1930, while real petroleum prices, though highly variable, tended to decline.

The growing demand for petroleum was driven by the growth in demand for gasoline as America became a motorized society. The production of gasoline surpassed kerosene production in 1915. Kerosene’s market continued to contract as electric lighting replaced kerosene lighting. The development of oil burners in the twenties began a switch from coal toward fuel oil for home heating, and this further increased the growing demand for petroleum. The growth in the demand for fuel oil and diesel fuel for ship engines also increased petroleum demand. But it was the growth in the demand for gasoline that drove the petroleum market.

The decline in real prices in the latter part of the twenties shows that supply was growing even faster than demand. The discovery of new fields in the early twenties increased the supply of petroleum and led to falling prices as production capacity grew. The Santa Fe Springs, California strike in 1919 initiated a supply shock, as did the discovery of the Long Beach, California field in 1921. New discoveries in Powell, Texas and Smackover, Arkansas further increased the supply of petroleum in 1921. New supply increases occurred in 1926 to 1928 with petroleum strikes in Seminole, Oklahoma and Hendricks, Texas. The supply of oil increased sharply in 1930 to 1931 with new discoveries in Oklahoma City and East Texas. Each new discovery pushed down real oil prices and the prices of petroleum derivatives, and the growing production capacity led to a general declining trend in petroleum prices. McMillin and Parker (1994) argue that supply shocks generated by these new discoveries were a factor in the business cycles during the 1920s.

The supply of gasoline increased more than the supply of crude petroleum. In 1913 a chemist at Standard Oil of Indiana introduced the cracking process to refine crude petroleum; until that time it had been refined by distillation or unpressurized heating. In the heating process, various refined products such as kerosene, gasoline, naphtha, and lubricating oils were produced at different temperatures. It was difficult to vary the amount of the different refined products produced from a barrel of crude. The cracking process used pressurized heating to break heavier components down into lighter crude derivatives; with cracking, it was possible to increase the amount of gasoline obtained from a barrel of crude from 15 to 45 percent. In the early twenties, chemists at Standard Oil of New Jersey improved the cracking process, and by 1927 it was possible to obtain twice as much gasoline from a barrel of crude petroleum as in 1917.
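
To see what that change in yield meant per barrel, the sketch below applies the 15 and 45 percent figures to a standard 42-gallon barrel of crude; it is simple arithmetic for illustration, not data from the refining studies cited here.

```python
# Gasoline obtained from a standard 42-gallon barrel of crude
# at the pre-cracking and post-cracking yields cited in the text.
BARREL_GALLONS = 42
for label, yield_share in [("distillation, ~15%", 0.15), ("cracking, ~45%", 0.45)]:
    print(f"{label}: {BARREL_GALLONS * yield_share:.1f} gallons of gasoline")
# distillation, ~15%: 6.3 gallons
# cracking, ~45%: 18.9 gallons -- roughly three times as much per barrel
```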

The petroleum companies also developed new ways to distribute gasoline to motorists that made it more convenient to purchase gasoline. Prior to the First World War, gasoline was commonly purchased in one- or five-gallon cans and the purchaser used a funnel to pour the gasoline from the can into the car. Then “filling stations” appeared, which specialized in filling cars’ tanks with gasoline. These spread rapidly, and by 1919 gasoline companies were beginning to introduce their own filling stations or contract with independent stations to exclusively distribute their gasoline. Increasing competition and falling profits led filling station operators to expand into other activities such as oil changes and other mechanical repairs. The general name attached to such stations gradually changed to “service stations” to reflect these new functions.

Though the petroleum firms tended to be large, they were highly competitive, trying to pump as much petroleum as possible to increase their share of the fields. This, combined with the development of new fields, led to an industry with highly volatile prices and output. Firms desperately wanted to stabilize and reduce the production of crude petroleum so as to stabilize and raise the prices of crude petroleum and refined products. Unable to obtain voluntary agreement on output limitations by the firms and producers, governments began stepping in. Led by Texas, which created the Texas Railroad Commission in 1891, oil-producing states began to intervene to regulate production. Such laws were usually termed prorationing laws and were quotas designed to limit each well’s output to some fraction of its potential. The purpose was as much to stabilize and reduce production and raise prices as anything else, although generally such laws were passed under the guise of conservation. Although the federal government supported such attempts, not until the New Deal were federal laws passed to assist this.

Electricity

By the mid 1890s the debate over the method by which electricity was to be transmitted had been won by those who advocated alternating current. The reduced power losses and greater distance over which electricity could be transmitted more than offset the necessity for transforming the current back to direct current for general use. Widespread adoption of machines and appliances by industry and consumers then rested on an increase in the array of products using electricity as the source of power, heat, or light and the development of an efficient, lower cost method of generating electricity.

General Electric, Westinghouse, and other firms began producing the electrical appliances for homes and an increasing number of machines based on electricity began to appear in industry. The problem of lower cost production was solved by the introduction of centralized generating facilities that distributed the electric power through lines to many consumers and business firms.

Though initially several firms competed in generating and selling electricity to consumers and firms in a city or area, by the First World War many states and communities were awarding exclusive franchises to one firm to generate and distribute electricity to the customers in the franchise area. (Bright, 1947; Passer, 1953) The electric utility industry became an important growth industry and, as Figure 15 shows, electricity production and use grew rapidly.

The electric utilities increasingly were regulated by state commissions that were charged with setting rates so that the utilities could receive a “fair return” on their investments. Disagreements over what constituted a “fair return” and the calculation of the rate base led to a steady stream of cases before the commissions and a continuing series of court appeals. Generally these court decisions favored the reproduction cost basis. Because of the difficulty and cost of making these calculations, rates tended to remain in the hands of the electric utilities, which, it has been suggested, did not lower rates adequately to reflect rising productivity and falling costs of production. The utilities argued that a more rapid lowering of rates would have jeopardized their profits. Whether or not regulation increased their monopoly power is still an open question, but it should be noted that electric utilities were hardly price-taking industries prior to regulation. (Mercer, 1973) In fact, as Figure 16 shows, the electric utilities began to systematically practice market segmentation, charging users with less elastic demands higher prices per kilowatt-hour.

Energy in the American Economy of the 1920s

The changes in the energy industries had far-reaching consequences. The coal industry faced a continuing decline in demand. Even in the growing petroleum industry, the periodic surges in the supply of petroleum caused great instability. In manufacturing, as described above, electrification contributed to a remarkable rise in productivity. The transportation revolution brought about by the rise of gasoline-powered trucks and cars changed the way businesses received their supplies and distributed their production as well as where they were located. The suburbanization of America and the beginnings of urban sprawl were largely brought about by the introduction of low-priced gasoline for cars.

Transportation

The American economy was forever altered by the dramatic changes in transportation after 1900. Following Henry Ford’s introduction of the moving assembly production line in 1914, automobile prices plummeted, and by the end of the 1920s about 60 percent of American families owned an automobile. The advent of low-cost personal transportation led to an accelerating movement of population out of the crowded cities to more spacious homes in the suburbs and the automobile set off a decline in intracity public passenger transportation that has yet to end. Massive road-building programs facilitated the intercity movement of people and goods. Trucks increasingly took over the movement of freight in competition with the railroads. New industries, such as gasoline service stations, motor hotels, and the rubber tire industry, arose to service the automobile and truck traffic. These developments were complicated by the turmoil caused by changes in the federal government’s policies toward transportation in the United States.

With the end of the First World War, a debate began as to whether the railroads, which had been taken over by the government, should be returned to private ownership or nationalized. The voices calling for a return to private ownership were much stronger, but doing so fomented great controversy. Many in Congress believed that careful planning and consolidation could restore the railroads and make them more efficient. There was continued concern about the near monopoly that the railroads had on the nation’s intercity freight and passenger transportation. The result of these deliberations was the Transportation Act of 1920, which was premised on the continued domination of the nation’s transportation by the railroads—an erroneous presumption.

The Transportation Act of 1920 represented a marked change in the Interstate Commerce Commission’s ability to control railroads. The ICC was allowed to prescribe exact rates that were to be set so as to allow the railroads to earn a fair return, defined as 5.5 percent, on the fair value of their property. The ICC was authorized to make an accounting of the fair value of each regulated railroad’s property; however, this was not completed until well into the 1930s, by which time the accounting and rate rules were out of date. To maintain fair competition between railroads in a region, all roads were to have the same rates for the same goods over the same distance. With the same rates, low-cost roads should have been able to earn higher rates of return than high-cost roads. To handle this, a recapture clause was inserted: any railroad earning a return of more than 6 percent on the fair value of its property was to turn the excess over to the ICC, which would place half of the money in a contingency fund for the railroad when it encountered financial problems and the other half in a contingency fund to provide loans to other railroads in need of assistance.
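A hypothetical railroad illustrates the arithmetic of the recapture clause; the figures are invented for illustration and do not come from the source:

\begin{align*}
\text{Fair value of the road's property} &= \$100 \text{ million} \\
\text{Return earned} &= 8\% = \$8 \text{ million} \\
\text{Return at the 6\% threshold} &= \$6 \text{ million} \\
\text{Excess to be recaptured} &= \$2 \text{ million} \\
\text{Half to the road's own contingency fund} &= \$1 \text{ million} \\
\text{Half to the ICC's loan fund for other roads} &= \$1 \text{ million}
\end{align*}

On these assumed numbers, a low-cost road keeps its first 6 percent outright but only half of any earnings above that, which is how the act tried to reconcile uniform rates with railroads of very different costs.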

In order to address the problem of weak and strong railroads and to bring better coordination to the movement of rail traffic in the United States, the act was directed to encourage railroad consolidation, but little came of this in the 1920s. In order to facilitate its control of the railroads, the ICC was given two additional powers. The first was the control over the issuance or purchase of securities by railroads, and the second was the power to control changes in railroad service through the control of car supply and the extension and abandonment of track. The control of the supply of rail cars was turned over to the Association of American Railroads. Few extensions of track were proposed, but as time passed, abandonment requests grew. The ICC, however, trying to mediate between the conflicting demands of shippers, communities and railroads, generally refused to grant abandonments, and this became an extremely sensitive issue in the 1930s.

As indicated above, the premises of the Transportation Act of 1920 were wrong. Railroads experienced increasing competition during the 1920s, and both freight and passenger traffic were drawn off to competing transport forms. Passenger traffic exited from the railroads much more quickly. As the network of all-weather surfaced roads increased, people quickly turned from the train to the car. Harmed even more by the move to automobile traffic were the electric interurban railways that had grown rapidly just prior to the First World War. (Hilton-Due, 1960) Not surprisingly, during the 1920s few railroads earned profits in excess of the fair rate of return.

The use of trucks to deliver freight began shortly after the turn of the century. Before the outbreak of war in Europe, White and Mack were producing trucks with as much as 7.5 tons of carrying capacity. Most of the truck freight was carried on a local basis, and it largely supplemented the longer distance freight transportation provided by the railroads. However, truck size was growing. In 1915 Trailmobile introduced the first four-wheel trailer designed to be pulled by a truck tractor unit. During the First World War, thousands of trucks were constructed for military purposes, and truck convoys showed that long distance truck travel was feasible and economical. The use of trucks to haul freight had been growing by over 18 percent per year since 1925, so that by 1929 intercity trucking accounted for more than one percent of the ton-miles of freight hauled.

The railroads argued that the trucks and buses provided “unfair” competition and believed that if they were also regulated, then the regulation could equalize the conditions under which they competed. As early as 1925, the National Association of Railroad and Utilities Commissioners issued a call for the regulation of motor carriers in general. In 1928 the ICC called for federal regulation of buses and in 1932 extended this call to federal regulation of trucks.

Most states had begun regulating buses at the beginning of the 1920s in an attempt to reduce the diversion of urban passenger traffic from the electric trolley and railway systems. However, most of the regulation did not aim to control intercity passenger traffic by buses. As the network of surfaced roads expanded during the twenties, so did the routes of the intercity buses. In 1929 a number of smaller bus companies were incorporated into Greyhound Buslines, the carrier that has since dominated intercity bus transportation. (Walsh, 2000)

A complaint of the railroads was that interstate trucking competition was unfair because it was subsidized while railroads were not. All railroad property was privately owned and subject to property taxes, whereas truckers used the existing road system and therefore neither had to bear the costs of creating the road system nor pay taxes upon it. Beginning with the Federal Road-Aid Act of 1916, small amounts of money were provided as an incentive for states to construct rural post roads. (Dearing-Owen, 1949) However, through the First World War most of the funds for highway construction came from a combination of levies on the adjacent property owners and county and state taxes. The monies raised by the counties were commonly 60 percent of the total funds allocated, and these came primarily from property taxes. In 1919 Oregon pioneered the state gasoline tax, which then began to be adopted by more and more states. A highway system financed by property taxes and other levies can be construed as a subsidization of motor vehicles, and one study for the period up to 1920 found evidence of substantial subsidization of trucking. (Herbst-Wu, 1973) However, the use of gasoline taxes moved closer to the goal of users paying the costs of the highways. Nor did trucks have to pay for all of the highway construction, because automobiles jointly used the highways. Highways did have to be constructed in more costly ways in order to accommodate the larger and heavier trucks, and ideally the gasoline taxes collected from trucks should have covered the extra (or marginal) costs of highway construction incurred because of the truck traffic, as illustrated below. Gasoline taxes tended to do this.
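A hypothetical example may make the joint-cost reasoning concrete; the figures are illustrative only and are not drawn from the source:

\begin{align*}
\text{Cost per mile of a road built for cars only} &= \$100{,}000 \\
\text{Cost per mile of a road built to carry heavy trucks} &= \$140{,}000 \\
\text{Marginal cost attributable to trucks} &= \$140{,}000 - \$100{,}000 = \$40{,}000
\end{align*}

On this reasoning, gasoline taxes collected from trucks needed to cover only the increment attributable to truck traffic, not the full cost of the roadway, since automobiles shared the basic road.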

The American economy occupies a vast geographic region. Because economic activity occurs over most of the country, falling transportation costs have been crucial to knitting American firms and consumers into a unified market. Throughout the nineteenth century the railroads played this crucial role. Because of the size of the railroad companies and their importance in the economic life of Americans, the federal government began to regulate them. But, by 1917 it appeared that the railroad system had achieved some stability, and it was generally assumed that the post-First World War era would be an extension of the era from 1900 to 1917. Nothing could have been further from the truth. Spurred by public investments in highways, cars and trucks voraciously ate into the railroads’ market, and, though the regulators failed to understand this at the time, the railroads’ monopoly on transportation quickly disappeared.

Communications

Communications had joined with transportation developments in the nineteenth century to tie the American economy together more completely. The telegraph had benefited by using the railroads’ right-of-ways, and the railroads used the telegraph to coordinate and organize their far-flung activities. As the cost of communications fell and information transfers sped, the development of firms with multiple plants at distant locations was facilitated. The interwar era saw a continuation of these developments as the telephone continued to supplant the telegraph and the new medium of radio arose to transmit news and provide a new entertainment source.

Telegraph domination of business and personal communications had given way to the telephone, as long distance telephone calls between the east and west coasts became possible in 1915 with the new electronic amplifiers. The number of telegraph messages handled grew 60.4 percent in the twenties. The number of local telephone conversations grew 46.8 percent between 1920 and 1930, while the number of long distance conversations grew 71.8 percent over the same period. There were 5 times as many long distance telephone calls as telegraph messages handled in 1920, and 5.7 times as many in 1930.

The twenties were a prosperous period for AT&T and its 18 major operating companies. (Brooks, 1975; Temin, 1987; Garnet, 1985; Lipartito, 1989) Telephone usage rose and, as Figure 19 shows, the share of all households with a telephone rose from 35 percent to nearly 42 percent. In cities across the nation, AT&T consolidated its system, gained control of many operating companies, and virtually eliminated its competitors. It was able to do this because in 1921 Congress passed the Graham Act exempting AT&T from the Sherman Act in consolidating competing telephone companies. By 1940, the non-Bell operating companies were all small relative to the Bell operating companies.

Surprisingly there was a decline in telephone use on the farms during the twenties. (Hadwiger-Cochran, 1984; Fischer 1987) Rising telephone rates explain part of the decline in rural use. The imposition of connection fees during the First World War made it more costly for new farmers to hook up. As AT&T gained control of more and more operating systems, telephone rates were increased. AT&T also began requiring, as a condition of interconnection, that independent companies upgrade their systems to meet AT&T standards. Most of the small mutual companies that had provided service to farmers had operated on a shoestring—wires were often strung along fenceposts, and phones were inexpensive “whoop and holler” magneto units. Upgrading to AT&T’s standards raised costs, forcing these companies to raise rates.

However, it also seems likely that during the 1920s there was a general decline in the rural demand for telephone services. One important factor in this was the dramatic decline in farm incomes in the early twenties. The second reason was a change in the farmers’ environment. Prior to the First World War, the telephone eased farm isolation and provided news and weather information that was otherwise hard to obtain. After 1920 automobiles, surfaced roads, movies, and the radio loosened the isolation and the telephone was no longer as crucial.

Ottmar Mergenthaler’s development of the linotype machine in the late nineteenth century had irrevocably altered printing and publishing. This machine, which quickly created a line of soft, lead-based metal type that could be printed, melted down and then recast as a new line of type, dramatically lowered the costs of printing. Previously, all type had to be painstakingly set by hand, with individual pieces of cast type picked out from compartments in drawers to construct words, lines, and paragraphs. After printing, each line of type on the page had to be broken down and each individual piece of type placed back into its compartment in its drawer for use in the next printing job. Because the process was so laborious, newspapers often were not published every day and did not contain many pages, and most cities supported many competing papers. In contrast, the linotype used a keyboard upon which the operator typed the words in one of the lines in a news column. Matrices for each letter dropped down from a magazine of matrices as the operator typed each letter and were assembled into a line of type with automatic spacers to justify the line (fill out the column width). When the line was completed the machine mechanically cast the line of matrices into a line of lead type. The line of lead type was ejected into a tray and the letter matrices mechanically returned to the magazine while the operator continued typing the next line in the news story. The first Mergenthaler linotype machine was installed in the New York Tribune in 1886. The linotype machine dramatically lowered the costs of printing newspapers (as well as books and magazines). Prior to the linotype a typical newspaper averaged no more than 11 pages and many were published only a few times a week. The linotype machine allowed newspapers to grow in size and they began to be published more regularly. A process of consolidation of daily and Sunday newspapers began that continues to this day. Many have termed the Mergenthaler linotype machine the most significant printing invention since the introduction of movable type 400 years earlier.

For city families as well as farm families, radio became the new source of news and entertainment. (Barnouw, 1966; Rosen, 1980 and 1987; Chester-Garrison, 1950) It soon took over as the prime advertising medium and in the process revolutionized advertising. By 1930 more homes had radio sets than had telephones. The radio networks sent news and entertainment broadcasts all over the country. The isolation of rural life, particularly in many areas of the plains, was forever broken by the intrusion of the “black box,” as radio receivers were often called. The radio began a process of breaking down regionalism and creating a common culture in the United States.

The potential demand for radio became clear with the first regular broadcast of Westinghouse’s KDKA in Pittsburgh in the fall of 1920. Because the Department of Commerce could not deny a license application, there was an explosion of stations all broadcasting at the same frequency, and signal jamming and interference became a serious problem. By 1923 the Department of Commerce had gained control of radio from the Post Office and the Navy and began to arbitrarily disperse stations across the radio dial and deny licenses, creating the first market in commercial broadcast licenses. In 1926 a U.S. District Court decided that under the Radio Law of 1912 Herbert Hoover, the secretary of commerce, did not have this power. New stations appeared and the logjam and interference of signals worsened. A Radio Act was passed in January of 1927 creating the Federal Radio Commission (FRC) as a temporary licensing authority. Licenses were to be issued in the public interest, convenience, and necessity. A number of broadcasting licenses were revoked; stations were assigned frequencies, dial locations, and power levels. The FRC created 24 clear-channel stations with as much as 50,000 watts of broadcasting power, of which 21 ended up being affiliated with the new national radio networks. The Communications Act of 1934 essentially repeated the 1927 act except that it created a permanent, seven-person Federal Communications Commission (FCC).

Local stations initially created and broadcast the radio programs. The expenses were modest, and stores and companies operating radio stations wrote this off as indirect, goodwill advertising. Several forces changed all this. In 1922, AT&T opened up a radio station in New York City, WEAF (later to become WNBC). AT&T envisioned this station as the center of a radio toll system where individuals could purchase time to broadcast a message that would be transmitted to other stations in the toll network using AT&T’s long distance lines. An August 1922 broadcast by a Long Island realty company became the first conscious use of direct advertising.

Though advertising continued to be condemned, the fiscal pressures on radio stations to accept advertising began rising. In 1923 the American Society of Composers, Authors and Publishers (ASCAP) began demanding a performance fee anytime ASCAP-copyrighted music was performed on the radio, either live or on record. By 1924 the issue was settled, and most stations began paying performance fees to ASCAP. AT&T decided that all stations broadcasting with non-AT&T transmitters were violating its patent rights and began asking for annual fees from such stations based on the station’s power. By the end of 1924, most stations were paying the fees. All of this drained the coffers of the radio stations, and more and more of them began discreetly accepting advertising.

RCA became upset at AT&T’s creation of a chain of radio stations and set up its own toll network using the inferior lines of Western Union and Postal Telegraph, because AT&T, not surprisingly, did not allow any toll (or network) broadcasting on its lines except by its own stations. AT&T began to worry that its actions might threaten its federal monopoly in long distance telephone communications. In 1926 a new firm was created, the National Broadcasting Company (NBC), which took over all broadcasting activities from AT&T and RCA as AT&T left broadcasting. When NBC debuted in November of 1926, it had two networks: the Red, which was the old AT&T network, and the Blue, which was the old RCA network. Radio networks allowed advertisers to direct advertising at a national audience at a lower cost. Network programs allowed local stations to broadcast superior programs that captured a larger listening audience and, in return, to receive a share of the fees the national advertiser paid to the network. In 1927 a new network, the Columbia Broadcasting System (CBS), financed by the Paley family, began operation, and other new networks entered or tried to enter the industry in the 1930s.

Communications developments in the interwar era present something of a mixed picture. By 1920 long distance telephone service was in place, but rising rates slowed the rate of adoption in the period, and telephone use in rural areas declined sharply. Though direct dialing was first tried in the twenties, its general implementation would not come until the postwar era, when other changes, such as microwave transmission of signals and touch-tone dialing, would also appear. Though the number of newspapers declined, newspaper circulation generally held up. The number of competing newspapers in larger cities began declining, a trend that also would accelerate in the postwar American economy.

Banking and Securities Markets

In the twenties commercial banks became “department stores of finance.” Banks opened up installment (or personal) loan departments, expanded their mortgage lending, opened up trust departments, undertook securities underwriting activities, and offered safe deposit boxes. These changes were a response to growing competition from other financial intermediaries. Businesses, stung by bankers’ control and reduced lending during the 1920-21 depression, began relying more on retained earnings and stock and bond issues to finance investment and, sometimes, working capital. This reduced loan demand. The thrift institutions also experienced good growth in the twenties as they helped fuel the housing construction boom of the decade. The securities markets boomed in the twenties only to see a dramatic crash of the stock market in late 1929.

There were two broad classes of commercial banks: those that were nationally chartered and those that were chartered by the states. Only the national banks were required to be members of the Federal Reserve System. (Figure 21) Most banks were unit banks because national regulators and most state regulators prohibited branching. However, in the twenties a few states began to permit limited branching; California even allowed statewide branching. The Federal Reserve member banks held the bulk of the assets of all commercial banks, even though most banks were not members. A high bank failure rate in the 1920s has usually been explained by “overbanking,” or too many banks located in an area, but H. Thomas Johnson (1973-74) makes a strong argument against this. (Figure 22) If there had been overbanking, on average each bank would have been underutilized, resulting in intense competition for deposits and higher costs and lower earnings. A common explanation for such overbanking was the free entry of banks as long as they met the minimum requirements then in force. However, the twenties saw changes that led to the demise of many smaller rural banks that would likely have been profitable if these changes had not occurred. Improved transportation led to a movement of business activities, including banking, into the larger towns and cities. Rural banks that relied on loans to farmers suffered just as farmers did during the twenties, especially in the first half of the twenties. The number of bank suspensions and the suspension rate fell after 1926. The sharp rise in bank suspensions in 1930 occurred because of the first banking crisis during the Great Depression.

Prior to the twenties, the main assets of commercial banks were short-term business loans, made by creating a demand deposit or increasing an existing one for a borrowing firm. As business lending declined in the 1920s commercial banks vigorously moved into new types of financial activities. As banks purchased more securities for their earning asset portfolios and gained expertise in the securities markets, larger ones established investment departments and by the late twenties were an important force in the underwriting of new securities issued by nonfinancial corporations.

The securities markets exhibited perhaps the most dramatic growth among the noncommercial bank financial intermediaries during the twenties, but others also grew rapidly. (Figure 23) The assets of life insurance companies increased by 10 percent a year from 1921 to 1929; by the late twenties they were a very important source of funds for construction investment. Mutual savings banks and savings and loan associations (thrifts) operated in essentially the same types of markets. The mutual savings banks were concentrated in the northeastern United States. As incomes rose, personal savings increased, and housing construction expanded in the twenties, there was an increasing demand for the thrifts’ interest-earning time deposits and mortgage lending.

But the dramatic expansion in the financial sector came in new corporate securities issues in the twenties—especially common and preferred stock—and in the trading of existing shares of those securities. (Figure 24) The late twenties boom in the American economy was rapid, highly visible, and dramatic. Skyscrapers were being erected in most major cities; the automobile manufacturers produced over four and a half million new cars in 1929; and the stock market, like a barometer of this prosperity, was on a dizzying ride to higher and higher prices. “Playing the market” seemed to become a national pastime.

The Dow-Jones index hit its peak of 381 on September 3 and then slid to 320 on October 21. In the following week the stock market “crashed,” with a record number of shares being traded on several days. At the end of Tuesday, October 29, the index stood at 230, 96 points less than one week before. On November 13, 1929, the Dow-Jones index reached its lowest point for the year at 198—183 points less than the September 3 peak.

The path of the stock market boom of the twenties can be seen in Figure 25. Sharp price breaks occurred several times during the boom, and each of these gave rise to dark predictions of the end of the bull market and speculation. Until late October of 1929, these predictions turned out to be wrong. Between those price breaks and prior to the October crash, stock prices continued to surge upward. In March of 1928, 3,875,910 shares were traded in one day, establishing a record. By late 1928, five million shares being traded in a day was a common occurrence.

New securities, from rising merger activity and the formation of holding companies, were issued to take advantage of the rising stock prices. Stock pools, which were not illegal until the Securities Exchange Act of 1934, took advantage of the boom to temporarily drive up the price of selected stocks and reap large gains for the members of the pool. In a stock pool a group of speculators would pool large amounts of their funds and then begin purchasing large amounts of shares of a stock. This increased demand led to rising prices for that stock. Frequently pool insiders would “churn” the stock by repeatedly buying and selling the same shares among themselves, but at rising prices. Outsiders, seeing the price rising, would decide to purchase the stock. At a predetermined higher price the pool members would, within a short period, sell their shares and pull out of the market for that stock. Without the additional demand from the pool, the stock’s price usually fell quickly, bringing large losses for the unsuspecting outside investors while reaping large gains for the pool insiders.

Another factor commonly used to explain both the speculative boom and the October crash was the purchase of stocks on small margins. However, contrary to popular perception, margin requirements through most of the twenties were essentially the same as in previous decades. Brokers, recognizing the problems with margin lending in the rapidly changing market, began raising margin requirements in late 1928, and by the fall of 1929, margin requirements were the highest in the history of the New York Stock Exchange. In the 1920s, as had been the case for decades before, the usual margin requirements were 10 to 15 percent of the purchase price, and, apparently, more often around 10 percent. Well before the crash, and at the urging of a special New York Clearinghouse committee, these percentages were increased, and by the fall of 1928 margin requirements had been raised to some of the highest levels in New York Stock Exchange history. One brokerage house required the following of its clients: securities with a selling price below $10 could be purchased only for cash; securities with a selling price of $10 to $20 required a 50 percent margin; securities of $20 to $30 required a 40 percent margin; and securities with a price above $30 required a margin of 30 percent of the purchase price. In the first half of 1929 margin requirements on customers’ accounts averaged 40 percent, and some houses raised their margins to 50 percent a few months before the crash. These were, historically, very high margin requirements. (Smiley and Keehn, 1988) Even so, during the crash, when additional margin calls were issued, those investors who could not provide additional margin saw the brokers sell their stock at whatever the market price was at the time, and these forced sales helped drive prices even lower.
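To illustrate the arithmetic of such a margin purchase, consider a hypothetical transaction under the 30 percent requirement in the brokerage schedule above; the share price and quantity are invented for illustration:

\begin{align*}
\text{Cost of 100 shares at \$40 per share} &= \$4{,}000 \\
\text{Investor's own funds (30\% margin)} &= 0.30 \times \$4{,}000 = \$1{,}200 \\
\text{Broker's margin loan} &= \$4{,}000 - \$1{,}200 = \$2{,}800
\end{align*}

If the market value of the shares fell toward the $2,800 loan, the investor's equity shrank, which is what triggered the margin calls and forced sales described above.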

The crash began on Monday, October 21, as the index of stock prices fell 3 points on the third-largest volume in the history of the New York Stock Exchange. After a slight rally on Tuesday, prices began declining on Wednesday and fell 21 points by the end of the day, bringing on the third call for more margin in that week. On Black Thursday, October 24, prices initially fell sharply, but rallied somewhat in the afternoon so that the net loss was only 7 points, but the volume of thirteen million shares set a NYSE record. Friday brought a small gain that was wiped out on Saturday. On Monday, October 28, the Dow-Jones index fell 38 points on a volume of nine million shares—three million in the final hour of trading. Black Tuesday, October 29, brought declines in virtually every stock price. Manufacturing firms, which had been lending large sums to brokers for margin loans, had been calling in these loans, and this accelerated on Monday and Tuesday. The big Wall Street banks increased their lending on call loans to offset some of this loss of loanable funds. The Dow-Jones index fell 30 points on a record volume of nearly sixteen and a half million shares exchanged. Black Thursday and Black Tuesday wiped out entire fortunes.

Though the worst was over, prices continued to decline until November 13, 1929, as brokers cleaned up their accounts and sold off the stocks of clients who could not supply additional margin. After that, prices began to slowly rise, and by April of 1930 they had increased 96 points from the low of November 13, “only” 87 points less than the peak of September 3, 1929. From that point, stock prices resumed their depressing decline until the low point was reached in the summer of 1932.

 

There is a long tradition that insists that the Great Bull Market of the late twenties was an orgy of speculation that bid the prices of stocks far above any sustainable or economically justifiable level, creating a bubble in the stock market. John Kenneth Galbraith (1954) observed, “The collapse in the stock market in the autumn of 1929 was implicit in the speculation that went before.” But not everyone has agreed with this.

In 1930 Irving Fisher argued that the stock prices of 1928 and 1929 were based on fundamental expectations that future corporate earnings would be high. More recently, Murray Rothbard (1963), Gerald Gunderson (1976), and Jude Wanniski (1978) have argued that stock prices were not too high prior to the crash. Gunderson suggested that prior to 1929, stock prices were where they should have been and that when corporate profits in the summer and fall of 1929 failed to meet expectations, stock prices were written down. Wanniski argued that political events brought on the crash. The market broke each time news arrived of advances in congressional consideration of the Hawley-Smoot tariff. However, the virtually perfect foresight that Wanniski’s explanation requires is unrealistic. Charles Kindleberger (1973) and Peter Temin (1976) examined common stock yields and price-earnings ratios and found that their relative constancy did not suggest that stock prices were bid up unrealistically high in the late twenties. Gary Santoni and Gerald Dwyer (1990) also failed to find evidence of a bubble in stock prices in 1928 and 1929. Gerald Sirkin (1975) found that the implied growth rates of dividends required to justify stock prices in 1928 and 1929 were quite conservative and lower than post-Second World War dividend growth rates.

However, examination of after-the-fact common stock yields and price-earnings ratios can do no more than provide some ex post justification for suggesting that there was not excessive speculation during the Great Bull Market. Each individual investor was motivated by that person’s subjective expectations of each firm’s future earnings and dividends and the future prices of shares of each firm’s stock. Because of this element of subjectivity, not only can we never accurately know those values, but also we can never know how they varied among individuals. The market price we observe will be the end result of all of the actions of the market participants, and the observed price may be different from the price almost all of the participants expected.

In fact, there are some indications that there were differences in 1928 and 1929. Yields on common stocks were somewhat lower in 1928 and 1929. In October of 1928, brokers generally began raising margin requirements, and by the beginning of the fall of 1929, margin requirements were, on average, the highest in the history of the New York Stock Exchange. Though the discount and commercial paper rates had moved closely with the call and time rates on brokers’ loans through 1927, the rates on brokers’ loans increased much more sharply in 1928 and 1929. This pulled in funds from corporations, private investors, and foreign banks as New York City banks sharply reduced their lending. These facts suggest that brokers and New York City bankers may have come to believe that stock prices had been bid above a sustainable level by late 1928 and early 1929. White (1990) created a quarterly index of dividends for firms in the Dow-Jones index and related this to the DJI. Through 1927 the two track closely, but in 1928 and 1929 the index of stock prices grows much more rapidly than the index of dividends.

The qualitative evidence for a bubble in the stock market in 1928 and 1929 that White assembled was strengthened by the findings of J. Bradford De Long and Andrei Shleifer (1991). They examined closed-end mutual funds, a type of fund whose investors wishing to liquidate must sell their shares to other individual investors; because the fund’s portfolio holdings are known, its fundamental value is exactly measurable and can be compared with the price of the fund’s own shares. Using evidence from these funds, De Long and Shleifer estimated that in the summer of 1929, the Standard and Poor’s composite stock price index was overvalued about 30 percent due to excessive investor optimism. Rappoport and White (1993 and 1994) found other evidence that supported a bubble in the stock market in 1928 and 1929. There was a sharp divergence between the growth of stock prices and dividends; there were increasing premiums on call and time brokers’ loans in 1928 and 1929; margin requirements rose; and stock market volatility rose in the wake of the 1929 stock market crash.

There are several reasons for the creation of such a bubble. First, the fundamental values of earnings and dividends become difficult to assess when there are major industrial changes, such as the rapid changes in the automobile industry, the new electric utilities, and the new radio industry. Eugene White (1990) suggests that “While investors had every reason to expect earnings to grow, they lacked the means to evaluate easily the future path of dividends.” As a result investors bid up prices as they were swept up in the ongoing stock market boom. Second, participation in the stock market widened noticeably in the twenties. The new investors were relatively unsophisticated, and they were more likely to be caught up in the euphoria of the boom and bid prices upward. New, inexperienced commission sales personnel were hired to sell stocks and they promised glowing returns on stocks they knew little about.

These observations were strengthened by the experimental work of economist Vernon Smith. (Bishop, 1987) In a number of experiments over a three-year period using students and Tucson businessmen and businesswomen, bubbles developed as inexperienced investors valued stocks differently and engaged in price speculation. As these investors in the experiments began to realize that speculative profits were unsustainable and uncertain, their dividend expectations changed, the market crashed, and ultimately stocks began trading at their fundamental dividend values. These bubbles and crashes occurred repeatedly, leading Smith to conjecture that there are few regulatory steps that can be taken to prevent a crash.

Though the bubble of 1928 and 1929 made some downward adjustment in stock prices inevitable, as Barsky and De Long have shown, changes in fundamentals govern the overall movements. And the end of the long bull market was almost certainly governed by this. In late 1928 and early 1929 there was a striking rise in economic activity, but a decline began somewhere between May and July of 1929 and was clearly evident by August. By the middle of August, the rise in stock prices had slowed down as better information on the contraction was received. There were repeated statements by leading figures that stocks were “overpriced,” and the Federal Reserve System sharply increased the discount rate in August 1929 as well as continuing its call for banks to reduce their margin lending. As this information was assessed, the number of speculators selling stocks increased, and the number buying decreased. With the decreased demand, stock prices began to fall, and as more accurate information on the nature and extent of the decline was received, stock prices fell more. The late October crash made the decline occur much more rapidly, and the margin purchases and consequent forced selling of many of those stocks contributed to a more severe price fall. The recovery of stock prices from November 13 into April of 1930 suggests that stock prices may have been driven somewhat too low during the crash.

There is now widespread agreement that the 1929 stock market crash did not cause the Great Depression. Instead, the initial downturn in economic activity was a primary determinant of the ending of the 1928-29 stock market bubble. The stock market crash did make the downturn become more severe beginning in November 1929. It reduced discretionary consumption spending (Romer, 1990) and created greater income uncertainty helping to bring on the contraction (Flacco and Parker, 1992). Though stock market prices reached a bottom and began to recover following November 13, 1929, the continuing decline in economic activity took its toll and by May 1930 stock prices resumed their decline and continued to fall through the summer of 1932.

Domestic Trade

In the nineteenth century, a complex array of wholesalers, jobbers, and retailers had developed, but changes in the postbellum period reduced the role of the wholesalers and jobbers and strengthened the importance of the retailers in domestic trade. (Cochran, 1977; Chandler, 1977; Marburg, 1951; Clewett, 1951) The appearance of the department store in the major cities and the rise of mail order firms in the postbellum period changed the retailing market.

Department Stores

A department store is a combination of specialty stores organized as departments within one general store. A. T. Stewart’s huge 1846 dry goods store in New York City is often referred to as the first department store. (Resseguie, 1965; Sobel-Sicilia, 1986) R. H. Macy started his dry goods store in 1858 and Wanamaker’s in Philadelphia opened in 1876. By the end of the nineteenth century, every city of any size had at least one major department store. (Appel, 1930; Benson, 1986; Hendrickson, 1979; Hower, 1946; Sobel, 1974) Until the late twenties, the department store field was dominated by independent stores, though some department stores in the largest cities had opened a few suburban branches and stores in other cities. In the interwar period department stores accounted for about 8 percent of retail sales.

The department stores relied on a “one-price” policy, which Stewart is credited with beginning. In the antebellum period and into the postbellum period, it was common not to post a specific price on an item; rather, each purchaser haggled with a sales clerk over what the price would be. Stewart posted fixed prices on the various dry goods sold, and the customer could either decide to buy or not buy at the fixed price. The policy dramatically lowered transactions costs for both the retailer and the purchaser. Prices were reduced with a smaller markup over the wholesale price, and a large sales volume and a quicker turnover of the store’s inventory generated profits.
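A stylized comparison shows why a smaller markup combined with faster inventory turnover could still generate profits; the percentages and turnover figures below are illustrative only, not drawn from the source:

\begin{align*}
\text{Gross return on inventory investment} &\approx \text{markup per sale} \times \text{inventory turns per year} \\
\text{Haggling retailer:}\quad 25\% \times 2 \text{ turns} &= 50\% \text{ per year} \\
\text{One-price department store:}\quad 10\% \times 6 \text{ turns} &= 60\% \text{ per year}
\end{align*}

On these assumed numbers, the lower posted price attracts the sales volume that makes the faster turnover, and hence the larger annual return, possible.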

Mail Order Firms

What changed the department store field in the twenties was the entrance of Sears Roebuck and Montgomery Ward, the two dominant mail order firms in the United States. (Emmet-Jeuck, 1950; Chandler, 1962, 1977) Both firms had begun in the late nineteenth century and by 1914 the younger Sears Roebuck had surpassed Montgomery Ward. Both located in Chicago due to its central location in the nation’s rail network and both had benefited from the advent of Rural Free Delivery in 1896 and low cost Parcel Post Service in 1912.

In 1924 Sears hired Robert E. Wood, who was able to convince Sears Roebuck to open retail stores. Wood believed that the declining rural population and the growing urban population forecast the gradual demise of the mail order business; survival of the mail order firms required a move into retail sales. By 1925 Sears Roebuck had opened 8 retail stores, and by 1929 it had 324 stores. Montgomery Ward quickly followed suit. Rather than locating these in the central business district (CBD), Wood located many on major streets closer to the residential areas. These moves of Sears Roebuck and Montgomery Ward expanded department store retailing and provided a new type of chain store.

Chain Stores

Though chain stores grew rapidly in the first two decades of the twentieth century, they date back to the 1860s when George F. Gilman and George Huntington Hartford opened a string of New York City A&P (Atlantic and Pacific) stores exclusively to sell tea. (Beckman-Nolen, 1938; Lebhar, 1963; Bullock, 1933) Stores were opened in other regions and in 1912 their first “cash-and-carry” full-range grocery was opened. Soon they were opening 50 of these stores each week and by the 1920s A&P had 14,000 stores. They then phased out the small stores to reduce the chain to 4,000 full-range, supermarket-type stores. A&P’s success led to new grocery store chains such as Kroger, Jewel Tea, and Safeway.

Prior to A&P’s cash-and-carry policy, it was common for grocery stores, produce (or green) grocers, and meat markets to provide home delivery and credit, both of which were costly. As a result, retail prices were generally marked up well above the wholesale prices. In cash-and-carry stores, items were sold only for cash; no credit was extended, and no expensive home deliveries were provided. Markups on prices could be much lower because other costs were much lower. Consumers liked the lower prices and were willing to pay cash and carry their groceries, and the policy became common by the twenties.

Chains also developed in other retail product lines. In 1879 Frank W. Woolworth developed a “5 and 10 Cent Store,” or dime store, and there were over 1,000 F. W. Woolworth stores by the mid-1920s. (Winkler, 1940) Other firms such as Kresge, Kress, and McCrory successfully imitated Woolworth’s dime store chain. J.C. Penney’s dry goods chain store began in 1901 (Beasley, 1948), Walgreen’s drug store chain began in 1909, and shoes, jewelry, cigars, and other lines of merchandise also began to be sold through chain stores.

Self-Service Policies

In 1916 Clarence Saunders, a grocer in Memphis, Tennessee, built upon the one-price policy and began offering self-service at his Piggly Wiggly store. Previously, customers handed a clerk a list or asked for the items desired, which the clerk then collected and the customer paid for. With self-service, items for sale were placed on open shelves among which the customers could walk, carrying a shopping bag or pushing a shopping cart. Each customer could then browse as he or she pleased, picking out whatever was desired. Saunders and other retailers who adopted the self-service method of retail selling found that customers often purchased more because of exposure to the array of products on the shelves; as well, self-service lowered the labor required for retail sales and therefore lowered costs.

Shopping Centers

Shopping centers, another innovation in retailing that began in the twenties, were not destined to become a major force in retail development until after the Second World War. The ultimate cause of this innovation was the widening ownership and use of the automobile. By the 1920s, as the ownership and use of the car began expanding, population began to move out of the crowded central cities toward the more open suburbs. When General Robert Wood set Sears off on its development of urban stores, he located these not in the central business district (CBD) but as free-standing stores on major arteries away from the CBD with sufficient space for parking.

At about the same time, a few entrepreneurs began to develop shopping centers. Yehoshua Cohen (1972) says, “The owner of such a center was responsible for maintenance of the center, its parking lot, as well as other services to consumers and retailers in the center.” Perhaps the earliest such shopping center was the Country Club Plaza built in 1922 by the J. C. Nichols Company in Kansas City, Missouri. Other early shopping centers appeared in Baltimore and Dallas. By the mid-1930s the concept of a planned shopping center was well known and was expected to be the means to capture the trade of the growing number of suburban consumers.

International Trade and Finance

In the twenties a gold exchange standard was developed to replace the gold standard of the prewar world. Under a gold standard, each country’s currency carried a fixed exchange rate with gold, and the currency had to be backed up by gold. As a result, all countries on the gold standard had fixed exchange rates with all other countries. Adjustments to balance international trade flows were made by gold flows. If a country had a deficit in its trade balance, gold would leave the country, forcing the money stock to decline and prices to fall. Falling prices made the deficit countries’ exports more attractive and imports more costly, reducing the deficit. Countries with a surplus imported gold, which increased the money stock and caused prices to rise. This made the surplus countries’ exports less attractive and imports more attractive, decreasing the surplus. Most economists who have studied the prewar gold standard contend that it did not work as the conventional textbook model says, because capital flows frequently reduced or eliminated the need for gold flows for long periods of time. However, there is no consensus on whether fortuitous circumstances, rather than the gold standard, saved the international economy from periodic convulsions or whether the gold standard as it did work was sufficient to promote stability and growth in international transactions.
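One conventional way to summarize the adjustment mechanism described above, though the article itself does not use this notation, is through the equation of exchange, where M is the money stock, V its velocity, P the price level, and Y real output:

\begin{align*}
MV &= PY \\
\text{Deficit country:}\quad \text{gold outflow} &\Rightarrow M\downarrow \Rightarrow P\downarrow \Rightarrow \text{exports cheaper, imports dearer} \Rightarrow \text{deficit shrinks} \\
\text{Surplus country:}\quad \text{gold inflow} &\Rightarrow M\uparrow \Rightarrow P\uparrow \Rightarrow \text{exports dearer, imports cheaper} \Rightarrow \text{surplus shrinks}
\end{align*}

This is only the stylized textbook model; as the paragraph above notes, capital flows often substituted for these gold flows in practice.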

After the First World War it was argued that there was a “shortage” of fluid monetary gold to use for the gold standard, so some method of “economizing” on gold had to be found. To do this, two basic changes were made. First, most nations, other than the United States, stopped domestic circulation of gold. Second, the “gold exchange” system was created. Most countries held their international reserves in the form of U.S. dollars or British pounds, and international transactions used dollars or pounds, as long as the United States and Great Britain stood ready to exchange their currencies for gold at fixed exchange rates. However, the overvaluation of the pound and the undervaluation of the franc threatened these arrangements. The British trade deficit led to a capital outflow, higher interest rates, and a weak economy. In the late twenties, the French trade surplus led to the importation of gold that the French authorities did not allow to expand the money supply.

Economizing on gold by no longer allowing its domestic circulation and by using key currencies as international monetary reserves was really an attempt to place the domestic economies under the control of the nations’ politicians and make them independent of international events. Unfortunately, in doing this politicians eliminated the equilibrating mechanism of the gold standard but had nothing with which to replace it. The new international monetary arrangements of the twenties were potentially destabilizing because they were not allowed to operate as a price mechanism promoting equilibrating adjustments.

There were other problems with international economic activity in the twenties. Because of the war, the United States was abruptly transformed from a debtor to a creditor on international accounts. Though the United States did not want reparations payments from Germany, it did insist that Allied governments repay American loans. The Allied governments then insisted on war reparations from Germany. These initial reparations assessments were quite large. The Allied Reparations Commission collected the charges by supervising Germany’s foreign trade and by internal controls on the German economy, and it was authorized to increase the reparations if it was felt that Germany could pay more. The treaty allowed France to occupy the Ruhr after Germany defaulted in 1923.

Ultimately, this tangled web of debts and reparations, which was a major factor in the course of international trade, depended upon two principal actions. First, the United States had to run an import surplus or, on net, export capital out of the United States to provide a pool of dollars overseas. Germany then had either to have an export surplus or else import American capital so as to build up dollar reserves—that is, the dollars the United States was exporting. In effect, these dollars were paid by Germany to Great Britain, France, and other countries that then shipped them back to the United States as payment on their U.S. debts. If these conditions did not occur (and note that the “new” gold standard of the twenties had lost its flexibility because the price adjustment mechanism had been eliminated), disruption in international activity could easily occur and be transmitted to the domestic economies.

In the wake of the 1920-21 depression Congress passed the Emergency Tariff Act, which raised tariffs, particularly on manufactured goods. (Figures 26 and 27) The Fordney-McCumber Tariff of 1922 continued the Emergency Tariff of 1921, and its protection on many items was extremely high, ranging from 60 to 100 percent ad valorem (or as a percent of the price of the item). The increases in the Fordney-McCumber tariff were as large as, and sometimes larger than, those in the more famous (or “infamous”) Smoot-Hawley tariff of 1930. As farm product prices fell at the end of the decade, presidential candidate Herbert Hoover proposed, as part of his platform, tariff increases and other changes to aid the farmers. In January 1929, after Hoover’s election but before he took office, a tariff bill was introduced into Congress. Special interests succeeded in gaining additional (or new) protection for most domestically produced commodities, and the goal of greater protection for the farmers tended to get lost in the increased protection for multitudes of American manufactured products. In spite of widespread condemnation by economists, President Hoover signed the Smoot-Hawley Tariff in June 1930 and rates rose sharply.

Following the First World War, the U.S. government actively promoted American exports, and in each of the postwar years through 1929, the United States recorded a surplus in its balance of trade. (Figure 28) However, the surplus declined in the 1930s as both exports and imports fell sharply after 1929. From the mid-1920s on finished manufactures were the most important exports, while agricultural products dominated American imports.

The majority of the funds that allowed Germany to make its reparations payments to France and Great Britain and hence allowed those countries to pay their debts to the United States came from the net flow of capital out of the United States in the form of direct investment in real assets and investments in long- and short-term foreign financial assets. After the devastating German hyperinflation of 1922 and 1923, the Dawes Plan reformed the German economy and currency and accelerated the U.S. capital outflow. American investors began to actively and aggressively pursue foreign investments, particularly loans (Lewis, 1938) and in the late twenties there was a marked deterioration in the quality of foreign bonds sold in the United States. (Mintz, 1951)

The system, then, worked well as long as there was a net outflow of American capital, but this did not continue. In the middle of 1928, the flow of short-term capital began to decline. In 1928 the flow of “other long-term” capital out of the United States was 752 million dollars, but in 1929 it was only 34 million dollars. Though arguments now exist as to whether the booming stock market in the United States was to blame for this, it had far-reaching effects on the international economic system and the various domestic economies.

The Start of the Depression

By 1920 the United States held the largest share of the world’s monetary gold, about 40 percent. In the latter part of the twenties, France also began accumulating gold as its share of the world’s monetary gold rose from 9 percent in 1927 to 17 percent in 1929 and 22 percent by 1931. In 1927 the Federal Reserve System had reduced discount rates (the interest rate at which it lent reserves to member commercial banks) and engaged in open market purchases (purchasing U.S. government securities on the open market to increase the reserves of the banking system) to push down interest rates and assist Great Britain in staying on the gold standard. By early 1928 the Federal Reserve System was worried about its loss of gold due to this policy as well as the ongoing boom in the stock market. It began to raise the discount rate to stop these outflows. Gold was also entering the United States so that foreigners could obtain dollars to invest in stocks and bonds. As the United States and France accumulated more and more of the world’s monetary gold, other countries’ central banks took contractionary steps to stem the loss of gold. In country after country these deflationary strategies began contracting economic activity and by 1928 some countries in Europe, Asia, and South America had entered into a depression. More countries’ economies began to decline in 1929, including the United States, and by 1930 a depression was in force for almost all of the world’s market economies. (Temin, 1989; Eichengreen, 1992)

Monetary and Fiscal Policies in the 1920s

Fiscal Policies

As a tool to promote stability in aggregate economic activity, fiscal policy is largely a post-Second World War phenomenon. Prior to 1930 the federal government’s spending and taxing decisions were largely, but not completely, based on the perceived “need” for government-provided public goods and services.

Though the fiscal policy concept had not been developed, this does not mean that during the twenties no concept of the government’s role in stimulating economic activity existed. Herbert Stein (1990) points out that in the twenties Herbert Hoover and some of his contemporaries shared two ideas about the proper role of the federal government. The first was that federal spending on public works could be an important force in reducing unemployment; the second was that such spending could help stabilize private investment. Both concepts fit the ideas held by Hoover and others of his persuasion that the U.S. economy of the twenties was not the result of laissez-faire workings but of “deliberate social engineering.”

The federal personal income tax was enacted in 1913. Though mildly progressive, its rates were low and topped out at 7 percent on taxable income in excess of $750,000. (Table 4) As the United States prepared for war in 1916, rates were increased and reached a maximum marginal rate of 12 percent. With the onset of the First World War, the rates were dramatically increased. To obtain additional revenue in 1918, marginal rates were again increased. The share of federal revenue generated by income taxes rose from 11 percent in 1914 to 69 percent in 1920. The tax rates had been extended downward so that more than 30 percent of the nation’s income recipients were subject to income taxes by 1918. However, through the purchase of tax-exempt state and local securities and through steps taken by corporations to avoid the cash distribution of profits, the number of high-income taxpayers and their share of total taxes paid declined as Congress kept increasing the tax rates. The normal (or base) tax rate was reduced slightly for 1919 but the surtax rates, which made the income tax highly progressive, were retained. (Smiley and Keehn, 1995)

President Harding’s new Secretary of the Treasury, Andrew Mellon, proposed cutting the tax rates, arguing that the rates in the higher brackets had “passed the point of productivity” and rates in excess of 70 percent simply could not be collected. Though most agreed that the rates were too high, there was sharp disagreement on how the rates should be cut. Democrats and Progressive Republicans argued for rate cuts targeted at lower-income taxpayers while maintaining most of the steep progressivity of the tax rates. They believed that remedies could be found to change the tax laws to stop the legal avoidance of federal income taxes. Republicans argued for sharper cuts that reduced the progressivity of the rates. Mellon proposed a maximum rate of 25 percent.

Though the federal income tax rates were reduced and made less progressive, it took three tax rate cuts in 1921, 1924, and 1925 before Mellon’s goal was finally achieved. The highest marginal tax rate was reduced from 73 percent to 58 percent to 46 percent and finally to 25 percent for the 1925 tax year. All of the other rates were also reduced and exemptions increased. By 1926, only about the top 10 percent of income recipients were subject to federal income taxes. As tax rates were reduced, the number of high income tax returns increased and the share of total federal personal income taxes paid rose. (Tables 5 and 6) Even with the dramatic income tax rate cuts and reductions in the number of low income taxpayers, federal personal income tax revenue continued to rise during the 1920s. Though early estimates of the distribution of personal income showed sharp increases in income inequality during the 1920s (Kuznets, 1953; Holt, 1977), more recent estimates have found that the increases in inequality were considerably less and these appear largely to be related to the sharp rise in capital gains due to the booming stock market in the late twenties. (Smiley, 1998 and 2000)

Each year in the twenties the federal government generated a surplus, in some years as much as 1 percent of GNP. The surpluses were used to pay down the federal debt, which declined by 25 percent between 1920 and 1930. Contrary to simple macroeconomic models that argue a federal government budget surplus must be contractionary and tend to stop an economy from reaching full employment, the American economy operated at or close to full employment throughout the twenties and saw significant economic growth. In this case, the surpluses were not contractionary because the dollars were circulated back into the economy through the purchase of outstanding federal debt rather than pulled out of circulation as currency and held in a vault somewhere.

Monetary Policies

In 1913 fear of the “money trust” and its monopoly power led Congress to create 12 central banks when it created the Federal Reserve System. The new central banks were to control money and credit and act as lenders of last resort to end banking panics. The role of the Federal Reserve Board, located in Washington, D.C., was to coordinate the policies of the 12 district banks; it was composed of five presidential appointees plus the current secretary of the treasury and the comptroller of the currency. All national banks had to become members of the Federal Reserve System, the Fed, and any state bank meeting the qualifications could elect to do so.

The act specified fixed reserve requirements on demand and time deposits, all of which had to be on deposit in the district bank. Member commercial banks were allowed to rediscount commercial paper at their district banks and were given Federal Reserve currency in return. Initially, each district bank set its own rediscount rate. To provide additional income when there was little rediscounting, the district banks were allowed to engage in open market operations that involved the purchasing and selling of federal government securities, short-term securities of state and local governments issued in anticipation of taxes, foreign exchange, and domestic bills of exchange. The district banks were also designated to act as fiscal agents for the federal government. Finally, the Federal Reserve System provided a central check clearinghouse for the entire banking system.

When the Federal Reserve System was originally set up, it was believed that its primary role was to be a lender of last resort to prevent banking panics and become a check-clearing mechanism for the nation’s banks. Both the Federal Reserve Board and the Governors of the District Banks were bodies established to jointly exercise these activities. The division of functions was not clear, and a struggle for power ensued, mainly between the New York Federal Reserve Bank, which was led by J. P. Morgan’s protege, Benjamin Strong, through 1928, and the Federal Reserve Board. By the thirties the Federal Reserve Board had achieved dominance.

There were really two conflicting criteria upon which monetary actions were ostensibly based: the Gold Standard and the Real Bills Doctrine. The Gold Standard was supposed to be quasi-automatic, with an effective limit to the quantity of money. However, the Real Bills Doctrine (which required that all loans be made on short-term, self-liquidating commercial paper) had no effective limit on the quantity of money. The rediscounting of eligible commercial paper was supposed to lead to the required “elasticity” of the stock of money to “accommodate” the needs of industry and business. Actually the rediscounting of commercial paper, open market purchases, and gold inflows all had the same effects on the money stock.

The 1920-21 Depression

During the First World War, the Fed kept discount rates low and granted discounts on banks’ customer loans used to purchase war bonds in order to help finance the war. The final Victory Loan had not been floated when the Armistice was signed in November of 1918: in fact, it took until October of 1919 for the government to fully sell this last loan issue. The Treasury, with the secretary of the treasury sitting on the Federal Reserve Board, persuaded the Federal Reserve System to maintain low interest rates and discount the Victory bonds as necessary to keep bond prices high until this last issue had been floated. As a result, during this period the money supply grew rapidly and prices rose sharply.

A shift from a federal deficit to a surplus and supply disruptions due to steel and coal strikes in 1919 and a railroad strike in early 1920 contributed to the end of the boom. But the most common view is that the Fed’s monetary policy was the main determinant of the end of the expansion and inflation and the beginning of the subsequent contraction and severe deflation. When the Fed was released from its informal agreement with the Treasury in November of 1919, it raised the discount rate from 4 to 4.75 percent. Benjamin Strong (the governor of the New York bank) was beginning to believe that the time for strong action was past and that the Federal Reserve System’s actions should be moderate. However, with Strong out of the country, the Federal Reserve Board increased the discount rate from 4.75 to 6 percent in late January of 1920 and to 7 percent on June 1, 1920. By the middle of 1920, economic activity and employment were rapidly falling, and prices had begun their downward spiral in one of the sharpest price declines in American history. The Federal Reserve System kept the discount rate at 7 percent until May 5, 1921, when it was lowered to 6.5 percent. By June of 1922, the rate had been lowered yet again to 4 percent. (Friedman and Schwartz, 1963)

The Federal Reserve System authorities received considerable criticism then and later for their actions. Milton Friedman and Anna Schwartz (1963) contend that the discount rate was raised too much too late and then kept too high for too long, causing the decline to be more severe and the price deflation to be greater. In their opinion the Fed acted in this manner due to the necessity of meeting the legal reserve requirement with a safe margin of gold reserves. Elmus Wicker (1966), however, argues that the gold reserve ratio was not the main factor determining the Federal Reserve policy in the episode. Rather, the Fed knowingly pursued a deflationary policy because it felt that the money supply was simply too large and prices too high. To return to the prewar parity for gold required lowering the price level, and there was an excessive stock of money because the additional money had been used to finance the war, not to produce consumer goods. Finally, the outstanding indebtedness was too large due to the creation of Fed credit.

Whether statutory gold reserve requirements to maintain the gold standard or domestic credit conditions were the most important determinant of Fed policy is still an open question, though both certainly had some influence. Regardless of the answer to that question, the Federal Reserve System’s first major undertaking in the years immediately following the First World War demonstrated poor policy formulation.

Federal Reserve Policies from 1922 to 1930

By 1921 the district banks began to recognize that their open market purchases had effects on interest rates, the money stock, and economic activity. For the next several years, economists in the Federal Reserve System discussed how this worked and how it could be related to discounting by member banks. A committee was created to coordinate the open market purchases of the district banks.

The recovery from the 1920-1921 depression had proceeded smoothly with moderate price increases. In early 1923 the Fed sold some securities and raised the discount rate from its 4 percent level because it believed the recovery was too rapid. However, by the fall of 1923 there were some signs of a business slump. McMillin and Parker (1994) argue that this contraction, as well as the 1927 contraction, was related to oil price shocks. By October of 1923 Benjamin Strong was advocating securities purchases to counter this. Between then and September 1924 the Federal Reserve System increased its securities holdings by over $500 million. Between April and August of 1924 the Fed reduced the discount rate to 3 percent in a series of three separate steps. In addition to moderating the mild business slump, the expansionary policy was also intended to reduce American interest rates relative to British interest rates. This reversed the gold flow back toward Great Britain, allowing Britain to return to the gold standard in 1925. At the time it appeared that the Fed’s monetary policy had successfully accomplished its goals.

By the summer of 1924 the business slump was over and the economy again began to grow rapidly. By the mid-1920s real estate speculation had arisen in many urban areas in the United States and especially in Southeastern Florida. Land prices were rising sharply. Stock market prices had also begun rising more rapidly. The Fed expressed some worry about these developments and in 1926 sold some securities to gently slow the real estate and stock market booms. Amid hurricanes and supply bottlenecks the Florida real estate boom collapsed, but the stock market boom continued.

The American economy entered into another mild business recession in the fall of 1926 that lasted until the fall of 1927. One of the factors in this was Henry Ford’s shutdown of all of his factories to change over from the Model T to the Model A. His employees were left without a job and without income for over six months. International concerns also reappeared. France, which was preparing to return to the gold standard, had begun accumulating gold, and gold continued to flow into the United States. Some of this gold came from Great Britain, making it difficult for the British to remain on the gold standard. This occasioned a new experiment in central bank cooperation. In July 1927 Benjamin Strong arranged a conference with Governor Montagu Norman of the Bank of England, Governor Hjalmar Schacht of the Reichsbank, and Deputy Governor Charles Rist of the Bank of France in an attempt to promote cooperation among the world’s central bankers. By the time the conference began the Fed had already taken steps to counteract the business slump and reduce the gold inflow. In early 1927 the Fed reduced discount rates and made large securities purchases. One result of this was that the gold stock fell from $4.3 billion in mid-1927 to $3.8 billion in mid-1928. Some of the gold exports went to France, and France returned to the gold standard with its undervalued currency. The loss of gold from Britain eased, allowing it to maintain the gold standard.

By early 1928 the Fed was again becoming worried. Stock market prices were rising even faster and the apparent speculative bubble in the stock market was of some concern to Fed authorities. The Fed was also concerned about the loss of gold and wanted to bring that to an end. To do this they sold securities and, in three steps, raised the discount rate to 5 percent by July 1928. To this point the Federal Reserve Board had largely agreed with district bank policy changes. However, problems began to develop.

During the stock market boom of the late 1920s the Federal Reserve Board preferred to use “moral suasion” rather than increases in discount rates to lessen member bank borrowing. The New York bank insisted that moral suasion would not work unless backed up by literal credit rationing on a bank-by-bank basis, which it and the other district banks were unwilling to do. They insisted that discount rates had to be increased. The Federal Reserve Board countered that this general policy change would slow down economic activity in general rather than be specifically targeted at stock market speculation. The result was that little was done for a year: rates were not raised, but neither were open market purchases undertaken. Rates were finally raised to 6 percent in August of 1929. By that time the contraction had already begun. In late October the stock market crashed, and America slid into the Great Depression.

In November, following the stock market crash, the Fed reduced discount rates to 4.5 percent. In January it again decreased discount rates and began a series of discount rate decreases until the rate reached 2.5 percent at the end of 1930. No further open market operations were undertaken for the next six months. As banks reduced their discounting in 1930, the stock of money declined. There was a banking crisis in the southeast in November and December of 1930, and in its wake the public’s holding of currency relative to deposits and banks’ reserve ratios began to rise and continued to do so through the end of the Great Depression.

Conclusion

Though some disagree, there is growing evidence that the behavior of the American economy in the 1920s did not cause the Great Depression. The depressed 1930s were not “retribution” for the exuberant growth of the 1920s. The weakness of a few economic sectors in the 1920s did not forecast the contraction from 1929 to 1933. Rather it was the depression of the 1930s and the Second World War that interrupted the economic growth begun in the 1920s and resumed after the Second World War. Just as the construction of skyscrapers that began in the 1920s resumed in the 1950s, so did real economic growth and progress resume. In retrospect we can see that the introduction and expansion of new technologies and industries in the 1920s, such as autos, household electric appliances, radio, and electric utilities, are echoed in the 1990s in the effects of the expanding use and development of the personal computer and the rise of the internet. The 1920s have much to teach us about the growth and development of the American economy.

References

Adams, Walter, ed. The Structure of American Industry, 5th ed. New York: Macmillan Publishing Co., 1977.

Aldcroft, Derek H. From Versailles to Wall Street, 1919-1929. Berkeley: The University of California Press, 1977.

Allen, Frederick Lewis. Only Yesterday. New York: Harper and Sons, 1931.

Alston, Lee J. “Farm Foreclosures in the United States During the Interwar Period.” The Journal of Economic History 43, no. 4 (1983): 885-904.

Alston, Lee J., Wayne A. Grove, and David C. Wheelock. “Why Do Banks Fail? Evidence from the 1920s.” Explorations in Economic History 31 (1994): 409-431.

Ankli, Robert. “Horses vs. Tractors on the Corn Belt.” Agricultural History 54 (1980): 134-148.

Ankli, Robert and Alan L. Olmstead. “The Adoption of the Gasoline Tractor in California.” Agricultural History 55 (1981): 213-230.

Appel, Joseph H. The Business Biography of John Wanamaker, Founder and Builder. New York: The Macmillan Co., 1930.

Baker, Jonathan B. “Identifying Cartel Pricing Under Uncertainty: The U.S. Steel Industry, 1933-1939.” The Journal of Law and Economics 32 (1989): S47-S76.

Barger, E. L, et al. Tractors and Their Power Units. New York: John Wiley and Sons, 1952.

Barnouw, Eric. A Tower in Babel: A History of Broadcasting in the United States: Vol. I—to 1933. New York: Oxford University Press, 1966.

Barnouw, Eric. The Golden Web: A History of Broadcasting in the United States: Vol. II—1933 to 1953. New York: Oxford University Press, 1968.

Beasley, Norman. Main Street Merchant: The Story of the J. C. Penney Company. New York: Whittlesey House, 1948.

Beckman, Theodore N. and Herman C. Nolen. The Chain Store Problem: A Critical Analysis. New York: McGraw-Hill Book Co., 1938.

Benson, Susan Porter. Counter Cultures: Saleswomen, Managers, and Customers in American Department Stores, 1890-1940. Urbana, IL: University of Illinois Press, 1986.

Bernstein, Irving. The Lean Years: A History of the American Worker, 1920-1933. Boston: Houghton Mifflin Co., 1960.

Bernstein, Michael A. The Great Depression: Delayed Recovery and Economic Change in America, 1929-1939. New York: Cambridge University Press, 1987.

Bishop, Jerry E. “Stock Market Experiment Suggests Inevitability of Booms and Busts.” The Wall Street Journal, 17 November, 1987.

Board of Governors of the Federal Reserve System. Banking and Monetary Statistics. Washington: USGPO, 1943.

Bogue, Allan G. “Changes in Mechanical and Plant Technology: The Corn Belt, 1910-1940.” The Journal of Economic History 43 (1983): 1-26.

Breit, William and Elzinga, Kenneth. The Antitrust Casebook: Milestones in Economic Regulation, 2d ed. Chicago: The Dryden Press, 1989.

Bright, Arthur A., Jr. The Electric Lamp Industry: Technological Change and Economic Development from 1800 to 1947. New York: Macmillan, 1947.

Brody, David. Labor in Crisis: The Steel Strike. Philadelphia: J. B. Lippincott Co., 1965.

Brooks, John. Telephone: The First Hundred Years. New York: Harper and Row, 1975.

Brown, D. Clayton. Electricity for Rural America: The Fight for the REA. Westport, CT: The Greenwood Press, 1980.

Brown, William A., Jr. The International Gold Standard Reinterpreted, 1914-1934, 2 vols. New York: National Bureau of Economic Research, 1940.

Brunner, Karl and Allen Meltzer. “What Did We Learn from the Monetary Experience of the United States in the Great Depression?” Canadian Journal of Economics 1 (1968): 334-48.

Bryant, Keith L., Jr., and Henry C. Dethloff. A History of American Business. Englewood Cliffs, NJ: Prentice-Hall, Inc., 1983.

Bucklin, Louis P. Competition and Evolution in the Distributive Trades. Englewood Cliffs, NJ: Prentice-Hall, 1972.

Bullock, Roy J. “The Early History of the Great Atlantic & Pacific Tea Company,” Harvard Business Review 11 (1933): 289-93.

Bullock, Roy J. “A History of the Great Atlantic & Pacific Tea Company Since 1878.” Harvard Business Review 12 (1933): 59-69.

Cecchetti, Stephen G. “Understanding the Great Depression: Lessons for Current Policy.” In The Economics of the Great Depression, edited by Mark Wheeler. Kalamazoo, MI: W. E. Upjohn Institute for Employment Research, 1998.

Chandler, Alfred D., Jr. Strategy and Structure: Chapters in the History of the American Industrial Enterprise. Cambridge, MA: The M.I.T. Press, 1962.

Chandler, Alfred D., Jr. The Visible Hand: The Managerial Revolution in American Business. Cambridge, MA: the Belknap Press Harvard University Press, 1977.

Chandler, Alfred D., Jr. Giant Enterprise: Ford, General Motors, and the American Automobile Industry. New York: Harcourt, Brace, and World, 1964.

Chester, Giraud, and Garnet R. Garrison. Radio and Television: An Introduction. New York: Appleton-Century Crofts, 1950.

Clewett, Richard C. “Mass Marketing of Consumers’ Goods.” In The Growth of the American Economy, 2d ed. Edited by Harold F. Williamson. Englewood Cliffs, NJ: Prentice-Hall, 1951.

Cochran, Thomas C. 200 Years of American Business. New York: Delta Books, 1977.

Cohen, Yehoshua S. Diffusion of an Innovation in an Urban System: The Spread of Planned Regional Shopping Centers in the United States, 1949-1968. Chicago: The University of Chicago, Department of Geography, Research Paper No. 140, 1972.

Daley, Robert. An American Saga: Juan Trippe and His Pan American Empire. New York: Random House, 1980.

Clarke, Sally. “New Deal Regulation and the Revolution in American Farm Productivity: A Case Study of the Diffusion of the Tractor in the Corn Belt, 1920-1940.” The Journal of Economic History 51, no. 1 (1991): 105-115.

Cohen, Avi. “Technological Change as Historical Process: The Case of the U.S. Pulp and Paper Industry, 1915-1940.” The Journal of Economic History 44 (1984): 775-79.

Davies, R. E. G. A History of the World’s Airlines. London: Oxford University Press, 1964.

Dearing, Charles L., and Wilfred Owen. National Transportation Policy. Washington: The Brookings Institution, 1949.

Degen, Robert A. The American Monetary System: A Concise Survey of Its Evolution Since 1896. Lexington, MA: Lexington Books, 1987.

De Long, J. Bradford and Andrei Shleifer. “The Stock Market Bubble of 1929: Evidence from Closed-end Mutual Funds.” The Journal of Economic History 51 (1991): 675-700.

Devine, Warren D., Jr. “From Shafts to Wires: Historical Perspectives on Electrification.” The Journal of Economic History 43 (1983): 347-372.

Eckert, Ross D., and George W. Hilton. “The Jitneys.” The Journal of Law and Economics 15 (October 1972): 293-326.

Eichengreen, Barry, ed. The Gold Standard in Theory and History. New York: Metheun, 1985.

Eichengreen, Barry. “The Political Economy of the Smoot-Hawley Tariff.” Research in Economic History 12 (1989): 1-43.

Eichengreen, Barry. Golden Fetters: The Gold Standard and the Great Depression, 1919-1939. New York: Oxford University Press, 1992.

Eis, Carl. “The 1919-1930 Merger Movement in American Industry.” The Journal of Law and Economics XII (1969): 267-96.

Emmet, Boris, and John E. Jeuck. Catalogues and Counters: A History of Sears Roebuck and Company. Chicago: University of Chicago Press, 1950.

Fearon, Peter. War, Prosperity, & Depression: The U.S. Economy, 1917-1945. Lawrence, KS: University of Kansas Press, 1987.

Field, Alexander J. “The Most Technologically Progressive Decade of the Century.” The American Economic Review 93 (2003): 1399-1413.

Fischer, Claude. “The Revolution in Rural Telephony, 1900-1920.” Journal of Social History 21 (1987): 221-38.

Fischer, Claude. “Technology’s Retreat: The Decline of Rural Telephony in the United States, 1920-1940.” Social Science History, Vol. 11 (Fall 1987), pp. 295-327.

Fisher, Irving. The Stock Market Crash—and After. New York: Macmillan, 1930.

French, Michael J. “Structural Change and Competition in the United States Tire Industry, 1920-1937.” Business History Review 60 (1986): 28-54.

French, Michael J. The U.S. Tire Industry. Boston: Twayne Publishers, 1991.

Fricke, Ernest B. “The New Deal and the Modernization of Small Business: The McCreary Tire and Rubber Company, 1930-1940.” Business History Review 56 (1982): 559-76.

Friedman, Milton, and Anna J. Schwartz. A Monetary History of the United States, 1867-1960. Princeton: Princeton University Press, 1963.

Galbraith, John Kenneth. The Great Crash. Boston: Houghton Mifflin, 1954.

Garnet, Robert W. The Telephone Enterprise: The Evolution of the Bell System’s Horizontal Structure, 1876-1900. Baltimore: The Johns Hopkins University Press, 1985.

Gideonse, Max. “Foreign Trade, Investments, and Commercial Policy.” In The Growth of the American Economy, 2d ed. Edited by Harold F. Williamson. Englewood Cliffs, NJ: Prentice-Hall, 1951.

Giedion, Simon. Mechanization Takes Command. New York: Oxford University Press, 1948.

Gordon, Robert Aaron. Economic Instability and Growth: The American Record. New York: Harper and Row, 1974.

Gray, Roy Burton. Development of the Agricultural Tractor in the United States, 2 vols. Washington, D. C.: USGPO, 1954.

Gunderson, Gerald. An Economic History of America. New York: McGraw-Hill, 1976.

Hadwiger, Don F., and Clay Cochran. “Rural Telephones in the United States.” Agricultural History 58 (July 1984): 221-38.

Hamilton, James D. “Monetary Factors in the Great Depression.” Journal of Monetary Economics 19 (1987): 145-169.

Hamilton, James D. “The Role of the International Gold Standard in Propagating the Great Depression.” Contemporary Policy Issues 6 (1988): 67-89.

Hayek, Friedrich A. Prices and Production. New York: Augustus M. Kelley, reprint of 1931 edition.

Hayford, Marc and Carl A. Pasurka, Jr. “The Political Economy of the Fordney-McCumber and Smoot-Hawley Tariff Acts.” Explorations in Economic History 29 (1992): 30-50.

Hendrickson, Robert. The Grand Emporiums: The Illustrated History of America’s Great Department Stores. Briarcliff Manor, NY: Stein and Day, 1979.

Herbst, Anthony F., and Joseph S. K. Wu. “Some Evidence of Subsidization of the U.S. Trucking Industry, 1900-1920.” The Journal of Economic History 33 (June 1973): 417-33.

Higgs, Robert. Crisis and Leviathan: Critical Episodes in the Growth of American Government. New York: Oxford University Press, 1987.

Hilton, George W., and John Due. The Electric Interurban Railways in America. Stanford: Stanford University Press, 1960.

Hoffman, Elizabeth and Gary D. Liebcap. “Institutional Choice and the Development of U.S. Agricultural Policies in the 1920s.” The Journal of Economic History 51 (1991): 397-412.

Holt, Charles F. “Who Benefited from the Prosperity of the Twenties?” Explorations in Economic History 14 (1977): 277-289.

Hower, Ralph W. History of Macy’s of New York, 1858-1919. Cambridge, MA: Harvard University Press, 1946.

Hubbard, R. Glenn, Ed. Financial Markets and Financial Crises. Chicago: University of Chicago Press, 1991.

Hunter, Louis C. “Industry in the Twentieth Century.” In The Growth of the American Economy, 2d ed., edited by Harold F. Williamson. Englewood Cliffs, NJ: Prentice-Hall, 1951.

Jerome, Harry. Mechanization in Industry. New York: National Bureau of Economic Research, 1934.

Johnson, H. Thomas. “Postwar Optimism and the Rural Financial Crisis.” Explorations in Economic History 11, no. 2 (1973-1974): 173-192.

Jones, Fred R. and William H. Aldred. Farm Power and Tractors, 5th ed. New York: McGraw-Hill, 1979.

Keller, Robert. “Factor Income Distribution in the United States During the 20’s: A Reexamination of Fact and Theory.” The Journal of Economic History 33 (1973): 252-95.

Kelly, Charles J., Jr. The Sky’s the Limit: The History of the Airlines. New York: Coward-McCann, 1963.

Kindleberger, Charles. The World in Depression, 1929-1939. Berkeley: The University of California Press, 1973.

Klebaner, Benjamin J. Commercial Banking in the United States: A History. New York: W. W. Norton and Co., 1974.

Kuznets, Simon. Shares of Upper Income Groups in Income and Savings. New York: NBER, 1953.

Lebhar, Godfrey M. Chain Stores in America, 1859-1962. New York: Chain Store Publishing Corp., 1963.

Lewis, Cleona. America’s Stake in International Investments. Washington: The Brookings Institution, 1938.

Livesay, Harold C. and Patrick G. Porter. “Vertical Integration in American Manufacturing, 1899-1948.” The Journal of Economic History 29 (1969): 494-500.

Lipartito, Kenneth. The Bell System and Regional Business: The Telephone in the South, 1877-1920. Baltimore: The Johns Hopkins University Press, 1989.

Liu, Tung, Gary J. Santoni, and Courteney C. Stone. “In Search of Stock Market Bubbles: A Comment on Rappoport and White.” The Journal of Economic History 55 (1995): 647-654.

Lorant, John. “Technological Change in American Manufacturing During the 1920s.” The Journal of Economic History 33 (1967): 243-47.

McDonald, Forrest. Insull. Chicago: University of Chicago Press, 1962.

Marburg, Theodore. “Domestic Trade and Marketing.” In The Growth of the American Economy, 2d ed. Edited by Harold F. Williamson. Englewood Cliffs, NJ: Prentice-Hall, 1951.

Markham, Jesse. “Survey of the Evidence and Findings on Mergers.” In Business Concentration and Price Policy, National Bureau of Economic Research. Princeton: Princeton University Press, 1955.

Markin, Rom J. The Supermarket: An Analysis of Growth, Development, and Change. Rev. ed. Pullman, WA: Washington State University Press, 1968.

McCraw, Thomas K. TVA and the Power Fight, 1933-1937. Philadelphia: J. B. Lippincott, 1971.

McCraw, Thomas K. and Forest Reinhardt. “Losing to Win: U.S. Steel’s Pricing, Investment Decisions, and Market Share, 1901-1938.” The Journal of Economic History 49 (1989): 592-620.

McMillin, W. Douglas and Randall E. Parker. “An Empirical Analysis of Oil Price Shocks in the Interwar Period.” Economic Inquiry 32 (1994): 486-497.

McNair, Malcolm P., and Eleanor G. May. The Evolution of Retail Institutions in the United States. Cambridge, MA: The Marketing Science Institute, 1976.

Mercer, Lloyd J. “Comment on Papers by Scheiber, Keller, and Raup.” The Journal of Economic History 33 (1973): 291-95.

Mintz, Ilse. Deterioration in the Quality of Foreign Bonds Issued in the United States, 1920-1930. New York: National Bureau of Economic Research, 1951.

Mishkin, Frederic S. “Asymmetric Information and Financial Crises: A Historical Perspective.” In Financial Markets and Financial Crises, edited by R. Glenn Hubbard. Chicago: University of Chicago Press, 1991.

Morris, Lloyd. Not So Long Ago. New York: Random House, 1949.

Mosco, Vincent. Broadcasting in the United States: Innovative Challenge and Organizational Control. Norwood, NJ: Ablex Publishing Corp., 1979.

Moulton, Harold G. et al. The American Transportation Problem. Washington: The Brookings Institution, 1933.

Mueller, John. “Lessons of the Tax-Cuts of Yesteryear.” The Wall Street Journal, March 5, 1981.

Musoke, Moses S. “Mechanizing Cotton Production in the American South: The Tractor, 1915-1960.” Explorations in Economic History 18 (1981): 347-75.

Nelson, Daniel. “Mass Production and the U.S. Tire Industry.” The Journal of Economic History 48 (1987): 329-40.

Nelson, Ralph L. Merger Movements in American Industry, 1895-1956. Princeton: Princeton University Press, 1959.

Niemi, Albert W., Jr., U.S. Economic History, 2nd ed. Chicago: Rand McNally Publishing Co., 1980.

Norton, Hugh S. Modern Transportation Economics. Columbus, OH: Charles E. Merrill Books, Inc., 1963.

Nystrom, Paul H. Economics of Retailing, vol. 1, 3rd ed. New York: The Ronald Press Co., 1930.

Oshima, Harry T. “The Growth of U.S. Factor Productivity: The Significance of New Technologies in the Early Decades of the Twentieth Century.” The Journal of Economic History 44 (1984): 161-70.

Parker, Randall and Paul Flacco. “Income Uncertainty and the Onset of the Great Depression.” Economic Inquiry 30 (1992): 154-171.

Parker, William N. “Agriculture.” In American Economic Growth: An Economist’s History of the United States, edited by Lance E. Davis, Richard A. Easterlin, William N. Parker, et. al. New York: Harper and Row, 1972.

Passer, Harold C. The Electrical Manufacturers, 1875-1900. Cambridge: Harvard University Press, 1953.

Peak, Hugh S., and Ellen F. Peak. Supermarket Merchandising and Management. Englewood Cliffs, NJ: Prentice-Hall, 1977.

Pilgrim, John. “The Upper Turning Point of 1920: A Reappraisal.” Explorations in Economic History 11 (1974): 271-98.

Rae, John B. Climb to Greatness: The American Aircraft Industry, 1920-1960. Cambridge: The M.I.T. Press, 1968.

Rae, John B. The American Automobile Industry. Boston: Twayne Publishers, 1984.

Rappoport, Peter and Eugene N. White. “Was the Crash of 1929 Expected?” American Economic Review 84 (1994): 271-281.

Rappoport, Peter and Eugene N. White. “Was There a Bubble in the 1929 Stock Market?” The Journal of Economic History 53 (1993): 549-574.

Resseguie, Harry E. “Alexander Turney Stewart and the Development of the Department Store, 1823-1876,” Business History Review 39 (1965): 301-22.

Rezneck, Samuel. “Mass Production and the Use of Energy.” In The Growth of the American Economy, 2d ed., edited by Harold F. Williamson. Englewood Cliffs, NJ: Prentice-Hall, 1951.

Rockwell, Llewellyn H., Jr., ed. The Gold Standard: An Austrian Perspective. Lexington, MA: Lexington Books, 1985.

Romer, Christina. “Spurious Volatility in Historical Unemployment Data.” The Journal of Political Economy 94 (1986): 1-37.

Romer, Christina. “New Estimates of Prewar Gross National Product and Unemployment.” Journal of Economic History 46 (1986): 341-352.

Romer, Christina. “World War I and the Postwar Depression: A Reinterpretation Based on Alternative Estimates of GNP.” Journal of Monetary Economics 22 (1988): 91-115.

Romer, Christina and Jeffrey A. Miron. “A New Monthly Index of Industrial Production, 1884-1940.” Journal of Economic History 50 (1990): 321-337.

Romer, Christina. “The Great Crash and the Onset of the Great Depression.” Quarterly Journal of Economics 105 (1990): 597-625.

Romer, Christina. “Remeasuring Business Cycles.” The Journal of Economic History 54 (1994): 573-609.

Roose, Kenneth D. “The Production Ceiling and the Turning Point of 1920.” American Economic Review 48 (1958): 348-56.

Rosen, Philip T. The Modern Stentors: Radio Broadcasters and the Federal Government, 1920-1934. Westport, CT: The Greenwood Press, 1980.

Rosen, Philip T. “Government, Business, and Technology in the 1920s: The Emergence of American Broadcasting.” In American Business History: Case Studies. Edited by Henry C. Dethloff and C. Joseph Pusateri. Arlington Heights, IL: Harlan Davidson, 1987.

Rothbard, Murray N. America’s Great Depression. Kansas City: Sheed and Ward, 1963.

Sampson, Roy J., and Martin T. Ferris. Domestic Transportation: Practice, Theory, and Policy, 4th ed. Boston: Houghton Mifflin Co., 1979.

Samuelson, Paul and Everett E. Hagen. After the War—1918-1920. Washington: National Resources Planning Board, 1943.

Santoni, Gary and Gerald P. Dwyer, Jr. “The Great Bull Markets, 1924-1929 and 1982-1987: Speculative Bubbles or Economic Fundamentals?” Federal Reserve Bank of St. Louis Review 69 (1987): 16-29.

Santoni, Gary, and Gerald P. Dwyer, Jr. “Bubbles vs. Fundamentals: New Evidence from the Great Bull Markets.” In Crises and Panics: The Lessons of History. Edited by Eugene N. White. Homewood, IL: Dow Jones/Irwin, 1990.

Scherer, Frederick M. and David Ross. Industrial Market Structure and Economic Performance, 3d ed. Boston: Houghton Mifflin, 1990.

Schlebecker, John T. Whereby We Thrive: A History of American Farming, 1607-1972. Ames, IA: The Iowa State University Press, 1975.

Shepherd, James. “The Development of New Wheat Varieties in the Pacific Northwest.” Agricultural History 54 (1980): 52-63.

Sirkin, Gerald. “The Stock Market of 1929 Revisited: A Note.” Business History Review 49 (Fall 1975): 233-41.

Smiley, Gene. The American Economy in the Twentieth Century. Cincinnati: South-Western Publishing Co., 1994.

Smiley, Gene. “New Estimates of Income Shares During the 1920s.” In Calvin Coolidge and the Coolidge Era: Essays on the History of the 1920s, edited by John Earl Haynes, 215-232. Washington, D.C.: Library of Congress, 1998.

Smiley, Gene. “A Note on New Estimates of the Distribution of Income in the 1920s.” The Journal of Economic History 60, no. 4 (2000): 1120-1128.

Smiley, Gene. Rethinking the Great Depression: A New View of Its Causes and Consequences. Chicago: Ivan R. Dee, 2002.

Smiley, Gene, and Richard H. Keehn. “Margin Purchases, Brokers’ Loans and the Bull Market of the Twenties.” Business and Economic History. 2d series. 17 (1988): 129-42.

Smiley, Gene and Richard H. Keehn. “Federal Personal Income Tax Policy in the 1920s.” The Journal of Economic History 55, no. 2 (1995): 285-303.

Sobel, Robert. The Entrepreneurs: Explorations Within the American Business Tradition. New York: Weybright and Talley, 1974.

Soule, George. Prosperity Decade: From War to Depression: 1917-1929. New York: Holt, Rinehart, and Winston, 1947.

Stein, Herbert. The Fiscal Revolution in America, revised ed. Washington, D.C.: AEI Press, 1990.

Stigler, George J. “Monopoly and Oligopoly by Merger.” American Economic Review, 40 (May 1950): 23-34.

Sumner, Scott. “The Role of the International Gold Standard in Commodity Price Deflation: Evidence from the 1929 Stock Market Crash.” Explorations in Economic History 29 (1992): 290-317.

Swanson, Joseph and Samuel Williamson. “Estimates of National Product and Income for the United States Economy, 1919-1941.” Explorations in Economic History 10, no. 1 (1972): 53-73.

Temin, Peter. “The Beginning of the Depression in Germany.” Economic History Review. 24 (May 1971): 240-48.

Temin, Peter. Did Monetary Forces Cause the Great Depression. New York: W. W. Norton, 1976.

Temin, Peter. The Fall of the Bell System. New York: Cambridge University Press, 1987.

Temin, Peter. Lessons from the Great Depression. Cambridge, MA: The M.I.T. Press, 1989.

Thomas, Gordon, and Max Morgan-Witts. The Day the Bubble Burst. Garden City, NY: Doubleday, 1979.

Ulman, Lloyd. “The Development of Trades and Labor Unions,” In American Economic History, edited by Seymour E. Harris, chapter 14. New York: McGraw-Hill Book Co., 1961.

Ulman, Lloyd. The Rise of the National Trade Union. Cambridge, MA: Harvard University Press, 1955.

U.S. Department of Commerce, Bureau of the Census. Historical Statistics of the United States: Colonial Times to 1970, 2 volumes. Washington, D.C.: USGPO, 1976.

Walsh, Margaret. Making Connections: The Long Distance Bus Industry in the U.S.A. Burlington, VT: Ashgate, 2000.

Wanniski, Jude. The Way the World Works. New York: Simon and Schuster, 1978.

Weiss, Leonard W. Case Studies in American Industry, 3d ed. New York: John Wiley & Sons, 1980.

Whaples, Robert. “Hours of Work in U.S. History.” EH.Net Encyclopedia, edited by Robert Whaples, August 15, 2001. URL: http://www.eh.net/encyclopedia/contents/whaples.work.hours.us.php

Whatley, Warren. “Southern Agrarian Labor Contracts as Impediments to Cotton Mechanization.” The Journal of Economic History 47 (1987): 45-70.

Wheelock, David C. and Subal C. Kumbhakar. “The Slack Banker Dances: Deposit Insurance and Risk-Taking in the Banking Collapse of the 1920s.” Explorations in Economic History 31 (1994): 357-375.

White, Eugene N. “The Stock Market Boom and Crash of 1929 Revisited.” The Journal of Economic Perspectives. 4 (Spring 1990): 67-83.

White, Eugene N., Ed. Crises and Panics: The Lessons of History. Homewood, IL: Dow Jones/Irwin, 1990.

White, Eugene N. “When the Ticker Ran Late: The Stock Market Boom and Crash of 1929.” In Crises and Panics: The Lessons of History, edited by Eugene N. White. Homewood, IL: Dow Jones/Irwin, 1990.

White, Eugene N. “Stock Market Bubbles? A Reply.” The Journal of Economic History 55 (1995): 655-665.

White, William J. “Economic History of Tractors in the United States.” EH.Net Encyclopedia, edited by Robert Whaples, August 15, 2001. URL: http://www.eh.net/encyclopedia/contents/white.tractors.history.us.php

Wicker, Elmus. “Federal Reserve Monetary Policy, 1922-1933: A Reinterpretation.” Journal of Political Economy 73 (1965): 325-43.

Wicker, Elmus. “A Reconsideration of Federal Reserve Policy During the 1920-1921 Depression.” The Journal of Economic History 26 (1966): 223-38.

Wicker, Elmus. Federal Reserve Monetary Policy, 1917-1933. New York: Random House, 1966.

Wigmore, Barrie A. The Crash and Its Aftermath. Westport, CT: Greenwood Press, 1985.

Williams, Raburn McFetridge. The Politics of Boom and Bust in Twentieth-Century America. Minneapolis/St. Paul: West Publishing Co., 1994.

Williamson, Harold F., et al. The American Petroleum Industry: The Age of Energy, 1899-1959. Evanston, IL: Northwestern University Press, 1963.

Wilson, Thomas. Fluctuations in Income and Employment, 3d ed. New York: Pitman Publishing, 1948.

Wilson, Jack W., Richard E. Sylla, and Charles P. Jones. “Financial Market Panics and Volatility in the Long Run, 1830-1988.” In Crises and Panics: The Lessons of History, edited by Eugene N. White. Homewood, IL: Dow Jones/Irwin, 1990.

Winkler, John Kennedy. Five and Ten: The Fabulous Life of F. W. Woolworth. New York: R. M. McBride and Co., 1940.

Wood, Charles. “Science and Politics in the War on Cattle Diseases: The Kansas Experience, 1900-1940.” Agricultural History 54 (1980): 82-92.

Wright, Gavin. Old South, New South: Revolutions in the Southern Economy Since the Civil War. New York: Basic Books, 1986.

Wright, Gavin. “The Origins of American Industrial Success, 1879-1940.” The American Economic Review 80 (1990): 651-668.

Citation: Smiley, Gene. “US Economy in the 1920s”. EH.Net Encyclopedia, edited by Robert Whaples. June 29, 2004. URL http://eh.net/encyclopedia/the-u-s-economy-in-the-1920s/

The Dutch Economy in the Golden Age (16th – 17th Centuries)

Donald J. Harreld, Brigham Young University

In just over one hundred years, the provinces of the Northern Netherlands went from relative obscurity as the poor cousins of the industrious and heavily urbanized Southern Netherlands provinces of Flanders and Brabant to the pinnacle of European commercial success. Taking advantage of a favorable agricultural base, the Dutch achieved success in the fishing industry and the Baltic and North Sea carrying trade during the fifteenth and sixteenth centuries before establishing a far-flung maritime empire in the seventeenth century.

The Economy of the Netherlands up to the Sixteenth Century

In many respects the seventeenth-century Dutch Republic inherited the economic successes of the Burgundian and Habsburg Netherlands. For centuries, Flanders and to a lesser extent Brabant had been at the forefront of the medieval European economy. An indigenous cloth industry was present throughout all areas of Europe in the early medieval period, but Flanders was the first to develop the industry with great intensity. A tradition of cloth manufacture in the Low Countries existed from antiquity when the Celts and then the Franks continued an active textile industry learned from the Romans.

As demand grew early textile production moved from its rural origins to the cities and had become, by the twelfth century, an essentially urban industry. Native wool could not keep up with demand, and the Flemings imported English wool in great quantities. The resulting high quality product was much in demand all over Europe, from Novgorod to the Mediterranean. Brabant also rose to an important position in textile industry, but only about a century after Flanders. By the thirteenth century the number of people engaged in some aspect of the textile industry in the Southern Netherlands had become more than the total engaged in all other crafts. It is possible that this emphasis on cloth manufacture was the reason that the Flemish towns ignored the emerging maritime shipping industry which was eventually dominated by others, first the German Hanseatic League, and later Holland and Zeeland.

By the end of the fifteenth century Antwerp in Brabant had become the commercial capital of the Low Countries as foreign merchants went to the city in great numbers in search of the high-value products offered at the city’s fairs. But the traditional cloths manufactured in Flanders had lost their allure for most European markets, particularly as the English began exporting high quality cloths rather than the raw materials the Flemish textile industry depended on. Many textile producers turned to the lighter weight and cheaper “new draperies.” Despite protectionist measures instituted in the mid-fifteenth century, English cloth found an outlet in Antwerp’s burgeoning markets. By the early years of the sixteenth century the Portuguese began using Antwerp as an outlet for their Asian pepper and spice imports, and the Germans continued to bring their metal products (copper and silver) there. For almost a hundred years Antwerp remained the commercial capital of northern Europe, until the religious and political events of the 1560s and 1570s intervened and the Dutch Revolt against Spanish rule toppled the commercial dominance of Antwerp and the southern provinces. Within just a few years of the Fall of Antwerp (1585), scores of merchants and mostly Calvinist craftsmen fled the south for the relative security of the Northern Netherlands.

The exodus from the south certainly added to the already growing population of the north. However, much like Flanders and Brabant, the northern provinces of Holland and Zeeland were already populous and heavily urbanized. The population of these maritime provinces had been growing steadily throughout the sixteenth century, perhaps tripling from the first years of the sixteenth century to about 1650. The inland provinces grew much more slowly during the same period. Not until the eighteenth century, when the Netherlands as a whole faced declining fortunes, would the inland provinces begin to match the growth of the coastal core of the country.

Dutch Agriculture

During the fifteenth century, and most of the sixteenth century, the Northern Netherlands provinces were predominantly rural compared to the urbanized southern provinces. Agriculture and fishing formed the basis for the Dutch economy in the fifteenth and sixteenth centuries. One of the characteristics of Dutch agriculture during this period was its emphasis on intensive animal husbandry. Dutch cattle were exceptionally well cared for, and dairy produce formed a significant segment of the agricultural sector. During the seventeenth century, as the Dutch urban population saw dramatic growth, many farmers also turned to market gardening to supply the cities with vegetables.

Some of the impetus for animal production came from the trade in slaughter cattle from Denmark and Northern Germany. Holland was an ideal area for cattle feeding and fattening before eventual slaughter and export to the cities of the Southern provinces. The trade in slaughter cattle expanded from about 1500 to 1660, but protectionist measures on the part of Dutch authorities who wanted to encourage the fattening of home-bred cattle ensured a contraction of the international cattle trade between 1660 and 1750.

Although agriculture made up the largest segment of the Dutch economy, cereal production in the Netherlands could not keep up with demand, particularly by the seventeenth century as migration from the southern provinces contributed to population increases. The provinces of the Low Countries traditionally had depended on imported grain from the south (France and the Walloon provinces), and when crop failures interrupted the flow of grain from the south, the Dutch began to import grain from the Baltic. Baltic grain imports experienced sustained growth from about the middle of the sixteenth century to roughly 1650, when depression and stagnation characterized the grain trade into the eighteenth century.

Indeed, the Baltic grain trade (see below), a major source of employment for the Dutch, not only in maritime transport but in handling and storage as well, was characterized as the “mother trade.” In her recent book on the Baltic grain trade, Milja van Tielhof defined “mother trade” as the oldest and most substantial trade with respect to ships, sailors and commodities for the Northern provinces. Over the long term, the Baltic grain trade gave rise to shipping and trade on other routes as well as to manufacturing industries.

Dutch Fishing

Along with agriculture, the Dutch fishing industry formed part of the economic base of the northern Netherlands. Like the Baltic grain trade, it also contributed to the rise of the Dutch shipping industry.

The backbone of the fishing industry was the North Sea herring fishery, which was quite advanced and included a form of “factory” ship called the herring bus. The herring bus was developed in the fifteenth century in order to allow the herring catch to be processed with salt at sea. This permitted the herring ship to remain at sea longer and increased the range of the herring fishery. Herring was an important export product for the Netherlands, particularly to inland areas, but also to the Baltic, offsetting Baltic grain imports.

The herring fishery reached its zenith in the first half of the seventeenth century. Estimates put the size of the herring fleet at roughly 500 busses and the catch at about 20,000 to 25,000 lasts (roughly 33,000 metric tons) on average each year in the first decades of the seventeenth century. The herring catch as well as the number of busses began to decline in the second half of the seventeenth century, collapsing by about the mid-eighteenth century when the catch amounted to only about 6000 lasts. This decline was likely due to competition resulting from a reinvigoration of the Baltic fishing industry that succeeded in driving prices down, as well as competition within the North Sea by the Scottish fishing industry.

The Dutch Textile Industry

The heartland of textile manufacturing had been Flanders and Brabant until the onset of the Dutch Revolt around 1568. Years of warfare continued to devastate the already beaten-down Flemish cloth industry. Even the cloth-producing towns of the Northern Netherlands that had been focusing on producing the “new draperies” saw their output decline as a result of wartime interruptions. But textiles remained the most important industry for the Dutch economy.

Despite the blow it suffered during the Dutch revolt, Leiden’s textile industry, for instance, rebounded in the early seventeenth century – thanks to the influx of textile workers from the Southern Netherlands who emigrated there in the face of religious persecution. But by the 1630s Leiden had abandoned the heavy traditional wool cloths in favor of a lighter traditional woolen (laken) as well as a variety of other textiles such as says, fustians, and camlets. Total textile production increased from 50,000 or 60,000 pieces per year in the first few years of the seventeenth century to as much as 130,000 pieces per year during the 1660s. Leiden’s wool cloth industry probably reached peak production by 1670. The city’s textile industry was successful because it found export markets for its inexpensive cloths in the Mediterranean, much to the detriment of Italian cloth producers.

Next to Lyons, Leiden may have been Europe’s largest industrial city at the end of the seventeenth century. Production was carried out through the “putting out” system, whereby weavers with their own looms, often with other dependent weavers working for them, obtained imported raw materials from merchants who paid the weavers by the piece for their work (the merchant retained ownership of the raw materials throughout the process). By the end of the seventeenth century foreign competition threatened the Dutch textile industry. Production in many of the new draperies (says, for example) decreased considerably throughout the eighteenth century; profits suffered as prices declined in all but the most expensive textiles. This left the production of traditional woolens to drive what was left of Leiden’s textile industry in the eighteenth century.

Although Leiden certainly led the Netherlands in the production of wool cloth, it was not the only textile producing city in the United Provinces. Amsterdam, Utrecht, Delft and Haarlem, among others, had vibrant textile industries. Haarlem, for example, was home to an important linen industry during the first half of the seventeenth century. Like Leiden’s cloth industry, Haarlem’s linen industry benefited from experienced linen weavers who migrated from the Southern Netherlands during the Dutch Revolt. Haarlem’s hold on linen production, however, was due more to its success in linen bleaching and finishing. Not only was locally produced linen finished in Haarlem, but linen merchants from other areas of Europe sent their products to Haarlem for bleaching and finishing. When linen production moved to more rural areas in the second half of the seventeenth century, as producers sought to decrease costs, Haarlem’s industry went into decline.

Other Dutch Industries

Industries also developed as a result of overseas colonial trade, in particular Amsterdam’s sugar refining industry. During the sixteenth century, Antwerp had been Europe’s most important sugar refining city, a title it inherited from Venice once the Atlantic sugar islands began to surpass Mediterranean sugar production. Once Antwerp fell to Spanish troops during the Revolt, however, Amsterdam replaced it as Europe’s dominant sugar refiner. The number of sugar refineries in Amsterdam increased from about 3 around 1605 to about 50 by 1662, thanks in no small part to Portuguese investment. Dutch merchants purchased huge amounts of sugar from both the French and the English islands in the West Indies, along with a great deal of tobacco. Tobacco processing became an important Amsterdam industry in the seventeenth century employing large numbers of workers and leading to attempts to develop domestic tobacco cultivation.

With the exception of some of the “colonial” industries (sugar, for instance), Dutch industry experienced a period of stagnation after the 1660s and eventual decline beginning around the turn of the eighteenth century. It would seem that as far as industrial production is concerned, the Dutch Golden Age lasted from the 1580s until about 1670. This period was followed by roughly one hundred years of declining industrial production. De Vries and van der Woude concluded that Dutch industry experienced explosive growth after the 1580s because of the migration of skilled labor and merchant capital from the southern Netherlands at roughly the time Antwerp fell to the Spanish, and because of the relative advantage that continued warfare in the south gave to the Northern Provinces. After the 1660s most Dutch industries experienced either steady or steep decline as many Dutch industries moved from the cities into the countryside, while some (particularly the colonial industries) remained successful well into the eighteenth century.

Dutch Shipping and Overseas Commerce

Dutch shipping began to emerge as a significant sector during the fifteenth century. Probably because merchants from the Southern Netherlands took little part in seaborne transport themselves, the towns of Zeeland and Holland began to serve the shipping needs of the commercial towns of Flanders and Brabant (particularly Antwerp). The Dutch, who were already active in the North Sea as a result of the herring fishery, began to compete with the German Hanseatic League for Baltic markets by exporting their herring catches, salt, wine, and cloth in exchange for Baltic grain.

The Grain Trade

Baltic grain played an essential role for the rapidly expanding markets in western and southern Europe. By the beginning of the sixteenth century urban populations in the Low Countries had increased, fueling the market for imported grain. Grain and other Baltic products such as tar, hemp, flax, and wood were destined not only for the Low Countries, but also for England and for Spain and Portugal via Amsterdam, the port that had succeeded in surpassing Lübeck and other Hanseatic towns as the primary transshipment point for Baltic goods. The grain trade sparked the development of a variety of industries. In addition to the shipbuilding industry, which was an obvious outgrowth of overseas trade relationships, the Dutch manufactured floor tiles, roof tiles, and bricks for export to the Baltic; the grain ships carried them as ballast on return voyages to the Baltic.

The importance of the Baltic markets to Amsterdam, and to Dutch commerce in general, can be illustrated by recalling that when the Danish closed the Sound to Dutch ships in 1542, the Dutch faced financial ruin. But by the mid-sixteenth century, the Dutch had developed such a strong presence in the Baltic that they were able to exact transit rights from Denmark (Peace of Speyer, 1544) allowing them freer access to the Baltic via Danish waters. Despite the upheaval caused by the Dutch Revolt and the commercial crisis that hit Antwerp in the last quarter of the sixteenth century, the Baltic grain trade remained robust until the last years of the seventeenth century. That the Dutch referred to the Baltic trade as their “mother trade” is not surprising given the importance Baltic markets continued to hold for Dutch commerce throughout the Golden Age. Unfortunately for Dutch commerce, Europe’s population began to decline somewhat at the close of the seventeenth century and remained depressed for several decades. Increased grain production in Western Europe and the availability of non-Baltic substitutes (American and Italian rice, for example) further decreased demand for Baltic grain, resulting in a downturn in Amsterdam’s grain market.

Expansion into African, American and Asian Markets – “World Primacy”

Building on the early successes of their Baltic trade, Dutch shippers expanded their sphere of influence east into Russia and south into the Mediterranean and the Levantine markets. By the turn of the seventeenth century, Dutch merchants had their eyes on the American and Asian markets that were dominated by Iberian merchants. The ability of Dutch shippers to compete effectively with entrenched merchants, like the Hanseatic League in the Baltic or the Portuguese in Asia, stemmed from their cost-cutting strategies (what de Vries and van der Woude call “cost advantages and institutional efficiencies,” p. 374). Not encumbered by the costs and protective restrictions of most merchant groups of the sixteenth century, the Dutch trimmed their costs enough to undercut the competition, and eventually established what Jonathan Israel has called “world primacy.”

Before Dutch shippers could even attempt to break into the Asian markets, they first needed to expand their presence in the Atlantic. This was left mostly to the émigré merchants from Antwerp, who had relocated to Zeeland following the Revolt. These merchants set up the so-called Guinea trade with West Africa, and initiated Dutch involvement in the Western Hemisphere. Dutch merchants involved in the Guinea trade ignored the slave trade, which was firmly in the hands of the Portuguese, in favor of the rich trade in gold, ivory, and sugar from São Tomé. Trade with West Africa grew slowly, but competition was stiff. By 1599, the various Guinea companies had agreed to the formation of a cartel to regulate trade. Continued competition from a slew of new companies, however, ensured that the cartel would be only partially effective until the organization of the Dutch West India Company in 1621, which also held monopoly rights in the West Africa trade.

The Dutch at first focused their trade with the Americas on the Caribbean. By the mid-1590s only a few Dutch ships each year were making the voyage across the Atlantic. When the Spanish instituted an embargo against the Dutch in 1598, shortages in products traditionally obtained in Iberia (like salt) became common. Dutch shippers seized the chance to find new sources for products that had been supplied by the Spanish, and soon fleets of Dutch ships sailed to the Americas. The Spanish and Portuguese had a much larger presence in the Americas than the Dutch could mount, despite the large number of vessels the Dutch sent to the area. Dutch strategy was to avoid Iberian strongholds while penetrating markets where the products they desired could be found. For the most part, this strategy meant focusing on Venezuela, Guyana, and Brazil. Indeed, by the turn of the seventeenth century, the Dutch had established forts on the coasts of Guyana and Brazil.

While competition between rival companies from the towns of Zeeland marked Dutch trade with the Americas in the first years of the seventeenth century, by the time the West India Company finally received its charter in 1621 troubles with Spain once again threatened to disrupt trade. Funding for the new joint-stock company came slowly, and oddly enough came mostly from inland towns like Leiden rather than coastal towns. The West India Company was hit with setbacks in the Americas from the very start. The Portuguese began to drive the Dutch out of Brazil in 1624, and by 1625 the Dutch were losing their position in the Caribbean as well. Dutch shippers in the Americas soon found raiding (directed at the Spanish and Portuguese) to be their most profitable activity until the Company was able to establish forts in Brazil again in the 1630s and begin sugar cultivation. Sugar remained the most lucrative activity for the Dutch in Brazil, and once the revolt of Portuguese Catholic planters against the Dutch plantation owners broke out in the late 1640s, the fortunes of the Dutch declined steadily.

The Dutch faced the prospect of stiff Portuguese competition in Asia as well. But, breaking into the lucrative Asian markets was not just a simple matter of undercutting less efficient Portuguese shippers. The Portuguese closely guarded the route around Africa. Not until roughly one hundred years after the first Portuguese voyage to Asia were the Dutch in a position to mount their own expedition. Thanks to the travelogue of Jan Huyghen van Linschoten, which was published in 1596, the Dutch gained the information they needed to make the voyage. Linschoten had been in the service of the Bishop of Goa, and kept excellent records of the voyage and his observations in Asia.

The United East India Company (VOC)

The first few Dutch voyages to Asia were not particularly successful. These early enterprises managed to make only enough to cover the costs of the voyage, but by 1600 dozens of Dutch merchant ships made the trip. This intense competition among various Dutch merchants had a destabilizing effect on prices, driving the government to insist on consolidation in order to avoid commercial ruin. The United East India Company (usually referred to by its Dutch initials, VOC) received a charter from the States General in 1602 conferring upon it monopoly trading rights in Asia. This joint-stock company attracted roughly 6.5 million florins in initial capitalization from over 1,800 investors, most of whom were merchants. Management of the company was vested in 17 directors (Heren XVII) chosen from among the largest shareholders.

In practice, the VOC became virtually a “country” unto itself outside of Europe, particularly after about 1620 when the company’s governor-general in Asia, Jan Pieterszoon Coen, founded Batavia (the company factory) on Java. While Coen and later governors-general set about expanding the territorial and political reach of the VOC in Asia, the Heren XVII were most concerned about profits, which they repeatedly reinvested in the company much to the chagrin of investors. In Asia, the strategy of the VOC was to insert itself into the intra-Asian trade (much like the Portuguese had done in the sixteenth century) in order to amass enough capital to pay for the spices shipped back to the Netherlands. This often meant displacing the Portuguese by waging war in Asia, while trying to maintain peaceful relations within Europe.

Over the long term, the VOC was very profitable during the seventeenth century despite the company’s reluctance to pay cash dividends in the first few decades (the company paid dividends in kind until about 1644). As the English and French began to institute mercantilist strategies (for instance, the Navigation Acts of 1651 and 1660 in England, and import restrictions and high tariffs in the case of France) Dutch dominance in foreign trade came under attack. Rather than experience a decline like domestic industry did at the end of the seventeenth century, the Dutch Asia trade continued to ship goods at steady volumes well into the eighteenth century. Dutch dominance, however, was met with stiff competition by rival India companies as the Asia trade grew. As the eighteenth century wore on, the VOC’s share of the Asia trade declined significantly compared to its rivals, the most important of which was the English East India Company.

Dutch Finance

The last sector that we need to highlight is finance, perhaps the most important sector for the development of the early modern Dutch economy. The most visible manifestation of Dutch capitalism was the exchange bank founded in Amsterdam in 1609, only two years after the city council approved the construction of a bourse (additional exchange banks were founded in other Dutch commercial cities). The activities of the bank were limited to exchange and deposit banking. A lending bank, founded in Amsterdam in 1614, rounded out the financial services in the commercial capital of the Netherlands.

The ability to manage the wealth generated by trade and industry (accumulated capital) in new ways was one of the hallmarks of the economy during the Golden Age. As early as the fourteenth century, Italian merchants had been experimenting with ways to decrease the use of cash in long-distance trade. The resulting instrument was the bill of exchange, developed as a way for a seller to extend credit to a buyer. The bill of exchange required the debtor to pay the debt at a specified place and time. But the creditor rarely held on to the bill of exchange until maturity, preferring to sell it or otherwise use it to pay off debts. These bills of exchange were not routinely used in commerce in the Low Countries until the sixteenth century, when Antwerp was still the dominant commercial city in the region. In Antwerp the bill of exchange could be assigned to another party, and eventually became a negotiable instrument with the practice of discounting the bill.

The flexible use of bills of exchange moved to the Northern Netherlands with the large numbers of Antwerp merchants who brought their commercial practices with them. In an effort to standardize the practices surrounding bills of exchange, the Amsterdam government restricted payment of bills of exchange to the new exchange bank. The bank was wildly popular with merchants: deposits increased from just less than one million guilders in 1611 to over sixteen million by 1700. Amsterdam’s exchange bank flourished because of its ability to handle deposits and transfers, and to settle international debts.

By the second half of the seventeenth century many wealthy merchant families had turned away from foreign trade and began engaging in speculative activities on a much larger scale. They traded in commodity values (futures), shares in joint-stock companies, and dabbled in insurance and currency exchanges to name only a few of the most important ventures.

Conclusion

Building on its fifteenth- and sixteenth-century successes in agricultural productivity, and in North Sea and Baltic shipping, the Northern Netherlands inherited the economic legacy of the southern provinces as the Revolt tore the Low Countries apart. The Dutch Golden Age lasted from roughly 1580, when the Dutch proved themselves successful in their fight with the Spanish, to about 1670, when the Republic’s economy experienced a downturn. Economic growth was very fast until about 1620, when it slowed, but the economy continued to grow steadily until the end of the Golden Age. The last decades of the seventeenth century were marked by declining production and loss of market dominance overseas.

Bibliography

Attman, Artur. The Struggle for Baltic Markets: Powers in Conflict, 1558-1618. Göteborg: Vetenskaps- och vitterhets-samhället, 1979.

Barbour, Violet. Capitalism in Amsterdam in the Seventeenth Century. Ann Arbor: University of Michigan Press, 1963.

Bulut, M. “Rethinking the Dutch Economy and Trade in the Early Modern Period, 1570-1680.” Journal of European Economic History 32 (2003): 391-424.

Christensen, Aksel. Dutch Trade to the Baltic about 1600. Copenhagen: Einar Munksgaard, 1941.

De Vries, Jan and Ad van der Woude, The First Modern Economy: Success, Failure, and Perseverance of the Dutch Economy, 1500-1815. Cambridge: Cambridge University Press, 1997.

De Vries, Jan, The Economy of Europe in an Age of Crisis, 1600-1750. Cambridge: Cambridge University Press, 1976.

Gelderblom, Oscar. Zuid-Nederlandse kooplieden en de opkomst van de Amsterdamse stapelmarkt (1578-1630). Hilversum: Uitgeverij Verloren, 2000.

Gijsbers, W. Kapitale Ossen: De internationale handel in slachtvee in Noordwest-Europa (1300-1750). Hilversum: Uitgeverij Verloren, 1999.

Haley, K.H.D. The Dutch in the Seventeenth Century. New York: Harcourt, Brace and Jovanovich, 1972.

Harreld, Donald J. “Atlantic Sugar and Antwerp’s Trade with Germany in the Sixteenth Century.” Journal of Early Modern History 7 (2003): 148-163.

Heers, W. G., et al., editors. From Dunkirk to Danzig: Shipping and Trade in the North Sea and the Baltic, 1350-1850. Hilversum: Verloren, 1988.

Israel, Jonathan I. “Spanish Wool Exports and the European Economy, 1610-1640.” Economic History Review 33 (1980): 193-211.

Israel, Jonathan I. Dutch Primacy in World Trade, 1585-1740. Oxford: Clarendon Press, 1989.

O’Brien, Patrick, et al., editors. Urban Achievement in Early Modern Europe: Golden Ages in Antwerp, Amsterdam and London. Cambridge: Cambridge University Press, 2001.

Pirenne, Henri. “The Place of the Netherlands in the Economic History of Medieval Europe.” Economic History Review 2 (1929): 20-40.

Price, J.L. Dutch Society, 1588-1713. London: Longman, 2000.

Tracy, James D. “Herring Wars: The Habsburg Netherlands and the Struggle for Control of the North Sea, ca. 1520-1560.” Sixteenth Century Journal 24 no. 2 (1993): 249-272.

Unger, Richard W. “Dutch Herring, Technology, and International Trade in the Seventeenth Century.” Journal of Economic History 40 (1980): 253-280.

Van Tielhof, Milja. The ‘Mother of all Trades’: The Baltic Grain Trade in Amsterdam from the Late Sixteenth to the Early Nineteenth Century. Leiden: Brill, 2002.

Wilson, Charles. “Cloth Production and International Competition in the Seventeenth Century.” Economic History Review 13 (1960): 209-221.

Citation: Harreld, Donald. “Dutch Economy in the ‘Golden Age’ (16th-17th Centuries).” EH.Net Encyclopedia, edited by Robert Whaples. August 12, 2004. URL http://eh.net/encyclopedia/the-dutch-economy-in-the-golden-age-16th-17th-centuries/

The Dust Bowl

Geoff Cunfer, Southwest Minnesota State University

What Was “The Dust Bowl”?

The phrase “Dust Bowl” holds a powerful place in the American imagination. It connotes a confusing mixture of concepts. Is the Dust Bowl a place? Was it an event? An era? American popular culture employs the term in all three ways. Ask most people about the Dust Bowl and they can place it in the Middle West, though in the imagination it wanders widely, from the Rocky Mountains, through the Great Plains, to Illinois and Indiana. Many people can situate the event in the 1930s. Ask what happened then, and a variety of stories emerge. A combination of severe drought and economic depression created destitution among farmers. Millions of desperate people took to the roads, seeking relief in California where they became exploited itinerant farm laborers. Farmers plowed up a pristine wilderness for profit, and suffered ecological collapse because of their recklessness. Dust Bowl stories, like definitions of the Dust Bowl, are legion, and now approach the mythological.

The words also evoke powerful graphic images taken from art and literature. Consider these lines from the opening chapter of John Steinbeck’s The Grapes of Wrath (1939):

“Now the wind grew strong and hard and it worked at the rain crust in the corn fields. Little by little the sky was darkened by the mixing dust, and carried away. The wind grew stronger. The rain crust broke and the dust lifted up out of the fields and drove gray plumes into the air like sluggish smoke. The corn threshed the wind and made a dry, rushing sound. The finest dust did not settle back to earth now, but disappeared into the darkening sky. … The people came out of their houses and smelled the hot stinging air and covered their noses from it. And the children came out of the houses, but they did not run or shout as they would have done after a rain. Men stood by their fences and looked at the ruined corn, drying fast now, only a little green showing through the film of dust. The men were silent and they did not move often. And the women came out of the houses to stand beside their men – to feel whether this time the men would break.”

When Americans hear the words “Dust Bowl,” grainy black and white photographs of devastated landscapes and destitute people leap to mind. Dorothea Lange and Arthur Rothstein classics bring the Dust Bowl vividly to life in our imaginations (Figures [1] [2] [3] [4]). For the musically inclined, Woody Guthrie’s Dust Bowl ballads define the event with evocative lyrics such as those in “The Great Dust Storm” (Figure 5). Some of America’s most memorable art – literature, photography, music – emerged from the Dust Bowl and that art helped to define the event and build the myth in American popular culture.

The Dust Bowl was an event defined by artists and by government bureaucrats. It has become part of American mythology, an episode in the nation’s progression from the Pilgrims to Lexington and Concord, through Civil War and frontier settlement, to industrial modernization, Depression, and Dust Bowl. Many of the great themes of American history are tied up in the Dust Bowl story: agricultural settlement and frontier struggle; industrial mechanization with the arrival of tractors; the migration from farm to city, the transformation from rural to urban. Add the Great Depression and the rise of a powerful federal government, and we have covered many of the themes of a standard U.S. history survey course.

Despite the multiple uses of the phrase “Dust Bowl,” it was an event that occurred in a specific place and time. The Dust Bowl was a coincidence of drought, severe wind erosion, and economic depression that occurred on the Southern and Central Great Plains during the 1930s. The drought – the longest and deepest in over a century of systematic meteorological observation – began in 1933 and continued through 1940. In 1941 rain poured down on the region, dust storms ceased, crops thrived, economic prosperity returned, and the Dust Bowl was over. But for those eight years crops failed, sandy soils blew and drifted over failed croplands, and rural people, unable to meet cash obligations, suffered through tax delinquency, farm foreclosure, business failure, and out-migration. The Dust Bowl was defined by a combination of:

  • extended severe drought and unusually high temperatures
  • episodic regional dust storms and routine localized wind erosion
  • agricultural failure, including both cropland and livestock operations
  • the collapse of the rural economy, affecting farmers, rural businesses, and local governments
  • an aggressive reform movement by the federal government
  • migration from rural to urban areas and out of the region

The Dust Bowl on the Great Plains coincided with the Great Depression. Though few plainsmen suffered directly from the 1929 stock market crash, they were too intimately connected to national and world markets to be immune from economic repercussions. The farm recession had begun in the 1920s; after the 1918 Armistice transformed Europe from an importer to an exporter of agricultural products, American farmers again faced their constant nemesis: production so high that prices were pushed downward. Farmers grew more cotton, wheat, and corn than the market could consume, and prices fell, fell more, and then hit rock bottom by the early 1930s. Cotton, one of the staple crops of the southern plains, for example, sold for 36 cents per pound in 1919, dropped to 18 cents in 1928, then collapsed to a dismal 6 cents per pound in 1931. One irony of the Dust Bowl is that the world could not really buy all of the crops Great Plains farmers produced. Even the severe drought and crop failures of the 1930s had little impact on the flood of farm commodities inundating the world market.

Routine Dust Storms on the Southern and Central Plains

The location of the drought and the dust storms shifted from place to place between 1934 and 1940 (Figure 6). The core of the Dust Bowl was in the Texas and Oklahoma panhandles, southwestern Kansas and southeastern Colorado. The drought began on the Great Plains, from the Dakotas through Texas and New Mexico, in 1931. The following year was wetter, but 1933 and 1934 set low rainfall records across the plains. In some places it did not rain at all. Others quickly accumulated a deep deficit. Figure 7 shows percent difference from average rainfall over five-year periods, with the location of the shifting Dust Bowl overlaid on top. Only a handful of counties (mapped in blue) had more rain than average between 1932 and 1940. And few counties fall into the 0 to -10 percent range. Most counties were 10 percent drier than average, or more, and more than eighty counties were at least 20 percent drier. Scientists now believe that the 1930s drought coincided with a severe La Niña event in the Pacific Ocean. Cool sea surface temperatures reduced the amount of moisture entering the jet stream and directed it south of the continental U.S. The drought was deep, extensive, and persisted for more than a decade.

Whenever there is drought on the southern and central plains, dust blows. The flat topography and continental climate mean that winds are routinely high. When soil moisture declines, plant cover, whether native plants or crops, diminishes in tandem. Normally dry conditions mean that native plants typically cover less than 60 percent of the ground surface, leaving the other 40+ percent in bare, exposed soils. During the driest conditions native prairie vegetation sometimes covers less than 20 percent of the ground surface, exposing 80 percent or more of the soil to strong prairie winds. Failed crop fields are completely bare of vegetation. In these circumstances soil blows. Local wind erosion can drift soil from one field into ridges and ripples in a neighboring field (Figure 8). Stronger regional dust storms can move dirt many miles before it drifts down along fence lines and around buildings (Figure 9). In rare instances very large dust storms carry soils high into the air where they can travel for many hundreds of miles. These “black blizzards” are the most spectacular and memorable of dust storms, but happen only infrequently (Figure 10).

When wind erosion and dust storms began in the 1930s, experienced plains residents hardly welcomed the development, but neither did it surprise them. Dust storms were an occasional spring occurrence from Texas and New Mexico through Kansas and Colorado. They did not happen every year, but often enough to be treated casually. This series of excerpts from the Salina, Kansas, Journal and Herald in 1879 indicates that dust storms were a routine part of plains life in dry years:

“For the past few days the gentle winds have enveloped the city with dust decorations. And some of this time it has been intensely hot. Imagine the pleasantness of the situation.”

“During the past few days we have had several exhibitions of what dust can do when propelled by a gale. We had the disagreeable March winds, and saw with ample disgust the evolutions and gyrations of the dust. We have had enough of it, but will undoubtedly get much more of the same kind during this very disagreeable month.”

“Real estate moved considerably this week.”

“Another ‘hardest’ blow ever seen in Kansas … Salina was tantalized with a small sprinkle of rain Thursday afternoon. The wind and dust soon resumed full sway.”

“People have just got through digging from the pores of the skin the dirt driven there by the furious dust storms which for several days since our last issue have been lifting this county ‘clean off its toes.’ Even sinners have stood some chance of being translated with such favoring gales.”

“The wind which held high carnival in this section last Thursday, filled the air with such clouds of dust that darkness of the ‘consistency of twilight’ prevailed. Buildings across the street could not be distinguished. The title of all land about for a while was not worth a cotton hat – it was so ‘unsettled.’ It was of the nature of personal property, because it was not a ‘fixture’ and very moveable. The air was so filled with dust as to be stifling even within houses.”

The Salina newspapers reported dust storms many springs through the late nineteenth century. An item in the Journal in 1885 epitomizes the local attitude: “When the March winds commenced raising dust Monday, the average citizen calmly smiled and whispered ‘so natural!'”

What Made the 1930s Different?

Dust storms were not new to the region in the 1930s, but a number of demographic and cultural factors were new. First there were a lot more people living in the region in the 1930s than there had been in the 1880s. The population of the Great Plains – 450 counties stretching from Texas and New Mexico to the Dakotas and Montana – stood at only 800,000 in 1880; it was seven times that, at 5.6 million in 1930. The dust storms affected many more people than they had ever done before. And many of those people were relative newcomers, having only arrived in recent years. They had no personal or family memory of life in the plains, and many interpreted the arrival of episodic dust storms as an entirely new phenomenon. An example is the reminiscence by Minnie Zeller Doehring, written in 1981. Having moved with her family to western Kansas in 1906, at age 7, she reported “I remember the first Dirt storm in Western Kansas. I think it was about 1911. And a drouth that year followed by a severe winter.” Neither she nor her family had experienced any of the nineteenth century dust storms reported in local newspapers, so when one arrived during a dry spring five years after they arrived, it seemed like a brand new development.

Second, this drought and sequence of dust storms coincided with an international economic depression, the worst in two centuries of American history. The financial stresses and personal misery of the Depression blended seamlessly into the environmental disasters of drought, crop failure, farm loss, and dust. It was difficult to assign blame. Were farmers failing because of the economic crisis? Bank failures? Landlords squeezing tenants? Drought? Dust storms? In the midst of these concurrent crises emerged an activist and newly powerful federal government. Franklin Roosevelt’s New Deal roared into Washington in 1933 with a landslide mandate from voters to fix all of the ills plaguing the nation: depression, bank failures, unemployment, agricultural overproduction, underconsumption; the list went on and on. Rural poverty, agricultural land use, soil erosion, and dust storms were quickly added to that list of ills to be fixed.

The drought and dust storms were certainly hard on farmers. Crop failure was widespread and repeated. In 1935, 46.6 million acres of crops failed on the Great Plains, with over 130 counties losing more than half their planted acreage. Many farmers lived on the edge of financial failure. In debt for land, tractor, automobile, and even for last year’s seed, one or two years with reduced income often meant bankruptcy. Tax delinquency became a serious problem throughout the plains. As land owners fell behind on their local property tax payments, county governments grew desperate. Many counties had delinquency rates over 40 percent for several consecutive years, and were faced with laying off teachers, police, and other employees. A few counties considered closing county government altogether and merging with neighboring counties. Their only alternative was to foreclose on now nearly worthless farms, which they could neither rent nor sell. Many families behind on mortgage payments and taxes simply packed up and left without notice. The crisis was not restricted to farmers, bankers, and county employees. Throughout the plains sales of tractors, automobiles, and fertilizer declined in the early 1930s, affecting small town merchants across the board.

Consider the example of William and Sallie DeLoach, typical southern plains farmers who moved from farm to farm through the early twentieth century, repeatedly trying to buy land and repeatedly losing it to the bank in the face of drought or low crop prices. After an earlier failed attempt to buy land, the family invested in a 177-acre cotton farm in Lamb County, Texas, in 1924, paying 30 dollars per acre. A month later they passed up a chance to sell it for 35 dollars an acre. Within three months of the purchase, late summer rains failed to arrive, the cotton crop bloomed late, and the first freeze of winter killed it. Unable to make the upcoming mortgage payment, the DeLoaches forfeited their land and the 200 dollars they had already paid toward it. One bad season meant default. Through the rest of the 1920s the DeLoaches rented from Sallie’s father and farmed cotton in Lamb County. In September 1929, just weeks before the stock market crashed, William thought the time auspicious to invest in land again, and bought 90 acres. He farmed it, then rented part of it to another farmer. Rain was plentiful in 1931, and by the end of that year DeLoach had repaid back rent to his father-in-law, paid off all outstanding debts except his land mortgage, and started 1932 in good shape. But the 1930s were hard on the southern plains, with the extended drought, dust storms, and widespread poverty. The one bright spot for farmers was the farm subsidies instituted by Franklin Roosevelt’s New Deal. In 1933 DeLoach plowed up 55 acres of already growing cotton in exchange for a check from the federal government. Lamb County led the state in the cotton reduction program, bringing nearly 1.4 million dollars into the county in 1933. Drought lingered over the Texas panhandle through 1934 and 1935, and by early 1936 DeLoach was beleaguered again. When the Supreme Court declared the Agricultural Adjustment Act (AAA) unconstitutional it appeared that federal farm subsidies would disappear. A few weeks after that decision DeLoach had a visit from his real estate agent:

Mr. Gholson came by this A.M. and wanted to know what I was going to do about my land notes. I told him I could do nothing, only let them have the land back. … I told him I had payed the school tax for 1934. Owed the state and county for 1935, also the state for 1934. All tole [sic] about $37.50. He said he would pay that and we (wife & I) could deed the land back to the Nugent people. I hate to lose the land and what I have payed on it, but I can’t do any thing else. ‘Big fish eat the little ones.’ The law is take from the poor devil that wants a home, give to the rich. I have lost about $1000.00 on the land.

A week later:

Mr. Gholson came by. Told me about the deed he had drawn in Dallas. … He said if I would pay for the deed and stamps, which would be $5.00, the deal would be closed. I asked him if that meant just as the land stood now. He said yes. He said they would pay the balance of taxes. Well, they ought to. I have payed $800.00 or better on the land, but got behind and could not do any thing else. Any way my mind is at ease. I do not think Gholson or any of the cold blooded land grafters would lose any sleep on account of taking a home away from any poor devil.

For the third time in his career DeLoach defaulted and turned over his farm. Later that month Congress rewrote the AAA legislation to meet Constitutional requirements, and the farm programs have continued ever since. With federal program income again assured, DeLoach purchased yet another 68-acre farm in September 1936, moved the family onto it, and tried again. Other families were not as persistent, and when crop failure led to bankruptcy they packed up and left the region. The term popularly applied to such emigrants, “Dust Bowl refugees,” assigned a single cause – dust storms – to what was in fact a complex and multi-causal event (Figure 11).

Like dust storms and agricultural setbacks, high out-migration was not new to the plains. Throughout the settlement period, from about 1870 to 1920, there was very high turnover in population. Many people moved into the region, but many moved out also. James Malin found that 10-year population turnover on the western Kansas frontier ranged from 41 to 67 percent between 1895 and 1930. Many people were half farmers, half land speculators, buying frontier land cheap (or homesteading it for free), then selling a few years later on a rising market. People moved from farm to farm, always looking for a better opportunity, often following a succession of frontiers over a lifetime, from Ohio to Illinois to Kansas to Colorado. Out-migration from the Great Plains in the 1930s was not considerably higher than it had been over the previous 50 years. What changed in the 1930s was that new immigrants stopped moving in to replace those leaving. Many rural areas of the grassland began a slow population decline that had not yet bottomed out in 2000.

The New Deal Response to Drought and Dust Storms

Emigrants from the Great Plains were not new in the 1930s. Neither was drought, agricultural crisis, or dust storms. This drought and these dust storms were certainly more severe than those that wracked the plains in 1879-1880, in the mid 1890s, and again in 1911. And more people were adversely affected because total population was higher. But what was most different about the 1930s was the response of the federal government. In past crises, when farmers went bankrupt, when grassland counties lost 20 percent of their population, when dust storms descended, the federal government stood aloof. It felt no responsibility for the problems, no popular mandate to solve them. Just the opposite was the case in the 1930s. The New Deal set out to solve the nation’s problems, and in the process contributed to the creation of the Dust Bowl as an historic event of mythological proportions.

The economic and agricultural disaster of the 1930s provided an opening for experimentation with federal land use management. The idea had begun among economists in agricultural colleges in the 1920s who proposed removing “submarginal” land from crop production. “Submarginal” referred to land low in productivity, unsuited for the production of farm crops, or incapable of profitable cultivation. A “land utilization” movement emerged in the 1920s to classify farm land as good, poor, marginal, or submarginal, and to forcibly retire the latter from production. Such rational planning aimed to reduce farm poverty, contract chronic overproduction of farm crops, and protect land vulnerable to damage. M.L. Wilson, of Montana State Agricultural College, focused the academic movement while Lewis C. Gray, at the Bureau of Agricultural Economics (BAE), led the effort within the U.S. Department of Agriculture. The land utilization movement began well before the 1930s, but the drought and dust storms of that decade provided a fortuitous justification for a land use policy already on the table, and newly created agencies like the Soil Conservation Service (SCS), the Resettlement Administration (RA), and the Farm Security Administration (FSA) were the loudest to publicize and deplore the Dust Bowl wracking America’s heartland.

Whereas the land use adjustment movement had begun as an attempt to solve chronic rural poverty, the arrival of dust storms in 1934 provided a second justification for aggressive federal action to change land use practices. Federal bureaucrats created the central narrative of the Dust Bowl, in part because it emphasized the need for these new reform agencies. The FSA launched a sophisticated public relations campaign to publicize the disaster unfolding in the Great Plains. It hired world-class photographers to document the suffering of plains people, giving them specific instructions from Washington to photograph the most eroded landscapes and the most destitute people. Dorothea Lange’s photographs of emigrants on the road to California still stand as some of the most evocative images in American history (Figures 12-13). The Resettlement Administration also hired filmmaker Pare Lorentz to make a series of movies, including “The Plow that Broke the Plains.”

The narrative behind this publicity campaign was this: in the nineteenth and early twentieth centuries farmers had come to the dry western plains, encouraged by a misguided Homestead Act, where they plowed up land unsuited for farming. The grassland should have been left in native grass for grazing, but small farmers, hoping to make profits growing cash crops like wheat, had plowed the land, exposing soils to relentless winds. When serious drought struck in the 1930s the wounded landscape succumbed to dust storms that devastated farms, farmers, and local economies. The result was a mass exodus of desperately poor people, a social failure caused by misuse of land. The profit motive and private land ownership were behind this failure, and only a scientifically grounded federal bureaucracy could manage land use wisely in the interests of all Americans, rather than for the profit of a few individuals. Federal agents would retire land from cultivation, return it to grassland, and teach remaining farmers how to use their land more carefully to prevent erosion. This effort would, of course, require large budgets and thousands of employees, but it was vital to resolving a rural disaster.

The New Deal government, with Congressional support and appropriations, began to put the reform plan into place. A host of new agencies vied to manage the program, including the FSA, the SCS, the RA, and the Agricultural Adjustment Administration (AAA). Each implemented a variety of reforms. The RA began purchasing “submarginal” land from farmers, eventually acquiring some 10 million acres of former farmland in the Great Plains. (These lands are now mostly managed by the U.S. Forest Service as National Grasslands leased to nearby private ranchers for grazing.) The RA and the FSA worked to relocate destitute farmers on better lands, or move them out of farming altogether. The SCS established demonstration projects in counties across the nation, where local cooperator farmers implemented recommended soil conservation techniques on their farms, such as fallowing, strip cropping, contour plowing, terracing, growing cover crops, and a variety of cultivation techniques. There were efforts in each county to establish Land Use Planning Committees made up of local farmers and federal agents who would have authority over land use practices on private farms. These committees functioned for several years in the late 1930s, but ended in most places by the early 1940s. The most important and expensive measure was the AAA’s development of a comprehensive system of farm subsidies, which paid farmers cash for reducing their acreage of commodity crops. The subsidies, created as an emergency Depression measure, have become routine and persist 70 years later. They brought millions of dollars into nearly every farming county in the U.S. and permanently transformed the economics of agriculture. In a multitude of innovative ways the federal government set out to remake American farming. The Dust Bowl narrative served exceedingly well to justify these massive and revolutionary changes in farming, America’s most common occupation for most of its history.

Conclusion

The Dust Bowl finally ended in 1941 with the arrival of drenching rains on the southern and central plains and with the advent of World War II. The rains restored crops and settled the dust. The war diverted public and government attention from the plains. In a telling move, the FSA photography corps was absorbed into the Office of War Information, the propaganda wing of the government’s war effort. The narrative of World War II replaced the Dust Bowl narrative in the public’s attention. Congress diverted funding away from the Great Plains and toward mobilization. The Land Utilization Program stopped buying submarginal land and the county Land Use Planning Committees ceased. Some of the New Deal reforms became permanent. The AAA subsidy system has continued to the present, and the Soil Conservation Service (now the Natural Resources Conservation Service) created a stable niche promoting wise agricultural land management and soil mapping.

Ironically, overall land use on the Great Plains had changed little during the decade. About the same amount of land was devoted to crops in the second half of the twentieth century as in the first half. Farmers grew the same crops in the same mixtures. Many implemented the milder reforms promoted by New Dealers – contour plowing, terracing – but little cropland was converted back to pasture. The “submarginal” regions have continued to grow wheat, sorghum, and other crops in roughly the same quantities. Despite these facts the public has generally adopted the Dust Bowl narrative. If asked, most will identify the Dust Bowl as caused by misuse of land. The descendants of the federal agencies created in the 1930s still claim to have played a leading role in solving the crisis. Periodic droughts and dust storms have returned to the region since 1941, notably in the early 1950s and again in the 1970s. Towns in the core dust storm region still have dust storms in dry years. Lubbock, Texas, for example, experienced 35 dust storms in 1973-74. Rural depopulation continues in the Great Plains (although cities in the region have grown even faster than rural places have declined). None of these droughts, dust storms, or periods of depopulation have received the concentrated public attention that those of the 1930s did. Nonetheless, environmentalists and critics of modern agricultural systems continue to warn that unless we reform modern farming the Dust Bowl may return.

References and Additional Reading

Bonnifield, Mathew P. The Dust Bowl: Men, Dirt, and Depression. Albuquerque: University of New Mexico Press, 1979.

Cronon, William. “A Place for Stories: Nature, History, and Narrative.” Journal of American History 78 (March 1992): 1347-1376.

Cunfer, Geoff. “Causes of the Dust Bowl.” In Past Time, Past Place: GIS for History, edited by Anne Kelly Knowles, 93-104. Redlands, CA: ESRI Press, 2002.

Cunfer, Geoff. “The New Deal’s Land Utilization Program in the Great Plains.” Great Plains Quarterly 21 (Summer 2001): 193-210.

Cunfer, Geoff. On the Great Plains: Agriculture and Environment. College Station: Texas A&M University Press, 2005.

The Future of the Great Plains: Report of the Great Plains Committee. Washington: Government Printing Office, 1936.

Ganzel, Bill. Dust Bowl Descent. Lincoln: University of Nebraska Press, 1984.

Great Plains Quarterly 6 (Spring 1986), special issue on the Dust Bowl.

Gregory, James N. American Exodus: The Dust Bowl Migration and Okie Culture in California. New York: Oxford University Press, 1989.

Guthrie, Woody. Dust Bowl Ballads. New York: Folkways Records, 1964.

Gutmann, Myron P. and Geoff Cunfer. “A New Look at the Causes of the Dust Bowl.” Charles L. Wood Agricultural History Lecture Series, no. 99-1. Lubbock: International Center for Arid and Semiarid Land Studies, Texas Tech University, 1999.

Hansen, Zeynep K. and Gary D. Libecap. “Small Farms, Externalities, and the Dust Bowl of the 1930s.” Journal of Political Economy 112 (2004): 665-694.

Hurt, R. Douglas. The Dust Bowl: An Agricultural and Social History. Chicago: Nelson-Hall, 1981.

Lookingbill, Brad. Dust Bowl USA: Depression America and the Ecological Imagination, 1929-1941. Athens: Ohio University Press, 2001.

Lorentz, Pare. The Plow that Broke the Plains. Washington: Resettlement Administration, 1936.

Malin, James C. “Dust Storms, 1850-1900.” Kansas Historical Quarterly 14 (May, August, and November 1946): 129-144, 265-296, 391-413.

Malin, James C. Essays on Historiography. Ann Arbor, Michigan: Edwards Brothers, 1946.

Malin, James C. The Grassland of North America: Prolegomena to Its History. Lawrence, Kansas, privately printed, 1961.

Riney-Kehrberg, Pamela. Rooted in Dust: Surviving Drought and Depression in Southwestern Kansas. Lawrence: University Press of Kansas, 1994.

Riney-Kehrberg, Pamela, editor. Waiting on the Bounty: The Dust Bowl Diary of Mary Knackstedt Dyck. Iowa City: University of Iowa Press, 1999.

Svobida, Lawrence. Farming the Dust Bowl: A Firsthand Account from Kansas. Lawrence: University Press of Kansas, 1986.

Wooten, H.H. The Land Utilization Program, 1934 to 1964: Origin, Development, and Present Status. U.S.D.A. Economic Research Service Agricultural Economic Report no. 85. Washington: Government Printing Office, 1965.

Worster, Donald. Dust Bowl: The Southern Plains in the 1930s. New York: Oxford University Press, 1979.

Wunder, John R., Frances W. Kaye, and Vernon Carstensen. Americans View Their Dust Bowl Experience. Niwot: University Press of Colorado, 1999.

Citation: Cunfer, Geoff. “The Dust Bowl”. EH.Net Encyclopedia, edited by Robert Whaples. August 18, 2004. URL http://eh.net/encyclopedia/the-dust-bowl/

History of Food and Drug Regulation in the United States

Marc T. Law, University of Vermont

Throughout history, governments have regulated food and drug products. In general, the focus of this regulation has been on ensuring the quality and safety of food and drugs. Food and drug regulation as we know it today in the United States had its roots in the late nineteenth century when state and local governments began to enact food and drug regulations in earnest. Federal regulation of the industry began on a large scale in the early twentieth century when Congress enacted the Pure Food and Drugs Act of 1906. The regulatory agency spawned by this law – the U.S. Food and Drug Administration (FDA) – now directly regulates between one-fifth and one-quarter of U.S. gross domestic product (GDP) and possesses significant power over product entry, the ways in which food and drugs are marketed to consumers, and the manufacturing practices of food and drug firms. This article will focus on the evolution of food and drug regulation in the United States from the middle of the nineteenth century until the present day.1

General Issues in Food and Drug Regulation

Perhaps the most enduring problem in the food and drug industry has been the issue of “adulteration” – the cheapening of products through the addition of impure or inferior ingredients. Since ancient times, producers of food and drug products have attempted to alter their wares in an effort to obtain dear prices for cheaper goods. For instance, water has often been added to wine, the cream skimmed from milk, and chalk added to bread. Hence, regulations governing what could or could not be added to food and drug products have been very common, as have regulations that require the use of official weights and measures. Because the adulteration of food and drugs may pose both economic and health risks to consumers, the stated public interest motivation for food and drug regulation has generally been to protect consumers from fraudulent and/or unsafe food and drug products.

From an economic perspective, regulations like these may be justified in markets where producers know more about product quality than consumers. As Akerlof (1970) demonstrates, when consumers have less information about product quality than producers, lower quality products (which are generally cheaper to produce) may drive out higher quality products. Asymmetric information about product quality may thus result in lower quality products – the so-called “lemons” – dominating the market. To the extent that regulators are better informed about quality than consumers, regulation that punishes firms that cheat on quality or that requires firms to disclose information about product quality can improve efficiency. Thus, regulations governing what can or cannot be added to products, how products are labeled, and whether certain products can be safely sold to consumers, can be justified in the public interest if consumers do not possess the information to accurately discern these aspects of product quality on their own. Regulations that solve the asymmetric information problem benefit consumers who desire better information about product quality, as well as producers of higher quality products, who desire to segment the market for their wares.
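To see the logic of this asymmetric information problem in concrete terms, the following is a minimal numerical sketch (written in Python) of Akerlof’s “lemons” argument. All of the values here, the buyer valuations, seller reservation prices, and the share of high-quality goods, are hypothetical assumptions chosen for illustration, not figures from the food and drug market or from the literature cited in this article.

# A minimal sketch of Akerlof's "lemons" logic with illustrative numbers.
# Buyers cannot observe quality before purchase, so they offer the expected
# value of a randomly drawn good; sellers accept only if the offer covers
# their reservation price.

def market_outcome(share_high, value_high=100, value_low=50,
                   reservation_high=80, reservation_low=30):
    """Return the pooled offer and whether each quality level trades."""
    offer = share_high * value_high + (1 - share_high) * value_low
    return offer, offer >= reservation_high, offer >= reservation_low

for q in (0.8, 0.5, 0.2):
    offer, high_trades, low_trades = market_outcome(q)
    print(f"high-quality share {q:.1f}: offer {offer:.0f}, "
          f"high quality trades: {high_trades}, low quality trades: {low_trades}")

With these illustrative numbers, once fewer than 60 percent of the goods on offer are high quality, the pooled offer falls below what high-quality sellers will accept and only the “lemons” remain on the market, which is the outcome that credible disclosure or quality regulation is meant to prevent.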

For certain products, it may be relatively easy for consumers to know whether or not they have been deceived into purchasing a low quality product after consuming it. For such goods, sometimes called “experience goods,” market mechanisms like branding or repeat purchase may be adequate to solve the asymmetric information problem. Consumers can “punish” firms that cheat on quality by taking their business elsewhere (Klein and Leffler 1981). Hence, as long as consumers are able to identify whether or not they have been cheated, regulation may not be needed to solve the asymmetric information problem. However, for those products where quality is not easily ascertained by consumers even after consuming the product, market mechanisms are unlikely to be adequate since it is impossible for consumers to punish cheaters if they cannot determine whether or not they have in fact been cheated (Darby and Karni 1973; McCluskey 2000). For such “credence goods,” market mechanisms may therefore be insufficient to ensure that the right level of quality is delivered. Like all goods, food and drugs are multidimensional in terms of product quality. Some dimensions of quality (for instance, flavor or texture) are experience goods because they can be easily determined upon consumption. Other dimensions (for instance, the ingredients contained in certain foods, the caloric content of foods, whether or not an item is “organic,” or the therapeutic merits of medicines) are better characterized as credence goods since it may not be obvious to even a sophisticated consumer whether or not he has been cheated. Hence, there are a priori reasons to believe that market forces will not be adequate to solve the asymmetric information problem that plagues many dimensions of food and drug quality.
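The contrast between experience and credence goods can likewise be illustrated with a small sketch of the repeat-purchase mechanism described above. Again, every number in this example (prices, costs, the detection probability, and the number of purchase periods) is a hypothetical assumption used only to make the reasoning concrete.

# An illustrative sketch of why repeat purchase disciplines cheating for
# experience goods but not for credence goods: a buyer who detects cheating
# stops buying, so cheating pays only when it goes undetected.

def expected_profit(cheat, detect_prob, periods=10, price=10,
                    cost_honest=7, cost_cheat=3):
    """Expected profit from one repeat customer over a number of periods."""
    profit, still_buying = 0.0, 1.0
    for _ in range(periods):
        margin = price - (cost_cheat if cheat else cost_honest)
        profit += still_buying * margin
        if cheat:                      # the customer leaves if the cheat is caught
            still_buying *= (1 - detect_prob)
    return profit

for detect_prob, label in ((0.9, "experience good"), (0.0, "credence good")):
    print(label,
          "honest:", round(expected_profit(False, detect_prob), 1),
          "cheating:", round(expected_profit(True, detect_prob), 1))

Under these assumptions, cheating on quality is unprofitable when consumers can detect it after consumption (the experience-good case) but always pays when they cannot (the credence-good case), which is precisely the gap that regulation by better-informed experts is meant to fill.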

Economists have long recognized that regulation is not always enacted to improve efficiency and advance the public interest. Indeed, since Stigler (1971) and Peltzman (1976), it has often been argued that regulation is sought by specific industry groups in order to tilt the competitive playing field to their advantage. For instance, by functioning as an entry barrier, regulation may raise the profits of incumbent firms by precluding the entry of new firms and new products. In these instances of “regulatory capture,” regulation harms efficiency by limiting the extent of competition and innovation in the market. In the context of product quality regulations like those applying to food and drugs, regulation may help incumbent producers by making it more costly for newer products to enter the market. Indeed, regulations that require producers to meet certain minimum standards or that ban the use of certain additives may benefit incumbent producers at the expense of producers of cheaper substitutes. Such regulations may also harm consumers, whose needs may be better met by these new prohibited products. The observation that select producer interests are often among the most vocal proponents of regulation is consistent with this regulatory capture explanation for regulation. Indeed, as we will see, a desire to shift the competitive playing field in favor of the producers of certain products has historically been an important motivation for food and drug regulation.

The fact that producer groups are often among the most important political constituencies in favor of regulation need not, however, imply that regulation necessarily advances the interests of these producers at the expense of efficiency. As noted earlier, to the extent that regulation reduces informational asymmetries about product quality, regulation may benefit producers of higher quality items as well as the consumers of such goods. Indeed, such efficiency-enhancing regulation may be particularly desirable for those producers whose goods are least amenable to market-based solutions to the asymmetric information problem (i.e., credence goods) precisely because it helps these producers expand the market for their wares and increase their profits. Hence, because it is possible for regulation that benefits certain producers to also improve welfare, producer support for regulation should not be taken as prima facie evidence of Stiglerian regulation.

United States’ Experience with Food and Drug Regulation

From colonial times until the mid to late nineteenth century, most food and drug regulation in America was enacted at the state and local level. Additionally, these regulations were generally targeted toward specific food products (Hutt and Hutt 1984). For instance, in 1641 Massachusetts introduced its first food adulteration law, which required the official inspection of beef, pork and fish; this was followed in the 1650s with legislation that regulated the quality of bread. Meanwhile, Virginia in the 1600s enacted laws to regulate weights and measures for corn and to outlaw the sale of adulterated wines.

During the latter half of the nineteenth century, the scale and scope of state level food regulation expanded considerably. Several factors contributed to this growth in legislation. For instance:

  • Specialization and urbanization made households more dependent on food purchased in impersonal markets. While these forces increased the variety of foods available, they also increased uncertainty about quality, since the more specialized and urbanized consumers became, the less they knew about the quality of products purchased from others (Wallis and North 1986).
  • Technological change in food manufacturing gave rise to new products and increased product complexity. The late nineteenth century witnessed the introduction of several new food products, including alum-based baking powders, oleomargarine (the first viable substitute for butter), glucose, canned foods, “dressed” (i.e. refrigerated) beef, blended whiskey, chemical preservatives, and so on (Strasser 1989; Young 1989; Goodwin 1999). Unfamiliarity with these new products generated consumer concerns about food safety and food adulteration. Moreover, because many of these new products directly challenged the dominant position enjoyed by more traditional foods, these developments also gave rise to demands for regulation on the part of traditional food producers who desired regulation to disadvantage these new competitors (Wood 1986).
  • Related to the previous point, the rise of analytic chemistry facilitated the “cheapening” of food in ways that were difficult for consumers to detect. For instance, the introduction of preservatives made it possible for food manufacturers to mask food deterioration. Additionally, the development of glucose as a cheap alternative to sugar facilitated deception on the part of producers of high-priced products like maple syrup. Hence, concerns about adulteration grew. At the same time, however, the rise of analytic chemistry also improved the ability of experts to detect these more subtle forms of food adulteration.
  • Because food adulteration became more difficult to detect, market mechanisms that relied on the ability of consumers to detect cheating ex post became less effective in solving the food adulteration problem. Hence, there was a growing perception that regulation by experts was necessary.2

Given this environment, it is perhaps unsurprising that a mixture of incentives gave rise to food regulation in the late nineteenth century. General pure food and dairy laws that required producers to properly label their products to indicate whether mixtures or impurities were added were likely enacted to help reduce asymmetric information about product quality (Law 2003). While producers of “pure” items also played a role in demanding these regulations, consumer groups – specifically women’s groups and leaders of the fledgling home economics movement – were also an important constituency in favor of regulation because they desired better information about food ingredients (Young 1989; Goodwin 1999). In contrast, narrow producer interest motivations seem to have been more important in generating a demand for more specific food regulations. For instance, state and federal oleomargarine restrictions were clearly enacted at the behest of dairy producing interests, who wanted to limit the availability of oleomargarine (Dupré 1999). Additionally, state and federal meat inspection laws were introduced to placate local butchers and local slaughterhouses in eastern markets who desired to reduce the competitive threat posed by the large mid-western meat packers (Yeager 1981; Libecap 1992).

Federal regulation of the food and drug industry was mostly piecemeal until the early 1900s. In 1848, Congress enacted the Drug Importation Act to curb the import of adulterated medicines. The 1886 oleomargarine tax required margarine manufacturers to stamp their product in various ways, imposed an internal revenue tax of 2 cents per pound on all oleomargarine produced in the United States, and levied a fee of $600 per year on oleomargarine producers, $480 per year on oleomargarine wholesalers, and $48 per year on oleomargarine retailers (Lee 1973; Dupré 1999). The 1891 Meat Inspection Act mandated the inspection of all live cattle for export as well as for all live cattle that were to be slaughtered and the meat exported. In 1897 the Tea Importation Act was passed which required Customs inspection of tea imported into the United States. Finally, in 1902 Congress enacted the Biologics Control Act to regulate the safety of vaccinations and serums used to prevent diseases in humans.

The 1906 Pure Food and Drugs Act and the 1906 Meat Inspection Act

The first general pure food and drug law at the federal level was not enacted until 1906 with the passage of the Pure Food and Drugs Act. While interest in federal regulation arose contemporaneously with interest in state regulation, conflict among competing interest groups regarding the provisions of a federal law made it difficult to build an effective political constituency in favor of federal regulation (Anderson 1958; Young 1989; Law and Libecap 2004). The law that emerged from this long legislative battle was similar in character to the state pure food laws that preceded it in that its focus was on accurate product labeling: it outlawed interstate trade in “adulterated” and “misbranded” foods, and required producers to indicate the presence of mixtures and/or impurities on product labels. Unlike earlier state legislation, however, the adulteration and misbranding provisions of this law also applied to drugs. Additionally, drugs listed in the United States Pharmacopoeia (USP) and the National Formulary (NF) were required to conform to USP and NF standards. Congress enacted the Pure Food and Drug Act along with the 1906 Meat Inspection Act, which tightened the USDA’s oversight of meat production. This new meat inspection law mandated ante and post mortem inspection of livestock, established sanitary standards for slaughterhouses and processing plants, and required continuous USDA inspection of meat processing and packaging. While the desire to create more uniform national food regulations was an important underlying motivation for regulation, it is noteworthy that both of these laws were enacted following a flurry of investigative journalism about the quality of meat and patent medicines. Specifically, the publication of Upton Sinclair’s The Jungle, with its vivid description of the conditions of the meat packing industry, as well as a series of articles by Samuel Hopkins Adams published in Collier’s Weekly about the dangers associated with patent medicine use, played a key role in provoking legislators to enact federal regulation of food and drugs (Wood 1986; Young 1989; Carpenter 2001; Law and Libecap 2004).3

Responsibility for enforcing the Pure Food and Drugs Act fell to the Bureau of Chemistry, a division within the USDA, which conducted some of the earliest studies of food adulteration within the United States. The Bureau of Chemistry was renamed the Food, Drug, and Insecticide Administration in 1927. In 1931 the name was shortened to the Food and Drug Administration (FDA). In 1940 the FDA was transferred from the USDA to the Federal Security Agency, which, in 1953, was renamed the Department of Health, Education and Welfare.

Whether the 1906 Pure Food and Drugs Act was enacted to advance special interests or to improve efficiency is a subject of some debate. Kolko (1967), for instance, suggests that the law reflected regulatory capture by large, national food manufacturers, who wanted to use federal legislation to disadvantage smaller, local firms. Coppin and High (1999) argue that rent-seeking on the part of bureaucrats within the government – specifically, Dr. Harvey Wiley, chief of the Bureau of Chemistry – was a critical factor in the emergence of this law. According to Coppin and High, Wiley was a “bureaucratic entrepreneur” who sought to ensure the future of his agency. By building ties with pro-regulation interest groups and lobbying in favor of a federal food and drug law, Wiley secured a lasting policy area for his organization. Law and Libecap (2004) argue that a mixture of bureaucratic, producer and consumer interests were in favor of federal food and drugs regulation, but that the last-minute onset of consumer interest in regulation (provoked by muckraking journalism about food and drug quality) played a key role in influencing the timing of regulation.

Enforcement of the Pure Food and Drugs Act met with mixed success. Indeed, the evidence from the enforcement of this law suggests that neither the pure industry-capture hypothesis nor the public-interest hypothesis provides an adequate account of regulation. On the one hand, some evidence suggests that the fledgling FDA’s enforcement work helped raise standards and reduce informational asymmetries about food quality. For instance, under the Net Weight Amendment of 1919, food and drug packages shipped in interstate commerce were required to be “plainly and conspicuously marked to show the quantity of contents in terms of weight, measure, and numerical count” (Weber 1928, p. 28). Similarly, under the Seafood Amendment of 1934, Gulf coast shrimp packaged under FDA supervision was required to be stamped with a label stating “Production supervised by the U.S. Food and Drug Administration” as a mechanism for ensuring quality and freshness. Additionally, during this period, investigators from the FDA played a key role in helping manufacturers improve the quality and reliability of processed foods, poultry products, food colorings, and canned items (Robinson 1990; Young 1992; Law 2003). On the other hand, the FDA’s efforts to regulate the patent medicine industry – specifically, to regulate the therapeutic claims that patent medicine firms made about their products – were largely unsuccessful. In U.S. vs. Johnson (1911), the Supreme Court ruled that therapeutic claims were essentially subjective and hence beyond the reach of this law. This situation was partially alleviated by the Sherley Amendment of 1912, which made it possible for the government to prosecute patent medicine producers who intended to defraud consumers. Effective regulation of pharmaceuticals was generally not possible, however, because under this amendment the government needed to prove fraud in order to successfully prosecute a patent medicine firm for making false therapeutic claims about its products (Young 1967). Hence, until new legislation was enacted in 1938, the patent medicine industry continued to escape effective federal control.

The 1938 Food, Drugs and Cosmetics Act

Like the law it replaced (the 1906 Pure Food and Drugs Act), the Food, Drugs and Cosmetics Act of 1938 was enacted following a protracted legislative battle. In the early 1930s, the FDA and its Congressional supporters began to lobby in favor of replacing the Pure Food and Drugs Act with stronger legislation that would give the agency greater authority to regulate the patent medicine industry. These efforts were successfully challenged by the patent medicine industry and its Congressional allies until 1938, when the so-called “Elixir Sulfanilamide tragedy” made it impossible for Congress to continue to ignore demands for tighter regulation. The story behind the Elixir Sulfanilamide tragedy is as follows. In 1937, Massengill, a Tennessee drug company, began to market a liquid sulfa drug called Elixir Sulfanilamide. Unfortunately, the solvent in this drug was a highly toxic variant of antifreeze; as a result, over 100 people died from taking this drug. Public outcry over this tragedy was critical in breaking the Congressional deadlock over tighter regulation (Young 1967; Jackson 1970; Carpenter and Sin 2002).

Under the 1938 law, the FDA was given considerably greater authority over the food and drug industry. The FDA was granted the power to regulate the therapeutic claims drug manufacturers printed on their product labels; authority over drug advertising, however, rested with the Federal Trade Commission (FTC) under the Wheeler-Lea Act of 1938. Additionally, the new law required that drugs be marketed with adequate directions for safe use, and FDA authority was extended to include medical devices and cosmetics. Perhaps the most striking and novel feature of the 1938 law was that it introduced mandatory pre-market approval for new drugs. Under this new law, drug manufacturers were required to demonstrate to the FDA that a new drug was safe before it could be released to the market. This feature of the legislation was clearly a reaction to the Elixir Sulfanilamide incident; food and drug bills introduced in Congress prior to 1938 did not include provisions requiring mandatory pre-market approval of new drugs.

Within a short period of time, the FDA began to deem some drugs to be so dangerous that no adequate directions could be written for direct use by patients. As a consequence, the FDA created a new class of drugs which would only be available with a physician’s prescription. Ambiguity over whether certain medicines – specifically, amphetamines and barbiturates – could be safely marketed directly to consumers or required a physician’s prescription led to disagreements between physicians, pharmacists, drug companies, and the FDA (Temin 1980). The political response to these conflicts was the Humphrey-Durham Amendment in 1951, which permitted a drug to be sold directly to patients “unless, because of its toxicity or other potential for harmful effect, or because of the method of its use or the collateral measures necessary to its use, it may safely be sold and used only under the supervision of a practitioner.”

The most significant expansion in FDA authority over drugs in the post World War II period occurred when Congress enacted the 1962 Drug Amendments (also known as the Kefauver-Harris Amendments) to the Food, Drugs and Cosmetics Act. Like the 1938 law, the 1962 Drug Amendments were passed in response to a therapeutic crisis – in this instance, the discovery that the use of thalidomide (a sedative that was marketed to combat the symptoms associated with morning sickness) by pregnant women caused birth deformities in thousands of babies in Europe.4 As a result of these amendments, drug companies were required to establish that drugs were both safe and effective prior to market release (the 1938 law only required proof of safety) and the FDA was granted greater authority to oversee clinical trials for new drugs. Under the 1962 Drug Amendments, responsibility for regulating prescription drug advertising was transferred from the FTC to the FDA; furthermore, the FDA was given the authority to establish good manufacturing practices in the drug industry and the power to access company records to monitor these practices. As a result of these amendments, the United States today has among the toughest drug approval regimes in the developed world.

A large and growing body of scholarship has been devoted to analyzing the economics and politics of the drug approval process. Early work focused on the extent to which the FDA’s pre-market approval process has affected the rate of innovation and the availability of new pharmaceuticals.5 Peltzman (1973), among others, argues that the 1962 Drug Amendments significantly reduced the flow of new drugs onto the market and imposed large welfare losses on society. These views have been challenged by Temin (1980), who maintains that much of the decline in new drug introductions occurred prior to the 1962 Drug Amendments. More recent work, however, suggests that the FDA’s pre-market approval process has indeed reduced the availability of new medicines (Wiggins 1981). In international comparisons, scholars have also found that new medicines generally become available more quickly in Europe than in America, suggesting that tighter regulation in the U.S. has induced a “drug lag” (Wardell and Lasagna 1975; Grabowski and Vernon 1983; Kaitin and Brown 1995). Some critics believe that the costs of this drug lag are large relative to the benefits because delay in the introduction of new drugs prevents patients from accessing new and more effective medicines. Gieringer (1985), for instance, estimates that the number of deaths that can be attributed to the drug lag far exceeds the number of lives saved by extra caution on the part of the FDA. Hence, according to these authors, the 1962 Drug Amendments may have had adverse consequences for overall welfare.

Other scholarship has examined the pattern of drug approval times in the post 1962 period. It is commonly observed that larger pharmaceutical firms receive faster drug approvals than smaller firms. One interpretation of this fact is that larger firms have “captured” the drug approval process and use the process to disadvantage their smaller competitors. Empirical work by Olson (1997) and Carpenter (2002), however, casts some doubt on this Stiglerian interpretation.6 These authors find that while larger firms do generally receive quicker drug approvals, drug approval times are also responsive to several other factors, including the specific disease at which a drug is directed, the number of applications submitted by the drug company, and the existence of a disease-specific interest group. Indeed, in other work, Carpenter (2004a) demonstrates that a regulator that seeks to maximize its reputation for protecting consumer safety may approve new drugs in ways that appear to benefit large firms.7 Hence, the fact that large pharmaceutical firms obtain faster drug approvals than small firms need not imply that the FDA has been “captured” by these corporations.

Food and Drug Regulation since the 1960s

Since the passage of the 1962 Drug Amendments, federal food and drug regulation in the United States has evolved along several lines. In some cases, regulation has strengthened the government’s authority over various aspects of the food and drug trade. For instance, the 1976 Medical Device Amendments required medical device manufacturers to register with the FDA and to follow quality control guidelines. These amendments also established pre-market approval guidelines for medical devices. Along similar lines, the 1990 Nutrition Labeling and Education Act required all packaged foods to contain standardized nutritional information and standardized information on serving sizes.8

In other cases, regulations have been enacted to streamline the pre-market approval process for new drugs. Concerns that mandatory pre-market approval of new drugs may have reduced the rate at which new pharmaceuticals become available to consumers prompted the FDA to issue new rules in 1991 to accelerate the review of drugs for life-threatening diseases. Similar concerns also motivated Congress to enact the Prescription Drug User Fee Act of 1992 which required drug manufacturers to pay fees to the FDA to review drug approval applications and required the FDA to use these fees to pay for more reviewers to assess these new drug applications.9 Speedier drug approval times have not, however, come without costs. Evidence presented by Olson (2002) suggests that faster drug approval times have also contributed to a higher incidence of adverse drug reactions from new pharmaceuticals.

Finally, in a few instances, legislation has weakened the government’s authority over food and drug products. For example, the 1976 Vitamins and Minerals Amendments precluded the FDA from establishing standards that limited the potency of vitamins and minerals added to foods. Similarly, the 1994 Dietary Supplement Health and Education Act weakened the FDA’s ability to regulate dietary supplements by classifying them as foods rather than drugs. In these cases, the consumers and producers of “natural” or “herbal” remedies played a key role in pushing Congress to limit the FDA’s authority.

References

Akerlof, George A. “The Market for ‘Lemons’: Quality Uncertainty and the Market Mechanism.” Quarterly Journal of Economics 84, no. 3 (1970): 488-500.

Anderson, Oscar E. Jr. The Health of a Nation: Harvey W. Wiley and the Fight for Pure Food. Chicago: University of Chicago Press, 1958.

Carpenter, Daniel P. The Forging of Bureaucratic Autonomy: Reputation, Networks, and Policy Innovation in Executive Agencies, 1862-1928. Princeton: Princeton University Press, 2001.

Carpenter, Daniel P. “Groups, the Media, Agency Waiting Costs, and FDA Drug Approval.” American Journal of Political Science 46, no. 2 (2002): 490-505.

Carpenter, Daniel P. “Protection without Capture: Drug Approval by a Politically Responsive, Bayesian Regulator.” American Political Science Review (2004a), forthcoming.

Carpenter, Daniel P. “The Political Economy of FDA Drug Review: Processing, Politics, and Lessons for Policy.” Health Affairs 23, no. 1 (2004b): 52-63.

Carpenter, Daniel P. and Gisela Sin. “Crisis and the Emergence of Economic Regulation: The Food, Drug and Cosmetics Act of 1938.” University of Michigan, Department of Political Science, unpublished manuscript, 2002.

Comanor, William S. “The Political Economy of the Pharmaceutical Industry.” Journal of Economic Literature 24, no. 3 (1986): 1178-1217.

Coppin, Clayton and Jack High. The Politics of Purity: Harvey Washington Wiley and the Origins of Federal Food Policy. Ann Arbor: University of Michigan Press, 1999.

Darby, Michael R. and Edi Karni. “Free Competition and the Optimal Amount of Fraud.” Journal of Law and Economics 16, no. 1 (1973): 67-88.

Dupré, Ruth. “If It’s Yellow, It Must be Butter: Margarine Regulation in North America since 1886.” Journal of Economic History 59, no 2 (1999): 353-71.

French, Michael and Jim Phillips. Cheated Not Poisoned? Food Regulation in the United Kingdom, 1875-1938. Manchester: Manchester University Press, 2000.

Gieringer, Dale H. “The Safety and Efficacy of New Drug Approvals.” Cato Journal 5, no. 1 (1985): 177-201.

Goodwin, Lorine S. The Pure Food, Drink, and Drug Crusaders, 1879-1914. Jefferson, NC: McFarland & Company, 1999.

Grabowski, Henry G. and John M. Vernon. The Regulation of Pharmaceuticals: Balancing the Benefits and Risks. Washington, DC: American Enterprise Institute, 1983.

Harris, Steven B. “The Right Lesson to Learn from Thalidomide.” 1992. Available at: http://w3.aces.uiuc.edu:8001/Liberty/Tales/Thalidomide.html.

Hutt, Peter Barton and Peter Barton Hutt II. “A History of Government Regulation of Adulteration and Misbranding of Food.” Food, Drug and Cosmetic Law Journal 39 (1984): 2-73.

Ippolito, Pauline M. and Janis K. Pappalardo. Advertising, Nutrition, and Health: Evidence from Food Advertising, 1977-1997. Bureau of Economics Staff Report. Washington, DC: Federal Trade Commission, 2002.

Jackson, Charles O. Food and Drug Legislation in the New Deal. Princeton: Princeton University Press, 1970.

Kaitin, Kenneth I. and Jeffrey S. Brown. “A Drug Lag Update.” Drug Information Journal 29, no. 2 (1995): 361-73.

Klein, Benjamin and Keith B. Leffler. “The Role of Market Forces in Assuring Contractual Performance.” Journal of Political Economy 89, no. 4 (1981): 615-41.

Kolko, Gabriel. The Triumph of Conservatism: A Reinterpretation of American History. New York: MacMillan, 1967.

Law, Marc T. “The Origins of State Pure Food Regulation.” Journal of Economic History 63, no. 4 (2003): 1103-1130.

Law, Marc T. “How Do Regulators Regulate? Enforcement of the Pure Food and Drugs Act, 1907-38.” University of Vermont, Department of Economics, unpublished manuscript, 2003.

Law, Marc T. and Gary D. Libecap. “The Determinants of Progressive Era Reform: The Pure Food and Drug Act of 1906.” In Corruption and Reform: Lessons from America’s History, edited by Edward Glaeser and Claudia Goldin. Chicago: University of Chicago Press, 2004 (forthcoming).

Lee, R. Alton. A History of Regulatory Taxation. Lexington: University of Kentucky Press, 1973.

Libecap, Gary D. “The Rise of the Chicago Packers and the Origins of Meat Inspection and Antitrust.” Economic Inquiry 30, no. 2 (1992): 242-262.

Mathios, Alan D. “The Impact of Mandatory Disclosure Laws on Product Choices: An Analysis of the Salad Dressing Market.” Journal of Law and Economics 43, no. 2 (2000): 651-77.

McCluskey, Jill J. “A Game Theoretic Approach to Organic Foods: An Analysis of Asymmetric Information and Policy.” Agricultural and Resource Economics Review 29, no. 1 (2000): 1-9.

Olson, Mary K. “Regulatory Agency Discretion Among Competing Industries: Inside the FDA.” Journal of Law, Economics, and Organization 11, no. 2 (1995): 379-401.

Olson, Mary K. “Explaining Regulatory Behavior in the FDA: Political Control vs. Agency Discretion.” In Advances in the Study of Entrepreneurship, Innovation, and Economic Growth, edited by Gary D. Libecap, 71-108, Greenwich: JAI Press, 1996a.

Olson, Mary K. “Substitution in Regulatory Agencies: FDA Enforcement Alternatives.” Journal of Law, Economics, and Organization 12, no. 2 (1996b): 376-407.

Olson, Mary K. “Firms’ Influences on FDA Drug Approval.” Journal of Economics and Management Strategy 6, no. 2 (1997): 377-401.

Olson, Mary K. “Regulatory Reform and Bureaucratic Responsiveness to Firms: The Impact of User Fees in the FDA.” Journal of Economics and Management Strategy 9, no. 3 (2000): 363-95.

Olson, Mary K. “Pharmaceutical Policy Change and the Safety of New Drugs.” Journal of Law and Economics 45, no 2, Part II (2002): 615-42.

Peltzman, Sam. “An Evaluation of Consumer Protection Legislation: The 1962 Drug Amendments.” Journal of Political Economy 81, no. 5 (1973): 1049-1091

Peltzman, Sam. “Toward a More General Theory of Regulation.” Journal of Law and Economics 19, no. 2 (1976): 211-40.

Robinson, Lisa M. “Regulating What We Eat: Mary Engle Pennington and the Food Research Laboratory.” Agricultural History 64 (1990): 143-53.

Stigler, George J. “The Theory of Economic Regulation.” Bell Journal of Economics and Management Science 2, no. 1 (1971): 3-21.

Strasser, Susan. Satisfaction Guaranteed: The Making of the American Mass Market. New York: Pantheon Books, 1989.

Temin, Peter. Taking Your Medicine: Drug Regulation in the United States. Cambridge: Harvard University Press, 1980.

Wallis, John J. and Douglass C. North. “Measuring the Transaction Sector of the American Economy, 1870-1970.” In Long Term Factors in American Economic Growth, edited by Stanley Engerman and Robert Gallman, 95-148. Chicago: University of Chicago Press, 1986.

Wardell, William M. and Louis Lasagna. Regulation and Drug Development. Washington, DC: American Enterprise Institute, 1975.

Weber, Gustavus. The Food, Drug and Insecticide Administration: Its History, Activities, and Organization. Baltimore: Johns Hopkins University Press, 1928.

Wiggins, Steven N. “Product Quality Regulation and New Drug Introductions: Some New Evidence from the 1970s.” Review of Economics and Statistics 63, no. 4 (1981): 615-19.

Wood, Donna J. The Strategic Use of Public Policy: Business and Government in the Progressive Era. Marshfield, MA: Pitman Publishing, 1986.

Yeager, Mary A. Competition and Regulation: The Development of Oligopoly in the Meat Packing Industry. Greenwich, CT: JAI Press, 1981.

Young, James H. The Medical Messiahs: A Social History of Quackery in Twentieth Century America. Princeton: Princeton University Press, 1967.

Young, James H. Pure Food: Securing the Federal Food and Drugs Act of 1906. Princeton: Princeton University Press, 1989.

Young, James H. “Food and Drug Enforcers in the 1920s: Restraining and Educating Business.” Business and Economic History 21 (1992): 119-128.

1 See Hutt and Hutt (1984) for an excellent survey of the history of food regulation in earlier times. French and Phillips (2000) discuss the development of food regulation in the United Kingdom.

2 This rationale for regulation was articulated by a member of the 49th Congress (1885):

In ordinary cases the consumer may be left to his own intelligence to protect himself against impositions. By the exercise of a reasonable degree of caution, he can protect himself from frauds in under-weight and in under-measure. If he can not detect a paper-soled shoe on inspection he detects it in the wearing of it, and in one way or another he can impose a penalty upon the fraudulent vendor. As a general rule the doctrine of laissez faire can be applied. Not so with many of the adulterations of food. Scientific inspection is needed to detect the fraud, and scientific inspection is beyond the reach of the ordinary consumer. In such cases, the Government should intervene (Congressional Record, 49th Congress, 1st Session, pp. 5040-41).

3 It is noteworthy that in writing The Jungle, Sinclair’s motivation was not to obtain federal meat inspection legislation, but rather, to provoke public outrage over industrial working conditions. “I aimed at the public’s heart,” he later wrote, “and by accident I hit it in the stomach.” (Quoted in Kolko 1967, p. 103.)

4 Thalidomide was not approved for sale in the U.S. The fact that an FDA official – Dr. Frances Kelsey, an FDA drug examiner – played a key role in blocking its availability in the United States gave even more legitimacy to the view that the FDA’s authority over pharmaceuticals needed to be strengthened. See Temin (1980, pp. 123-24). Ironically, Dr. Kelsey’s efforts to block the introduction of thalidomide in the United States stemmed not from knowledge about the fact that thalidomide caused birth defects, but rather, from concerns that thalidomide might cause neuropathy (a disease of the nervous system) in some of its users. Indeed, the association between thalidomide and birth defects was discovered by researchers in Europe, not by drug investigators at the FDA. Hence, the FDA may not in fact have deserved the credit it was given in preventing the thalidomide tragedy from spreading to the U.S. (Harris 1992).

5 See Comanor (1986) for a summary of this literature.

6 Along these lines, Olson (1995, 1996a, 1996b) also finds that other aspects of the FDA’s enforcement work from the 1970s until the present are generally responsive to pressures from multiple interest groups including firms, consumer groups, the media, and Congress.

7 For a very readable discussion of this perspective see Carpenter (2004b).

8 See Mathios (2000) and Ippolito and Pappalardo (2002) for analyses of the effects of this law on food consumption choices.

9 See Olson (2000) for analysis of the effects of these user fees on approval times.

Citation: Law, Marc. “History of Food and Drug Regulation in the United States”. EH.Net Encyclopedia, edited by Robert Whaples. October 11, 2004. URL http://eh.net/encyclopedia/history-of-food-and-drug-regulation-in-the-united-states/

Economy of England at the Time of the Norman Conquest

John McDonald, Flinders University, Adelaide, Australia

The Domesday Survey of 1086 provides high quality and detailed information on the inputs, outputs and tax assessments of most English estates. This article describes how the data have been used to reconstruct the eleventh-century Domesday economy. By exploiting modern economic theory and statistical methods the reconstruction has led to a radically different assessment of the way in which the Domesday economy and fiscal system were organized. It appears that tax assessments were based on a capacity to pay principle subject to politically expedient concessions and we can discover who received lenient assessments and why. Penetrating questions can be asked about the economy. We can compare the efficiency of Domesday agricultural production with the efficiency of more modern economies, measure the productivity of inputs and assess the impact of feudalism and manorialism on economic activity. The emerging picture of a reasonably well organized economy and fair tax system contrasts with the assessment of earlier historians who saw the Normans as capable military and civil administrators but regarded the economy as haphazardly run and tax assessments as “artificial” or arbitrary. The next section describes the Survey, the contemporary institutional arrangements and the main features of Domesday agricultural production. Some key findings on the Domesday economy and tax system are then briefly discussed.

Domesday England and the Domesday Survey

William the Conqueror invaded England from France in 1066 and carried out the Domesday Survey twenty years later. By 1086, Norman rule had been largely consolidated, although only after rebellion and civil dissent had been harshly put down. The Conquest was achieved by an elite, and, although the Normans brought new institutions and practices, these were superimposed on the existing order. Most of the Anglo-Saxon aristocracy were eliminated, the lands of over 4,000 English lords passing to less than 200 Norman barons, with much of the land held by just a handful of magnates.

William ruled vigorously through the Great Council. England was divided into shires, or counties, which were subdivided into hundreds. There was a sophisticated and long established shire administration. The sheriff was the king’s agent in the county, royal orders could be transmitted through the county and hundred courts, and an effective taxation collection system was in place.

England was a feudal state. All land belonged to the king. He appointed tenants-in-chief, both lay and ecclesiastical, who usually held land in return for providing a quota of fully equipped knights. The tenants-in-chief might then grant the land to sub-tenants in return for rents or services, or work the estate themselves through a bailiff. Although the Survey records 112 boroughs, agriculture was the predominant economic activity, with stock rearing of greater importance in the south-west and arable farming more important in the east and midlands. Manorialism was a pervasive influence, although it existed in most parts of England in a modified form. On the manor the peasants worked the lord’s demesne in return for protection, housing, and the use of plots of land to cultivate their own crops. They were tied to the lord and the manor and provided a resident workforce. The demesne was also worked by slaves who were fed and housed by the lord.

The Domesday Survey was commissioned on Christmas day, 1085, and it is generally thought that work on summarizing the Survey was terminated with the death of William in September 1087. The task was facilitated by the availability of Anglo-Saxon hidage (tax) lists. The counties of England were grouped into (probably) seven circuits. Each circuit was visited by a team of commissioners, bishops, lawyers and lay barons who had no material interests in the area. The commissioners were responsible for circulating a list of questions to land holders, for subjecting the responses to a review in the county court by the hundred juries, often consisting of half Englishmen and half Frenchmen, and for supervising the compilation of county and circuit returns. The circuit returns were then sent to the Exchequer in Winchester where they were summarized, edited and compiled into Great Domesday Book.

Unlike modern surveys, individual questionnaire responses were not treated confidentially but became public knowledge, being verified in the courts by landholders with local knowledge. In such circumstances, the opportunities for giving false or misleading evidence were limited.

Domesday Book consists of two volumes, Great (or Exchequer) Domesday and Little Domesday. Little Domesday is a detailed original survey circuit return of circuit VII, Essex, Norfolk and Suffolk. Great Domesday is a summarized version of the other circuit returns sent to the King’s treasury in Winchester. (It is thought that the death of William occurred before Essex and East Anglia could be included in Great Domesday.) The two volumes contain information on the net incomes or outputs (referred to as the annual values), tax assessments and resources of most manors in England in 1086, some information for 1066, and sometimes also for an intermediate year. The information was used to revise tax assessments and document the feudal structure, “who held what, and owed what, to whom.”

Taxation

The Domesday tax assessments relate to a non-feudal tax, the geld, thought to be levied annually by the end of William’s reign. The tax can be traced back to the danegeld, and, although originally a land tax, by Norman times, it was more broadly based and a significant impost on landholders.

There is an extensive literature on the Norman tax system, much of it influenced by Round (1895), who considered the assessments to be “artificial,” in the sense that they were imposed from above via the county and hundred with little or no consideration of the capacity of an individual estate to pay the tax. Round largely based his argument on an unsystematic and subjective review of the distribution of the assessments across estates, vills and the hundreds of counties.

In McDonald and Snooks (1985a) and (1986, Ch. 4), Graeme Snooks and I argued that, contrary to Round’s hypothesis, the tax assessments were based on a capacity to pay principle, subject to some politically expedient tax concessions. Similar tax systems operate in most modern societies and reflect an attempt to collect revenue in a politically acceptable way. Using statistical methods, we found empirical support for the hypothesis. We showed, for example, that for Essex lay estates about 65 percent of variation in the tax assessments could be attributed to variations in manorial net incomes or manorial resources, two alternative ways of measuring capacity to pay. Similar results were obtained for other counties. Capacity to pay explains from 64 to 89 percent of variation in individual estate assessment data for the counties of Buckinghamshire, Cambridgeshire, Essex and Wiltshire, and from 72 to 81 percent for aggregate data for 29 counties (see McDonald and Snooks, 1987a). The estimated tax relationships capture the main features of the tax system.
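
The flavor of this capacity-to-pay test can be conveyed with a simple least-squares regression. The sketch below is only illustrative: the estate figures are invented and the specification is far simpler than the ones actually estimated, but it shows how the share of assessment variation explained by annual values (the R-squared) would be computed.

```python
import numpy as np

# Hypothetical data: annual values (capacity to pay) and geld assessments
# for a handful of estates. Invented numbers, for illustration only.
annual_value = np.array([2.0, 4.0, 5.5, 8.0, 10.0, 12.0, 15.0, 20.0])  # pounds
assessment   = np.array([1.0, 2.5, 2.0, 4.5,  5.0,  7.5,  6.5, 10.0])  # hides

# OLS of assessment on annual value (with an intercept).
X = np.column_stack([np.ones_like(annual_value), annual_value])
beta, *_ = np.linalg.lstsq(X, assessment, rcond=None)
fitted = X @ beta

# R-squared: share of assessment variation "explained" by capacity to pay.
ss_res = np.sum((assessment - fitted) ** 2)
ss_tot = np.sum((assessment - assessment.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot

print(f"slope = {beta[1]:.3f} hides per pound, R^2 = {r_squared:.2f}")
```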

Capacity to pay explains most variation in tax assessments, but some variation remains. Who and which estates were treated favorably? And what factors were associated with lenient taxation? These issues were investigated in McDonald (1998) where frontier methods were used to derive a measure of how favorable the tax assessments were for each Essex lay estate. (The frontier methods, also known as “data envelopment analysis,” use the tax and income observations to trace out an outer bound, or frontier, for the tax relationship.) Estates, tenants-in-chief and local areas (hundreds) of the county with lenient assessments were identified, and statistical methods used to discover factors associated with favorable assessments. Some significant factors were the tenant-in-chief holding the estate (assessments tended to be less beneficial for the tenants-in-chief holding a large number of estates in Essex), the hundred location (some hundreds receiving more favorable treatment than others), proximity to an urban center (estates remote from the urban centers being more favorably treated), economic size of the estate (larger estates being less favorably treated) and tenure (estates held as sub-tenancies having more lenient assessments). The results suggest a similarity with more modern tax systems, with some groups and activities receiving minor concessions and the administrative process inducing some unevenness in the assessments. Although many details of the tax system have been lost in the mists of time, careful analysis of the Survey data has enabled us to rediscover its main features.
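
A stylized version of such a leniency measure can be obtained by tracing an upper envelope of assessments over annual values: the frontier assessment for an estate is the heaviest assessment borne by any estate with no more income, and leniency is the ratio of the actual to the frontier assessment. The sketch below, with invented data, illustrates only this general idea; it is not the data envelopment procedure used in McDonald (1998).

```python
import numpy as np

# Invented example data: annual value (income) and geld assessment per estate.
income     = np.array([2.0, 4.0, 5.5, 8.0, 10.0, 12.0, 15.0, 20.0])
assessment = np.array([1.0, 2.5, 2.0, 4.5,  5.0,  7.5,  6.5, 10.0])

def leniency(income, assessment):
    """Ratio of actual to frontier assessment (1 = on the frontier, <1 = lenient).

    The frontier assessment for estate i is the largest assessment borne by
    any estate whose income does not exceed estate i's income.
    """
    frontier = np.array([assessment[income <= y].max() for y in income])
    return assessment / frontier

for i, score in enumerate(leniency(income, assessment)):
    print(f"estate {i}: leniency score = {score:.2f}")
```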

Production

Since Victorian times historians have used Domesday Book to study the political, institutional and social structures and the geography of Domesday England. However, the early scholars tended to shy away from economic issues. They were unable to perceive that systematic economic relationships were present in the Domesday economy, and, in contrast to their view that the Normans displayed considerable ability in civil administration and military matters, economic production was regarded as poorly organized (see McDonald and Snooks, 1985a, 1985b and 1986, especially Ch. 3). One explanation why the Domesday scholars were unable to discover consistent relationships in the economy lies in the empirical method they adopted. Rather than examining the data as a whole using statistical techniques, conclusions were drawn by generalizing from a few (often atypical) cases. It is not surprising that no consistent pattern was evident when data were restricted to a few unusual observations. It would also appear that the researchers often did not have a firm grasp of economic theory (for example, seemingly being perplexed that the same annual value, that is, net output, could be generated by estates with different input mixes, see McDonald and Snooks, 1986, Ch. 3).

In McDonald and Snooks (1986), using modern economic and statistical methods, Graeme Snooks and I reanalyzed manorial production relationships. The study shows that strong relationships existed linking estate net output to inputs. We estimated manorial production functions which indicate many interesting characteristics of Domesday production: returns to scale were close to constant, oxen plough teams and meadowland were prized inputs in production but horses contributed little, and villans, bordars and slaves (the less free workers) contributed far more than freemen and sokemen (the more free) to the estate’s net output. The evidence suggested that in many ways Domesday landholders operated in a manner similar to modern entrepreneurs. Unresolved by this research was the question of how similar was the pattern of medieval and modern economic activity. In particular, how well organized was estate production?
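
The kind of production-function evidence described here can be illustrated with a log-linear (Cobb-Douglas) regression of net output on inputs, where the sum of the estimated input elasticities measures returns to scale (a sum near one implies roughly constant returns). The data and input list below are invented and deliberately simplified; this is a sketch of the method, not the specification estimated in McDonald and Snooks (1986).

```python
import numpy as np

# Invented example: annual value (net output) and three inputs per estate.
annual_value = np.array([ 3.0,  6.0,  8.0, 12.0, 15.0, 22.0, 30.0, 40.0])
plough_teams = np.array([ 1.0,  2.0,  2.5,  4.0,  5.0,  7.0, 10.0, 14.0])
workers      = np.array([ 4.0,  7.0, 10.0, 14.0, 18.0, 26.0, 34.0, 45.0])
meadow_acres = np.array([ 2.0,  3.0,  5.0,  8.0, 10.0, 15.0, 20.0, 30.0])

# Cobb-Douglas: ln(value) = a + b1*ln(teams) + b2*ln(workers) + b3*ln(meadow)
X = np.column_stack([np.ones_like(annual_value),
                     np.log(plough_teams), np.log(workers), np.log(meadow_acres)])
y = np.log(annual_value)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

elasticities = beta[1:]
returns_to_scale = elasticities.sum()  # near 1 indicates roughly constant returns
print("input elasticities:", np.round(elasticities, 2))
print("returns to scale:  ", round(returns_to_scale, 2))
```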

Clearly, in an absolute sense Domesday estate production was inefficient. With modern technology, using, for example, motorized tractors, output could have been increased many-fold. A more interesting question is: Given the contemporary technology and institutions, how efficient was production?

In McDonald (1998) frontier methods were used to measure best practice, given the economic environment. We then measured how far, on average, estate production was below the best practice frontier. Provided that some estates were effectively organized, so that best practice was indeed good practice, this is a useful measure. If many estates were run haphazardly and ineffectively, average efficiency will be low and efficiency dispersion measures large. Comparisons with average efficiency levels in similar production situations will give an indication of whether Domesday average efficiency was unusually low.
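
One minimal way to operationalize this idea is a free-disposal-hull style calculation: an estate's efficiency score is its net output relative to the largest output achieved by any estate using no more of every input, and average efficiency is the mean of these scores. The sketch below uses invented data and only illustrates the frontier logic; McDonald (1998) employs more elaborate data envelopment methods.

```python
import numpy as np

# Invented example: net output and two inputs (plough teams, workers) per estate.
output = np.array([ 5.0,  9.0,  6.0, 14.0, 11.0, 20.0])
inputs = np.array([[ 2.0,  6.0],
                   [ 3.0, 10.0],
                   [ 3.0, 10.0],
                   [ 5.0, 15.0],
                   [ 5.0, 16.0],
                   [ 8.0, 24.0]])

def efficiency_scores(output, inputs):
    """Output of each estate relative to the best output attained by any
    estate using no more of every input (free-disposal-hull frontier)."""
    scores = []
    for i in range(len(output)):
        dominating = np.all(inputs <= inputs[i], axis=1)  # uses no more of each input
        best = output[dominating].max()
        scores.append(output[i] / best)
    return np.array(scores)

scores = efficiency_scores(output, inputs)
print("efficiency scores: ", np.round(scores, 2))
print("average efficiency:", round(scores.mean(), 2))
```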

A large number of efficiency studies have been reported in the literature. Three case studies with characteristics similar to Domesday production are Hall’s (1975) study of agriculture after the Civil War in the American South, Hall and LeVeen’s (1978) analysis of small Californian farms and Byrnes, Färe, Grosskopf and Lovell’s (1988) study of American surface coalmines. For all three studies the individual establishment is the production unit, the economic activity is unsophisticated primary production and similar frontier methods are used to measure efficiency.

The comparison studies suggest that efficiency levels varied less across Domesday estates than they did among postbellum Southern farms and small Californian farms in the 1970s (and were very similar for Domesday estates and US surface coalmines). Certainly, the average Domesday estate efficiency level does not appear to be unusually low when compared with average efficiency levels in similar production situations.

In McDonald (1998) estate efficiency measures are also used to examine details of production on individual estates and statistical methods employed to find factors associated with efficiency. Some of these include the estate’s tenant-in-chief (some tenants-in-chief displayed more entrepreneurial flair than others), the size of the estate (larger estates, using inputs in different proportions to smaller estates, tended to be more efficient) and the kind of agriculture undertaken (estates specialized in grazing were more efficient).

Largely through the influences of feudalism and manorialism, Domesday agriculture suffered from poorly developed factor markets and considerable immobility of inputs. Although there were exceptions to the rule, as a first approximation, manorial production can be characterized in terms of estates worked by a residential labor force using the resources available on the estate.

Input productivity depends on the mix of inputs used in production, and with estates endowed with widely different resource mixes, one might expect that input productivities would vary greatly across estates. The frontier analysis generates input productivity measures (shadow prices), and these confirm this expectation — indeed on many estates some inputs made very little contribution to production. The frontier analysis also allows us to estimate the economic cost of input rigidity induced by the feudal and manorial arrangements. The calculation indicates that if inputs had been mobile among estates an increase in total net output of 40.1 percent would have been possible. This potential loss in output is considerable. The frontier analysis indicates the loss in total net output resulting from estates not being fully efficient was 51.0 percent. The loss in output due to input rigidities is smaller, but of a similar order of magnitude.

Domesday Book is indeed a rich data source. It is remarkable that so much can be discovered about the English economy almost one thousand years ago.

Further reading

Background information on Domesday England is contained in McDonald and Snooks (1986, Ch. 1 and 2; 1985a, 1985b, 1987a and 1987b) and McDonald (1998). For more comprehensive accounts of the history of the period see Brown (1984), Clanchy (1983), Loyn (1962), (1965), (1983), Stenton (1943), and Stenton (1951). Other useful references include Ballard (1906), Darby (1952), (1977), Galbraith (1961), Hollister (1965), Lennard (1959), Maitland (1897), Miller and Hatcher (1978), Postan (1966), (1972), Round (1895), (1903), the articles in Williams (1987) and references cited in McDonald and Snooks (1986). The Survey is discussed in McDonald and Snooks (1986, sec. 2.2), the references cited there, and the articles in Williams (1987). The Domesday and modern surveys are compared in McDonald and Snooks (1985c).
The reconstruction of the Domesday economy is described in McDonald and Snooks (1986). Part 1 contains information on the basic tax and production relationships and Part 2 describes the methods used to estimate the relationships. The tax and production frontier analysis and efficiency comparisons are described in McDonald (1998). The book also explains the frontier methodology. A series of articles describe features of the research to different audiences: McDonald and Snooks (1985a, 1985b, 1987a, 1987b), economic historians; McDonald (2000), economists; McDonald (1997), management scientists; McDonald (2002), accounting historians (who recognize that Domesday Book possesses many attributes of an accounting record); and McDonald and Snooks (1985c), statisticians. Others who have made important contributions to our understanding of the Domesday economy include Miller and Hatcher (1978), Harvey (1983) and the contributors to the volumes edited by Aston (1987), Holt (1987), Hallam (1988) and Britnell and Campbell (1995).

References

Aston, T.H., editor. Landlords, Peasants and Politics in Medieval England. Cambridge: Cambridge University Press, 1987.
Ballard, Adolphus. The Domesday Inquest. London: Methuen, 1906.
Britnell, Richard H. and Bruce M.S. Campbell, editors. A Commercialising Economy: England 1086 to c. 1300. Manchester: Manchester University Press, 1995.
Brown, R. Allen. The Normans. Woodbridge: Boydell Press, 1984.
Byrnes, P., R. Färe, S. Grosskopf and C.A. K. Lovell. “The Effect of Unions on Productivity: U.S. Surface Mining of Coal.” Management Science 34 (1988): 1037-53.
Clanchy, M.T. England and Its Rulers, 1066-1272. Glasgow: Fontana, 1983.
Darby, H.C. The Domesday Geography of Eastern England. Reprinted 1971. Cambridge: Cambridge University Press, 1952.
Darby, H.C. Domesday England. Reprinted 1979. Cambridge: Cambridge University Press, 1977.
Darby, H.C. and I.S. Maxwell, editors. The Domesday Geography of Northern England. Cambridge: Cambridge University Press, 1962.
Galbraith, V.H. The Making of Domesday Book. Oxford: Clarendon Press, 1961.
Hall, A. R. “The Efficiency of Post-Bellum Southern Agriculture.” Ann Arbor, MI: University Microfilms International, 1975.
Hall, B. F. and E. P. LeVeen. “Farm Size and Economic Efficiency: The Case of California.” American Journal of Agricultural Economics 60 (1978): 589-600.
Hallam, H.E. Rural England, 1066-1348. Brighton: Fontana, 1981.
Hallam, H.E., editor. The Agrarian History of England and Wales, II: 1042-1350. Cambridge: Cambridge University Press, 1988.
Harvey, S.P.J. “The Extent and Profitability of Demesne Agriculture in the Latter Eleventh Century.” In Social Relations and Ideas: Essays in Honour of R.H. Hilton, edited by T.H. Ashton et al. Cambridge, Cambridge University Press, 1983.
Hollister, C.W. The Military Organisation of Norman England. Oxford: Clarendon Press, 1965.
Holt, J. C., editor. Domesday Studies. Woodbridge: Boydell Press, 1987.
Langdon, J. “The Economics of Horses and Oxen in Medieval England.” Agricultural History Review 30 (1982): 31-40.
Lennard, R. Rural England 1086-1135: A Study of Social and Agrarian Conditions. Oxford: Clarendon Press, 1959.
Loyn, R. Anglo-Saxon England and the Norman Conquest. Reprinted 1981. London: Longman, 1962.
Loyn, R. The Norman Conquest. Reprinted 1981. London: Longman, 1965.
Loyn, R. The Governance of Anglo-Saxon England, 500-1087. London: Edward Arnold, 1983.
McDonald, John. “Manorial Efficiency in Domesday England.” Journal of Productivity Analysis 8 (1997): 199-213.
McDonald, John. Production Efficiency in Domesday England. London: Routledge, 1998.
McDonald, John. “Domesday Economy: An Analysis of the English Economy Early in the Second Millennium.” National Institute Economic Review 172 (2000): 105-114.
McDonald, John. “Tax Fairness in Eleventh Century England.” Accounting Historians Journal 29 (2002): 173-193.
McDonald, John, and G. D. Snooks. “Were the Tax Assessments of Domesday England Artificial? The Case of Essex.” Economic History Review 38 (1985a): 353-373.
McDonald, John, and G. D. Snooks. “The Determinants of Manorial Income in Domesday England: Evidence from Essex.” Journal of Economic History 45 (1985b): 541-556.
McDonald, John, and G. D. Snooks. “Statistical Analysis of Domesday Book (1086).” Journal of the Royal Statistical Society, Series A 148 (1985c): 147-160.
McDonald, John, and G. D. Snooks. Domesday Economy: A New Approach to Anglo-Norman History. Oxford: Clarendon Press, 1986.
McDonald, John, and G. D. Snooks. “The Suitability of Domesday Book for Cliometric Analysis.” Economic History Review 40 (1987a): 252-261.
McDonald, John, and G. D. Snooks. “The Economics of Domesday England.” In Domesday Book Studies, edited by A. Williams. London: Alecto Historical Editions, 1987.
Maitland, Frederic William. Domesday Book and Beyond. Reprinted 1921, Cambridge: Cambridge University Press, 1897.
Miller, Edward, and John Hatcher. Medieval England: Rural Society and Economic Change 1086-1348. London: Longman, 1978.
Morris, J., general editor. Domesday Book: A Survey of the Counties of England. Chichester: Phillimore, 1975.
Postan, M. M. Medieval Agrarian Society in Its Prime, The Cambridge Economic History of Europe. Vol. 1, M. M. Postan, editor. Cambridge: Cambridge University Press, 1966.
Postan, M. M. The Medieval Economy and Society: An Economic History of Britain in the Middle Ages. London: Weidenfeld & Nicolson, 1972.
Raftis, J. A. The Estates of Ramsey Abbey: A Study in Economic Growth and Organisation. Toronto: Pontifical Institute of Medieval Studies, 1957.
Round, John Horace. Feudal England: Historical Studies on the Eleventh and Twelfth Centuries. Reprinted 1964. London: Allen & Unwin, 1895.
Round, John Horace. “Essex Survey.” In VCH Essex. Vol. 1, reprinted 1977. London: Dawson, 1903.
Snooks, G. D. “The Dynamic Role of the Market in the Anglo-Saxon Economy and Beyond, 1086-1300.” In A Commercialising Economy: England 1086 to c. 1300, edited by R. H. Britnell and B. M. S. Campbell. Manchester: Manchester University Press, 1995.
Stenton, D. M. English Society in the Middle Ages. Reprinted 1983. Harmondsworth: Penguin, 1951.
Stenton, F. M. Anglo-Saxon England. Reprinted 1975. Oxford: Clarendon Press, 1943.
Victoria County History. London: Oxford University Press, 1900-.
Williams, A., editor. Domesday Book Studies. London: Alecto Historical Editions, 1987.

Citation: McDonald, John. “Economy of England at the Time of the Norman Conquest”. EH.Net Encyclopedia, edited by Robert Whaples. September 9, 2004. URL http://eh.net/encyclopedia/economy-of-england-at-the-time-of-the-norman-conquest/

The Depression of 1893

David O. Whitten, Auburn University

The Depression of 1893 was one of the worst in American history with the unemployment rate exceeding ten percent for half a decade. This article describes economic developments in the decades leading up to the depression; the performance of the economy during the 1890s; domestic and international causes of the depression; and political and social responses to the depression.

The Depression of 1893 can be seen as a watershed event in American history. It was accompanied by violent strikes, the climax of the Populist and free silver political crusades, the creation of a new political balance, the continuing transformation of the country’s economy, major changes in national policy, and far-reaching social and intellectual developments. Business contraction shaped the decade that ushered out the nineteenth century.

Unemployment Estimates

One way to measure the severity of the depression is to examine the unemployment rate. Table 1 provides estimates of unemployment, which are derived from data on output — annual unemployment was not directly measured until 1929, so there is no consensus on the precise magnitude of the unemployment rate of the 1890s. Despite the differences in the two series, however, it is obvious that the Depression of 1893 was an important event. The unemployment rate exceeded ten percent for five or six consecutive years. The only other time this occurred in the history of the US economy was during the Great Depression of the 1930s.

Timing and Depth of the Depression

The National Bureau of Economic Research estimates that the economic contraction began in January 1893 and continued until June 1894. The economy then grew until December 1895, but it was then hit by a second recession that lasted until June 1897. Estimates of annual real gross national product (which adjust for this period’s deflation) are fairly crude, but they generally suggest that real GNP fell about 4% from 1892 to 1893 and another 6% from 1893 to 1894. By 1895 the economy had grown past its earlier peak, but real GNP fell about 2.5% from 1895 to 1896. During this period population grew at about 2% per year, so real GNP per person didn’t surpass its 1892 level until 1899. Immigration, which had averaged over 500,000 people per year in the 1880s and which would surpass one million people per year in the first decade of the 1900s, averaged only 270,000 from 1894 to 1898.
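
The per-capita arithmetic behind the last claim is simple compounding: with population growing roughly 2 percent a year, total real GNP must exceed its 1892 level by the cumulated population growth before real GNP per person regains its 1892 level. A minimal check using only the rounded 2 percent figure quoted above:

```python
# With population growing ~2% a year, real GNP per person regains its 1892
# level only once total real GNP exceeds 1892 output by the same margin.
pop_growth = 0.02
for year in range(1893, 1900):
    required_gain = (1 + pop_growth) ** (year - 1892) - 1
    print(f"{year}: total real GNP must exceed its 1892 level by {required_gain:.1%}")
```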

Table 1
Estimates of Unemployment during the 1890s

Year Lebergott Romer
1890 4.0% 4.0%
1891 5.4 4.8
1892 3.0 3.7
1893 11.7 8.1
1894 18.4 12.3
1895 13.7 11.1
1896 14.5 12.0
1897 14.5 12.4
1898 12.4 11.6
1899 6.5 8.7
1900 5.0 5.0

Source: Romer, 1984

The depression struck an economy that was more like the economy of 1993 than that of 1793. By 1890, the US economy generated one of the highest levels of output per person in the world — below that in Britain, but higher than the rest of Europe. Agriculture no longer dominated the economy, producing only about 19 percent of GNP, well below the 30 percent produced in manufacturing and mining. Agriculture’s share of the labor force, which had been about 74% in 1800 and 60% in 1860, had fallen to roughly 40% in 1890. As Table 2 shows, only the South remained a predominantly agricultural region. Throughout the country few families were self-sufficient; most relied on selling their output or labor in the market — unlike those living in the country one hundred years earlier.

Table 2
Agriculture’s Share of the Labor Force by Region, 1890

Northeast 15%
Middle Atlantic 17%
Midwest 43%
South Atlantic 63%
South Central 67%
West 29%

Economic Trends Preceding the 1890s

Between 1870 and 1890 the number of farms in the United States rose by nearly 80 percent, to 4.5 million, and increased by another 25 percent by the end of the century. Farm property value grew by 75 percent, to $16.5 billion, and by 1900 had increased by another 25 percent. The advancing checkerboard of tilled fields in the nation’s heartland represented a vast indebtedness. Nationwide about 29% of farmers were encumbered by mortgages. One contemporary observer estimated 2.3 million farm mortgages nationwide in 1890 worth over $2.2 billion. But farmers in the plains were much more likely to be in debt. Kansas croplands were mortgaged to 45 percent of their true value, those in South Dakota to 46 percent, in Minnesota to 44, in Montana 41, and in Colorado 34 percent. Debt covered a comparable proportion of all farmlands in those states. Under favorable conditions the millions of dollars of annual charges on farm mortgages could be borne, but a declining economy brought foreclosures and tax sales.

Railroads opened new areas to agriculture, linking these to rapidly changing national and international markets. Mechanization, the development of improved crops, and the introduction of new techniques increased productivity and fueled a rapid expansion of farming operations. The output of staples skyrocketed. Yields of wheat, corn, and cotton doubled between 1870 and 1890 though the nation’s population rose by only two-thirds. Grain and fiber flooded the domestic market. Moreover, competition in world markets was fierce: Egypt and India emerged as rival sources of cotton; other areas poured out a growing stream of cereals. Farmers in the United States read the disappointing results in falling prices. Over 1870-73, corn and wheat averaged $0.463 and $1.174 per bushel and cotton $0.152 per pound; twenty years later they brought but $0.412 and $0.707 a bushel and $0.078 a pound. In 1889 corn fell to ten cents in Kansas, about half the estimated cost of production. Some farmers in need of cash to meet debts tried to increase income by increasing output of crops whose overproduction had already demoralized prices and cut farm receipts.

Railroad construction was an important spur to economic growth. Expansion peaked between 1879 and 1883, when eight thousand miles a year, on average, were built, including the Southern Pacific, Northern Pacific, and Santa Fe. An even higher peak was reached in the late 1880s, and the roads provided important markets for lumber, coal, iron, steel, and rolling stock.

The post-Civil War generation saw an enormous growth of manufacturing. Industrial output rose by some 296 percent, reaching in 1890 a value of almost $9.4 billion. In that year the nation’s 350,000 industrial firms employed nearly 4,750,000 workers. Iron and steel paced the progress of manufacturing. Farm and forest continued to provide raw materials for such established enterprises as cotton textiles, food, and lumber production. Heralding the machine age, however, was the growing importance of extractives — raw materials for a lengthening list of consumer goods and for producing and fueling locomotives, railroad cars, industrial machinery and equipment, farm implements, and electrical equipment for commerce and industry. The swift expansion and diversification of manufacturing allowed a growing independence from European imports and was reflected in the prominence of new goods among US exports. Already the value of American manufactures was more than half the value of European manufactures and twice that of Britain.

Onset and Causes of the Depression

The depression, which was signaled by a financial panic in 1893, has been blamed on the deflation dating back to the Civil War, the gold standard and monetary policy, underconsumption (the economy was producing goods and services at a higher rate than society was consuming, and the resulting inventory accumulation led firms to reduce employment and cut back production), a general economic unsoundness (a reference less to tangible economic difficulties and more to a feeling that the economy was not running properly), and government extravagance.

Economic indicators signaling an 1893 business recession in the United States were largely obscured. The economy had improved during the previous year. Business failures had declined, and the average liabilities of failed firms had fallen by 40 percent. The country’s position in international commerce was improved. During the late nineteenth century, the United States had a negative net balance of payments. Passenger and cargo fares paid to foreign ships that carried most American overseas commerce, insurance charges, tourists’ expenditures abroad, and returns to foreign investors ordinarily more than offset the effect of a positive merchandise balance. In 1892, however, improved agricultural exports had reduced the previous year’s net negative balance from $89 million to $20 million. Moreover, output of non-agricultural consumer goods had risen by more than 5 percent, and business firms were believed to have an ample backlog of unfilled orders as 1893 opened. The number of checks cleared between banks in the nation at large and outside New York, factory employment, wholesale prices, and railroad freight ton mileage advanced through the early months of the new year.

Yet several monthly series of indicators showed that business was falling off. Building construction had peaked in April 1892, later moving irregularly downward, probably in reaction to overbuilding. The decline continued until the turn of the century, when construction volume finally turned up again. Weakness in building was transmitted to the rest of the economy, dampening general activity through restricted investment opportunities and curtailed demand for construction materials. Meanwhile, a similar uneven downward drift in business activity after spring 1892 was evident from a composite index of cotton takings (cotton turned into yarn, cloth, etc.) and raw silk consumption, rubber imports, tin and tin plate imports, pig iron manufactures, bituminous and anthracite coal production, crude oil output, railroad freight ton mileage, and foreign trade volume. Pig iron production had crested in February, followed by stock prices and business incorporations six months later.

The economy exhibited other weaknesses as the March 1893 date for Grover Cleveland’s inauguration to the presidency drew near. One of the most serious was in agriculture. Storm, drought, and overproduction during the preceding half-dozen years had reversed the remarkable agricultural prosperity and expansion of the early 1880s in the wheat, corn, and cotton belts. Wheat prices tumbled twenty cents per bushel in 1892. Corn held steady, but at a low figure and on a fall of one-eighth in output. Twice as great a decline in production dealt a severe blow to the hopes of cotton growers: the season’s short crop canceled gains anticipated from a recovery of one cent in prices to 8.3 cents per pound, close to the average level of recent years. Midwestern and Southern farming regions seethed with discontent as growers watched staple prices fall by as much as two-thirds after 1870 and all farm prices by two-fifths; meanwhile, the general wholesale index fell by one-fourth. The situation was grave for many. Farmers’ terms of trade had worsened, and dollar debts willingly incurred in good times to permit agricultural expansion were becoming unbearable burdens. Debt payments and low prices restricted agrarian purchasing power and demand for goods and services. Significantly, both output and consumption of farm equipment began to fall as early as 1891, marking a decline in agricultural investment. Moreover, foreclosure of farm mortgages reduced the ability of mortgage companies, banks, and other lenders to convert their earning assets into cash because the willingness of investors to buy mortgage paper was reduced by the declining expectation that they would yield a positive return.

Slowing investment in railroads was an additional deflationary influence. Railroad expansion had long been a potent engine of economic growth, ranging from 15 to 20 percent of total national investment in the 1870s and 1880s. Construction was a rough index of railroad investment. The amount of new track laid yearly peaked at 12,984 miles in 1887, after which it fell off steeply. Capital outlays rose through 1891 to provide needed additions to plant and equipment, but the rate of growth could not be sustained. Unsatisfactory earnings and a low return for investors indicated the system was overbuilt and overcapitalized, and reports of mismanagement were common. In 1892, only 44 percent of rail shares outstanding returned dividends, although twice that proportion of bonds paid interest. In the meantime, the completion of trunk lines dried up local capital sources. Political antagonism toward railroads, spurred by the roads’ immense size and power and by real and imagined discrimination against small shippers, made the industry less attractive to investors. Declining growth reduced investment opportunity even as rail securities became less appealing. Capital outlays fell in 1892 despite easy credit during much of the year. The markets for ancillary industries, like iron and steel, felt the impact of falling railroad investment as well; at times in the 1880s rails had accounted for 90 percent of the country’s rolled steel output. In an industry whose expansion had long played a vital role in creating new markets for suppliers, lagging capital expenditures loomed large in the onset of depression.

European Influences

European depression was a further source of weakness as 1893 began. Recession struck France in 1889, and business slackened in Germany and England the following year. Contemporaries dated the English downturn from a financial panic in November. Monetary stringency was a basic cause of economic hard times. Because specie — gold and silver — was regarded as the only real money, and paper money was available in multiples of the specie supply, when people viewed the future with doubt they stockpiled specie and rejected paper. The availability of specie was limited, so the longer hard times prevailed the more difficult it was for anyone to secure hard money. In addition to monetary stringency, the collapse of extensive speculations in Australian, South African, and Argentine properties and a sharp break in securities prices marked the advent of severe contraction. The great banking house of Baring and Brothers, caught with excessive holdings of Argentine securities in a falling market, shocked the financial world by suspending business on November 20, 1890. Within a year of the crisis, commercial stagnation had settled over most of Europe. The contraction was severe and long-lived. In England many indices fell to 80 percent of capacity; wholesale prices overall declined nearly 6 percent in two years and had declined 15 percent by 1894. An index of the prices of principal industrial products declined by almost as much. In Germany, contraction lasted three times as long as the average for the period 1879-1902. Not until mid-1895 did Europe begin to revive. Full prosperity returned a year or more later.

Panic in the United Kingdom and falling trade in Europe brought serious repercussions in the United States. The immediate result was near panic in New York City, the nation’s financial center, as British investors sold their American stocks to obtain funds. Uneasiness spread through the country, fostered by falling stock prices, monetary stringency, and an increase in business failures. Liabilities of failed firms during the last quarter of 1890 were $90 million — twice those in the preceding quarter. Only the normal year’s end grain exports, destined largely for England, averted a gold outflow.

Circumstances moderated during the early months of 1891, although gold flowed to Europe, and business failures remained high. Credit eased, if slowly: in response to pleas for relief, the federal treasury began the premature redemption of government bonds to put additional money into circulation, and the end of the harvest trade reduced demand for credit. Commerce quickened in the spring. Perhaps anticipation of brisk trade during the harvest season stimulated the revival of investment and business; in any event, the harvest of 1891 buoyed the economy. A bumper American wheat crop coincided with poor yields in Europe, increasing exports and the inflow of specie: US exports in fiscal 1892 were $150 million greater than in the preceding year, a full 1 percent of gross national product. The improved market for American crops was primarily responsible for a brief cycle of prosperity in the United States that Europe did not share. Business thrived until signs of recession began to appear in late 1892 and early 1893.

The business revival of 1891-92 only delayed an inevitable reckoning. While domestic factors led in precipitating a major downturn in the United States, the European contraction operated as a powerful depressant. Commercial stagnation in Europe decisively affected the flow of foreign investment funds to the United States. Although foreign investment in this country and American investment abroad rose overall during the 1890s, changing business conditions forced American funds going abroad and foreign funds flowing into the United States to reverse as Americans sold off foreign holdings and foreigners sold off their holdings of American assets. Initially, contraction abroad forced European investors to sell substantial holdings of American securities, then the rate of new foreign investment fell off. The repatriation of American securities prompted gold exports, deflating the money stock and depressing prices. A reduced inflow of foreign capital slowed expansion and may have exacerbated the declining growth of the railroads; undoubtedly, it dampened aggregate demand.

As foreign investors sold their holdings of American stocks for hard money, specie left the United States. Funds secured through foreign investment in domestic enterprise were important in helping the country meet its usual balance of payments deficit. The reduced inflow of foreign funds during the 1890s was one of the factors that, together with a continued negative balance of payments, forced the United States to export gold almost continuously from 1892 to 1896. The impact of depression abroad on the flow of capital to this country can be inferred from the history of new capital issues in Britain, the source of perhaps 75 percent of overseas investment in the United States. British issues varied as shown in Table 3.

Table 3
British New Capital Issues, 1890-1898 (millions of pounds, sterling)

1890 142.6
1891 104.6
1892 81.1
1893 49.1
1894 91.8
1895 104.7
1896 152.8
1897 157.3
1898 150.2

Source: Hoffmann, p. 193

Simultaneously, the share of new British investment sent abroad fell from one-fourth in 1891 to one-fifth two years later. Over that same period, British net capital flows abroad declined by about 60 percent; not until 1896 and 1897 did they resume earlier levels.

Thus, the recession that began in 1893 had deep roots. The slowdown in railroad expansion, decline in building construction, and foreign depression had reduced investment opportunities, and, following the brief upturn effected by the bumper wheat crop of 1891, agricultural prices fell as did exports and commerce in general. By the end of 1893, 15,242 business failures, averaging $22,751 in liabilities, had been reported. Plagued by successive contractions of credit, many essentially sound firms that would have survived under ordinary circumstances failed. Liabilities totaled a staggering $357 million. This was the crisis of 1893.

Response to the Depression

The financial crises of 1893 accelerated the recession that was evident early in the year into a major contraction that spread throughout the economy. Investment, commerce, prices, employment, and wages remained depressed for several years. Changing circumstances and expectations, and a persistent federal deficit, subjected the treasury gold reserve to intense pressure and generated sharp counterflows of gold. The treasury was driven four times between 1894 and 1896 to resort to bond issues totaling $260 million to obtain specie to augment the reserve. Meanwhile, restricted investment, income, and profits spelled low consumption, widespread suffering, and occasionally explosive labor and political struggles. An extensive but incomplete revival occurred in 1895. The Democratic nomination of William Jennings Bryan for the presidency on a free silver platform the following year amid an upsurge of silverite support contributed to a second downturn peculiar to the United States. Europe, just beginning to emerge from depression, was unaffected. Only in mid-1897 did recovery begin in this country; full prosperity returned gradually over the ensuing year and more.

The economy that emerged from the depression differed profoundly from that of 1893. Consolidation and the influence of investment bankers were more advanced. The nation’s international trade position was more advantageous: huge merchandise exports assured a positive net balance of payments despite large tourist expenditures abroad, foreign investments in the United States, and a continued reliance on foreign shipping to carry most of America’s overseas commerce. Moreover, new industries were rapidly moving to ascendancy, and manufactures were coming to replace farm produce as the staple products and exports of the country. The era revealed the outlines of an emerging industrial-urban economic order that portended great changes for the United States.

Hard times intensified social sensitivity to a wide range of problems accompanying industrialization by making them more severe. Those whom depression struck hardest, as well as much of the general public and major Protestant churches, shored up their civic consciousness about currency and banking reform, regulation of business in the public interest, and labor relations. Although nineteenth-century liberalism and the tradition of administrative nihilism that it favored remained viable, public opinion began to swing slowly toward the governmental activism and interventionism associated with modern, industrial societies, erecting in the process the intellectual foundation for the reform impulse that was to be called Progressivism in twentieth-century America. Most important of all, these opposed tendencies in thought set the boundaries within which Americans for the next century debated the most vital questions of their shared experience. The depression served as a reminder of the perils of business slumps and of the call to place commonweal above avarice and principle above principal.

Government responses to depression during the 1890s exhibited elements of complexity, confusion, and contradiction. Yet they also showed a pattern that confirmed the transitional character of the era and clarified the role of the business crisis in the emergence of modern America. Hard times, intimately related to developments issuing in an industrial economy characterized by increasingly vast business units and concentrations of financial and productive power, were a major influence on society, thought, politics, and thus, unavoidably, government. Awareness of, and proposals of means for adapting to, deep-rooted changes attending industrialization, urbanization, and other dimensions of the current transformation of the United States long antedated the economic contraction of the nineties.

Selected Bibliography

*I would like to thank Douglas Steeples, retired dean of the College of Liberal Arts and professor of history, emeritus, Mercer University. Much of this article has been taken from Democracy in Desperation: The Depression of 1893 by Douglas Steeples and David O. Whitten, which was declared an Exceptional Academic Title by Choice. Democracy in Desperation includes the most recent and extensive bibliography for the depression of 1893.

Clanton, Gene. Populism: The Humane Preference in America, 1890-1900. Boston: Twayne, 1991.

Friedman, Milton, and Anna Jacobson Schwartz. A Monetary History of the United States, 1867-1960. Princeton: Princeton University Press, 1963.

Goodwyn, Lawrence. Democratic Promise: The Populist Movement in America. New York: Oxford University Press, 1976.

Grant, H. Roger. Self Help in the 1890s Depression. Ames: Iowa State University Press, 1983.

Higgs, Robert. The Transformation of the American Economy, 1865-1914. New York: Wiley, 1971.

Himmelberg, Robert F. The Rise of Big Business and the Beginnings of Antitrust and Railroad Regulation, 1870-1900. New York: Garland, 1994.

Hoffmann, Charles. The Depression of the Nineties: An Economic History. Westport, CT: Greenwood Publishing, 1970.

Jones, Stanley L. The Presidential Election of 1896. Madison: University of Wisconsin Press, 1964.

Kindleberger, Charles Poor. Manias, Panics, and Crashes: A History of Financial Crises. Revised Edition. New York: Basic Books, 1989.

Kolko, Gabriel. Railroads and Regulation, 1877-1916. Princeton: Princeton University Press, 1965.

Lamoreaux, Naomi R. The Great Merger Movement in American Business, 1895-1904. New York: Cambridge University Press, 1985.

Rees, Albert. Real Wages in Manufacturing, 1890-1914. Princeton, NJ: Princeton University Press, 1961.

Ritter, Gretchen. Goldbugs and Greenbacks: The Antimonopoly Tradition and the Politics of Finance in America. New York: Cambridge University Press, 1997.

Romer, Christina. “Spurious Volatility in Historical Unemployment Data.” Journal of Political Economy 94, no. 1 (1986): 1-37.

Schwantes, Carlos A. Coxey’s Army: An American Odyssey. Lincoln: University of Nebraska Press, 1985.

Steeples, Douglas, and David Whitten. Democracy in Desperation: The Depression of 1893. Westport, CT: Greenwood Press, 1998.

Timberlake, Richard. “Panic of 1893.” In Business Cycles and Depressions: An Encyclopedia, edited by David Glasner. New York: Garland, 1997.

White, Gerald Taylor. Years of Transition: The United States and the Problems of Recovery after 1893. University, AL: University of Alabama Press, 1982.

Citation: Whitten, David. “Depression of 1893”. EH.Net Encyclopedia, edited by Robert Whaples. August 14, 2001. URL http://eh.net/encyclopedia/the-depression-of-1893/

An Economic History of Denmark

Ingrid Henriksen, University of Copenhagen

Denmark is located in Northern Europe between the North Sea and the Baltic. Today Denmark consists of the Jutland Peninsula bordering Germany and the Danish Isles and covers 43,069 square kilometers (16,629 square miles). The present nation is the result of several cessions of territory throughout history. The last of the former Danish territories in southern Sweden were lost to Sweden in 1658, following one of the numerous wars between the two nations, which especially marred the sixteenth and seventeenth centuries. Following defeat in the Napoleonic Wars, Norway was separated from Denmark in 1814. After the last major war, the Second Schleswig War in 1864, Danish territory was further reduced by a third when Schleswig and Holstein were ceded to Germany. After a regional referendum in 1920 only North Schleswig returned to Denmark. Finally, Iceland withdrew from the union with Denmark in 1944. The following will deal with the geographical unit of today’s Denmark.

Prerequisites of Growth

Throughout history a number of advantageous factors have shaped the Danish economy. From this perspective it may not be surprising to find today’s Denmark among the richest societies in the world. According to the OECD, it ranked seventh in 2004, with income of $29,231 per capita (PPP). Although we can identify a number of turning points and breaks, for the time period over which we have quantitative evidence this long-run position has changed little. Thus Maddison (2001), in his estimate of GDP per capita around 1600, places Denmark as number six. One interpretation could be that favorable circumstances, rather than ingenious institutions or policies, have determined Danish economic development. Nevertheless, this article also deals with time periods in which the Danish economy was either diverging from or converging towards the leading economies.

Table 1:
Average Annual GDP Growth (at factor costs)
Total Per capita
1870-1880 1.9% 0.9%
1880-1890 2.5% 1.5%
1890-1900 2.9% 1.8%
1900-1913 3.2% 2.0%
1913-1929 3.0% 1.6%
1929-1938 2.2% 1.4%
1938-1950 2.4% 1.4%
1950-1960 3.4% 2.6%
1960-1973 4.6% 3.8%
1973-1982 1.5% 1.3%
1982-1993 1.6% 1.5%
1993-2004 2.2% 2.0%

Sources: Johansen (1985) and Statistics Denmark ‘Statistikbanken’ online.

Denmark’s geographical location in close proximity of the most dynamic nations of sixteenth-century Europe, the Netherlands and the United Kingdom, no doubt exerted a positive influence on the Danish economy and Danish institutions. The North German area influenced Denmark both through long-term economic links and through the Lutheran Protestant Reformation which the Danes embraced in 1536.

The Danish economy traditionally specialized in agriculture, like most other small and medium-sized European countries. It is, however, rather unusual to find a rich European country that retained such a strong agrarian bias into the late nineteenth and even the mid-twentieth century. Only in the late 1950s did the workforce of manufacturing industry overtake that of agriculture. Thus an economic history of Denmark must take its point of departure in agricultural development for quite a long stretch of time.

Looking at resource endowments, Denmark enjoyed a relatively high agricultural land-to-labor ratio compared to other European countries, with the exception of the UK. This was significant for several reasons, not least because it was accompanied by a comparatively wealthy peasantry.

Denmark had no mineral resources to speak of until the exploitation of oil and gas in the North Sea began in 1972 and 1984, respectively. From 1991 on Denmark has been a net exporter of energy although on a very modest scale compared to neighboring Norway and Britain. The small deposits are currently projected to be depleted by the end of the second decade of the twenty-first century.

Figure 1. Percent of GDP in selected sectors

Source: Johansen (1985) and Statistics Denmark ’Nationalregnskaber’

Good logistics can be regarded as a resource in pre-industrial economies. The Danish coastline of 7,314 km and the fact that no point is more than 50 km from the sea were advantages in an age in which transport by sea was more economical than land transport.

Decline and Transformation, 1500-1750

The year of the Lutheran Reformation (1536) conventionally marks the end of the Middle Ages in Danish historiography. Only around 1500 did population growth begin to pick up after the devastating effect of the Black Death. Growth thereafter was modest and at times probably stagnant with large fluctuations in mortality following major wars, particularly during the seventeenth century, and years of bad harvests. About 80-85 percent of the population lived from subsistence agriculture in small rural communities and this did not change. Exports are estimated to have been about 5 percent of GDP between 1550 and 1650. The main export products were oxen and grain. The period after 1650 was characterized by a long lasting slump with a marked decline in exports to the neighboring countries, the Netherlands in particular.

The institutional development after the Black Death showed a return to more archaic forms. Unlike in other parts of northwestern Europe, the peasantry on the Danish Isles afterwards became the victim of a process of re-feudalization during the last decades of the fifteenth century. A likely explanation is the low population density that encouraged large landowners to hold on to their labor by all means. Freehold tenure among peasants effectively disappeared during the seventeenth century. Institutions like bonded labor that forced peasants to stay on the estate where they were born, and labor services on the demesne as part of the land rent, bring to mind similar arrangements in Europe east of the Elbe River. One exception to the East European model was crucial, however. The demesne land, that is the land worked directly under the estate, never made up more than nine percent of total land by the mid eighteenth century. Although some estate owners saw an interest in encroaching on peasant land, the state protected the latter as production units and, more importantly, as a tax base. Bonded labor was codified in the all-encompassing Danish Law of Christian V in 1683. It was further intensified by being extended, though under another label, to all of Denmark during 1733-88, as a means for the state to tide the large landlords over an agrarian crisis. One explanation for the long life of such an authoritarian institution could be that the tenants were relatively well off, with 25-50 acres of land on average. Another reason could be that reality differed from the formal rigor of the institutions.

Following the Protestant Reformation in 1536, the Crown took over all church land, thereby making it the owner of 50 percent of all land. The costs of warfare during most of the sixteenth century could still be covered by the revenue of these substantial possessions. Around 1600 the income from taxation and customs, mostly the Sound Toll collected from ships that passed the narrow strait between Denmark and today’s Sweden, on the one hand, and Crown land revenues on the other were equally large. About 50 years later, after a major fiscal crisis had led to the sale of about half of all Crown lands, the revenue from royal demesnes had declined to about one-third of the total, and after 1660 the full transition from domain state to tax state was completed.

The bulk of the former Crown land had been sold to nobles and a few commoner owners of estates. Consequently, although the Danish constitution of 1665 was the most stringent version of absolutism found anywhere in Europe at the time, the Crown depended heavily on estate owners to perform a number of important local tasks. Thus, conscription of troops for warfare, collection of land taxes and maintenance of law and order enhanced the landlords’ power over their tenants.

Reform and International Market Integration, 1750-1870

The driving force of Danish economic growth, which took off during the late eighteenth century, was population growth at home and abroad, which triggered technological and institutional innovation. Whereas the Danish population during the previous hundred years grew by about 0.4 percent per annum, growth climbed to about 0.6 percent, accelerating after 1775 and especially from the second decade of the nineteenth century (Johansen 2002). Like elsewhere in Northern Europe, accelerating growth can be ascribed to a decline in mortality, mainly child mortality. Probably this development was initiated by fewer spells of epidemic diseases due to fewer wars and to greater inherited immunity against contagious diseases. Vaccination against smallpox and the formal education of midwives from the early nineteenth century might have played a role (Banggaard 2004). Land reforms that entailed some scattering of the farm population may also have had a positive influence. Prices rose from the late eighteenth century in response to the increase in populations in Northern Europe, but also following a number of international conflicts. This again caused a boom in Danish transit shipping and in grain exports.

Population growth rendered the old institutional set-up obsolete. Landlords no longer needed to bind labor to their estates, as a new class of landless laborers or cottagers with little land emerged. The work of these day-laborers was to replace the labor services of tenant farmers on the demesnes. The old system of labor services obviously presented an incentive problem, all the more so since it was often carried out by the live-in servants of the tenant farmers. Thus, the labor days on the demesnes represented a loss to both landlords and tenants (Henriksen 2003). Part of the land rent was originally paid in grain. Some of it had been converted to money, which meant that real rents declined during the inflation. The solution to these problems was massive land sales both from the remaining crown lands and from private landlords to their tenants. As a result two-thirds of all Danish farmers became owner-occupiers compared to only ten percent in the mid-eighteenth century. This development was halted during the next two and a half decades but resumed as the business cycle picked up during the 1840s and 1850s. It was to become of vital importance to the modernization of Danish agriculture towards the end of the nineteenth century that 75 percent of all agricultural land was farmed by owners of middle-sized farms of about 50 acres. Population growth may also have put pressure on common lands in the villages. At any rate, enclosure began in the 1760s, accelerated in the 1790s, supported by legislation, and was almost complete in the third decade of the nineteenth century.

The initiative for the sweeping land reforms from the 1780s is thought to have come from below – that is from the landlords and in some instances also from the peasantry. The absolute monarch and his counselors were, however, strongly supportive of these measures. The desire for peasant land as a tax base weighed heavily and the reforms were believed to enhance the efficiency of peasant farming. Besides, the central government was by now more powerful than in the preceding centuries and less dependent on landlords for local administrative tasks.

Production per capita rose modestly before the 1830s and more markedly thereafter, when a better allocation of labor and land followed the reforms and when some new crops like clover and potatoes were introduced on a larger scale. Most importantly, the Danes no longer lived at the margin of hunger. No longer do we find a correlation between demographic variables, deaths and births, and bad harvest years (Johansen 2002).

A liberalization of import tariffs in 1797 marked the end of a short spell of late mercantilism. Further liberalizations during the nineteenth and the beginning of the twentieth century established the Danish liberal tradition in international trade that was only to be broken by the protectionism of the 1930s.

Following the loss of the secured Norwegian market for grain in 1814, Danish exports began to target the British market. The great rush forward came as the British Corn Law was repealed in 1846. The export share of the production value in agriculture rose from roughly 10 to around 30 percent between 1800 and 1870.

In 1849 absolute monarchy was peacefully replaced by a free constitution. The long-term benefits of fundamental principles such as the inviolability of private property rights, the freedom of contracting and the freedom of association were probably essential to future growth though hard to quantify.

Modernization and Convergence, 1870-1914

During this period Danish economic growth outperformed that of most other European countries. A convergence in real wages towards the richest countries, Britain and the U.S., as shown by O’Rourke and Williamson (1999), can only in part be explained by open economy forces. Denmark became a net importer of foreign capital from the 1890s and foreign debt was well above 40 percent of GDP on the eve of WWI. Overseas emigration reduced the potential workforce, but as mortality declined population growth stayed around one percent per annum. The increase in foreign trade was substantial, as in many other economies during the heyday of the gold standard. Thus the export share of Danish agriculture surged to 60 percent.

The background for the latter development has featured prominently in many international comparative analyses. Part of the explanation for the success, as in other Protestant parts of Northern Europe, was a high rate of literacy that allowed a fast spread of new ideas and new technology.

The driving force of growth was that of a small open economy, which responded effectively to a change in international product prices, in this instance caused by the invasion of cheap grain into Western Europe from North America and Eastern Europe. Like Britain, the Netherlands and Belgium, Denmark did not impose a tariff on grain, in spite of the strong agrarian dominance in society and politics.

Proposals to impose tariffs on grain, and later on cattle and butter, were turned down by Danish farmers. The majority seems to have realized the advantages accruing from the free imports of cheap animal feed during the ongoing process of transition from vegetable to animal production, at a time when the prices of animal products did not decline as much as grain prices. The dominant middle-sized farm was inefficient for wheat but had its comparative advantage in intensive animal farming with the given technology. O’Rourke (1997) found that the grain invasion only lowered Danish rents by 4-5 percent, while real wages rose (as expected) by more than in any other agrarian economy and more than in industrialized Britain.

The move from grain exports to exports of animal products, mainly butter and bacon, was to a great extent facilitated by the spread of agricultural cooperatives. This organization allowed the middle-sized and small farms that dominated Danish agriculture to benefit from economies of scale in processing and marketing. The newly invented steam-driven continuous cream separator skimmed more cream from a kilo of milk than conventional methods and had the further advantage of allowing transported milk, brought together from a number of suppliers, to be skimmed. From the 1880s the majority of these creameries in Denmark were established as cooperatives, and about 20 years later, in 1903, the owners of 81 percent of all milk cows supplied their milk to a cooperative (Henriksen 1999). The Danish dairy industry captured over a third of the rapidly expanding British butter-import market, establishing a reputation for consistent quality that was reflected in high prices. Furthermore, the cooperatives played an active role in persuading the dairy farmers to expand production from summer to year-round dairying. The costs of intensive feeding during the wintertime were more than made up for by a winter price premium (Henriksen and O’Rourke 2005). Year-round dairying resulted in a higher rate of utilization of agrarian capital – that is, of farm animals and of the modern cooperative creameries. Not least did this intensive production mean a higher utilization of hitherto underemployed labor. From the late 1890s, in particular, labor productivity in agriculture rose at an unanticipated speed, on par with the productivity increase in the urban trades.

Industrialization in Denmark took its modest beginning in the 1870s with a temporary acceleration in the late 1890s. It may be a prime example of an industrialization process governed by domestic demand for industrial goods. Industry’s exports never exceeded 10 percent of value added before 1914, compared to agriculture’s export share of 60 percent. The export drive of agriculture towards the end of the nineteenth century was a major force in developing other sectors of the economy, not least transport, trade and finance.

Weathering War and Depression, 1914-1950

Denmark, as a neutral nation, escaped the devastating effects of World War I and was even allowed to carry on exports to both sides in the conflict. The ensuing trade surplus resulted in a trebling of the money supply. As the monetary authorities failed to contain the inflationary effects of this development, the value of the Danish currency slumped to about 60 percent of its pre-war value in 1920. The effects of monetary policy failure were aggravated by a decision to return to the gold standard at the 1913 level. When monetary policy was finally tightened in 1924, it resulted in fierce speculation on an appreciation of the Krone. During 1925-26 the currency returned quickly to its pre-war parity. As this was not counterbalanced by an equal decline in prices, the result was a sharp real appreciation and a subsequent deterioration in Denmark’s competitive position (Klovland 1997).
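In quantitative terms, a real appreciation means that the nominal exchange rate adjusted for relative price levels rises. The following minimal sketch (in Python, using made-up index values rather than Klovland’s actual data) illustrates how a nominal return to parity combined with a smaller fall in domestic than in foreign prices produces a sharp real appreciation.

```python
# Stylized illustration of a real appreciation (hypothetical numbers only).
# With the nominal rate e quoted as foreign currency per krone, the real
# exchange rate is q = e * P_domestic / P_foreign; a rise in q means Danish
# goods have become relatively more expensive abroad.
def real_rate(e, p_domestic, p_foreign):
    """Real exchange rate: nominal rate adjusted for relative price levels."""
    return e * p_domestic / p_foreign

# The krone first trades at 60% of its pre-war parity, then returns to parity,
# while domestic prices fall less than foreign prices over the same period.
q_before = real_rate(e=0.60, p_domestic=100, p_foreign=100)
q_after = real_rate(e=1.00, p_domestic=90, p_foreign=95)
print(f"real appreciation: {q_after / q_before - 1:.0%}")  # roughly +58% here
```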

Figure 2. Indices of the Krone Real Exchange Rate and Terms of Trade (1980=100; real rates based on the Wholesale Price Index)

Source: Abildgren (2005)

Note: Trade with Germany is included in the calculation of the real effective exchange rate for the whole period, including 1921-23.

When, in September 1931, Britain decided to leave the gold standard again, Denmark, together with Sweden and Norway, followed only a week later. This move was beneficial as the large real depreciation led to a long-lasting improvement in Denmark’s competitiveness in the 1930s. It was, no doubt, the single most important policy decision during the depression years. Keynesian demand management, even if it had been fully understood, was barred by a small public sector, only about 13 percent of GDP. As it was, fiscal orthodoxy ruled and policy was slightly procyclical as taxes were raised to cover the deficit created by crisis and unemployment (Topp 1995).

Structural development during the 1920s, surprisingly for a rich nation at this stage, was in favor of agriculture. The total labor force in Danish agriculture grew by 5 percent from 1920 to 1930. The number of employees in agriculture was stagnating, whereas the number of self-employed farmers rose markedly. The development in relative incomes cannot account for this trend, but part of the explanation must be found in a flawed Danish land policy, which actively supported a further parceling out of land into small holdings and restricted the consolidation into larger, more viable farms. It took until the early 1960s before this policy began to be unwound.

When the world depression hit Denmark with a minor time lag, agriculture still employed one-third of the total workforce while its contribution to total GDP was a bit less than one-fifth. Perhaps more importantly, agricultural goods still made up 80 percent of total exports.

Denmark’s terms of trade, as a consequence, declined by 24 percent from 1930 to 1932. In 1933 and 1934 bilateral trade agreements were forced upon Denmark by Britain and Germany. In 1932 Denmark had adopted exchange control, a harsh measure even for its time, to stem the net flow of foreign exchange out of the country. By rationing imports, exchange control also offered some protection to domestic industry. At the end of the decade manufacturing’s share of GDP had surpassed that of agriculture. In spite of the protectionist policy, unemployment soared to 13-15 percent of the workforce.
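The terms-of-trade decline cited above refers to the standard ratio of export prices to import prices. A minimal sketch of the calculation follows, with hypothetical index values chosen only to reproduce a decline of roughly 24 percent; it is not the underlying Danish price data.

```python
# Terms of trade = export price index / import price index (1930 = 100 here).
# The 1932 values are hypothetical, chosen to produce a ~24 percent decline.
def terms_of_trade(export_price_index, import_price_index):
    return 100 * export_price_index / import_price_index

tot_1930 = terms_of_trade(100, 100)  # base year: 100
tot_1932 = terms_of_trade(64, 84)    # export prices fell more than import prices
print(f"change 1930-32: {tot_1932 / tot_1930 - 1:.0%}")  # about -24%
```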

The policy mistakes during World War I and its immediate aftermath served as a lesson for policymakers during World War II. The German occupation force (April 9, 1940 until May 5, 1945) drew the funds for its sustenance and for exports to Germany on the Danish central bank whereby the money supply more than doubled. In response the Danish authorities in 1943 launched a policy of absorbing money through open market operations and, for the first time in history, through a surplus on the state budget.

Economic reconstruction after World War II was swift, as again Denmark had been spared the worst consequences of a major war. In 1946 GDP recovered its highest pre-war level. In spite of this, Denmark received relatively generous support through the Marshall Plan of 1948-52, when measured in dollars per capita.

From Riches to Crisis, 1950-1973: Liberalizations and International Integration Once Again

The growth performance during 1950-1957 was markedly lower than the Western European average. The main reason was the high share of agricultural goods in Danish exports, 63 percent in 1950. International trade in agricultural products to a large extent remained regulated. Large deteriorations in the terms of trade caused by the British devaluation of 1949, when Denmark followed suit, the outbreak of the Korean War in 1950, and the Suez crisis of 1956 made matters worse. The ensuing deficits on the balance of payments led the government to contractionary policy measures which restrained growth.

The liberalization of the flow of goods and capital in Western Europe within the framework of the OEEC (the Organization for European Economic Cooperation) during the 1950s probably dealt a blow to some of the Danish manufacturing firms, especially in the textile industry, that had been sheltered through exchange control and wartime. Nevertheless, the export share of industrial production doubled from 10 percent to 20 percent before 1957, at the same time as employment in industry surpassed agricultural employment.

On the question of European economic integration Denmark linked up with its largest trading partner, Britain. After the establishment of the European Common Market in 1958 and when the attempts to create a large European free trade area failed, Denmark entered the European Free Trade Association (EFTA) created under British leadership in 1960. When Britain was finally able to join the European Economic Community (EEC) in 1973, Denmark followed, after a referendum on the issue. Long before admission to the EEC, the advantages to Danish agriculture from the Common Agricultural Policy (CAP) had been emphasized. The higher prices within the EEC were capitalized into higher land prices at the same time that investments were increased based on the expected gains from membership. As a result the most indebted farmers who had borrowed at fixed interest rates were hit hard by two developments from the early 1980s. The EEC started to reduce the producers’ benefits of the CAP because of overproduction and, after 1982, the Danish economy adjusted to a lower level of inflation and, therefore, of nominal interest rates. According to Andersen (2001) Danish farmers were left with the highest interest burden of all European Union (EU) farmers in the 1990s.

Denmark’s relations with the EU, while enthusiastic at the beginning, have since been characterized by a certain amount of reserve. A national referendum in 1992 turned down the treaty on the European Union, the Maastricht Treaty. The Danes then opted out of four areas: common citizenship, a common currency, common foreign and defense policy, and a common policy on police and legal matters. Once more, in 2000, adoption of the common currency, the Euro, was turned down by the Danish electorate. In the debate leading up to the referendum the possible economic advantages of the Euro in the form of lower transaction costs were considered to be modest compared to the existing regime of fixed exchange rates vis-à-vis the Euro. All the major political parties, nevertheless, are pro-European, with only the extreme Right and the extreme Left being against. It seems that there is a discrepancy between the general public and the politicians on this particular issue.

As far as domestic economic policy is concerned, the heritage from the 1940s was a new commitment to high employment modified by a balance of payment constraint. The Danish policy differed from that of some other parts of Europe in that the remains of the planned economy from the war and reconstruction period in the form of rationing and price control were dismantled around 1950 and that no nationalizations took place.

Instead of direct regulation, economic policy relied on demand management with fiscal policy as its main instrument. Monetary policy remained a bone of contention between politicians and economists. Coordination of policies was the buzzword but within that framework monetary policy was allotted a passive role. The major political parties for a long time were wary of letting the market rate of interest clear the loan market. Instead, some quantitative measures were carried out with the purpose of dampening the demand for loans.

From Agricultural Society to Service Society: The Growth of the Welfare State

Structural problems in foreign trade extended into the high-growth period of 1958-73, as Danish agricultural exports were met with constraints both from the then EEC member countries and from most EFTA countries. During the same decade, the 1960s, as the importance of agriculture was declining, the share of employment in the public sector grew rapidly until 1983. Building and construction also took a growing share of the workforce until 1970. These developments left manufacturing industry in a secondary position. Consequently, as pointed out by Pedersen (1995), the sheltered sectors in the economy crowded out the sectors that were exposed to international competition, that is mostly industry and agriculture, by putting pressure on labor and other costs during the years of strong expansion.

Perhaps the most conspicuous feature of the Danish economy during the Golden Age was the steep increase in welfare-related costs from the mid 1960s and not least the corresponding increases in the number of public employees. Although the seeds of the modern Scandinavian welfare state were sown at a much earlier date, the 1960s was the time when public expenditure as a share of GDP exceeded that of most other countries.

As in other modern welfare states, important elements in the growth of the public sector during the 1960s were the expansion of public health care and education, both free for all citizens. The background for much of the increase in the number of public employees from the late 1960s was the rise in labor force participation by married women from the late 1960s until about 1990, itself at least partly a consequence of the growth of the public sector. In response, public day care facilities for young children and for old people were expanded. Whereas in 1965 7 percent of 0-6 year olds were in a day nursery or kindergarten, this share rose to 77 percent in 2000. This again spawned more employment opportunities for women in the public sector. Today the labor force participation of women, around 75 percent of 16-66 year olds, is among the highest in the world.

Originally social welfare programs targeted low income earners who were encouraged to take out insurance against sickness (1892), unemployment (1907) and disability (1922). The public subsidized these schemes and initiated a program for the poor among old people (1891). The high unemployment period in the 1930s inspired some temporary relief and some administrative reform, but little fundamental change.

Welfare policy in the first four decades following World War II is commonly believed to have been strongly influenced by the Social Democrat party, which held around 30 percent of the votes in general elections and was the party in power for long periods of time. One of the distinctive features of the Danish welfare state has been its focus on the needs of the individual person rather than on the family context. Another important characteristic is the universal nature of a number of benefits, starting with a basic old age pension for all in 1956. The compensation rates in a number of schemes are high in international comparison, particularly for low income earners. Public transfers gained a larger share in total public outlays both because standards were raised – that is, benefits became higher – and because the number of recipients increased dramatically following the high unemployment regime from the mid 1970s to the mid 1990s. To pay for the high transfers and the large public sector – around 30 percent of the work force – the tax load is also high in international perspective. The share of public sector and social expenditure has risen to above 50 percent of GDP, second only to the share in Sweden.

Figure 3. Unemployment, Denmark (percent of total labor force)

Source: Statistics Denmark ‘50 års-oversigten’ and ADAM’s databank

The Danish labor market model has recently attracted favorable international attention (OECD 2005). It has been declared successful in fighting unemployment – especially compared to the policies of countries like Germany and France. The so-called Flexicurity model rests on three pillars. The first is low employment protection, the second is relatively high compensation rates for the unemployed, and the third is the requirement for active participation by the unemployed. Low employment protection has a long tradition in Denmark and there is no change in this factor when comparing the twenty years of high unemployment – 8-12 percent of the labor force – from the mid 1970s to the mid 1990s, to the past ten years when unemployment has declined to a mere 4.5 percent in 2006. The rules governing compensation to the unemployed were tightened from 1994, limiting the number of years the unemployed could receive benefits from 7 to 4. Most noticeably, labor market policy in 1994 turned from ‘passive’ measures – besides unemployment benefits, an early retirement scheme and a temporary paid leave scheme – toward ‘active’ measures that were devoted to getting people back to work by providing training and jobs. It is commonly supposed that the strengthening of economic incentives helped to lower unemployment. However, as Andersen and Svarer (2006) point out, while unemployment has declined substantially, a large and growing share of Danes of employable age receives transfers other than unemployment benefit – that is, benefits related to sickness or social problems of various kinds, early retirement benefits, etc. This makes it hazardous to compare the Danish labor market model with that of many other countries.

Exchange Rates and Macroeconomic Policy

Denmark has traditionally adhered to a fixed exchange rate regime. The belief is that for a small and open economy, a floating exchange rate could lead to very volatile exchange rates which would harm foreign trade. After having abandoned the gold standard in 1931, the Danish currency (the Krone) was, for a while, pegged to the British pound, only to join the IMF system of fixed but adjustable exchange rates, the so-called Bretton Woods system, after World War II. The close link with the British economy still manifested itself when the Danish currency was devalued along with the pound in 1949 and, by about half as much, in 1967. The devaluation also reflected the fact that after 1960, Denmark’s international competitiveness had gradually been eroded by rising real wages, corresponding to a 30 percent real appreciation of the currency (Pedersen 1996).

When the Bretton Woods system broke down in the early 1970s, Denmark joined the European exchange rate cooperation, the “Snake” arrangement, set up in 1972, an arrangement that was to be continued in the form of the Exchange Rate Mechanism within the European Monetary System from 1979. The Deutschmark was effectively the nominal anchor in European currency cooperation until the launch of the Euro in 1999, a fact that put Danish competitiveness under severe pressure because of markedly higher inflation in Denmark compared to Germany. In the end the Danish government gave way before the pressure and undertook four discrete devaluations from 1979 to 1982. Since compensatory increases in wages were held back, the balance of trade improved perceptibly.

This improvement could, however, not make up for the soaring costs of old loans at a time when the international real rates of interest were high. The Danish devaluation strategy exacerbated this problem. The anticipation of further devaluations was mirrored in a steep increase in the long-term rate of interest. It peaked at 22 percent in nominal terms in 1982, with an interest spread to Germany of 10 percent. Combined with the effects of the second oil crisis on the Danish terms of trade, unemployment rose to 10 percent of the labor force. Given the relatively high compensation ratios for the unemployed, the public deficit increased rapidly and public debt grew to about 70 percent of GDP.

Figure 4. Current Account and Foreign Debt (Denmark)

Source: Statistics Denmark Statistical Yearbooks and ADAM’s Databank

In September 1982 the Social Democrat minority government resigned without a general election and was replaced by a Conservative-Liberal minority government. The new government launched a program to improve the competitiveness of the private sector and to rebalance public finances. An important element was a disinflationary economic policy based on fixed exchange rates, pegging the Krone to the participants of the EMS and, from 1999, to the Euro. Furthermore, automatic wage indexation, which had been in place with short interruptions since 1920 (operating with a short lag and high coverage), was abolished. Fiscal policy was tightened, thus bringing an end to the real increases in public expenditure that had lasted since the 1960s.

The stabilization policy was successful in bringing down inflation and long-term interest rates. Pedersen (1995) finds that this process was nevertheless slower than might have been expected: given Denmark’s earlier exchange rate policy, it took some time for the market to regard the commitment to fixed exchange rates as credible. Since the late 1990s, however, the interest spread to Germany/Euroland has been negligible.

The initial success of the stabilization policy brought a boom to the Danish economy that once again caused overheating, in the form of high wage increases (in 1987) and a deterioration of the current account. The response was a number of reforms in 1986-87 aimed at encouraging private savings, which had by then fallen to a historical low. Most notable was the reform that reduced the tax deductibility of interest on private debt. These measures resulted in a hard landing for the economy, brought on by the collapse of the housing market.

The period of low growth was further prolonged by the international recession of 1992. In 1993 yet another regime shift occurred in Danish economic policy. A new Social Democrat government decided to ‘kick start’ the economy by means of a moderate fiscal expansion, while in 1994, as we have seen, the same government tightened labor market policies substantially. Mainly as a consequence of these measures, the Danish economy entered a period of moderate growth from 1994, with unemployment steadily falling to the level of the 1970s. A feature that still puzzles Danish economists is that the decline in unemployment over these years has not yet resulted in any increase in wage inflation.

Denmark at the beginning of the twenty-first century in many ways fits the description of a successful small European economy in Mokyr (2006). Unlike most of the other small economies, however, Denmark has broad-based exports with no single “niche” in the world market. As in some other small European countries – Ireland, Finland and Sweden – the short-term economic fluctuations described above have not followed the European business cycle very closely over the past thirty years (Andersen 2001). Domestic demand and domestic economic policy have, after all, played a crucial role even in a very small and very open economy.

References

Abildgren, Kim. “Real Effective Exchange Rates and Purchasing-Power-parity Convergence: Empirical Evidence for Denmark, 1875-2002.” Scandinavian Economic History Review 53, no. 3 (2005): 58-70.

Andersen, Torben M. et al. The Danish Economy: An International Perspective. Copenhagen: DJØF Publishing, 2001.

Andersen, Torben M. and Michael Svarer. “Flexicurity: den danska arbetsmarknadsmodellen.” Ekonomisk debatt 34, no. 1 (2006): 17-29.

Banggaard, Grethe. Befolkningsfremmende foranstaltninger og faldende børnedødelighed. Danmark, ca. 1750-1850. Odense: Syddansk Universitetsforlag, 2004.

Hansen, Sv. Aage. Økonomisk vækst i Danmark: Volume I: 1720-1914 and Volume II: 1914-1983. København: Akademisk Forlag, 1984.

Henriksen, Ingrid. “Avoiding Lock-in: Cooperative Creameries in Denmark, 1882-1903.” European Review of Economic History 3, no. 1 (1999): 57-78.

Henriksen, Ingrid. “Freehold Tenure in Late Eighteenth-Century Denmark.” Advances in Agricultural Economic History 2 (2003): 21-40.

Henriksen, Ingrid and Kevin H. O’Rourke. “Incentives, Technology and the Shift to Year-round Dairying in Late Nineteenth-century Denmark.” Economic History Review 58, no. 3 (2005): 520-54.

Johansen, Hans Chr. Danish Population History, 1600-1939. Odense: University Press of Southern Denmark, 2002.

Johansen, Hans Chr. Dansk historisk statistik, 1814-1980. København: Gyldendal, 1985.

Klovland, Jan T. “Monetary Policy and Business Cycles in the Interwar Years: The Scandinavian Experience.” European Review of Economic History 2, no. 3 (1998): 309-44.

Maddison, Angus. The World Economy: A Millennial Perspective. Paris: OECD, 2001.

Mokyr, Joel. “Successful Small Open Economies and the Importance of Good Institutions.” In The Road to Prosperity. An Economic History of Finland, edited by Jari Ojala, Jari Eloranta and Jukka Jalava, 8-14. Helsinki: SKS, 2006.

Pedersen, Peder J. “Postwar Growth of the Danish Economy.” In Economic Growth in Europe since 1945, edited by Nicholas Crafts and Gianni Toniolo. Cambridge: Cambridge University Press, 1995.

OECD. Employment Outlook. Paris: OECD, 2005.

O’Rourke, Kevin H. “The European Grain Invasion, 1870-1913.” Journal of Economic History 57, no. 4 (1997): 775-99.

O’Rourke, Kevin H. and Jeffrey G. Williamson. Globalization and History: The Evolution of a Nineteenth-century Atlantic Economy. Cambridge, MA: MIT Press, 1999.

Topp, Niels-Henrik. “Influence of the Public Sector on Activity in Denmark, 1929-39.” Scandinavian Economic History Review 43, no. 3 (1995): 339-56.


Footnotes

1 Denmark also includes the Faeroe Islands, with home rule since 1948, and Greenland, with home rule since 1979, both in the North Atlantic. These territories are left out of this account.

Citation: Henriksen, Ingrid. “An Economic History of Denmark”. EH.Net Encyclopedia, edited by Robert Whaples. October 6, 2006. URL http://eh.net/encyclopedia/an-economic-history-of-denmark/

Fertility and Mortality in the United States

Michael Haines, Colgate University

Every modern, economically developed nation has experienced the demographic transition from high to low levels of fertility and mortality. America is no exception. In the early nineteenth century, the typical American woman had between seven and eight live births in her lifetime and people probably lived fewer than forty years on average. But America was also distinctive. First, its fertility transition began in the late eighteenth or early nineteenth century at the latest. Other Western nations began their sustained fertility declines in the late nineteenth or early twentieth century, with the exception of France, whose decline also began early. Second, the fertility rate in America commenced its sustained decline long before that of mortality. This contrasts with the more typical demographic transition in which mortality decline precedes or occurs simultaneously with fertility decline. American mortality did not experience a sustained and irreversible decline until about the 1870s. Third, both these processes were influenced by America’s very high level of net in-migration and also by the significant population redistribution to frontier areas and later to cities, towns, and suburbs.

One particular difficulty for American historical demography is lack of data. During the colonial period, there was neither a regular enumeration nor vital registration. Some scholars, however, have conducted family reconstitutions and other demographic reconstructions using genealogies, parish registers, biographical data, and other local records, so we do know something about vital rates and population characteristics. In 1790, of course, the federal government commenced the decennial U.S. census, which has been the principal source for the study of population growth, structure, and redistribution, as well as fertility prior to the twentieth century. But vital registration was left to state and local governments. Massachusetts was the first state to institute continuous recording of births, deaths, and marriages, beginning in 1842 (some individual cities had registered vital events earlier), but the entire nation was not covered until 1933.

For the colonial period, we know more about population size than other matters, since the British colonial authorities did conduct some enumerations. The population of the British mainland colonies increased from several hundred non-Amerindian individuals in the early seventeenth century to about 2.5 million (2 million whites and about half a million blacks) in 1780. Birthrates were high, ranging between over forty and over fifty live births per one thousand people per annum. The high fertility of American women attracted comment from late eighteenth-century observers, including Benjamin Franklin and Thomas Malthus. Mortality rates were probably moderate, with crude death rates ranging from about twenty per one thousand people per annum to over forty. We know a good deal about mortality rates in New England, somewhat less about the Middle Colonies, and least about the South. But apparently mortality was lower from Pennsylvania and New Jersey northward, and higher in the South. Life expectancy at birth ranged from the late twenties to almost forty.

Information on America’s demographic transition becomes more plentiful for the nineteenth and twentieth centuries. The accompanying table provides summary measures of fertility and mortality for the period 1800-2000. It includes, for fertility, the crude birthrate, the child-woman ratio (based solely on census data), and the total fertility rate; and, for mortality, life expectancy at birth and the infant mortality rate. The results are given for the white and black populations separately because of their very different social, economic, and demographic experiences.

Table 1 indicates the sustained decline in white birthrates from at least 1800 and of black fertility from at least 1850. Family sizes were large early in the nineteenth century, being approximately seven children per woman at the beginning of the century and between seven and eight for the largely rural slave population at mid-century. The table also reveals that mortality did not begin to decline until about the 1870s or so. Prior to that, death rates fluctuated, being affected by periodic epidemics and changes in the disease environment. There is some evidence of rising death rates during the 1830s and 1840s. The table also shows that American blacks had both higher fertility and higher mortality relative to the white population, although both groups experienced fertility and mortality transitions. For example, both participated in the rise in birthrates after World War II known as the baby boom, as well as the subsequent resumption of birthrate declines in the 1960s.

Conventional explanations for the fertility transition have involved the rising cost of children because of urbanization, the growth of incomes and nonagricultural employment, the increased value of education, rising female employment, child labor laws and compulsory education, and declining infant and child mortality. Changing attitudes toward large families and contraception, as well as better contraceptive techniques, have also been cited. Recent literature suggests that women were largely responsible for much of the birthrate decline in the nineteenth century — part of a movement for greater control over their lives. The structural explanations fit the American experience since the late nineteenth century, but they are less appropriate for the fertility decline in rural areas prior to about 1870. The increased scarcity and higher cost of good agricultural land has been proposed as a prime factor, although this is controversial. The standard explanations do not adequately explain the post-World War II baby boom and subsequent baby bust. More complex theories, including the interaction of the size of generations with their income prospects, preferences for children versus material goods, and expectations about family size, have been proposed.

The mortality decline since the late nineteenth century seems to have been the result particularly of improvements in public health and sanitation, especially better water supplies and sewage disposal. The improving diet, clothing, and shelter of the American population over the period since about 1870 also played a role. Specific medical interventions beyond more general environmental public health measures were not statistically important until well into the twentieth century. It is difficult to disentangle the separate effects of these factors. But it is clear that much of the decline was due to rapid reductions in specific infectious and parasitic diseases, including tuberculosis, pneumonia, bronchitis, and gastro-intestinal infections, as well as such well-known lethal diseases as cholera, smallpox, diphtheria, and typhoid fever. Nineteenth-century cities were especially unhealthy places, particularly the largest ones. This began to change by about the 1890s, when the largest cities instituted new public works sanitation projects (such as piped water, sewer systems, filtration and chlorination of water) and public health administration. They then experienced rapid improvements in death rates. As for the present, rural-urban mortality differentials have converged and largely disappeared. This, unfortunately, is not true of the differentials between whites and blacks.

Table 1
Fertility and Mortality in the United States, 1800-1999

Approx. Date | Birthrate (a) | Child-Woman Ratio (b) | Total Fertility Rate (c) | Life Expectancy (d) | Infant Mortality Rate (e)
(In each column the first figure refers to the white population and the second to the black population (f); “–” indicates no estimate available.)
1800 | 55.0 / – | 1342 / – | 7.04 / – | – / – | – / –
1810 | 54.3 / – | 1358 / – | 6.92 / – | – / – | – / –
1820 | 52.8 / – | 1295 / 1191 | 6.73 / – | – / – | – / –
1830 | 51.4 / – | 1145 / 1220 | 6.55 / – | – / – | – / –
1840 | 48.3 / – | 1085 / 1154 | 6.14 / – | – / – | – / –
1850 | 43.3 / 58.6 (g) | 892 / 1087 | 5.42 / 7.90 (g) | 39.5 / 23.0 | 216.8 / 340.0
1860 | 41.4 / 55.0 (h) | 905 / 1072 | 5.21 / 7.58 (h) | 43.6 / – | 181.3 / –
1870 | 38.3 / 55.4 (i) | 814 / 997 | 4.55 / 7.69 (i) | 45.2 / – | 175.5 / –
1880 | 35.2 / 51.9 (j) | 780 / 1090 | 4.24 / 7.26 (j) | 40.5 / – | 214.8 / –
1890 | 31.5 / 48.1 | 685 / 930 | 3.87 / 6.56 | 46.8 / – | 150.7 / –
1900 | 30.1 / 44.4 | 666 / 845 | 3.56 / 5.61 | 51.8 (k) / 41.8 (k) | 110.8 (k) / 170.3
1910 | 29.2 / 38.5 | 631 / 736 | 3.42 / 4.61 | 54.6 (l) / 46.8 (l) | 96.5 (l) / 142.6
1920 | 26.9 / 35.0 | 604 / 608 | 3.17 / 3.64 | 57.4 / 47.0 | 82.1 / 131.7
1930 | 20.6 / 27.5 | 506 / 554 | 2.45 / 2.98 | 60.9 / 48.5 | 60.1 / 99.9
1940 | 18.6 / 26.7 | 419 / 513 | 2.22 / 2.87 | 64.9 / 53.9 | 43.2 / 73.8
1950 | 23.0 / 33.3 | 580 / 663 | 2.98 / 3.93 | 69.0 / 60.7 | 26.8 / 44.5
1960 | 22.7 / 32.1 | 717 / 895 | 3.53 / 4.52 | 70.7 / 63.9 | 22.9 / 43.2
1970 | 17.4 / 25.1 | 507 / 689 | 2.39 / 3.07 | 71.6 / 64.1 | 17.8 / 30.9
1980 | 15.1 / 21.3 | 300 / 367 | 1.77 / 2.18 | 74.5 / 68.5 | 10.9 / 22.2
1990 | 15.8 / 22.4 | 298 / 359 | 2.00 / 2.48 | 76.1 / 69.1 | 7.6 / 18.0
2000 | 13.9 / 17.0 | 343 / 401 | 2.05 / 2.13 | 77.4 / 71.7 | 5.7 / 14.1

(a) Births per 1000 population per annum.
(b) Children aged 0-4 per 1000 women aged 20-44. Taken from U.S. Bureau of the Census (1975), Series 67-68, for 1800-1970. For the black population 1820-1840, W.S. Thompson and P.K. Whelpton, Population Trends in the United States (New York: McGraw-Hill, 1933), Table 74, adjusted upward 47% for relative under-enumeration of black children aged 0-4 in the censuses of 1820-1840.
(c) Total number of births per woman if she experienced the current period age-specific fertility rates throughout her life.
(d) Expectation of life at birth for both sexes combined.
(e) Infant deaths per 1000 live births per annum.
(f) Black and other population for birth rate (1920-1970), total fertility rate (1940-1990), life expectancy at birth (1950-1960), and infant mortality rate (1920-1970).
(g) Average for 1850-59.
(h) Average for 1860-69.
(i) Average for 1870-79.
(j) Average for 1880-84.
(k) Approximately 1895.
(l) Approximately 1904.

Sources: U.S. Bureau of the Census, Historical Statistics of the United States (Washington, DC: G.P.O, 1975). U.S. Bureau of the Census, Statistical Abstract of the United States, 1986 (Washington, DC: G.P.O, 1985). Statistical Abstract of the United States, 2001 (Washington, DC: G.P.O, 2001). National Center for Health Statistics, National Vital Statistics Reports, various issues. Census 2000 Summary File 1: National File (May, 2003). Ansley J. Coale and Melvin Zelnik, New Estimates of Fertility and Population in the United States (Princeton, NJ: Princeton University Press 1963). Ansley J. Coale and Norfleet W. Rives, “A Statistical Reconstruction of the Black Population of the United States, 1880-1970: Estimates of True Numbers by Age and Sex, Birth Rates, and Total Fertility,” Population Index 39, no. 1 (Jan., 1973): 3-36. Michael R. Haines, “Estimated Life Tables for the United States, 1850-1900,” Historical Methods, 31, no. 4 (Fall 1998): 149-169. Samuel H. Preston and Michael R. Haines, Fatal Years: Child Mortality in Late Nineteenth Century America (Princeton, NJ: Princeton University Press, 1991), Table 2.5. Richard H. Steckel, “A Dreadful Childhood: The Excess Mortality of American Slaves,” Social Science History (Winter 1986): 427-465.
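The rate definitions in notes (a) through (e) above can be made concrete with a short computational sketch. All of the input figures are hypothetical and chosen only to illustrate the arithmetic; they are not drawn from Table 1.

```python
# Illustrative calculation of the summary measures defined in notes (a)-(e).
# All input numbers are hypothetical.

population = 1_000_000          # total mid-year population
births = 30_000                 # live births during the year
infant_deaths = 900             # deaths under age one during the year
children_0_4 = 120_000          # children aged 0-4 at the census
women_20_44 = 180_000           # women aged 20-44 at the census

# (a) Crude birthrate: births per 1,000 population per annum
crude_birth_rate = 1000 * births / population             # 30.0

# (b) Child-woman ratio: children aged 0-4 per 1,000 women aged 20-44
child_woman_ratio = 1000 * children_0_4 / women_20_44     # ~666.7

# (e) Infant mortality rate: infant deaths per 1,000 live births per annum
infant_mortality_rate = 1000 * infant_deaths / births     # 30.0

# (c) Total fertility rate: births a woman would have if she experienced the
# current age-specific rates throughout her life. With rates grouped in
# five-year age bands, each band's rate counts for five years of exposure.
age_specific_rates = {  # births per woman per year, by age band (hypothetical)
    "15-19": 0.020, "20-24": 0.090, "25-29": 0.110,
    "30-34": 0.080, "35-39": 0.050, "40-44": 0.015,
}
total_fertility_rate = 5 * sum(age_specific_rates.values())  # ~1.83

# (d) Life expectancy at birth requires a full life table and is omitted here.

print(crude_birth_rate, child_woman_ratio, infant_mortality_rate,
      round(total_fertility_rate, 2))
```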

References

Haines, Michael R. and Richard H. Steckel (editors). A Population History of North America. New York: Cambridge University Press, 2001.

Klein, Herbert. A Population History of the United States. New York: Cambridge University Press, 2004.

Vinovskis, Maris (editor). Studies in American Historical Demography. New York: Academic Press, 1979.

Citation: Haines, Michael. “Fertility and Mortality in the United States”. EH.Net Encyclopedia, edited by Robert Whaples. March 19, 2008. URL http://eh.net/encyclopedia/fertility-and-mortality-in-the-united-states/

Deflation

Pierre L. Siklos, Wilfrid Laurier University

What is Deflation?

Deflation is a persistent fall in some generally followed aggregate indicator of price movements, such as the consumer price index or the GDP deflator. Generally, a one-time fall in the price level does not constitute a deflation. Instead, one has to see continuously falling prices for well over a year before concluding that the economy suffers from deflation. How long the fall has to continue before the public and policy makers conclude that the phenomenon is reflected in expectations of future price developments is open to question. For example, in Japan, which has the distinction of experiencing the longest post World War II period of deflation, it took several years for deflationary expectations to emerge.

Most observers tend to focus on changes in consumer or producer prices since, as far as monetary policy is concerned, central banks are responsible for ensuring some form of price stability (usually defined as inflation rates of +3% or less in much of the industrial world). However, sustained decreases in asset prices, such as for stock market shares or housing, can also pose serious economic problems since, other things equal, such outcomes imply lower wealth and, in turn, reduced consumption spending. While the connection between goods price and asset price inflation or deflation remains a contentious one in the economics profession, policy makers are undoubtedly worried about the existence of a link, as Alan Greenspan’s “irrational exuberance” remark of 1996 illustrates.

Historical and Contemporary Worries about Deflation

Until 2002, the prospect of deflation outside Japan seemed remote. Prior to that time, deflation had been a phenomenon primarily of the 1930s and inextricably linked with the Great Depression, especially in the United States. Most observers viewed Japan’s deflation as part of a general economic malaise stemming from a mix of bad policy choices, bad politics, and a banking industry insolvency problem that would simply not go away. However, by 2001, reports of falling US producer prices, a sluggish economy, and the spread of deflation beyond Japan to China, Taiwan, and Hong Kong, to name a few countries, eventually led policy makers at the US Federal Reserve Board to express publicly their determination to avoid deflation (see, e.g., IMF 2003; Borio and Filardo 2004). Governor Bernanke of the US Federal Reserve raised the issue of deflation in late 2002 when he argued that the US public ought not to be overly worried since the Fed was on top of the issue and, in any event, the US was not Japan. Nevertheless, he also stressed that “central banks must today try to avoid major changes in the inflation rate in either direction. In central bank speak, we now face ‘symmetric’ inflation risks.”1 The risks Governor Bernanke was referring to stem from the fact that, now that low inflation rates have been achieved, the public has to maintain the belief that central banks will neither allow inflation to creep up nor permit the onset of deflation. Even the IMF began to worry about the likelihood of deflation, as reflected in a major report, released in mid-2003, assessing the probability that deflation might become a global phenomenon. While the risk that deflation might catch on in the US was deemed fairly low, the threat of deflation in Germany, for example, was viewed as being much greater.

Deflation in the Great Depression Era

It is evident from the foregoing illustrations that deflation has again emerged as public policy enemy number one in some circles. Most observers need only think back to the global depression of the 1930s, when the combination of a massive fall in output and the price level devastated the U.S. economy. While the Great Depression was a global phenomenon, actual output losses varied considerably, from modest declines to the massive losses incurred by the U.S. economy. During the period 1928-1933, U.S. output fell by approximately 25%, as did prices. Other countries, such as Canada and Germany, also suffered large output losses. Canada experienced a fall in output of at least 25% over the same period, while prices in 1933 were only about 78% of prices in 1928. In the case of Germany, the deflation rate over the same 1928-1933 period was similar to that experienced in Canada, while output fell just over 20% in that time. No wonder analysts associate deflation with “ugly” economic consequences. Nevertheless, as we shall see, there exist varieties of deflationary experiences. In any event, it needs to be underlined that the Great Depression of the 1930s did not result in massive output losses everywhere in the world. Seminal analyses by Friedman and Schwartz (1982) and Meltzer (2003) concluded that the 1930s represented a deflationary episode driven by falling aggregate demand, compounded by poor policy choices by the leadership at the US Federal Reserve, which was wedded at the time to a faulty ideology (a version of the ‘real bills’ doctrine2). Indeed, the competence of the interwar Fed has been the subject of considerable ongoing debate over the decades. Disagreements over the role of credit in deflation and concerns about how to reinvigorate the economy were, of course, also expressed in public at the time. Strikingly, however, the relationship between deflation and central bank policy was often entirely missing from the discussion.

The Debt-Deflation Problem

The prevailing ideology treated the occasional deflation as a necessary spur to economic growth, a symptom of economic health rather than of economic malaise. However, there were notable exceptions to the chorus of views favorable to deflation. Irving Fisher developed what is now referred to as the “debt-deflation” hypothesis. Because debts are fixed in nominal terms, falling prices raise their real value: debtors must give up more goods, or more of their income, to service the same dollar obligations, and firms’ balance sheets deteriorate accordingly. This was particularly true of the plight faced by farmers in many countries, including the United States, during the 1920s, when falling agricultural prices combined with tight monetary policies sharply raised the cost of servicing existing debts. The same was true of the prices of raw materials. The table below illustrates the rather precipitous drop in the price level of some key commodity groups in a single year.

Table 1
Commodity Prices in the U.S., 1923-24

Commodity Group May 1924 May 1923
All commodities 147 156
Farm Products 136 139
Foods 137 144
Clothes and Clothing 187 201
Fuel and Lighting 177 190
Metals 134 152
Building Materials 180 202
Chemicals and drugs 127 134
House furnishings 173 187
Miscellaneous 112 125
Source: Federal Reserve Bulletin, July 1924, p. 532.
Note: Prices in 1913 are defined to equal 100.
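A back-of-the-envelope calculation based on the all-commodities index above shows why falling prices raise the real burden of debts fixed in dollar terms; the $1,000 loan in the example is hypothetical.

```latex
% The all-commodities index fell from 156 (May 1923) to 147 (May 1924),
% a decline of about 5.8 percent. The real value of a fixed nominal debt
% of \$1{,}000 therefore rose by the factor
\frac{1000/147}{1000/156} = \frac{156}{147} \approx 1.061
% i.e. a debtor had to give up roughly 6 percent more goods (or farm output)
% to service the same dollar obligation one year later.
```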

The Postponing Purchases Problem

In the debt-deflation view, then, deflation is a harbinger of financial crisis, with repercussions for the economy as a whole. Others, such as Keynes, also worried about the impact of deflation on aggregate demand, as individuals and firms postpone their purchases, especially of durable goods, in the hope of buying at lower future prices. Keynes actually advocated a policy not too dissimilar to what we would today call inflation targeting (e.g., see Burdekin and Siklos 2004, ch. 1). Unfortunately, the prevailing ideology held that deflation was a purgative of sorts, that is, the price to be paid for economic excesses during the boom years, and necessary to establish the conditions for economic recovery. Economic booms were believed to be associated with excessive inflation, which had to be rooted out of the system; prices that had risen too fast could only be cured by returning to lower levels.

Not All Deflations Are Bad

So, are all deflations bad? Not necessarily. The United Kingdom experienced several years of falling prices during the period 1870-1939; since the deflation was apparently largely anticipated (e.g., see Capie and Wood 2004), it did not produce adverse economic consequences. Moreover, an economy that experiences a surge of financial and technological innovations would effectively see rising aggregate supply which, with only modest growth in aggregate demand, would translate into lower prices over time. Indeed, estimates based on simple relationships suggest that the sometimes calamitous effects thought to be associated with deflation can largely be explained by the rather unique event of the Great Depression of the 1930s (Burdekin and Siklos 2004, ch. 1). A further difficulty, however, is that a deflation may at first appear to be supply driven until policy makers come to the realization that aggregate demand is the proximate cause. This seems to be the case in the modern-day episodes of deflation in Japan and China.

Differences between the 1930s and Today

What is different about the prospects of a deflation today? First, and perhaps most obviously, we know considerably more than in the 1930s about the transmission mechanism of monetary policy decisions. Second, the prevailing economic ideology favors flexible exchange rates. Almost all of the countries that suffered from the Great Depression adhered to some form of fixed exchange rates, usually under the aegis of the Gold Standard, and as a result the transmission of deflation from one country to another was much stronger than it would be under flexible exchange rates. Third, policy makers have many more instruments of policy today than seventy years ago. Not only can monetary policy be more effective when correctly applied, but fiscal policy exists on a scale that was not possible during the 1930s. Nevertheless, fiscal policy, if misused, as has apparently been the case in Japan, can actually add to the difficulty of extricating an economy from a deflationary slump. There are similar worries about the US case, as the anticipated surpluses have turned into large deficits for the foreseeable future. Likewise, the fiscal rules adopted by the European Union severely limit, some would say altogether eliminate, the scope for a stimulative fiscal policy. Fourth, policy-making institutions are both more open and more accountable than in past decades. Central banks are autonomous and accountable, and their efforts to make monetary policy more transparent to financial markets ought to reduce the likelihood of serious policy errors, since openness and transparency are considered powerful devices for enhancing credibility.

Parallels between the 1930s and Today

Nevertheless, in spite of the obvious differences between the situation today and the one faced seven decades ago, some parallels remain. For example, until 2000, many policy makers, including the central bankers at the Fed, felt that the technological developments of the 1990s might lead to economic growth almost without end and, in this “new” era, the prospect of a bad deflation seemed the furthest thing from their minds. Similarly, the Bank of Japan was long convinced that its deflation was of the good variety; it has taken its policy makers a decade to recognize the seriousness of the situation. In Japan, the debate over the menu of reforms and policies needed to extricate the economy from its deflationary trap continues unabated. Worse still, the recent Japanese experience raises the specter of Keynes’s famous liquidity trap (Krugman 1998), namely a state of affairs in which lower interest rates are unable to stimulate investment or economic activity more generally. Hence, deflation, combined with expectations of falling prices, conspires to make the so-called ‘zero lower bound’ for nominal interest rates an increasingly binding constraint (see below).

Two More Concerns: Labor Market and Credit Market Impacts of Deflation

There are at least two other reasons to worry about the onset of a deflation with devastating economic consequences. First, labor markets exhibit considerably less flexibility than several decades ago, so it is considerably more difficult for nominal wages to fall enough to match a drop in prices. If they do not, real wages actually rise in a deflation, producing even more slack in the labor market; the resulting increases in the unemployment rate further reduce aggregate demand, the exact opposite of what is needed. A second consideration is the limited ability of monetary policy to stimulate the economy when interest rates are close to zero. The so-called “zero lower bound” constraint for nominal interest rates means that if the rate of deflation rises, so do real interest rates, further depressing aggregate demand. Therefore, while history need not repeat itself, the mistakes of the past need to be kept firmly in mind.
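The mechanics of the zero lower bound can be summarized with the Fisher relation between nominal and real interest rates; the numerical values below are purely illustrative.

```latex
r \;\approx\; i - \pi^{e}
% r = real interest rate, i = nominal interest rate, \pi^{e} = expected inflation.
% With i stuck at its floor of 0 and expected deflation of 2 percent
% (\pi^{e} = -2\%), the real rate is r \approx 0 - (-2\%) = 2\%.
% If expected deflation deepens to 4 percent, r rises to about 4\%:
% policy tightens passively even though the nominal rate cannot be cut further.
```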

Frequency of Deflation in the Historical Record

As noted above, inflation has been an all too common occurrence since 1945. The table below shows that deflation has become a much less common feature of the macroeconomic landscape. One has to go back to the 1930s before encountering successive years of deflation.3 Indeed, for the countries listed below, the number of times prices fell year over year for two years or more is a relatively small number. Hence, deflation is a fairly unusual event.

Table 2: Episodes of Deflation from the mid-1800s to 1945

Country (year record begins) | Number of occurrences of deflation until 1945 | Years of persistent deflation / Crises
Austria (1915) | 1 |
Australia (1862) | 5 | BC: 1893; CC: 1933-33
Belgium (1851) | 9 | 1892-96; BC, CC: 1924-26; 1931-35, BC: 1931; BC, CC: 1934-35
Canada (1914) | 2 | CC: 1891, 1893, 1908, 1921, 1929-31; 1930-33
Denmark (1851) | 9 | 1882-86, BC: 1885, 1892-96, BC: 1907-08; 1921-32, BC, CC: 1921-22, 1931-32
Finland (1915) | 1 | BC: 1900, 1921; 1929-34, BC, CC: 1931-32
France (1851) | 4 | CC, BC: 1888-89, 1907, 1923, 1926; 1932-35, BC: 1930-32
Germany (1851) | 8 | 1892-96, BC, CC: 1893, 1901, 1907; 1930-33, BC, CC: 1931, 1934
Ireland (1923) | 2 | 1930-33
Italy (1862) | 6 | 1881-84, BC, CC: 1891, 1893-94, 1907-08, 1921; 1930-34, BC: 1930-31, 1934-35
Japan (1923) | 1 | CC, BC: 1900-01, 1907-08, 1921; 1925-31, BC, CC: 1931-32
Netherlands (1881) | 6 | 1893-96, BC, CC: 1897, 1921; 1930-32; CC, BC: 1935, 1939
Norway (1902) | 2 | BC, CC: 1891, 1921-23; 1926-33, BC, CC: 1931
New Zealand (1908) | 1 | BC: 1920, 1924-25; 1929-33, BC, CC: 1931
Spain (1915) | 2 |
Sweden (1851) | 9 | 1882-87; 1930-33
Switzerland (1891) | 4 | 1930-34
UK (1851) | 8 | 1884-87; 1926-33
US (1851) | 9 | 1875-79; 1930-33

Notes: Data are from chapter 1, Richard C.K. Burdekin and Pierre L. Siklos, editors, Deflation: Current and Historical Perspectives, New York: Cambridge University Press, 2004. The year in parentheses in the first column is the first year for which we have data. The second column gives the frequency of occurrences of deflation, defined as two or more consecutive years with falling prices. The last column provides some illustrations of especially persistent declines in the price level, defined in terms of consumer prices. Years with currency crises (CC) or banking crises (BC) are indicated where data are available. The dates are from Michael D. Bordo, Barry Eichengreen, Daniela Klingebiel, and Maria Soledad Martinez-Peria, “Financial Crises: Lessons from the Last 120 Years,” Economic Policy, April 2001.
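The episode definition used in the notes above (two or more consecutive years of falling prices) is mechanical enough to be applied in a few lines of code. The following sketch uses made-up index numbers, not the Burdekin and Siklos data.

```python
# Flag deflation episodes: runs of two or more consecutive years
# in which the price index falls year over year. Data are hypothetical.

prices = {1928: 100.0, 1929: 100.5, 1930: 97.0, 1931: 89.0,
          1932: 81.0, 1933: 79.0, 1934: 81.5}

# Years in which the index fell relative to the previous year
falling = [year for year in sorted(prices)[1:]
           if prices[year] < prices[year - 1]]

# Group consecutive falling years and keep runs of length two or more
episodes, current = [], []
for year in falling:
    if current and year == current[-1] + 1:
        current.append(year)
    else:
        if len(current) >= 2:
            episodes.append((current[0], current[-1]))
        current = [year]
if len(current) >= 2:
    episodes.append((current[0], current[-1]))

print(episodes)   # [(1930, 1933)] -> one persistent deflation episode
```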

Is There an Empirical Deflation-Recession Link?

If that is indeed the case, why has there been so much concern expressed over the possibility of renewed deflation? One reason is the mediocre economic performance that has been associated with Japan’s deflation. Furthermore, the foregoing table makes clear that in a number of countries the 1930s deflation was associated with the Great Depression. Indeed, as the table also indicates for countries where we have data, the Great Depression represented a combination of several crises, simultaneously financial and economic in nature. However, it is also clear that deflation need not always be associated with either a currency crisis or a banking crisis. Since the Great Depression was a singularly devastating event from an economic perspective, it is not entirely surprising that observers would associate deflation with depression.

But is this necessarily so? After all, the era roughly from 1870 to 1890 was also a period of deflation in several countries and, as the figure below suggests, in the United States and elsewhere, deflation was accompanied by strong economic growth. It is what some economists might refer to as a “good” deflation since it occurred at a time of tremendous technological improvements (in transportation and communications especially). That is not to say, even under such circumstances, that opposition from some quarters over the effects of such developments was unheard of. Indeed, the deflation prompted some, most famously William Jennings Bryan in the United States, to run for office believing that the Gold Standard’s proclivity to create deflation was akin to crucifying “mankind upon a cross of gold.” In contrast, the Great Depression would be characterized as a “bad” or even “ugly” deflation since it is associated with a great deal of slack in the economy.

Figure 1
Prices Changes versus the Output Gap, 1870s and 1930s

Notes: The top figure plots the rate of CPI inflation for the periods 1875-79 and 1929-33 for the United States. The bottom figure is an estimate of the output gap for the U.S., that is, the difference between actual and potential real GDP. A negative number signifies actual real GDP is higher than potential real GDP and vice-versa when the output gap is positive. See Burdekin and Siklos (2004) for the details. The vertical line captures the gap in the data, as observations for 1880-1929 are not plotted.

Conclusions

When policy makers today speak of the need to avoid deflation, their assessment is colored by the experience of the bad deflation of the 1930s and its international spread, as well as by the ongoing deflation in Japan. Hence, policy makers worry not only about deflation proper but also about its spread on a global scale.

If one lesson from history is that ideology can blind policy makers to necessary reforms, a second is that, once entrenched, expectations of deflation may be difficult to reverse. The occasional fall in aggregate prices is unlikely to significantly affect longer-term expectations of inflation, especially if the monetary authority is independent of political control and if the central bank is required to meet some kind of inflation objective. Indeed, many analysts have repeatedly suggested the need to introduce an inflation target for Japan. While the Japanese have responded by stating that inflation targeting alone is incapable of helping the economy escape from deflation, the Bank of Japan’s stubborn refusal to adopt such a strategy signals an unwillingness to commit to a different course for monetary policy. Hence, expectations are even less likely to be influenced by other policies ostensibly meant to reverse the course of Japanese prices. The Federal Reserve, of course, does not have a formal inflation target but has repeatedly stated that its policies are meant to control inflation within a 0-3% band. Whether formal and informal inflation targets represent substantially different monetary policy strategies continues to be debated, though the growing popularity of this type of strategy suggests that it greatly assists in anchoring expectations of inflation.

References

Borio, Claudio, and Andrew Filardo. “Back to the Future? Assessing the Deflation Record.” Bank for International Settlements, March 2004.

Burdekin, Richard C.K., and Pierre L. Siklos. “Fears of Deflation and Policy Responses Then and Now.” In Deflation: Current and Historical Perspectives, edited by Richard C.K. Burdekin and Pierre L. Siklos. New York: Cambridge University Press, 2004.

Capie, Forrest, and Geoffrey Wood. “Price Change, Financial Stability, and the British Economy, 1870-1939.” In Deflation: Current and Historical Perspectives, edited by Richard C.K. Burdekin and Pierre L. Siklos. New York: Cambridge University Press, 2004.

Friedman, Milton, and Anna J. Schwartz. Monetary Trends in the United States and the United Kingdom. Chicago: University of Chicago Press, 1982.

Humphrey, Thomas M. “The Real Bills Doctrine.” Federal Reserve Bank of Richmond Economic Review 68, no. 5 (1982).

International Monetary Fund. “Deflation: Determinants, Risks, and Policy Options. Findings of an Independent Task Force.” April 30, 2003.

Krugman, Paul. “It’s Baaack: Japan’s Slump and the Return of the Liquidity Trap.” Brookings Papers on Economic Activity 2 (1998): 137-205.

Meltzer, Allan H. A History of the Federal Reserve. Chicago: University of Chicago Press, 2003.

Citation: Siklos, Pierre. “Deflation”. EH.Net Encyclopedia, edited by Robert Whaples. May 11, 2004. URL http://eh.net/encyclopedia/deflation/

The United States Public Debt, 1861 to 1975

Franklin Noll, Ph.D.

Introduction

On January 1, 1790, the United States’ public debt stood at $52,788,722.03 (Bayley 31). It consisted of the debt of the Continental Congress and $191,608.81 borrowed by Secretary of the Treasury Alexander Hamilton in the spring of 1789 from New York banks to meet the new government’s first payroll (Bayley 108). Since then the public debt has passed a number of historical milestones: the assumption of Revolutionary War debt in August 1790, the redemption of the debt in 1835, the financing innovations arising from the Civil War in 1861, the introduction of war loan drives in 1917, the rise of deficit spending after 1932, the lasting expansion of the debt from World War II, and the passage of the Budget Control Act in 1975. (The late 1990s may mark another point of significance in the history of the public debt, but it is still too soon to tell.) This short study examines the public debt between the Civil War and the Budget Control Act, the period in which the foundations of our present public debt of over $7 trillion were laid. (See Figure 1.) We start our investigation by asking, “What exactly is the public debt?”

Source: Nominal figures from “Principal of the Public Debt, Fiscal Years 1790-1975,” Annual Report of the Secretary of the Treasury on the State of the Finances, Statistical Appendix. (Washington, DC: Government Printing Office, 1975), 62-63 and Robert Sahr, Oregon State University. URL: http://oregonstate.edu/Dept/pol_sci/fac/sahr/sahrhome.htm. Real figures adjust for inflation. These figures and conversion factors provided by Robert Sahr.

Definitions

Throughout its history, the Treasury has recognized various categories of government debt. The oldest category and the largest in size is the public debt. The public debt, simply put, is all debt for which the government of the United States is wholly liable. In turn, the general public is ultimately responsible for such debt through taxation. Some authors use the terms federal debt and national debt interchangeably with public debt. From the view of the United States Treasury, this is incorrect.

Federal debt, as defined by the Treasury, is the public debt plus debt issued by government-sponsored agencies for their own use. The term first appears in 1973 when it is officially defined as including “the obligations issued by Federal Government agencies which are part of the unified budget totals and in which there is an element of Federal ownership, along with the marketable and nonmarketable obligations of the Department of the Treasury” (Annual Report of the Secretary of the Treasury, 1973: 13). Put more succinctly, federal debt is made up of the public debt plus contingent debt. The government is partially or, more precisely, contingently liable for the debt of government-sponsored enterprises for which it has pledged its guarantee. On the contingency that a government-sponsored enterprise such as the Government National Mortgage Association ever defaults on its debt, the United States government becomes liable for the debt.

National debt, though a popular term and used by Alexander Hamilton, has never been technically defined by the Treasury. The term suggests that one is referring to all debt for which the government could be liable–wholly or in part. During the period 1861 to 1975, the debt for which the government could be partially or contingently liable has included that of government-sponsored enterprises, railroads, insular possessions (Puerto Rico and the Philippines), and the District of Columbia. Taken together, these categories of debt could be considered the true national debt which, to my knowledge, has never been calculated.

Structure

But it is the public debt–only that debt for which the government is wholly liable–which has been totaled and mathematically examined in a myriad of ways by scholars and pundits. Yet very few have broken down the public debt into its component parts of marketable and nonmarketable debt instruments: those securities, such as bills, bonds, and notes, that make up the basis of the debt. In simplified form, the structure of the public debt is as follows:

  • Interest-bearing debt
    • Marketable debt
      • Treasuries
    • Nonmarketable debt
      • Depositary Series
      • Foreign Government Series
      • Government Account Series
      • Investment Series
      • REA Series
      • SLG Series
      • US Savings Securities
  • Matured debt
  • Debt bearing no interest

Though the elements of the debt varied over time, this basic structure remained constant from 1861 to 1975 and into the present. As we investigate further the elements making up the structure of the public debt, we will focus on information from 1975, the last year of our study. By doing so, we can see the debt at its largest and most complex for the period 1861 to 1975 and in a structure most like that currently held by the public debt. It was also in 1975 that the Bureau of the Public Debt’s accounting and reporting of the public debt took on its present form.

Some Financial Terms

Bearer Security
A bearer security is one in which ownership is determined solely by possession or the bearer of the security.
Callable
The term callable refers to whether and under what conditions the government has the right to redeem a debt issue prior to its maturity date. The date at which a security can be called by the government for redemption is known as its call date.
Coupon
A coupon is a detachable part of a security that bears the interest payment date and the amount due. The bearer of the security detaches the appropriate coupon and presents it to the Treasury for payment. Coupon is synonymous with interest in financial parlance: the coupon rate refers to the interest rate.
Coupon Security
A coupon security is any security that has attached coupons, and usually refers to a bearer security.
Discount
The term discount refers to the sale of a debt instrument at a price below its face or par value. (A worked example follows this list of terms.)
Liquidity
A security is liquid if it can be easily bought and sold in the secondary market or easily converted to cash.
Maturity
The maturity of a security is the date at which it becomes payable in full.
Negotiable
A negotiable security is one that can be freely sold or transferred to another holder.
Par
Par is the nominal dollar amount assigned to a security by the government. It is the security’s face value.
Premium
The term premium refers to the sale of a debt instrument at a price above its face or par value.
Registered Security
A registered security is one in which the owner of the security is recorded by the Bureau of the Public Debt. Usually both the principal and interest are registered, making them non-negotiable or non-transferable.
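A hypothetical one-year security ties several of these terms together: a par value of $100, a 3 percent coupon, and a purchase at a discount for $98. The numbers are illustrative and do not describe any actual Treasury issue.

```latex
% Payoff at maturity: \$3 coupon + \$100 par = \$103, against a price of \$98.
\text{realized return} = \frac{103 - 98}{98} \approx 5.1\%
% The return exceeds the 3 percent coupon rate because the security was bought
% below par; bought at a premium (above par), the realized return would fall
% below the coupon rate.
```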

Interest-Bearing Debt, Matured Debt, and Debt Bearing No Interest

This major division in the structure of the public debt is fairly self-explanatory. Interest-bearing debt contains all securities that carry an obligation on the part of the government to pay interest to the security’s owner on a regular basis; these debt instruments have not reached maturity. Almost all of the public debt falls into the interest-bearing category. (See Figure 2.) Securities that are past maturity (and therefore no longer paying interest) but have not yet been redeemed by their holders fall within the category of matured debt. This is an extremely small part of the total public debt. In the category of debt bearing no interest are securities that are non-negotiable and non-interest-bearing, such as Special Notes of the United States issued to the International Monetary Fund. Securities in this category are often issued for one-time or extraordinary purposes. Also in this category are obsolete forms of currency such as fractional currency, legal tender notes, and silver certificates. In total, old currency made up only 0.114% of the public debt in 1975. The Federal Reserve Notes that have been issued since 1914, and that we deal with on a daily basis, are obligations of the Federal Reserve and thus not part of the public debt.

Source: “Principal of the Public Debt, Fiscal Years 1790-1975,” Annual Report of the Secretary of the Treasury on the State of the Finances, Statistical Appendix (Washington, DC: Government Printing Office, 1975), 62-63.

During the period under study, the value of outstanding matured debt generally grew with the overall size of the debt, except for a spike in the amount of unredeemed securities in the mid- and late 1950s. (See Figure 3.) This spike was caused by the maturation of United States Savings Bonds bought during World War II; many of these war bonds lay forgotten in people’s safe-deposit boxes for years. Wartime purchases of Defense Savings Stamps and War Savings Stamps account for much of the sudden increase in debt bearing no interest from 1943 to 1947. (See Figure 4.) The year 1947 saw the United States issue non-interest-paying notes to fund the establishment of the International Monetary Fund and the International Bank for Reconstruction and Development (part of the World Bank). As interest-bearing debt makes up over 99% of the public debt, it is basically equivalent to it. (See Figure 5.) The history of the overall public debt is examined later.

Source: “Principal of the Public Debt, Fiscal Years 1790-1975,” Annual Report of the Secretary of the Treasury on the State of the Finances, Statistical Appendix (Washington, DC: Government Printing Office, 1975), 62-63.

Marketable Debt and Nonmarketable Debt

Interest-bearing debt is divided between marketable debt and nonmarketable debt. Marketable debt consists of securities that can be easily bought and sold in the secondary market; the Treasury has used the term since World War II to describe issues that are available to the general public in registered or bearer form without any condition of sale. Nonmarketable debt refers to securities that, with rare exceptions, cannot be bought and sold in the secondary market. Generally, nonmarketable government securities may only be bought from or sold to the Treasury. They are issued in registered form only and/or can be bought only by government agencies, specific business enterprises, or individuals under strict conditions.

The growth of the marketable debt largely mirrors that of total interest-bearing debt, and until 1918 there was no such thing as nonmarketable debt. (See Figure 6.) Nonmarketable debt arose in fiscal year 1918, when securities were sold to the Federal Reserve in an emergency move to raise money as the United States entered World War I. This was the first sale of “special issue” securities, as nonmarketable debt securities were classified prior to World War II. Special or nonmarketable issues continued through the interwar period and grew with the establishment of government programs. Such securities were sometimes issued by the Treasury in the name of a government fund or program and were then bought by the Treasury; in effect, the Treasury extended a loan to the government entity. More often the Treasury would sell a special security to the government fund or program for cash, creating a loan to the Treasury and an investment vehicle for the government entity. As the number of government programs grew and the size of government funds (like those associated with Social Security) expanded, so did the number and value of nonmarketable securities–greatly contributing to the rapid growth of nonmarketable debt. By 1975, these intragovernment securities, combined with United States Savings Bonds, helped make nonmarketable debt 40% of the total public debt. (See Figure 7.)

Source: The following were used to calculate outstanding marketable debt: Data for 1861 to 1880 derived from Rafael A. Bayley, The National Loans of the United States from July 4, 1776, to June 30, 1880, second edition, facs rpt (New York: Burt Franklin, 1970 [1881]), 180-84 and Annual Report of the Secretary of the Treasury on the State of the Finances, (Washington, DC: Government Printing Office, 1861), 44. Post-1880 numbers derived from “Analysis of the Principal of the Interest-Bearing Public Debt of the United States from July 1, 1856 to July 1, 1912,” idem (1912), 102-03; “Comparative Statement of the Public Debt Outstanding June 30, 1933 to 1939,” idem (1939), 452-53; “Composition of the Public Debt at the End of the Fiscal Years 1916 to 1938,” idem, 454-55; “Public Debt by Security Classes, June 30, 1939-49,” idem (1949), 400-01; “Public Debt Outstanding by Security Classes, June 30, 1945-55,” idem (1955); “Public Debt Outstanding by Classification,” Annual Report of the Secretary of the Treasury on the State of the Finances, Statistical Appendix (Washington, DC: Government Printing Office, 1975), 67-71. The marketable debt figures were then subtracted from total outstanding interest bearing debt to obtain nonmarketable figures.

Source: “Public Debt Outstanding by Classification,” Annual Report of the Secretary of the Treasury on the State of the Finances, Statistical Appendix (Washington, DC: Government Printing Office, 1975), 67-71.

Marketable Debt Securities: Treasuries

The general public is most familiar with those marketable debt instruments falling within the category of Treasury securities, more popularly known as simply Treasuries. These securities can be bought by anyone and have active secondary markets. The most commonly issued Treasuries between 1861 and 1975 are the following, listed in order of length of time to maturity, shortest to longest:

Treasury certificate of indebtedness
A couponed, short-term, interest-bearing security. It can have a maturity of as little as one day or as long as five years. Maturity is usually between 3 and 12 months. These securities were largely replaced by Treasury bills.
Treasury bill
A short-term security issued on a discount basis rather than at par; the price is determined by competitive bidding at auction. Bills have a maturity of a year or less and are usually sold on a weekly basis with maturities of 13 weeks and 26 weeks. They were first issued in December 1929. (A worked pricing example follows these definitions.)
Treasury note
A couponed, interest-bearing security that generally matures in 2 to 5 years. In 1968, the Treasury began to issue 7-year notes, and in 1976, the maximum maturity of Treasury notes was raised to 10 years.
Treasury bond
A couponed interest-bearing security that normally matures after 10 or more years.
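Because a Treasury bill carries no coupon, its return comes entirely from the gap between the discounted purchase price and the face value repaid at maturity. The sketch below works through the arithmetic for a hypothetical 26-week bill, using the standard modern day-count conventions (360 days for the quoted discount rate, 365 for the investment yield); both the figures and the assumption that these conventions apply to any particular historical issue are illustrative only.

```python
# Hypothetical 26-week (182-day) Treasury bill bought at auction.
face_value = 10_000.0      # par, repaid at maturity
price = 9_750.0            # discounted auction price
days = 182

discount = face_value - price                             # $250 earned over 182 days

# Bank-discount rate: quoted against face value on a 360-day year
discount_rate = (discount / face_value) * (360 / days)    # ~4.95%

# Investment (coupon-equivalent) yield: against the price paid, 365-day year
investment_yield = (discount / price) * (365 / days)      # ~5.14%

print(round(discount_rate * 100, 2), round(investment_yield * 100, 2))
```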

The story of these securities between 1861 and 1975 is one of a general movement by the Treasury to issue ever more securities in the shorter maturities–certificates of indebtedness, bills, and notes. Until World War I, the security of preference was the bond with a call date before maturity. (See Figure 8.) Such an instrument provided the minimum attainable interest rate for the Treasury and was in demand as a long-term investment vehicle by investors. The pre-maturity call date allowed the Treasury the flexibility to redeem the bonds during a period of surplus revenue. Between 1861 and 1917, certificates of indebtedness were issued on occasion to manage cash flow through the Treasury, and notes were issued only during the financial crisis years of the Civil War.

Source: Franklin Noll, A Guide to Government Obligations, 1861-1976, unpublished ms., 2004.

In terms of both numbers and values, the change to shorter-maturity Treasury securities began with World War I. Unprepared for the financial demands of World War I, the Treasury was perennially short of cash and issued a great number of certificates of indebtedness and short-term notes. A market developed for these securities, and they were issued throughout the interwar period to meet cash demands and refund the remaining World War I debt. While the number of bonds issued rose in the World War I and World War II years, by the late 1960s the value of bonds issued was in steep decline, and by 1975 bond issues had become rare. (See Figure 9.) In part, this was the effect of interest rates moving beyond statutory limits set on the interest rate the Treasury could pay on long-term securities. The primary reason for the decline of the bond, however, was post-World War II economic growth and inflation that drove up interest rates and established expectations of rising inflation. In such conditions, shorter-term securities were more in favor with investors, who sought to ride the rising tide of interest rates and keep their financial assets as liquid as possible. Correspondingly, the number and value of notes and bills rose throughout the postwar years. Certificates of indebtedness declined as they were replaced by bills. Treasury bills won out because they were easier and therefore less expensive for the Treasury to issue than certificates of indebtedness, requiring no predetermination of interest rates or servicing of coupon payments.

Source: Data for 1861 to 1880 derived from Rafael A. Bayley, The National Loans of the United States from July 4, 1776, to June 30, 1880, second edition, facs rpt (New York: Burt Franklin, 1970 [1881]), 180-84 and Annual Report of the Secretary of the Treasury on the State of the Finances, (Washington, DC: Government Printing Office, 1861), 44. Post-1880 numbers derived from “Analysis of the Principal of the Interest-Bearing Public Debt of the United States from July 1, 1856 to July 1, 1912,” idem (1912), 102-03; “Comparative Statement of the Public Debt Outstanding June 30, 1933 to 1939,” idem (1939), 452-53; “Composition of the Public Debt at the End of the Fiscal Years 1916 to 1938,” idem, 454-55; “Public Debt by Security Classes, June 30, 1939-49,” idem (1949), 400-01; “Public Debt Outstanding by Security Classes, June 30, 1945-55,” idem (1955); “Public Debt Outstanding by Classification,” Annual Report of the Secretary of the Treasury on the State of the Finances, Statistical Appendix (Washington, DC: Government Printing Office, 1975), 67-71.

Nonmarketable Debt Securities

Securities sold as nonmarketable debt come in the forms above–certificate of indebtedness, bill, note, and bond. Most, but not all, nonmarketable securities fall into these series or categories:

Depositary Series
Made up of depositary bonds held by depositary banks. These are banks that provide banking facilities for the Treasury. Depositary bonds act as collateral for the Treasury funds deposited at the bank. The interest on these collateral securities provides the banks with income for the services rendered.
Foreign Government Series
The group of Treasury securities sold to foreign governments or used in foreign exchange stabilization operations.
Government Account Series
Refers to all types of securities issued to or by government accounts and trust funds.
Investment Series
Contains Treasury Bond, Investment Series securities sold to institutional investors.
REA Series
Rural Electrification Administration Series securities are sold to recipients of Rural Electrification Administration loans who have unplanned excess loan money. Holding the excess funds in the form of bonds gives the borrower the capacity to cash in the bonds and retrieve the unused loan funds without the need to negotiate a new loan.
SLG Series
State and Local Government Series securities were first issued in 1972 to help state and municipal governments meet federal arbitrage restrictions.
US Savings Securities
United States Savings Securities refers to a group of securities consisting of savings stamps and bonds (most notably United States Savings Bonds) aimed at small, non-institutional investors.

A number of nonmarketable securities fall outside these series. The special issue securities sold to the Federal Reserve in 1917 (the first securities recognized as nonmarketable) and mentioned above do not fit into any of these categories, nor do securities providing tax advantages, like Mortgage Guaranty Insurance Company Tax and Loss Bonds or Special Notes of the United States issued on behalf of the International Monetary Fund. Treasury reports are, in fact, frustratingly full of anomalies and contradictions. One major anomaly is Postal Savings Bonds. First issued in 1911, Postal Savings Bonds were United States Savings Securities bought by depositors in the now defunct Postal Savings System. These bonds, unlike United States Savings Bonds, were fully marketable and could be bought and sold on the open market. As savings securities, they are nonetheless included in the nonmarketable United States Savings Security series. (It is to include these anomalous securities that the graphs below begin in 1910.)

The United States Savings Security Series and the Government Account Series were the most significant in the growth of the nonmarketable debt component of the public debt. (See Figure 10.) The real rise in savings securities began with the introduction of the nonmarketable United States Savings Bonds in 1935. The bond drives of World War II established these savings bonds in the American psyche and small investor portfolios. Securities issued for the benefit of government funds or programs began in 1925 and, as in the case of savings securities, really took off with the stimulus of World War II. The growth of government and government programs continued to stimulate the growth of the Government Account Series, making it the largest part of nonmarketable debt by 1975. (See Figure 13.)

Source: Various tables and exhibits, Annual Report of the Secretary of the Treasury on the State of the Finances, (Washington, DC: Government Printing Office, 1910-1932); “Comparative Statement of the Public Debt Outstanding June 30, 1933 to 1939,” idem (1939), 452-53; “Composition of the Public Debt at the End of the Fiscal Years 1916 to 1938,” idem, 454-55; “Public Debt by Security Classes, June 30, 1939-49,” idem (1949), 400-01; “Public Debt Outstanding by Security Classes, June 30, 1945-55,” idem (1955); “Public Debt Outstanding by Classification,” Annual Report of the Secretary of the Treasury on the State of the Finances, Statistical Appendix (Washington, DC: Government Printing Office, 1975), 67-71.

The Depositary, REA, and SLG series were of minor importance throughout the period, with depositary bonds declining because their fixed interest rate of 2% became increasingly uncompetitive as inflation rose. (See Figure 11.) As the Investment Series was tied to a single security, it declined with the gradual redemption of Treasury Bond, Investment Series securities. (See Figure 12.) The Foreign Government Series grew with escalating efforts to stabilize the value of the dollar in foreign exchange markets. (See Figure 12.)
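The arithmetic behind that uncompetitiveness is straightforward. As a rough sketch (the inflation figure here is hypothetical, chosen only for illustration), the real return on a fixed-coupon security is approximately the coupon rate minus the inflation rate:

$$r_{\text{real}} \approx r_{\text{coupon}} - \pi, \qquad \text{e.g. } 2\% - 6\% = -4\%,$$

so a depositary bank holding a 2% depositary bond through a year of 6% inflation would see the purchasing power of its collateral income shrink, while newly issued marketable securities paid rates that tracked the market.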

Source: “Description of Public Debt Issues Outstanding, June 30, 1975,” Annual Report of the Secretary of the Treasury on the State of the Finances, Statistical Appendix (Washington, DC: Government Printing Office, 1975), 88-112.

History of the Public Debt

While we have examined the development of the various components of the public debt, we have yet to consider the public debt as a whole. Quite a few writers in the recent past have commented on the ever-growing size of the public debt. Many were concerned that the public debt figures were becoming astronomical and that there was no end in sight to continued growth, as perennial budget deficits forced the government to keep borrowing money. Such fears are not entirely new to our country. In the Civil War, World War I, and World War II, people were astounded at the unprecedented heights reached by the public debt during wartime. What began to change during World War II (and perhaps a bit before) was the assumption that the public debt would be paid down once the present crisis was over. The pattern in America’s past was that after each war every effort would be made to pay off the accumulated debt as quickly as possible. Thus we find declines in the total public debt after the Civil War, World War I, and World War II. (See Figures 14 and 15.) Until the United States’ entry into World War I, the public debt never exceeded $3 billion (see Figure 14), and the debt would probably have returned to near this level after World War I had the Great Depression and World War II not intervened. Yet the last contraction of the public debt between 1861 and 1975 occurred in 1957. (See Figures 15 and 18.) Since then, the debt has grown at an ever-increasing rate. Why?

The period 1861 to 1975 roughly divides into two eras with two corresponding philosophies on the public debt. From 1861 to 1932, government officials basically followed traditional precepts of public debt management, pursuing balanced budgets and paying down any debt as quickly as possible (Withers, 35-42). We will label these officials traditionalists. To oversimplify, traditionalists held that the government should not meddle in the economy, as no good would come of it. The ups and downs of business cycles were natural phenomena that had to be endured and, when possible, provided for through the accumulation of budget surpluses. These views of national finance and the public debt held sway before the Great Depression and lingered on into the 1950s (Conklin, 234). But it was during the Great Depression and the first term of President Franklin Roosevelt that acceptance grew of what was then called the “new economics” and would later be called Keynesianism. Basically, “new” economists believed that the business cycle could be counteracted through government intervention in the economy (Withers, 32). During economic downturns, the government could dampen the down cycle by stimulating the economy through lower taxes, increased government spending, and an expanded money supply. As the economy recovered, these stimulants would be reversed to dampen the up cycle of the economy. These beliefs gained ever greater currency over time, and we will designate the period 1932 to 1975 the New Era.

The Traditional Era, 1861-1932

(This discussion focuses on Figures 14 and 16. Also see Figures 18, 19, and 20.) In 1861, the public debt stood at roughly $65 million. At the end of the Civil War the debt was some 42 times greater, at $2,756 million, and the country was off the gold standard. The Civil War was paid for by a new personal income tax, massive bond issues, and the printing of currency, popularly known as Greenbacks. Once the war was over, there was a drive to return to the status quo ante bellum: a return to the gold standard, the paying down of the public debt, and the retirement of Greenbacks. The period 1866 to 1893 saw 28 continuous years of budget surpluses, with revenues pouring in from tariffs and land sales in the West. During that time, successive Secretaries of the Treasury redeemed public debt securities to the greatest extent possible, often buying securities at a premium in the open market. The debt declined continuously until 1893, reaching a low of $961 million, with a brief exception in the late 1870s as the country dealt with the recessionary aftereffects of the Panic of 1873 and the controversy regarding resumption of the gold standard in 1879. The Panic of 1893 and a decline in tariff revenues brought a period of budget deficits and raised the public debt slightly from its 1893 low to a steady average of around $1,150 million in the years leading up to World War I. The first war drives occurred during World War I. With the aid of the recently established Federal Reserve, the Treasury held four Liberty Loan drives and one Victory Loan drive. The Treasury also introduced low-cost savings certificates and stamps to attract the smallest investor. For 25 cents, one could aid the war effort by buying a Thrift Stamp. As at the end of previous wars, once World War I ended there was a concerted drive to pay down the debt. By 1931, the debt had been reduced to $16,801 million from a wartime high of $25,485 million. The first budget deficit since the end of the war also appeared in 1931, marking the deepening of the Great Depression and a move away from the fiscal orthodoxy of the past.

Source: “Principal of the Public Debt, Fiscal Years 1790-1975,” Annual Report of the Secretary of the Treasury on the State of the Finances, Statistical Appendix (Washington, DC: Government Printing Office, 1975), 62-63.

Source: Robert Sahr, Oregon State University. URL: http://oregonstate.edu/Dept/pol_sci/fac/sahr/sahrhome.htm.

The New Era, 1932-1975

(This discussion focuses on Figures 15 and 17. Also see Figures 18, 19, and 20.) It was Roosevelt who first experimented with deficit spending to pull the economy out of depression and to stimulate jobs through the creation of public works programs and other elements of his New Deal. Though taxes were raised on the wealthy, the depressed state of the economy meant government revenues were far too low to finance the New Deal. As a result, Roosevelt in his first year created a budget deficit almost six times greater than that of Hoover’s last year in office. Between 1931 and 1941, the public debt tripled in size, standing at $48,961 million upon the United States’ entry into World War II. To help fund the debt and get hoarded money back into circulation, the Treasury introduced the United States Savings Bond. Nonmarketable, with a guaranteed redemption value at any point in the life of the security and a denomination as low as $25, the savings bond was aimed at small investors fearful of continued bank collapses. With the advent of war, these bonds became War Savings Bonds and were the focus of the eight war drives of World War II, which also featured Treasury bonds and certificates of indebtedness. Because of the war, the public debt reached a height of $269,422 million.

The experience of the New Deal, combined with the low unemployment and victory of wartime, seemed to confirm Keynesian theories and reduce the fear of budget deficits. In 1946, Congress passed the Employment Act, committing the government to the pursuit of low unemployment through government intervention in the economy, which could include deficit spending. Though Truman and Eisenhower promoted some government intervention in the economy, they were still economic traditionalists at heart and sought to pay down the public debt as much as possible. And despite massive foreign aid, a sharp recession in the late 1950s, and large-scale foreign military deployments, including the Korean War, these two presidents were able to present budget surpluses more than 50% of the time and limit the growth of the public debt to an average of $1,000 million per year. From 1960 to 1975, there would be only one year of budget surplus, and the public debt would grow at an average rate of $17,040 million per year. It was with the arrival of the Kennedy administration that the “new economics,” or Keynesianism, came into full flower within the government. In the 1960s and 1970s, tax cuts and increased domestic spending were pursued not only to improve society but also to move the economy toward full employment. However, these economic stimulants were applied not just on down cycles of the economy but also on up cycles, resulting in ever-growing deficits. Added to this domestic spending were the continued outlays on military deployments overseas, including Vietnam, and borrowings in foreign markets to prop up the value of the dollar. During boom years, government revenues did increase but never enough to outpace spending. The exception was 1969, when a high rate of inflation boosted nominal revenues, though the gain was offset by the increased nominal cost of servicing the debt. By 1975, the United States was suffering from the high inflation and high unemployment of stagflation, and the budgetary deficits seemed to take on a life of their own. Each downturn in the economy brought smaller revenues, aggravated by tax cuts, while spending soared because of increased welfare and unemployment benefits and other government spending aimed at spurring job creation. The net result was an ever-increasing charge on the public debt and the huge numbers that have concerned so many in the past (and present).
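The 1969 episode turns on the distinction between nominal and real (inflation-adjusted) magnitudes, the same adjustment behind the “real” debt figures plotted below. As a minimal sketch, using purely hypothetical numbers rather than actual Treasury or price-index data, a nominal figure is converted to base-year dollars by scaling it by the ratio of the base-year price level to the current price level:

$$\text{real value}_t = \text{nominal value}_t \times \frac{P_{\text{base}}}{P_t}, \qquad \text{e.g. } \$100\text{ million} \times \frac{100}{50} = \$200\text{ million},$$

so inflation can swell nominal revenues and nominal debt alike while leaving their real counterparts unchanged or even diminished.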

Source: Nominal figures from “Principal of the Public Debt, Fiscal Years 1790-1975,” Annual Report of the Secretary of the Treasury on the State of the Finances, Statistical Appendix (Washington, DC: Government Printing Office, 1975), 62-63; real figures adjust for inflation and are provided by Robert Sahr, Oregon State University. URL: http://oregonstate.edu/Dept/pol_sci/fac/sahr/sahrhome.htm.

Source: Derived from figures provided by Robert Sahr, Oregon State University. URL: http://oregonstate.edu/Dept/pol_sci/fac/sahr/sahrhome.htm.

Source: Robert Sahr, Oregon State University. URL: http://oregonstate.edu/Dept/pol_sci/fac/sahr/sahrhome.htm.

We end this study in 1975, with the passage of the Budget Control Act. Formally entitled the Congressional Budget and Impoundment Control Act of 1974, it was passed on July 12, 1974, shortly after the start of fiscal year 1975. Some of the most notable provisions of the act were the establishment of House and Senate Budget Committees, the creation of the Congressional Budget Office, and the removal of impoundment authority from the President. Impoundment was the President’s ability to refrain from spending funds authorized in the budget. For example, if a government program ended up not spending all the money allotted to it, the President (or more specifically the Treasury under the President’s authority) did not have to pay out the unneeded money. Or, if the President did not want to fund a project passed by Congress in the budget, he could in effect veto it by instructing the Treasury not to release the money. In sum, the Budget Control Act shifted the balance of budgetary power from the executive branch to the Congress. The effect was to weaken restraints on Congressional spending and to contribute to the increased deficits and sharp upward growth in the public debt over the next couple of decades. (See Figures 1, 19, and 20.)

But the Budget Control Act was a watershed for the public debt not only in its rate of growth but also in the way it was recorded and reported. The act changed the fiscal year (the twelve-month period used to determine income and expenses for accounting purposes) from the period running July 1 through June 30 to the period running October 1 through September 30. The Budget Control Act also initiated the reporting system currently used by the Bureau of the Public Debt to report on the public debt. Fiscal year 1975 saw the first publication of the Monthly Statement of the Public Debt of the United States. For the first time, it reported the public debt in the structure we examined above, a structure still used by the Treasury today.

Conclusion

The public debt from 1861 to 1975 was the product of many factors. First, it was the result of accountancy on the part of the United States Treasury: only certain obligations of the United States fall within the definition of the public debt. Second, the debt was the effect of Treasury debt management decisions as to which debt instruments or securities were to be used to finance the debt. Third, the public debt was fundamentally a product of budget deficits. Massive government spending in itself did not create deficits and add to the debt; it was only when revenues were not sufficient to offset the spending that deficits and government borrowing became necessary. At times, as during wartime or severe recessions, deficits were largely unavoidable. What changed between 1861 and 1975 was the attitude of the government and the public toward budget deficits. Until the Great Depression, deficits were seen as injurious to the public good, and the public debt was viewed with unease as something the country could really do without. After the Great Depression, deficits were still not welcomed but were now viewed as a necessary tool for aiding economic recovery and creating jobs. After World War II, rising expectations of continuous economic growth and high employment at home and the extension of United States power abroad spurred the use of deficit spending. And the belief among some influential Keynesians that more tinkering was all that was needed to fix a stagflating economy created an almost self-perpetuating growth of the public debt. In the end, the history of the public debt is not so much about accountancy or Treasury securities as about national ambitions, politics, and economic theories.

Annotated Bibliography

Though much has been written about the public debt, very little of it is of much use for economic analysis or for learning the history of the public debt. Most books deal with an ever-pending public debt crisis and give policy recommendations on how to solve the problem. There are, however, a few works worth recommending:

Annual Report of the Secretary of the Treasury on the State of the Finances. Washington, DC: Government Printing Office, -1980.

This is the basic source for all information on the public debt until 1980.

Bayley, Rafael A. The National Loans of the United States from July 4, 1776, to June 30, 1880. Second edition. Facsimile reprint. New York: Burt Franklin, 1970 [1881].

This is the standard work on early United States financing written by a Treasury bureaucrat.

Bureau of the Public Debt. “The Public Debt Online.” URL: http://www.publicdebt.treas.gov/opd/opd.htm.

Provides limited data on the public debt, but includes all past issues of the Monthly Statement of the Public Debt.

Conklin, George T., Jr. “Treasury Financial Policy from the Institutional Point of View.” Journal of Finance 8, no. 2 (May 1953): 226-34.

This is a contemporary’s disapproving view of the growing acceptance of the “new economics” that appeared in the 1930s.

Gordon, John Steele. Hamilton’s Blessing: the Extraordinary Life and Times of Our National Debt. New York: Penguin, 1998.

This is a very readable, brief overview of the history of the public debt.

Love, Robert A. Federal Financing: A Study of the Methods Employed by the Treasury in Its Borrowing Operations. Reprint of 1931 edition. New York: AMS Press, 1968.

This is the most complete and thorough account of the structure of the public debt. Unfortunately, it only goes up to 1925.

Noll, Franklin. A Guide to Government Obligations, 1861-1976. Unpublished ms. 2004.

This is a descriptive inventory and chronological listing of the roughly 12,000 securities issued by the Treasury between 1861 and 1976.

Office of Management and Budget. “Historical Tables.” Budget of the United States Government, Fiscal Year 2005. URL: http://www.whitehouse.gov/omb/budget/fy2005/pdf/hist.pdf.

Provides data on the public debt, budgets, and federal spending, but reports focus on the latter twentieth century.

Sahr, Robert. “National Government Budget.” URL: http://oregonstate.edu/Dept/pol_sci/fac/sahr/sahr.htm.

This is a valuable web site containing a useful collection of detailed graphs on government spending and the public debt.

Withers, William. The Public Debt. New York: John Day Company, 1945.

Like Conklin, this is a contemporary’s view of the change in perspectives on the public debt occurring in the 1930s. Withers tends to favor the “new economics.”

Citation: Noll, Franklin. “The United States Public Debt, 1861 to 1975.” EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/the-united-states-public-debt-1861-to-1975/