
An Economic History of Finland

Riitta Hjerppe, University of Helsinki

Finland in the early 2000s is a small industrialized country with a standard of living ranked among the top twenty in the world. At the beginning of the twentieth century it was a poor agrarian country with a gross domestic product per capita less than half of that of the United Kingdom and the United States, world leaders at the time in this respect. Finland was part of Sweden until 1809, and a Grand Duchy of Russia from 1809 to 1917, with relatively broad autonomy in its economic and many internal affairs. It became an independent republic in 1917. While not directly involved in the fighting in World War I, the country went through a civil war during the years of early independence in 1918, and fought against the Soviet Union during World War II. Participation in Western trade liberalization and bilateral trade with the Soviet Union required careful balancing of foreign policy, but also enhanced the welfare of the population. Finland has been a member of the European Union since 1995, and has belonged to the European Economic and Monetary Union since 1999, when it adopted the euro as its currency.

[Figure: Gross Domestic Product per capita in Finland and in the EU 15, 1860–2004, index 2004 = 100. Sources: Eurostat (2001–2005).]

Finland has large forest areas of coniferous trees, and forests have been and still are an important natural resource in its economic development. Other natural resources are scarce: there is no coal or oil, and relatively few minerals. Outokumpu, the biggest copper mine in Europe in its time, was depleted in the 1980s. Even water power is scarce, despite the large number of lakes, because the differences in elevation are small. The country is among the larger ones in Europe in area, but it is sparsely populated, with 44 people per square mile and 5.3 million people altogether. The population is very homogeneous: only about two percent is of foreign origin, and for historical reasons there are two official language groups, the Finnish-speaking majority and a Swedish-speaking minority. In recent years the population has grown at about 0.3 percent per year.

The Beginnings of Industrialization and Accelerating Growth

Finland was an agrarian country in the 1800s, despite poor climatic conditions for efficient grain growing. Seventy percent of the population was engaged in agriculture and forestry, and half of the value of production came from these primary industries in 1900. Slash-and-burn cultivation finally gave way to field cultivation during the nineteenth century, even in the eastern parts of the country.

Some iron works were founded in the southwestern part of the country as early as the seventeenth century in order to process Swedish iron ore. Significant tar burning, sawmilling and fur trading brought cash with which to buy a few imported items such as salt, and some luxuries – coffee, sugar, wines and fine cloths. The small towns in the coastal areas flourished through the shipping of these items, even if restrictive legislation in the eighteenth century required transport via Stockholm. The income from tar and timber shipping accumulated capital for the first industrial plants.

The nineteenth century saw the modest beginnings of industrialization, clearly later than in Western Europe. The first modern cotton factories started up in the 1830s and 1840s, as did the first machine shops. The first steam engines were introduced in the cotton factories in the 1840s, as was the first rag paper machine. The first steam sawmills were allowed to start only in 1860. The first railroad shortened the traveling time from the inland towns to the coast in 1862, and the first telegraphs came at around the same time. Some new inventions, such as electrical power and the telephone, came into use early in the 1880s, but generally the diffusion of new technology to everyday use took a long time.

The export of various industrial and artisan products to Russia from the 1840s on, as well as the opening up of British markets to Finnish sawmill products in the 1860s were important triggers of industrial development. From the 1870s on pulp and paper based on wood fiber became major export items to the Russian market, and before World War I one-third of the demand of the vast Russian empire was satisfied with Finnish paper. Finland became a very open economy after the 1860s and 1870s, with an export share equaling one-fifth of GDP and an import share of one-fourth. A happy coincidence was the considerable improvement in the terms of trade (export prices/import prices) from the late 1860s to 1900, when timber and other export prices improved in relation to the international prices of grain and industrial products.
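For readers unfamiliar with the measure, the net barter terms of trade can be written out explicitly (a standard textbook definition, not a formula taken from the sources cited here):

\[
\text{ToT} = \frac{P_x}{P_m},
\]

where \(P_x\) is an index of export prices and \(P_m\) an index of import prices. If, for example, timber export prices rose 20 percent while the prices of imported grain and industrial products were unchanged, the ratio would rise by 20 percent and a given volume of Finnish exports would pay for one-fifth more imports.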

[Figure: Openness of the economies (exports + imports of goods / GDP, percent) in Finland and the EU 15, 1960–2005. Sources: Heikkinen and van Zanden (2004); Hjerppe (1989).]

Finland participated fully in the global economy of the first gold-standard era, importing much of its grain tariff-free, along with many other foodstuffs. Half of the imports consisted of food, beverages and tobacco. Agriculture turned to dairy farming, as in Denmark, but with poorer results. The Finnish currency, the markka, introduced in 1865, was tied to gold in 1878, and the Finnish Senate borrowed money from Western banking houses in order to build railways and schools.

GDP grew at a slightly accelerating average rate of 2.6 percent per annum, and GDP per capita rose 1.5 percent per year on average between 1860 and 1913. The population was also growing rapidly, and from two million in the 1860s it reached three million on the eve of World War I. Only about ten percent of the population lived in towns. The investment rate was a little over 10 percent of GDP between the 1860s and 1913 and labor productivity was low compared to the leading nations. Accordingly, economic growth depended mostly on added labor inputs, as well as a growing cultivated area.
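As a back-of-the-envelope illustration (compounding the quoted average growth rates, not a calculation from the underlying national accounts), over the 53 years from 1860 to 1913

\[
1.026^{53} \approx 3.9 \qquad \text{and} \qquad 1.015^{53} \approx 2.2,
\]

so total GDP roughly quadrupled while GDP per capita somewhat more than doubled; the difference between the two factors reflects the rapid population growth described above.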

Catching up in the Interwar Years

The revolution of 1917 in Russia and Finland’s independence cut off Russian trade, which was devastating for Finland’s economy. The food situation was particularly difficult, as 60 percent of the grain required had been imported.

Postwar reconstruction in Europe and the consequent demand for timber soon put the economy on a swift growth path. The gap between the Finnish economy and the Western economies narrowed dramatically in the interwar period, although it remained unchanged relative to the Scandinavian countries, which also experienced fast growth: GDP grew by 4.7 percent per annum and GDP per capita by 3.8 percent in 1920–1938. The investment rate rose to new heights, which also improved labor productivity. The 1930s depression was milder than in many other European countries because of the continued demand for pulp and paper. In addition, Finnish industries went into depression at different times, which made the downturn milder than it would have been if all the industries had experienced their troughs simultaneously. The Depression, however, had serious and long-drawn-out consequences for poor people.

The land reform of 1918 secured land for tenant farmers and farm workers. A large number of new, small farms were established, which could only support families if they had extra income from forest work. The country remained largely agrarian: on the eve of World War II, almost half of the labor force and one-third of production were still in the primary industries. Small-scale agriculture used horses and horse-drawn machines, lumberjacks went into the forest with axes and saws, and logs were transported from the forest by horses or by floating. Tariff protection and other policy measures helped to raise domestic grain production to 80–90 percent of consumption by 1939.

Soon after the end of World War I, Finnish sawmill products, pulp and paper found old and new markets in the Western world. The structure of exports became more one-sided, however. Textiles and metal products found no markets in the West and had to compete hard with imports on the domestic market. More than four-fifths of exports were based on wood, and one-third of industrial production was in sawmilling, other wood products, pulp and paper. Other growing industries included mining, basic metal industries and machine production, but they operated on the domestic market, protected by the customs barriers that were typical of Europe at that time.

The Postwar Boom until the 1970s

Finland came out of World War II crippled by the loss of a full tenth of its territory, and with 400,000 evacuees from Karelia. Productive units were dilapidated and the raw-material situation was poor. The huge war reparations to the Soviet Union were the priority problem for decision makers. The favorable development of the domestic machinery and shipbuilding industries, based on domestic demand during the interwar period and on arms deliveries to the army during the war, made the war-reparations deliveries possible; they were paid on time and according to the agreements. At the same time, timber exports to the West started again. Gradually the productive capacity was modernized and the whole industry was reformed. Evacuees and soldiers were given land on which to settle, which contributed to a decrease in average farm size.

Finland became part of the Western European trade-liberalization movement by joining the World Bank, the International Monetary Fund (IMF) and the Bretton Woods agreement in 1948, becoming a member of the General Agreement on Tariffs and Trade (GATT) two years later, and joining Finnefta (an agreement between the European Free Trade Area (EFTA) and Finland) in 1961. The government chose not to receive Marshall Aid because of the world political situation. Bilateral trade agreements with the Soviet Union started in 1947 and continued until 1991. Tariffs were eased and imports from market economies liberalized from 1957. Exports and imports, which had stayed at internationally high levels during the interwar years, only slowly returned to their earlier relative levels.

The investment rate climbed to new levels soon after World War II under a government policy favoring investment, and it remained at this very high level until the end of the 1980s. Labor-force growth stopped in the early 1960s, and economic growth has since depended on increases in productivity rather than increased labor inputs. GDP growth was 4.9 percent and GDP per capita growth 4.3 percent in 1950–1973, matching the rapid pace of many other European countries.

Exports and, accordingly, the structure of the manufacturing industry were diversified by Soviet and, later, Western orders for machinery products, including paper machines, cranes, elevators, and special ships such as icebreakers. The vast Soviet Union provided good markets for clothing and footwear, while Finnish wool and cotton factories slowly disappeared because of competition from low-wage countries. The modern chemical industry started to develop in the early twentieth century, often led by foreign entrepreneurs, and the first small oil refinery was built by the government in the 1950s. The government became actively involved in industrial activities in the early twentieth century, with investments in mining, basic industries, energy production and transmission, and the construction of infrastructure, and this continued in the postwar period.

The new agricultural policy, the aim of which was to secure reasonable incomes and favorable loans for farmers and the availability of domestic agricultural products for the population, soon led to overproduction in several product groups, and eventually to government-subsidized dumping on international markets. The first limitations on agricultural production were introduced at the end of the 1960s.

The population reached four million in 1950, and the postwar baby boom put extra pressure on the educational system. The educational level of the Finnish population was low in Western European terms in the 1950s, even if everybody could read and write. The underdeveloped educational system was expanded and renewed as new universities and vocational schools were founded, and the number of years of basic, compulsory education increased. Education has been government-run since the 1960s and 1970s, and is free at all levels. Finland started to follow the so-called Nordic welfare model, and similar improvements in health and social care have been introduced, normally somewhat later than in the other Nordic countries. Public child-health centers, cash allowances for children, and maternity leave were established in the 1940s, and pension plans have covered the whole population since the 1950s. National unemployment programs had their beginnings in the 1930s and were gradually expanded. A public health-care system was introduced in 1970, and national health insurance also covers some of the cost of private health care. During the 1980s the income distribution became one of the most even in the world.

Slower Growth from the 1970s

The oil crises of the 1970s put the Finnish economy under pressure. Although the oil reserves of the main supplier, the Soviet Union, showed no signs of running out, the price increased in line with world market prices. This was a source of devastating inflation in Finland. On the other hand, it was possible to increase exports under the terms of the bilateral trade agreement with the Soviet Union. This boosted export demand and helped Finland to avoid the high and sustained unemployment that plagued Western Europe.

Economic growth in the 1980s was somewhat better than in most Western economies, and at the end of the 1980s Finland caught up with the sluggishly-growing Swedish GDP per capita for the first time. In the early 1990s the collapse of Soviet trade, the Western European recession and problems in adjusting to the new liberal order of international capital movement led the Finnish economy into a depression that was worse than that of the 1930s. GDP fell by over 10 percent in three years, and unemployment rose to 18 percent. The banking crisis triggered a profound structural change in the Finnish financial sector. The economy then revived, growing at a brisk 3.6 percent per annum in 1994–2005; over the longer period 1973–2005, GDP growth averaged 2.5 percent and GDP per capita growth 2.1 percent per year.

Electronics started its spectacular rise in the 1980s and is now the largest single manufacturing industry, with a 25 percent share of all manufacturing. Nokia is the world’s largest producer of mobile phones and a major transmission-station constructor. Connected to this development was the increase in research-and-development outlays to three percent of GDP, one of the highest shares in the world. The Finnish paper companies UPM-Kymmene and M-real and the Finnish-Swedish Stora-Enso are among the largest paper producers in the world, although paper production now accounts for only 10 percent of manufacturing output. Recent discussion of the industry’s future has been alarming, however: the position of the Nordic paper industry, which is based on expensive, slowly-growing timber, is threatened by new paper factories founded near the expanding consumption areas in Asia and South America, which use local, fast-growing tropical timber. The formerly significant sawmilling operations now constitute a very small percentage of activities, although production volumes have been growing. The textile and clothing industries have shrunk into insignificance.

The last couple of decades have been typified by globalization, which has spread to all areas of the economy. Exports and imports have increased as a result of export-favoring policies. Some 80 percent of the stocks of Finnish public companies are now in foreign hands, whereas foreign ownership was limited and controlled until the early 1990s. A quarter of the companies operating in Finland are foreign-owned, and Finnish companies have even bigger investments abroad. Most big companies are truly international nowadays. Migration to Finland has increased, and since the collapse of the eastern bloc Russian immigrants have become the largest single foreign group. The number of foreigners is still lower than in many other countries – there are about 120,000 people with a foreign background out of a population of 5.3 million.

The directions of foreign trade have been changing, as trade with the rising Asian economies has been gaining in importance and Russian trade has fluctuated. Otherwise, almost the same country distribution prevails as has been common for over a century, with Western Europe taking its typical share of three-fifths. The United Kingdom was long Finland’s biggest trading partner, with a share of one-third, but this started to diminish in the 1960s. Russia accounted for one-third of Finnish foreign trade in the early 1900s, but the Soviet Union had minimal trade with the West at first, and its share of Finnish foreign trade was just a few percentage points. After World War II Soviet-Finnish trade increased gradually until it reached 25 percent of Finnish foreign trade in the 1970s and early 1980s. Trade with Russia is now gradually gaining ground again from the low point of the early 1990s, and had risen to about ten percent in 2006. This makes Russia one of Finland’s three biggest trading partners, Sweden and Germany being the other two with a ten percent share each.

The balance of payments was a continuing problem in the Finnish economy until the 1990s. Particularly in the post-World War II period inflation repeatedly eroded the competitive capacity of the economy and led to numerous devaluations of the currency. An economic policy favoring exports helped the country out of the depression of the 1990s and improved the balance of payments.

Agriculture continued its problematic development of overproduction and high subsidies, which finally became very unpopular. The number of farms has shrunk since the 1960s and the average farm size has recently risen to the European average. The shares of agricultural production and labor are also at Western European levels nowadays. Finnish agriculture is incorporated into the Common Agricultural Policy of the European Union and shares its problems, even if Finnish overproduction has been virtually eliminated.

The share of forestry is equally low, even if it supplies four-fifths of the wood used in Finnish sawmills and paper factories: the remaining fifth is imported mainly from the northwestern parts of Russia. The share of manufacturing is somewhat above Western European levels and, accordingly, that of services is high but slightly lower than in the old industrialized countries.

Recent discussion on the state of the economy mainly focuses on two issues. Finland’s very open economy is strongly influenced by the rather sluggish economic development of the European Union; accordingly, very high growth rates are not to be expected in Finland either. Since the 1990s depression, the investment rate has remained at a lower level than was common in the postwar period, and this is a cause for concern.

The other issue concerns the prominent role of the public sector in the economy. The Nordic welfare model is basically approved of, but its costs create tensions. High taxation is one consequence, and political parties debate whether or not the high public-sector share slows down economic growth.

The aging population, high unemployment and the decreasing numbers of taxpayers in the rural areas of eastern and central Finland place a burden on the local governments. There is also continuing discussion about tax competition inside the European Union: how does the high taxation in some member countries affect the location decisions of companies?

[Figure: Development of Finland’s exports by commodity group, 1900–2005, percent. Source: Finnish National Board of Customs, Statistics Unit. Classification: Metal industry products SITC 28, 67, 68, 7, 87; Chemical products SITC 27, 32, 33, 34, 5, 66; Textiles SITC 26, 61, 65, 84, 85; Wood, paper and printed products SITC 24, 25, 63, 64, 82; Food, beverages, tobacco SITC 0, 1, 4.]

[Figure: Development of Finland’s imports by commodity group, 1900–2005, percent. Source: Finnish National Board of Customs, Statistics Unit. Commodity-group classification as in the export figure above.]

References:

Heikkinen, S., and J.L. van Zanden, editors. Explorations in Economic Growth. Amsterdam: Aksant, 2004.

Heikkinen, S. Labour and the Market: Workers, Wages and Living Standards in Finland, 1850–1913. Commentationes Scientiarum Socialium 51 (1997).

Hjerppe, R. The Finnish Economy 1860–1985: Growth and Structural Change. Studies on Finland’s Economic Growth XIII. Helsinki: Bank of Finland Publications, 1989.

Jalava, J., S. Heikkinen and R. Hjerppe. “Technology and Structural Change: Productivity in the Finnish Manufacturing Industries, 1925-2000.” Transformation, Integration and Globalization Economic Research (TIGER), Working Paper No. 34, December 2002.

Kaukiainen, Yrjö. A History of Finnish Shipping. London: Routledge, 1993.

Myllyntaus, Timo. Electrification of Finland: The Transfer of a New Technology into a Late Industrializing Economy. Worcester: Macmillan, 1991.

Ojala, J., J. Eloranta and J. Jalava, editors. The Road to Prosperity: An Economic History of Finland. Helsinki: Suomalaisen Kirjallisuuden Seura, 2006.

Pekkarinen, J., and J. Vartiainen. Finlands ekonomiska politik: den långa linjen 1918–2000. Stockholm: Stiftelsen Fackföreningsrörelsens institut för ekonomisk forskning FIEF, 2001.

Citation: Hjerppe, Riitta. “An Economic History of Finland”. EH.Net Encyclopedia, edited by Robert Whaples. February 10, 2008. URL http://eh.net/encyclopedia/an-economic-history-of-finland/

The U.S. Economy in the 1920s

Gene Smiley, Marquette University

Introduction

The interwar period in the United States, and in the rest of the world, is a most interesting era. The decade of the 1930s marked the most severe depression in our history and ushered in sweeping changes in the role of government. Economists and historians have rightly given much attention to that decade. However, with all of this concern about the growing and developing role of government in economic activity in the 1930s, the decade of the 1920s often tends to get overlooked. This is unfortunate, because the 1920s were a period of vigorous, vital economic growth. They marked the first truly modern decade, and dramatic economic developments occurred in those years. There was a rapid adoption of the automobile to the detriment of passenger rail travel. Though suburbs had been growing since the late nineteenth century, their growth had been tied to rail or trolley access, and this was limited to the largest cities. The flexibility of car access changed this and the growth of suburbs began to accelerate. The demands of trucks and cars led to a rapid growth in the construction of all-weather surfaced roads to facilitate their movement. The rapidly expanding electric utility networks led to new consumer appliances and new types of lighting and heating for homes and businesses. The introduction of the radio, radio stations, and commercial radio networks began to break up rural isolation, as did the expansion of local and long-distance telephone communications. Recreational activities such as traveling, going to movies, and professional sports became major businesses. The period saw major innovations in business organization and manufacturing technology. The Federal Reserve System first tested its powers, and the United States moved to a dominant position in international trade and global business. These things make the 1920s a period of considerable importance independent of what happened in the 1930s.

National Product and Income and Prices

We begin the survey of the 1920s with an examination of overall production in the economy, GNP, the most comprehensive measure of aggregate economic activity. Real GNP growth during the 1920s was relatively rapid, 4.2 percent a year from 1920 to 1929, according to the most widely used estimates. (Historical Statistics of the United States, or HSUS, 1976) Real GNP per capita grew 2.7 percent per year between 1920 and 1929. By both nineteenth- and twentieth-century standards these were relatively rapid rates of real economic growth, and they would be considered rapid even today.
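A quick arithmetic check (an illustration based on the rates quoted above, not an independent calculation from the HSUS series): since per capita growth approximately equals total growth minus population growth,

\[
g_{\text{population}} \approx g_{\text{GNP}} - g_{\text{GNP per capita}} = 4.2\% - 2.7\% = 1.5\% \text{ per year},
\]

which is consistent with the annual population growth rates of roughly 1 to 1.9 percent reported in the population section below.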

There were several interruptions to this growth. In mid-1920 the American economy began to contract, and the 1920-1921 depression lasted about a year, but a rapid recovery reestablished full employment by 1923. As will be discussed below, the Federal Reserve System’s monetary policy was a major factor in initiating the 1920-1921 depression. From 1923 through 1929 growth was much smoother. There was a very mild recession in 1924 and another mild recession in 1927, both of which may be related to oil price shocks (McMillin and Parker, 1994). The 1927 recession was also associated with Henry Ford’s shutdown of all his factories for six months in order to change over from the Model T to the new Model A automobile. Though the Model T’s market share was declining after 1924, in 1926 Ford’s Model T still made up nearly 40 percent of all the new cars produced and sold in the United States. The Great Depression began in the summer of 1929, possibly as early as June. The initial downturn was relatively mild, but the contraction accelerated after the crash of the stock market at the end of October. Real total GNP fell 10.2 percent from 1929 to 1930, while real GNP per capita fell 11.5 percent.


Price changes during the 1920s are shown in Figure 2. The Consumer Price Index, CPI, is a better measure of changes in the prices of commodities and services that a typical consumer would purchase, while the Wholesale Price Index, WPI, is a better measure of changes in the cost of inputs for businesses. As the figure shows, the 1920-1921 depression was marked by extraordinarily large price decreases. Consumer prices fell 11.3 percent from 1920 to 1921 and fell another 6.6 percent from 1921 to 1922. After that consumer prices were relatively constant and actually fell slightly from 1926 to 1927 and from 1927 to 1928. Wholesale prices show greater variation. The 1920-1921 depression hit farmers very hard. Prices had been bid up with the increasing foreign demand during the First World War. As European production began to recover after the war, prices began to fall. Though the prices of agricultural products fell from 1919 to 1920, the depression brought on dramatic declines in the prices of raw agricultural produce as well as many other inputs that firms employ. In the scramble to beat price increases during 1919, firms had built up large inventories of raw materials and purchased inputs, and this temporary increase in demand led to even larger price increases. With the depression, firms began to draw down those inventories. The result was that the prices of raw materials and manufactured inputs fell rapidly along with the prices of agricultural produce—the WPI dropped 45.9 percent between 1920 and 1921. The price changes probably tend to overstate the severity of the 1920-1921 depression. Romer’s recent work (1988) suggests that prices changed much more easily in that depression, reducing the drop in production and employment. Wholesale prices in the rest of the 1920s were relatively stable, though they were more likely to fall than to rise.

Economic Growth in the 1920s

Despite the 1920-1921 depression and the minor interruptions in 1924 and 1927, the American economy exhibited impressive economic growth during the 1920s. Though some commentators in later years thought that the existence of some slow growing or declining sectors in the twenties suggested weaknesses that might have helped bring on the Great Depression, few now argue this. Economic growth never occurs in all sectors at the same time and at the same rate. Growth reallocates resources from declining or slower growing sectors to the more rapidly expanding sectors in accordance with new technologies, new products and services, and changing consumer tastes.

Economic growth in the 1920s was impressive. Ownership of cars, new household appliances, and housing spread widely through the population. New products and processes of producing those products drove this growth. The widening use of electricity in production and the growing adoption of the moving assembly line in manufacturing combined to bring on a continuing rise in the productivity of labor and capital. Though the average workweek in most manufacturing remained essentially constant throughout the 1920s, in a few industries, such as railroads and coal production, it declined. (Whaples 2001) New products and services created new markets, such as the markets for radios, electric iceboxes, electric irons, fans, electric lighting, vacuum cleaners, and other laborsaving household appliances. This electricity was distributed by the growing electric utilities. The stocks of those companies helped create the stock market boom of the late twenties. RCA, one of the glamour stocks of the era, paid no dividends, but its value appreciated because of expectations for the new company. Like the Internet boom of the late 1990s, the electricity boom of the 1920s fed a rapid expansion in the stock market.

Fed by continuing productivity advances and new products and services and facilitated by an environment of stable prices that encouraged production and risk taking, the American economy embarked on a sustained expansion in the 1920s.

Population and Labor in the 1920s

At the same time that overall production was growing, population growth was declining. As can be seen in Figure 3, from an annual rate of increase of 1.85 and 1.93 percent in 1920 and 1921, respectively, population growth rates fell to 1.23 percent in 1928 and 1.04 percent in 1929.

These changes in the overall growth rate were linked to the birth and death rates of the resident population and a decrease in foreign immigration. Though the crude death rate changed little during the period, the crude birth rate fell sharply into the early 1930s. (Figure 4) There are several explanations for the decline in the birth rate during this period. First, there was an accelerated rural-to-urban migration. Urban families have tended to have fewer children than rural families, because urban children, unlike rural children, do not augment family incomes through unpaid work. Second, the period also saw continued improvement in women’s job opportunities and a rise in their labor force participation rates.

Immigration also fell sharply. In 1917 the federal government began to limit immigration and in 1921 an immigration act limited the number of prospective citizens of any nationality entering the United States each year to no more than 3 percent of that nationality’s resident population as of the 1910 census. A new act in 1924 lowered this to 2 percent of the resident population at the 1890 census and more firmly blocked entry for people from central, southern, and eastern European nations. The limits were relaxed slightly in 1929.

The American population also continued to move during the interwar period. Two regions experienced the largest losses in population shares: New England and the Plains. For New England this was a continuation of a long-term trend. The population share for the Plains region had been rising through the nineteenth century. In the interwar period its agricultural base, combined with the continuing shift from agriculture to industry, led to a sharp decline in its share. The regions gaining population were the Southwest and, particularly, the Far West; California began its rapid growth at this time.

During the 1920s the labor force grew at a more rapid rate than population. This somewhat more rapid growth came from the declining share of the population less than 14 years old and therefore not in the labor force. In contrast, the labor force participation rate, or fraction of the population aged 14 and over that was in the labor force, declined during the twenties from 57.7 percent to 56.3 percent. This was entirely due to a fall in the male labor force participation rate from 89.6 percent to 86.8 percent, as the female labor force participation rate rose from 24.3 percent to 25.1 percent. The primary source of the fall in male labor force participation rates was a rising retirement rate. Employment rates for males who were 65 or older fell from 60.1 percent in 1920 to 58.0 percent in 1930.

With the depression of 1920-1921 the unemployment rate rose rapidly from 5.2 to 8.7 percent. The recovery reduced unemployment to an average rate of 4.8 percent in 1923. The unemployment rate rose to 5.8 percent in the recession of 1924 and to 5.0 percent with the slowdown in 1927. Otherwise unemployment remained relatively low. The onset of the Great Depression from the summer of 1929 on brought the unemployment rate from 4.6 percent in 1929 to 8.9 percent in 1930. (Figure 5)

Earnings for laborers varied during the twenties. Table 1 presents average weekly earnings for 25 manufacturing industries. In these industries, male skilled and semi-skilled laborers generally commanded a premium of 35 percent over the earnings of unskilled male laborers in the twenties, and unskilled males received on average 35 percent more than females. Real average weekly earnings for these 25 manufacturing industries rose somewhat during the 1920s. For skilled and semi-skilled male workers real average weekly earnings rose 5.3 percent between 1923 and 1929, while real average weekly earnings for unskilled males rose 8.7 percent over the same period. Real average weekly earnings for females rose only 1.7 percent between 1923 and 1929. Real weekly earnings for bituminous and lignite coal miners fell as the coal industry encountered difficult times in the late twenties, and the real daily wage rate for farmworkers, reflecting the ongoing difficulties in agriculture, fell after the recovery from the 1920-1921 depression.

The 1920s were not kind to labor unions even though the First World War had solidified the dominance of the American Federation of Labor among labor unions in the United States. The rapid growth in union membership fostered by federal government policies during the war ended in 1919. A committee of AFL craft unions undertook a successful membership drive in the steel industry in that year. When U.S. Steel refused to bargain, the committee called a strike, the failure of which was a sharp blow to the unionization drive. (Brody, 1965) In the same year, the United Mine Workers undertook a large strike and also lost. These two lost strikes and the 1920-1921 depression took the impetus out of the union movement and led to severe membership losses that continued through the twenties. (Figure 6)

Under Samuel Gompers’s leadership, the AFL’s “business unionism” had attempted to promote the union and collective bargaining as the primary answer to the workers’ concerns with wages, hours, and working conditions. The AFL officially opposed any government actions that would have diminished worker attachment to unions by providing competing benefits, such as government-sponsored unemployment insurance, minimum wage proposals, maximum hours proposals and social security programs. As Lloyd Ulman (1961) points out, the AFL, under Gompers’s direction, differentiated among proposed measures on the basis of whether a statute would or would not aid collective bargaining. After Gompers’s death, William Green led the AFL in a policy change as the AFL promoted the idea of union-management cooperation to improve output and promote greater employer acceptance of unions. But Irving Bernstein (1965) concludes that, on the whole, union-management cooperation in the twenties was a failure.

To combat the appeal of unions in the twenties, firms used the “yellow-dog” contract requiring employees to swear they were not union members and would not join one; the “American Plan” promoting the open shop and contending that the closed shop was un-American; and welfare capitalism. The most common aspects of welfare capitalism included personnel management to handle employment issues and problems, the doctrine of “high wages,” company group life insurance, old-age pension plans, stock-purchase plans, and more. Some firms formed company unions to thwart independent unionization and the number of company-controlled unions grew from 145 to 432 between 1919 and 1926.

Until the late thirties the AFL was a voluntary association of independent national craft unions. Craft unions relied upon the particular skills the workers had acquired (their craft) to distinguish the workers and provide barriers to the entry of other workers. Most craft unions required a period of apprenticeship before a worker was fully accepted as a journeyman worker. The skills, and often lengthy apprenticeship, constituted the entry barrier that gave the union its bargaining power. There were only a few unions that were closer to today’s industrial unions, where the required skills were much less (or nonexistent), making the entry of new workers much easier. The most important of these industrial unions was the United Mine Workers, UMW.

The AFL had been created on two principles: the autonomy of the national unions and the exclusive jurisdiction of the national union. Individual union members were not, in fact, members of the AFL; rather, they were members of the local and national union, and the national was a member of the AFL. Representation in the AFL gave dominance to the national unions, and, as a result, the AFL had little effective power over them. The craft lines, however, had never been distinct and increasingly became blurred. The AFL was constantly mediating jurisdictional disputes between member national unions. Because the AFL and its individual unions were not set up to appeal to and work for the relatively less skilled industrial workers, union organizing and growth lagged in the twenties.

Agriculture

The onset of the First World War in Europe brought unprecedented prosperity to American farmers. As agricultural production in Europe declined, the demand for American agricultural exports rose, leading to rising farm product prices and incomes. In response to this, American farmers expanded production by moving onto marginal farmland, such as Wisconsin cutover property on the edge of the woods and hilly terrain in the Ozark and Appalachian regions. They also increased output by purchasing more machinery, such as tractors, plows, mowers, and threshers. The price of farmland, particularly marginal farmland, rose in response to the increased demand, and the debt of American farmers increased substantially.

This expansion of American agriculture continued past the end of the First World War as farm exports to Europe and farm prices initially remained high. However, agricultural production in Europe recovered much faster than most observers had anticipated. Even before the onset of the short depression in 1920, farm exports and farm product prices had begun to fall. During the depression, farm prices virtually collapsed. From 1920 to 1921, the consumer price index fell 11.3 percent, the wholesale price index fell 45.9 percent, and the farm products price index fell 53.3 percent. (HSUS, Series E40, E42, and E135)

Real average net income per farm fell by over 72.6 percent between 1920 and 1921 and, though it rose during the twenties, never recovered the relative levels of 1918 and 1919. (Figure 7) Farm mortgage foreclosures rose and stayed at historically high levels for the entire decade of the 1920s. (Figure 8) The value of farmland and buildings fell throughout the twenties and, for the first time in American history, the number of cultivated acres actually declined as farmers pulled back from the marginal farmland brought into production during the war. Rather than indicators of a general depression in agriculture in the twenties, these were the results of the financial commitments made by overoptimistic American farmers during and directly after the war. The foreclosures were generally on second mortgages, rather than on first mortgages as they were in the early 1930s. (Johnson, 1973; Alston, 1983)

A Declining Sector

A major difficulty in analyzing the interwar agricultural sector lies in separating the effects of the 1920-1921 and 1929-1933 depressions from those that arose because agriculture was declining relative to the other sectors. Relatively slow-growing demand for basic agricultural products and significant increases in the productivity of labor, land, and machinery in agricultural production, combined with much more rapid extensive economic growth in the nonagricultural sectors of the economy, required a shift of resources, particularly labor, out of agriculture. (Figure 9) The market induces labor to move voluntarily from one sector to another through income differentials, suggesting that even in the absence of the effects of the depressions, farm incomes would have been lower than nonfarm incomes so as to bring about this migration.

The continuous substitution of tractor power for horse and mule power released hay and oats acreage to grow crops for human consumption. Though cotton and tobacco continued as the primary crops in the south, the relative production of cotton continued to shift to the west as production in Arkansas, Missouri, Oklahoma, Texas, New Mexico, Arizona, and California increased. As quotas reduced immigration and incomes rose, the demand for cereal grains grew slowly—more slowly than the supply—and the demand for fruits, vegetables, and dairy products grew. Refrigeration and faster freight shipments expanded the milk sheds further from metropolitan areas. Wisconsin and other North Central states began to ship cream and cheeses to the Atlantic Coast. Due to transportation improvements, specialized truck farms and the citrus industry became more important in California and Florida. (Parker, 1972; Soule, 1947)

The relative decline of the agricultural sector in this period was closely related to the low income elasticity of demand for many farm products, particularly cereal grains, pork, and cotton. As incomes grew, the demand for these staples grew much more slowly. At the same time, rising land and labor productivity were increasing the supplies of staples, causing real prices to fall.
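In standard notation (a textbook definition, not a formula drawn from the sources cited in this article), the income elasticity of demand is

\[
\eta_I = \frac{\%\Delta Q_d}{\%\Delta I},
\]

the percentage change in quantity demanded divided by the percentage change in income. An elasticity well below one for staples such as cereal grains, pork, and cotton means that a 10 percent rise in incomes raises the quantity demanded by much less than 10 percent; with supply expanding at the same time, real farm prices had to fall.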

Table 3 presents selected agricultural productivity statistics for these years. Those data indicate that there were greater gains in labor productivity than in land productivity (or per acre yields). Per acre yields in wheat and hay actually decreased between 1915-19 and 1935-39. These productivity increases, which released resources from the agricultural sector, were the result of technological improvements in agriculture.

Technological Improvements In Agricultural Production

In many ways the adoption of the tractor in the interwar period symbolizes the technological changes that occurred in the agricultural sector. This changeover in the power source that farmers used had far-reaching consequences and altered the organization of the farm and the farmers’ lifestyle. The adoption of the tractor was land saving (by releasing acreage previously used to produce crops for workstock) and labor saving. At the same time it increased the risks of farming because farmers were now much more exposed to the marketplace. They could not produce their own fuel for tractors as they had for the workstock. Rather, this had to be purchased from other suppliers. Repair and replacement parts also had to be purchased, and sometimes the repairs had to be undertaken by specialized mechanics. The purchase of a tractor also commonly required the purchase of new complementary machines; therefore, the decision to purchase a tractor was not an isolated one. (White, 2001; Ankli, 1980; Ankli and Olmstead, 1981; Musoke, 1981; Whatley, 1987). These changes resulted in more and more farmers purchasing and using tractors, but the rate of adoption varied sharply across the United States.

Technological innovations in plants and animals also raised productivity. Hybrid seed corn increased yields from an average of 40 bushels per acre to 100 to 120 bushels per acre. New varieties of wheat were developed from the hardy Russian and Turkish wheat varieties which had been imported. The U.S. Department of Agriculture’s Experiment Stations took the lead in developing wheat varieties for different regions. For example, in the Columbia River Basin new varieties raised yields from an average of 19.1 bushels per acre in 1913-22 to 23.1 bushels per acre in 1933-42. (Shepherd, 1980) New hog breeds produced more meat and new methods of swine sanitation sharply increased the survival rate of piglets. An effective serum for hog cholera was developed, and the federal government led the way in the testing and eradication of bovine tuberculosis and brucellosis. Prior to the Second World War, a number of pesticides to control animal disease were developed, including cattle dips and disinfectants. By the mid-1920s a vaccine for “blackleg,” an infectious, usually fatal disease that particularly struck young cattle, was completed. The cattle tick, which carried Texas Fever, was largely controlled through inspections. (Schlebecker, 1975; Bogue, 1983; Wood, 1980)

Federal Agricultural Programs in the 1920s

Though there was substantial agricultural discontent in the period from the Civil War to the late 1890s, the period from then to the onset of the First World War was relatively free from overt farmers’ complaints. In later years farmers dubbed the 1910-1914 period agriculture’s “golden years” and used the prices of farm crops and farm inputs in that period as a standard by which to judge crop and input prices in later years. The problems that arose in the agricultural sector during the twenties once again led to insistent demands by farmers for government to alleviate their distress.

Though there were increasing calls for direct federal government intervention to limit production and raise farm prices, this was not undertaken until Roosevelt took office. Rather, there was a reliance upon the traditional method to aid injured groups, tariffs, and upon the “sanctioning and promotion of cooperative marketing associations.” In 1921 Congress attempted to control the grain exchanges and compel merchants and stockyards to charge “reasonable rates,” with the Packers and Stockyards Act and the Grain Futures Act. In 1922 Congress passed the Capper-Volstead Act to promote agricultural cooperatives and the Fordney-McCumber Tariff to impose high duties on most agricultural imports. The Cooperative Marketing Act of 1924 did not bolster failing cooperatives as it was supposed to do. (Hoffman and Libecap, 1991)

Twice between 1924 and 1928 Congress passed “McNary-Haugen” bills, but President Calvin Coolidge vetoed both. The McNary-Haugen bills proposed to establish “fair” exchange values (based on the 1910-1914 period) for each product and to maintain them through tariffs and a private corporation that would be chartered by the government and could buy enough of each commodity to keep its price up to the computed fair level. The revenues were to come from taxes imposed on farmers. The Hoover administration passed an Agricultural Marketing Act in 1929 and the Hawley-Smoot Tariff in 1930. The Agricultural Marketing Act committed the federal government to a policy of stabilizing farm prices through several nongovernment institutions, but these failed during the depression. Federal intervention in the agricultural sector really came of age during the New Deal era of the 1930s.

Manufacturing

Agriculture was not the only sector experiencing difficulties in the twenties. Other industries, such as textiles, boots and shoes, and coal mining, also experienced trying times. However, at the same time that these industries were declining, other industries, such as electrical appliances, automobiles, and construction, were growing rapidly. The simultaneous existence of growing and declining industries has been common to all eras because economic growth and technological progress never affect all sectors in the same way. In general, in manufacturing there was a rapid rate of growth of productivity during the twenties. The rise of real wages due to immigration restrictions and the slower growth of the resident population spurred this. Transportation improvements and communications advances were also responsible. These developments brought about differential growth in the various manufacturing sectors in the United States in the 1920s.

Because of the historic pattern of economic development in the United States, the Northeast was the first area to develop a manufacturing base. By the mid-nineteenth century the East North Central region was creating a manufacturing base, and the other regions began to create manufacturing bases in the last half of the nineteenth century, resulting in a relative westward and southward shift of manufacturing activity. This trend continued in the 1920s as the New England and Middle Atlantic regions’ shares of manufacturing employment fell while all of the other regions—excluding the West North Central region—gained. There was considerable variation in the growth of the industries and shifts in their ranking during the decade. The largest broadly defined industries were, not surprisingly, food and kindred products; textile mill products; those producing and fabricating primary metals; machinery production; and chemicals. When industries are more narrowly defined, the automobile industry, which ranked third in manufacturing value added in 1919, ranked first by the mid-1920s.

Productivity Developments

Gavin Wright (1990) has argued that one of the underappreciated characteristics of American industrial history has been its reliance on mineral resources. Wright argues that the growing American strength in industrial exports and industrialization in general relied on an increasing intensity in nonreproducible natural resources. The large American market was knit together as one large market without internal barriers through the development of widespread low-cost transportation. Many distinctively American developments, such as continuous-process, mass-production methods, were associated with the “high throughput” of fuel and raw materials relative to labor and capital inputs. As a result the United States became the dominant industrial force in the world in the 1920s and 1930s. According to Wright, after World War II “the process by which the United States became a unified ‘economy’ in the nineteenth century has been extended to the world as a whole. To a degree, natural resources have become commodities rather than part of the ‘factor endowment’ of individual countries.” (Wright, 1990)

In addition to this growing intensity in the use of nonreproducible natural resources as a source of productivity gains in American manufacturing, other technological changes during the twenties and thirties tended to raise the productivity of the existing capital through the replacement of critical types of capital equipment with superior equipment and through changes in management methods. (Soule, 1947; Lorant, 1967; Devine, 1983; Oshima, 1984) Some changes, such as the standardization of parts and processes and the reduction of the number of styles and designs, raised the productivity of both capital and labor. Modern management techniques, first introduced by Frederick W. Taylor, were introduced on a wider scale.

One of the important forces contributing to mass production and increased productivity was the transition to electric power. (Devine, 1983) By 1929 about 70 percent of manufacturing activity relied on electricity, compared to roughly 30 percent in 1914. Steam provided 80 percent of the mechanical drive capacity in manufacturing in 1900, but electricity provided over 50 percent by 1920 and 78 percent by 1929. An increasing number of factories were buying their power from electric utilities. In 1909, 64 percent of the electric motor capacity in manufacturing establishments used electricity generated on the factory site; by 1919, 57 percent of the electricity used in manufacturing was purchased from independent electric utilities.

The shift from coal to oil and natural gas, and from raw unprocessed energy in the forms of coal and waterpower to processed energy in the form of internal combustion fuel and electricity, increased thermal efficiency. After the First World War energy consumption relative to GNP fell, there was a sharp increase in the growth rate of output per labor-hour, and output per unit of capital input once again began rising. These trends can be seen in the data in Table 3. Labor productivity grew much more rapidly during the 1920s than in the previous or following decade. Capital productivity had declined in the decade previous to the 1920s, but it increased sharply during the twenties and continued to rise in the following decade. Alexander Field (2003) has argued that the 1930s were the most technologically progressive decade of the twentieth century, basing his argument on the growth of multi-factor productivity as well as the impressive array of technological developments during the thirties. However, the twenties also saw impressive increases in labor and capital productivity as, particularly, developments in energy and transportation accelerated.

[Table: Average Annual Rates of Labor Productivity and Capital Productivity Growth]

Warren Devine, Jr. (1983) reports that in the twenties the most important result of the adoption of electricity was that it served as an indirect “lever to increase production.” There were a number of ways in which this occurred. Electricity brought about an increased flow of production by allowing new flexibility in the design of buildings and the arrangement of machines. In this way it maximized throughput. Electric cranes were an “inestimable boon” to production because with adequate headroom they could operate anywhere in a plant, something that mechanical power transmission to overhead cranes did not allow. Electricity made possible the use of portable power tools that could be taken anywhere in the factory. Electricity brought about improved illumination, ventilation, and cleanliness in the plants, dramatically improving working conditions. It improved the control of machines, since there was no longer belt slippage with overhead line shafts and belt transmission, and there were fewer limitations on the operating speeds of machines. Finally, it made plant expansion much easier than when overhead shafts and belts had been relied upon for operating power.

The mechanization of American manufacturing accelerated in the 1920s, and this led to a much more rapid growth of productivity in manufacturing compared to earlier decades and to other sectors at that time. There were several forces that promoted mechanization. One was the rapidly expanding aggregate demand during the prosperous twenties. Another was the technological developments in new machines and processes, of which electrification played an important part. Finally, Harry Jerome (1934) and, later, Harry Oshima (1984) both suggest that the price of unskilled labor began to rise as immigration sharply declined with new immigration laws and falling population growth. This accelerated the mechanization of the nation’s factories.

Technological changes during this period can be documented for a number of individual industries. In bituminous coal mining, labor productivity rose as mechanical loading devices reduced the labor required by 24 to 50 percent. The burst of paved road construction in the twenties led to the development of a finishing machine to smooth the surface of cement highways, and this reduced the labor requirement by 40 to 60 percent. Mechanical pavers that spread centrally mixed materials further increased productivity in road construction; these replaced the roadside dump and wheelbarrow methods of spreading the cement. Jerome (1934) reports that the glass in electric light bulbs was made by new machines that cut the number of labor-hours required for their manufacture by nearly half. New machines to produce cigarettes and cigars, for warp-tying in textile production, and for pressing clothes in clothing shops also cut labor-hours. The Banbury mixer reduced the labor input in the production of automobile tires by half, and output per worker of inner tubes increased about four times with a new production method. However, as Daniel Nelson (1987) points out, these continuing advances were the “cumulative process resulting from a vast number of successive small changes.” Because of these continuing advances in the quality of tires and in the manufacturing of tires, between 1910 and 1930 “tire costs per thousand miles of driving fell from $9.39 to $0.65.”

John Lorant (1967) has documented other technological advances that occurred in American manufacturing during the twenties. For example, the organic chemical industry developed rapidly due to the introduction of the Weizman fermentation process. In a similar fashion, nearly half of the productivity advances in the paper industry were due to the “increasingly sophisticated applications of electric power and paper manufacturing processes,” especially the fourdrinier paper-making machines. As Avi Cohen (1984) has shown, the continuing advances in these machines were the result of evolutionary changes to the basic machine. Mechanization in many types of mass-production industries raised the productivity of labor and capital. In the glass industry, automatic feeding and other types of fully automatic production raised the efficiency of the production of glass containers, window glass, and pressed glass. Giedion (1948) reported that the production of bread was “automatized” in all stages during the 1920s.

Though not directly bringing about productivity increases in manufacturing processes, developments in the management of manufacturing firms, particularly the largest ones, also significantly affected their structure and operation. Alfred D. Chandler, Jr. (1962) has argued that the structure of a firm must follow its strategy. Until the First World War most industrial firms were centralized, single-division firms, even when they became vertically integrated. When this began to change, the management of the large industrial firms had to change accordingly.

Changes in its size and structure during the First World War led E. I. du Pont de Nemours and Company to adopt a strategy of diversifying into the production of largely unrelated product lines. The firm found that the centralized structure that had served it so well was not suited to this strategy, and its poor business performance led its executives to develop, between 1919 and 1921, a decentralized, multidivisional structure that boosted it to the first rank among American industrial firms.

General Motors had a somewhat different problem. By 1920 it was already decentralized into separate divisions. In fact, there was so much decentralization that those divisions essentially remained separate companies and there was little coordination between the operating divisions. A financial crisis at the end of 1920 ousted W. C. Durant and brought in the du Ponts and Alfred Sloan. Sloan, who had seen the problems at GM but had been unable to convince Durant to make changes, began reorganizing the management of the company. Over the next several years Sloan and other GM executives developed the general office for a decentralized, multidivisional firm.

Though facing related problems at nearly the same time, GM and du Pont developed their decentralized, multidivisional organizations separately. As other manufacturing firms began to diversify, GM and du Pont became the models for reorganizing their management. In many industrial firms these reorganizations were not completed until well after the Second World War.

Competition, Monopoly, and the Government

The rise of big businesses, which accelerated in the postbellum period and particularly during the first great turn-of-the-century merger wave, continued in the interwar period. Between 1925 and 1939 the share of manufacturing assets held by the 100 largest corporations rose from 34.5 to 41.9 percent. (Niemi, 1980) As a matter of public policy, concern with monopolies diminished in the 1920s even though firms were growing larger. But the growing size of businesses later became a convenient scapegoat for the Great Depression.

However, the rise of large manufacturing firms in the interwar period is not so easily interpreted as an attempt to monopolize their industries. Some of the growth came about through vertical integration by the more successful manufacturing firms. Backward integration was generally an attempt to ensure a smooth supply of raw materials where that supply was not plentiful and was dispersed, and where firms “feared that raw materials might become controlled by competitors or independent suppliers.” (Livesay and Porter, 1969) Forward integration was an offensive tactic employed when manufacturers found that the existing distribution network proved inadequate. Livesay and Porter suggested a number of reasons why firms chose to integrate forward. In some cases they had to provide the mass distribution facilities to handle their much larger outputs, especially when the product was a new one. The complexity of some new products required technical expertise that the existing distribution system could not provide. In other cases “the high unit costs of products required consumer credit which exceeded financial capabilities of independent distributors.” Forward integration into wholesaling was more common than forward integration into retailing. The producers of automobiles, petroleum, typewriters, sewing machines, and harvesters were typical of those manufacturers that integrated all the way into retailing.

In some cases, increases in industry concentration arose as a natural process of industrial maturation. In the automobile industry, Henry Ford’s invention in 1913 of the moving assembly line—a technological innovation that changed most manufacturing—lent itself to larger factories and firms. Of the several thousand companies that had produced cars prior to 1920, 120 were still doing so then, but Ford and General Motors were the clear leaders, together producing nearly 70 percent of the cars. During the twenties, several other companies, such as Durant, Willys, and Studebaker, missed their opportunity to become more important producers, and Chrysler, formed in early 1925, became the third most important producer by 1930. Many went out of business and by 1929 only 44 companies were still producing cars. The Great Depression decimated the industry. Dozens of minor firms went out of business. Ford struggled through by relying on its huge stockpile of cash accumulated prior to the mid-1920s, while Chrysler actually grew. By 1940, only eight companies still produced cars—GM, Ford, and Chrysler had about 85 percent of the market, while Willys, Studebaker, Nash, Hudson, and Packard shared the remainder. The rising concentration in this industry was not due to attempts to monopolize. As the industry matured, growing economies of scale in factory production and vertical integration, as well as the advantages of a widespread dealer network, led to a dramatic decrease in the number of viable firms. (Chandler, 1962 and 1964; Rae, 1984; Bernstein, 1987)

It was a similar story in the tire industry. The increasing concentration and growth of firms was driven by scale economies in production and retailing and by the devastating effects of the depression in the thirties. Although there were 190 firms in 1919, 5 firms dominated the industry—Goodyear, B. F. Goodrich, Firestone, U.S. Rubber, and Fisk, followed by Miller Rubber, General Tire and Rubber, and Kelly-Springfield. During the twenties, 166 firms left the industry while 66 entered. The share of the 5 largest firms rose from 50 percent in 1921 to 75 percent in 1937. During the depressed thirties, there was fierce price competition, and many firms exited the industry. By 1937 there were 30 firms, but the average employment per factory was 4.41 times as large as in 1921, and the average factory produced 6.87 times as many tires as in 1921. (French, 1986 and 1991; Nelson, 1987; Fricke, 1982)

The steel industry was already highly concentrated by 1920, as U.S. Steel had around 50 percent of the market. But U.S. Steel’s market share declined through the twenties and thirties as several smaller firms competed and grew to become known as Little Steel, the next six largest integrated producers after U.S. Steel. Jonathan Baker (1989) has argued that the evidence is consistent with “the assumption that competition was a dominant strategy for steel manufacturers” until the depression. However, the initiation of the National Recovery Administration (NRA) codes in 1933 required the firms to cooperate rather than compete, and Baker argues that this constituted a training period leading firms to cooperate in price and output policies after 1935. (McCraw and Reinhardt, 1989; Weiss, 1980; Adams, 1977)

Mergers

A number of the larger firms grew by merger during this period, and the second great merger wave in American industry occurred during the last half of the 1920s. Figure 10 shows two series on mergers during the interwar period. The FTC series included many of the smaller mergers, while the series constructed by Carl Eis (1969) includes only the larger mergers and ends in 1930.

This second great merger wave coincided with the stock market boom of the twenties and has been called “merger for oligopoly” rather than merger for monopoly. (Stigler, 1950) This merger wave created many larger firms that ranked below the industry leaders. Much of the merger activity occurred in the banking and public utilities industries. (Markham, 1955) In manufacturing and mining, the effects on industrial structure were less striking. Eis (1969) found that while mergers took place in almost all industries, they were concentrated in a smaller number of them, particularly petroleum, primary metals, and food products.

The federal government’s antitrust policies toward business varied sharply during the interwar period. In the 1920s there was relatively little activity by the Justice Department, but after the onset of the Great Depression the New Dealers largely suspended antitrust enforcement, exempting business from the antitrust laws and cartelizing industries under government supervision.

With the passage of the FTC and Clayton Acts in 1914 to supplement the 1890 Sherman Act, the cornerstones of American antitrust law were complete. Though minor amendments were later enacted, the primary changes after that came in the enforcement of the laws and in swings in judicial decisions. The laws’ two primary areas of application were overt behavior, such as horizontal and vertical price-fixing, and market structure, such as mergers and dominant firms. Horizontal price-fixing involves firms that would normally be competitors getting together to agree on stable, higher prices for their products. As long as most of the important competitors agree on the new, higher prices, substitution between products is eliminated and the demand becomes much less elastic. Thus, increasing the price increases the revenues and the profits of the firms that are fixing prices. Vertical price-fixing involves firms setting the prices of intermediate products purchased at different stages of production; it also tends to eliminate substitutes and make the demand less elastic.
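The revenue logic here can be stated with the standard textbook relationship between marginal revenue, price, and the price elasticity of demand (a general result, not a calculation specific to these cases):

$$ MR = P\left(1 + \frac{1}{\varepsilon}\right) = P\left(1 - \frac{1}{|\varepsilon|}\right), \qquad \varepsilon < 0. $$

When an agreement eliminates substitutes and pushes the absolute elasticity below one, marginal revenue turns negative, so restricting output and raising price necessarily raises revenue; even when demand remains elastic, a smaller value of |ε| means a smaller loss of sales from any given price increase.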

Price-fixing continued to be considered illegal throughout the period, but there was no major judicial activity regarding it in the 1920s other than the Trenton Potteries decision in 1927. In that decision 20 individuals and 23 corporations were found guilty of conspiring to fix the prices of bathroom bowls. The evidence in the case suggested that the firms were not very successful at doing so, but the court found them guilty nevertheless; their success, or lack thereof, was not held to be a factor in the decision. (Scherer and Ross, 1990) Though criticized by some, the decision was precedent-setting in that it prohibited explicit pricing conspiracies per se.

The Justice Department had achieved success in dismantling Standard Oil and American Tobacco in 1911 through decisions that the firms had unreasonably restrained trade. These were essentially the same points used in court decisions against the Powder Trust in 1911, the thread trust in 1913, Eastman Kodak in 1915, the glucose and cornstarch trust in 1916, and the anthracite railroads in 1920. The criterion of an unreasonable restraint of trade was used in the 1916 and 1918 decisions that found the American Can Company and the United Shoe Machinery Company innocent of violating the Sherman Act; it was also clearly enunciated in the 1920 U. S. Steel decision. This became known as the rule of reason standard in antitrust policy.

Merger policy had been defined in the 1914 Clayton Act to prohibit only the acquisition of one corporation’s stock by another corporation. Firms then shifted to the outright purchase of a competitor’s assets. A series of court decisions in the twenties and thirties further reduced the possibilities of Justice Department actions against mergers. “Only fifteen mergers were ordered dissolved through antitrust actions between 1914 and 1950, and ten of the orders were accomplished under the Sherman Act rather than Clayton Act proceedings.”

Energy

The search for energy and new ways to translate it into heat, light, and motion has been one of the unending themes in history. From whale oil to coal oil to kerosene to electricity, the search for better and less costly ways to light our lives, heat our homes, and move our machines has consumed much time and effort. The energy industries responded to those demands and the consumption of energy materials (coal, oil, gas, and fuel wood) as a percent of GNP rose from about 2 percent in the latter part of the nineteenth century to about 3 percent in the twentieth.

Changes in the energy markets that had begun in the nineteenth century continued. Processed energy in the forms of petroleum derivatives and electricity continued to become more important than “raw” energy, such as that available from coal and water. The evolution of energy sources for lighting also continued; at the end of the nineteenth century, natural gas and electricity, rather than liquid fuels, began to provide more lighting for streets, businesses, and homes.

In the twentieth century the continuing shift to electricity and internal combustion fuels increased the efficiency with which the American economy used energy. These processed forms of energy resulted in a more rapid increase in the productivity of labor and capital in American manufacturing. From 1899 to 1919, output per labor-hour increased at an average annual rate of 1.2 percent, whereas from 1919 to 1937 the increase was 3.5 percent per year. The productivity of capital had fallen at an average annual rate of 1.8 percent per year in the 20 years prior to 1919, but it rose 3.1 percent a year in the 18 years after 1919. As discussed above, the adoption of electricity in American manufacturing initiated a rapid evolution in the organization of plants and rapid increases in productivity in all types of manufacturing.
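Compounded over the two periods, these annual rates imply very different cumulative gains; as a back-of-the-envelope check on the figures above,

$$ (1.012)^{20} \approx 1.27 \qquad \text{versus} \qquad (1.035)^{18} \approx 1.86, $$

that is, output per labor-hour rose roughly 27 percent over the twenty years before 1919 but roughly 86 percent over the eighteen years after.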

The change in transportation was even more remarkable. Internal combustion engines running on gasoline or diesel fuel revolutionized transportation. Cars quickly grabbed the lion’s share of local and regional travel and began to eat into long distance passenger travel, just as the railroads had done to passenger traffic by water in the 1830s. Even before the First World War cities had begun passing laws to regulate and limit “jitney” services and to protect the investments in urban rail mass transit. Trucking began eating into the freight carried by the railroads.

These developments brought about changes in the energy industries. Coal mining became a declining industry. As Figure 11 shows, in 1925 the share of petroleum in the value of coal, gas, and petroleum output exceeded that of bituminous coal, and it continued to rise. Anthracite coal’s share was much smaller and declined, while natural gas and LP (liquefied petroleum) gas were relatively unimportant. These changes, especially the decline of the coal industry, were the source of considerable worry in the twenties.

Coal

One of the industries considered to be “sick” in the twenties was coal, particularly bituminous, or soft, coal. Income in the industry declined, and bankruptcies were frequent. Strikes frequently interrupted production. The majority of the miners “lived in squalid and unsanitary houses, and the incidence of accidents and diseases was high.” (Soule, 1947) The number of operating bituminous coal mines declined sharply from 1923 through 1932. Anthracite (or hard) coal output was much smaller during the twenties. Real coal prices rose from 1919 to 1922, and bituminous coal prices fell sharply from then to 1925. (Figure 12) Coal mining employment plummeted during the twenties. Annual earnings, especially in bituminous coal mining, also fell because of dwindling hourly earnings and, from 1929 on, a shrinking workweek. (Figure 13)

The sources of these changes are to be found in the increasing supply of coal due to productivity advances in coal production and in the decreasing demand for coal. Demand fell as industries began turning from coal to electricity and as productivity advances reduced the coal required to create energy in steel production, on the railroads, and in electric utilities. (Keller, 1973) In the generation of electricity, larger steam plants employing higher temperatures and steam pressures continued to reduce coal consumption per kilowatt hour. Similar reductions were found in the production of coke from coal for iron and steel production and in the use of coal by steam railroad engines. (Rezneck, 1951) All of these factors reduced the demand for coal.

Productivity advances in coal mining tended to be labor saving. Mechanical cutting accounted for 60.7 percent of the coal mined in 1920 and 78.4 percent in 1929. By the middle of the twenties, the mechanical loading of coal began to be introduced. Between 1929 and 1939, output per labor-hour rose nearly one third in bituminous coal mining and nearly four fifths in anthracite as more mines adopted machine mining and mechanical loading and strip mining expanded.

The increasing supply and falling demand for coal led to the closure of mines that were too costly to operate. A mine could simply cease operations, let the equipment stand idle, and lay off employees. When bankruptcies occurred, the mines generally just reopened under new ownership with lower capital charges. When demand increased or strikes reduced the supply of coal, idle mines simply resumed production. As a result, the easily expanded supply largely eliminated economic profits.

The average daily employment in coal mining dropped by over 208,000 from its peak in 1923, but the sharply falling real wages suggest that the supply of labor did not fall as rapidly as the demand for labor. Soule (1947) notes that when employment fell in coal mining, it meant fewer days of work for the same number of men. Social and cultural characteristics tended to tie many to their home region. The local alternatives were few, and ignorance of alternatives outside the Appalachian rural areas, where most bituminous coal was mined, made it very costly to transfer out.

Petroleum

In contrast to the coal industry, the petroleum industry was growing throughout the interwar period. By the thirties, crude petroleum dominated the real value of the production of energy materials. As Figure 14 shows, the production of crude petroleum increased sharply between 1920 and 1930, while real petroleum prices, though highly variable, tended to decline.

The growing demand for petroleum was driven by the growth in demand for gasoline as America became a motorized society. The production of gasoline surpassed kerosene production in 1915. Kerosene’s market continued to contract as electric lighting replaced kerosene lighting. The development of oil burners in the twenties began a switch from coal toward fuel oil for home heating, and this further increased the growing demand for petroleum. The growth in the demand for fuel oil and diesel fuel for ship engines also increased petroleum demand. But it was the growth in the demand for gasoline that drove the petroleum market.

The decline in real prices in the latter part of the twenties shows that supply was growing even faster than demand. The discovery of new fields in the early twenties increased the supply of petroleum and led to falling prices as production capacity grew. The Santa Fe Springs, California strike in 1919 initiated a supply shock, as did the discovery of the Long Beach, California field in 1921. New discoveries in Powell, Texas and Smackover, Arkansas further increased the supply of petroleum in 1921. New supply increases occurred in 1926 to 1928 with petroleum strikes in Seminole, Oklahoma and Hendricks, Texas. The supply of oil increased sharply in 1930 to 1931 with new discoveries in Oklahoma City and East Texas. Each new discovery pushed down real oil prices and the prices of petroleum derivatives, and the growing production capacity led to a generally declining trend in petroleum prices. McMillin and Parker (1994) argue that the supply shocks generated by these new discoveries were a factor in the business cycles of the 1920s.

The supply of gasoline increased more than the supply of crude petroleum. In 1913 a chemist at Standard Oil of Indiana introduced the cracking process to refine crude petroleum; until that time it had been refined by distillation or unpressurized heating. In the heating process, various refined products such as kerosene, gasoline, naphtha, and lubricating oils were produced at different temperatures. It was difficult to vary the amount of the different refined products produced from a barrel of crude. The cracking process used pressurized heating to break heavier components down into lighter crude derivatives; with cracking, it was possible to increase the amount of gasoline obtained from a barrel of crude from 15 to 45 percent. In the early twenties, chemists at Standard Oil of New Jersey improved the cracking process, and by 1927 it was possible to obtain twice as much gasoline from a barrel of crude petroleum as in 1917.
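Assuming the standard 42-gallon barrel of crude, the gain in yield can be put in concrete terms:

$$ 0.15 \times 42 \approx 6.3 \text{ gallons of gasoline} \qquad \text{versus} \qquad 0.45 \times 42 \approx 18.9 \text{ gallons,} $$

roughly a tripling of the gasoline obtained from each barrel.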

The petroleum companies also developed new ways to distribute gasoline to motorists that made it more convenient to purchase. Prior to the First World War, gasoline was commonly purchased in one- or five-gallon cans, and the purchaser used a funnel to pour the gasoline from the can into the car. Then “filling stations” appeared, which specialized in filling cars’ tanks with gasoline. These spread rapidly, and by 1919 gasoline companies were beginning to introduce their own filling stations or to contract with independent stations to distribute their gasoline exclusively. Increasing competition and falling profits led filling station operators to expand into other activities such as oil changes and other mechanical repairs. The general name attached to such stations gradually changed to “service stations” to reflect these new functions.

Though the petroleum firms tended to be large, they were highly competitive, trying to pump as much petroleum as possible to increase their share of the fields. This, combined with the development of new fields, led to an industry with highly volatile prices and output. Firms desperately wanted to stabilize and reduce the production of crude petroleum so as to stabilize and raise the prices of crude petroleum and refined products. Unable to obtain voluntary agreement on output limitations by the firms and producers, governments began stepping in. Led by Texas, which created the Texas Railroad Commission in 1891, oil-producing states began to intervene to regulate production. Such laws, usually termed prorationing laws, set quotas designed to limit each well’s output to some fraction of its potential. The purpose was as much to stabilize and reduce production and raise prices as anything else, although such laws were generally passed under the guise of conservation. Although the federal government supported such attempts, not until the New Deal were federal laws passed to assist these efforts.

Electricity

By the mid-1890s the debate over the method by which electricity was to be transmitted had been won by those who advocated alternating current. The reduced power losses and the greater distance over which electricity could be transmitted more than offset the necessity of transforming the current back to direct current for general use. Widespread adoption of machines and appliances by industry and consumers then rested on an increase in the array of products using electricity as the source of power, heat, or light and on the development of an efficient, lower-cost method of generating electricity.

General Electric, Westinghouse, and other firms began producing electrical appliances for homes, and an increasing number of machines based on electricity began to appear in industry. The problem of lower-cost production was solved by the introduction of centralized generating facilities that distributed the electric power through lines to many consumers and business firms.

Though initially several firms competed in generating and selling electricity to consumers and firms in a city or area, by the First World War many states and communities were awarding exclusive franchises to one firm to generate and distribute electricity to the customers in the franchise area. (Bright, 1947; Passer, 1953) The electric utility industry became an important growth industry and, as Figure 15 shows, electricity production and use grew rapidly.

The electric utilities increasingly were regulated by state commissions that were charged with setting rates so that the utilities could receive a “fair return” on their investments. Disagreements over what constituted a “fair return” and over the calculation of the rate base led to a steady stream of cases before the commissions and a continuing series of court appeals. Generally these court decisions favored the reproduction cost basis. Because of the difficulty and cost of making these calculations, rates tended to be left in the hands of the electric utilities, which, it has been suggested, did not lower rates adequately to reflect the rising productivity and lowered costs of production. The utilities argued that a more rapid lowering of rates would have jeopardized their profits. Whether or not this increased their monopoly power is still an open question, but it should be noted that electric utilities were hardly price-taking industries prior to regulation. (Mercer, 1973) In fact, as Figure 16 shows, the electric utilities began to systematically practice market segmentation, charging users with less elastic demands higher prices per kilowatt-hour.
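This pattern corresponds to the standard condition for price discrimination by a seller with market power (a textbook formulation rather than a calculation from the sources cited): in each customer segment i, price is set so that marginal revenue equals marginal cost,

$$ P_i\left(1 - \frac{1}{|\varepsilon_i|}\right) = MC, $$

which implies that segments with less elastic demand (smaller |ε_i|) pay higher prices per kilowatt-hour.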

Energy in the American Economy of the 1920s

The changes in the energy industries had far-reaching consequences. The coal industry faced a continuing decline in demand. Even in the growing petroleum industry, the periodic surges in the supply of petroleum caused great instability. In manufacturing, as described above, electrification contributed to a remarkable rise in productivity. The transportation revolution brought about by the rise of gasoline-powered trucks and cars changed the way businesses received their supplies and distributed their production as well as where they were located. The suburbanization of America and the beginnings of urban sprawl were largely brought about by the introduction of low-priced gasoline for cars.

Transportation

The American economy was forever altered by the dramatic changes in transportation after 1900. Following Henry Ford’s introduction of the moving assembly production line in 1914, automobile prices plummeted, and by the end of the 1920s about 60 percent of American families owned an automobile. The advent of low-cost personal transportation led to an accelerating movement of population out of the crowded cities to more spacious homes in the suburbs, and the automobile set off a decline in intracity public passenger transportation that has yet to end. Massive road-building programs facilitated the intercity movement of people and goods. Trucks increasingly took over the movement of freight in competition with the railroads. New industries, such as gasoline service stations, motor hotels, and the rubber tire industry, arose to service the automobile and truck traffic. These developments were complicated by the turmoil caused by changes in the federal government’s policies toward transportation in the United States.

With the end of the First World War, a debate began as to whether the railroads, which had been taken over by the government, should be returned to private ownership or nationalized. The voices calling for a return to private ownership were much stronger, but the question of how to accomplish the return fomented great controversy. Many in Congress believed that careful planning and consolidation could restore the railroads and make them more efficient. There was continued concern about the near monopoly that the railroads had on the nation’s intercity freight and passenger transportation. The result of these deliberations was the Transportation Act of 1920, which was premised on the continued domination of the nation’s transportation by the railroads—an erroneous presumption.

The Transportation Act of 1920 represented a marked change in the Interstate Commerce Commission’s ability to control railroads. The ICC was allowed to prescribe exact rates, which were to be set so as to allow the railroads to earn a fair return, defined as 5.5 percent, on the fair value of their property. The ICC was authorized to make an accounting of the fair value of each regulated railroad’s property; however, this was not completed until well into the 1930s, by which time the accounting and rate rules were out of date. To maintain fair competition between railroads in a region, all roads were to have the same rates for the same goods over the same distance. With the same rates, low-cost roads should have been able to earn higher rates of return than high-cost roads. To handle this, a recapture clause was inserted: any railroad earning a return of more than 6 percent on the fair value of its property was to turn the excess over to the ICC, which would place half of the money in a contingency fund for the railroad when it encountered financial problems and the other half in a contingency fund to provide loans to other railroads in need of assistance.
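A hypothetical example with round numbers illustrates the recapture clause: a railroad whose property had a fair value of $100 million and which earned $7 million, a 7 percent return, would owe the excess above the 6 percent threshold,

$$ (0.07 - 0.06) \times \$100{,}000{,}000 = \$1{,}000{,}000, $$

with $500,000 going into the road’s own contingency fund and $500,000 into the ICC’s fund for loans to other railroads.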

In order to address the problem of weak and strong railroads and to bring better coordination to the movement of rail traffic in the United States, the act sought to encourage railroad consolidation, but little came of this in the 1920s. To facilitate its control of the railroads, the ICC was given two additional powers: control over the issuance or purchase of securities by railroads, and control over changes in railroad service through the control of car supply and the extension and abandonment of track. The control of the supply of rail cars was turned over to the Association of American Railroads. Few extensions of track were proposed, but as time passed, abandonment requests grew. The ICC, however, trying to mediate between the conflicting demands of shippers, communities, and railroads, generally refused to grant abandonments, and this became an extremely sensitive issue in the 1930s.

As indicated above, the premises of the Transportation Act of 1920 were wrong. Railroads experienced increasing competition during the 1920s, and both freight and passenger traffic were drawn off to competing transport forms. Passenger traffic exited the railroads much more quickly. As the network of all-weather surfaced roads grew, people quickly turned from the train to the car. Harmed even more by the move to automobile traffic were the electric interurban railways that had grown rapidly just prior to the First World War. (Hilton-Due, 1960) Not surprisingly, during the 1920s few railroads earned profits in excess of the fair rate of return.

The use of trucks to deliver freight began shortly after the turn of the century. Before the outbreak of war in Europe, White and Mack were producing trucks with as much as 7.5 tons of carrying capacity. Most of the truck freight was carried on a local basis, and it largely supplemented the longer distance freight transportation provided by the railroads. However, truck size was growing. In 1915 Trailmobile introduced the first four-wheel trailer designed to be pulled by a truck tractor unit. During the First World War, thousands of trucks were constructed for military purposes, and truck convoys showed that long distance truck travel was feasible and economical. The use of trucks to haul freight had been growing by over 18 percent per year since 1925, so that by 1929 intercity trucking accounted for more than one percent of the ton-miles of freight hauled.

The railroads argued that the trucks and buses provided “unfair” competition and believed that if they were also regulated, then the regulation could equalize the conditions under which they competed. As early as 1925, the National Association of Railroad and Utilities Commissioners issued a call for the regulation of motor carriers in general. In 1928 the ICC called for federal regulation of buses and in 1932 extended this call to federal regulation of trucks.

Most states had begun regulating buses at the beginning of the 1920s in an attempt to reduce the diversion of urban passenger traffic from the electric trolley and railway systems. However, most of the regulation did not aim to control intercity passenger traffic by buses. As the network of surfaced roads expanded during the twenties, so did the routes of the intercity buses. In 1929 a number of smaller bus companies were consolidated into the Greyhound Buslines, the carrier that has since dominated intercity bus transportation. (Walsh, 2000)

A complaint of the railroads was that interstate trucking competition was unfair because it was subsidized while railroads were not. All railroad property was privately owned and subject to property taxes, whereas truckers used the existing road system and therefore neither had to bear the costs of creating the road system nor pay taxes upon it. Beginning with the Federal Aid Road Act of 1916, small amounts of money were provided as an incentive for states to construct rural post roads. (Dearing-Owen, 1949) However, through the First World War most of the funds for highway construction came from a combination of levies on the adjacent property owners and county and state taxes. The monies raised by the counties were commonly 60 percent of the total funds allocated, and these came primarily from property taxes. In 1919 Oregon pioneered the state gasoline tax, which then began to be adopted by more and more states. A highway system financed by property taxes and other levies can be construed as a subsidization of motor vehicles, and one study for the period up to 1920 found evidence of substantial subsidization of trucking. (Herbst-Wu, 1973) The use of gasoline taxes, however, moved closer to the goal of users paying the costs of the highways. Nor did the trucks have to pay for all of the highway construction, because automobiles jointly used the highways. Highways did have to be constructed in more costly ways in order to accommodate the larger and heavier trucks, and ideally the gasoline taxes collected from trucks should have covered the extra (or marginal) costs of highway construction incurred because of the truck traffic. Gasoline taxes tended to do this.
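The cost-allocation standard can be sketched symbolically (the notation is illustrative, not drawn from the sources cited): if a mile of highway adequate for automobiles alone costs C_a to build, and a mile built heavily enough to carry trucks as well costs C_t > C_a, then trucks pay their own way when the gasoline taxes collected from them cover at least the increment,

$$ T_{\text{trucks}} \geq C_t - C_a, $$

leaving the common cost C_a to be shared with automobiles.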

The American economy occupies a vast geographic region. Because economic activity occurs over most of the country, falling transportation costs have been crucial to knitting American firms and consumers into a unified market. Throughout the nineteenth century the railroads played this crucial role. Because of the size of the railroad companies and their importance in the economic life of Americans, the federal government began to regulate them. But by 1917 it appeared that the railroad system had achieved some stability, and it was generally assumed that the post-First World War era would be an extension of the era from 1900 to 1917. Nothing could have been further from the truth. Spurred by public investments in highways, cars and trucks voraciously ate into the railroads’ market, and, though the regulators failed to understand this at the time, the railroads’ monopoly on transportation quickly disappeared.

Communications

Communications had joined with transportation developments in the nineteenth century to tie the American economy together more completely. The telegraph had benefited by using the railroads’ rights-of-way, and the railroads used the telegraph to coordinate and organize their far-flung activities. As the cost of communications fell and the speed of information transfers rose, the development of firms with multiple plants at distant locations was facilitated. The interwar era saw a continuation of these developments as the telephone continued to supplant the telegraph and the new medium of radio arose to transmit news and provide a new entertainment source.

Telegraph domination of business and personal communications had given way to the telephone, as the new electronic amplifiers made long distance telephone calls between the east and west coasts possible in 1915. The number of telegraph messages handled grew 60.4 percent in the twenties. The number of local telephone conversations grew 46.8 percent between 1920 and 1930, while the number of long distance conversations grew 71.8 percent over the same period. There were 5 times as many long distance telephone calls as telegraph messages handled in 1920, and 5.7 times as many in 1930.

The twenties were a prosperous period for AT&T and its 18 major operating companies. (Brooks, 1975; Temin, 1987; Garnet, 1985; Lipartito, 1989) Telephone usage rose and, as Figure 19 shows, the share of all households with a telephone rose from 35 percent to nearly 42 percent. In cities across the nation, AT&T consolidated its system, gained control of many operating companies, and virtually eliminated its competitors. It was able to do this because in 1921 Congress passed the Graham Act exempting AT&T from the Sherman Act in consolidating competing telephone companies. By 1940, the non-Bell operating companies were all small relative to the Bell operating companies.

Surprisingly, there was a decline in telephone use on the farms during the twenties. (Hadwiger-Cochran, 1984; Fischer, 1987) Rising telephone rates explain part of the decline in rural use. The imposition of connection fees during the First World War made it more costly for new farmers to hook up. As AT&T gained control of more and more operating systems, telephone rates were increased. AT&T also began requiring, as a condition of interconnection, that independent companies upgrade their systems to meet AT&T standards. Most of the small mutual companies that had provided service to farmers had operated on a shoestring—wires were often strung along fenceposts, and phones were inexpensive “whoop and holler” magneto units. Upgrading to AT&T’s standards raised costs, forcing these companies to raise rates.

However, it also seems likely that during the 1920s there was a general decline in the rural demand for telephone services. One important factor was the dramatic decline in farm incomes in the early twenties. A second was a change in the farmers’ environment. Prior to the First World War, the telephone eased farm isolation and provided news and weather information that was otherwise hard to obtain. After 1920 automobiles, surfaced roads, movies, and the radio loosened the isolation, and the telephone was no longer as crucial.

Ottmar Mergenthaler’s development of the linotype machine in the late nineteenth century had irrevocably altered printing and publishing. Previously, all type had to be painstakingly set by hand, with individual cast letter matrices picked out from compartments in drawers to construct words, lines, and paragraphs. After printing, each line of type on the page had to be broken down and each individual letter matrix placed back into its compartment in its drawer for use in the next printing job. Because of this laborious process, newspapers often were not published every day and did not contain many pages, with the result that most cities had many newspapers. The linotype, in contrast, used a keyboard upon which the operator typed the words in one of the lines in a news column. Matrices for each letter dropped down from a magazine of matrices as the operator typed each letter and were assembled into a line of type with automatic spacers to justify the line (fill out the column width). When the line was completed, the machine mechanically cast the line of matrices into a line of soft, lead-based type that could be printed, melted down, and then recast as a new line of type. The line of lead type was ejected into a tray, and the letter matrices were mechanically returned to the magazine while the operator continued typing the next line in the news story. The first Mergenthaler linotype machine was installed in the New York Tribune in 1886. The linotype dramatically lowered the costs of printing newspapers (as well as books and magazines). Prior to the linotype a typical newspaper averaged no more than 11 pages, and many were published only a few times a week. The linotype allowed newspapers to grow in size, and they began to be published more regularly. A process of consolidation of daily and Sunday newspapers began that continues to this day. Many have termed the Mergenthaler linotype the most significant printing invention since the introduction of movable type 400 years earlier.

For city families as well as farm families, radio became the new source of news and entertainment. (Barnouw, 1966; Rosen, 1980 and 1987; Chester-Garrison, 1950) It soon took over as the prime advertising medium and in the process revolutionized advertising. By 1930 more homes had radio sets than had telephones. The radio networks sent news and entertainment broadcasts all over the country. The isolation of rural life, particularly in many areas of the plains, was forever broken by the intrusion of the “black box,” as radio receivers were often called. The radio began a process of breaking down regionalism and creating a common culture in the United States.

The potential demand for radio became clear with the first regular broadcast of Westinghouse’s KDKA in Pittsburgh in the fall of 1920. Because the Department of Commerce could not deny a license application, there was an explosion of stations all broadcasting at the same frequency, and signal jamming and interference became a serious problem. By 1923 the Department of Commerce had gained control of radio from the Post Office and the Navy and began to arbitrarily disperse stations on the radio dial and deny licenses, creating the first market in commercial broadcast licenses. In 1926 a U.S. District Court decided that under the Radio Law of 1912 Herbert Hoover, the secretary of commerce, did not have this power. New stations appeared, and the logjam and interference of signals worsened. A Radio Act was passed in January of 1927, creating the Federal Radio Commission (FRC) as a temporary licensing authority. Licenses were to be issued in the public interest, convenience, and necessity. A number of broadcasting licenses were revoked; stations were assigned frequencies, dial locations, and power levels. The FRC created 24 clear-channel stations with as much as 50,000 watts of broadcasting power, of which 21 ended up being affiliated with the new national radio networks. The Communications Act of 1934 essentially repeated the 1927 act, except that it created a permanent, seven-person Federal Communications Commission (FCC).

Local stations initially created and broadcast the radio programs. The expenses were modest, and stores and companies operating radio stations wrote them off as indirect, goodwill advertising. Several forces changed all this. In 1922, AT&T opened a radio station in New York City, WEAF (later to become WNBC). AT&T envisioned this station as the center of a radio toll system in which individuals could purchase time to broadcast a message transmitted to other stations in the toll network using AT&T’s long distance lines. An August 1922 broadcast by a Long Island realty company became the first conscious use of direct advertising.

Though advertising continued to be condemned, the fiscal pressures on radio stations to accept advertising began rising. In 1923 the American Society of Composers, Authors and Publishers (ASCAP) began demanding a performance fee anytime ASCAP-copyrighted music was performed on the radio, either live or on record. By 1924 the issue was settled, and most stations began paying performance fees to ASCAP. AT&T decided that all stations broadcasting with non-AT&T transmitters were violating its patent rights and began asking for annual fees from such stations based on the station’s power. By the end of 1924, most stations were paying the fees. All of this drained the coffers of the radio stations, and more and more of them began discreetly accepting advertising.

RCA became upset at AT&T’s creation of a chain of radio stations and set up its own toll network using the inferior lines of Western Union and Postal Telegraph, because AT&T, not surprisingly, did not allow any toll (or network) broadcasting on its lines except by its own stations. AT&T began to worry that its actions might threaten its federal monopoly in long distance telephone communications. In 1926 a new firm was created, the National Broadcasting Company (NBC), which took over all broadcasting activities from AT&T and RCA as AT&T left broadcasting. When NBC debuted in November of 1926, it had two networks: the Red, which was the old AT&T network, and the Blue, which was the old RCA network. Radio networks allowed advertisers to direct advertising at a national audience at a lower cost. Network programs allowed local stations to broadcast superior programs that captured a larger listening audience, and in return the stations received a share of the fees the national advertiser paid to the network. In 1927 a new network, the Columbia Broadcasting System (CBS), financed by the Paley family, began operation, and other new networks entered or tried to enter the industry in the 1930s.

Communications developments in the interwar era present something of a mixed picture. By 1920 long distance telephone service was in place, but rising rates slowed the rate of adoption in the period, and telephone use in rural areas declined sharply. Though direct dialing was first tried in the twenties, its general implementation would not come until the postwar era, when other changes, such as microwave transmission of signals and touch-tone dialing, would also appear. Though the number of newspapers declined, newspaper circulation generally held up. The number of competing newspapers in larger cities began declining, a trend that also would accelerate in the postwar American economy.

Banking and Securities Markets

In the twenties commercial banks became “department stores of finance.” Banks opened installment (or personal) loan departments, expanded their mortgage lending, opened trust departments, undertook securities underwriting activities, and offered safe deposit boxes. These changes were a response to growing competition from other financial intermediaries. Businesses, stung by bankers’ control and reduced lending during the 1920-21 depression, began relying more on retained earnings and stock and bond issues to finance investment and, sometimes, working capital. This reduced loan demand. The thrift institutions also experienced good growth in the twenties as they helped fuel the housing construction boom of the decade. The securities markets boomed in the twenties, only to see a dramatic crash of the stock market in late 1929.

There were two broad classes of commercial banks: those that were nationally chartered and those that were chartered by the states. Only the national banks were required to be members of the Federal Reserve System. (Figure 21) Most banks were unit banks because national regulators and most state regulators prohibited branching. However, in the twenties a few states began to permit limited branching; California even allowed statewide branching. The Federal Reserve member banks held the bulk of the assets of all commercial banks, even though most banks were not members. A high bank failure rate in the 1920s has usually been explained by “overbanking,” or too many banks located in an area, but H. Thomas Johnson (1973-74) makes a strong argument against this. (Figure 22) If there had been overbanking, a common cause would have been the free entry of banks that met the minimum requirements then in force, and on average each bank would have been underutilized, resulting in intense competition for deposits, higher costs, and lower earnings. However, the twenties saw changes that led to the demise of many smaller rural banks that would likely have been profitable if these changes had not occurred. Improved transportation led to a movement of business activities, including banking, into the larger towns and cities. Rural banks that relied on loans to farmers suffered just as farmers did during the twenties, especially in the first half of the twenties. The number of bank suspensions and the suspension rate fell after 1926. The sharp rise in bank suspensions in 1930 occurred because of the first banking crisis during the Great Depression.

Prior to the twenties, the main assets of commercial banks were short-term business loans, made by creating a demand deposit or increasing an existing one for a borrowing firm. As business lending declined in the 1920s, commercial banks vigorously moved into new types of financial activities. As banks purchased more securities for their earning-asset portfolios and gained expertise in the securities markets, larger ones established investment departments, and by the late twenties they were an important force in the underwriting of new securities issued by nonfinancial corporations.

The securities markets exhibited perhaps the most dramatic growth among the noncommercial-bank financial intermediaries during the twenties, but others also grew rapidly. (Figure 23) The assets of life insurance companies increased by 10 percent a year from 1921 to 1929; by the late twenties they were a very important source of funds for construction investment. Mutual savings banks and savings and loan associations (thrifts) operated in essentially the same types of markets. The mutual savings banks were concentrated in the northeastern United States. As incomes rose, personal savings increased, and housing construction expanded in the twenties, there was an increasing demand for the thrifts’ interest-earning time deposits and mortgage lending.

But the dramatic expansion in the financial sector came in new corporate securities issues in the twenties—especially common and preferred stock—and in the trading of existing shares of those securities. (Figure 24) The late-twenties boom in the American economy was rapid, highly visible, and dramatic. Skyscrapers were being erected in most major cities; the automobile manufacturers produced over four and a half million new cars in 1929; and the stock market, like a barometer of this prosperity, was on a dizzying ride to higher and higher prices. “Playing the market” seemed to become a national pastime.

The Dow-Jones index hit its peak of 381 on September 3 and then slid to 320 on October 21. In the following week the stock market “crashed,” with a record number of shares being traded on several days. At the end of Tuesday, October 29, the index stood at 230, 96 points less than one week before. On November 13, 1929, the Dow-Jones index reached its lowest point for the year at 198—183 points less than the September 3 peak.

The path of the stock market boom of the twenties can be seen in Figure 25. Sharp price breaks occurred several times during the boom, and each of these gave rise to dark predictions of the end of the bull market and speculation. Until late October of 1929, these predictions turned out to be wrong. Between those price breaks and prior to the October crash, stock prices continued to surge upward. In March of 1928, 3,875,910 shares were traded in one day, establishing a record. By late 1928, five million shares being traded in a day was a common occurrence.

New securities, from rising merger activity and the formation of holding companies, were issued to take advantage of the rising stock prices. Stock pools, which were not illegal until the 1934 Securities and Exchange Act, took advantage of the boom to temporarily drive up the price of selected stocks and reap large gains for the members of the pool. In a stock pool a group of speculators would pool large amounts of their funds and then begin purchasing large amounts of shares of a stock. This increased demand led to rising prices for that stock. Frequently pool insiders would “churn” the stock by repeatedly buying and selling the same shares among themselves, but at rising prices. Outsiders, seeing the price rising, would decide to purchase the stock. At a predetermined higher price the pool members would, within a short period, sell their shares and pull out of the market for that stock. Without the additional demand from the pool, the stock’s price usually fell quickly, bringing large losses for the unsuspecting outside investors and large gains for the pool insiders.

Another factor commonly used to explain both the speculative boom and the October crash was the purchase of stocks on small margins. However, contrary to popular perception, margin requirements through most of the twenties were essentially the same as in previous decades: the usual requirements were 10 to 15 percent of the purchase price, and, apparently, more often around 10 percent. Brokers, recognizing the problems with margin lending in the rapidly changing market, began raising margin requirements in late 1928, well before the crash and at the urging of a special New York Clearinghouse committee, and by the fall of 1929 margin requirements were, on average, the highest in the history of the New York Stock Exchange. One brokerage house required the following of its clients: securities with a selling price below $10 could be purchased only for cash; securities with a selling price of $10 to $20 required a 50 percent margin; securities of $20 to $30, a 40 percent margin; and securities with a price above $30, a margin of 30 percent of the purchase price. In the first half of 1929 margin requirements on customers’ accounts averaged 40 percent, and some houses raised their margins to 50 percent a few months before the crash. These were, historically, very high margin requirements. (Smiley and Keehn, 1988) Even so, during the crash, when additional margin calls were issued, investors who could not provide additional margin saw their brokers sell their stock at whatever the market price was at the time, and these forced sales helped drive prices even lower.
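Under the brokerage schedule just quoted, a hypothetical purchase of 100 shares of a $25 stock would fall in the 40 percent bracket:

$$ 0.40 \times (100 \times \$25) = \$1{,}000 \text{ of the buyer's own funds,} \qquad \$1{,}500 \text{ borrowed from the broker.} $$

A subsequent fall in the stock's price eroded the buyer's equity first; if the price fell far enough, the broker issued a call for additional margin, and failure to meet the call meant the forced sale described above.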

The crash began on Monday, October 21, as the index of stock prices fell 3 points on the third-largest volume in the history of the New York Stock Exchange. After a slight rally on Tuesday, prices began declining on Wednesday and fell 21 points by the end of the day, bringing on the third call for more margin in that week. On Black Thursday, October 24, prices initially fell sharply, but rallied somewhat in the afternoon so that the net loss was only 7 points; the volume of thirteen million shares, however, set a NYSE record. Friday brought a small gain that was wiped out on Saturday. On Monday, October 28, the Dow-Jones index fell 38 points on a volume of nine million shares—three million in the final hour of trading. Black Tuesday, October 29, brought declines in virtually every stock price. Manufacturing firms, which had been lending large sums to brokers for margin loans, had been calling in these loans, and this accelerated on Monday and Tuesday. The big Wall Street banks increased their lending on call loans to offset some of this loss of loanable funds. The Dow-Jones index fell 30 points on a record volume of nearly sixteen and a half million shares exchanged. Black Thursday and Black Tuesday wiped out entire fortunes.

Though the worst was over, prices continued to decline until November 13, 1929, as brokers cleaned up their accounts and sold off the stocks of clients who could not supply additional margin. After that, prices began to slowly rise, and by April of 1930 they had increased 96 points from the low of November 13, “only” 87 points less than the peak of September 3, 1929. From that point, stock prices resumed their depressing decline until the low point was reached in the summer of 1932.

 

—There is a long tradition that insists that the Great Bull Market of the late twenties was an orgy of speculation that bid the prices of stocks far above any sustainable or economically justifiable level creating a bubble in the stock market. John Kenneth Galbraith (1954) observed, “The collapse in the stock market in the autumn of 1929 was implicit in the speculation that went before.”—But not everyone has agreed with this.

In 1930 Irving Fisher argued that the stock prices of 1928 and 1929 were based on fundamental expectations that future corporate earnings would be high. More recently, Murray Rothbard (1963), Gerald Gunderson (1976), and Jude Wanniski (1978) have argued that stock prices were not too high prior to the crash. Gunderson suggested that prior to 1929 stock prices were where they should have been, and that when corporate profits in the summer and fall of 1929 failed to meet expectations, stock prices were written down. Wanniski argued that political events brought on the crash: the market broke each time news arrived of advances in congressional consideration of the Smoot-Hawley tariff. However, the virtually perfect foresight that Wanniski’s explanation requires is unrealistic. Charles Kindleberger (1973) and Peter Temin (1976) examined common stock yields and price-earnings ratios and found that their relative constancy did not suggest that stock prices were bid up unrealistically high in the late twenties. Gary Santoni and Gerald Dwyer (1990) also failed to find evidence of a bubble in stock prices in 1928 and 1929. Gerald Sirkin (1975) found that the implied growth rates of dividends required to justify stock prices in 1928 and 1929 were quite conservative, lower than post-Second World War dividend growth rates.
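
Sirkin’s point can be illustrated with the constant-growth (“Gordon”) valuation formula, under which a price-dividend ratio P/D and a required return r imply a dividend growth rate g = r − D/P. The sketch below uses an assumed 6 percent required return and assumed price-dividend ratios; none of these numbers are Sirkin’s actual inputs.

```python
# Illustration of Sirkin-style "implied growth" arithmetic using the
# constant-growth (Gordon) model P = D / (r - g), rearranged to g = r - D/P.
# The 6 percent required return and the price-dividend ratios are assumed
# numbers for illustration, not Sirkin's actual inputs.

def implied_dividend_growth(price_dividend_ratio, required_return):
    """Growth rate g that exactly justifies a given price-dividend ratio."""
    return required_return - 1.0 / price_dividend_ratio

r = 0.06  # assumed required return on equities
for pd in (20, 25, 33):
    g = implied_dividend_growth(pd, r)
    print(f"P/D = {pd}: dividends need to grow only {g:.1%} per year")
```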

However, examination of after-the-fact common stock yields and price-earnings ratios can do no more than provide some ex post justification for suggesting that there was not excessive speculation during the Great Bull Market. Each individual investor was motivated by that person’s subjective expectations of each firm’s future earnings and dividends and of the future prices of shares of each firm’s stock. Because of this element of subjectivity, not only can we never accurately know those values, but we can also never know how they varied among individuals. The market price we observe is the end result of all of the actions of the market participants, and the observed price may be different from the price almost all of the participants expected.

In fact, there are some indications that market conditions were different in 1928 and 1929. Yields on common stocks were somewhat lower in those years. In October of 1928, brokers generally began raising margin requirements, and by the beginning of the fall of 1929, margin requirements were, on average, the highest in the history of the New York Stock Exchange. Though the discount and commercial paper rates had moved closely with the call and time rates on brokers’ loans through 1927, the rates on brokers’ loans increased much more sharply in 1928 and 1929. This pulled in funds from corporations, private investors, and foreign banks as New York City banks sharply reduced their lending. These facts suggest that brokers and New York City bankers may have come to believe that stock prices had been bid above a sustainable level by late 1928 and early 1929. White (1990) created a quarterly index of dividends for firms in the Dow-Jones index and related this to the DJI. Through 1927 the two track closely, but in 1928 and 1929 the index of stock prices grows much more rapidly than the index of dividends.

The qualitative evidence for a bubble in the stock market in 1928 and 1929 that White assembled was strengthened by the findings of J. Bradford De Long and Andre Shleifer (1991). They examined closed-end mutual funds, a type of fund in which investors wishing to liquidate must sell their shares to other individual investors, so that the fund’s fundamental value, the value of the securities it holds, is exactly measurable and can be compared directly with the fund’s market price. Using evidence from these funds, De Long and Shleifer estimated that in the summer of 1929 the Standard and Poor’s composite stock price index was overvalued about 30 percent due to excessive investor optimism. Rappoport and White (1993 and 1994) found other evidence that supported a bubble in the stock market in 1928 and 1929: there was a sharp divergence between the growth of stock prices and dividends; there were increasing premiums on call and time brokers’ loans in 1928 and 1929; margin requirements rose; and stock market volatility rose in the wake of the 1929 stock market crash.
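
Because a closed-end fund’s net asset value (NAV) is directly observable, the premium of its share price over NAV can be read as a measure of investor sentiment. A minimal sketch of that calculation follows; all prices and NAVs in it are made-up numbers for illustration, not De Long and Shleifer’s data.

```python
# Sketch of the closed-end fund measure behind De Long and Shleifer's estimate.
# A closed-end fund's net asset value (NAV) is directly observable, so the
# premium of its share price over NAV isolates investor sentiment. All prices
# and NAVs below are made-up numbers for illustration.

def fund_premium(share_price, nav_per_share):
    """Fractional premium (positive) or discount (negative) relative to NAV."""
    return (share_price - nav_per_share) / nav_per_share

observations = [
    ("Fund A", 130.0, 100.0),  # hypothetical 1929-style premium
    ("Fund B", 112.0, 100.0),
    ("Fund C", 95.0, 100.0),   # a fund trading at a discount
]
for name, price, nav in observations:
    print(f"{name}: premium {fund_premium(price, nav):+.0%}")
# A broad pattern of large positive premiums, as in the summer of 1929,
# indicates prices pushed above fundamentals by optimism.
```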

There are several reasons for the creation of such a bubble. First, the fundamental values of earnings and dividends become difficult to assess when there are major industrial changes, such as the rapid changes in the automobile industry, the new electric utilities, and the new radio industry. Eugene White (1990) suggests that “While investors had every reason to expect earnings to grow, they lacked the means to evaluate easily the future path of dividends.” As a result, investors bid up prices as they were swept up in the ongoing stock market boom. Second, participation in the stock market widened noticeably in the twenties. The new investors were relatively unsophisticated, more likely to be caught up in the euphoria of the boom and to bid prices upward. New, inexperienced commission sales personnel were hired to sell stocks, and they promised glowing returns on stocks they knew little about.

These observations were strengthened by the experimental work of economist Vernon Smith. (Bishop, 1987) In a number of experiments over a three-year period using students and Tucson businessmen and businesswomen, bubbles developed as inexperienced investors valued stocks differently and engaged in price speculation. As these investors in the experiments began to realize that speculative profits were unsustainable and uncertain, their dividend expectations changed, the market crashed, and ultimately stocks began trading at their fundamental dividend values. These bubbles and crashes occurred repeatedly, leading Smith to conjecture that there are few regulatory steps that can be taken to prevent a crash.

Though the bubble of 1928 and 1929 made some downward adjustment in stock prices inevitable, as Barsky and De Long have shown, changes in fundamentals govern the overall movements, and the end of the long bull market was almost certainly governed by them. In late 1928 and early 1929 there was a striking rise in economic activity, but a decline began somewhere between May and July of 1929 and was clearly evident by August. By the middle of August, the rise in stock prices had slowed as better information on the contraction was received. There were repeated statements by leading figures that stocks were “overpriced,” and the Federal Reserve System sharply increased the discount rate in August 1929 as well as continuing its call for banks to reduce their margin lending. As this information was assessed, the number of speculators selling stocks increased and the number buying decreased. With the decreased demand, stock prices began to fall, and as more accurate information on the nature and extent of the decline was received, stock prices fell further. The late October crash made the decline occur much more rapidly, and the margin purchases and consequent forced selling of many of those stocks contributed to a more severe fall in prices. The recovery of stock prices from November 13 into April of 1930 suggests that stock prices may have been driven somewhat too low during the crash.

There is now widespread agreement that the 1929 stock market crash did not cause the Great Depression. Instead, the initial downturn in economic activity was a primary determinant of the ending of the 1928-29 stock market bubble. The stock market crash did make the downturn more severe beginning in November 1929. It reduced discretionary consumption spending (Romer, 1990) and created greater income uncertainty, helping to bring on the contraction (Flacco and Parker, 1992). Though stock market prices reached a bottom and began to recover following November 13, 1929, the continuing decline in economic activity took its toll, and by May 1930 stock prices resumed their decline and continued to fall through the summer of 1932.

Domestic Trade

In the nineteenth century, a complex array of wholesalers, jobbers, and retailers had developed, but changes in the postbellum period reduced the role of the wholesalers and jobbers and strengthened the importance of the retailers in domestic trade. (Cochran, 1977; Chandler, 1977; Marburg, 1951; Clewett, 1951) The appearance of the department store in the major cities and the rise of mail order firms in the postbellum period changed the retailing market.

Department Stores

A department store is a combination of specialty stores organized as departments within one general store. A. T. Stewart’s huge 1846 dry goods store in New York City is often referred to as the first department store. (Resseguie, 1965; Sobel-Sicilia, 1986) R. H. Macy started his dry goods store in 1858 and Wanamaker’s in Philadelphia opened in 1876. By the end of the nineteenth century, every city of any size had at least one major department store. (Appel, 1930; Benson, 1986; Hendrickson, 1979; Hower, 1946; Sobel, 1974) Until the late twenties, the department store field was dominated by independent stores, though some department stores in the largest cities had opened a few suburban branches and stores in other cities. In the interwar period department stores accounted for about 8 percent of retail sales.

The department stores relied on a “one-price” policy, which Stewart is credited with beginning. In the antebellum period and into the postbellum period, it was common not to post a specific price on an item; rather, each purchaser haggled with a sales clerk over what the price would be. Stewart posted fixed prices on the various dry goods sold, and the customer could either decide to buy or not buy at the fixed price. The policy dramatically lowered transactions costs for both the retailer and the purchaser. Prices were reduced with a smaller markup over the wholesale price, and a large sales volume and a quicker turnover of the store’s inventory generated profits.
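
A back-of-the-envelope comparison shows how a thinner markup combined with higher volume and faster inventory turnover could yield larger gross profits than haggled, high-markup sales. All figures in the sketch below are hypothetical.

```python
# Back-of-the-envelope sketch of the one-price, high-turnover economics
# described above. All figures are hypothetical.

def annual_gross_profit(wholesale_cost, markup, units_per_turn, turns_per_year):
    """Gross profit = per-unit markup in dollars * units per turn * turns."""
    return wholesale_cost * markup * units_per_turn * turns_per_year

# A haggling store: high markup on slow-moving stock.
haggle = annual_gross_profit(wholesale_cost=1.00, markup=0.50,
                             units_per_turn=1000, turns_per_year=2)
# A one-price store: thin markup, larger volume, faster inventory turnover.
one_price = annual_gross_profit(wholesale_cost=1.00, markup=0.15,
                                units_per_turn=2000, turns_per_year=6)

print(f"haggling store:  ${haggle:,.0f}")     # $1,000
print(f"one-price store: ${one_price:,.0f}")  # $1,800
```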

Mail Order Firms

What changed the department store field in the twenties was the entrance of Sears Roebuck and Montgomery Ward, the two dominant mail order firms in the United States. (Emmet-Jeuck, 1950; Chandler, 1962, 1977) Both firms had begun in the late nineteenth century, and by 1914 the younger Sears Roebuck had surpassed Montgomery Ward. Both were located in Chicago, due to its central location in the nation’s rail network, and both benefited from the advent of Rural Free Delivery in 1896 and low-cost Parcel Post Service in 1912.

In 1924 Sears hired Robert E. Wood, who was able to convince Sears Roebuck to open retail stores. Wood believed that the declining rural population and the growing urban population forecast the gradual demise of the mail order business; survival of the mail order firms required a move into retail sales. By 1925 Sears Roebuck had opened 8 retail stores, and by 1929 it had 324 stores. Montgomery Ward quickly followed suit. Rather than locating these stores in the central business district (CBD), Wood placed many of them on major streets closer to the residential areas. These moves by Sears Roebuck and Montgomery Ward expanded department store retailing and provided a new type of chain store.

Chain Stores

Though chain stores grew rapidly in the first two decades of the twentieth century, they date back to the 1860s when George F. Gilman and George Huntington Hartford opened a string of New York City A&P (Atlantic and Pacific) stores exclusively to sell tea. (Beckman-Nolen, 1938; Lebhar, 1963; Bullock, 1933) Stores were opened in other regions and in 1912 their first “cash-and-carry” full-range grocery was opened. Soon they were opening 50 of these stores each week and by the 1920s A&P had 14,000 stores. They then phased out the small stores to reduce the chain to 4,000 full-range, supermarket-type stores. A&P’s success led to new grocery store chains such as Kroger, Jewel Tea, and Safeway.

Prior to A&P’s cash-and-carry policy, it was common for grocery stores, produce (or green) grocers, and meat markets to provide home delivery and credit, both of which were costly. As a result, retail prices were generally marked up well above the wholesale prices. In cash-and-carry stores, items were sold only for cash; no credit was extended, and no expensive home deliveries were provided. Markups on prices could be much lower because other costs were much lower. Consumers liked the lower prices and were willing to pay cash and carry their groceries, and the policy became common by the twenties.

Chains also developed in other retail product lines. In 1879 Frank W. Woolworth developed a “5 and 10 Cent Store,” or dime store, and there were over 1,000 F. W. Woolworth stores by the mid-1920s. (Winkler, 1940) Other firms such as Kresge, Kress, and McCrory successfully imitated Woolworth’s dime store chain. J.C. Penney’s dry goods chain store began in 1901 (Beasley, 1948), Walgreen’s drug store chain began in 1909, and shoes, jewelry, cigars, and other lines of merchandise also began to be sold through chain stores.

Self-Service Policies

In 1916 Clarence Saunders, a grocer in Memphis, Tennessee, built upon the one-price policy and began offering self-service at his Piggly Wiggly store. Previously, customers handed a clerk a list or asked for the items desired, which the clerk then collected and the customer paid for. With self-service, items for sale were placed on open shelves among which the customers could walk, carrying a shopping bag or pushing a shopping cart. Each customer could then browse as he or she pleased, picking out whatever was desired. Saunders and other retailers who adopted the self-service method of retail selling found that customers often purchased more because of exposure to the array of products on the shelves; as well, self-service lowered the labor required for retail sales and therefore lowered costs.

Shopping Centers

Shopping centers, another innovation in retailing that began in the twenties, were not destined to become a major force in retail development until after the Second World War. The ultimate cause of this innovation was the widening ownership and use of the automobile. By the 1920s, as ownership and use of the car expanded, population began to move out of the crowded central cities toward the more open suburbs. When General Robert Wood set Sears off on its development of urban stores, he located them not in the central business district but as free-standing stores on major arteries away from the CBD, with sufficient space for parking.

At about the same time, a few entrepreneurs began to develop shopping centers. Yehoshua Cohen (1972) says, “The owner of such a center was responsible for maintenance of the center, its parking lot, as well as other services to consumers and retailers in the center.” Perhaps the earliest such shopping center was the Country Club Plaza built in 1922 by the J. C. Nichols Company in Kansas City, Missouri. Other early shopping centers appeared in Baltimore and Dallas. By the mid-1930s the concept of a planned shopping center was well known and was expected to be the means to capture the trade of the growing number of suburban consumers.

International Trade and Finance

In the twenties a gold exchange standard was developed to replace the gold standard of the prewar world. Under a gold standard, each country’s currency carried a fixed exchange rate with gold, and the currency had to be backed up by gold. As a result, all countries on the gold standard had fixed exchange rates with all other countries. Adjustments to balance international trade flows were made by gold flows. If a country had a deficit in its trade balance, gold would leave the country, forcing the money stock to decline and prices to fall. Falling prices made the deficit countries’ exports more attractive and imports more costly, reducing the deficit. Countries with a surplus imported gold, which increased the money stock and caused prices to rise. This made the surplus countries’ exports less attractive and imports more attractive, decreasing the surplus. Most economists who have studied the prewar gold standard contend that it did not work as the conventional textbook model says, because capital flows frequently reduced or eliminated the need for gold flows for long periods of time. However, there is no consensus on whether fortuitous circumstances, rather than the gold standard, saved the international economy from periodic convulsions or whether the gold standard as it did work was sufficient to promote stability and growth in international transactions.
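
The textbook adjustment mechanism can be captured in a stylized two-country simulation, assuming a crude quantity theory in which each country’s price level is proportional to its gold-backed money stock and its trade deficit is proportional to the price gap. The parameters below are illustrative assumptions, not estimates.

```python
# Stylized two-country simulation of the price-specie-flow mechanism described
# above. It assumes a crude quantity theory (each price level proportional to
# the country's gold-backed money stock) and a trade deficit proportional to
# the price gap. All parameters are illustrative assumptions, not estimates.

def simulate(gold_a=120.0, gold_b=80.0, sensitivity=0.25, periods=8):
    for t in range(periods):
        price_a = gold_a / 100.0  # price level ~ money stock ~ gold holdings
        price_b = gold_b / 100.0
        # Country A runs a trade deficit when its prices exceed B's,
        # and settles the deficit by shipping gold to B.
        flow = sensitivity * (price_a - price_b) * 100.0
        gold_a -= flow
        gold_b += flow
        print(f"t={t}: P_A={price_a:.3f}  P_B={price_b:.3f}  gold A->B={flow:+.1f}")

# Start country A with an inflated money stock; prices converge as gold flows.
simulate()
```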

After the First World War it was argued that there was a “shortage” of fluid monetary gold to use for the gold standard, so some method of “economizing” on gold had to be found. To do this, two basic changes were made. First, most nations, other than the United States, stopped the domestic circulation of gold. Second, the “gold exchange” system was created: most countries held their international reserves in the form of U.S. dollars or British pounds, and international transactions used dollars or pounds, so long as the United States and Great Britain stood ready to exchange their currencies for gold at fixed exchange rates. However, the overvaluation of the pound and the undervaluation of the franc threatened these arrangements. The British trade deficit led to a capital outflow, higher interest rates, and a weak economy. In the late twenties, the French trade surplus led to an importation of gold that France did not allow to expand its money supply.

Economizing on gold by no longer allowing its domestic circulation and by using key currencies as international monetary reserves was really an attempt to place the domestic economies under the control of the nations’ politicians and make them independent of international events. Unfortunately, in doing this politicians eliminated the equilibrating mechanism of the gold standard but had nothing with which to replace it. The new international monetary arrangements of the twenties were potentially destabilizing because they were not allowed to operate as a price mechanism promoting equilibrating adjustments.

There were other problems with international economic activity in the twenties. Because of the war, the United States was abruptly transformed from a debtor to a creditor on international accounts. Though the United States did not want reparations payments from Germany, it did insist that Allied governments repay American loans. The Allied governments then insisted on war reparations from Germany. These initial reparations assessments were quite large. The Allied Reparations Commission collected the charges by supervising Germany’s foreign trade and by internal controls on the German economy, and it was authorized to increase the reparations if it was felt that Germany could pay more. The treaty allowed France to occupy the Ruhr after Germany defaulted in 1923.

Ultimately, this tangled web of debts and reparations, which was a major factor in the course of international trade, depended upon two principal actions. First, the United States had to run an import surplus or, on net, export capital out of the United States to provide a pool of dollars overseas. Second, Germany had either to run an export surplus or else import American capital so as to build up dollar reserves; that is, the dollars the United States was exporting. In effect, these dollars were paid by Germany to Great Britain, France, and other countries, which then shipped them back to the United States as payment on their U.S. debts. If these conditions did not occur (and note that the “new” gold standard of the twenties had lost its flexibility because the price adjustment mechanism had been eliminated), disruption in international activity could easily occur and be transmitted to the domestic economies.

In the wake of the 1920-21 depression Congress passed the Emergency Tariff Act of 1921, which raised tariffs, particularly on manufactured goods. (Figures 26 and 27) The Fordney-McCumber Tariff of 1922 continued the Emergency Tariff’s protection, and on many items that protection was extremely high, ranging from 60 to 100 percent ad valorem (that is, as a percent of the price of the item). The increases in the Fordney-McCumber tariff were as large as, and sometimes larger than, those in the more famous (or “infamous”) Smoot-Hawley tariff of 1930. As farm product prices fell at the end of the decade, presidential candidate Herbert Hoover proposed, as part of his platform, tariff increases and other changes to aid the farmers. In January 1929, after Hoover’s election but before he took office, a tariff bill was introduced into Congress. Special interests succeeded in gaining additional (or new) protection for most domestically produced commodities, and the goal of greater protection for the farmers tended to get lost in the increased protection for multitudes of American manufactured products. In spite of widespread condemnation by economists, President Hoover signed the Smoot-Hawley Tariff in June 1930, and rates rose sharply.

Following the First World War, the U.S. government actively promoted American exports, and in each of the postwar years through 1929 the United States recorded a surplus in its balance of trade. (Figure 28) However, the surplus declined in the 1930s as both exports and imports fell sharply after 1929. From the mid-1920s on, finished manufactures were the most important exports, while agricultural products dominated American imports.

The majority of the funds that allowed Germany to make its reparations payments to France and Great Britain and hence allowed those countries to pay their debts to the United States came from the net flow of capital out of the United States in the form of direct investment in real assets and investments in long- and short-term foreign financial assets. After the devastating German hyperinflation of 1922 and 1923, the Dawes Plan reformed the German economy and currency and accelerated the U.S. capital outflow. American investors began to actively and aggressively pursue foreign investments, particularly loans (Lewis, 1938) and in the late twenties there was a marked deterioration in the quality of foreign bonds sold in the United States. (Mintz, 1951)

The system, then, worked well as long as there was a net outflow of American capital, but this did not continue. In the middle of 1928, the flow of short-term capital began to decline. In 1928 the flow of “other long-term” capital out of the United States was 752 million dollars, but in 1929 it was only 34 million dollars. Though arguments now exist as to whether the booming stock market in the United States was to blame for this, it had far-reaching effects on the international economic system and the various domestic economies.

The Start of the Depression

The United States held the largest share of the world’s monetary gold, about 40 percent, by 1920. In the latter part of the twenties, France also began accumulating gold as its share of the world’s monetary gold rose from 9 percent in 1927 to 17 percent in 1929 and 22 percent by 1931. In 1927 the Federal Reserve System had reduced discount rates (the interest rate at which it lent reserves to member commercial banks) and engaged in open market purchases (purchasing U.S. government securities on the open market to increase the reserves of the banking system) to push down interest rates and assist Great Britain in staying on the gold standard. By early 1928 the Federal Reserve System was worried about its loss of gold due to this policy as well as the ongoing boom in the stock market. It began to raise the discount rate to stop these outflows. Gold was also entering the United States so that foreigners could obtain dollars to invest in stocks and bonds. As the United States and France accumulated more and more of the world’s monetary gold, other countries’ central banks took contractionary steps to stem the loss of gold. In country after country these deflationary strategies began contracting economic activity, and by 1928 some countries in Europe, Asia, and South America had entered into a depression. More countries’ economies began to decline in 1929, including the United States, and by 1930 a depression was in force for almost all of the world’s market economies. (Temin, 1989; Eichengreen, 1992)

Monetary and Fiscal Policies in the 1920s

Fiscal Policies

As a tool to promote stability in aggregate economic activity, fiscal policy is largely a post-Second World War phenomenon. Prior to 1930 the federal government’s spending and taxing decisions were largely, but not completely, based on the perceived “need” for government-provided public goods and services.

Though the fiscal policy concept had not been developed, this does not mean that during the twenties no concept of the government’s role in stimulating economic activity existed. Herbert Stein (1990) points out that in the twenties Herbert Hoover and some of his contemporaries shared two ideas about the proper role of the federal government: first, that federal spending on public works could be an important force in reducing unemployment, and second, that the timing of such spending could help stabilize private investment. (Smiley and Keehn, 1995) Both concepts fit the ideas held by Hoover and others of his persuasion that the U.S. economy of the twenties was not the result of laissez-faire workings but of “deliberate social engineering.”

The federal personal income tax was enacted in 1913. Though mildly progressive, its rates were low and topped out at 7 percent on taxable income in excess of $750,000. (Table 4) As the United States prepared for war in 1916, rates were increased and reached a maximum marginal rate of 12 percent. With the onset of the First World War, the rates were dramatically increased, and marginal rates were increased again in 1918 to obtain additional revenue. The share of federal revenue generated by income taxes rose from 11 percent in 1914 to 69 percent in 1920. The tax rates had been extended downward so that more than 30 percent of the nation’s income recipients were subject to income taxes by 1918. However, through the purchase of tax-exempt state and local securities and through steps taken by corporations to avoid the cash distribution of profits, the number of high-income taxpayers and their share of total taxes paid declined even as Congress kept increasing the tax rates. The normal (or base) tax rate was reduced slightly for 1919, but the surtax rates, which made the income tax highly progressive, were retained. (Smiley and Keehn, 1995)

President Harding’s new Secretary of the Treasury, Andrew Mellon, proposed cutting the tax rates, arguing that the rates in the higher brackets had “passed the point of productivity” and that rates in excess of 70 percent simply could not be collected. Though most agreed that the rates were too high, there was sharp disagreement on how the rates should be cut. Democrats and Progressive Republicans argued for rate cuts targeted at lower-income taxpayers while maintaining most of the steep progressivity of the tax rates; they believed that remedies could be found to change the tax laws to stop the legal avoidance of federal income taxes. Republicans argued for sharper cuts that reduced the progressivity of the rates. Mellon proposed a maximum rate of 25 percent.

Though the federal income tax rates were reduced and made less progressive, it took three tax rate cuts in 1921, 1924, and 1925 before Mellon’s goal was finally achieved. The highest marginal tax rate was reduced from 73 percent to 58 percent to 46 percent and finally to 25 percent for the 1925 tax year. All of the other rates were also reduced and exemptions increased. By 1926, only about the top 10 percent of income recipients were subject to federal income taxes. As tax rates were reduced, the number of high income tax returns increased and the share of total federal personal income taxes paid rose. (Tables 5 and 6) Even with the dramatic income tax rate cuts and reductions in the number of low income taxpayers, federal personal income tax revenue continued to rise during the 1920s. Though early estimates of the distribution of personal income showed sharp increases in income inequality during the 1920s (Kuznets, 1953; Holt, 1977), more recent estimates have found that the increases in inequality were considerably less and these appear largely to be related to the sharp rise in capital gains due to the booming stock market in the late twenties. (Smiley, 1998 and 2000)
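
The mechanics of cutting the top marginal rate can be illustrated with a simple bracket calculator. Only the top rates, 73 percent before the cuts and 25 percent for the 1925 tax year, come from the text; the bracket boundaries and lower rates in the sketch are invented purely for illustration.

```python
# Sketch of the marginal-rate arithmetic behind the Mellon cuts. Only the top
# rates (73 percent before the cuts, 25 percent for the 1925 tax year) come
# from the text; the bracket boundaries and lower rates are invented here
# purely for illustration.

def tax_due(income, brackets):
    """Apply a marginal schedule: brackets are (upper_bound, rate) pairs in
    ascending order, with float('inf') as the final bound."""
    tax, lower = 0.0, 0.0
    for upper, rate in brackets:
        if income <= lower:
            break
        tax += (min(income, upper) - lower) * rate
        lower = upper
    return tax

INF = float("inf")
pre_cut = [(10_000, 0.04), (100_000, 0.30), (INF, 0.73)]   # hypothetical brackets
law_1925 = [(10_000, 0.02), (100_000, 0.15), (INF, 0.25)]  # hypothetical brackets

income = 500_000
print(f"pre-cut tax:  ${tax_due(income, pre_cut):,.0f}")   # $319,400
print(f"1925-law tax: ${tax_due(income, law_1925):,.0f}")  # $113,700
```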

Each year in the twenties the federal government ran a surplus, in some years as much as 1 percent of GNP. The surpluses were used to pay down the federal debt, which declined by 25 percent between 1920 and 1930. Contrary to simple macroeconomic models that argue a federal government budget surplus must be contractionary and tend to stop an economy from reaching full employment, the American economy operated at or close to full employment throughout the twenties and saw significant economic growth. In this case, the surpluses were not contractionary because the dollars were circulated back into the economy through the purchase of outstanding federal debt rather than pulled out of circulation and held idle.

Monetary Policies

In 1913, fear of the “money trust” and its monopoly power led Congress to create twelve district central banks when it created the Federal Reserve System. The new central banks were to control money and credit and act as lenders of last resort to end banking panics. The role of the Federal Reserve Board, located in Washington, D.C., was to coordinate the policies of the twelve district banks; it was composed of five presidential appointees plus the current secretary of the treasury and the comptroller of the currency. All national banks had to become members of the Federal Reserve System, the Fed, and any state bank meeting the qualifications could elect to do so.

The act specified fixed reserve requirements on demand and time deposits, all of which had to be held on deposit in the district banks. Member banks were allowed to rediscount commercial paper and were given Federal Reserve currency in return. Initially, each district bank set its own rediscount rate. To provide additional income when there was little rediscounting, the district banks were allowed to engage in open market operations: the purchase and sale of federal government securities, short-term securities of state and local governments issued in anticipation of taxes, foreign exchange, and domestic bills of exchange. The district banks were also designated to act as fiscal agents for the federal government. Finally, the Federal Reserve System provided a central check-clearinghouse for the entire banking system.

When the Federal Reserve System was originally set up, it was believed that its primary role was to act as a lender of last resort to prevent banking panics and to serve as a check-clearing mechanism for the nation’s banks. Both the Federal Reserve Board and the governors of the district banks were bodies established to exercise these activities jointly. The division of functions was not clear, and a struggle for power ensued, mainly between the Federal Reserve Board and the New York Federal Reserve Bank, which was led through 1928 by J. P. Morgan’s protégé, Benjamin Strong. By the thirties the Federal Reserve Board had achieved dominance.

There were really two conflicting criteria upon which monetary actions were ostensibly based: the Gold Standard and the Real Bills Doctrine. The Gold Standard was supposed to be quasi-automatic, with an effective limit to the quantity of money. However, the Real Bills Doctrine (which required that all loans be made on short-term, self-liquidating commercial paper) had no effective limit on the quantity of money. The rediscounting of eligible commercial paper was supposed to lead to the required “elasticity” of the stock of money to “accommodate” the needs of industry and business. Actually the rediscounting of commercial paper, open market purchases, and gold inflows all had the same effects on the money stock.

The 1920-21 Depression

During the First World War, the Fed kept discount rates low and granted discounts on banks’ customer loans used to purchase war bonds in order to help finance the war. The final Victory Loan had not been floated when the Armistice was signed in November of 1918; in fact, it took until October of 1919 for the government to fully sell this last loan issue. The Treasury, with the secretary of the treasury sitting on the Federal Reserve Board, persuaded the Federal Reserve System to maintain low interest rates and to continue discounting Victory bonds in order to keep bond prices high until this last issue had been floated. As a result, during this period the money supply grew rapidly and prices rose sharply.

A shift from a federal deficit to a surplus and supply disruptions due to steel and coal strikes in 1919 and a railroad strike in early 1920 contributed to the end of the boom. But the most common view is that the Fed’s monetary policy was the main determinant of the end of the expansion and inflation and of the beginning of the subsequent contraction and severe deflation. When the Fed was released from its informal agreement with the Treasury in November of 1919, it raised the discount rate from 4 to 4.75 percent. Benjamin Strong (the governor of the New York bank) was beginning to believe that the time for strong action had passed and that the Federal Reserve System’s actions should be moderate. However, with Strong out of the country, the Federal Reserve Board increased the discount rate from 4.75 to 6 percent in late January of 1920 and to 7 percent on June 1, 1920. By the middle of 1920, economic activity and employment were falling rapidly, and prices had begun their downward spiral in one of the sharpest price declines in American history. The Federal Reserve System kept the discount rate at 7 percent until May 5, 1921, when it was lowered to 6.5 percent. By June of 1922, the rate had been lowered to 4 percent. (Friedman and Schwartz, 1963)

The Federal Reserve System authorities received considerable criticism then and later for their actions. Milton Friedman and Anna Schwartz (1963) contend that the discount rate was raised too much too late and then kept too high for too long, causing the decline to be more severe and the price deflation to be greater. In their opinion the Fed acted in this manner due to the necessity of meeting the legal reserve requirement with a safe margin of gold reserves. Elmus Wicker (1966), however, argues that the gold reserve ratio was not the main factor determining the Federal Reserve policy in the episode. Rather, the Fed knowingly pursued a deflationary policy because it felt that the money supply was simply too large and prices too high. To return to the prewar parity for gold required lowering the price level, and there was an excessive stock of money because the additional money had been used to finance the war, not to produce consumer goods. Finally, the outstanding indebtedness was too large due to the creation of Fed credit.

Whether statutory gold reserve requirements to maintain the gold standard or domestic credit conditions were the most important determinant of Fed policy is still an open question, though both certainly had some influence. Regardless of the answer to that question, the Federal Reserve System’s first major undertaking in the years immediately following the First World War demonstrated poor policy formulation.

Federal Reserve Policies from 1922 to 1930

By 1921 the district banks began to recognize that their open market purchases had effects on interest rates, the money stock, and economic activity. For the next several years, economists in the Federal Reserve System discussed how this worked and how it could be related to discounting by member banks. A committee was created to coordinate the open market purchases of the district banks.

The recovery from the 1920-1921 depression had proceeded smoothly with moderate price increases. In early 1923 the Fed sold some securities and increased the discount rate from 4 to 4.5 percent because it believed the recovery was too rapid. However, by the fall of 1923 there were some signs of a business slump. McMillin and Parker (1994) argue that this contraction, like the 1927 contraction, was related to oil price shocks. By October of 1923 Benjamin Strong was advocating securities purchases to counter the slump. Between then and September 1924 the Federal Reserve System increased its securities holdings by over $500 million. Between April and August of 1924 the Fed reduced the discount rate to 3 percent in a series of three separate steps. In addition to moderating the mild business slump, the expansionary policy was also intended to reduce American interest rates relative to British interest rates. This reversed the gold flow back toward Great Britain, allowing Britain to return to the gold standard in 1925. At the time it appeared that the Fed’s monetary policy had successfully accomplished its goals.

By the summer of 1924 the business slump was over and the economy again began to grow rapidly. By the mid-1920s real estate speculation had arisen in many urban areas in the United States, and especially in southeastern Florida. Land prices were rising sharply. Stock market prices had also begun rising more rapidly. The Fed expressed some worry about these developments and in 1926 sold some securities to gently slow the real estate and stock market booms. Amid hurricanes and supply bottlenecks the Florida real estate boom collapsed, but the stock market boom continued.

The American economy entered into another mild business recession in the fall of 1926 that lasted until the fall of 1927. One of the factors in this was Henry Ford’s shutdown of all of his factories to change over from the Model T to the Model A; his employees were left without a job and without income for over six months. International concerns also reappeared. France, which was preparing to return to the gold standard, had begun accumulating gold, and gold continued to flow into the United States. Some of this gold came from Great Britain, making it difficult for the British to remain on the gold standard. This occasioned a new experiment in central bank cooperation. In July 1927 Benjamin Strong arranged a conference with Governor Montagu Norman of the Bank of England, Governor Hjalmar Schacht of the Reichsbank, and Deputy Governor Charles Rist of the Bank of France in an attempt to promote cooperation among the world’s central bankers. By the time the conference began the Fed had already taken steps to counteract the business slump and reduce the gold inflow: in early 1927 it reduced discount rates and made large securities purchases. One result of this was that the gold stock fell from $4.3 billion in mid-1927 to $3.8 billion in mid-1928. Some of the gold exports went to France, and France returned to the gold standard with its undervalued currency. The loss of gold from Britain eased, allowing it to maintain the gold standard.

By early 1928 the Fed was again becoming worried. Stock market prices were rising even faster, and the apparent speculative bubble in the stock market was of some concern to Fed authorities. The Fed was also concerned about the loss of gold and wanted to bring that to an end. To do this it sold securities and, in three steps, raised the discount rate to 5 percent by July 1928. To this point the Federal Reserve Board had largely agreed with district bank policy changes. However, problems began to develop.

During the stock market boom of the late 1920s the Federal Reserve Board preferred to use “moral suasion” rather than increases in discount rates to lessen member bank borrowing. The New York Federal Reserve Bank insisted that moral suasion would not work unless backed up by literal credit rationing on a bank-by-bank basis, which it and the other district banks were unwilling to undertake. They insisted that discount rates had to be increased. The Federal Reserve Board countered that this general policy change would slow down economic activity in general rather than being specifically targeted at stock market speculation. The result was that little was done for a year: rates were not raised, but no open market purchases were undertaken either. Rates were finally raised to 6 percent in August of 1929. By that time the contraction had already begun. In late October the stock market crashed, and America slid into the Great Depression.

In November, following the stock market crash, the Fed reduced the discount rate to 4.5 percent. In January it decreased the discount rate again and began a series of decreases until the rate reached 2.5 percent at the end of 1930. No further open market operations were undertaken for the next six months. As banks reduced their discounting in 1930, the stock of money declined. There was a banking crisis in the Southeast in November and December of 1930, and in its wake the public’s holding of currency relative to deposits and banks’ reserve ratios began to rise and continued to do so through the end of the Great Depression.

Conclusion

Though some disagree, there is growing evidence that the behavior of the American economy in the 1920s did not cause the Great Depression. The depressed 1930s were not “retribution” for the exuberant growth of the 1920s. The weakness of a few economic sectors in the 1920s did not forecast the contraction from 1929 to 1933. Rather it was the depression of the 1930s and the Second World War that interrupted the economic growth begun in the 1920s and resumed after the Second World War. Just as the construction of skyscrapers that began in the 1920s resumed in the 1950s, so did real economic growth and progress resume. In retrospect we can see that the introduction and expansion of new technologies and industries in the 1920s, such as autos, household electric appliances, radio, and electric utilities, are echoed in the 1990s in the effects of the expanding use and development of the personal computer and the rise of the internet. The 1920s have much to teach us about the growth and development of the American economy.

References

Adams, Walter, ed. The Structure of American Industry, 5th ed. New York: Macmillan Publishing Co., 1977.

Aldcroft, Derek H. From Versailles to Wall Street, 1919-1929. Berkeley: The University of California Press, 1977.

Allen, Frederick Lewis. Only Yesterday. New York: Harper and Sons, 1931.

Alston, Lee J. “Farm Foreclosures in the United States During the Interwar Period.” The Journal of Economic History 43, no. 4 (1983): 885-904.

Alston, Lee J., Wayne A. Grove, and David C. Wheelock. “Why Do Banks Fail? Evidence from the 1920s.” Explorations in Economic History 31 (1994): 409-431.

Ankli, Robert. “Horses vs. Tractors on the Corn Belt.” Agricultural History 54 (1980): 134-148.

Ankli, Robert and Alan L. Olmstead. “The Adoption of the Gasoline Tractor in California.” Agricultural History 55 (1981): 213-230.

Appel, Joseph H. The Business Biography of John Wanamaker, Founder and Builder. New York: The Macmillan Co., 1930.

Baker, Jonathan B. “Identifying Cartel Pricing Under Uncertainty: The U.S. Steel Industry, 1933-1939.” The Journal of Law and Economics 32 (1989): S47-S76.

Barger, E. L, et al. Tractors and Their Power Units. New York: John Wiley and Sons, 1952.

Barnouw, Eric. A Tower in Babel: A History of Broadcasting in the United States: Vol. I—to 1933. New York: Oxford University Press, 1966.

Barnouw, Eric. The Golden Web: A History of Broadcasting in the United States: Vol. II—1933 to 1953. New York: Oxford University Press, 1968.

Beasley, Norman. Main Street Merchant: The Story of the J. C. Penney Company. New York: Whittlesey House, 1948.

Beckman, Theodore N. and Herman C. Nolen. The Chain Store Problem: A Critical Analysis. New York: McGraw-Hill Book Co., 1938.

Benson, Susan Porter. Counter Cultures: Saleswomen, Managers, and Customers in American Department Stores, 1890-1940. Urbana, IL: University of Illinois Press, 1986.

Bernstein, Irving. The Lean Years: A History of the American Worker, 1920-1933. Boston: Houghton Mifflin Co., 1960.

Bernstein, Michael A. The Great Depression: Delayed Recovery and Economic Change in America, 1929-1939. New York: Cambridge University Press, 1987.

Bishop, Jerry E. “Stock Market Experiment Suggests Inevitability of Booms and Busts.” The Wall Street Journal, 17 November, 1987.

Board of Governors of the Federal Reserve System. Banking and Monetary Statistics. Washington: USGPO, 1943.

Bogue, Allan G. “Changes in Mechanical and Plant Technology: The Corn Belt, 1910-1940.” The Journal of Economic History 43 (1983): 1-26.

Breit, William and Elzinga, Kenneth. The Antitrust Casebook: Milestones in Economic Regulation, 2d ed. Chicago: The Dryden Press, 1989.

Bright, Arthur A., Jr. The Electric Lamp Industry: Technological Change and Economic Development from 1800 to 1947. New York: Macmillan, 1947.

Brody, David. Labor in Crisis: The Steel Strike. Philadelphia: J. B. Lippincott Co., 1965.

Brooks, John. Telephone: The First Hundred Years. New York: Harper and Row, 1975.

Brown, D. Clayton. Electricity for Rural America: The Fight for the REA. Westport, CT: The Greenwood Press, 1980.

Brown, William A., Jr. The International Gold Standard Reinterpreted, 1914-1934, 2 vols. New York: National Bureau of Economic Research, 1940.

Brunner, Karl and Allen Meltzer. “What Did We Learn from the Monetary Experience of the United States in the Great Depression?” Canadian Journal of Economics 1 (1968): 334-48.

Bryant, Keith L., Jr., and Henry C. Dethloff. A History of American Business. Englewood Cliffs, NJ: Prentice-Hall, Inc., 1983.

Bucklin, Louis P. Competition and Evolution in the Distributive Trades. Englewood Cliffs, NJ: Prentice-Hall, 1972.

Bullock, Roy J. “The Early History of the Great Atlantic & Pacific Tea Company.” Harvard Business Review 11 (1933): 289-93.

Bullock, Roy J. “A History of the Great Atlantic & Pacific Tea Company Since 1878.” Harvard Business Review 12 (1933): 59-69.

Cecchetti, Stephen G. “Understanding the Great Depression: Lessons for Current Policy.” In The Economics of the Great Depression, Edited by Mark Wheeler. Kalamazoo, MI: W. E. Upjohn Institute for Employment Research, 1998.

Chandler, Alfred D., Jr. Strategy and Structure: Chapters in the History of the American Industrial Enterprise. Cambridge, MA: The M.I.T. Press, 1962.

Chandler, Alfred D., Jr. The Visible Hand: The Managerial Revolution in American Business. Cambridge, MA: The Belknap Press of Harvard University Press, 1977.

Chandler, Alfred D., Jr. Giant Enterprise: Ford, General Motors, and the American Automobile Industry. New York: Harcourt, Brace, and World, 1964.

Chester, Giraud, and Garnet R. Garrison. Radio and Television: An Introduction. New York: Appleton-Century Crofts, 1950.

Clewett, Richard C. “Mass Marketing of Consumers’ Goods.” In The Growth of the American Economy, 2d ed. Edited by Harold F. Williamson. Englewood Cliffs, NJ: Prentice-Hall, 1951.

Cochran, Thomas C. 200 Years of American Business. New York: Delta Books, 1977.

Cohen, Yehoshua S. Diffusion of an Innovation in an Urban System: The Spread of Planned Regional Shopping Centers in the United States, 1949-1968. Chicago: The University of Chicago, Department of Geography, Research Paper No. 140, 1972.

Daley, Robert. An American Saga: Juan Trippe and His Pan American Empire. New York: Random House, 1980.

Clarke, Sally. “New Deal Regulation and the Revolution in American Farm Productivity: A Case Study of the Diffusion of the Tractor in the Corn Belt, 1920-1940.” The Journal of Economic History 51, no. 1 (1991): 105-115.

Cohen, Avi. “Technological Change as Historical Process: The Case of the U.S. Pulp and Paper Industry, 1915-1940.” The Journal of Economic History 44 (1984): 775-79.

Davies, R. E. G. A History of the World’s Airlines. London: Oxford University Press, 1964.

Dearing, Charles L., and Wilfred Owen. National Transportation Policy. Washington: The Brookings Institution, 1949.

Degen, Robert A. The American Monetary System: A Concise Survey of Its Evolution Since 1896. Lexington, MA: Lexington Books, 1987.

De Long, J. Bradford and Andre Shleifer. “The Stock Market Bubble of 1929: Evidence from Closed-end Mutual Funds.” The Journal of Economic History 51 (1991): 675-700.

Devine, Warren D., Jr. “From Shafts to Wires: Historical Perspectives on Electrification.” The Journal of Economic History 43 (1983): 347-372.

Eckert, Ross D., and George W. Hilton. “The Jitneys.” The Journal of Law and Economics 15 (1972): 293-326.

Eichengreen, Barry, ed. The Gold Standard in Theory and History. New York: Metheun, 1985.

Eichengreen, Barry. “The Political Economy of the Smoot-Hawley Tariff.” Research in Economic History 12 (1989): 1-43.

Eichengreen, Barry. Golden Fetters: The Gold Standard and the Great Depression, 1919-1939. New York: Oxford University Press, 1992.

Eis, Carl. “The 1919-1930 Merger Movement in American Industry.” The Journal of Law and Economics 12 (1969): 267-96.

Emmet, Boris, and John E. Jeuck. Catalogues and Counters: A History of Sears Roebuck and Company. Chicago: University of Chicago Press, 1950.

Fearon, Peter. War, Prosperity, & Depression: The U.S. Economy, 1917-1945. Lawrence, KS: University of Kansas Press, 1987.

Field, Alexander J. “The Most Technologically Progressive Decade of the Century.” The American Economic Review 93 (2003): 1399-1413.

Fischer, Claude. “The Revolution in Rural Telephony, 1900-1920.” Journal of Social History 21 (1987): 221-38.

Fischer, Claude. “Technology’s Retreat: The Decline of Rural Telephony in the United States, 1920-1940.” Social Science History 11 (1987): 295-327.

Fisher, Irving. The Stock Market Crash—and After. New York: Macmillan, 1930.

French, Michael J. “Structural Change and Competition in the United States Tire Industry, 1920-1937.” Business History Review 60 (1986): 28-54.

French, Michael J. The U.S. Tire Industry. Boston: Twayne Publishers, 1991.

Fricke, Ernest B. “The New Deal and the Modernization of Small Business: The McCreary Tire and Rubber Company, 1930-1940.” Business History Review 56 (1982): 559-76.

Friedman, Milton, and Anna J. Schwartz. A Monetary History of the United States, 1867-1960. Princeton: Princeton University Press, 1963.

Galbraith, John Kenneth. The Great Crash. Boston: Houghton Mifflin, 1954.

Garnet, Robert W. The Telephone Enterprise: The Evolution of the Bell System’s Horizontal Structure, 1876-1900. Baltimore: The Johns Hopkins University Press, 1985.

Gideonse, Max. “Foreign Trade, Investments, and Commercial Policy.” In The Growth of the American Economy, 2d ed. Edited by Harold F. Williamson. Englewood Cliffs, NJ: Prentice-Hall, 1951.

Giedion, Simon. Mechanization Takes Command. New York: Oxford University Press, 1948.

Gordon, Robert Aaron. Economic Instability and Growth: The American Record. New York: Harper and Row, 1974.

Gray, Roy Burton. Development of the Agricultural Tractor in the United States, 2 vols. Washington, D. C.: USGPO, 1954.

Gunderson, Gerald. An Economic History of America. New York: McGraw-Hill, 1976.

Hadwiger, Don F., and Clay Cochran. “Rural Telephones in the United States.” Agricultural History 58 (1984): 221-38.

Hamilton, James D. “Monetary Factors in the Great Depression.” Journal of Monetary Economics 19 (1987): 145-169.

Hamilton, James D. “The Role of the International Gold Standard in Propagating the Great Depression.” Contemporary Policy Issues 6 (1988): 67-89.

Hayek, Friedrich A. Prices and Production. New York: Augustus M. Kelley, reprint of the 1931 edition.

Hayford, Marc and Carl A. Pasurka, Jr. “The Political Economy of the Fordney-McCumber and Smoot-Hawley Tariff Acts.” Explorations in Economic History 29 (1992): 30-50.

Hendrickson, Robert. The Grand Emporiums: The Illustrated History of America’s Great Department Stores. Briarcliff Manor, NY: Stein and Day, 1979.

Herbst, Anthony F., and Joseph S. K. Wu. “Some Evidence of Subsidization of the U.S. Trucking Industry, 1900-1920.” The Journal of Economic History 33 (1973): 417-33.

Higgs, Robert. Crisis and Leviathan: Critical Episodes in the Growth of American Government. New York: Oxford University Press, 1987.

Hilton, George W., and John Due. The Electric Interurban Railways in America. Stanford: Stanford University Press, 1960.

Hoffman, Elizabeth and Gary D. Liebcap. “Institutional Choice and the Development of U.S. Agricultural Policies in the 1920s.” The Journal of Economic History 51 (1991): 397-412.

Holt, Charles F. “Who Benefited from the Prosperity of the Twenties?” Explorations in Economic History 14 (1977): 277-289.

Hower, Ralph W. History of Macy’s of New York, 1858-1919. Cambridge, MA: Harvard University Press, 1946.

Hubbard, R. Glenn, Ed. Financial Markets and Financial Crises. Chicago: University of Chicago Press, 1991.

Hunter, Louis C. “Industry in the Twentieth Century.” In The Growth of the American Economy, 2d ed., edited by Harold F. Williamson. Englewood Cliffs, NJ: Prentice-Hall, 1951.

Jerome, Harry. Mechanization in Industry. New York: National Bureau of Economic Research, 1934.

Johnson, H. Thomas. “Postwar Optimism and the Rural Financial Crisis.” Explorations in Economic History 11, no. 2 (1973-1974): 173-192.

Jones, Fred R. and William H. Aldred. Farm Power and Tractors, 5th ed. New York: McGraw-Hill, 1979.

Keller, Robert. “Factor Income Distribution in the United States During the 20’s: A Reexamination of Fact and Theory.” The Journal of Economic History 33 (1973): 252-95.

Kelly, Charles J., Jr. The Sky’s the Limit: The History of the Airlines. New York: Coward-McCann, 1963.

Kindleberger, Charles. The World in Depression, 1929-1939. Berkeley: The University of California Press, 1973.

Klebaner, Benjamin J. Commercial Banking in the United States: A History. New York: W. W. Norton and Co., 1974.

Kuznets, Simon. Shares of Upper Income Groups in Income and Savings. New York: NBER, 1953.

Lebhar, Godfrey M. Chain Stores in America, 1859-1962. New York: Chain Store Publishing Corp., 1963.

Lewis, Cleona. America’s Stake in International Investments. Washington: The Brookings Institution, 1938.

Livesay, Harold C. and Patrick G. Porter. “Vertical Integration in American Manufacturing, 1899-1948.” The Journal of Economic History 29 (1969): 494-500.

Lipartito, Kenneth. The Bell System and Regional Business: The Telephone in the South, 1877-1920. Baltimore: The Johns Hopkins University Press, 1989.

Liu, Tung, Gary J. Santoni, and Courteney C. Stone. “In Search of Stock Market Bubbles: A Comment on Rappoport and White.” The Journal of Economic History 55 (1995): 647-654.

Lorant, John. “Technological Change in American Manufacturing During the 1920s.” The Journal of Economic History 33 (1967): 243-47.

MacDonald, Forrest. Insull. Chicago: University of Chicago Press, 1962.

Marburg, Theodore. “Domestic Trade and Marketing.” In The Growth of the American Economy, 2d ed. Edited by Harold F. Williamson. Englewood Cliffs, NJ: Prentice-Hall, 1951.

Markham, Jesse. “Survey of the Evidence and Findings on Mergers.” In Business Concentration and Price Policy, National Bureau of Economic Research. Princeton: Princeton University Press, 1955.

Markin, Rom J. The Supermarket: An Analysis of Growth, Development, and Change. Rev. ed. Pullman, WA: Washington State University Press, 1968.

McCraw, Thomas K. TVA and the Power Fight, 1933-1937. Philadelphia: J. B. Lippincott, 1971.

McCraw, Thomas K. and Forest Reinhardt. “Losing to Win: U.S. Steel’s Pricing, Investment Decisions, and Market Share, 1901-1938.” The Journal of Economic History 49 (1989): 592-620.

McMillin, W. Douglas and Randall E. Parker. “An Empirical Analysis of Oil Price Shocks in the Interwar Period.” Economic Inquiry 32 (1994): 486-497.

McNair, Malcolm P., and Eleanor G. May. The Evolution of Retail Institutions in the United States. Cambridge, MA: The Marketing Science Institute, 1976.

Mercer, Lloyd J. “Comment on Papers by Scheiber, Keller, and Raup.” The Journal of Economic History 33 (1973): 291-95.

Mintz, Ilse. Deterioration in the Quality of Foreign Bonds Issued in the United States, 1920-1930. New York: National Bureau of Economic Research, 1951.

Mishkin, Frederic S. “Asymmetric Information and Financial Crises: A Historical Perspective.” In Financial Markets and Financial Crises, edited by R. Glenn Hubbard. Chicago: University of Chicago Press, 1991.

Morris, Lloyd. Not So Long Ago. New York: Random House, 1949.

Mosco, Vincent. Broadcasting in the United States: Innovative Challenge and Organizational Control. Norwood, NJ: Ablex Publishing Corp., 1979.

Moulton, Harold G. et al. The American Transportation Problem. Washington: The Brookings Institution, 1933.

Mueller, John. “Lessons of the Tax-Cuts of Yesteryear.” The Wall Street Journal, March 5, 1981.

Musoke, Moses S. “Mechanizing Cotton Production in the American South: The Tractor, 1915-1960.” Explorations in Economic History 18 (1981): 347-75.

Nelson, Daniel. “Mass Production and the U.S. Tire Industry.” The Journal of Economic History 47 (1987): 329-40.

Nelson, Ralph L. Merger Movements in American Industry, 1895-1956. Princeton: Princeton University Press, 1959.

Niemi, Albert W., Jr. U.S. Economic History, 2nd ed. Chicago: Rand McNally Publishing Co., 1980.

Norton, Hugh S. Modern Transportation Economics. Columbus, OH: Charles E. Merrill Books, Inc., 1963.

Nystrom, Paul H. Economics of Retailing, vol. 1, 3rd ed. New York: The Ronald Press Co., 1930.

Oshima, Harry T. “The Growth of U.S. Factor Productivity: The Significance of New Technologies in the Early Decades of the Twentieth Century.” The Journal of Economic History 44 (1984): 161-70.

Parker, Randall and Paul Flacco. “Income Uncertainty and the Onset of the Great Depression.” Economic Inquiry 30 (1992): 154-171.

Parker, William N. “Agriculture.” In American Economic Growth: An Economist’s History of the United States, edited by Lance E. Davis, Richard A. Easterlin, William N. Parker, et al. New York: Harper and Row, 1972.

Passer, Harold C. The Electrical Manufacturers, 1875-1900. Cambridge: Harvard University Press, 1953.

Peak, Hugh S., and Ellen F. Peak. Supermarket Merchandising and Management. Englewood Cliffs, NJ: Prentice-Hall, 1977.

Pilgrim, John. “The Upper Turning Point of 1920: A Reappraisal.” Explorations in Economic History 11 (1974): 271-98.

Rae, John B. Climb to Greatness: The American Aircraft Industry, 1920-1960. Cambridge: The M.I.T. Press, 1968.

Rae, John B. The American Automobile Industry. Boston: Twayne Publishers, 1984.

Rappoport, Peter and Eugene N. White. “Was the Crash of 1929 Expected?” American Economic Review 84 (1994): 271-281.

Rappoport, Peter and Eugene N. White. “Was There a Bubble in the 1929 Stock Market?” The Journal of Economic History 53 (1993): 549-574.

Resseguie, Harry E. “Alexander Turney Stewart and the Development of the Department Store, 1823-1876,” Business History Review 39 (1965): 301-22.

Rezneck, Samuel. “Mass Production and the Use of Energy.” In The Growth of the American Economy, 2d ed., edited by Harold F. Williamson. Englewood Cliffs, NJ: Prentice-Hall, 1951.

Rockwell, Llewellyn H., Jr., ed. The Gold Standard: An Austrian Perspective. Lexington, MA: Lexington Books, 1985.

Romer, Christina. “Spurious Volatility in Historical Unemployment Data.” The Journal of Political Economy 94 (1986): 1-37.

Romer, Christina. “New Estimates of Prewar Gross National Product and Unemployment.” The Journal of Economic History 46 (1986): 341-352.

Romer, Christina. “World War I and the Postwar Depression: A Reinterpretation Based on Alternative Estimates of GNP.” Journal of Monetary Economics 22 (1988): 91-115.

Romer, Christina and Jeffrey A. Miron. “A New Monthly Index of Industrial Production, 1884-1940.” The Journal of Economic History 50 (1990): 321-337.

Romer, Christina. “The Great Crash and the Onset of the Great Depression.” Quarterly Journal of Economics 105 (1990): 597-625.

Romer, Christina. “Remeasuring Business Cycles.” The Journal of Economic History 54 (1994): 573-609.

Roose, Kenneth D. “The Production Ceiling and the Turning Point of 1920.” American Economic Review 48 (1958): 348-56.

Rosen, Philip T. The Modern Stentors: Radio Broadcasters and the Federal Government, 1920-1934. Westport, CT: The Greenwood Press, 1980.

Rosen, Philip T. “Government, Business, and Technology in the 1920s: The Emergence of American Broadcasting.” In American Business History: Case Studies. Edited by Henry C. Dethloff and C. Joseph Pusateri. Arlington Heights, IL: Harlan Davidson, 1987.

Rothbard, Murray N. America’s Great Depression. Kansas City: Sheed and Ward, 1963.

Sampson, Roy J., and Martin T. Ferris. Domestic Transportation: Practice, Theory, and Policy, 4th ed. Boston: Houghton Mifflin Co., 1979.

Samuelson, Paul and Everett E. Hagen. After the War—1918-1920. Washington: National Resources Planning Board, 1943.

Santoni, Gary and Gerald P. Dwyer, Jr. “The Great Bull Markets, 1924-1929 and 1982-1987: Speculative Bubbles or Economic Fundamentals?” Federal Reserve Bank of St. Louis Review 69 (1987): 16-29.

Santoni, Gary, and Gerald P. Dwyer, Jr. “Bubbles vs. Fundamentals: New Evidence from the Great Bull Markets.” In Crises and Panics: The Lessons of History. Edited by Eugene N. White. Homewood, IL: Dow Jones/Irwin, 1990.

Scherer, Frederick M. and David Ross. Industrial Market Structure and Economic Performance, 3d ed. Boston: Houghton Mifflin, 1990.

Schlebecker, John T. Whereby We Thrive: A History of American Farming, 1607-1972. Ames, IA: The Iowa State University Press, 1975.

Shepherd, James. “The Development of New Wheat Varieties in the Pacific Northwest.” Agricultural History 54 (1980): 52-63.

Sirkin, Gerald. “The Stock Market of 1929 Revisited: A Note.” Business History Review 49 (Fall 1975): 233-41.

Smiley, Gene. The American Economy in the Twentieth Century. Cincinnati: South-Western Publishing Co., 1994.

Smiley, Gene. “New Estimates of Income Shares During the 1920s.” In Calvin Coolidge and the Coolidge Era: Essays on the History of the 1920s, edited by John Earl Haynes, 215-232. Washington, D.C.: Library of Congress, 1998.

Smiley, Gene. “A Note on New Estimates of the Distribution of Income in the 1920s.” The Journal of Economic History 60, no. 4 (2000): 1120-1128.

Smiley, Gene. Rethinking the Great Depression: A New View of Its Causes and Consequences. Chicago: Ivan R. Dee, 2002.

Smiley, Gene, and Richard H. Keehn. “Margin Purchases, Brokers’ Loans and the Bull Market of the Twenties.” Business and Economic History, 2d series, 17 (1988): 129-42.

Smiley, Gene and Richard H. Keehn. “Federal Personal Income Tax Policy in the 1920s.” The Journal of Economic History 55, no. 2 (1995): 285-303.

Sobel, Robert. The Entrepreneurs: Explorations Within the American Business Tradition. New York: Weybright and Talley, 1974.

Soule, George. Prosperity Decade: From War to Depression: 1917-1929. New York: Holt, Rinehart, and Winston, 1947.

Stein, Herbert. The Fiscal Revolution in America, revised ed. Washington, D.C.: AEI Press, 1990.

Stigler, George J. “Monopoly and Oligopoly by Merger.” American Economic Review 40 (May 1950): 23-34.

Sumner, Scott. “The Role of the International Gold Standard in Commodity Price Deflation: Evidence from the 1929 Stock Market Crash.” Explorations in Economic History 29 (1992): 290-317.

Swanson, Joseph and Samuel Williamson. “Estimates of National Product and Income for the United States Economy, 1919-1941.” Explorations in Economic History 10, no. 1 (1972): 53-73.

Temin, Peter. “The Beginning of the Depression in Germany.” Economic History Review. 24 (May 1971): 240-48.

Temin, Peter. Did Monetary Forces Cause the Great Depression? New York: W. W. Norton, 1976.

Temin, Peter. The Fall of the Bell System. New York: Cambridge University Press, 1987.

Temin, Peter. Lessons from the Great Depression. Cambridge, MA: The M.I.T. Press, 1989.

Thomas, Gordon, and Max Morgan-Witts. The Day the Bubble Burst. Garden City, NY: Doubleday, 1979.

Ulman, Lloyd. “The Development of Trades and Labor Unions,” In American Economic History, edited by Seymour E. Harris, chapter 14. New York: McGraw-Hill Book Co., 1961.

Ulman, Lloyd. The Rise of the National Trade Union. Cambridge, MA: Harvard University Press, 1955.

U.S. Department of Commerce, Bureau of the Census. Historical Statistics of the United States: Colonial Times to 1970, 2 volumes. Washington, D.C.: USGPO, 1976.

Walsh, Margaret. Making Connections: The Long Distance Bus Industry in the U.S.A. Burlington, VT: Ashgate, 2000.

Wanniski, Jude. The Way the World Works. New York: Simon and Schuster, 1978.

Weiss, Leonard W. Case Studies in American Industry, 3d ed. New York: John Wiley & Sons, 1980.

Whaples, Robert. “Hours of Work in U.S. History.” EH.Net Encyclopedia, edited by Robert Whaples. August 15, 2001. URL http://www.eh.net/encyclopedia/contents/whaples.work.hours.us.php

Whatley, Warren. “Southern Agrarian Labor Contracts as Impediments to Cotton Mechanization.” The Journal of Economic History 47 (1987): 45-70.

Wheelock, David C. and Subal C. Kumbhakar. “The Slack Banker Dances: Deposit Insurance and Risk-Taking in the Banking Collapse of the 1920s.” Explorations in Economic History 31 (1994): 357-375.

White, Eugene N. “The Stock Market Boom and Crash of 1929 Revisited.” The Journal of Economic Perspectives 4 (Spring 1990): 67-83.

White, Eugene N., Ed. Crises and Panics: The Lessons of History. Homewood, IL: Dow Jones/Irwin, 1990.

White, Eugene N. “When the Ticker Ran Late: The Stock Market Boom and Crash of 1929.” In Crises and Panics: The Lessons of History, edited by Eugene N. White. Homewood, IL: Dow Jones/Irwin, 1990.

White, Eugene N. “Stock Market Bubbles? A Reply.” The Journal of Economic History 55 (1995): 655-665.

White, William J. “Economic History of Tractors in the United States.” EH.Net Encyclopedia, edited by Robert Whaples. August 15, 2001. URL http://www.eh.net/encyclopedia/contents/white.tractors.history.us.php

Wicker, Elmus. “Federal Reserve Monetary Policy, 1922-1933: A Reinterpretation.” Journal of Political Economy 73 (1965): 325-43.

Wicker, Elmus. “A Reconsideration of Federal Reserve Policy During the 1920-1921 Depression.” The Journal of Economic History 26 (1966): 223-38.

Wicker, Elmus. Federal Reserve Monetary Policy, 1917-1933. New York: Random House, 1966.

Wigmore, Barrie A. The Crash and Its Aftermath. Westport, CT: Greenwood Press, 1985.

Williams, Raburn McFetridge. The Politics of Boom and Bust in Twentieth-Century America. Minneapolis/St. Paul: West Publishing Co., 1994.

Williamson, Harold F., et al. The American Petroleum Industry: The Age of Energy, 1899-1959. Evanston, IL: Northwestern University Press, 1963.

Wilson, Thomas. Fluctuations in Income and Employment, 3d ed. New York: Pitman Publishing, 1948.

Wilson, Jack W., Richard E. Sylla, and Charles P. Jones. “Financial Market Panics and Volatility in the Long Run, 1830-1988.” In Crises and Panics: The Lessons of History, edited by Eugene N. White. Homewood, IL: Dow Jones/Irwin, 1990.

Winkler, John Kennedy. Five and Ten: The Fabulous Life of F. W. Woolworth. New York: R. M. McBride and Co., 1940.

Wood, Charles. “Science and Politics in the War on Cattle Diseases: The Kansas Experience, 1900-1940.” Agricultural History 54 (1980): 82-92.

Wright, Gavin. Old South, New South: Revolutions in the Southern Economy Since the Civil War. New York: Basic Books, 1986.

Wright, Gavin. “The Origins of American Industrial Success, 1879-1940.” The American Economic Review 80 (1990): 651-668.

Citation: Smiley, Gene. “US Economy in the 1920s”. EH.Net Encyclopedia, edited by Robert Whaples. June 29, 2004. URL http://eh.net/encyclopedia/the-u-s-economy-in-the-1920s/

An Economic History of Denmark

Ingrid Henriksen, University of Copenhagen

Denmark is located in Northern Europe between the North Sea and the Baltic. Today Denmark consists of the Jutland Peninsula bordering Germany and the Danish Isles and covers 43,069 square kilometers (16,629 square miles).1 The present nation is the result of several cessions of territory throughout history. The last of the former Danish territories in southern Sweden were lost to Sweden in 1658, following one of the numerous wars between the two nations, which especially marred the sixteenth and seventeenth centuries. Following defeat in the Napoleonic Wars, Norway was separated from Denmark in 1814. After the last major war, the Second Schleswig War in 1864, Danish territory was further reduced by a third when Schleswig and Holstein were ceded to Germany. After a regional referendum in 1920 only North Schleswig returned to Denmark. Finally, Iceland withdrew from the union with Denmark in 1944. The following will deal with the geographical unit of today’s Denmark.

Prerequisites of Growth

Throughout history a number of advantageous factors have shaped the Danish economy. From this perspective it may not be surprising to find today’s Denmark among the richest societies in the world. According to the OECD, it ranked seventh in 2004, with income of $29,231 per capita (PPP). Although we can identify a number of turning points and breaks, this long-run position has changed little over the time period for which we have quantitative evidence. Thus Maddison (2001), in his estimate of GDP per capita around 1600, places Denmark as number six. One interpretation could be that favorable circumstances, rather than ingenious institutions or policies, have determined Danish economic development. Nevertheless, this article also deals with time periods in which the Danish economy was either diverging from or converging towards the leading economies.

Table 1:
Average Annual GDP Growth (at factor costs)
Period Total Per capita
1870-1880 1.9% 0.9%
1880-1890 2.5% 1.5%
1890-1900 2.9% 1.8%
1900-1913 3.2% 2.0%
1913-1929 3.0% 1.6%
1929-1938 2.2% 1.4%
1938-1950 2.4% 1.4%
1950-1960 3.4% 2.6%
1960-1973 4.6% 3.8%
1973-1982 1.5% 1.3%
1982-1993 1.6% 1.5%
1993-2004 2.2% 2.0%

Sources: Johansen (1985) and Statistics Denmark ‘Statistikbanken’ online.

Denmark’s geographical location in close proximity to the most dynamic nations of sixteenth-century Europe, the Netherlands and the United Kingdom, no doubt exerted a positive influence on the Danish economy and Danish institutions. The North German area influenced Denmark both through long-term economic links and through the Lutheran Protestant Reformation, which the Danes embraced in 1536.

The Danish economy traditionally specialized in agriculture, like most other small and medium-sized European countries. It is, however, rather unusual to find a rich European country that, in the late nineteenth and mid-twentieth centuries, retained such a strong agrarian bias. Only in the late 1950s did the workforce of manufacturing industry overtake that of agriculture. An economic history of Denmark must therefore, for quite a long stretch of time, take agricultural development as its point of departure.

Looking at resource endowments, Denmark enjoyed a relatively high agricultural land-to-labor ratio compared to other European countries, with the exception of the UK. This mattered for several reasons, since in the Danish case it was accompanied by a comparatively wealthy peasantry.

Denmark had no mineral resources to speak of until the exploitation of oil and gas in the North Sea began in 1972 and 1984, respectively. From 1991 on Denmark has been a net exporter of energy although on a very modest scale compared to neighboring Norway and Britain. The small deposits are currently projected to be depleted by the end of the second decade of the twenty-first century.

Figure 1. Percent of GDP in selected sectors

Source: Johansen (1985) and Statistics Denmark ’Nationalregnskaber’

Good logistics can be regarded as a resource in pre-industrial economies. The Danish coastline of 7,314 km and the fact that no point is more than 50 km from the sea were advantages in an age in which transport by sea was more economical than land transport.

Decline and Transformation, 1500-1750

The year of the Lutheran Reformation (1536) conventionally marks the end of the Middle Ages in Danish historiography. Only around 1500 did population growth begin to pick up after the devastating effect of the Black Death. Growth thereafter was modest and at times probably stagnant, with large fluctuations in mortality following major wars, particularly during the seventeenth century, and years of bad harvests. About 80-85 percent of the population lived from subsistence agriculture in small rural communities, and this did not change. Exports are estimated to have been about 5 percent of GDP between 1550 and 1650. The main export products were oxen and grain. The period after 1650 was characterized by a long-lasting slump with a marked decline in exports to the neighboring countries, the Netherlands in particular.

The institutional development after the Black Death showed a return to more archaic forms. Unlike in other parts of northwestern Europe, the peasantry on the Danish Isles became the victim of a process of re-feudalization during the last decades of the fifteenth century. A likely explanation is the low population density, which encouraged large landowners to hold on to their labor by all means. Freehold tenure among peasants effectively disappeared during the seventeenth century. Institutions like bonded labor, which forced peasants to stay on the estate where they were born, and labor services on the demesne as part of the land rent, bring to mind similar arrangements in Europe east of the Elbe River. One exception to the East European model was crucial, however. The demesne land, that is the land worked directly under the estate, never made up more than nine percent of total land by the mid eighteenth century. Although some estate owners saw an interest in encroaching on peasant land, the state protected the latter as production units and, more importantly, as a tax base. Bonded labor was codified in the all-encompassing Danish Law of Christian V in 1683. It was further intensified by being extended, though under another label, to all of Denmark during 1733-88, as a means for the state to tide the large landlords over an agrarian crisis. One explanation for the long life of such an authoritarian institution could be that the tenants were relatively well off, with 25-50 acres of land on average. Another reason could be that reality differed from the formal rigor of the institutions.

Following the Protestant Reformation in 1536, the Crown took over all church land, thereby making it the owner of 50 percent of all land. The costs of warfare during most of the sixteenth century could still be covered by the revenue of these substantial possessions. Around 1600 the income from taxation and customs, mostly the Sound Toll collected from ships that passed the narrow strait between Denmark and today’s Sweden, on the one hand, and Crown land revenues on the other were equally large. About 50 years later, after a major fiscal crisis had led to the sale of about half of all Crown lands, the revenue from royal demesnes had declined in relative terms to about one-third, and after 1660 the full transition from domain state to tax state was completed.

The bulk of the former Crown land had been sold to nobles and a few commoners who owned estates. Consequently, although the Danish constitution of 1665 was the most stringent version of absolutism found anywhere in Europe at the time, the Crown depended heavily on estate owners to perform a number of important local tasks. Thus, conscription of troops for warfare, collection of land taxes and maintenance of law and order enhanced the landlords’ power over their tenants.

Reform and International Market Integration, 1750-1870

The driving force of Danish economic growth, which took off during the late eighteenth century, was population growth at home and abroad, which triggered technological and institutional innovation. Whereas the Danish population during the previous hundred years grew by about 0.4 percent per annum, growth climbed to about 0.6 percent, accelerating after 1775 and especially from the second decade of the nineteenth century (Johansen 2002). As elsewhere in Northern Europe, accelerating growth can be ascribed to a decline in mortality, mainly child mortality. Probably this development was initiated by fewer spells of epidemic disease, due to fewer wars and to greater inherited immunity against contagious diseases. Vaccination against smallpox and the formal education of midwives from the early nineteenth century might have played a role (Banggaard 2004). Land reforms that entailed some scattering of the farm population may also have had a positive influence. Prices rose from the late eighteenth century in response to the increase in population in Northern Europe, but also following a number of international conflicts. This again caused a boom in Danish transit shipping and in grain exports.

Population growth rendered the old institutional setup obsolete. Landlords no longer needed to bind labor to their estates, as a new class of landless laborers or cottagers with little land emerged. The work of these day-laborers was to replace the labor services of tenant farmers on the demesnes. The old system of labor services obviously presented an incentive problem, all the more since it was often carried out by the live-in servants of the tenant farmers. Thus, the labor days on the demesnes represented a loss to both landlords and tenants (Henriksen 2003). Part of the land rent was originally paid in grain. Some of it had been converted to money, which meant that real rents declined during the inflation. The solution to these problems was massive land sales, both from the remaining Crown lands and from private landlords to their tenants. As a result two-thirds of all Danish farmers became owner-occupiers, compared to only ten percent in the mid-eighteenth century. This development was halted during the next two and a half decades but resumed as the business cycle picked up during the 1840s and 1850s. It was to become of vital importance to the modernization of Danish agriculture towards the end of the nineteenth century that 75 percent of all agricultural land was farmed by owners of middle-sized farms of about 50 acres. Population growth may also have put pressure on common lands in the villages. At any rate, enclosure began in the 1760s, accelerated in the 1790s with the support of legislation, and was almost complete in the third decade of the nineteenth century.

The initiative for the sweeping land reforms from the 1780s is thought to have come from below – that is, from the landlords and in some instances also from the peasantry. The absolute monarch and his counselors were, however, strongly supportive of these measures. The desire for peasant land as a tax base weighed heavily, and the reforms were believed to enhance the efficiency of peasant farming. Besides, the central government was by now more powerful than in the preceding centuries and less dependent on landlords for local administrative tasks.

Production per capita rose modestly before the 1830s and more markedly thereafter, when a better allocation of labor and land followed the reforms and when some new crops like clover and potatoes were introduced on a larger scale. Most importantly, the Danes no longer lived at the margin of hunger. No longer do we find a correlation between demographic variables, deaths and births, and bad harvest years (Johansen 2002).

A liberalization of import tariffs in 1797 marked the end of a short spell of late mercantilism. Further liberalizations during the nineteenth and the beginning of the twentieth century established the Danish liberal tradition in international trade that was only to be broken by the protectionism of the 1930s.

Following the loss of the protected Norwegian market for grain in 1814, Danish exports began to target the British market. The great rush forward came as the British Corn Laws were repealed in 1846. The export share of the production value in agriculture rose from roughly 10 to around 30 percent between 1800 and 1870.

In 1849 absolute monarchy was peacefully replaced by a free constitution. The long-term benefits of fundamental principles such as the inviolability of private property rights, the freedom of contracting and the freedom of association were probably essential to future growth though hard to quantify.

Modernization and Convergence, 1870-1914

During this period Danish economic growth outperformed that of most other European countries. A convergence in real wages towards the richest countries, Britain and the U.S., as shown by O’Rourke and Williamson (1999), can only in part be explained by open economy forces. Denmark became a net importer of foreign capital from the 1890s, and foreign debt was well above 40 percent of GDP on the eve of World War I. Overseas emigration reduced the potential workforce, but as mortality declined population growth stayed around one percent per annum. The increase in foreign trade was substantial, as in many other economies during the heyday of the gold standard. Thus the export share of Danish agriculture surged to 60 percent.

The background for the latter development has featured prominently in many international comparative analyses. Part of the explanation for the success, as in other Protestant parts of Northern Europe, was a high rate of literacy that allowed a fast spread of new ideas and new technology.

The driving force of growth was that of a small open economy responding effectively to a change in international product prices, in this instance caused by the invasion of cheap grain into Western Europe from North America and Eastern Europe. Like Britain, the Netherlands and Belgium, Denmark did not impose a tariff on grain, in spite of the strong agrarian dominance in society and politics.

Proposals to impose tariffs on grain, and later on cattle and butter, were turned down by Danish farmers. The majority seems to have realized the advantages accruing from the free import of cheap animal feed during the ongoing transition from vegetable to animal production, at a time when the prices of animal products did not decline as much as grain prices. The dominant middle-sized farm was inefficient for wheat but had its comparative advantage in intensive animal farming with the given technology. O’Rourke (1997) found that the grain invasion only lowered Danish rents by 4-5 percent, while real wages rose (as expected), and rose more than in any other agrarian economy and more than in industrialized Britain.

The move from grain exports to exports of animal products, mainly butter and bacon, was to a great extent facilitated by the spread of agricultural cooperatives. This form of organization allowed the middle-sized and small farms that dominated Danish agriculture to benefit from economies of scale in processing and marketing. The newly invented steam-driven continuous cream separator skimmed more cream from a kilo of milk than conventional methods and had the further advantage of allowing milk brought together from a number of suppliers to be skimmed. From the 1880s the majority of these creameries in Denmark were established as cooperatives, and about 20 years later, in 1903, the owners of 81 percent of all milk cows supplied a cooperative (Henriksen 1999). The Danish dairy industry captured over a third of the rapidly expanding British butter-import market, establishing a reputation for consistent quality that was reflected in high prices. Furthermore, the cooperatives played an active role in persuading the dairy farmers to expand production from summer to year-round dairying. The costs of intensive feeding during the wintertime were more than made up for by a winter price premium (Henriksen and O’Rourke 2005). Year-round dairying resulted in a higher rate of utilization of agrarian capital – that is, of farm animals and of the modern cooperative creameries. Not least, this intensive production meant a higher utilization of hitherto underemployed labor. From the late 1890s in particular, labor productivity in agriculture rose at an unanticipated speed, on par with the productivity increase in the urban trades.

Industrialization in Denmark began modestly in the 1870s, with a temporary acceleration in the late 1890s. It may be a prime example of an industrialization process governed by domestic demand for industrial goods. Industrial exports never exceeded 10 percent of value added before 1914, compared to agriculture’s export share of 60 percent. The export drive of agriculture towards the end of the nineteenth century was a major force in developing other sectors of the economy, not least transport, trade and finance.

Weathering War and Depression, 1914-1950

Denmark, as a neutral nation, escaped the devastating effects of World War I and was even allowed to carry on exports to both sides in the conflict. The ensuing trade surplus resulted in a trebling of the money supply. As the monetary authorities failed to contain the inflationary effects of this development, the value of the Danish currency slumped to about 60 percent of its pre-war value in 1920. The effects of monetary policy failure were aggravated by a decision to return to the gold standard at the 1913 parity. When monetary policy was finally tightened in 1924, it resulted in fierce speculation in an appreciation of the Krone. During 1925-26 the currency returned quickly to its pre-war parity. As this was not counterbalanced by an equal decline in prices, the result was a sharp real appreciation and a subsequent deterioration in Denmark’s competitive position (Klovland 1998).
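The mechanism can be made explicit with the standard textbook definition of the real exchange rate (a clarifying sketch added here, not part of the original article):

\[
q = \frac{E \cdot P}{P^{*}},
\]

where E is the nominal exchange rate (foreign currency per Krone), P is the Danish price level and P* is the foreign price level. The return of E to its pre-war parity during 1925-26, without a matching fall in P relative to P*, raised q; this is the real appreciation that undermined Denmark’s competitive position.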

Figure 2. Indices of the Krone Real Exchange Rate and Terms of Trade (1980=100; real rates based on Wholesale Price Index)

Source: Abildgren (2005)

Note: Trade with Germany is included in the calculation of the real effective exchange rate for the whole period, including 1921-23.

When, in September 1931, Britain decided to leave the gold standard again, Denmark, together with Sweden and Norway, followed only a week later. This move was beneficial, as the large real depreciation led to a long-lasting improvement in Denmark’s competitiveness in the 1930s. It was, no doubt, the single most important policy decision during the depression years. Keynesian demand management, even if it had been fully understood, was barred by a small public sector, only about 13 percent of GDP. As it was, fiscal orthodoxy ruled and policy was slightly procyclical, as taxes were raised to cover the deficit created by crisis and unemployment (Topp 1995).

Structural development during the 1920s, surprisingly for a rich nation at this stage, was in favor of agriculture. The total labor force in Danish agriculture grew by 5 percent from 1920 to 1930. The number of hired agricultural workers stagnated, whereas the number of self-employed farmers increased. The development in relative incomes cannot account for this trend; part of the explanation must instead be found in a flawed Danish land policy, which actively supported a further parceling out of land into smallholdings and restricted consolidation into larger, more viable farms. It took until the early 1960s before this policy began to be unwound.

When the world depression hit Denmark with a minor time lag, agriculture still employed one-third of the total workforce while its contribution to total GDP was a bit less than one-fifth. Perhaps more importantly, agricultural goods still made up 80 percent of total exports.

Denmark’s terms of trade, as a consequence, declined by 24 percent from 1930 to 1932. In 1933 and 1934 bilateral trade agreements were forced upon Denmark by Britain and Germany. In 1932 Denmark had adopted exchange control, a harsh measure even for its time, to stem the net flow of foreign exchange out of the country. By rationing imports, exchange control also offered some protection to domestic industry. At the end of the decade manufacturing’s share of GDP had surpassed that of agriculture. In spite of the protectionist policy, unemployment soared to 13-15 percent of the workforce.

The policy mistakes during World War I and its immediate aftermath served as a lesson for policymakers during World War II. The German occupation force (April 9, 1940 until May 5, 1945) drew the funds for its sustenance and for exports to Germany on the Danish central bank, whereby the money supply more than doubled. In response the Danish authorities in 1943 launched a policy of absorbing money through open market operations and, for the first time in history, through a surplus on the state budget.

Economic reconstruction after World War II was swift, as again Denmark had been spared the worst consequences of a major war. In 1946 GDP recovered to its highest pre-war level. In spite of this, Denmark received relatively generous support through the Marshall Plan of 1948-52, when measured in dollars per capita.

From Riches to Crisis, 1950-1973: Liberalizations and International Integration Once Again

The growth performance during 1950-1957 was markedly lower than the Western European average. The main reason was the high share of agricultural goods in Danish exports, 63 percent in 1950. International trade in agricultural products to a large extent remained regulated. Large deteriorations in the terms of trade caused by the British devaluation of 1949, when Denmark followed suit, the outbreak of the Korean War in 1950, and the Suez crisis of 1956 made matters worse. The ensuing deficits on the balance of payments led the government to contractionary policy measures which restrained growth.

The liberalization of the flow of goods and capital in Western Europe within the framework of the OEEC (the Organization for European Economic Cooperation) during the 1950s probably dealt a blow to some Danish manufacturing firms, especially in the textile industry, that had been sheltered by exchange control and wartime conditions. Nevertheless, the export share of industrial production doubled from 10 percent to 20 percent before 1957, at the same time as employment in industry surpassed agricultural employment.

On the question of European economic integration Denmark linked up with its largest trading partner, Britain. After the establishment of the European Common Market in 1958, and when the attempts to create a large European free trade area failed, Denmark entered the European Free Trade Association (EFTA) created under British leadership in 1960. When Britain was finally able to join the European Economic Community (EEC) in 1973, Denmark followed, after a referendum on the issue. Long before admission to the EEC, the advantages to Danish agriculture from the Common Agricultural Policy (CAP) had been emphasized. The higher prices within the EEC were capitalized into higher land prices at the same time that investments were increased based on the expected gains from membership. As a result the most indebted farmers, who had borrowed at fixed interest rates, were hit hard by two developments from the early 1980s. The EEC started to reduce the producers’ benefits of the CAP because of overproduction and, after 1982, the Danish economy adjusted to a lower level of inflation and, therefore, of nominal interest rates. According to Andersen (2001) Danish farmers were left with the highest interest burden of all European Union (EU) farmers in the 1990s.

Denmark’s relations with the EU, while enthusiastic at the beginning, have since been characterized by a certain amount of reserve. A national referendum in 1992 turned down the treaty on the European Union, the Maastricht Treaty. The Danes then opted out of four areas: common citizenship, a common currency, a common foreign and defense policy, and a common policy on police and legal matters. Once more, in 2000, adoption of the common currency, the Euro, was turned down by the Danish electorate. In the debate leading up to the referendum the possible economic advantages of the Euro in the form of lower transaction costs were considered to be modest, compared to the existing regime of fixed exchange rates vis-à-vis the Euro. All the major political parties, nevertheless, are pro-European, with only the extreme Right and the extreme Left being against. It seems that there is a discrepancy between the general public and the politicians on this particular issue.

As far as domestic economic policy is concerned, the heritage from the 1940s was a new commitment to high employment, modified by a balance of payments constraint. The Danish policy differed from that of some other parts of Europe in that the remains of the planned economy from the war and reconstruction period, in the form of rationing and price control, were dismantled around 1950 and no nationalizations took place.

Instead of direct regulation, economic policy relied on demand management with fiscal policy as its main instrument. Monetary policy remained a bone of contention between politicians and economists. Coordination of policies was the buzzword, but within that framework monetary policy was allotted a passive role. The major political parties were for a long time wary of letting the market rate of interest clear the loan market. Instead, some quantitative measures were carried out with the purpose of dampening the demand for loans.

From Agricultural Society to Service Society: The Growth of the Welfare State

Structural problems in foreign trade extended into the high growth period of 1958-73, as Danish agricultural exports were met with constraints both from the then EEC-member countries and from most EFTA countries as well. During the same decade, the 1960s, as the importance of agriculture was declining, the share of employment in the public sector grew rapidly until 1983. Building and construction also took a growing share of the workforce until 1970. These developments left manufacturing industry in a secondary position. Consequently, as pointed out by Pedersen (1995), the sheltered sectors of the economy crowded out the sectors exposed to international competition, mostly industry and agriculture, by putting pressure on labor and other costs during the years of strong expansion.

Perhaps the most conspicuous feature of the Danish economy during the Golden Age was the steep increase in welfare-related costs from the mid 1960s and not least the corresponding increases in the number of public employees. Although the seeds of the modern Scandinavian welfare state were sown at a much earlier date, the 1960s was the time when public expenditure as a share of GDP exceeded that of most other countries.

As in other modern welfare states, important elements in the growth of the public sector during the 1960s were the expansion of public health care and education, both free for all citizens. The background for much of the increase in the number of public employees from the late 1960s was the rise in labor participation by married women from the late 1960s until about 1990, itself partly a consequence of the public sector’s expansion. In response, public day care facilities for young children and old people were expanded. Whereas in 1965 only 7 percent of 0-6 year olds were in a day nursery or kindergarten, this share rose to 77 percent in 2000. This again spawned more employment opportunities for women in the public sector. Today the labor participation rate for women, around 75 percent of 16-66 year olds, is among the highest in the world.

Originally social welfare programs targeted low income earners who were encouraged to take out insurance against sickness (1892), unemployment (1907) and disability (1922). The public subsidized these schemes and initiated a program for the poor among old people (1891). The high unemployment period in the 1930s inspired some temporary relief and some administrative reform, but little fundamental change.

Welfare policy in the first four decades following World War II is commonly believed to have been strongly influenced by the Social Democrat party, which held around 30 percent of the votes in general elections and was the party in power for long periods of time. One of the distinctive features of the Danish welfare state has been its focus on the needs of the individual person rather than on the family context. Another important characteristic is the universal nature of a number of benefits, starting with a basic old age pension for all in 1956. The compensation rates in a number of schemes are high by international standards, particularly for low income earners. Public transfers gained a larger share of total public outlays both because standards were raised – that is, benefits became higher – and because the number of recipients increased dramatically following the high unemployment regime from the mid 1970s to the mid 1990s. To pay for the high transfers and the large public sector – around 30 percent of the work force – the tax load is also high by international standards. The share of public sector and social expenditure has risen to above 50 percent of GDP, second only to the share in Sweden.

Figure 3. Unemployment, Denmark (percent of total labor force)

Source: Statistics Denmark ‘50 års-oversigten’ and ADAM’s databank

The Danish labor market model has recently attracted favorable international attention (OECD 2005). It has been declared successful in fighting unemployment – especially compared to the policies of countries like Germany and France. The so-called Flexicurity model rests on three pillars. The first is low employment protection, the second is relatively high compensation rates for the unemployed, and the third is the requirement of active participation by the unemployed. Low employment protection has a long tradition in Denmark, and there is no change in this factor when comparing the twenty years of high unemployment – 8-12 percent of the labor force – from the mid 1970s to the mid 1990s, to the past ten years when unemployment has declined to a mere 4.5 percent in 2006. The rules governing compensation to the unemployed were tightened from 1994, limiting the number of years the unemployed could receive benefits from 7 to 4. Most noticeably, labor market policy in 1994 turned from ‘passive’ measures – besides unemployment benefits, an early retirement scheme and a temporary paid leave scheme – toward ‘active’ measures devoted to getting people back to work by providing training and jobs. It is commonly supposed that the strengthening of economic incentives helped to lower unemployment. However, as Andersen and Svarer (2006) point out, while unemployment has declined substantially, a large and growing share of Danes of employable age receives transfers other than unemployment benefits – that is, benefits related to sickness or social problems of various kinds, early retirement benefits, etc. This makes it hazardous to compare the Danish labor market model with that of many other countries.

Exchange Rates and Macroeconomic Policy

Denmark has traditionally adhered to a fixed exchange rate regime. The belief is that for a small and open economy, a floating exchange rate could lead to very volatile exchange rates which would harm foreign trade. After having abandoned the gold standard in 1931, the Danish currency (the Krone) was, for a while, pegged to the British pound, only to join the IMF system of fixed but adjustable exchange rates, the so-called Bretton Woods system, after World War II. The close link with the British economy still manifested itself when the Danish currency was devalued along with the pound in 1949 and, halfway, in 1967. The devaluation also reflected that after 1960, Denmark’s international competitiveness had gradually been eroded by rising real wages, corresponding to a 30 percent real appreciation of the currency (Pedersen 1996).

When the Bretton Woods system broke down in the early 1970s, Denmark joined the European exchange rate cooperation, the “Snake” arrangement, set up in 1972, an arrangement that was to be continued in the form of the Exchange Rate Mechanism within the European Monetary System from 1979. The Deutschmark was effectively the nominal anchor in European currency cooperation until the launch of the Euro in 1999, a fact that put Danish competitiveness under severe pressure because of markedly higher inflation in Denmark compared to Germany. In the end the Danish government gave way under the pressure and undertook four discrete devaluations from 1979 to 1982. Since compensatory increases in wages were held back, the balance of trade improved perceptibly.

This improvement could not, however, make up for the soaring costs of old loans at a time when international real rates of interest were high. The Danish devaluation strategy exacerbated this problem. The anticipation of further devaluations was mirrored in a steep increase in the long-term rate of interest. It peaked at 22 percent in nominal terms in 1982, with an interest spread to Germany of 10 percentage points. Combined with the effects of the second oil crisis on the Danish terms of trade, unemployment rose to 10 percent of the labor force. Given the relatively high compensation ratios for the unemployed, the public deficit increased rapidly and public debt grew to about 70 percent of GDP.

Figure 4. Current Account and Foreign Debt (Denmark)

Source: Statistics Denmark Statistical Yearbooks and ADAM’s Databank

In September 1982 the Social Democrat minority government resigned without a general election and was replaced by a Conservative-Liberal minority government. The new government launched a program to improve the competitiveness of the private sector and to rebalance public finances. An important element was a disinflationary economic policy based on fixed exchange rates, pegging the Krone to the currencies of the EMS participants and, from 1999, to the Euro. Furthermore, automatic wage indexation, which had operated (with a short lag and high coverage) with only brief interruptions since 1920, was abolished. Fiscal policy was tightened, thus bringing an end to the real increases in public expenditure that had lasted since the 1960s.

The stabilization policy was successful in bringing down inflation and long-term interest rates. Pedersen (1995) finds that this process, nevertheless, was slower than might have been expected. In view of earlier Danish exchange rate policy, it took some time for the market to accept the commitment to fixed exchange rates as credible. From the late 1990s, however, the interest spread to Germany/Euroland has been negligible.

The initial success of the stabilization policy brought a boom to the Danish economy that, once again, caused overheating in the form of high wage increases (in 1987) and a deterioration of the current account. The solution was a number of reforms in 1986-87 aimed at encouraging private savings, which had by then fallen to a historical low. Most notable was the reform that reduced the tax deductibility of private interest on debts. These measures resulted in a hard landing for the economy caused by the collapse of the housing market.

The period of low growth was further prolonged by the international recession in 1992. In 1993 yet another shift of regime occurred in Danish economic policy. A new Social Democrat government decided to ‘kick start’ the economy by means of a moderate fiscal expansion whereas, in 1994, the same government tightened labor market policies substantially, as we have seen. Mainly as a consequence of these measures the Danish economy from 1994 entered a period of moderate growth with unemployment steadily falling to the level of the 1970s. A new feature that still puzzles Danish economists is that the decline in unemployment over these years has not yet resulted in any increase in wage inflation.

Denmark at the beginning of the twenty-first century in many ways fits the description of a Small Successful European Economy according to Mokyr (2006). Unlike most of the other small economies, however, Denmark has broad-based exports with no particular “niche” in the world market. As in some other small European countries, such as Ireland, Finland and Sweden, the short-term economic fluctuations described above have not followed the European business cycle very closely for the past thirty years (Andersen 2001). Domestic demand and domestic economic policy have, after all, played a crucial role even in a very small and very open economy.

References

Abildgren, Kim. “Real Effective Exchange Rates and Purchasing-Power-Parity Convergence: Empirical Evidence for Denmark, 1875-2002.” Scandinavian Economic History Review 53, no. 3 (2005): 58-70.

Andersen, Torben M. et al. The Danish Economy: An International Perspective. Copenhagen: DJØF Publishing, 2001.

Andersen, Torben M. and Michael Svarer. “Flexicurity: den danska arbetsmarknadsmodellen.” Ekonomisk debatt 34, no. 1 (2006): 17-29.

Banggaard, Grethe. Befolkningsfremmende foranstaltninger og faldende børnedødelighed. Danmark, ca. 1750-1850. Odense: Syddansk Universitetsforlag, 2004.

Hansen, Sv. Aage. Økonomisk vækst i Danmark: Volume I: 1720-1914 and Volume II: 1914-1983. København: Akademisk Forlag, 1984.

Henriksen, Ingrid. “Avoiding Lock-in: Cooperative Creameries in Denmark, 1882-1903.” European Review of Economic History 3, no. 1 (1999): 57-78.

Henriksen, Ingrid. “Freehold Tenure in Late Eighteenth-Century Denmark.” Advances in Agricultural Economic History 2 (2003): 21-40.

Henriksen, Ingrid and Kevin H. O’Rourke. “Incentives, Technology and the Shift to Year-round Dairying in Late Nineteenth-century Denmark.” Economic History Review 58, no. 3 (2005): 520-54.

Johansen, Hans Chr. Danish Population History, 1600-1939. Odense: University Press of Southern Denmark, 2002.

Johansen, Hans Chr. Dansk historisk statistik, 1814-1980. København: Gyldendal, 1985.

Klovland, Jan T. “Monetary Policy and Business Cycles in the Interwar Years: The Scandinavian Experience.” European Review of Economic History 2, no. 3 (1998): 309-44.

Maddison, Angus. The World Economy: A Millennial Perspective. Paris: OECD, 2001.

Mokyr, Joel. “Successful Small Open Economies and the Importance of Good Institutions.” In The Road to Prosperity. An Economic History of Finland, edited by Jari Ojala, Jari Eloranta and Jukka Jalava, 8-14. Helsinki: SKS, 2006.

OECD. Employment Outlook. 2005.

O’Rourke, Kevin H. “The European Grain Invasion, 1870-1913.” Journal of Economic History 57, no. 4 (1997): 775-99.

O’Rourke, Kevin H. and Jeffrey G. Williamson. Globalization and History: The Evolution of a Nineteenth-century Atlantic Economy. Cambridge, MA: MIT Press, 1999.

Pedersen, Peder J. “Postwar Growth of the Danish Economy.” In Economic Growth in Europe since 1945, edited by Nicholas Crafts and Gianni Toniolo. Cambridge: Cambridge University Press, 1995.

Topp, Niels-Henrik. “Influence of the Public Sector on Activity in Denmark, 1929-39.” Scandinavian Economic History Review 43, no. 3 (1995): 339-56.


Footnotes

1 Denmark also includes the Faeroe Islands, with home rule since 1948, and Greenland, with home rule since 1979, both in the North Atlantic. These territories are left out of this account.

Citation: Henriksen, Ingrid. “An Economic History of Denmark”. EH.Net Encyclopedia, edited by Robert Whaples. October 6, 2006. URL http://eh.net/encyclopedia/an-economic-history-of-denmark/

Credit in the Colonial American Economy

David T. Flynn, University of North Dakota

Overview of Credit versus Barter and Cash

Credit was vital to the economy of colonial America, and much of the individual prosperity and success in the colonies rested on it. Networks of credit stretched across the Atlantic from Britain to the major port cities and into the interior of the country, allowing exchange to occur (Bridenbaugh, 1990, 154). Colonists made purchases by credit, cash and barter. Barter and cash were spot exchanges: goods and services were given in exchange for immediate payment. Credit, however, delayed the payment until a later date. Understanding the role of credit in the eighteenth century requires a brief discussion of all payment options as well as the nature of the repayment of credit.

Barter

Barter is an exchange of goods and services for other goods and services, and it can be a very difficult method of exchange because it requires a double coincidence of wants: for exchange to occur, each party must have the good desired by its trading partner. Suppose John Hancock has paper supplies and wants corn while Paul Revere has silver spoons and wants paper products. Even though Revere wants the goods available from Hancock, no exchange occurs because Hancock does not want the good Revere has to offer. The double coincidence of wants can make barter very costly because of the time spent searching for a trading partner. This time could otherwise be used for consumption, production, leisure, or any number of other activities. The principal advantage of any form of money over barter is obvious: money removes the need for a double coincidence of wants, that is, money functions as a medium of exchange.

Money’s advantages

Money also has other functions that make it a superior method of exchange to barter including acting as the unit of account (the unit in which prices are quoted) in the economy (e.g. the dollar in the United States and the pound in England). A barter economy uses a large number of prices because every good must have a price in terms of each other good available in the economy. An economy with n different goods would have n(n-1)/2 prices in total, not an enormous burden for small values of n, but as n grows it quickly becomes unmanageable. A unit of account reduces the number of prices from the barter situation to n, or the number of goods. The colonists had a unit of account, the colonial pound (£), which removed this burden of barter.
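A worked illustration of this arithmetic may be helpful (the figures are hypothetical, added here only to show the scale of the problem):

\[
\left.\frac{n(n-1)}{2}\right|_{n=100} = \frac{100 \times 99}{2} = 4{,}950 \text{ barter prices,}
\]

whereas an economy that quotes all prices in a single unit of account, such as the colonial pound, needs only n = 100 prices, one per good.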

Several forms of money circulated in the colonies over the course of the seventeenth and eighteenth centuries, such as specie, commodity money and paper currency. Specie is gold or silver minted into coins and is a special form of commodity money, a good that has an exchange value separate from its market value as a good. Tobacco, and later tobacco warehouse receipts, acted as a form of money in many of the colonies. Despite multiple money options, some colonists complained of an inability to keep money in circulation, or at least in the hands of those wanting to use it for exchange (Baxter, 1945, 11-17; Bridenbaugh, 153).1

Credit’s advantages

When you acquire goods with credit you delay payment to a later time, be it one day or one year. A basic credit transaction today is essentially the same as in the eighteenth century, only the form is different.2 Extending credit presents risks, most notably default, or the failure of the borrower to repay the amount borrowed. Sellers also needed to worry about the total volume of credit they extended because it threatened their solvency in the case of default. Consumers benefited from credit by the ability to consume beyond current financial resources, as well as security from theft and other advantages. Sellers gained by faster sales of goods and interest charges, often hidden in a higher price for the goods.3
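The interest “hidden in a higher price” can be made concrete with a hypothetical example (the figures are illustrative, not drawn from colonial account books): if a merchant charged £1.00 for a good paid in cash but £1.08 for the same good on credit due in one year, the credit buyer implicitly paid interest at

\[
i = \frac{1.08 - 1.00}{1.00} = 0.08 = 8 \text{ percent per year,}
\]

even though no explicit interest rate appeared anywhere in the transaction.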

Uncertainty about the scope of credit

The frequency of credit versus barter and cash is not well quantified because surviving account books and transaction records generally only report cash or goods payments made after the merchant allowed credit, not spot cash or barter transactions (Baxter, 19n). Martin (1939, 150) concurs: “The entries represent transactions with those customers who did not pay at once on purchasing goods for [the seller] either made no record of immediate cash purchases, or else there were almost no such transactions.” Flynn’s (2001) study of merchant account books from Connecticut and Massachusetts likewise found that most purchases recorded in the account books were credit purchases (see Table 1 below).4 Scholars are thus forced to make general statements about credit as a standard tool in transactions in port cities and rural villages without reference to specific numbers (Perkins, 1980, 123-124).

Table 1

Percentage of Purchases by Type

Colony Purchases by Credit Purchases by Cash Purchases by Barter
Connecticut 98.6 1.1 0.3
Massachusetts 98.5 1.0 0.4
Combined 98.6 1.0 0.4

Source: Adapted from Table 3.2 in Flynn (2001), p. 54.

Indications of the importance of credit

In some regions, the institution of credit was so accepted that many employers, including merchants, paid their employees by providing them credit at a store on the business’s account (Martin, 94). Probate inventories attest to the frequency of credit through the large amounts of accounts receivable recorded for traders and merchants in Connecticut, sometimes over £1,000 (Main, 1985, 302-303). Accounts receivable are an asset of the business representing amounts owed to the business by other parties. Almost 30 percent of the estates of Connecticut “traders” contained £100 or more of receivables (Main, 316). More than this, accounts receivable averaged one-eighth of personal wealth throughout most of the colonial period, and more than one-fifth at the end (Main, 36). While there is no evidence that enables us to determine the relative frequencies of the payment types, the available information supports the idea that the different forms of payment co-existed.

The Different Types of Credit

There are three types of credit to discuss: international credit, book credit, and promissory notes; each facilitated exchange and payment. Colonial importers and wholesalers relied on credit from British suppliers, rural merchants received credit from importers and wholesalers in the port cities and, finally, consumers received credit from retailers. A discussion starts logically with international credit from British suppliers to colonial merchants, because it allowed colonial merchants to extend credit to their own customers (McCusker and Menard, 1985, 80n; Martin, 1939, 19; Perkins, 1980, 24).

Overseas credit

Research on colonial growth attaches importance to several items, including foreign funds, capital improvements and productivity gains. The majority of foreign funds transferred were in the form of mercantile credit (Egnal, 1998, 12-20). British merchants shipped goods to colonial merchants on credit for between six months and one year before demanding payment or charging interest (Egnal, 55; Perkins, 1994, 65; Shepherd and Walton, 1972, 131-132; Thomson, 1955, 15). Other examples show a minimum of one year’s credit given before suppliers assessed five percent interest charges (Martin, 122-123). Factors such as interest and duration determined how long colonial merchants could extend credit to their own customers and at what level of markup. Some merchants sold goods on commission, under which the goods remained the property of the British merchant until sold. After the sale the colonial merchant remitted the funds, less his fee, to the British merchant.

Relationships between colonial and British merchants exhibited regional differences. Virginia merchants’ system of exchange, known as the consignment system, depended on the credit arrangements between planters and “factors” – middlemen who accepted colonial goods and acquired British or other products desired by colonists (Thomson, 28). A relationship with a British merchant was important for success in business because it provided the tobacco growers and factors access to supplies of credit sufficient to maintain business (Thomson, 211). Independent Virginia merchants, those without a British connection, ordered their supplies of goods on credit and paid with locally produced goods (Thomson, 15). Virginia and other Southern colonies could rely on credit because of their production of a staple crop desired by British merchants. New England merchants such as Thomas Hancock, uncle of the famous patriot John Hancock, could not rely on this to the same extent. New England merchants sometimes engaged in additional exchanges with other colonies and countries because they lacked goods desired by British merchants (Baxter, 46-47). Without the willingness of British merchant houses to wait for payment it would have been difficult for many colonial merchants to extend credit to their customers.

Domestic credit: book credit and promissory notes

Domestic credit took two primary forms, book credit and promissory notes. Merchants recorded book credit in the account books of the business: entries were debits against an individual’s account and were set against payments, the credits in the merchant’s ledger. Promissory notes detailed a debt, typically including the date of issue, the date of redemption, the amount owed, possibly the form of repayment, and an interest rate. Book credit and promissory notes were both substitutes and complements. Both represented a delay of payment and could be used to acquire goods, but book accounts were also a large source of personal notes: merchants who felt payment was too slow in coming, or the risks of default too high, could insist the buyer provide a note. The note was a more secure form of credit because it could be exchanged and, despite the likely loss on the note’s face value if the debtor was in financial trouble, would not represent a continuing worry for the merchant (Martin, 158-159).5

Figure 1

Accounts of Samuell Maxey, Customer, and Jonathan Parker, Massachusetts Merchant

Debits
Date         Transaction                                  Debt (£)
5/28/1748    To Maxey earthenware by Brock                 62.00
10/21/1748   To ditto by Cap’n Long                        13.75
5/25/1749    To ditto                                      61.75
6/26/1749    To ditto                                      27.35

Credits
Date         Transaction                                  Credit (£)
5/30/1748    By cash & Leather                             45.00
8/20/1748    By 2 quintals of fish @ 6-0-0 [per quintal]   12.00
11/15/1748   By cash received of Mr. Suttin                 5.00
5/26/1749    By sundrys                                    74.75
10/1749      By cash of Mr. Kettel                          9.75
12/1749      By ditto                                      18.35

Source: John Parker Account Book. Baker Library, Harvard Business School, Mss: 605 1747-1764 P241, p.7.

The settlement of debt obligations incorporated many forms of payment. Figure 1 details the activity between Samuell Maxey and Jonathan Parker, a Massachusetts merchant. Included are several purchases of earthenware by Maxey and others and several payments, including some in cash and goods as well as from third parties. Baxter (1945, 21) describes similar experiences when he says,

…the accounts over and over again tell of the creditor’s weary efforts to get his dues by accepting a tardy and halting series of odds and ends; and (as prices were often soaring, especially in 1740-64) the longer a debtor could put off payment, the fewer goods might he need to hand over to square a liability for so much money.

Repayment means and examples

The “odds and ends” included goods and commodity money as well as other cash, bills of exchange, and third-party settlements (Baxter, 17-32). Merchants accepted goods such as pork, beef, fish and grains in exchange for their store goods (Martin, 94). Flynn (2001) identifies several items offered as payment, including goods, cash, notes and others, as shown in Table 2.

Table 2

Percentage of Payments by Category

                 Cash   Goods   Note   Reckoning   Third-party note   Bond   Labor
Connecticut      27.5    45.9    3.3      7.5             6.9          0.0    8.9
Massachusetts    24.2    47.6    2.8      7.5            13.7          0.2    2.3
Combined         25.6    46.9    3.0      7.5            10.9          0.1    5.0

Source: Adapted from Table 3.4 in Flynn (2001), p. 54.

Cash, goods and notes require no further explanation, but Table 2 shows other items used in payment as well. Colonists used labor to repay their tabs, working in their creditor’s field or lending the labor services of a child or a yoke of oxen. Some accounts also list “reckoning,” which occurred typically between two merchants or traders who made purchases on credit from each other. Before settling, it was convenient for the two merchants to determine the net position of their accounts with each other. After making the determination the merchant in debt might make a payment that brought the balance to zero; at other times the merchants proceeded without a payment but with a better sense of the account position. Third parties also made payments that employed goods, money and credit. When the merchant did not want the particular goods offered in payment he could hope to pass them on, ideally to his own creditors. Such exchange satisfied both the merchant’s debts and the consumer’s (Baxter, 24-25). Figure 1 above and Figure 2 below illustrate this, and a simple sketch of the netting arithmetic follows.
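The netting arithmetic of a “reckoning” is simple enough to sketch in a few lines of code. The example below is purely illustrative: the merchants and amounts are hypothetical, not drawn from any account book.

```python
# A toy sketch of a "reckoning": two merchants net their mutual book debts.
# All names and amounts are hypothetical.

def reckon(a_owes_b, b_owes_a):
    """Return (who pays, amount) after netting mutual debts, in pounds."""
    net = a_owes_b - b_owes_a
    if net > 0:
        return ("A pays B", net)
    if net < 0:
        return ("B pays A", -net)
    return ("accounts balance", 0.0)

# Merchant A bought £12.50 of goods from B; B bought £9.75 from A.
print(reckon(12.50, 9.75))   # ('A pays B', 2.75)
```

As the text notes, the merchants might then settle the remaining £2.75 in cash or goods, or simply carry it forward on the books.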

Figure 2

Accounts of Mr. Clark, Customer, and Jonathan Parker, Massachusetts Merchant

Debits
Date         Transaction                        Debt (£)
9/27/1749    To Clark earthenware                10.85

Credits
Date         Transaction                        Credit (£)
11/30/1749   By cash                              3.00
4/14/1750    By ditto                             1.00
?/1762       By rum in full of Mr. Blanchard      6.35

Source: John Parker Account Book. Baker Library, Harvard Business School, Mss: 605 1747-1764 P241, p.2.

The accounts of Parker and his customer, Mr. Clark, show another purchase of earthenware and three payments. The purchase is clearly on credit, as Parker recorded the first payment over two months after the purchase. Clark provided two cash payments, and then a third person, Mr. Blanchard, settled Clark’s account in full with rum. What do these third-party payments represent? For answers we need to step back from the specifics of the account and generalize.

Figures 1 and 2 show credits from third parties in cash and goods. If we think in terms of three-way trade the answer becomes obvious. In Figure 1, where a Mr. Suttin pays £5.00 cash to Parker on the account of Samuell Maxey, Suttin is settling a debt he owes Maxey (whether in part or in full we do not know). To settle the debt he owes Parker, Maxey directs those who owe him money to pay Parker, and thus reduces his own debt. Figure 2 displays the same type of activity, except that Blanchard pays with rum. Though not depicted here, private debts between customers could also be settled on the merchant’s books: rather than offering payment in cash or goods, private parties could swap debt on the merchant’s account book, ordering a transfer from one account to another. The merchant’s final approval of the exchange implied something about the added risk from a third-party exchange: the new debtor did not pose a greater default risk in the creditor’s opinion, otherwise (we would suspect) he would have refused the exchange.6
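In bookkeeping terms such a swap is just a transfer between accounts in the merchant’s ledger. The toy model below illustrates the mechanics; the names echo Figure 1, but the balances are invented for the example.

```python
# A toy model of debt-swapping on a merchant's books (hypothetical balances).
# Each balance is the number of pounds a customer owes the merchant.
ledger = {"Maxey": 5.00, "Suttin": 8.00}

def transfer_debt(ledger, from_acct, to_acct, amount):
    """Move debt from one customer's account to another's, as when a
    customer directs his own debtor to answer to the merchant for him."""
    ledger[from_acct] -= amount   # the first customer's debt to the merchant falls
    ledger[to_acct] += amount     # the second customer now owes the merchant instead
    return ledger

print(transfer_debt(ledger, "Maxey", "Suttin", 5.00))
# {'Maxey': 0.0, 'Suttin': 13.0}
```

One ledger entry thus extinguishes two obligations at once, the customer’s debt to the merchant and the third party’s private debt to the customer, without any cash changing hands.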

Complexity of the credit system

The payment system in the colonies was complex and dynamic, with creditors allowing debtors to settle accounts in several fashions. Goods and money satisfied outstanding debts, and other credit obligations deferred or transferred debts. Debtors and creditors employed the numerous forms of payment in regular and third-party transactions, making merchants’ account books a clearinghouse for debts. Although the lack of modern technology leaves casual observers thinking that payments at this time were primitive, such was clearly not the case. With only pen and paper, eighteenth-century merchants developed a sophisticated payment system, of which book credit and personal notes were an important part.

The Duration of Credit

The length of time credit remains outstanding, its duration, is an important characteristic. Duration represents the amount of time a creditor awaited payment, and anecdotal and statistical evidence provide some insight into the duration of book credit and promissory notes.

The calculation of the duration of book credit, or any similar type of instrument, is relatively straightforward when the merchant recorded dates in his account book conscientiously. Consider the following example.

Figure 3

Accounts of David Forthingham, Customer, and Jonathan Parker, Massachusetts Merchant

Debits
Date        Transaction                   Debt (£)
10/1/1748   To Forthingham earthenware     7.75

Credits
Date        Transaction      Credit (£)
10/1/1748   By cash           3.00
4/1749      By Indian corn    4.75

Source: John Parker Account Book. Baker Library, Harvard Business School, Mss: 605 1747-1764 P241, p.2.

The exchanges between Frothingham and Jonathan Parker show one purchase and two payments. Frothingham provides a partial payment in cash at the time of purchase. However, £4.75 of debt remains outstanding and is not repaid until April of 1749. It is possible to calculate a range of values for the final settlement of this account, using the first day of April to give a lower-bound estimate and the last day to give an upper-bound estimate. Counting the days shows that it took at least 182 and at most 211 days to settle the debt; alternatively, the debt lasted between six and seven months.
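The day counts are easy to verify. A minimal sketch follows, using Python’s datetime module; it assumes modern (Gregorian) date arithmetic and ignores the old-style/new-style calendar question for dates before 1752, which does not affect the length of this particular interval.

```python
# A minimal check of the Figure 3 duration bounds.
from datetime import date

purchase = date(1748, 10, 1)    # earthenware bought; partial cash payment same day
earliest = date(1749, 4, 1)     # "April 1749": earliest possible settlement
latest   = date(1749, 4, 30)    # latest possible settlement

lower = (earliest - purchase).days
upper = (latest - purchase).days
print(lower, upper)             # 182 211
print(lower / 30, upper / 30)   # about 6.1 to 7.0 months (30-day months, per note 9)
```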

Figure 4

Accounts of Joseph Adams, Customer, and Jonathan Parker, Massachusetts Merchant

Debits
Date        Transaction            Debt (£)
9/7/1747    to Adams earthenware    -30.65
7/22/1748   to ditto                -22.40

Credits
Date        Transaction   Credit (£)
11/9/1747   by cash        30.65
7/22/1748   by ditto       12.40
No Date7    by ditto       10.00

Source: John Parker Account Book. Baker Library, Harvard Business School, Mss: 605 1747-1764 P241, p.4.

Not all merchants were meticulous record keepers, and sometimes they failed to record a date with the rest of an account book entry.8 Figure 4 illustrates this problem well and also provides an example of multiple purchases along with multiple payments. The first purchase of earthenware is repaid with one “cash” payment sixty-three days (2.1 months) later.9 Computation of the term of the second loan is more complicated. The last two payments satisfy the purchase amount, so Adams repaid the loan completely. Unfortunately, Parker left out the date of the second payment. That payment occurred on or after July 22, 1748, so this date is the lower end of the interval. The minimum time between purchase and second payment is thus zero days, but computation of a maximum time, or upper bound, is not possible for lack of information.10

With a sufficient number of debts some generalization is possible. If we interpret the data as the length of a debt’s life, we can use demographic methods, in particular the life table.11 For a sample of Connecticut and Massachusetts account books the average durations are as follows.12
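Before turning to the estimates, a minimal sketch of the life-table logic may help. The code below uses hypothetical settlement times, not Flynn’s sample, and makes the standard simplifying assumption that debts settled within an interval are settled, on average, at its midpoint; the actual procedure (following Newell’s methods, see note 11) also handles censored observations.

```python
# A crude life-table estimate of expected debt duration (hypothetical data).
def expected_duration(durations, width=6):
    """Estimate e0, the expected life of a debt in months, from settlement
    times grouped into intervals of `width` months."""
    n = len(durations)
    settled = {}                       # interval index -> debts settled there
    for d in durations:
        k = int(d // width)            # 0 -> [0,6), 1 -> [6,12), ...
        settled[k] = settled.get(k, 0) + 1
    person_months, survivors = 0.0, n
    for k in range(max(settled) + 1):
        deaths = settled.get(k, 0)
        # survivors live the full interval; those settled live half of it
        person_months += width * (survivors - deaths / 2)
        survivors -= deaths
    return person_months / n

sample = [2, 5, 7, 8, 11, 14, 20, 26]  # invented settlement times, in months
print(expected_duration(sample))        # 12.0
```

Applied separately to the lower-bound and upper-bound settlement dates of each debt, this kind of calculation yields ranges like those reported in Tables 3 and 4.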

Table 3

Expected Duration for Connecticut Debts, Lower and Upper Bound

Size of debt (£)   e0 lower bound (months)   Median lower bound (interval)   e0 upper bound (months)   Median upper bound (interval)
All values               14.79                        6-12                          15.87                       6-12
0.0-0.25                 15.22                        6-12                          15.99                       6-12
0.25-0.50                14.28                        6-12                          15.51                       6-12
0.50-0.75                15.24                        6-12                          18.01                       6-12
0.75-1.00                14.25                        6-12                          15.94                       6-12
1.00-10.00               13.95                        6-12                          15.07                       6-12
10.00+                    7.95                        0-6                           10.73                       6-12

Table 4

Expected Duration for Massachusetts Debts, Lower and Upper Bound

Size of debt (£)   e0 lower bound (months)   Median lower bound (interval)   e0 upper bound (months)   Median upper bound (interval)
All values               13.22                        6-12                          14.87                       6-12
0.0-0.25                 14.74                        6-12                          17.55                      12-18
0.25-0.50                12.08                        6-12                          12.80                       6-12
0.50-0.75                11.73                        6-12                          13.08                       6-12
0.75-1.00                11.01                        6-12                          12.43                       6-12
1.00-10.00               13.08                        6-12                          13.88                       6-12
10.00+                   14.28                       12-18                          17.02                      12-18

Source: Adapted from Tables 4.1 and 4.2 in Flynn (2001), p. 80.

For all debts in the sample from Connecticut, the expected length of time a debt remained outstanding from its inception is estimated at between 14.79 and 15.87 months. For Massachusetts the range is somewhat shorter, from 13.22 to 14.87 months. Tables 3 and 4 also break the data into categories based on the value of the credit transaction. An important question is whether this represents long-term or short-term debt. There is no standard yardstick for comparison in this case. The best comparison is likely the international credit granted to colonial merchants, who needed to repay those amounts and had to sell the goods to make remittances. The estimates of that credit duration, listed earlier, center around one year, which means that colonial merchants in New England needed to repay their British suppliers before they could expect to receive full payment from their customers. From the colonial merchants’ perspective book credit was certainly long-term.

Other estimates of duration of book credit

Other estimates of book credit’s duration vary. Consumers paying for their credit purchases in kind took as little as a few months or as long as several years (Martin, 153). Some accounting records show book credit remaining unsettled for nearly thirty years (Baxter, 161). Thomas Hancock often noted expected payment dates, such as “to pay in 6 months,” along with a purchase, though frequently this was not enough time for the buyer. Hancock blamed the law, which allowed twelve months for repayment, complaining to his suppliers that he often provided credit to country residents of “one two & more years” (Baxter, 192). Surely such a situation is the exception rather than the rule, though it reminds us that many of these arrangements were open-ended, lacking definite endpoints. Some merchants allowed accounts to run as long as two years before examining their position, allowing one year’s book credit without charge and thereafter assessing interest (Martin, 157).

Duration of promissory notes

The duration of promissory notes is also important. Priest (1999) examines a form of duration for these credit instruments, estimating the time between a debtor’s signing of a note and the creditor’s filing of suit to collect payment. Of course this only measures the duration of notes that went into default and required legal recourse. Typically, a suit originated some six to nine months after default (Priest, 2417-18). Results for the period 1724 to 1750 show that 14.5% of cases occurred within six months of the initial contraction date, the execution of the debt. Merchants brought suit in more than 60% of the cases between six months and three years from execution: 21.4% from six to twelve months, 27.4% from one to two years and 14.1% from two to three years. Finally, more than 20% of the cases occurred more than three years from the execution of the debt. The median interval between execution and suit was 17.5 months (Priest, 2436, Table 3).

The duration of promissory notes provides an important complement to estimates of book credit’s term. A median estimate of 17.5 months makes promissory notes, more than likely, a long-term credit instrument when balanced against the one-year credit term given colonial importers. Estimates for book credit range from three months to several years in the literature, and from 13 to 16 months in Flynn’s (2001) study. These results show that merchants waited significant amounts of time for payment, raising the issue of the time value of money and interest rates.

The Interest Practices of Merchants

In some cases credit was outstanding for a long period of time, yet the accounts make no mention of any interest charges, as in Figures 1 through 4. Such an omission is difficult to reconcile with the fairly sophisticated business practices of the merchants of the day. Accounting research and manuals from the time demonstrate a clear understanding of the time value of money, and the business community understood the concept of compound interest. Account books allowed merchants to charge higher and variable prices for goods sold on book credit (Martin, 94). While in some cases interest charges entered the account book as an explicit entry, in many others interest was an added or implicit charge contained in the good’s price.

Advertisements from the time make it clear that merchants charged less for goods purchased with cash, and accounts paid promptly received a discount on the price:

One general pricing policy seems to have been that goods for cash were sold at a lower price than when they were charged. Cabel[sic] Bull advertised beaver hats at 27/ cash and 30/ country produce in hand. Daniel Butler of Northampton offered dyes, and “a few Cwt. of Redwood and Logwood cheaper than ever for ready money.” Many other advertisements carried allusions to the practice but gave no definite data. A daybook of the Ely store contained this entry for October 21, 1757: “William Jones, Dr to 6 yds Towcloth at 1/6—if paid in a month at 1/4.” (Martin, 1939, 144-145)

Other advertisements also point to a price difference, offering cash prices for certain grains the merchants desired. Connecticut merchants likely offered good prices for products they thought would sell well, since they sought remittances for their British creditors. Hartford merchants charged interest rates ranging from four and one-half to six and one-half percent in the 1750s and 1760s (Martin, 158), though Flynn (2001) arrives at different rates from a different sample of New England account books. Many promissory notes in South Carolina specified interest but not an exact rate, usually using just the term “lawful interest” (Woods, 364).

Estimates of interest rates

Simple regression analysis can help determine whether interest was implicit in the price of goods sold on credit, though numerous technical issues, such as borrower characteristics, market conditions and the quality of the good, put a full discussion beyond the scope of this article.13 In general there seems to be a positive correlation, with annual interest rates falling between 3.75% and 7%, which is consistent with the results from interest entries made in account books. There is some tendency for the price of a good to increase with the time waited for repayment, though many other technical matters need resolution.
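Footnote 13 describes the premise; the sketch below makes it concrete. The transactions are invented for illustration: under continuous compounding, regressing the log of the per-unit price on the time a debt was outstanding yields a slope that estimates the annual implicit interest rate.

```python
# A minimal sketch of the implicit-interest test, with invented data.
import numpy as np

# per-unit prices (shillings) and years the debt was outstanding
prices = np.array([10.0, 10.3, 10.5, 10.2, 11.0, 10.8])
years  = np.array([0.0,  0.5,  1.0,  0.4,  1.8,  1.5])

# OLS of log(price) on time outstanding: log p = a + r*t
X = np.column_stack([np.ones_like(years), years])
coef, *_ = np.linalg.lstsq(X, np.log(prices), rcond=None)
print(f"implied annual rate: {coef[1]:.2%}")   # roughly 5% on these numbers
```

A serious estimate would control for the good’s quality, market conditions and borrower characteristics, which is exactly why the text treats these results cautiously.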

Most annual interest rates in Flynn’s (2001) study, explicit and implicit, fall in the range of 4 to 6.5 percent, making them similar to those Martin found in her examination of accounts and roughly consistent with the Massachusetts lawful rate of 6 percent at the time, though some entries assess interest as high as 10 percent (Martin, 158; Rothenberg, 1992, 124). Even so, the explicit rates are insufficient on their own to support a conclusion about the interest rate charged on book credit; there are too few entries, and many involve promissory notes or third parties, factors expected to alter the rate. Other factors, such as borrower characteristics, likely changed the assessed rate of interest too, with more prominent and wealthy individuals charged lower rates, whether because of their status and a perceived lower risk or because of longer merchant-buyer relationships. Most account books do not contain enough information to judge the effects of these characteristics.

Merchants gained from credit by charging higher prices; credit required a premium over cash sales, so the merchant collected interest and at the same time minimized the necessary amount of payment media (Martin, 94). Interest, an attempt to account for risk and the time value of money, was distinct from the normal markups for insurance, freight, wharfage and the like, which were often significant additions to the overall price (Baxter, 192; Thomson, 239).14

Conclusions

Credit was significant as a form of payment in colonial America. Direct comparisons of the number of credit purchases versus barter or cash purchases are not possible, but an examination of accounting records demonstrates credit’s widespread use. Credit was present in all forms of trade, including international trade between England and her colonies. The domestic forms of credit were relatively long-term instruments that allowed individuals to consume beyond current means. In addition, book credit allowed colonists to economize on cash and other means of payment through transfers of credit, “reckoning,” and practices such as paying workers with store credit. Merchants also understood the time value of money, entering interest charges explicitly in the account books and implicitly as part of the price. The use of credit, the duration of credit instruments, and the methods of incorporating interest show credit to have been an important method of exchange, and the economy of colonial America to have been complex and sophisticated.

References

Baxter, W.T. The House of Hancock: Business in Boston, 1724-1775. Cambridge: Harvard University Press, 1945.

Bridenbaugh, Carl. The Colonial Craftsman. Dover Publications: New York, 1990.

Egnal, Marc. New World Economies: The Growth of the Thirteen Colonies and Early Canada. Oxford: Oxford University Press, 1998.

Flynn, David T. “Credit and the Economy of Colonial New England.” Ph.D. dissertation, Indiana University, 2001.

McCusker, John J., and Russel R. Menard. The Economy of British America, 1607-1789. Chapel Hill: University of North Carolina Press, 1985.

Main, Jackson Turner. Society and Economy in Colonial Connecticut. Princeton: Princeton University Press, 1985.

Martin, Margaret. “Merchants and Trade of the Connecticut River Valley, 1750-1820.” Smith College Studies in History. Northampton, Mass.: Department of History, Smith College, 1939.

Parker, Jonathan. Account Book, 1747-1764. Mss: 605 1747-1815. Baker Library Historical Collections, Harvard Business School, Cambridge, Massachusetts.

Perkins, Edwin J. The Economy of Colonial America. New York: Columbia University Press, 1980.

Perkins, Edwin J. American Public Finance and Financial Services, 1700-1815. Columbus: Ohio State University Press, 1994.

Price, Jacob M. Capital and Credit in British Overseas Trade: The View from the Chesapeake, 1700-1776. Cambridge: Harvard University Press, 1980.

Priest, Claire. “Colonial Courts and Secured Credit: Early American Commercial Litigation and Shays’ Rebellion.” Yale Law Journal 108, no. 8 (June, 1999): 2412-2450.

Rothenberg, Winifred. From Market-Places to a Market Economy: The Transformation of Rural Massachusetts, 1750-1850. Chicago: University of Chicago Press, 1992.

Shepherd, James F., and Gary Walton. Shipping, Maritime Trade, and the Economic Development of Colonial North America. Cambridge: Cambridge University Press, 1972.

Thomson, Robert Polk. “The Merchant in Virginia, 1700-1775.” Ph.D. dissertation, University of Wisconsin, 1955.

Further Reading:

For a good introduction to credit’s importance across different professions, merchant practices and the development of business practices over time I suggest:

Bailyn, Bernard. The New England Merchants in the Seventeenth-Century. Cambridge: Harvard University Press, 1979.

Schlesinger, Arthur. The Colonial Merchants and the American Revolution: 1763-1776. New York: Facsimile Library Inc., 1939.

For an introduction to issues relating to money supply, the unit of account in the economy, and price and exchange rate data I recommend:

Brock, Leslie V. The Currency of the American Colonies, 1700-1764: A Study in Colonial Finance and Imperial Relations. New York: Arno Press, 1975.

McCusker, John J. Money and Exchange in Europe and America, 1600-1775: A Handbook. Chapel Hill: University of North Carolina Press, 1978.

McCusker, John J. How Much Is That in Real Money? A Historical Commodity Price Index for Use as a Deflator of Money Values in the Economy of the United States, Second Edition. Worcester, MA: American Antiquarian Society, 2001.

1 Some authors cite the small number of cash purchases, as well as the small number of cash payments for debts, as evidence of a lack of money (Bridenbaugh, 153; Baxter, 19n).

2 Presently, credit cards are a common form of payment. While such technology did not exist in the past, the merchant’s account book provided a means of recording credit purchases.

3 Price (1980, pp.16-17) provides an excellent summary of the advantages and risks of credit to different types of consumers and to merchants in both Britain and the colonies.

4 Please note that this table consists of transactions mostly between colonial retail merchants and colonial consumers in New England. Flynn (2001) uses account books that collectively span from approximately 1704 to 1770.

5 In some cases the extension of book credit came with a requirement to provide a note as well. When the solvency of the debtor came into question, the creditor could sell the note and pass the risk of default on to another.

6 I offer a detailed example of such an exchange going sour for the merchant below.

7 “No date” is Flynn’s entry to show that a date is not recorded in the account book.

8 It seems that this frequently occurs at the end of a list of entries, particularly when the credit fully satisfies an outstanding purchase as in Figure 4.

9 To calculate months, divide days by 30. The term “cash” is placed in quotation marks as it is woefully nondescript. Some merchants and researchers using account books group several different items under the heading cash.

10 Students interested in historical research of this type should be prepared to encounter many situations of missing information. There are ways to deal with this censoring problem, but a technical discussion is not appropriate here.

11 Colin Newell’s Methods and Models in Demography (Guilford Press, 1988) is an excellent introduction for these techniques.

12 Note that either merchants recorded amounts in the lawful money standard or Flynn (2001) converted amounts into this standard for these purposes.

13 The premise behind the regression is quite simple: we look for a correlation between the amount of time an amount was outstanding and the per-unit price of the good. If credit purchases contained implicit interest charges there would be a positive relationship. Note that this test implies forward-looking merchants, that is, merchants who factored the perceived or agreed-upon time to repayment into the price of the good.

14 The advance varied by colony, good and time period:

In 1783, a Boston correspondent wrote Wadsworth that dry goods in Boston were selling at a twenty to twenty-five percent ‘advance’ from the ‘real Sterling Cost by Wholesale.’ The ‘advances’ occasionally mentioned in John Ely’s Day Book were far higher, seventy to seventy-five per cent on dry goods. Dry goods sold well at one hundred and fifty per cent ‘advance’ in New York in 1750… (Martin, 136).

In the 1720s a typical advance on piece goods in Boston was eighty per cent, seventy-five with cash (Martin, 136n). It should be noted that others find open account balances were commonly kept interest free (Rothenberg, 1992, 123).


Citation: Flynn, David. “Credit in the Colonial American Economy”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/credit-in-the-colonial-american-economy/

Cliometrics

John Lyons, Miami University

Lou Cain, Loyola University Chicago and Northwestern University

Sam Williamson, Miami University

Introduction

In the 1950s a small group of North American scholars adopted a revolutionary approach to investigating the economic past that soon spread to Great Britain and Ireland, the European mainland, Australia, New Zealand, and Japan. What was first called “The New Economic History,” then “Cliometrics,” was impelled by the promise of significant achievement, by the novelties of the recent (mathematical) formalization of economic theory, by the rapid spread of econometric methods, and by the introduction of computers into academia. Cliometrics has three obvious elements: use of quantifiable evidence, use of theoretical concepts and models, and use of statistical methods of estimation and inference. To these it adds an important fourth: employment of the historian’s skills in judging the provenance and quality of sources, in placing an investigation in institutional and social context, and in choosing subject matter of significance to history as well as economics. Although the term cliometrics is used to describe work in a variety of historical social and behavioral sciences, the discussion here focuses on economic history.

A quantitative-analytical approach to economic history developed in the interwar years through the work of such scholars as Simon Kuznets in the U.S. and Colin Clark in Britain. Characteristic elements of cliometrics were stimulated by events, by changes in economics, and by an intensification of what might be called the statistical impulse.

First, depression, war, the dissolution of empires, a renewal of widespread and more rapid growth in the Western world, and the challenge of Soviet-style economic planning combined to focus attention on the sources and mechanisms of economic growth and development.

Second, new intellectual currents in economics, spurred in part by contemporary economic problems, arose and came to dominate the profession. In the 1930s, and especially during the war, theoretical approaches to the aggregate economy and its capabilities grew out of the new Keynesian macroeconomics and the development of national income accounting. Explicit techniques for analyzing resource allocation in detail were introduced and employed in wartime planning. Econometrics, the statistical analysis of economic data, continued to grow apace.

Third, the gathering of facts – with an emphasis on systematic arrays of quantitative facts – became more important. By the nineteenth century governments, citizens and scholars had become preoccupied with fact-gathering, but their collations were ordinarily ad hoc and unsystematic. Thoroughness and system became the desiderata of scholarly fact-gathering in the twentieth century.

All these forces had an impact on the birth of a more rigorous way of examining our economic past.

The New Economic History in North America

Cliometrics was unveiled formally in Williamstown, Massachusetts, in the autumn of 1957 at an unusual four-day gathering sponsored by the Economic History Association and the Conference on Research in Income and Wealth. Most of the program was designed to showcase recent work by economists who had ventured into history.

Young scholars in the Income and Wealth group presented their contributions to the historical national accounts of the United States and Canada, spearheaded by Robert Gallman’s estimates of U.S. commodity output, 1839-1899. A pair of headline sessions dealt with method; the one on economic theory and economic history was headed by Walt Rostow, who recalled his undergraduate years in the 1930s at Yale, where he had been led to ask himself “why not see what happened if the machinery of economic theory was brought to bear on modern economic history?” He asserted “economic history is a less interesting field than it could be, because we do not remain sufficiently loyal to the problem approach, which in fact underlies and directs our efforts.”

Newcomers John R. Meyer and Alfred H. Conrad presented two papers. The first was “Economic Theory, Statistical Inference, and Economic History” (1957), a manifesto for using formal theory and econometric methods to examine historical questions. They argued that particular historical circumstances are instances of more general phenomena, suitable for theoretical analysis, and that quantitative historical evidence, although relatively scarce, is much more abundant than many historians believed and can be analyzed using formal statistical methods. At another session Conrad and Meyer presented “The Economics of Slavery in the Antebellum South,” which incorporated their methodological views to refute a long-standing proposition that the slave system in the southern United States had become moribund by the 1850s and would have died out had there been no Civil War. Conrad and Meyer buttressed the point by showing that slaveholding, viewed as a business activity, had been at least as remunerative as other uses of financial and physical capital. More broadly, they illustrated “the ways in which economic theory might be used in ordering and organizing historical facts.”

Two decades later Robert Gallman recalled that the Williamstown “conference did more than put the ball in motion … It also set the tone and style of the new economic history and even forecast the chief methodological and substantive interests that were to occupy cliometricians for the next twenty-one years.” What began in the late 1950s as a trickle of work in the new style grew to a freshet and then a flood, incorporating new methods, examining bodies of data previously too difficult to analyze without the aid of computers, and investigating a variety of questions of traditional importance, mostly in American economic history. The watershed was continent-wide, collecting the work of small clusters of scholars bound together in a ramifying intellectual and social network.

An important and continuing node in this network was at Purdue University in West Lafayette, Indiana. In the late 1950s a group of young historical economists assembled there, among whom the cross-pollination of historical interests and technical expertise was exceptional. In this group were Lance Davis and Jonathan Hughes and several others known primarily for their work in other fields. One was Stanley Reiter, a mathematical economist who traveled with Davis and Hughes to the meetings of the Economic History Association in September 1960 to present their paper explaining the new quantitative historical research being undertaken at Purdue – and to introduce the term “cliometrics” to the profession. The term was coined by Reiter as a whimsical combination of the words Clio, the muse of history, and metrics, from econometrics. As the years went by, the word stuck and became the name of the field.

To build on the enthusiasm aroused by that presentation, and to “consolidate Purdue’s position as the leader in this country of quantitative research in economic history,” Davis and Hughes (with Reiter’s aid) sought and received funds from Purdue for a meeting in December 1960 of about a dozen like-minded economic historians. They gave it the imposing title, “Conference on the Application of Economic Theory and Quantitative Methods to the Study of Problems of Economic History.” For obvious reasons the meetings were soon called “Clio” or the “Cliometrics Conference” by their familiars. Of the six presentations at the first meeting, none was more intriguing than Robert Fogel’s estimates of the “social saving” accruing from the expansion of the American railroad network to 1890.

Sessions were renowned from Clio’s early days as occasions for engaging in sharp debate and asking probing (and occasionally unanswerable) questions. Those who attended the first Clio conference established a tradition of rigorous and detailed analysis of the presenters’ work. In the early years at Purdue and elsewhere, cliometricians developed a research program with mutual support and encouragement and conducted an unusually large proportion of collaborative work, all the while believing in the progressiveness of their efforts.

Indeed, like Walt Rostow, other established economic historians felt that economic history was in need of renewal: Alexander Gerschenkron wrote in 1957 “Economic history is in a poor way. It is unable to attract good students, mainly because the discipline does not present any intellectual challenge …” Some cliometric young Turks were not so mild. While often relying heavily on the wealth of detail amassed in earlier research, they asserted a distinctive identity. The old economic history, it was said, was riddled with errors in economic reasoning and embodied an inadequate approach to causal explanation. The cliometricians insisted on a scientific approach to economic-historical questions, on careful specification of explicit models of the phenomena they were investigating. By implication and by declaration they said that much of conventional wisdom was based on unscientific and unsystematic historical scholarship, on occasion employing language not calculated to endear them to outsiders. The most vocal proponents declared a new order. Douglass North proclaimed that a “revolution is taking place in economic history in the United States … initiated by a new generation of economic historians” intent on reappraising “traditional interpretations of U.S. economic history.” Robert Fogel said that the “novel element in the work of the new economic historians is their approach to measurement and theory,” especially in their ability to find “methods of measuring economic phenomena that cannot be measured directly.” In 1993, these two were awarded the Nobel Memorial Prize in Economics for, in the words of the Nobel committee, being “pioneers in the branch of economic history that has been called the ‘new economic history,’ or cliometrics.”

The hallmark of the top rung of work done by the new economic historians was its integration of fact with theory. As Donald [Deirdre] McCloskey observed in a series of surveys, the theory was often simple. The facts, when not conveniently available, were dug up from surviving sources, whether published or not. Indeed, the discipline imposed by the need to measure usually requires more data than would serve for a qualitative argument, and many new economic historians expended considerable effort in the 1960s to expand the American quantitative record. Thus, with eyebrow raised, so to speak, Albert Fishlow remarked in 1970, “It is ironic … to read that … most of the ‘New Economic History’ only applies its ingenuity to analyzing convenient (usually published) data.” Many cliometricians worked their magic not merely by relying on their predecessors’ compilations; as Scott Eddie comments, “one of the most significant contributions of cliometricians’ painstaking search for data has been the uncovering of vast treasure troves of useful data hitherto either unknown, unappreciated, or simply ignored.” Very early in the computer age they put such data into forms suitable for tabulation and statistical analysis.

William Parker and Robert Gallman, with their students, were pioneers in analyzing individual-level data from the United States Census manuscripts, a project arising from Parker’s earlier study of Southern plantations. From the 1860 agricultural census schedule they drew a carefully constructed sample of over 5,000 farms in the cotton counties of the American South and matched those farms with the two separate schedules for the free and slave populations. The Parker-Gallman sample was followed by Census samples for northern agriculture and for the post-bellum South.

The early practitioners of cliometrics applied their theoretical and quantitative skills to some issues well established in the more “traditional” economic historiography, none more important than asking when and how rapidly the North American economy began to experience “modern economic growth.” In the nineteenth century, economic growth in both the U.S. and Canada was punctuated by booms, recessions and financial crises, but the new work provided a better picture of the path of GNP and its components, revealing steady upward trends in aggregate output and in incomes per person and per worker. This last, it seemed clear from the work in the 1950s of Moses Abramovitz and Robert Solow, must have derived significantly from the introduction of new techniques, as well as from expansion of the scale and penetration of the market. Several scholars thus established a related objective, understanding – or at least accounting for – productivity growth.

Attempting to provide sound explanations for growth, productivity change, and numerous other developments in modern economic history, especially of the U.S. and Britain, was the objective of the cliometricians’ theory and quantification. They were much criticized from without for the very use of these technical tools, and within the movement there was much methodological dispute and considerable dissent. Nonetheless, the early cliometricians spawned a sustained intellectual tradition that diffused worldwide from its North American origins.

Historical Economics in Britain

Cliometrics arrived relatively slowly among British economic historians, but it did arrive. Some was homegrown; some was imported. When Jonathan Hughes expressed doubts in 1970 that the American style of cliometrics could ever be an “export product,” he was already wrong. Admittedly, by then the new style had been employed by only a tiny minority of those writing economic history in Britain. Introduction of a more formal style, in Britain as in North America, fell to those trained as economists, initially to Alec Cairncross, Brinley Thomas and Robin Matthews. Cairncross’s book on home and foreign investment and Thomas’s on migration and growth developed, or collected into one place, a great deal of quantitative information for theoretical analysis; their method, as David Landes noted in 1955, was “in the tradition of historical economics, as opposed to economic history.” Matthews’s Study in Trade Cycle History (1954), which examines the trade cycle of 1833-42, was written, he said, in a “quantitative-historical” mode, and contains theoretical reasoning, economic models, and statistical estimates.

Systematic use of national accounting methods to study British economic development was a task undertaken by Phyllis Deane at Cambridge. Her work resulted in two early papers on British income growth and capital formation and in two books of major importance and lasting value: British Economic Growth, 1688-1959 (1962), written with W. A. Cole, and a compendium of underlying data compiled with Brian Mitchell. Despite skeptical reviews, the basics of the Deane-Cole estimates of eighteenth- and early nineteenth-century aggregate growth were accepted widely for two decades and provided a quantitative basis for discussing living standards and the dispersion of technical progress in the new industrial era. Also at Cambridge, Charles Feinstein estimated the composition and magnitude of British investment flows and produced detailed national income estimates for the nineteenth and twentieth centuries, augmenting, refining and revising, as well as extending, the work of Deane and Cole.

All these studies belong to a decidedly British empirical tradition, despite the use of contemporary theoretical constructs, and contained nothing like the later claims of some American cliometricians about the virtues of using formal theory and statistical methods. Research in a consciously cliometric style was strongly encouraged in the 1960s at Oxford by Hrothgar Habakkuk and Max Hartwell, although neither saw himself as a cliometrician. Separately and together, they supported the movement, encouraging students to absorb both quantitative and formal analytical elements into their work.

The incursion of cliometrics into British economic history was – and has remained – neither so widespread nor so dominant as in North America, partly for reasons suggested by Hughes. Although economic history had been taught and practiced in British universities since the 1870s, after the first World War most faculty members were housed in separate departments of economic (and social) history that tended to require of their students only a modicum of economics and little of quantitative methods. With the establishment of new British universities and the rapid expansion of others, a dozen new departments of economic history were founded in the 1960s, staffed largely by people taught in history and economic history departments. The limited presence of cliometric types in Britain at the turn of the 1970s did not come from deficient demand, nor was it due to hostility or indifference. It was due to limited supply stemming from the small scale of the British academic labor market and an aversion to excessive specialization among young economists. Yet the situation was being rectified. On the demand side, British faculties of economics began to welcome more economic historians as colleagues, and, on the supply side, advanced students were being aided by post-graduate stipends and research support provided by the new Social Science Research Council.

During the 1970s a British version of new historical economics began to take shape. Its practitioners expanded their informal networks into formal institutional structures and scholarly ventures. The organized British movement opened in September 1970 at an Anglo-American “Conference on the New Economic History of Britain” in Cambridge (Massachusetts), followed by two others. From these meetings grew a project to re-write British economic history in a cliometric mode, which resulted in the publication in 1981 of a path-breaking two-volume work, The Economic History of Britain since 1700, edited by Roderick Floud and Donald [Deirdre] McCloskey.

Equally path-breaking, perhaps more so, was the outcome of parallel developments in English historical demography, whose practitioners had become progressively more quantitatively and theoretically adept since the 1950s, and for whom 1981 was also a banner year. Although portions of the book had been circulating for some time, E. A. Wrigley’s and R. S. Schofield’s Population History of England, 1541-1871: A Reconstruction and its striking revisions of English demographic history were now available in one massive document.

As in North America, after the first wave of “quantifiers” invaded parts of British historiography, cliometrics was refined in the heat of scholarly debate.

Controversies

Cliometricians started or continued a series of debates about the nature and sources of economic growth and its welfare consequences that decidedly have altered our picture of modern economic history. The first was initiated by Walt Rostow, who argued that modern economic growth begins with a brief and well-defined period of “take-off,” with the necessary “preconditions” having already become the normal condition of a given national economy or society. His metaphor of a “take-off into self-sustained growth”, which first appeared in a journal article, was popularized in Rostow’s famous book, The Stages of Economic Growth (1960). Rostow asserted that “The introduction of the railroad has been historically the most powerful single initiator of take-offs.” To test this contention, Robert Fogel and Albert Fishlow both wrote Ph.D. dissertations dealing in part with Rostow’s view: Fogel’s Railroads and American Economic Growth (1964) and Fishlow’s American Railroads and the Transformation of the Antebellum Economy (1965). These books contain their estimates of the extent of resource saving that had accrued from the adoption of a new transport system, with costs lower than those of canals. Their results rejected Rostow’s view.

Until the cliometricians made a pair of disputatious incursions into its economic history, the American South was largely the province of regional historians – almost a footnote to the story of U.S. economic development. Sparked by Conrad and Meyer, for two decades cliometricians focused intently on the place of the South in the national economy and of slavery in the Southern economy. To what extent was early national economic growth driven by Southern cotton exports and how self-sufficient was the South as an economic region? Douglass North argued that the key to American economic development before 1860 was regional specialization, that Southern cotton was the economy’s staple product, and that much of Western and Northern economic growth derived from Southern demand for food and manufactures. Indeed, Conrad and Meyer had touched a nerve. Their demonstration of current profitability did not demonstrate long-run viability of the slave system; Yasukichi Yasuba was able to fill that gap by showing that slave prices were regularly more than enough to finance rearing slaves for future sale or employment. Many others tested and refined these early results. As a system of organizing production, American slavery was found to have been thriving on the eve of the Civil War; the sources of that prosperity, however, needed deeper examination.

In Time on the Cross (1974), Robert Fogel and Stanley Engerman not only reaffirmed the profitability and viability of Southern slavery, but they also made claims about the superior productivity of Southern versus Midwestern agriculture and about the relatively generous material comforts afforded to the slave population. Their book sparked a long-running controversy that extended beyond academia and prompted critical examinations and rebuttals by political and social historians and, above all, by their fellow cliometricians. A major critique was Reckoning with Slavery (by Paul David and others, 1976), as much a defense of cliometric method as a catalogue of what the authors saw as the method’s improper or incomplete application in Time on the Cross. Fogel subsequently published Without Consent or Contract (1989), a defense and extension of his and Engerman’s earlier work.

The remarkable antebellum prosperity of the Southern slave economy was followed by an equally remarkable relative decline in Southern per-capita income after the war. While the remainder of the American economy grew rapidly, the South stagnated, with a distinctively low-wage, low-productivity economy and a poorly educated labor force, both black and white. The next generation of cliometricians asked “Why?” Was it the legacy of the slave system, of the virtual absence of industrial development in the antebellum South, of post-Civil War Reconstruction and backlash, of continued reliance on cotton, of Jim Crow, or of racism and discrimination? Roger Ransom and Richard Sutch investigated share-tenancy, debt peonage and labor effort in maintaining cotton cultivation, using individual-level data, some derived, à la Parker and Gallman, from a sample of the manuscript U.S. Censuses. Gavin Wright focused on an effective separation of the Southern from the national labor market, and Robert Margo examined the region’s low level of educational investment and its consequences.

An entirely new line of investigation derived from the research on slavery, measuring the “biological standard of living” using anthropometric data. Richard Steckel’s paper on slave height profiles led directly to the discussion of “Anthropometric Indexes of Malnutrition” in Without Consent or Contract. In a corrective to the Fogel-Engerman interpretation of the slave diet, Steckel showed how stunted (and thus how poorly fed) slave children were before they came of working age. John Komlos discovered that heights (of West Point cadets) were declining even as American per capita income was rising in the years before the Civil War, what he called the “Antebellum Puzzle.” Elsewhere, Roderick Floud led a project employing anthropometric data from records of British military recruits, while Stephen Nicholas, Deborah Oxley and Steckel analyzed records for male and female convicts transported to Australia.

Industrialization and its new technologies in the U.S. long predate the Civil War. In writing about technological progress, economic historians had, before the 1960s, tended to concentrate on single industries or economies. Yet distinctive “national” technologies emerged in the early nineteenth century (e.g., contemporary British observers distinguished “The American System of Manufactures” from their own). Amid the early ferment of quantitative economic history in the United States, Hrothgar Habakkuk published American and British Technology in the Nineteenth Century: The Search for Labour-Saving Inventions, a truly comparative study. It was 1962, when, as Paul David writes, “economic historians’ interests in Anglo-American technological divergences were suddenly raised from a quiet simmer to a furious boil by the publication of … Habakkuk’s now celebrated book on the subject.” Habakkuk expanded on an idea that the apparent labor-saving bias of American manufacturing techniques was due to land so abundant that American workers were paid (relative to other factors) much more than what their British counterparts received, but he did not resolve whether the bias was due to more machines per worker, better machines, or more inventiveness.

One strand of the debate over what Peter Temin called Habakkuk’s “labor-scarcity paradox” left to one side the question of “better machines.” It fell to Nathan Rosenberg and Paul David to explore the distinctive technological trajectories of different economies. Rosenberg pointed to the emergence of “technologically convergent” production processes and to the importance of very low relative materials costs in American manufacturing. Paul David reviewed the debate, beginning to formulate a theoretical approach to explain sources of technical change (and divergence). He argued that an economy’s trajectory of technological development is conditioned, perhaps only initially, by relative factor prices, but then by opportunities for further progress based on localized learning from, or constrained by, existing techniques and their histories. David developed the concept of “path dependence,” which is “a dynamic process whose evolution is governed by its own history.”

The first systematic cliometric debate involving European economic history was over an alleged British technological and economic failure in the late nineteenth century. The slower growth of income and exports, the loss of markets even in the Empire, and an “invasion” of foreign manufactures (many American) alarmed businessmen and policymakers alike and led to opposition to a half-century of British “Free Trade.” Who was to blame for loss of competitiveness? Although some scholars attributed Britain’s “climacteric” to the maturation of the technologies underpinning her success during the Industrial Revolution, others attributed it to “entrepreneurial failure” and cited the inability or refusal of British business leaders to adopt the best available technologies. Cliometricians argued, by and large, that British businessmen made their investment and production decisions in a sensible, economically rational fashion, given the constraints they faced; they had made the best of a bad situation. Subsequent research has demonstrated the problem to be more complex, and it is yet to be resolved.

Many results of the cliometrics revolution come from the application of theory and measurement in the service of history; a converse case comes from the macro economists. Monetarists, in particular, have placed economic history in the service of theory, prominently in analyzing the Great Depression of the 1930s. In 1963, Milton Friedman and Anna Schwartz, in A Monetary History of the United States, 1867-1960, opened a discussion that has led to widespread, but not universal, acceptance among economists of a sophisticated version of the “quantity theory of money.” Their detailed examination of several episodes in American monetary development under varying institutional regimes allowed them to use a set of “natural experiments” to assess the economic impact of exogenous changes in the stock of money. The Friedman-Schwartz enterprise sought support for the general proposition that money is not simply a veil over real transactions – that money does matter. Their demonstration of that point for the Great Depression initiated an entire scholarly literature involving not only economic historians but also monetary and macro economists. Peter Temin was among the first of the economic historians to question their argument, in Did Monetary Forces Cause the Great Depression? (1976). His answer was essentially “No,” stressing declines in consumer spending and in investment in the late 1920s as initiating factors and discounting money stock reductions for the continued downturn. In a later book, Lessons from the Great Depression (1989), Temin in effect recanted his earlier position, impelled by a good deal of further research, especially on international finance. The present consensus is that what Friedman and Schwartz call “The Great Contraction, 1929-1933” may have been initiated by real factors in the late 1920s, but it was faulty public policy and adherence to the Gold Standard that played major roles in turning an economic downturn into “The Great Depression.”

A broad new approach to economic change over time has emerged from the mind of Douglass North. Confronted in the later 1960s with European economic development in its variety and antiquity, North became dissatisfied with the limited modes of analysis that he had applied fruitfully to the American case and concluded that “we couldn’t make sense out of European economic history without explicitly modeling institutions, property rights, and government.” For that matter, making sense of a wider view of American economic history was similarly difficult, as exemplified in the Lance Davis and North venture, Institutional Change and American Economic Growth (1971). The core of North’s model, conceptual rather than formal, is that, when changes in underlying circumstances alter the cost-benefit calculus of existing arrangements, new institutions will arise if there is a net benefit to be realized. Although their approach arose from dissatisfaction with the static nature of economic theory in the 1960s, North and his colleagues nonetheless followed what most other economists would do in arguing that optimal institutional forms will arise dynamically from an essentially profit-maximizing response to changes in incentives. As Davis and North were quick to admit, their effort was “a first (and very primitive) attempt” at formulating a theory of institutional change and applying that theory to American institutional development. North recognized the limitations of his early work on institutional change and has endeavored to develop a more subtle and articulated approach. In Understanding the Process of Economic Change (2005), North stresses again that modeling institutional change is less than straightforward, and he continues to examine the persistence of “institutions that provided incentives for stagnation and decline.”

Retrospect and Prospect

In the 1960s, when the first cliometricians began to group themselves into a distinct intellectual and social movement, buoyed by their revisionist achievements, they (at least many of them) thought they could use their scientific approach to re-write history. This hope may not have been a vain one, but it is yet to be realized. The best efforts of cliometricians have merged with those in other traditions to develop a rather different understanding of the economic past from views maintained half a century ago.

As economic history has evolved, so have the environs economic historians inhabit. In the Anglophone world, economic history – and cliometrics within it – burgeoned with the growth of higher education, but it has recently suffered the effects of retrenchment in that sector. Elsewhere, a new multi-lingual generation of enthusiastic economic historians and historical economists has arisen, with English as the language of international discourse. Both history and economics have been transformed by dissatisfaction with old verities and values, by the adoption of new methods and points of view, and by the posing of new or revived questions. Economic history has been both a beneficiary of and a contributor to such changes.

Although this entry focuses on the development of historical economics in the United States and the United Kingdom, we note that the cliometric approach has diffused well beyond their boundaries. In France the economist’s quantitative approach was fostered when Kuznets’s historical national accounts project recruited scholars in the 1950s to amass and organize the available data on agriculture, output and population in a new histoire quantitative. Still, that movement was overshadowed by the Annales school, whose histoire totale involved much data collection but limited economic analysis. Cliometric economic history of France was first written by economic historians from (or trained in) North America or Britain; work produced in the cliometric mode by scholars trained in France did not arrive in force until the mid-1980s, the Gallic cliometrics revolution occurring gradually for “peculiarly French” institutional and ideological reasons. In Germany similar institutional barriers were partially breached in the 1960s with the arrival of a “turnkey” cliometrics operation in the form of an American-trained American scholar, Richard Tilly, who went from Wisconsin to Münster. Tilly was joined later by a few central Europeans who received American degrees, and all have since taught younger German cliometricians. Leading cliometric scholars from Italy, Spain and Portugal likewise received their post-graduate educations in Britain or America. The foremost Japanese cliometrician, Yasukichi Yasuba, received his Ph.D. from Johns Hopkins, supervised by Simon Kuznets.

If cliometrics in and of continental Europe could trace its roots to North America and Britain, by the 1980s it had developed indigenous strength and identity. At the Tenth International Economic History Congress in Leuven, Belgium (1990), a new association of analytical economic historians was founded. Rejecting the use of “cliometrics” as a descriptor, the participants endorsed the nascent European Historical Economics Society. Subsequently, national associations and seminars have grown up under the umbrella of the EHES – for example, French historical economists have the Association Française de Cliométrie and a new international journal, Cliometrica, while the Portuguese and Spaniards have sponsored a series of “Iberometrics” conferences.

Cliometrics has transformed itself over the past half-century, forging important links with other disciplines, continuing to broaden its compass, and interpreting “new” phenomena. Cliometricians are showing, for example, that recent “globalization” has origins and manifestations going back half a millennium and, given the recent experience of the formerly Socialist “transitional” economies, that the deep historical roots of institutions, organizations, values and behavior in the developed economies cannot be duplicated by following simple formulae. Despite the presentism of contemporary society, economic history will continue to address essential questions of origins and consequences, and it seems likely that cliometricians will complement and sometimes lead their colleagues in providing the answers. Cliometrics is a well-established field of study, and its practitioners continue to increase our understanding of how economies evolve.

Source Note: The bulk of this article is a condensed version of the introduction to Lyons, Cain, and Williamson, eds., Reflections on the Cliometrics Revolution: Conversations with Economic Historians (2008), copyright (c) The Cliometric Society, Inc., which receives the royalties; reproduced by permission. Readers should consult that book for a more complete presentation, notes, and a full bibliography.

Further Reading

Coats, A. W. “The Historical Context of the ‘New’ Economic History.” Journal of European Economic History 9, no. 1 (1980): 185-207.

“Cliometrics after 40 Years.” American Economic Review: Papers and Proceedings 87, no. 2 (1997): 396-414. [Commentary by Claudia Goldin, Avner Greif, James J. Heckman, John R. Meyer, and Douglass C. North.]

Crafts, N. F. R. “Cliometrics, 1971-1986: A Survey.” Journal of Applied Econometrics 2, no. 3 (1987): 171-92.

Davis, Lance E., Jonathan R. T. Hughes and Duncan McDougall. American Economic History. Homewood, IL: Irwin, 1961. [The first textbook of U.S. economic history to make systematic use of economic theory to organize the exposition. Second edition, 1965; third edition, 1969.]

Davis, Lance E., Jonathan R. T. Hughes and Stanley Reiter. “Aspects of Quantitative Research in Economic History.” Journal of Economic History 20, no. 4 (1960): 539-47. [In which “cliometrics” first appeared in print.]

Drukker, J. W. The Revolution That Bit Its Own Tail: How Economic History Has Changed Our Ideas about Economic Growth. Amsterdam: Aksant, 2006.

Engerman, Stanley L. “Cliometrics.” In The Social Science Encyclopedia, second edition, edited by Adam Kuper and Jessica Kuper, 96-98. New York: Routledge, 1996.

Field, Alexander J. “The Future of Economic History.” In The Future of Economic History, edited by Alexander J. Field, 1-41. Boston: Kluwer-Nijhoff, 1987.

Fishlow, Albert, and Robert W. Fogel. “Quantitative Economic History: An Interim Evaluation. Past Trends and Present Tendencies.” Journal of Economic History 31, no. 1 (1971): 15-42.

Floud, Roderick. “Cliometrics.” In The New Palgrave: A Dictionary of Economics, edited by John Eatwell, Murray Milgate and Peter Newman, vol. 1, 452-54. London: Macmillan, 1987.

Goldin, Claudia. “Cliometrics and the Nobel.” Journal of Economic Perspectives 9, no. 2 (1995): 191-208.

Grantham, George. “The French Cliometric Revolution: A Survey of Cliometric Contributions to French Economic History.” European Review of Economic History 1, no. 3 (1997): 353-405.

Lamoreaux, Naomi R. “Economic History and the Cliometric Revolution.” In Imagined Histories: American Historians Interpret the Past, edited by Anthony Molho and Gordon S. Wood, 59-84. Princeton: Princeton University Press, 1998.

Lyons, John S., Louis P. Cain, and Samuel H. Williamson, eds. Reflections on the Cliometrics Revolution: Conversations with Economic Historians. New York: Routledge, 2008.

McCloskey, Donald [Deirdre] N. Econometric History. London: Macmillan, 1987.

Parker, William, editor. Trends in the American Economy in the Nineteenth Century. Princeton, N.J.: Princeton University Press, 1960. [Volume 24 in Studies in Income and Wealth, in which many of the papers presented at the 1957 Williamstown conference appear.]

Tilly, Richard. “German Economic History and Cliometrics: A Selective Survey of Recent Tendencies.” European Review of Economic History 5, no. 2 (2001): 151-87.

Whaples, Robert. “A Quantitative History of the Journal of Economic History and the Cliometric Revolution.” Journal of Economic History 51, no. 2 (1991): 289-301.

Williamson, Samuel H. “The History of Cliometrics.” In The Vital One: Essays in Honor of Jonathan R. T. Hughes, edited by Joel Mokyr, 15-31. Greenwich, Conn.: JAI Press, 1991. [Research in Economic History, Supplement 6.]

Williamson, Samuel H., and Robert Whaples. “Cliometrics.” In The Oxford Encyclopedia of Economic History, vol. 1, edited by Joel Mokyr, 446-47. Oxford: Oxford University Press, 2003.

Wright, Gavin. “Economic History, Quantitative: United States.” In International Encyclopedia of the Social and Behavioral Sciences, edited by Neil J. Smelser and Paul B. Baltes, 4108-14. Amsterdam: Elsevier, 2001.

Citation: Lyons, Cain and Williamson. “Cliometrics”. EH.Net Encyclopedia, edited by Robert Whaples. August 27, 2009. URL http://eh.net/encyclopedia/cliometrics/

The Economics of the Civil War

Roger L. Ransom, University of California, Riverside

The Civil War has been something of an enigma for scholars studying American history. During the first half of the twentieth century, historians viewed the war as a major turning point in American economic history. Charles Beard labeled it a “Second American Revolution,” claiming that “at bottom the so-called Civil War ... was a social war, ending in the unquestioned establishment of a new power in the government, making vast changes ... in the course of industrial development, and in the constitution inherited from the Fathers” (Beard and Beard 1927: 53). By the time of the Second World War, Louis Hacker could sum up Beard’s position by simply stating that the war’s “striking achievement was the triumph of industrial capitalism” (Hacker 1940: 373). The “Beard-Hacker Thesis” had become the most widely accepted interpretation of the economic impact of the Civil War. Harold Faulkner devoted two chapters to a discussion of the causes and consequences of the war in his 1943 textbook American Economic History (then in its fifth edition), claiming that “its effects upon our industrial, financial, and commercial history were profound” (1943: 340).

In the years after World War II, a new group of economic historians — many of them trained in economics departments — focused their energies on the explanation of economic growth and development in the United States. As they looked for the keys to American growth in the nineteenth century, these economic historians questioned whether the Civil War — with its enormous destruction and disruption of society — could have been a stimulus to industrialization. In his 1955 textbook on American economic history, Ross Robertson mirrored a new view of the Civil War and economic growth when he argued that “persistent, fundamental forces were at work to forge the economic system and not even the catastrophe of internecine strife could greatly affect the outcome” (1955: 249). “Except for those with a particular interest in the economics of war,” claimed Robertson, “the four year period of conflict [1861-65] has had little attraction for economic historians” (1955: 247). Over the next two decades, this became the dominant view of the Civil War’s role in the industrialization of the United States.

Historical research has a way of returning to the same problems over and over. The efforts to explain regional patterns of economic growth and the timing of the United States’ “take-off” into industrialization, together with extensive research into the “economics” of the slave system of the South and the impact of emancipation, brought economic historians back to questions dealing with the Civil War. By the 1990s a new generation of economic history textbooks once again examined the “economics” of the Civil War (Atack and Passell 1994; Hughes and Cain 1998; Walton and Rockoff 1998). This reconsideration of the Civil War by economic historians can be loosely grouped into four broad issues: the “economic” causes of the war; the “costs” of the war; the problem of financing the war; and a re-examination of the Beard-Hacker thesis that the war was a turning point in American economic history.

Economic Causes of the War

No one seriously doubts that the enormous economic stake the South had in its slave labor force was a major factor in the sectional disputes that erupted in the middle of the nineteenth century. Figure 1 plots the total value of all slaves in the United States from 1805 to 1860. In 1805 there were just over one million slaves worth about $300 million; fifty-five years later there were four million slaves worth close to $3 billion. In the 11 states that eventually formed the Confederacy, four out of ten people were slaves in 1860, and these people accounted for more than half the agricultural labor in those states. In the cotton regions the importance of slave labor was even greater. The value of capital invested in slaves roughly equaled the total value of all farmland and farm buildings in the South. Though the value of slaves fluctuated from year to year, there was no prolonged period during which the value of the slaves owned in the United States did not increase markedly. Looking at Figure 1, it is hardly surprising that Southern slaveowners in 1860 were optimistic about the economic future of their region. They were, after all, in the midst of an unparalleled rise in the value of their slave assets.

A major finding of the research into the economic dynamics of the slave system was to demonstrate that the rise in the value of slaves was not based upon unfounded speculation. Slave labor was the foundation of a prosperous economic system in the South. To illustrate just how important slaves were to that prosperity, Gerald Gunderson (1974) estimated what fraction of the income of a white person living in the South of 1860 was derived from the earnings of slaves. Table 1 presents Gunderson’s estimates. In the seven states where most of the cotton was grown, almost one-half the population were slaves, and they accounted for 31 percent of white people’s income; for all 11 Confederate states, slaves represented 38 percent of the population and contributed 26 percent of whites’ income. Small wonder that Southerners — even those who did not own slaves — viewed any attempt by the federal government to limit the rights of slaveowners over their property as a potentially catastrophic threat to their entire economic system. By itself, the South’s economic investment in slavery could easily explain the willingness of Southerners to risk war when faced with what they viewed as a serious threat to their “peculiar institution” after the electoral victories of the Republican Party and President Abraham Lincoln in the fall of 1860.

Table 1

The Fraction of Whites’ Incomes from Slavery

State Percent of the Population That Were Slaves Per Capita Earnings of Free Whites (in dollars) Slave Earnings per Free White (in dollars) Fraction of Earnings Due to Slavery (percent)
Alabama 45 120 50 41.7
South Carolina 57 159 57 35.8
Florida 44 143 48 33.6
Georgia 44 136 40 29.4
Mississippi 55 253 74 29.2
Louisiana 47 229 54 23.6
Texas 30 134 26 19.4
Seven Cotton States 46 163 50 30.6
North Carolina 33 108 21 19.4
Tennessee 25 93 17 18.3
Arkansas 26 121 21 17.4
Virginia 32 121 21 17.4
All 11 States 38 135 35 25.9
Source: Computed from data in Gerald Gunderson (1974: 922, Table 1)
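
The last column of Table 1 is simple arithmetic on the two earnings columns: slave earnings per free white divided by the per capita earnings of free whites. A minimal sketch in Python reproducing a few rows (the dollar figures are Table 1’s; only the division is added here):

```python
# Reproduce the last column of Table 1: the share of free whites' earnings
# derived from slave labor, computed as slave earnings per free white
# divided by per capita earnings of free whites. Figures are from Table 1.
table_1 = {
    # state: (per capita earnings of free whites, slave earnings per free white)
    "Alabama": (120, 50),
    "South Carolina": (159, 57),
    "Mississippi": (253, 74),
    "All 11 States": (135, 35),
}

for state, (white_earnings, slave_earnings) in table_1.items():
    share = 100 * slave_earnings / white_earnings
    print(f"{state}: {share:.1f} percent of whites' income from slavery")
# Prints 41.7, 35.8, 29.2 and 25.9 percent, matching Table 1's last column.
```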

The Northern states also had a huge economic stake in slavery and the cotton trade. The first half of the nineteenth century witnessed an enormous increase in the production of short-staple cotton in the South, and most of that cotton was exported to Great Britain and Europe. Figure 2 charts the growth of cotton exports from 1815 to 1860. By the mid-1830s, cotton shipments accounted for more than half the value of all exports from the United States. Note that there is a marked similarity between the trends in the export of cotton and the rising value of the slave population depicted in Figure 1. There could be little doubt that the prosperity of the slave economy rested on its ability to produce cotton more efficiently than any other region of the world.

The income generated by this “export sector” was a major impetus for growth not only in the South, but in the rest of the economy as well. Douglass North, in his pioneering study of the antebellum U.S. economy, examined the flows of trade within the United States to demonstrate how all regions benefited from the South’s concentration on cotton production (North 1961). Northern merchants gained from Southern demands for shipping cotton to markets abroad, and from the demand by Southerners for Northern and imported consumption goods. The low price of raw cotton produced by slave labor in the American South enabled textile manufacturers — both in the United States and in Britain — to expand production and provide benefits to consumers through a declining cost of textile products. As manufacturing of all kinds expanded at home and abroad, the need for food in cities created markets for foodstuffs that could be produced in the areas north of the Ohio River. And the primary force at work was the economic stimulus from the export of Southern cotton. When James Hammond exclaimed in 1858 that “Cotton is King!” no one rose to dispute the point.

With so much to lose on both sides of the Mason-Dixon Line, economic logic suggests that a peaceful solution to the slave issue would have made far more sense than a bloody war. Yet no solution emerged. One “economic” solution to the slave problem would have been for those who objected to slavery to “buy out” the economic interest of Southern slaveholders. Under such a scheme, the federal government would purchase slaves. A major problem here was that the costs of such a scheme would have been enormous. Claudia Goldin estimates that the cost of having the government buy all the slaves in the United States in 1860 would have been about $2.7 billion (1973: 85, Table 1). Obviously, such a large sum could not be paid all at once. Yet even if the payments were spread over 25 years, the annual costs of such a scheme would involve a tripling of federal government outlays (Ransom and Sutch 1990: 39-42)! The costs could be reduced substantially if, instead of freeing all the slaves at once, children were left in bondage until the age of 18 or 21 (Goldin 1973: 85). Yet there would remain the problem of how even those reduced costs could be distributed among various groups in the population. The cost of any “compensated” emancipation scheme was so high that even those who wished to eliminate slavery were unwilling to pay for a “buyout” of those who owned slaves.
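
The claim that spreading the payments over 25 years would have tripled federal outlays is, at bottom, annuity arithmetic. The sketch below is illustrative only: the $2.7 billion price is Goldin’s, but the interest rates and the roughly $63 million figure for annual federal spending around 1860 are assumptions introduced here, not numbers taken from Goldin or from Ransom and Sutch.

```python
# Annuity arithmetic for a hypothetical compensated-emancipation scheme:
# amortize the $2.7 billion purchase price (Goldin 1973) over 25 years and
# compare the level annual payment with antebellum federal spending. The
# interest rates and the ~$63 million budget figure are assumptions made
# here for illustration.

PURCHASE_PRICE = 2.7e9   # Goldin's estimate of the 1860 value of all slaves
YEARS = 25
FEDERAL_OUTLAYS = 63e6   # assumed annual federal spending circa 1860

def level_payment(principal, rate, years):
    """Level annual payment that amortizes principal over years at rate."""
    if rate == 0:
        return principal / years
    return principal * rate / (1 - (1 + rate) ** -years)

for rate in (0.00, 0.03, 0.06):
    payment = level_payment(PURCHASE_PRICE, rate, YEARS)
    print(f"at {rate:.0%} interest: ${payment / 1e6:,.0f} million per year, "
          f"{payment / FEDERAL_OUTLAYS:.1f}x the assumed federal budget")
```

Even the interest-free installment of about $108 million a year would have exceeded the entire assumed federal budget, which conveys the sense in which total outlays would have roughly tripled.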

The high cost of emancipation was not the only way in which economic forces produced strong regional tensions in the United States before 1860. The regional economic specialization, previously noted as an important cause of the economic expansion of the antebellum period, also generated very strong regional divisions on economic issues. Recent research by economic, social and political historians has reopened some of the arguments first put forward by Beard and Hacker that economic changes in the Northern states were a major factor leading to the political collapse of the 1850s. Beard and Hacker focused on the narrow economic aspects of these changes, interpreting them as the efforts of an emerging class of industrial capitalists to gain control of economic policy. More recently, historians have taken a broader view of the situation, arguing that the sectional splits on these economic issues reflected sweeping economic and social changes in the Northern and Western states that were not experienced by people in the South. The term most historians have used to describe these changes is a “market revolution.”

Map 1: Counties with an urban population in 1860. Source: United States Population Census, 1860.

Perhaps the best single indicator of how pervasive the “market revolution” was in the Northern and Western states is the rise of urban places in areas where markets had become important. Map 1 plots the 292 counties that reported an “urban population” in 1860. (The 1860 Census Office defined an “urban place” as a town or city having a population of at least 2,500 people.) Table 2 presents some additional statistics on urbanization by region. In 1860, 6.1 million people — roughly one out of five persons in the United States — lived in an urban county. A glance at either the map or Table 2 reveals the enormous difference in urban development in the South compared to the Northern states. More than two-thirds of all urban counties were in the Northeast and West; those two regions accounted for nearly 80 percent of the urban population of the country. By contrast, less than 7 percent of people in the 11 Southern states of Table 2 lived in urban counties.

Table 2

Urban Population of the United States in 1860a

Region Counties with Urban Populations Total Urban Population in the Region Percent of Region’s Population Living in Urban Counties Region’s Urban Population as Percent of U.S. Urban Population
Northeastb 103 3,787,337 35.75 61.66
Westc 108 1,059,755 13.45 17.25
Borderd 23 578,669 18.45 9.42
Southe 51 621,757 6.83 10.12
Far Westf 7 99,145 15.19 1.54
Totalg 292 6,141,914 19.77 100.00
Notes:

a Urban population is people living in a city or town of at least 2,500

b Includes: Connecticut, Maine, Massachusetts, New Hampshire, New Jersey, New York, Pennsylvania, Rhode Island, and Vermont.

c Includes: Illinois, Indiana, Iowa, Kansas, Minnesota, Nebraska, Ohio, and Wisconsin.

d Includes: Delaware, Kentucky, Maryland, and Missouri.

e Includes: Alabama, Arkansas, Florida, Georgia, Louisiana, Mississippi, North Carolina, South Carolina, Tennessee, Texas, and Virginia.

f Includes: Colorado, California, Dakotas, Nevada, New Mexico, Oregon, Utah and Washington

g Includes the District of Columbia

Source: U.S. Census of Population, 1860.

The region along the north Atlantic coast, with its extensive development of commerce and industry, had the largest concentration of urban population in the United States; roughly one-third of the population of the nine states defined as the Northeast in Table 2 lived in urban counties. In the South, the picture was very different. Cotton cultivation with slave labor did not require local financial services or nearby manufacturing activities that might generate urban activities. The 11 states of the Confederacy had only 51 urban counties, and they were widely scattered throughout the region. Western agriculture, with its emphasis on foodstuffs, encouraged urban activity near the source of production. These centers were not necessarily large; indeed, the West had roughly the same number of large and mid-sized cities as the South. However, there were far more small towns scattered throughout the settled regions of Ohio, Indiana, Illinois, Wisconsin and Michigan than in the Southern landscape.

Economic policy had played a prominent role in American politics since the birth of the republic in 1790. With the formation of the Whig Party in the 1830s, a number of key economic issues emerged at the national level. To illustrate the extent to which the rise of urban centers and increased market activity in the North led to a growing crisis in economic policy, historians have re-examined four specific areas of legislative action singled out by Beard and Hacker as evidence of a Congressional stalemate in 1860 (Egnal 2001; Ransom and Sutch 2001; 1989; Bensel 1990; McPherson 1988).

Land Policy

1. Land Policy. Settlement of western lands had always been a major bone of contention for slave and free-labor farms. The manner in which the federal government distributed land to people could have a major impact on the nature of farming in a region. Northerners wanted to encourage the settlement of farms which would depend primarily on family labor by offering cheap land in small parcels. Southerners feared that such a policy would make it more difficult to keep areas open for settlement by slaveholders who wanted to establish large plantations. This all came to a head with the homestead bill of 1860, which would have provided 160 acres of free land for anyone who wanted to settle and farm it. Northern and western congressmen strongly favored the bill in the House of Representatives, but the measure received only a single vote from slave states’ representatives. The bill passed, but President Buchanan vetoed it (Bensel 1990: 69-72).

Transportation Improvements

2. Transportation Improvements. Following the opening of the Erie Canal in 1825, there was growing support in the North and the Northwest for government support of improvements in transportation facilities — what were termed in those days “internal improvements.” The need for government-sponsored improvements was particularly urgent in the Great Lakes region (Egnal 2001: 45-50). The appearance of the railroad in the 1840s gave added support to those advocating government subsidies to promote transportation. Southerners required far fewer internal improvements than people in the Northwest, and they tended to view federal subsidies for such projects as part of a “deal” between western and eastern interests that held no obvious gains for the South. The bill that best illustrates the regional disputes on transportation was the Pacific Railway Bill of 1860, which proposed a transcontinental railway link to the West Coast. The bill failed to pass the House, receiving no votes from congressmen representing districts of the South where there was a significant slave population (Bensel 1990: 70-71).

The Tariff

3. The Tariff. Southerners, with their emphasis on staple agriculture and their need to buy goods produced outside the South, strongly objected to the imposition of duties on imported goods. Manufacturers in the Northeast, on the other hand, supported a high tariff as protection against cheap British imports. People in the West were caught in the middle of this controversy. Like the agricultural South, they disliked the idea of a high “protective” tariff that raised the cost of imports. However, the tariff was also the main source of federal revenue at this time, and Westerners needed government funds for the transportation improvements they supported in Congress. As a result, the compromise reached by western and eastern interests in the tariff debates of 1857 was to support a “moderate” tariff, with duties set high enough to generate revenue and offer some protection to Northern manufacturers while not putting too much of a burden on Western and Eastern consumers. Southerners complained that even this level of protection was excessive and that it was one more example of the willingness of the West and the North to make economic bargains at the expense of the South (Ransom and Sutch 2001; Egnal 2001: 50-52).

Banking

4. Banking. The federal government’s role in the chartering and regulation of banks was a volatile political issue throughout the antebellum period. In 1832 President Andrew Jackson created a major furor when he vetoed a bill to recharter the Second Bank of the United States. Jackson’s veto ushered in a period that came to be termed “free banking” in the United States, when the chartering and regulation of banks was left entirely in the hands of state governments. Banks were a relatively new economic institution at this point in time, and opinions were sharply divided over the degree to which the federal government should regulate banks. In the Northeast, where over 60 percent of all banks were located, there was strong support by 1860 for the creation of a system of banks that would be chartered and regulated by the federal government. But in the South, which had little need for local banking services, there was little enthusiasm for such a proposal. Here again, the western states were caught in the middle. While they worried that a system of “national” banks would be controlled by the already dominant eastern banking establishment, western farmers found themselves in need of local banking services for financing their crops. By 1860 many were inclined to support the Republican proposal for a national banking system; however, Southern opposition killed the National Bank Bill in 1860 (Ransom and Sutch 2001; Bensel 1990).

The growth of an urbanized market society in the North produced more than just a legislative program of political economy that Southerners strongly resisted. Several historians have taken a much broader view of the market revolution and industrialization in the North. They see the economic conflict of North and South, in the words of Richard Brown, as “the conflict of a modernizing society” (1976: 161). A leading historian of the Civil War, James McPherson, argues that Southerners were correct when they claimed that the revolutionary program sweeping through the North threatened their way of life (1983; 1988). James Huston (1999) carries the argument one step further by arguing that Southerners were correct in their fears that the triumph of this coalition would eventually lead to an assault by Northern politicians on slave property rights.

All this provided ample argument for those clamoring for the South to leave the Union in 1861. But why did the North fight a war rather than simply letting the unhappy Southerners go in peace? It seems unlikely that anyone will ever be able to show that the “gains” from the war outweighed the “costs” in economic terms. Still, war is always a gamble, and with neither the costs nor the benefits easily calculated before the fact, leaders are often tempted to take the risk. The evidence above certainly lent strong support to those arguing that it made sense for the South to fight if a belligerent North threatened the institution of slavery. An economic case for the North is more problematic. Most writers argue that the decision for war on Lincoln’s part was not based primarily on economic grounds. However, Gerald Gunderson points out that if, as many historians argue, Northern Republicans were intent on controlling the spread of slavery, then a war to keep the South in the Union might have made sense. Gunderson compares the “costs” of the war (which we discuss below) with the cost of “compensated” emancipation and notes that the two are roughly the same order of magnitude — 2.5 to 3.7 billion dollars (1974: 940-42). Thus, going to war made as much “economic sense” as buying out the slaveholders. Gunderson makes the further point, which has been echoed by other writers, that the only way the North could ensure that its program to contain slavery would be “enforced” was to keep the South in the Union. Allowing the South to leave the Union would mean that the North could no longer control the expansion of slavery anywhere in the Western Hemisphere (Ransom 1989; Ransom and Sutch 2001; Weingast 1998; Weingast 1995; Wolfson 1995). What is novel about these interpretations is that they argue it was the economic pressures of “modernization” in the North that made Northern policy toward secession in 1861 far more aggressive than the traditional story of a North reluctantly forced into military action by the South’s attack on Fort Sumter would suggest.

That is not to say that either side wanted war — for economic or any other reason. Abraham Lincoln probably summarized the situation as well as anyone when he observed in his second inaugural address that: “Both parties deprecated war, but one of them would make war rather than let the nation survive, and the other would accept war rather than let it perish, and the war came.”

The “Costs” of the War

The Civil War has often been called the first “modern” war. In part this reflects the enormous effort expended by both sides to conduct the war. What was the cost of this conflict? The most comprehensive effort to answer this question is the work of Claudia Goldin and Frank Lewis (1978; 1975). The Goldin and Lewis estimates of the costs of the war are presented in Table 3. The costs are divided into two groups: the direct costs which include the expenditures of state and local governments plus the loss from destruction of property and the loss of human capital from the casualties; and what Goldin and Lewis term the indirect costs of the war which include the subsequent implications of the war after 1865. Goldin and Lewis estimate that the combined outlays of both governments — in 1860 dollars — totaled $3.3 billion. To this they add $1.8 billion to account for the discounted economic value of casualties in the war, and they add $1.5 billion to account for the destruction of the war in the South. This gives a total of $6.6 billion in direct costs — with each region incurring roughly half the total.

Table 3

The Costs of the Civil War

(Millions of 1860 Dollars)

Item South North Total
Direct Costs:
Government Expenditures 1,032 2,302 3,334
Physical Destruction 1,487 – 1,487
Loss of Human Capital 767 1,064 1,831
Total Direct Costs of the War 3,286 3,366 6,652
Per Capita (dollars) 376 148 212
Indirect Costs:
Total Decline in Consumption 6,190 1,149 7,339
Less: Effect of Emancipation 1,960 – 1,960
Less: Effect of Cotton Prices 1,670 – 1,670
Total Indirect Costs of the War 2,560 1,149 3,709
Per Capita (dollars) 293 51 118
Total Costs of the War 5,846 4,515 10,361
Per Capita (dollars) 670 199 330
Population in 1860 (millions) 8.73 27.71 31.43
Source: Ransom (1998: 51, Table 3-1); Goldin and Lewis (1975; 1978)

While these figures are only a very rough estimate of the actual costs, they provide an educated guess as to the order of magnitude of the economic effort required to wage the war, and it seems likely that if there is a bias, it is to understate the total. (Thus, for example, the estimated “economic” losses from casualties ignore the emotional cost of 625,000 deaths, and the estimates of property destruction were quite conservative.) Even so, the direct cost of the war as calculated by Goldin and Lewis was 1.5 times the total gross national product of the United States for 1860 — an enormous sum in comparison with any military effort by the United States up to that point. What stands out in addition to the enormity of the bill is the disparity in the burden these costs represented to the people of the North and the South. On a per capita basis, the cost to the Northern population was about $150 — roughly equal to one year’s income. The Southern burden was two and a half times that amount — $376 per man, woman and child.

Staggering though these numbers are, they represent only a fraction of the full costs of the war, which lingered long after the fighting had stopped. One way to measure the full “costs” and “benefits” of the war, Goldin and Lewis argue, is to estimate the value of the observed postwar stream of consumption in each region and compare that figure to the estimated hypothetical stream of consumption had there been no war (1975: 309-10). (All the figures for the costs in Table 3 have been adjusted to reflect their discounted value in 1860.) The Goldin and Lewis estimate for the discounted value of lost consumption for the South was $6.2 billion; for the North the estimate was $1.15 billion. Ingenious though this methodology is, it suffers from the serious drawback that consumption lost for any reason — not just the war — is included in the figure. Particularly for the South, not all the decline in output after 1860 could be directly attributed to the war; the growth in the demand for cotton that fueled the antebellum economy did not continue, and there was a dramatic change in the supply of labor due to emancipation. Consequently, Goldin and Lewis subsequently adjusted their estimate of lost consumption due to the war down to $2.56 billion for the South in order to exclude the effects of emancipation and the collapse of the cotton market. The magnitudes of the indirect effects are detailed in Table 3. After the adjustments, the estimated costs for the war totaled more than $10 billion. Allocating the costs to each region produces a per capita burden of $670 in the South and $199 in the North. What Table 3 does not show is the extent to which these expenses were spread out over a long period of time. In the North, consumption had regained its prewar level by 1873, however in the South consumption remained below its 1860 level to the end of the century. We shall return to this issue below.
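
Mechanically, the Goldin-Lewis indirect cost is a present-value calculation: each postwar year’s shortfall of observed consumption below its hypothetical no-war path, discounted back to 1860. The sketch below illustrates only the accounting; the 6 percent discount rate and the toy consumption series are placeholders, not the regional estimates Goldin and Lewis actually constructed.

```python
# Stylized version of the Goldin-Lewis indirect-cost accounting: discount
# each postwar year's shortfall of actual consumption below a hypothetical
# no-war path back to 1860. The discount rate and series are illustrative.

DISCOUNT_RATE = 0.06   # assumed for illustration
BASE_YEAR = 1860

def lost_consumption(actual, counterfactual, first_year):
    """Present value, in BASE_YEAR dollars, of the consumption shortfall."""
    loss = 0.0
    for t, (a, c) in enumerate(zip(actual, counterfactual)):
        years_out = first_year + t - BASE_YEAR
        loss += (c - a) / (1 + DISCOUNT_RATE) ** years_out
    return loss

# Toy regional series in millions of 1860 dollars, beginning in 1861
actual         = [900, 850, 820, 840, 880, 930]
counterfactual = [1000, 1030, 1060, 1090, 1120, 1150]
print(f"discounted lost consumption: "
      f"${lost_consumption(actual, counterfactual, 1861):,.0f} million")
```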

Financing the War

No war in American history had strained the nation’s economic resources as the Civil War did. Governments on both sides were forced to resort to borrowing on an unprecedented scale to meet the financial obligations of the war. With more developed markets and an industrial base that could ultimately produce the goods needed for the war, the Union was clearly in a better position to meet this challenge. The South, on the other hand, had always relied on either Northern or foreign capital markets for its financial needs, and it had virtually no manufacturing establishments to produce military supplies. From the outset, the Confederates relied heavily on funds borrowed outside the South to purchase supplies abroad.

Figure 3 shows the sources of revenue collected by the Union government during the war. In 1862 and 1863 the government covered less than 15 percent of its total expenditures through taxes. With the imposition of a higher tariff, excise taxes, and the introduction of the first income tax in American history, this situation improved somewhat, and by the war’s end 25 percent of the federal government’s revenues had been collected in taxes. But what of the other 75 percent? In 1862 Congress authorized the U.S. Treasury to issue currency notes that were not backed by gold. By the end of the war, the Treasury had printed more than $250 million worth of these “Greenbacks” and, together with the issue of gold-backed notes, the printing of money accounted for 18 percent of all government revenues. This still left a huge shortfall in revenue that was not covered by either taxes or the printing of money. The remaining revenues were obtained by borrowing funds from the public. Between 1861 and 1865 the debt obligation of the federal government increased from $65 million to $2.7 billion (including the increased issuance of notes by the Treasury). The financial markets of the North were strained by these demands, but they proved equal to the task. In all, Northerners bought almost $2 billion worth of treasury notes and absorbed $700 million of new currency. Consequently, the Northern economy was able to finance the war without a significant reduction in private consumption. While the increase in the national debt seemed enormous at the time, events were to prove that the economy was more than able to deal with it. Indeed, several economic historians have claimed that the creation and subsequent retirement of the Civil War debt ultimately proved to be a significant impetus to post-war growth (Williamson 1974; James 1984). Wartime finance also prompted a significant change in the banking system of the United States. In 1863 Congress finally passed legislation creating the National Banking System. The motive was not only to institute the program of banking reform pressed for many years by the Whigs and the Republicans; the newly-chartered federal banks were also required to purchase large blocs of federal bonds to hold as security against the issuance of their national bank notes.
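
The revenue shares just described imply a simple decomposition of Union war finance. The sketch below applies the end-of-war shares (roughly 25 percent taxes and 18 percent money creation, with borrowing as the residual) uniformly to the $2,302 million of Northern government spending reported in Table 3; treating the shares as constant over the whole war is a deliberate simplification for illustration.

```python
# Decompose Union war finance into its three channels using the shares
# reported above. Applying end-of-war shares uniformly to total wartime
# spending is a simplification made here for illustration.

UNION_SPENDING = 2_302   # millions of 1860 dollars (Table 3, North)

shares = {"taxes": 0.25, "money creation": 0.18}
shares["borrowing"] = 1.0 - sum(shares.values())   # residual: 57 percent

for channel, share in shares.items():
    print(f"{channel:>14}: {share:.0%} = ${UNION_SPENDING * share:,.0f} million")
```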

The efforts of the Confederate government to pay for its war effort were far more chaotic than in the North, and reliable expenditure and revenue data are not available. Figure 4 presents the best revenue estimates we have for the Richmond government from 1861 through November 1864 (Burdekin and Langdana 1993). Several features of Confederate finance immediately stand out in comparison to the Union effort. First is the failure of the Richmond government to finance its war expenditures through taxation. Over the course of the war, tax revenues accounted for only 11 percent of all revenues. Another contrast was the much higher fraction of revenues accounted for by the issuance of currency on the part of the Richmond government. Over a third of the Confederate government’s revenue came from the printing press. The remainder came in the form of bonds, many of which were sold abroad in either London or Amsterdam. The reliance on borrowed funds proved to be a growing problem for the Confederate treasury. By mid-1864 the cost of paying interest on outstanding government bonds absorbed more than half of all government expenditures. The difficulties of collecting taxes and floating new bond issues had become so severe that in the final year of the war the total revenues collected by the Confederate government actually declined.

The printing of money and borrowing on such a huge scale had a dramatic effect on the economic stability of the Confederacy. The best measure of this instability and eventual collapse can be seen in the behavior of prices. An index of consumer prices is plotted together with the stock of money from early 1861 to April 1865 in Figure 5. By the beginning of 1862 prices had already doubled; by the middle of 1863 they had increased by a factor of 13. Up to this point, the inflation could be largely attributed to the money placed in the hands of consumers by the huge deficits of the government. Prices and the stock of money had risen at roughly the same rate. This represented a classic case of what economists call demand-pull inflation: too much money chasing too few goods. From the middle of 1863 on, however, the behavior of prices no longer mirrored the money supply. Several economic historians have suggested that from this point prices reflected people’s confidence in the future of the Confederacy as a viable state (Burdekin and Langdana 1993; Weidenmier 2000). Figure 5 identifies three major military “turning points” between 1863 and 1865. In late 1863 and early 1864, following the Confederate defeats at Gettysburg and Vicksburg, prices rose very sharply despite a marked decrease in the growth of the money supply. When the Union offensives in Georgia and Virginia stalled in the summer of 1864, prices stabilized for a few months, only to resume their upward spiral after the fall of Atlanta in September 1864. By that time, of course, the Confederate cause was clearly doomed. By the end of the war, inflation had reached a point where the value of the Confederate currency was virtually zero. People had taken to engaging in barter or using Union dollars (if they could be found) to conduct their transactions. The collapse of the Confederate monetary system was a reflection of the overall collapse of the economy’s efforts to sustain the war effort.
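
The “demand-pull” reading is quantity-theory bookkeeping. From the exchange equation MV = PQ, implied velocity is V = PQ/M: so long as prices rise in step with the money stock, implied velocity is stable; once prices outrun money, implied velocity rises, the signature of a flight from the currency that historians attribute to collapsing confidence. The index numbers in the sketch below are invented for illustration and are not Confederate data.

```python
# Quantity-theory bookkeeping behind the "demand-pull" reading of Confederate
# inflation. From MV = PQ, implied velocity is V = P*Q / M: stable V means
# prices merely track the money stock; rising V signals a flight from the
# currency. Real output is held fixed, and the index numbers are invented.

OUTPUT_INDEX = 100   # hold real output constant for the illustration

def implied_velocity(price_index, money_index):
    return (price_index * OUTPUT_INDEX) / (money_index * 100)

episodes = [
    ("prices doubled, money roughly doubled",     200,  200),
    ("prices up 13x, money up about as much",    1300, 1300),
    ("prices surge despite slower money growth", 2800, 1500),
]
for label, prices, money in episodes:
    print(f"{label}: implied V = {implied_velocity(prices, money):.2f}")
# Stable V in the first two episodes, rising V in the third: the pattern
# read as evidence of collapsing confidence in the Confederacy.
```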

The Union also experienced inflation as a result of deficit finance during the war; the consumer price index rose from 100 at the outset of the war to 175 by the end of 1865. While this is nowhere near the degree of economic disruption caused by the increase in prices experienced by the Confederacy, a near doubling of prices did have an effect on how the burden of the war’s costs was distributed among various groups in each economy. Inflation is a tax, and it tends to fall on those who are least able to afford it. One group that tends to be vulnerable to a sudden rise in prices is wage earners. Table 4 presents data on prices and wages in the United States and the Confederacy. The series for wages has been adjusted to reflect the decline in purchasing power due to inflation (the arithmetic of this adjustment is sketched after the table). Not surprisingly, wage earners in the South saw the real value of their wages practically disappear by the end of the war. In the North the situation was not as severe, but wages certainly did not keep pace with prices; the real value of wages fell by about 20 percent. It is not obvious why this happened. The need for manpower in the army and the demand for war production should have created a labor shortage that would drive wages higher. While the economic situation of laborers deteriorated during the war, one must remember that wage earners in 1860 were still a relatively small share of the total labor force. Agriculture, not industry, was the largest economic sector in the North, and farmers fared much better in terms of their income during the war than did wage earners in the manufacturing sector (Ransom 1998: 255-64; Atack and Passell 1994: 368-70).

Table 4:

Indices of Prices and Real Wages During the Civil War

(1860=100)

Union Confederate
Year Prices Real Wages Prices Real Wages
1860 100 100 100 100
1861 101 100 121 86
1862 113 93 388 35
1863 139 84 1,452 19
1864 176 77 3,992 11
1865 175 82
Source: Union: (Atack and Passell 1994: 367, Table 13.5)

Confederate: (Lerner 1954)
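
The real-wage series in Table 4 is a deflation: the nominal wage index divided by the price index, rescaled so that 1860 = 100. Running the arithmetic in reverse recovers the implied nominal wage indices from the published figures; everything below comes from Table 4 except the division itself.

```python
# Table 4's real wages are nominal wages deflated by the price index:
# real = 100 * nominal / prices. Reversing the arithmetic recovers the
# implied nominal wage index. All index numbers are from Table 4 (1860 = 100).

table_4 = [
    # year, Union prices, Union real wages, Confed. prices, Confed. real wages
    (1861, 101, 100,  121, 86),
    (1862, 113,  93,  388, 35),
    (1863, 139,  84, 1452, 19),
    (1864, 176,  77, 3992, 11),
]

for year, u_prices, u_real, c_prices, c_real in table_4:
    print(f"{year}: implied nominal wages: Union {u_real * u_prices / 100:.0f}, "
          f"Confederacy {c_real * c_prices / 100:.0f}")
# Confederate money wages more than quadrupled by 1864, yet real wages
# collapsed because prices rose roughly forty-fold.
```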

Overall, it is clear that the North did a far better job of mobilizing the economic resources needed to carry on the war. The greater sophistication and size of Northern markets meant that the Union government could call upon institutional arrangements that allowed for a more efficient system of redirecting resources into wartime production than was possible in the South. The Confederates depended far more upon outside resources and direct intervention in the production of goods and services for their war effort, and in the end the domestic economy could not bear up under the strain. It is worth noting in this regard that the Union blockade — which by 1863 had largely closed down not only the external trade of the South with Europe, but also the coastal trade that had been an important element in the antebellum transportation system — may have played a more crucial part in bringing about the eventual collapse of the Southern war effort than is often recognized (Ransom 2002).

The Civil War as a Watershed in American Economic History

It is easy to see why contemporaries believed that the Civil War was a watershed event in American history. At a cost of billions of dollars and 625,000 men killed, slavery had been abolished and the Union had been preserved. Economic historians viewing the event fifty years later could note that the half-century following the Civil War had been a period of extraordinary growth and expansion of the American economy. But was the war really the “Second American Revolution” that Beard (1927) and Louis Hacker (1940) claimed? That was certainly the prevailing view as late as 1961, when Thomas Cochran published an article titled “Did the Civil War Retard Industrialization?” Cochran pointed out that, until the 1950s, there was no quantitative evidence to prove or disprove the Beard-Hacker thesis. Recent quantitative research, he argued, showed that the war had actually slowed the rate of industrial growth. Stanley Engerman expanded Cochran’s argument by attacking the Beard-Hacker claim that political changes — particularly the passage in 1862 of the Republican program of political economy that had been bottled up in Congress by Southern opposition — were instrumental in accelerating economic growth (Engerman 1966). The major thrust of these arguments was that neither the war nor the legislation was necessary for industrialization — which was already well underway by 1860. “Aside from commercial banking,” noted one commentator, “the Civil War appears not to have started or created any new patterns of economic institutional change” (Gilchrist and Lewis 1965: 174). Had there been no war, these critics argued, the trajectory of economic growth that emerged after 1870 would have done so anyway.

Despite this criticism, the notion of a “second” American Revolution lives on. Clearly the Beards and Hacker were in error in their claim that industrial growth accelerated during the war. The Civil War, like most modern wars, involved a huge effort to mobilize resources to carry on the fight. This had the effect of making it appear that the economy was expanding due to the production of military goods. However, Beard and Hacker — and a good many other historians — mistook this increased wartime activity for a net increase in output, when in fact resources were shifted away from consumer products toward wartime production (Ransom 1989: Chapter 7). But what of the larger question of political change resulting from the war? Critics of Beard and Hacker claimed that the Republican program would eventually have been enacted even if there had been no war; hence the war was not a crucial turning point in economic development. The problem with this line of argument is that it completely misses the point of the Beard-Hacker argument. They would readily agree that in the absence of a war the Republican program of political economy would triumph — and that is why there was a war! Historians who argue that economic forces were an underlying cause of sectional conflicts go on to point out that war was probably the only way to settle those conflicts. In this view, the war was a watershed event in the economic development of the United States because the Union military victory ensured that the “market revolution” would not be stymied by the South’s attempt to break up the Union (Ransom 1999).

Whatever the effects of the war on industrial growth, economic historians agree that the war had a profound effect on the South. The destruction of slavery meant that the entire Southern economy had to be rebuilt. This turned out to be a monumental task; far larger than anyone at the time imagined. As noted above in the discussion of the indirect costs of the war, Southerners bore a disproportionate share of those costs and the burden persisted long after the war had ended. The failure of the postbellum Southern economy to recover has spawned a huge literature that goes well beyond the effects of the war.

Economic historians who have examined the immediate effects of the war have reached a few important conclusions. First, the idea that the South was physically destroyed by the fighting has been largely discarded. Most writers have accepted the argument of Ransom and Sutch (2001) that the major “damage” to the South from the war was the depreciation and neglect of property on farms as a significant portion of the male workforce went off to war for several years. Second was the impact of emancipation. Slaveholders lost their enormous investment in slaves as a result of emancipation. Planters were consequently strapped for capital in the years immediately after the war, and this affected their options with regard to labor contracts with the freedmen and in their dealings with capital markets to obtain credit for the planting season. The freedmen and their families responded to emancipation by withdrawing up to a third of their labor from the market. While this was a perfectly reasonable response, it had the effect of creating an apparent labor “shortage,” and it convinced white landlords that a free labor system could never work with the ex-slaves, further complicating an already unsettled labor market. In the longer run, as Gavin Wright (1986) put it, emancipation transformed the white landowners from “laborlords” to “landlords.” This was not a simple transition. While they were able, for the most part, to cling to their landholdings, the ex-slaveholders were ultimately forced to break up the great plantations that had been the cornerstone of the antebellum Southern economy and to rent small parcels of land to the freedmen using a new form of rental contract — sharecropping. From a situation where tenancy was extremely rare, the South suddenly became an agricultural economy characterized by tenant farms.

The result was an economy that remained heavily committed not only to agriculture, but to the staple crop of cotton. Crop output in the South fell dramatically at the end of the war and had not yet recovered its antebellum level by 1879. The loss of income was particularly hard on white Southerners; per capita income of whites had been $125 in 1857 and was just over $80 in 1879 (Ransom and Sutch 1979). Table 5 compares the growth of GNP in the United States with the gross crop output of the Southern states from 1874 to 1904. Over the last quarter of the nineteenth century, gross crop output in the South rose by about one percent per year at a time when the GNP of the United States (including the South) was rising at twice that rate. By the end of the century, Southern per capita income had fallen to roughly two-thirds of the national level, and the South was locked in a cycle of poverty that lasted well into the twentieth century. How much of this failure was due solely to the war remains open to debate. What is clear is that neither the dreams of those who fought for an independent South in 1861 nor the dreams of those who hoped that a “New South” might emerge from the destruction of war after 1865 were realized.

Table 5

Annual Rates of Growth of Gross National Product of the U.S. and Gross Southern Crop Output, 1874 to 1904
Annual Percentage Rate of Growth
Interval Gross National Product of the U.S. Gross Southern Crop Output
1874 to 1884 2.79 1.57
1879 to 1889 1.91 1.14
1884 to 1894 0.96 1.51
1889 to 1899 1.15 0.97
1894 to 1904 2.30 0.21
1874 to 1904 2.01 1.10
Source: Ransom and Sutch (1979: 140, Table 7.3).
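
The rates in Table 5 are compound annual growth rates over each interval, g = (Y_end / Y_start)^(1/years) - 1. A minimal sketch of the computation follows; the endpoint values below are invented to illustrate magnitudes like those in the table’s 1874 to 1904 row, not Ransom and Sutch’s underlying data.

```python
# Compound annual growth rate of the kind reported in Table 5:
# g = (end / start) ** (1 / years) - 1. Endpoint values are invented to
# match the magnitudes in Table 5's 1874 to 1904 row.

def cagr(start_value, end_value, years):
    return (end_value / start_value) ** (1 / years) - 1

# Over 30 years, about 2 percent annual growth compounds to a factor of
# roughly 1.82, while about 1.1 percent compounds to only about 1.39.
print(f"GNP-like series:  {cagr(100, 181.7, 30):.2%} per year")
print(f"crop-like series: {cagr(100, 138.8, 30):.2%} per year")
```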

References

Atack, Jeremy, and Peter Passell. A New Economic View of American History from Colonial Times to 1940. Second edition. New York: W.W. Norton, 1994.

Beard, Charles, and Mary Beard. The Rise of American Civilization. Two volumes. New York: Macmillan, 1927.

Bensel, Richard F. Yankee Leviathan: The Origins of Central State Authority in America, 1859-1877. New York: Cambridge University Press, 1990.

Brown, Richard D. Modernization: The Transformation of American Life, 1600-1865. New York: Hill and Wang, 1976.

Burdekin, Richard C.K., and Farrokh K. Langdana. “War Finance in the Southern Confederacy.” Explorations in Economic History 30 (1993): 352-377.

Cochran, Thomas C. “Did the Civil War Retard Industrialization?” Mississippi Valley Historical Review 48 (September 1961): 197-210.

Egnal, Marc. “The Beards Were Right: Parties in the North, 1840-1860.” Civil War History 47 (2001): 30-56.

Engerman, Stanley L. “The Economic Impact of the Civil War.” Explorations in Entrepreneurial History, second series 3 (1966): 176-199.

Faulkner, Harold Underwood. American Economic History. Fifth edition. New York: Harper & Brothers, 1943.

Gilchrist, David T., and W. David Lewis, editors. Economic Change in the Civil War Era. Greenville, DE: Eleutherian Mills-Hagley Foundation, 1965.

Goldin, Claudia Dale. “The Economics of Emancipation.” Journal of Economic History 33 (1973): 66-85.

Goldin, Claudia, and Frank Lewis. “The Economic Costs of the American Civil War: Estimates and Implications.” Journal of Economic History 35 (1975): 299-326.

Goldin, Claudia, and Frank Lewis. “The Post-Bellum Recovery of the South and the Cost of the Civil War: Comment.” Journal of Economic History 38 (1978): 487-492.

Gunderson, Gerald. “The Origin of the American Civil War.” Journal of Economic History 34 (1974): 915-950.

Hacker, Louis. The Triumph of American Capitalism: The Development of Forces in American History to the End of the Nineteenth Century. New York: Columbia University Press, 1940.

Hughes, J.R.T., and Louis P. Cain. American Economic History. Fifth edition. New York: Addison Wesley, 1998.

Huston, James L. “Property Rights in Slavery and the Coming of the Civil War.” Journal of Southern History 65 (1999): 249-286.

James, John. “Public Debt Management and Nineteenth-Century American Economic Growth.” Explorations in Economic History 21 (1984): 192-217.

Lerner, Eugene. “Money, Prices and Wages in the Confederacy, 1861-65.” Ph.D. dissertation, University of Chicago, Chicago, 1954.

McPherson, James M. “Antebellum Southern Exceptionalism: A New Look at an Old Question.” Civil War History 29 (1983): 230-244.

McPherson, James M. Battle Cry of Freedom: The Civil War Era. New York: Oxford University Press, 1988.

North, Douglass C. The Economic Growth of the United States, 1790-1860. Englewood Cliffs: Prentice Hall, 1961.

Ransom, Roger L. Conflict and Compromise: The Political Economy of Slavery, Emancipation, and the American Civil War. New York: Cambridge University Press, 1989.

Ransom, Roger L. “The Economic Consequences of the American Civil War.” In The Political Economy of War and Peace, edited by M. Wolfson. Norwell, MA: Kluwer Academic Publishers, 1998.

Ransom, Roger L. “Fact and Counterfact: The ‘Second American Revolution’ Revisited.” Civil War History 45 (1999): 28-60.

Ransom, Roger L. “The Historical Statistics of the Confederacy.” In The Historical Statistics of the United States, Millennial Edition, edited by Susan Carter and Richard Sutch. New York: Cambridge University Press, 2002.

Ransom, Roger L., and Richard Sutch. “Growth and Welfare in the American South in the Nineteenth Century.” Explorations in Economic History 16 (1979): 207-235.

Ransom, Roger L., and Richard Sutch. “Who Pays for Slavery?” In The Wealth of Races: The Present Value of Benefits from Past Injustices, edited by Richard F. America, 31-54. Westport, CT: Greenwood Press, 1990.

Ransom, Roger L., and Richard Sutch. “Conflicting Visions: The American Civil War as a Revolutionary Conflict.” Research in Economic History 20 (2001).

Ransom, Roger L., and Richard Sutch. One Kind of Freedom: The Economic Consequences of Emancipation. Second edition. New York: Cambridge University Press, 2001.

Robertson, Ross M. History of the American Economy. Second edition. New York: Harcourt Brace and World, 1955.

United States, Bureau of the Census. Historical Statistics of the United States, Colonial Times to 1970. Two volumes. Washington: U.S. Government Printing Office, 1975.

Walton, Gary M., and Hugh Rockoff. History of the American Economy. Eighth edition. New York: Dryden, 1998.

Weidenmier, Marc. “The Market for Confederate Bonds.” Explorations in Economic History 37 (2000): 76-97.

Weingast, Barry. “The Economic Role of Political Institutions: Market Preserving Federalism and Economic Development.” Journal of Law, Economics and Organization 11 (1995): 1-31.

Weingast, Barry R. “Political Stability and Civil War: Institutions, Commitment, and American Democracy.” In Analytic Narratives, edited by Robert Bates et al. Princeton: Princeton University Press, 1998.

Williamson, Jeffrey. “Watersheds and Turning Points: Conjectures on the Long-Term Impact of Civil War Financing.” Journal of Economic History 34 (1974): 636-661.

Wolfson, Murray. “A House Divided against Itself Cannot Stand.” Conflict Management and Peace Science 14 (1995): 115-141.

Wright, Gavin. Old South, New South: Revolutions in the Southern Economy since the Civil War. New York: Basic Books, 1986.

Citation: Ransom, Roger. “Economics of the Civil War”. EH.Net Encyclopedia, edited by Robert Whaples. August 24, 2001. URL http://eh.net/encyclopedia/the-economics-of-the-civil-war/

The Use of Quantitative Micro-data in Canadian Economic History: A Brief Survey

Livio Di Matteo, Lakehead University

Introduction1

From a macro perspective, Canadian quantitative economic history is concerned with the collection and construction of historical time series data as well as the study of the performance of broad economic aggregates over time.2 The micro dimension of quantitative economic history focuses on individual and sector responses to economic phenomena.3 In particular, micro economic history is marked by the collection and analysis of data sets rooted in individual economic and social behavior. This approach uses primary historical records such as census rolls, probate records, assessment rolls, land records, parish records and company records to construct sets of socio-economic data used to examine the social and economic characteristics and behavior of individuals and their society, both cross-sectionally and over time.

The expansion of historical micro-data studies in Canada has been a function of academic demand and supply factors. On the demand side, there has been a desire for more explicit use of economic and social theory in history, and micro-data studies that draw on available records on individuals appeal to historians interested in understanding aggregate trends and uncovering the micro-underpinnings of larger macroeconomic and social relationships. For example, in Canada the late nineteenth century was a period of intermittent economic growth, and analyzing how that growth record affected different groups in society requires studies that disaggregate the population into sub-groups. One way of doing this that became attractive in the 1960s was to collect micro-data samples from relevant census, assessment or probate records.

On the supply side, computers have lowered research costs, making the analysis of large data sets feasible and cost-effective. The proliferation of low-cost personal computers, statistical packages and spreadsheets has led to another revolution in micro-data analysis, as computers are now routinely taken into archives so that data collection, input and analysis can proceed even more efficiently.

In addition, studies using historical micro-data are an area where economic historians trained either as economists or as historians have been able to find common ground.4 Many of the pioneering micro-data projects in Canada were conducted by historians with some training in quantitative techniques, much of it acquired “on the job” out of intellectual interest and excitement rather than in graduate school. Historians and economists are united by their common analysis of primary micro-data sources and their use of sophisticated computer equipment, linkage software and statistical packages.

Background to Historical Micro-data Studies in Canadian Economic History

The early stage of historical micro-data projects in Canada attempted to systematically collect and analyze data on a large scale. Many of these micro-data projects crossed the lines between social and economic history, as well as demographic history in the case of French Canada. Path-breaking work by American scholars such as Lee Soltow (1971), Stephan Thernstrom (1973) and Alice Hanson Jones (1980) was an important influence on Canadian work. Their work on wealth and social structure and mobility using census and probate data drew attention to the extent of mobility — geographic, economic and social — that existed in pre-twentieth-century America.

However, Canadian historical micro-data work has been quite distinct from that of the United States, reflecting Canada’s separate tradition in economic history. Canada’s history is one of centralized penetration from the east via the Great Lakes-St. Lawrence waterway and of two founding “nations” of European settlers – English and French – which led to strong Protestant and Roman Catholic traditions. Indeed, there was nearly 100 percent membership in the Roman Catholic Church among francophone Quebeckers for much of Canada’s history. As well, an economic reliance on natural resources and a sparse population spread along an east-west corridor in isolated regions have made Canada’s economic history, politics and institutions quite different from those of the United States.

The United States, from its early natural resource staples origins, developed a large, integrated internal market that was relatively independent of external economic forces, at least compared with Canada, and this shifted research topics away from trade and towards domestic resource allocation issues. At the level of historical micro-data, American scholars have had access to national micro-data samples for some time, which until recently has not been the case in Canada. Most of the early studies in Canadian micro-data were regional or urban samples drawn from manuscript sources, and there has been little work since at a national level using micro-data sources. However, the strong role of the state in Canada has lent a particular richness to those sources that can be accessed, and the Canadian census contains some personal details not available in the U.S. Census, such as religious affiliation. Moreover, earnings data are available in the Canadian census starting some forty years earlier than in the United States.

Canadian micro-data studies have examined industry, fertility, urban and rural life, wages and labor markets, women’s work and roles in the economy, immigration and wealth. The data sources include census, probate records, assessment rolls, legal records and contracts, and are used by historians, economists, geographers, sociologists and demographers to study economic history.5 Very often, the primary sources are untapped and there can be substantial gaps in their coverage due to uneven preservation.

A Survey of Micro-data Studies

Early Years in English Canada

The fruits of early work in English Canada were books and papers by Frank Denton and Peter George (1970, 1973), Michael Katz (1975) and David Gagan (1981), among others.6 The Denton and George paper examined the influences on family size and school attendance in Wentworth County, Ontario, using the 1871 Census of Canada manuscripts. But it was Katz and Gagan’s work that generated greater attention among historians. Katz’s Hamilton Project used census, assessment rolls, city directories and other assorted micro-records to describe patterns of life in mid-nineteenth century Hamilton. Gagan’s Peel County Project was a comprehensive social and economic study of Peel County, Ontario, again using a variety of individual records including probate. These studies stimulated discussion and controversy about nineteenth-century wealth, inheritance patterns, and family size and structure.

The Demographic Tradition in French Canada

In French Canada, the pioneering work was the Saguenay Project organized by Gerard Bouchard (1977, 1983, 1992, 1993, 1996, 1998). Beginning in the 1970s, a large effort was expended to create a computerized genealogical and demographic data base for the Saguenay and Charlevoix regions of Quebec going back well into the nineteenth century. This data set, known now as the Balsac Register, contains data on 600,000 individuals (140,000 couples) and 2.4 million events (e.g., births and deaths), along with attributes such as gender, with enormous social scientific and human genetic possibilities. The material gathered has been used to examine fertility, marriage patterns, inheritance, agricultural production and literacy, as well as genetic predisposition towards disease, and it formed the basis for a book spanning the history of population and families in the Saguenay over the period 1858 to 1971.

French Canada has a strong tradition of historical micro-data research rooted in demographic analysis.7 Another project, underway since 1969 and associated with Bertrand Desjardins, Hubert Charbonneau, Jacques Légaré and Yves Landry, is Le Programme de recherche en démographie historique (P.R.D.H.) at the Université de Montréal (Charbonneau, 1988; Landry, 1993; Desjardins, 1993). The database will eventually contain details on a million persons and their life events in Quebec between 1608 and 1850.

Industrial Studies

Only for the 1871 census have all of the schedules survived, and the industrial schedules of that census have been made machine-readable (Bloomfield, 1986; Borsa and Inwood, 1993). Kris Inwood and Phyllis Wagg (1993) used the manuscript industrial schedules to examine the survival of handloom weaving in rural Canada circa 1870. A total of 2,830 records were examined, with data on average product, capital and months of activity utilized. The results show that the demand for woolen homespun was income sensitive and that patterns of weaving by men and women differed: male-headed firms worked a greater number of months during the year and were more likely to have a second worker.

More recently, using a combination of aggregate capital market data and firm-level data for a sample of Canadian and American steel producers, Ian Keay and Angela Redish (2004) analyze the relationships between capital costs, financial structure, and domestic capital market characteristics. They find that national capital market characteristics and firm-specific characteristics were important determinants of twentieth-century U.S. and Canadian steel firms’ financing decisions. Keay (2000) uses information from firms’ balance sheets and income accounts, together with industry-specific prices, to calculate labor, capital, intermediate-input and total factor productivities for a sample of 39 Canadian and 39 American manufacturing firms in nine industries. The firm-level data also allow for the construction of nationally, industrially and temporally consistent series, including capital and value added. Inwood and Keay (2005) use establishment-level data describing manufacturers located in 128 border and near-border counties in Michigan, New York, Ohio, Pennsylvania, and Ontario to calculate Canadian relative to U.S. total factor productivity ratios for 25 industries. Their results indicate that the average U.S. establishment was approximately 7% more efficient than its Canadian counterpart in 1870/71.
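For readers unfamiliar with the terminology, a standard way to define total factor productivity for such comparisons (a generic textbook formulation, not necessarily the exact index these authors construct) is output relative to an aggregate of inputs:

```latex
% TFP as output per unit of an input aggregate, here a Cobb-Douglas
% aggregator over capital K, labor L and intermediate inputs M,
% with cost shares summing to one:
\[
\mathit{TFP}_i \;=\; \frac{Y_i}{K_i^{\alpha}\, L_i^{\beta}\, M_i^{\gamma}},
\qquad \alpha + \beta + \gamma = 1 .
\]
% A relative-productivity comparison such as Inwood and Keay's 7% gap
% is then the ratio TFP_US / TFP_CA computed from comparable data.
```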

Population, Demographics & Fertility

Marvin McInnis (1977) assembled a body of census data on childbearing and other aspects of Upper Canadian households in 1861 and produced a sample of 1,200 farm households that was used to examine the relationship between child-bearing and land availability. He found that an abundance of nearby uncultivated land did affect the probability of there being young children in the household, but the magnitude of the influence was small. The strongest result was that fertility fell as larger cities developed sufficiently close by to exert a real influence through urban life and culture.

Eric Moore and Brian Osborne (1987) have examined the socio-economic differentials of marital fertility in Kingston. They related religion, birthplace, mother’s age, ethnic origin and occupational status to changes in fertility between 1861 and 1881, using a data set of approximately 3,000 observations taken from the manuscript census. Their choice of variables allows for examination of the impact of both economic factors and cultural attributes. William Marr (1992) took the first reasonably large sample of farm households (2,656) from the 1851-52 Census of Canada West and examined the determinants of fertility. He found that fertility differences between older and more newly settled regions were influenced by land availability at the farm level, but that farm location, with respect to the extent of agricultural development, did not affect fertility when age, birthplace and religion were held constant. Michael Wayne (1998) uses the 1861 Census of Canada to look at the black population of Canada on the eve of the American Civil War. George Emery (1993) assesses the comprehensiveness and accuracy of aggregate vital statistics in Ontario between 1869 and 1952 by examining the process of recording them. Emery and Kevin McQuillan (1988) use case studies to examine mortality in nineteenth-century Ingersoll, Ontario.

Urban and Rural Life

A number of studies have examined urban and rural life. Bettina Bradbury (1984) has analyzed the census manuscripts of two working-class Montreal wards, Ste. Anne and St. Jacques, for the years 1861, 1871 and 1881. Random samples of one-tenth of the households in these parts of Montreal were taken, yielding a sample of nearly 11,000 individuals over three decades. The data were used to examine women and wage labor in Montreal. The evidence is that men were the primary wage earners, and that the wife’s contribution to the family economy lay not so much in her own wage labor, which was infrequent, as in organizing the economic life of the household and finding alternative sources of support.

Bettina Bradbury, Peter Gossage, Evelyn Kolish and Alan Stewart (1993) and Gossage (1991) have examined marriage contracts in Montreal over the period 1820-1840 and found that, over time, the use of marriage contracts changed, becoming a tool of a propertied minority. As well, a growing proportion of contract signers chose to keep the property of spouses separate rather than “in community.” The movement towards separation was most likely to be found among the wealthy where separate property offered advantages, especially to those engaged in commerce during harsh economic times. Gillian Hamilton (1999) looks at prenuptial contracting behavior in early nineteenth-century Quebec to explore property rights within families and finds that couples signing contracts tended to choose joint ownership of property when wives were particularly important to the household.

Chad Gaffield (1979, 1983, 1987) has examined social, family and economic life in the Eastern Ontario counties of Prescott-Russell, Alfred and Caledonia using aggregate census data, as well as manuscript data, for the period 1851-1881.8 He has applied the material to studying rural schooling and the economic structure of farm families, and found systematic differences between the marriage patterns of Anglophones and Francophones, with Francophones tending to marry at a younger average age. Also, land shortages and the diminishing forest frontier created economic difficulties that led to reduced family sizes by 1881. Gaffield’s most significant current research project is his leadership of the Canadian Century Research Infrastructure (CCRI) initiative, one of the country’s largest research projects. The CCRI is creating cross-indexed databases from a century’s worth of national census information, enabling unprecedented understanding of the making of modern Canada. This effort will eventually lead to an integrated set of micro-data resources at a national level comparable to what currently exists for the United States.9

Business Records

Company and business records have also been used as a source of micro-data and insight into economic history. Gillian Hamilton has conducted a number of studies examining contracts, property rights and labor markets in pre-twentieth-century Canada. Hamilton (1996, 2000) examines the nature of apprenticing arrangements in Montreal around the turn of the nineteenth century, using apprenticeship contracts from a larger body of notarial records found in Quebec. The principal questions addressed are what determined apprenticeship length and when the decline of the institution began. Hamilton finds that the characteristics of both masters and their boys were important and that masters often relied on probationary periods to better gauge a boy’s worth before signing a contract. Probations, all else equal, were associated with shorter contracts.

Ann Carlos and Frank Lewis (1998, 1999, 2001, 2002) use Hudson’s Bay Company fur trading records to study property rights, competition, and depletion in the eighteenth-century Canadian fur trade; by studying the role of aboriginals as consumers, their work represents an important foray into Canadian aboriginal economic history. Doug McCalla (2001, 2005) uses store records from Upper Canada to examine consumer purchases in the early nineteenth century and gain insight into material culture. Barton Hamilton and Mary MacKinnon (1996) use Canadian Pacific Railway records to study changes between 1903 and 1938 in the composition of job separations and in the probability of separation. The proportion of voluntary departures fell by more than half after World War I. They estimate independent competing-risk, piecewise-constant hazard functions for the probabilities of quits and layoffs. Changes in workforce composition lengthened the average worker’s spell, but a worker with any given set of characteristics was much more likely to be laid off after 1921, although many of these layoffs were only temporary.
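To make the hazard-function language concrete, the sketch below estimates a piecewise-constant, cause-specific hazard in the textbook way: within each interval, the hazard is the number of events divided by person-time at risk, and spells ending in a competing cause (e.g., a quit when layoffs are the event of interest) are treated as censored. The cut points and simulated data are invented for illustration; this is not Hamilton and MacKinnon's actual specification.

```python
import numpy as np

def piecewise_hazard(durations, event, cuts):
    """Piecewise-constant hazard: events / person-time within each interval.

    durations -- spell lengths (e.g., years of railway service)
    event     -- 1 if the spell ended in the event of interest (e.g., layoff),
                 0 if censored (including spells ending in a competing risk)
    cuts      -- interval boundaries, e.g. [0, 2, 4, 8]
    """
    durations = np.asarray(durations, dtype=float)
    event = np.asarray(event, dtype=int)
    rates = []
    for lo, hi in zip(cuts[:-1], cuts[1:]):
        at_risk = durations > lo                             # spells surviving past lo
        exposure = np.clip(durations[at_risk], lo, hi) - lo  # person-time in [lo, hi)
        events = np.sum((event[at_risk] == 1) & (durations[at_risk] <= hi))
        rates.append(events / exposure.sum())
    return rates

# Illustrative use: simulated exponential spells (true hazard 0.1 per year),
# censored at 8 years; each estimate should land near 0.1.
rng = np.random.default_rng(0)
d = rng.exponential(10.0, size=1000)
e = (d <= 8.0).astype(int)          # event observed only if it occurs before censoring
d = np.minimum(d, 8.0)
print(piecewise_hazard(d, e, [0, 2, 4, 8]))
```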

MacKinnon (1997) taps the CPR data again with a constructed sample of 9,000 employees hired before 1945, including 700 pensioners, and finds that features of the CPR pension plan are consistent with economic explanations of the role of pensions. Long, continuous periods of service were likely to be rewarded, and employees in the most responsible positions generally had higher pensions.

MacKinnon (1996) complements published Canadian nominal wage data by constructing a new hourly wage series, developed from firm records, for machinists, helpers, and laborers employed by the Canadian Pacific Railway between 1900 and 1930. This new evidence suggests that real wage growth in Canada was faster than previously believed, and that there were substantial changes in wage inequality. In another contribution, MacKinnon (1990) studies unemployment relief in Canada by examining relief policies and recipients and contrasting the Canadian situation with unemployment insurance in Britain. She finds demographic factors important in explaining who went on relief, with older workers and those with large families most likely to be on relief for sustained periods. Another unique contribution to historical labor studies is Michael Huberman and Denise Young (1999). They examine a data set of 1,554 individual strikes in Canada from 1901 to 1914 and conclude that international unions did not weaken Canada’s union movement and that they became part of Canada’s industrial relations framework.

The 1891 and 1901 Censuses

An ongoing project is the 1891 Census of Canada Project at the University of Guelph under Director Kris Inwood, which is making a digitized sample of individual records from the 1891 census available to the research public. The project is hosted by the University of Guelph, with support from the Canadian Foundation for Innovation, the Ontario Innovation Trust and private sector partners. Phase I (Ontario) of the project began during the winter of 2003 in association with the College of Arts Canada Research Chair in Rural History, and continues until 2007. Phase II began in 2005; it extends data collection to the rest of the country and also creates an integrated national sample. The database includes information returned on a randomly selected 5% of the enumerators’ manuscript pages, each page containing information describing twenty-five people. An additional 5% of census pages for western Canada and several large cities augments the basic sample. Ultimately the database will contain records for more than 350,000 people, bearing in mind that the population of Canada in 1891 was 3.8 million.
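The design just described is a page-level cluster sample with a regional oversample. A minimal sketch of the idea follows; the page counts, regional shares and sampling mechanics are invented for illustration and do not reproduce the project's actual procedure.

```python
import numpy as np

# Hypothetical sampling frame: one row per enumerator page, 25 people per page.
rng = np.random.default_rng(1891)
n_pages = 152_000                                   # ~3.8M people / 25 per page
region = rng.choice(["east", "west", "city"], size=n_pages, p=[0.7, 0.2, 0.1])

base = rng.random(n_pages) < 0.05                   # 5% national page sample
boost = (rng.random(n_pages) < 0.05) & np.isin(region, ["west", "city"])
sampled = base | boost                              # western/urban pages oversampled

# Sampling whole pages keeps households and neighbours together (a cluster
# sample), at the cost of within-page correlation relative to sampling persons.
print(sampled.sum() * 25, "people in the illustrative sample")
```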

The release of the 1901 Census of Canada manuscript census has also spawned numerous micro-data studies. Peter Baskerville and Eric Sager (1995, 1998) have used the 1901 Census to examine unemployment and the work force in late Victorian Canada.10 Baskerville (2001a, 2001b) uses the 1901 census to examine the practice of boarding in Victorian Canada and, in another study, the relationship between wealth and religion. Kenneth Sylvester (2001) uses the 1901 census to examine ethnicity and landholding. Alan Green and Mary MacKinnon (2001) use a new sample of individual-level data compiled from the manuscript returns of the 1901 Census of Canada to examine the assimilation of male wage-earning immigrants (mainly from the UK) in Montreal and Toronto. Unlike studies of post-World War II immigrants to Canada, and some recent studies of nineteenth-century immigration to the United States, they find slow assimilation to the earnings levels of native-born, English mother-tongue Canadians. Green, MacKinnon and Chris Minns (2005) use 1901 census data to demonstrate that Anglophones and Francophones had very different personal characteristics, so that movement to the west was rarely economically attractive for Francophones. However, large-scale migration into New England fitted French Canadians’ demographic and human capital profile.

Wealth and Inequality

Recent years have also seen the emergence of a body of literature by several contributors on wealth accumulation and distribution in nineteenth-century Canada. This work has provided quantitative measurements of the degree of inequality in wealth holding, as well as its evolution over time. Gilles Paquet and Jean-Pierre Wallot (1976, 1986) have examined the net personal wealth of wealth holders using “les inventaires après décès” (inventories taken after death) in Quebec during the late eighteenth and early nineteenth centuries. They have suggested that the habitant was indeed a rational economic agent who chose land as a form of wealth not because of inherent conservatism but because information and transaction costs hindered the accumulation of financial assets.

A. Gordon Darroch (1983a, 1983b) has utilized municipal assessment rolls to study wealth inequality in Toronto during the late nineteenth century. Darroch found that the top one-fifth of assessed families held at least 65% of all assessed wealth and the poorest 40% never more than 8%, even though inequality did decline between 1871 and 1899. Darroch and Michael Ornstein (1980, 1984) used the 1871 Census to examine ethnicity, occupational structure and family life cycles in Canada. Darroch and Soltow (1992, 1994) research property holding in Ontario using 5,669 individuals from the 1871 census manuscripts and find “deep and abiding structures of inequality” accompanied by opportunities for mobility.

Lars Osberg and Fazley Siddiq (1988, 1993) and Siddiq (1988) have examined wealth inequality in Nova Scotia using probated estates from 1871 and 1899. They found a slight shift towards greater inequality in wealth over time and concluded that the prosperity of the 1850-1875 period in Nova Scotia benefited primarily the Halifax-based merchant class. Higher levels of wealth were associated with being a merchant and with living in Halifax, as opposed to the rest of the province. Siddiq and Julian Gwyn (1992) used probate inventories from 1851 and 1871 to study wealth over the period; they again document a trend towards greater inequality, accompanied by rising wealth. In addition, Peter Ward has collected a set of 196 Nova Scotia probate records for Lunenburg County spanning 1808-1922, as well as a set of poll tax records for the same location between 1791 and 1795.11

Livio Di Matteo and Peter George (1992, 1998) have examined wealth distribution in late nineteenth-century Ontario using probate records and assessment roll data for Wentworth County for the years 1872, 1882, 1892 and 1902. They find a rise in average wealth levels up until 1892 and a decline from 1892 to 1902. Whereas the rise in wealth from 1872 to 1892 appears to have been accompanied by a trend towards greater equality in wealth distribution, the period 1892 to 1902 marked a return to greater inequality. Di Matteo (1996, 1997, 1998, 2001) uses a set of 3,515 probated decedents for all of Ontario in 1892 to examine the determinants of wealth holding, the wealth of the Irish, inequality and life-cycle accumulation. Di Matteo and Herb Emery (2002) use the 1892 Ontario data to examine life insurance holding and the extent of self-insurance as wealth rises. Di Matteo (2004, 2006) uses a newly constructed micro-data set for the Thunder Bay District from 1885-1920, consisting of 1,293 probated decedents, to examine wealth and inequality during Canada’s wheat boom era. Di Matteo is currently using Ontario probated decedents from 1902, linked to the 1901 census and combined with previous data from 1892, to examine the impact of religious affiliation on wealth holding.

Wealth and property holding among women has also been a specific topic of research.12 Peter Baskerville (1999) uses probate data to examine wealth holding by women in the cities of Victoria and Hamilton between 1880 and 1901 and finds that they were substantial property owners. The holding of wealth by women in the wake of property legislation is studied by Inwood and Sue Ingram (2000) and Inwood and Sarah Van Sligtenhorst (2004); their work chronicles the increase in female property holding following Canadian property law changes in the late nineteenth century. Inwood and Richard Reid (2001) also use the Canadian census to examine the relationship between gender and occupational identity.

Conclusion

The flurry of recent activity in Canadian quantitative economic history using census and probate data bodes well for the future. Even the National Archives of Canada has now made digital images of census forms, as well as other primary records, available online.13 Moreover, projects such as the CCRI and the 1891 Census Project hold the promise of new, integrated data sources for future research on national as opposed to regional micro-data questions. We will be able to see the extent of economic development, earnings and convergence both at a regional level and from a national perspective. Access to the 1911, and eventually the 1921, Census of Canada will also provide fertile areas for research and discovery. The period between 1900 and 1921, spanning the wheat boom and the First World War, is particularly important as it coincides with Canadian industrialization, rapid economic growth and the further expansion of wealth and income at the individual level. Moreover, access to new samples of micro-data may also help shed light on aboriginal economic history during the nineteenth and early twentieth centuries, as well as the economic progress of women.14 In particular, the economic history of Canada’s aboriginal peoples after the decline of the fur trade and during Canada’s industrialization is an area where micro-data might be useful in illustrating economic trends and conditions.15

References:

Baskerville, Peter A. “Familiar Strangers: Urban Families with Boarders in Canada, 1901.” Social Science History 25, no. 3 (2001): 321-46.

Baskerville, Peter. “Did Religion Matter? Religion and Wealth in Urban Canada at the Turn of the Twentieth Century: An Exploratory Study.” Histoire sociale-Social History XXXIV, no. 67 (2001): 61-96.

Baskerville, Peter A. and Eric W. Sager. “Finding the Work Force in the 1901 Census of Canada.” Histoire sociale-Social History XXVIII, no. 56 (1995): 521-40.

Baskerville, Peter A., and Eric W. Sager. Unwilling Idlers: The Urban Unemployed and Their Families in Late Victorian Canada. Toronto: University of Toronto Press, 1998.

Baskerville, Peter A. “Women and Investment in Late-Nineteenth Century Urban Canada: Victoria and Hamilton, 1880-1901.” Canadian Historical Review 80, no. 2 (1999): 191-218.

Borsa, Joan, and Kris Inwood. Codebook and Interpretation Manual for the 1870-71 Canadian Industrial Database. Guelph, 1993.

Bouchard, Gerard. “Introduction à l’étude de la société saguenayenne aux XIXe et XXe siècles.” Revue d’histoire de l’Amérique française 31, no. 1 (1977): 3-27.

Bouchard, Gerard. “Les systèmes de transmission des avoirs familiaux et le cycle de la société rurale au Québec, du XVIIe au XXe siècle.” Histoire sociale-Social History XVI, no. 31 (1983): 35-60.

Bouchard, Gerard. “Les fichiers-réseaux de population: Un retour à l’individualité.” Histoire sociale-Social History XXI, no. 42 (1988): 287-94.

Bouchard, Gerard, and Regis Thibeault. “Change and Continuity in Saguenay Agriculture: The Evolution of Production and Yields (1852-1971).” In Canadian Papers in Rural History, Vol. VIII, edited by Donald H. Akenson, 231-59. Gananoque, ON: Langdale Press, 1992.

Bouchard, Gerard. “Computerized Family Reconstitution and the Measure of Literacy. Presentation of a New Index.” History and Computing 5, no 1 (1993): 12-24.

Bouchard, Gerard. Quelques arpents d’Amérique: Population, économie, famille au Saguenay, 1838-1971. Montreal: Boréal, 1996.

Bouchard, Gerard. “Economic Inequalities in Saguenay Society, 1879-1949: A Descriptive Analysis.” Canadian Historical Review 79, no. 4 (1998): 660-90.

Bourbeau, Robert, and Jacques Légaré. Évolution de la mortalité au Canada et au Québec 1831-1931. Montreal: Les Presses de l’Université de Montréal, 1982.

Bradbury, Bettina. “Women and Wage Labour in a Period of Transition: Montreal, 1861-1881.” Histoire sociale-Social History XVII (1984): 115-31.

Bradbury, Bettina, Peter Gossage, Evelyn Kolish, and Alan Stewart. “Property and Marriage: The Law and the Practice in Early Nineteenth-Century Montreal.” Histoire sociale-Social History XXVI, no. 51 (1993): 9-40.

Carlos, Ann, and Frank Lewis. “Property Rights and Competition in the Depletion of the Beaver: Native Americans and the Hudson’s Bay Company, 1700-1763.” In The Other Side of the Frontier: Economic Explorations into Native American History, edited by Linda Barrington. Boulder, CO: Westview Press, 1998.

Carlos, Ann, and Frank Lewis. “Property Rights, Competition, and Depletion in the Eighteenth-century Canadian Fur Trade: The Role of the European Market.” Canadian Journal of Economics 32, no. 3 (1999): 705-28.

Carlos, Ann, and Frank Lewis. “Marketing in the Land of Hudson Bay: Indian Consumers and the Hudson’s Bay Company, 1670-1770.” Enterprise and Society 3 (2002): 285-317.

Carlos, Ann and Frank Lewis. “Trade, Consumption, and the Native Economy: Lessons from York Factory, Hudson Bay.” Journal of Economic History 61, no. 4 (2001): 1037-64.

Charbonneau, Hubert. “Le registre de population du Québec ancien: Bilan de vingt années de recherches.” Histoire sociale-Social History XXI, no. 42 (1988): 295-99.

Darroch, A. Gordon. “Occupational Structure, Assessed Wealth and Homeowning during Toronto’s Early Industrialization, 1861-1899.” Histoire sociale-Social History XVI (1983): 381-419.

Darroch, A. Gordon. “Early Industrialization and Inequality in Toronto, 1861-1899.” Labour/Le Travailleur 11 (1983): 31-61.

Darroch, A. Gordon. “A Study of Census Manuscript Data for Central Ontario, 1861-1871: Reflections on a Project and On Historical Archives.” Histoire sociale-Social History XXI, no. 42 (1988): 304-11.

Darroch, A. Gordon, and Michael Ornstein. “Ethnicity and Occupational Structure in Canada in 1871: The Vertical Mosaic in Historical Perspective.” Canadian Historical Review 61 (1980): 305-33.

Darroch, A. Gordon, and Michael Ornstein. “Family Coresidence in Canada in 1871: Family Life Cycles, Occupations and Networks of Mutual Aid.” Canadian Historical Association Historical Papers (1984): 30-55.

Darroch, A. Gordon, and Lee Soltow. “Inequality in Landed Wealth in Nineteenth-Century Ontario: Structure and Access.” Canadian Review of Sociology and Anthropology 29 (1992): 167-200.

Darroch, A. Gordon, and Lee Soltow. Property and Inequality in Victorian Ontario: Structural Patterns and Cultural Communities in the 1871 Census. Toronto: University of Toronto Press, 1994.

Denton, Frank T., and Peter George. “An Explanatory Statistical Analysis of Some Socio-economic Characteristics of Families in Hamilton, Ontario, 1871.” Histoire sociale-Social History 5 (1970): 16-44.

Denton, Frank T., and Peter George. “The Influence of Socio-Economic Variables on Family Size in Wentworth County, Ontario, 1871: A Statistical Analysis of Historical Micro-data.” Canadian Review of Sociology and Anthropology 10 (1973): 334-45.

Di Matteo, Livio. “Wealth and Inequality on Ontario’s Northwestern Frontier: Evidence from Probate.” Histoire sociale-Social History XXXVIII, no. 75 (2006): 79-104.

Di Matteo, Livio. “Boom and Bust, 1885-1920: Regional Wealth Evidence from Probate Records.” Australian Economic History Review 44, no. 1 (2004): 52-78.

Di Matteo, Livio. “Patterns of Inequality in Late Nineteenth-Century Ontario: Evidence from Census-Linked Probate Data.” Social Science History 25, no. 3 (2001): 347-80.

Di Matteo, Livio. “Wealth Accumulation and the Life Cycle in Economic History: Implications of Alternative Approaches to Micro-Data.” Explorations in Economic History 35 (1998): 296-324.

Di Matteo, Livio. “The Determinants of Wealth and Asset Holding in Nineteenth Century Canada: Evidence from Micro-data.” Journal of Economic History 57, no. 4 (1997): 907-34.

Di Matteo, Livio. “The Wealth of the Irish in Nineteenth-Century Ontario.” Social Science History 20, no. 2 (1996): 209-34.

Di Matteo, Livio, and J.C. Herbert Emery. “Wealth and the Demand for Life Insurance: Evidence from Ontario, 1892.” Explorations in Economic History 39, no. 4 (2002): 446-69.

Di Matteo, Livio, and Peter George. “Patterns and Determinants of Wealth among Probated Decedents in Wentworth County, Ontario, 1872-1902.” Histoire sociale-Social History XXXI, no. 61 (1998): 1-34.

Di Matteo, Livio, and Peter George. “Canadian Wealth Inequality in the Late Nineteenth Century: A Study of Wentworth County, Ontario, 1872-1902.” Canadian Historical Review LXXIII, no. 4 (1992): 453-83.

Emery, George N. Facts of Life: The Social Construction of Vital Statistics, Ontario, 1869-1952. Montreal: McGill-Queen’s University Press, 1993.

Emery, George, and Kevin McQuillan. “A Case Study Approach to Ontario Mortality History: The Example of Ingersoll, 1881-1971.” Canadian Studies in Population 15 (1988): 135-58.

Ens, Gerhard. Homeland to Hinterland: The Changing Worlds of the Red River Metis in the Nineteenth Century. Toronto: University of Toronto Press, 1996.

Gaffield, Chad. “Canadian Families in Cultural Context: Hypotheses from the Mid-Nineteenth Century.” Historical Papers, Canadian Historical Association (1979): 48-70.

Gaffield, Chad. “Schooling, the Economy and Rural Society in Nineteenth-Century Ontario.” In Childhood and Family in Canadian History, edited by Joy Parr, 69-92. Toronto: McClelland and Stewart, 1983.

Gaffield, Chad. Language, Schooling and Cultural Conflict: The Origins of the French-Language Controversy in Ontario. Kingston and Montreal: McGill-Queen’s University Press, 1987.

Gaffield, Chad. “Machines and Minds: Historians and the Emerging Collaboration.” Histoire sociale-Social History XXI, no. 42 (1988): 312-17.

Gagan, David. Hopeful Travellers: Families, Land and Social Change in Mid-Victorian Peel County, Canada West. Toronto: University of Toronto Press, 1981.

Gagan, David. “Some Comments on the Canadian Experience with Historical Databases.” Histoire sociale-Social History XXI, no. 42 (1988): 300-03.

Gossage, Peter. “Family Formation and Age at Marriage at Saint-Hyacinthe, Quebec, 1854-1891.” Histoire sociale-Social History XXIV, no. 47 (1991): 61-84.

Green, Alan, Mary Mackinnon and Chris Minns. “Conspicuous by Their Absence: French Canadians and the Settlement of the Canadian West.” Journal of Economic History 65, no. 3 (2005): 822-49.

Green, Alan, and Mary MacKinnon. “The Slow Assimilation of British Immigrants in Canada: Evidence from Montreal and Toronto, 1901.” Explorations in Economic History 38, no. 3 (2001): 315-38.

Green, Alan G., and Malcolm C. Urquhart. “New Estimates of Output Growth in Canada: Measurement and Interpretation.” In Perspectives on Canadian Economic History, edited by Douglas McCalla, 182-199. Toronto: Copp Clark Pitman Ltd., 1987.

Gwyn, Julian, and Fazley K. Siddiq. “Wealth Distribution in Nova Scotia during the Confederation Era, 1851 and 1871.” Canadian Historical Review LXXIII, no. 4 (1992): 435-52.

Hamilton, Barton, and Mary MacKinnon. “Quits and Layoffs in Early Twentieth Century Labour Markets.” Explorations in Economic History 33 (1996): 346-66.

Hamilton, Gillian. “The Decline of Apprenticeship in North America: Evidence from Montreal.” Journal of Economic History 60, no. 3 (2000): 627-64.

Hamilton, Gillian. “Property Rights and Transaction Costs in Marriage: Evidence from Prenuptial Contracts.” Journal of Economic History 59, no. 1 (1999): 68-103.

Hamilton, Gillian. “The Market for Montreal Apprentices: Contract Length and Information.” Explorations in Economic History 33, no. 4 (1996): 496-523.

Hamilton, Michelle, and Kris Inwood. “The Identification of the Aboriginal Population in the 1891 Census of Canada.” Manuscript, University of Guelph, 2006.

Henripin, Jacques. Tendances et facteurs de la fécondité au Canada. Ottawa: Bureau fédéral de la Statistique, 1968.

Huberman, Michael, and Denise Young. “Cross-Border Unions: Internationals in Canada, 1901-1914.” Explorations in Economic History 36 (1999): 204-31.

Igartua, José E. “Les bases de données historiques: L’expérience canadienne depuis quinze ans – Introduction.” Histoire sociale-Social History XXI, no. 42 (1988): 283-86.

Inwood, Kris, and Phyllis Wagg. “The Survival of Handloom Weaving in Rural Canada circa 1870.” Journal of Economic History 53 (1993): 346-58.

Inwood, Kris, and Sue Ingram. “The Impact of Married Women’s Property Legislation in Victorian Ontario.” Dalhousie Law Journal 23, no. 2 (2000): 405-49.

Inwood, Kris, and Sarah Van Sligtenhorst. “The Social Consequences of Legal Reform: Women and Property in a Canadian Community.” Continuity and Change 19, no. 1 (2004): 165-97.

Inwood, Kris, and Richard Reid. “Gender and Occupational Identity in a Canadian Census.” Historical Methods 32, no. 2 (2001): 57-70.

Inwood, Kris, and Kevin James. “The 1891 Census of Canada.” Cahiers québécois de démographie, forthcoming.

Inwood, Kris, and Ian Keay. “Bigger Establishments in Thicker Markets: Can We Explain Early Productivity Differentials between Canada and the United States?” Canadian Journal of Economics 38, no. 4 (2005): 1327-63.

Jones, Alice Hanson. Wealth of a Nation to Be: The American Colonies on the Eve of the Revolution. New York: Columbia University Press, 1980.

Katz, Michael B. The People of Hamilton, Canada West: Family and Class in a Mid-nineteenth-century City. Cambridge: Harvard University Press, 1975.

Keay, Ian. “Canadian Manufacturers’ Relative Productivity Performance: 1907-1990.” Canadian Journal of Economics 33, no. 4 (2000): 1049-68.

Keay, Ian, and Angela Redish. “The Micro-economic Effects of Financial Market Structure: Evidence from Twentieth-century North American Steel Firms.” Explorations in Economic History 41, no. 4 (2004): 377-403.

Landry, Yves. “Fertility in France and New France: The Distinguishing Characteristics of Canadian Behaviour in the Seventeenth and Eighteenth Centuries.” Social Science History 17, no. 4 (1993): 577-92.

MacKinnon, Mary. “Relief Not Insurance: Canadian Unemployment Relief in the 1930s.” Explorations in Economic History 27, no. 1 (1990): 46-83.

MacKinnon, Mary. “New Evidence on Canadian Wage Rates, 1900-1930.” Canadian Journal of Economics XXIX, no. 1 (1996): 114-31.

MacKinnon, Mary. “Providing for Faithful Servants: Pensions at the Canadian Pacific Railway.” Social Science History 21, no. 1 (1997): 59-83.

Marr, William. “Micro and Macro Land Availability as a Determinant of Human Fertility in Rural Canada West, 1851.” Social Science History 16 (1992): 583-90.

McCalla, Doug. “Upper Canadians and Their Guns: An Exploration via Country Store Accounts (1808-61).” Ontario History 97 (2005): 121-37.

McCalla, Doug. “A World without Chocolate: Grocery Purchases at Some Upper Canadian Country Stores, 1808-61.” Agricultural History 79 (2005): 147-72.

McCalla, Doug. “Textile Purchases by Some Ordinary Upper Canadians, 1808-1862.” Material History Review 53 (2001): 4-27.

McInnis, Marvin. “Childbearing and Land Availability: Some Evidence from Individual Household Data.” In Population Patterns in the Past, edited by Ronald Demos Lee, 201-27. New York: Academic Press, 1977.

Moore, Eric G., and Brian S. Osborne. “Marital Fertility in Kingston, 1861-1881: A Study of Socio-economic Differentials.” Histoire sociale-Social History XX (1987): 9-27.

Muise, Del. “The Industrial Context of Inequality: Female Participation in Nova Scotia’s Paid Workforce, 1871-1921.” Acadiensis XX, no. 2 (1991).

Myers, Sharon. “‘Not to Be Ranked as Women’: Female Industrial Workers in Halifax at the Turn of the Twentieth Century.” In Separate Spheres: Women’s Worlds in the Nineteenth-Century Maritimes, edited by Janet Guildford and Suzanne Morton, 161-83. Fredericton: Acadiensis Press, 1994.

Osberg, Lars, and Fazley Siddiq. “The Acquisition of Wealth in Nova Scotia in the Late Nineteenth Century.” Research in Economic Inequality 4 (1993): 181-202.

Osberg, Lars, and Fazley Siddiq. “The Inequality of Wealth in Britain’s North American Colonies: The Importance of the Relatively Poor.” Review of Income and Wealth 34 (1988): 143-63.

Paquet, Gilles, and Jean-Pierre Wallot. “Les Inventaires après décès à Montréal au tournant du XIXe siècle: préliminaires à une analyse.” Revue d’histoire de l’Amérique française 30 (1976): 163-221.

Paquet, Gilles, and Jean-Pierre Wallot. “Stratégie Foncière de l’Habitant: Québec (1790-1835).” Revue d’histoire de l’Amérique française 39 (1986): 551-81.

Seager, Allen, and Adele Perry. “Mining the Connections: Class, Ethnicity and Gender in Nanaimo, British Columbia, 1891.” Histoire sociale-Social History 30, no. 59 (1997): 55-76.

Siddiq, Fazley K. “The Size Distribution of Probate Wealth Holdings in Nova Scotia in the Late Nineteenth Century.” Acadiensis 18 (1988): 136-47.

Soltow, Lee. Patterns of Wealthholding in Wisconsin since 1850. Madison: University of Wisconsin Press, 1971.

Sylvester, Kenneth Michael. “All Things Being Equal: Land Ownership and Ethnicity in Rural Canada, 1901.” Histoire sociale-Social History XXXIV, no. 67 (2001): 35-59.

Thernstrom, Stephan. The Other Bostonians: Poverty and Progress in the American Metropolis, 1880-1970. Cambridge: Harvard University Press, 1973.

Urquhart, Malcolm C. Gross National Product, Canada, 1870-1926: The Derivation of the Estimates. Montreal: McGill-Queen’s University Press, 1993.

Urquhart, Malcolm C. “New Estimates of Gross National Product Canada, 1870-1926: Some Implications for Canadian Development.” In Long Term Factors in American Economic Growth, edited by Stanley L. Engerman and Robert E. Gallman, 9-94. Chicago: University of Chicago Press, 1986.

Wayne, Michael. “The Black Population of Canada West on the Eve of the American Civil War: A Reassessment Based on the Manuscript Census of 1861.” In A Nation of Immigrants: Women, Workers and Communities in Canadian History, edited by Franca Iacovetta, Paula Draper and Robert Ventresca. Toronto: University of Toronto Press, 1998.

Footnotes

1 The helpful comments of Herb Emery, Mary MacKinnon and Kris Inwood on earlier drafts are acknowledged.

2 See especially Mac Urquhart’s spearheading of the major efforts in national income and output estimates (Urquhart, 1986, 1993).

3 “Individual response” means by individuals, households and firms.

4 See Gaffield (1988) and Igartua (1988).

5 The Conference on the Use of Census Manuscripts for Historical Research held at Guelph in March 1993 was an example of the interdisciplinary nature of historical micro-data research. The conference was sponsored by the Canadian Committee on History and Computing, the Social Sciences and Humanities Research Council of Canada and the University of Guelph. The conference was organized by economist Kris Inwood and historian Richard Reid and featured presentations by historians, economists, demographers, sociologists and anthropologists.

6 The Denton/George project had its origins in a proposal to the Second Conference on Quantitative Research in Canadian Economic History in 1967 that a sampling of the Canadian census be undertaken. Denton and George drew a sample from the manuscript census returns for individuals for 1871 that had recently been made available, and reported their preliminary findings to the Fourth Conference in March, 1970 in a paper that was published shortly afterwards in Histoire sociale/Social History (1970). Mac Urquhart’s role here must be acknowledged. He and Ken Buckley were insistent that a sampling of Census manuscripts would be an important venture for the conference members to initiate.

7 Also, sources such as the aggregate census have been used to examine fertility by Henripin (1968) and mortality by Bourbeau and Légaré (1982).

8 Chad Gaffield, Peter Baskerville and Alan Artibise were also involved in the creation of a machine-readable listing of archival sources on Vancouver Island known as the Vancouver Islands Project (Gaffield, 1988, 313).

9 See Chad Gaffield, “Ethics, Technology and Confidential Research Data: The Case of the Canadian Century Research Infrastructure Project,” paper presented to the World History Conference, Sydney, July 3-9, 2005.

10 Baskerville and Sager have been involved in the Canadian Families Project. See “The Canadian Families Project”, a special issue of the journal Historical Methods, 33 no. 4 (2000).

11 See Don Paterson’s Economic and Social History Data Base at the University of British Columbia at http://www2.arts.ubc.ca/econsochistory/data/data_list.html

12 Examples of other aspects of gender and economic status in a regional context are covered by Muise (1991), Myers (1994) and Seager and Perry (1997).

13 See http://www.collectionscanada.ca/genealogy/022-500-e.html

14 See for example the work by Gerhard Ens (1996) on the Red River Metis.

15 Hamilton and Inwood (2006) have begun research into identifying the aboriginal population in the 1891 Census of Canada.

Citation: Di Matteo, Livio. “The Use of Quantitative Micro-data in Canadian Economic History: A Brief Survey”. EH.Net Encyclopedia, edited by Robert Whaples. January 27, 2007. URL http://eh.net/encyclopedia/the-use-of-quantitative-micro-data-in-canadian-economic-history-a-brief-survey/

The Economic Impact of the Black Death

David Routt, University of Richmond

The Black Death was the largest demographic disaster in European history. From its arrival in Italy in late 1347 through its clockwise movement across the continent to its petering out in the Russian hinterlands in 1353, the magna pestilencia (great pestilence) killed between seventeen and twenty-eight million people. Its gruesome symptoms and deadliness have fixed the Black Death in popular imagination; moreover, uncovering the disease’s cultural, social, and economic impact has engaged generations of scholars. Despite growing understanding of the Black Death’s effects, definitive assessment of its role as historical watershed remains a work in progress.

A Controversy: What Was the Black Death?

In spite of enduring fascination with the Black Death, even the identity of the disease behind the epidemic remains a point of controversy. Aware that fourteenth-century eyewitnesses described a disease more contagious and deadlier than bubonic plague (Yersinia pestis), the bacillus traditionally associated with the Black Death, dissident scholars in the 1970s and 1980s proposed typhus or anthrax or mixes of typhus, anthrax, or bubonic plague as the culprit. The new millennium brought other challenges to the link between the Black Death and bubonic plague, such as an unknown and probably unidentifiable bacillus, an Ebola-like haemorrhagic fever or, at the pseudoscientific fringes of academia, a disease of interstellar origin.

Proponents of the Black Death as bubonic plague have minimized differences between modern bubonic plague and the fourteenth-century plague through painstaking analysis of the Black Death’s movement and behavior and by hypothesizing that the fourteenth-century plague was a hypervirulent strain of bubonic plague, yet bubonic plague nonetheless. DNA analysis of human remains from known Black Death cemeteries was intended to eliminate doubt, but the inability to replicate initially positive results has left uncertainty. New analytical tools used and new evidence marshaled in this lively controversy have enriched understanding of the Black Death while underscoring the elusiveness of certitude regarding phenomena many centuries past.

The Rate and Structure of Mortality

The Black Death’s socioeconomic impact stemmed, however, from sudden mortality on a staggering scale, regardless of what bacillus caused it. Assessment of the plague’s economic significance begins with determining the rate of mortality for the initial onslaught in 1347-53 and its frequent recurrences for the balance of the Middle Ages, then unraveling how the plague chose victims according to age, sex, affluence, and place.

Imperfect evidence unfortunately hampers knowing precisely who and how many perished. Many of the Black Death’s contemporary observers, living in an epoch of famine and political, military, and spiritual turmoil, described the plague apocalyptically. One chronicler famously closed his narrative with empty membranes (blank parchment) should anyone survive to continue it. Others believed as few as one in ten survived. One writer claimed that only fourteen people were spared in London. Although sober eyewitnesses offered more plausible figures, in light of the medieval preference for narrative dramatic force over numerical veracity, chroniclers’ estimates are considered evidence of the Black Death’s battering of the medieval psyche, not an accurate barometer of its demographic toll.

Even non-narrative and presumably dispassionate, systematic evidence — legal and governmental documents, ecclesiastical records, commercial archives — presents challenges. No medieval scribe dragged his quill across parchment for the demographer’s pleasure and convenience. With a paucity of censuses, estimates of population and tracing of demographic trends have often relied on indirect indicators of demographic change (e.g., activity in the land market, levels of rents and wages, size of peasant holdings) or evidence treating only a segment of the population (e.g., assignment of new priests to vacant churches, payments by peasants to take over holdings of the deceased). Even the rare census-like record, like England’s Domesday Book (1086) or the Poll Tax Return (1377), either enumerates only heads of households, excludes slices of the populace, ignores regions, or does some combination of all these. To compensate for these imperfections, the demographer relies on potentially debatable assumptions about the size of the medieval household, the representativeness of a discrete group of people, the density of settlement in an undocumented region, the level of tax evasion, and so forth.

A bewildering array of estimates for mortality from the plague of 1347-53 is the result. The first outbreak of the Black Death indisputably was the deadliest, but the death rate varied widely according to place and social stratum. National estimates of mortality for England, where the evidence is fullest, range from five percent, to 23.6 percent among aristocrats holding land from the king, to forty to forty-five percent of the kingdom’s clergy, to over sixty percent in a recent estimate. The picture for the continent likewise is varied. Regional mortality in Languedoc (France) was forty to fifty percent, while sixty to eighty percent of Tuscans (Italy) perished. Urban death rates were mostly higher but no less disparate, e.g., half in Orvieto (Italy), Siena (Italy), and Volterra (Italy), fifty to sixty-six percent in Hamburg (Germany), fifty-eight to sixty-eight percent in Perpignan (France), sixty percent for Barcelona’s (Spain) clerical population, and seventy percent in Bremen (Germany). The Black Death was often highly arbitrary in how it killed in a narrow locale, which no doubt broadened the spectrum of mortality rates. Two of Durham Cathedral Priory’s manors, for instance, had respective death rates of twenty-one and seventy-eight percent (Shrewsbury, 1970; Russell, 1948; Waugh, 1991; Ziegler, 1969; Benedictow, 2004; Le Roy Ladurie, 1976; Bowsky, 1964; Pounds, 1974; Emery, 1967; Gyug, 1983; Aberth, 1995; Lomas, 1989).

Credible death rates between one quarter and three quarters complicate reaching a Europe-wide figure. Neither a casual and unscientific averaging of available estimates to arrive at a probably misleading composite death rate nor a timid placing of mortality somewhere between one and two thirds is especially illuminating. Scholars confronting the problem’s complexity before venturing estimates once favored one third as a reasonable aggregate death rate. Since the early 1970s demographers have found higher levels of mortality plausible, and a Europe-wide mortality of one half is considered defensible, a figure not too distant from less fanciful contemporary observations.

While the Black Death of 1347-53 inflicted demographic carnage, had it been an isolated event, European population might have recovered to its former level in a generation or two and its economic impact would have been moderate. The disease’s long-term demographic and socioeconomic legacy arose from its recurrence. When both national and local epidemics are taken into account, England endured thirty plague years between 1351 and 1485, a pattern mirrored on the continent, where Perugia was struck nineteen times and Hamburg, Cologne, and Nuremberg at least ten times each in the fifteenth century. The deadliness of outbreaks declined, perhaps ten to twenty percent in the second plague (pestis secunda) of 1361-62, ten to fifteen percent in the third plague (pestis tertia) of 1369, and as low as five and rarely above ten percent thereafter, and outbreaks became more localized; however, the Black Death’s persistence ensured that demographic recovery would be slow and the socioeconomic consequences deeper. Europe’s population in 1430 may have been fifty to seventy-five percent lower than in 1290 (Cipolla, 1994; Gottfried, 1983).
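
How recurrent but milder outbreaks could hold population down may be illustrated with a simple compounding exercise; the figures are hypothetical midpoints of the estimates just cited, and natural increase between outbreaks is ignored. An initial loss of one half, followed by a fifteen percent loss in the pestis secunda and a 12.5 percent loss in the pestis tertia, leaves only about 37 percent of the original population:

\[ (1 - 0.50) \times (1 - 0.15) \times (1 - 0.125) \approx 0.37 \]

Even before the smaller five-to-ten-percent outbreaks of later decades, such arithmetic is consistent with a decline of the order suggested above for 1290 to 1430.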

Enumeration of corpses does not adequately reflect the Black Death’s demographic impact. Who perished was as significant as how many; in other words, the structure of mortality influenced the timing and rate of demographic recovery. The plague’s preference for urbanite over peasant, man over woman, poor over affluent, and, perhaps most significantly, young over mature shaped its demographic toll. Eyewitnesses so universally reported disproportionate death among the young in the plague’s initial recurrence (1361-62) that it became known as the Children’s Plague (pestis puerorum, mortalité des enfants). If this preference for youth reflected natural resistance to the disease among earlier plague survivors, the Black Death may have ultimately come to resemble a lower-mortality childhood disease, a reality that magnified both its demographic and psychological impact.

The Black Death pushed Europe into a long-term demographic trough. Notwithstanding anecdotal reports of nearly universal pregnancy of women in the wake of the magna pestilencia, demographic stagnancy characterized the rest of the Middle Ages. Population growth recommenced at different times in different places, but rarely earlier than the second half of the fifteenth century and in many places not until c. 1550.

The European Economy on the Cusp of the Black Death

Like the plague’s death toll, its socioeconomic impact resists categorical measurement. The Black Death’s timing made a facile labeling of it as a watershed in European economic history nearly inevitable. It arrived near the close of an ebullient high Middle Ages (c. 1000 to c. 1300) in which urban life reemerged, long-distance commerce revived, business and manufacturing innovated, manorial agriculture matured, and population burgeoned, doubling or tripling. The Black Death simultaneously portended an economically stagnant, depressed late Middle Ages (c. 1300 to c. 1500). However, even if this simplistic and somewhat misleading portrait of the medieval economy is accepted, isolating the Black Death’s economic impact from the manifold factors at play is a daunting challenge.

Cognizant of a qualitative difference between the high and late Middle Ages, students of the medieval economy have offered varied explanations, some mutually exclusive, others not, some favoring the less dramatic, less visible, yet inexorable factor as an agent of change rather than a catastrophic demographic shift. For some, a cooling climate undercut agricultural productivity, a downturn that rippled throughout the predominantly agrarian economy. For others, exploitative political, social, and economic institutions enriched an idle elite and deprived working society of the wherewithal and incentive to be innovative and productive. Yet others associate monetary factors with the fourteenth- and fifteenth-century economic doldrums.

The particular concerns of the twentieth century unsurprisingly induced some scholars to view the medieval economy through a Malthusian lens. In this reconstruction of the Middle Ages, population growth pressed against society’s ability to feed itself by the mid-thirteenth century. Rising impoverishment and contracting holdings compelled the peasant to cultivate inferior, low-fertility land and to convert pasture to arable production, inevitably reducing the number of livestock and making manure for fertilizer scarcer. These expedients boosted gross productivity in the immediate term yet drove grain yields downward in the longer term, exacerbating the imbalance between population and food supply; redressing the imbalance became inevitable. This idea’s adherents see signs of demographic correction from the mid-thirteenth century onward, possibly arising in part from marriage practices that reduced fertility. A more potent correction came with subsistence crises. Miserable weather in 1315 destroyed crops, and the ensuing Great Famine (1315-22) reduced northern Europe’s population by perhaps ten to fifteen percent. Poor harvests, moreover, bedeviled England and Italy to the eve of the Black Death.

These factors (climate, imperfect institutions, monetary imbalances, overpopulation) diminish the Black Death’s role as a transformative socioeconomic event. In other words, socioeconomic changes already driven by other causes would have occurred anyway, merely more slowly, had the plague never struck Europe. This conviction fosters receptiveness to lower estimates of the Black Death’s deadliness. Recent scrutiny of the Malthusian analysis, especially studies of agriculture in source-rich eastern England, has, however, rehabilitated the Black Death as an agent of socioeconomic change. Growing awareness of the use of “progressive” agricultural techniques and of alternative, non-grain economies less susceptible to a Malthusian population-versus-resources dynamic has undercut the notion of an absolutely overpopulated Europe and has encouraged acceptance of higher rates of mortality from the plague (Campbell, 1983; Bailey, 1989).

The Black Death and the Agrarian Economy

The lion’s share of the Black Death’s effect was felt in the economy’s agricultural sector, unsurprising in a society in which, except in the most urbanized regions, nine of ten people eked out a living from the soil.

A village struck by the plague underwent a profound though brief disordering of the rhythm of daily life. Strong administrative and social structures, the power of custom, and innate human resiliency restored the village’s routine by the following year in most cases: fields were plowed; crops were sown, tended, and harvested; labor services were performed by the peasantry; and the village’s lord collected dues from tenants. Behind this seeming normalcy, however, lord and peasant were adjusting to the Black Death’s principal economic consequence: a much smaller agricultural labor pool. Before the plague, rising population had kept wages low and rents and prices high, an economic reality advantageous to the lord in dealing with the peasant and inclining many a peasant to cleave to demeaning yet secure dependent tenure.

As the Black Death swung the balance in the peasant’s favor, the literate elite bemoaned a disintegrating social and economic order. William of Dene, William Langland, John Gower, and others polemically evoked nostalgia for the peasant who knew his place, worked hard, demanded little, and squelched pride, while they condemned their present, in which land lay unplowed and only an immediate pang of hunger goaded a lazy, disrespectful, grasping peasant to do a moment’s desultory work (Hatcher, 1994).

Moralizing exaggeration aside, the rural worker indeed demanded and received higher payments in cash (nominal wages) in the plague’s aftermath. Wages in England rose by twelve to twenty-eight percent from the 1340s to the 1350s and by twenty to forty percent from the 1340s to the 1360s. Immediate hikes were sometimes more drastic. During the plague year (1348-49) at Fornham All Saints (Suffolk), the lord paid the pre-plague rate of 3d. per acre for more than half of the hired reaping, but the rest cost 5d., an increase of 67 percent. The reaper, moreover, enjoyed more and larger tips in cash and perquisites in kind to supplement the wage. At Cuxham (Oxfordshire), a plowman making 2s. weekly before the plague demanded 3s. in 1349 and 10s. in 1350 (Farmer, 1988; Farmer, 1991; West Suffolk Record Office 3/15.7/2.4; Harvey, 1965).
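
The 67 percent figure follows from the standard percentage-increase calculation applied to the quoted rates:

\[ \frac{5\text{d.} - 3\text{d.}}{3\text{d.}} \times 100 \approx 67\% \]

By the same reckoning, the Cuxham plowman’s demands amount to increases of 50 percent in 1349 and 400 percent in 1350 over his pre-plague weekly wage of 2s.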

In some instances, the initial hikes in nominal or cash wages subsided in the years further out from the plague, and any benefit they conferred on the wage laborer was for a time undercut by another economic change fostered by the plague. Grave mortality ensured that the European supply of currency in gold and silver increased on a per-capita basis, which in turn unleashed substantial inflation in prices that did not subside in England until the mid-1370s and even later in many places on the continent. The inflation reduced the purchasing power (real wage) of the wage laborer so significantly that, even with higher cash wages, his earnings bought him no more, and often substantially less, than before the magna pestilencia (Munro, 2003; Aberth, 2001).
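
The mechanism can be made explicit with a stylized real-wage calculation; the figures below are illustrative assumptions, not numbers drawn from the records cited above. The real wage is the nominal wage deflated by the price level,

\[ w_{\text{real}} = \frac{W_{\text{nominal}}}{P}, \]

so if a laborer’s cash wage rose by half (say, from 2s. to 3s.) while prices rose by sixty percent (an index moving from 1.0 to 1.6), his purchasing power nonetheless fell:

\[ \frac{3}{1.6} = 1.875 < \frac{2}{1.0} = 2. \]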

The lord, however, was confronted not only by the roving wage laborer on whom he relied for occasional and labor-intensive seasonal tasks but also by the peasant bound to the soil who exchanged customary labor services, rent, and dues for holding land from the lord. A pool of labor services greatly reduced by the Black Death enabled the servile peasant to bargain for less onerous responsibilities and better conditions. At Tivetshall (Norfolk), vacant holdings deprived its lord of sixty percent of his week-work and all his winnowing services by 1350-51. A fifth of winter and summer week-work and a third of reaping services vanished at Redgrave (Suffolk) in 1349-50 due to the magna pestilencia. If a lord did not make concessions, a peasant often gravitated toward any better circumstance beckoning elsewhere. At Redgrave, for instance, the loss of services in 1349-50 directly due to the plague was followed in 1350-51 by an equally damaging wave of holdings abandoned by surviving tenants. For the medieval peasant, never so tightly bound to the manor as once imagined, the Black Death nonetheless fostered far greater rural mobility. Beyond the loss of labor services, the deceased or absentee peasant paid no rent or dues and rendered no fees for use of manorial monopolies such as mills and ovens, and the lord’s revenues shrank. The income of English lords contracted by twenty percent from 1347 to 1353 (Norfolk Record Office WAL 1247/288×1; University of Chicago Bacon 335-6; Gottfried, 1983).

Faced with these disorienting circumstances, the lord often ultimately had to decide how or even whether the pre-plague status quo could be reestablished on his estate. Not capitalistic in the sense of maximizing productivity for reinvestment of profits to enjoy yet more lucrative future returns, the medieval lord nonetheless valued stable income sufficient for aristocratic ostentation and consumption. A recalcitrant peasantry, diminished dues and services, and climbing wages undermined the material foundation of the noble lifestyle, jostled the aristocratic sense of proper social hierarchy, and invited a response.

In exceptional circumstances, a lord sometimes kept the peasant bound to the land. Because the nobility in Spanish Catalonia had already tightened control of the peasantry before the Black Death, because underdeveloped commercial agriculture provided the peasantry narrow options, and because the labor-intensive demesne agriculture common elsewhere was largely absent, the Catalan lord, through a mix of coercion (physical intimidation, exorbitant fees to purchase freedom) and concession (reduced rents, conversion of servile dues to less humiliating fixed cash payments), kept the Catalan peasant in place. In England and elsewhere on the continent, where labor services were needed to till the demesne, such a conservative approach was less feasible. This, however, did not deter some lords from trying. The lord of Halesowen (Worcestershire) not only commanded the servile tenant to perform the full range of services but also resuscitated labor obligations in abeyance long before the Black Death, tantamount to an unwillingness to acknowledge that anything had changed (Freedman, 1991; Razi, 1981).

Europe’s political elite also looked to legal coercion not only to contain rising wages and to limit the peasant’s mobility but also to allay a sense of disquietude and disorientation arising from the Black Death’s buffeting of pre-plague social realities. England’s Ordinance of Laborers (1349) and Statute of Laborers (1351) called for a return to the wages and terms of employment of 1346. Labor legislation was likewise promulgated by the Cortes of Aragon and Castile, the French crown, and cities such as Siena, Orvieto, Pisa, Florence, and Ragusa. The futility of capping wages by legislative fiat is evident in the French crown’s 1351 revision of its 1349 enactment to permit a wage increase of one third. Perhaps only in England, where effective government permitted robust enforcement, did the law slow wage increases for a time (Aberth, 2001; Gottfried, 1983; Hunt and Murray, 1999; Cohn, 2007).

Once knee-jerk conservatism and legislative palliatives failed to revivify pre-plague socioeconomic arrangements, the lord cast about for a modus vivendi in a new world of abundant land and scarce labor. A sober triage of the available sources of labor, whether casual wage labor, a manor’s permanent stipendiary staff (famuli), or the dependent peasant, led to revision of managerial policy. The abbot of Saint Edmund’s, for example, focused on reconstitution of the permanent staff on his manors. Despite mortality and flight, the abbot by and large achieved his goal by the mid-1350s. While labor legislation may have facilitated this, the abbot’s provision of more frequent and lucrative seasonal rewards, coupled with the payment of grain stipends in more valuable and marketable cereals such as wheat, no doubt helped secure the loyalty of the famuli while circumventing statutory limits on higher wages. With this core of labor solidified, the focus turned to preserving the most essential labor services, especially those associated with the labor-intensive harvesting season. Less vital labor services were commuted for cash payments, and ad hoc wage labor was then hired to fill the gaps. The cultivation of the demesne continued, though not on the pre-plague scale.

For a time, in fact, circumstances helped the lord continue direct management of the demesne. The general inflation of the quarter-century following the plague as well as poor harvests in the 1350s and 1360s boosted grain prices and partially compensated for more expensive labor. This so-called “Indian summer” of demesne agriculture ended quickly in the mid-1370s in England, and subsequently on the continent, when the post-plague inflation gave way to deflation and abundant harvests drove prices for commodities downward, where they remained, aside from brief intervals of inflation, for the rest of the Middle Ages. Recurrences of the plague, moreover, placed further stress on new managerial policies. For the lord who successfully persuaded new tenants to take over vacant holdings, as happened at Chevington (Suffolk) by the late 1350s, the pestis secunda of 1361-62 often inflicted a decisive blow: a second recovery at Chevington never materialized (West Suffolk Record Office 3/15.3/2.9-2.23).

Under unremitting pressure, the traditional cultivation of the demesne ceased to be viable for lord after lord: a centuries-old manorial system gradually unraveled and the nature of agriculture was transformed. The lord’s earliest concession to this new reality was curtailment of cultivated acreage, a trend that accelerated with time. At Great Saxham (Suffolk), for instance, the 590.5 acres sown on average in the late 1330s were more than halved, to 288.67 acres, in the 1360s (West Suffolk Record Office, 3/15.14/1.1, 1.7, 1.8).

Beyond reducing the demesne to a size commensurate with available labor, the lord could explore types of husbandry less labor-intensive than traditional grain agriculture. Greater domestic manufacture of woolen cloth and growing demand for meat enabled many English lords to reduce arable production in favor of sheep-raising, which required far less labor. Livestock husbandry likewise became more significant on the continent. Suitable climate, soil, and markets made grapes, olives, apples, pears, vegetables, hops, hemp, flax, silk, and dyestuffs attractive alternatives to grain. In the hope of selling these cash crops, rural agriculture became more attuned to urban demand, and urban businessmen and investors became more intimately involved in what the countryside grew and in what quantity (Gottfried, 1983; Hunt and Murray, 1999).

The lord also looked to reduce losses from demesne acreage no longer under the plow and from the vacant holdings of onetime tenants. Measures adopted to achieve this end initiated a process that gained momentum with each passing year until the face of the countryside was transformed and manorialism was dead. The English landlord, hopeful for a return to the pre-plague regime, initially granted brief leases of four to six years at fixed rates for bits of demesne and for vacant dependent holdings. Leases over time lengthened to ten, twenty, thirty years, or even a lifetime. In France and Italy, the lord often resorted to métayage or mezzadria leasing, a type of sharecropping in which the lord contributed capital (land, seed, tools, plow teams) to the lessee, who did the work and surrendered a fraction of the harvest to the lord.

Disillusioned by growing obstacles to profitable cultivation of the demesne, the lord, especially in the late fourteenth century and the early fifteenth, adopted a more sweeping type of leasing: the placing of the demesne or even the entire manor “at farm” (ad firmam). A “farmer” (firmarius) paid the lord a fixed annual “farm” (firma) for the right to exploit the lord’s property and take whatever profit he could. The distant or unprofitable manor was usually “farmed” first, and other manors followed until a lord’s personal management of his property often ceased entirely. The rising popularity of this expedient made direct management of the demesne by the lord rare by c. 1425. The lord often became a rentier bound to a fixed income. The tenurial transformation was completed when the lord sold to the peasant his right of lordship, a surrender to the peasant of outright possession of his holding for a fixed cash rent and freedom from dues and services. Manorialism, in effect, collapsed and was gone from western and central Europe by 1500.

The landlord’s discomfort ultimately benefited the peasantry. Lower prices for foodstuffs and greater purchasing power from the last quarter of the fourteenth century onward, the progressive disintegration of demesnes, and waning customary land tenure enabled the enterprising, ambitious peasant to lease or purchase property and become a substantial landed proprietor. The average size of the peasant holding grew in the late Middle Ages. Owing to the peasant’s generally improved standard of living, the century and a half following the magna pestilencia has been labeled a “golden age” in which the most successful peasant became a “yeoman” or “kulak” within the village community. Freed from labor service, holding a fixed copyhold lease, and enjoying greater disposable income, the peasant exploited his land exclusively for his personal benefit and often pursued leisure and some of the finer things in life. Consumption of meat by England’s humbler social strata rose substantially after the Black Death, a shift in consumer tastes that reduced demand for grain and helped make viable the shift toward pastoralism in the countryside. Late medieval sumptuary legislation, intended to keep the humble from dressing above their station and to retain the distinction between low- and highborn, attests both to the peasant’s greater income and to the elite’s desire to limit disorienting social change (Dyer, 1989; Gottfried, 1983; Hunt and Murray, 1999).

The Black Death, moreover, profoundly altered the contours of settlement in the countryside. Catastrophic loss of population led to abandonment of less attractive fields, contraction of existing settlements, and even wholesale desertion of villages. More than 1,300 English villages vanished between 1350 and 1500. French and Dutch villagers abandoned isolated farmsteads and huddled in smaller villages while their Italian counterparts vacated remote settlements and shunned less desirable fields. The German countryside was mottled with abandoned settlements. Two thirds of named villages disappeared in Thuringia, Anhalt, and the eastern Harz mountains, one fifth in southwestern Germany, and one third in the Rhenish Palatinate, abandonment far exceeding loss of population and possibly arising from migration from smaller to larger villages (Gottfried, 1983; Pounds, 1974).

The Black Death and the Commercial Economy

As with agriculture, assessment of the Black Death’s impact on the economy’s commercial sector is a complex problem. The vibrancy of the high medieval economy is generally conceded. As the first millennium gave way to the second, urban life revived, trade and manufacturing flourished, merchant and craft gilds emerged, and commercial and financial innovations proliferated (e.g., partnerships, maritime insurance, double-entry bookkeeping, fair letters, letters of credit, bills of exchange, loan contracts, and merchant banking). The integration of the high medieval economy reached its zenith c. 1250 to c. 1325 with the rise of large companies with international interests, such as the Bonsignori of Siena and the Buonaccorsi of Florence, and the emergence of so-called “super companies” such as the Florentine Bardi, Peruzzi, and Acciaiuoli (Hunt and Murray, 1999).

How to characterize the late medieval economy has been more fraught with controversy, however. Historians a century past, uncomprehending of how their modern world could be rooted in a retrograde economy, imagined an entrepreneurially creative and expansive late medieval economy. Succeeding generations of historians darkened this optimistic portrait and fashioned a late Middle Ages of unmitigated decline, an “age of adversity” in which the economy was placed under the rubric “depression of the late Middle Ages.” The historiographical pendulum now swings away from this interpretation and a more nuanced picture has emerged that gives the Black Death’s impact on commerce its full due but emphasizes the variety of the plague’s impact from merchant to merchant, industry to industry, and city to city. Success or failure was equally possible after the Black Death and the game favored adaptability, creativity, nimbleness, opportunism, and foresight.

Once the magna pestilencia had passed, the city had to cope with a labor supply even more severely depleted than the countryside’s, owing to a generally higher urban death rate. The city, however, could reverse some of this damage by attracting, as it had for centuries, new workers from the countryside, a phenomenon that deepened the crisis for the manorial lord and contributed to changes in rural settlement. A resurgence of the slave trade occurred in the Mediterranean, especially in Italy, where the female slave from Asia or Africa entered domestic service in the city and the male slave toiled in the countryside. Finding more labor was not, however, a panacea. A peasant or slave performed an unskilled task adequately but could not necessarily replace a skilled laborer. The gross loss of talent due to the plague caused a decline in the per capita productivity of skilled labor remediable only by time and training (Hunt and Murray, 1999; Miskimin, 1975).

Another immediate consequence of the Black Death was dislocation of the demand for goods. A suddenly and sharply smaller population ensured a glut of manufactured and trade goods, whose prices plummeted for a time. The businessman who successfully weathered this short-term imbalance in supply and demand then had to reshape his business’s output to fit a declining or at best stagnant pool of potential customers.

The Black Death transformed the structure of demand as well. While the standard of living of the peasant improved, chronically low prices for grain and other agricultural products from the late fourteenth century may have deprived the peasant of the additional income needed to purchase enough manufactured or trade items to fill the hole in commercial demand. In the city, however, the plague concentrated wealth, often substantial family fortunes, in fewer and often younger hands, a circumstance that, when coupled with lower prices for grain, left greater per capita disposable income. The plague’s psychological impact, moreover, is believed to have influenced how this windfall was used. Pessimism and the specter of death spurred an individualistic pursuit of pleasure, a hedonism that manifested itself in the purchase of luxuries, especially in Italy. Even with a reduced population, the gross volume of luxury goods manufactured and sold rose, a pattern of consumption that endured even after the extra income had been spent, within a generation or so of the magna pestilencia.

Like the manorial lord, the affluent urban bourgeois sometimes employed structural impediments to block the ambitious parvenu from joining his ranks and becoming a competitor. A tendency toward limiting the status of gild master to the son or son-in-law of a sitting master, evident in the first half of the fourteenth century, gained further impetus after the Black Death. The need for more journeymen after the plague was conceded in the shortening of terms of apprenticeship, but the newly minted journeyman often discovered that his chance of breaking through the glass ceiling and becoming a master was virtually nil without an entrée through kinship. Women also were banished from gilds as unwanted competition. The urban wage laborer, by and large controlled by the gilds, was denied membership and had no access to urban structures of power, a potent source of frustration. While these measures may have permitted the bourgeois to hold his ground for a time, the winds of change were blowing in the city as well as the countryside, and gild monopolies and gild restrictions were fraying by the close of the Middle Ages.

In the new climate created by the Black Death, the individual businessman did retain an advantage: the business judgment and techniques honed during the high Middle Ages. This was crucial in a contracting economy in which gross productivity never attained its high medieval peak and in which the prevailing pattern was boom and bust on a roughly generational basis. A fluctuating economy demanded adaptability, and the most successful post-plague businessman not merely weathered bad times but located opportunities within adversity and exploited them. The post-plague entrepreneur’s preference for short-term rather than long-term ventures, once believed a product of a gloomy despondency caused by the plague and exacerbated by endemic violence, decay of traditional institutions, and nearly continuous warfare, is now viewed as a judicious desire to leave entrepreneurial options open, to manage risk effectively, and to take advantage of whatever better opportunity arose. The successful post-plague businessman observed markets closely and responded to them while exercising strict control over his concern, looking for greater efficiency, and trimming costs (Hunt and Murray, 1999).

The fortunes of the textile industry, a trade singularly susceptible to contracting markets and rising wages, best underscore the importance of flexibility. Competition among textile manufacturers, already great even before the Black Death due to excess productive capacity, was magnified when England entered the market for low- and medium-quality woolen cloth after the magna pestilencia and was exporting forty thousand pieces annually by 1400. The English took advantage of proximity to raw material, the wool England itself produced, a pattern increasingly common in late medieval business. When English producers were undeterred by a Flemish embargo on English cloth, the Flemish and Italians, the textile trade’s other principal players, were compelled to adapt in order to compete. Flemish producers that emphasized higher-grade, luxury textiles or that purchased, improved, and resold cheaper English cloth prospered, while those that stubbornly competed head-to-head with the English in lower-quality woolens suffered. The Italians not only produced luxury woolens, improved their domestically produced wool, found sources for wool outside England (Spain), and increased production of linen but also produced silks and cottons, once only imported into Europe from the East (Hunt and Murray, 1999).

The new mentality of the successful post-plague businessman is exemplified by the Florentines Gregorio Dati and Buonaccorso Pitti and especially by the celebrated merchant of Prato, Francesco di Marco Datini. The large companies and super companies, some of which failed even before the Black Death, were not well suited to the post-plague commercial economy. Datini’s family business, with its limited geographical ambitions, exercised better control, was more nimble and flexible as opportunities vanished or materialized, and managed risk more effectively, all keys to success. Datini, through voluminous correspondence with his business associates, subordinates, and agents and his conspicuously careful and regular accounting, grasped the reins of his concern tightly. He insulated himself from undue risk by never committing too heavily to any individual venture, by dividing cargoes among ships or by insuring them, by never lending money to notoriously uncreditworthy princes, and by remaining as apolitical as he could. His energy and drive to complete every business venture likewise served him well and made him an exemplar of commercial success in a challenging era (Origo, 1957; Hunt and Murray, 1999).

The Black Death and Popular Rebellion

The late medieval popular uprising, a phenomenon with undeniable economic ramifications, is often linked with the demographic, cultural, social, and economic reshuffling caused by the Black Death; however, the connection between pestilence and revolt is neither exclusive nor linear. Any single uprising is rarely susceptible to a single-cause analysis, and just as rarely was a single socioeconomic interest group the fomenter of disorder. The outbreak of rebellion in the first half of the fourteenth century (e.g., in urban [1302] and maritime [1325-28] Flanders and in English monastic towns [1326-27]) indicates the existence of socioeconomic and political disgruntlement well before the Black Death.

Some explanations for popular uprising, such as the placing of immediate stresses on the populace and the cumulative effect of centuries of oppression by manorial lords, are now largely dismissed. At times of greatest stress (the Great Famine and the Black Death), disorder occurred but no large-scale, organized uprising materialized. Manorial oppression likewise is difficult to defend when the peasant in the plague’s aftermath was often enjoying better pay, reduced dues and services, broader opportunities, and a higher standard of living. Detailed study of the participants in the revolts most often labeled “peasant” uprisings has revealed the central involvement and apparent common cause of urban and rural tradesmen and craftsmen, not only manorial serfs.

The Black Death may indeed have made its greatest contribution to popular rebellion by expanding the peasant’s horizons and fueling a sense of grievance at the pace of change, not at its absence. The plague may also have undercut adherence to the notion of a divinely sanctioned, static social order and buffeted the belief that preservation of manorial socioeconomic arrangements was essential to the survival of all, which in turn may have raised receptiveness to the apocalyptic, socially revolutionary message of preachers like England’s John Ball. After the Black Death, change was inevitable and apparent to all.

The reasons for any individual rebellion were complex. Measures in the environs of Paris to check wage hikes caused by the plague doubtless fanned discontent and contributed to the outbreak of the Jacquerie of 1358, but high taxation to finance the Hundred Years’ War, depredation by marauding mercenary bands in the French countryside, and the peasantry’s conviction that the nobility had failed them in war also roiled popular discontent. In the related urban revolt led by Étienne Marcel (1355-58), tensions arose from the Parisian bourgeoisie’s discontent with the war’s progress, the crown’s imposition of regressive sales and head taxes, and the devaluation of currency rather than from change attributable to the Black Death.

In the English Peasants’ Rebellion of 1381, continued enforcement of the Statute of Laborers no doubt rankled and perhaps made the peasantry more receptive to provocative sermonizing, but labor legislation had not halted higher wages or the improvement of the peasant’s standard of living. Discontent more likely arose from the unsatisfying pace of improvement in the peasant’s lot. The regressive Poll Taxes of 1380 and 1381 also contributed to the unrest. It is furthermore noteworthy that the rebellion began in relatively affluent eastern England, not in the poorer west or north.

In the Ciompi revolt in Florence (1378-83), restrictive gild regulations and the denial of political voice to workers, grievances sharpened by the Black Death, raised tensions; however, Florence’s war with the papacy and an economic slump in the 1370s, which brought devaluation of the penny in which the worker was paid, were equally if not more important in fomenting unrest. Once the value of the penny was restored to its former level in 1383, the rebellion in fact subsided.

In sum, the Black Death played some role in each uprising but, as with many medieval phenomena, it is difficult to gauge its importance relative to other causes. Perhaps the plague’s greatest contribution to unrest lay in its fostering of a shrinking economy that for a time was less able to absorb socioeconomic tensions than the growing high medieval economy had been. The rebellions in any event achieved little. Promises made to the rebels were invariably broken and brutal reprisals often followed. The lot of the lower socioeconomic strata was improved incrementally by the larger economic changes already at work. Viewed from this perspective, the Black Death may have had more influence in resolving the worker’s grievances than in spurring revolt.

Conclusion

The European economy at the close of the Middle Ages (c. 1500) differed fundamentally from the pre-plague economy. In the countryside, a freer peasant derived greater material benefit from his toil. Fixed rents, if not outright ownership of land, had largely displaced customary dues and services and, despite low grain prices, the peasant more readily fed himself and his family from his own land and produced a surplus for the market. Yields improved as reduced population permitted a greater focus on fertile lands and more frequent fallowing, a beneficial phenomenon for the peasant. More pronounced socioeconomic gradations developed among peasants as some, especially the more prosperous, exploited the changed circumstances, especially the availability of land. The peasant’s gain was the lord’s loss. As the Middle Ages waned, the lord was commonly a pure rentier whose income was subject to the depredations of inflation.

In trade and manufacturing, the relative ease of success during the high Middle Ages gave way to greater competition, which rewarded better business practices and leaner, meaner, and more efficient concerns. Greater sensitivity to the market and the cutting of costs ultimately rewarded the European consumer with a wider range of goods at better prices.

In the long term, the demographic restructuring caused by the Black Death perhaps fostered the possibility of new economic growth. The pestilence returned Europe’s population to roughly its level c. 1100. As one scholar notes, the Black Death, unlike other catastrophes, destroyed people but not property, and the attenuated population was left with the whole of Europe’s resources to exploit, resources far more substantial by 1347 than they had been two and a half centuries earlier, when they had been created from the ground up. In this environment, survivors also benefited from the technological and commercial skills developed during the course of the high Middle Ages. Viewed from another perspective, the Black Death was a cataclysmic event and retrenchment was inevitable, but it ultimately diminished economic impediments and opened new opportunity.

References and Further Reading:

Aberth, John. “The Black Death in the Diocese of Ely: The Evidence of the Bishop’s Register.” Journal of Medieval History 21 (1995): 275-87.

Aberth, John. From the Brink of the Apocalypse: Confronting Famine, War, Plague, and Death in the Later Middle Ages. New York: Routledge, 2001.

Aberth, John. The Black Death: The Great Mortality of 1348-1350, a Brief History with Documents. Boston and New York: Bedford/St. Martin’s, 2005.

Aston, T. H. and C. H. E. Philpin, eds. The Brenner Debate: Agrarian Class Structure and Economic Development in Pre-Industrial Europe. Cambridge: Cambridge University Press, 1985.

Bailey, Mark D. “Demographic Decline in Late Medieval England: Some Thoughts on Recent Research.” Economic History Review 49 (1996): 1-19.

Bailey, Mark D. A Marginal Economy? East Anglian Breckland in the Later Middle Ages. Cambridge: Cambridge University Press, 1989.

Benedictow, Ole J. The Black Death, 1346-1353: The Complete History. Woodbridge, Suffolk: Boydell Press, 2004.

Bleukx, Koenraad. “Was the Black Death (1348-49) a Real Plague Epidemic? England as a Case Study.” In Serta Devota in Memoriam Guillelmi Lourdaux. Pars Posterior: Cultura Medievalis, edited by W. Verbeke, M. Haverals, R. de Keyser, and J. Goossens, 64-113. Leuven: Leuven University Press, 1995.

Blockmans, Willem P. “The Social and Economic Effects of Plague in the Low Countries, 1349-1500.” Revue Belge de Philologie et d’Histoire 58 (1980): 833-63.

Bolton, Jim L. “‘The World Upside Down’: Plague as an Agent of Economic and Social Change.” In The Black Death in England, edited by M. Ormrod and P. Lindley. Stamford: Paul Watkins, 1996.

Bowsky, William M. “The Impact of the Black Death upon Sienese Government and Society.” Speculum 38 (1964): 1-34.

Campbell, Bruce M. S. “Agricultural Progress in Medieval England: Some Evidence from Eastern Norfolk.” Economic History Review 36 (1983): 26-46.

Campbell, Bruce M. S., ed. Before the Black Death: Studies in the ‘Crisis’ of the Early Fourteenth Century. Manchester: Manchester University Press, 1991.

Cipolla, Carlo M. Before the Industrial Revolution: European Society and Economy, 1000-1700, Third edition. New York: Norton, 1994.

Cohn, Samuel K. The Black Death Transformed: Disease and Culture in Early Renaissance Europe. London: Edward Arnold, 2002.

Cohn, Samuel K. “After the Black Death: Labour Legislation and Attitudes toward Labour in Late-Medieval Western Europe.” Economic History Review 60 (2007): 457-85.

Davis, David E. “The Scarcity of Rats and the Black Death.” Journal of Interdisciplinary History 16 (1986): 455-70.

Davis, R. A. “The Effect of the Black Death on the Parish Priests of the Medieval Diocese of Coventry and Lichfield.” Bulletin of the Institute of Historical Research 62 (1989): 85-90.

Drancourt, Michel, Gerard Aboudharam, Michel Signoli, Olivier Detour, and Didier Raoult. “Detection of 400-Year-Old Yersinia Pestis DNA in Human Dental Pulp: An Approach to the Diagnosis of Ancient Septicemia.” Proceedings of the National Academy of Sciences of the United States of America 95 (1998): 12637-40.

Dyer, Christopher. Standards of Living in the Middle Ages: Social Change in England, c. 1200-1520. Cambridge: Cambridge University Press, 1989.

Emery, Richard W. “The Black Death of 1348 in Perpignan.” Speculum 42 (1967): 611-23.

Farmer, David L. “Prices and Wages.” In The Agrarian History of England and Wales, Vol. II, edited by H. E. Hallam, 715-817. Cambridge: Cambridge University Press, 1988.

Farmer, D. L. “Prices and Wages, 1350-1500.” In The Agrarian History of England and Wales, Vol. III, edited by E. Miller, 431-94. Cambridge: Cambridge University Press, 1991.

Flinn, Michael W. “Plague in Europe and the Mediterranean Countries.” Journal of European Economic History 8 (1979): 131-48.

Freedman, Paul. The Origins of Peasant Servitude in Medieval Catalonia. New York: Cambridge University Press, 1991.

Gottfried, Robert. The Black Death: Natural and Human Disaster in Medieval Europe. New York: Free Press, 1983.

Gyug, Richard. “The Effects and Extent of the Black Death of 1348: New Evidence for Clerical Mortality in Barcelona.” Mediæval Studies 45 (1983): 385-98.

Harvey, Barbara F. “The Population Trend in England between 1300 and 1348.” Transactions of the Royal Historical Society 4th ser. 16 (1966): 23-42.

Harvey, P. D. A. A Medieval Oxfordshire Village: Cuxham, 1240-1400. London: Oxford University Press, 1965.

Hatcher, John. “England in the Aftermath of the Black Death.” Past and Present 144 (1994): 3-35.

Hatcher, John and Mark Bailey. Modelling the Middle Ages: The History and Theory of England’s Economic Development. Oxford: Oxford University Press, 2001.

Hatcher, John. Plague, Population, and the English Economy 1348-1530. London and Basingstoke: Macmillan Press Ltd., 1977.

Herlihy, David. The Black Death and the Transformation of the West, edited by S. K. Cohn. Cambridge and London: Cambridge University Press, 1997.

Horrox, Rosemary, transl. and ed. The Black Death. Manchester: Manchester University Press, 1994.

Hunt, Edwin S. and James M. Murray. A History of Business in Medieval Europe, 1200-1550. Cambridge: Cambridge University Press, 1999.

Jordan, William C. The Great Famine: Northern Europe in the Early Fourteenth Century. Princeton: Princeton University Press, 1996.

Lehfeldt, Elizabeth, ed. The Black Death. Boston: Houghton Mifflin, 2005.

Lerner, Robert E. The Age of Adversity: The Fourteenth Century. Ithaca: Cornell University Press, 1968.

Le Roy Ladurie, Emmanuel. The Peasants of Languedoc, transl. J. Day. Urbana: University of Illinois Press, 1976.

Lomas, Richard A. “The Black Death in County Durham.” Journal of Medieval History 15 (1989): 127-40.

McNeill, William H. Plagues and Peoples. Garden City, New York: Anchor Books, 1976.

Miskimin, Harry A. The Economy of the Early Renaissance, 1300-1460. Cambridge: Cambridge University Press, 1975.

Morris, Christopher. “The Plague in Britain.” Historical Journal 14 (1971): 205-15.

Munro, John H. “The Symbiosis of Towns and Textiles: Urban Institutions and the Changing Fortunes of Cloth Manufacturing in the Low Countries and England, 1270-1570.” Journal of Early Modern History 3 (1999): 1-74.

Munro, John H. “Wage-Stickiness, Monetary Changes, and the Real Incomes in Late-Medieval England and the Low Countries, 1300-1500: Did Money Matter?” Research in Economic History 21 (2003): 185-297.

Origo, Iris. The Merchant of Prato: Francesco di Marco Datini, 1335-1410. Boston: David R. Godine, 1957, 1986.

Platt, Colin. King Death: The Black Death and its Aftermath in Late-Medieval England. Toronto: University of Toronto Press, 1996.

Poos, Lawrence R. A Rural Society after the Black Death: Essex 1350-1575. Cambridge: Cambridge University Press, 1991.

Postan, Michael M. The Medieval Economy and Society: An Economic History of Britain in the Middle Ages. Harmondsworth, Middlesex: Penguin, 1975.

Pounds, Norman J. D. An Economic History of Europe. London: Longman, 1974.

Raoult, Didier, Gerard Aboudharam, Eric Crubézy, Georges Larrouy, Bertrand Ludes, and Michel Drancourt. “Molecular Identification by ‘Suicide PCR’ of Yersinia Pestis as the Agent of Medieval Black Death.” Proceedings of the National Academy of Sciences of the United States of America 97 (7 Nov. 2000): 12800-3.

Razi, Zvi. “Family, Land, and the Village Community in Later Medieval England.” Past and Present 93 (1981): 3-36.

Russell, Josiah C. British Medieval Population. Albuquerque: University of New Mexico Press, 1948.

Scott, Susan and Christopher J. Duncan. Return of the Black Death: The World’s Deadliest Serial Killer. Chichester, West Sussex and Hoboken, NJ: Wiley, 2004.

Shrewsbury, John F. D. A History of Bubonic Plague in the British Isles. Cambridge: Cambridge University Press, 1970.

Twigg, Graham. The Black Death: A Biological Reappraisal. London: Batsford Academic and Educational, 1984.

Waugh, Scott L. England in the Reign of Edward III. Cambridge: Cambridge University Press, 1991.

Ziegler, Philip. The Black Death. London: Penguin, 1969, 1987.

Citation: Routt, David. “The Economic Impact of the Black Death”. EH.Net Encyclopedia, edited by Robert Whaples. July 20, 2008. URL http://eh.net/encyclopedia/the-economic-impact-of-the-black-death/

The Economic History of Australia from 1788: An Introduction

Bernard Attard, University of Leicester

Introduction

The economic benefits of establishing a British colony in Australia in 1788 were not immediately obvious. The Government’s motives have been debated but the settlement’s early character and prospects were dominated by its original function as a jail. Colonization nevertheless began a radical change in the pattern of human activity and resource use in that part of the world, and by the 1890s a highly successful settler economy had been established on the basis of a favorable climate in large parts of the southeast (including Tasmania) and the southwest corner; the suitability of land for European pastoralism and agriculture; an abundance of mineral wealth; and the ease with which these resources were appropriated from the indigenous population. This article will focus on the creation of a colonial economy from 1788 and its structural change during the twentieth century. To simplify, it will divide Australian economic history into four periods, two of which overlap. These are defined by the foundation of the ‘bridgehead economy’ before 1820; the growth of a colonial economy between 1820 and 1930; the rise of manufacturing and the protectionist state between 1891 and 1973; and the experience of liberalization and structural change since 1973. The article will conclude by suggesting briefly some of the similarities between Australia and other comparable settler economies, as well as the ways in which it has differed from them.

The Bridgehead Economy, 1788-1820

The description ‘bridgehead economy’ was used by one of Australia’s foremost economic historians, N. G. Butlin, to refer to the earliest decades of British occupation, when the colony was essentially a penal institution. The main settlements were at Port Jackson (modern Sydney, 1788) in New South Wales and Hobart (1804) in what was then Van Diemen’s Land (modern Tasmania). The colony barely survived its first years and was largely neglected for much of the following quarter-century while the British government was preoccupied by the war with France. An important beginning was nevertheless made in the creation of a private economy to support the penal regime. Above all, agriculture was established on the basis of land grants to senior officials and emancipated convicts, and limited freedoms were allowed to convicts to supply a range of goods and services. Although economic life depended heavily on the government Commissariat as a supplier of goods, money and foreign exchange, individual rights in property and labor were recognized, and private markets for both started to function. In 1808, the recall of the New South Wales Corps, whose officers had benefited most from access to land and imported goods (thus hopelessly entangling public and private interests), coupled with the appointment of a new governor, Lachlan Macquarie, in the following year, brought about a greater separation of the private economy from the activities and interests of the colonial government. With a significant increase in the numbers transported after 1810, New South Wales’ future became more secure. As laborers, craftsmen, clerks and tradesmen, many convicts possessed the skills required in the new settlements. As their terms expired, they also added permanently to the free population. Over time, this would inevitably change the colony’s character.

Natural Resources and the Colonial Economy, 1820-1930

Pastoral and Rural Expansion

For Butlin, the developments around 1810 were a turning point in the creation of a ‘colonial’ economy. Many historians have preferred to view those during the 1820s as more significant. From that decade, economic growth was based increasingly upon the production of fine wool and other rural commodities for markets in Britain and the industrializing economies of northwestern Europe. This growth was interrupted by two major depressions during the 1840s and 1890s and stimulated in complex ways by the rich gold discoveries in Victoria in 1851, but the underlying dynamics were essentially unchanged. At different times, the extraction of natural resources, whether maritime before the 1840s or later gold and other minerals, was also important. Agriculture, local manufacturing and construction industries expanded to meet the immediate needs of growing populations, which concentrated increasingly in the main urban centers. The colonial economy’s structure, growth of population and significance of urbanization are illustrated in tables 1 and 2. The opportunities for large profits in pastoralism and mining attracted considerable amounts of British capital, while expansion generally was supported by enormous government outlays for transport, communication and urban infrastructures, which also depended heavily on British finance. As the economy expanded, large-scale immigration became necessary to satisfy the growing demand for workers, especially after the end of convict transportation to the eastern mainland in 1840. The costs of immigration were subsidized by colonial governments, with settlers coming predominantly from the United Kingdom and bringing skills that contributed enormously to the economy’s growth. All this provided the foundation for the establishment of free colonial societies. In turn, the institutions associated with these — including the rule of law, secure property rights, and stable and democratic political systems — created conditions that, on balance, fostered growth. In addition to New South Wales, four other British colonies were established on the mainland: Western Australia (1829), South Australia (1836), Victoria (1851) and Queensland (1859). Van Diemen’s Land (Tasmania after 1856) became a separate colony in 1825. From the 1850s, these colonies acquired responsible government. In 1901, they federated, creating the Commonwealth of Australia.

Table 1
The Colonial Economy: Percentage Shares of GDP, 1891 Prices, 1861-1911

Year    Pastoral   Other rural   Mining   Manuf.   Building   Services   Rent
1861       9.3        13.0        17.5     14.2       8.4        28.8      8.6
1891      16.1        12.4         6.7     16.6       8.5        29.2     10.3
1911      14.8        16.7         9.0     17.1       5.3        28.7      8.3

Source: Haig (2001), Table A1. Totals do not sum to 100 because of rounding.

Table 2
Colonial Populations (thousands), 1851-1911

Year     Australia       Colonies               Cities
                      NSW     Victoria      Sydney    Melbourne
1851         257      100          46           54          29
1861         669      198         328           96         125
1891       1,704      608         598          400         473
1911       2,313      858         656          648         593

Source: McCarty (1974), p. 21; Vamplew (1987), POP 26-34.

The process of colonial growth began with two related developments. First, in 1820, Macquarie responded to land pressure in the districts immediately surrounding Sydney by relaxing restrictions on settlement. Soon the outward movement of herdsmen seeking new pastures became uncontrollable. From the 1820s, the British authorities also encouraged private enterprise by the wholesale assignment of convicts to private employers and easy access to land. In 1831, the principles of systematic colonization popularized by Edward Gibbon Wakefield (1796-1862) were put into practice in New South Wales with the substitution of land sales for grants in order to finance immigration. This, however, did not affect the continued outward movement of pastoralists, who simply occupied land wherever they could find it beyond the official limits of settlement. By 1840, they had claimed a vast swathe of territory two hundred miles in depth running from Moreton Bay in the north (the site of modern Brisbane) through the Port Phillip District (the future colony of Victoria, whose capital Melbourne was marked out in 1837) to Adelaide in South Australia. The absence of any legal title meant that these intruders became known as ‘squatters’ and the terms of their tenure were not finally settled until 1846 after a prolonged political struggle with the Governor of New South Wales, Sir George Gipps.

The impact of the original penal settlements on the indigenous population had been enormous. The consequences of squatting after 1820 were equally devastating as the land and natural resources upon which indigenous hunter-gathering activities and environmental management depended were appropriated on a massive scale. Aboriginal populations collapsed in the face of disease, violence and forced removal until they survived only on the margins of the new pastoral economy, on government reserves, or in the arid parts of the continent least touched by white settlement. The process would be repeated again in northern Australia during the second half of the century.

For the colonists this could happen because Australia was considered terra nullius, vacant land freely available for occupation and exploitation. The encouragement of private enterprise, the reception of Wakefieldian ideas, and the wholesale spread of white settlement were all part of a profound transformation in official and private perceptions of Australia’s prospects and economic value as a British colony. Millennia of fire-stick management to assist hunter-gathering had created inland grasslands in the southeast that were ideally suited to the production of fine wool. Both the physical environment and the official incentives just described raised expectations of considerable profits to be made in pastoral enterprise and attracted a growing stream of British capital in the form of organizations like the Australian Agricultural Company (1824); new corporate settlements in Western Australia (1829) and South Australia (1836); and, from the 1830s, British banks and mortgage companies formed to operate in the colonies. By the 1830s, wool had overtaken whale oil as the colony’s most important export, and by 1850 New South Wales had displaced Germany as the main overseas supplier to British industry (see table 3). Allowing for the colonial economy’s growing complexity, the cycle of growth based upon land settlement, exports and British capital would be repeated twice. The first pastoral boom ended in a depression which was at its worst during 1842-43. Although output continued to grow during the 1840s, the best land had been occupied in the absence of substantial investment in fencing and water supplies. Without further geographical expansion, opportunities for high profits were reduced and the flow of British capital dried up, contributing to a wider downturn caused by drought and mercantile failure.

Table 3
Imports of Wool into Britain (thousands of bales), 1830-50

Year    German    Australian
1830     74.5          8.0
1840     63.3         41.0
1850     30.5        137.2

Source: Sinclair (1976), p. 46.

When pastoral growth revived during the 1860s, borrowed funds were used to fence properties and secure access to water. This in turn allowed a further extension of pastoral production into the more environmentally fragile semi-arid interior districts of New South Wales, particularly during the 1880s. As the mobs of sheep moved further inland, colonial governments increased the scale of their railway construction programs, some competing to capture the freight to ports. Technical innovation and government sponsorship of land settlement brought greater diversity to the rural economy (see table 4). Exports of South Australian wheat started in the 1870s. The development of drought-resistant grain varieties from the turn of the century led to an enormous expansion of sown acreage in both the southeast and southwest. From the 1880s, sugar production increased in Queensland, although mainly for the domestic market. From the 1890s, refrigeration made it possible to export meat, dairy products and fruit.

Table 4
Australian Exports (percentages of total value of exports), 1881-1928/29

Period             Wool   Minerals   Wheat, flour   Butter   Meat   Fruit
1881-90            54.1       27.2            5.3      0.1    1.2     0.2
1891-1900          43.5       33.1            2.9      2.4    4.1     0.3
1901-13            34.3       35.4            9.7      4.1    5.1     0.5
1920/21-1928/29    42.9        8.8           20.5      5.6    4.6     2.2

Source: Sinclair (1976), p. 166

Gold and Its Consequences

Alongside rural growth and diversification, the remarkable gold discoveries in central Victoria in 1851 brought increased complexity to the process of economic development. The news sparked an immediate surge of gold seekers into the colony, which was soon reinforced by a flood of overseas migrants. Until the 1870s, gold displaced wool as Australia’s most valuable export. Rural industries either expanded output (wheat in South Australia) or, in the case of pastoralists, switched production to meat and tallow, to supply a much larger domestic market. Minerals had been extracted since earliest settlement and, while yields on the Victorian gold fields soon declined, rich mineral deposits continued to be found. During the 1880s alone these included silver, lead and zinc at Broken Hill in New South Wales; copper at Mount Lyell in Tasmania; and gold at Charters Towers and Mount Morgan in Queensland. From 1893, what eventually became the richest goldfields in Australia were discovered at Coolgardie in Western Australia. The mining industry’s overall contribution to output and exports is illustrated in tables 1 and 4.

In Victoria, the deposits of easily extracted alluvial gold were soon exhausted and mining was taken over by companies that could command the financial and organizational resources needed to work the deep lodes. But the enormous permanent addition to the colonial population caused by the gold rush had profound effects throughout eastern Australia, dramatically accelerating the growth of the local market and workforce, and deeply disturbing the social balance that had emerged during the decade before. Between 1851 and 1861, the Australian population more than doubled. In Victoria it increased sevenfold; Melbourne outgrew Sydney, Chicago and San Francisco (see table 2). Significantly enlarged populations required social infrastructure, political representation, employment and land; and the new colonial legislatures were compelled to respond. The way this was played out varied between colonies but the common outcomes were the introduction of manhood suffrage, access to land through ‘free selection’ of small holdings, and, in the Victorian case, the introduction of a protectionist tariff in 1865. The particular age structure of the migrants of the 1850s also had long-term effects on the building cycle, notably in Victoria. The demand for housing accelerated during the 1880s, as the children of the gold generation matured and established their own households. With pastoral expansion and public investment also nearing their peaks, the colony experienced a speculative boom which added to the imbalances already being caused by falling export prices and rising overseas debt. The boom ended with the wholesale collapse of building companies, mortgage banks and other financial institutions during 1891-92 and the stoppage of much of the banking system during 1893.

The depression of the 1890s was worst in Victoria. Its impact on employment was softened by the Western Australian gold discoveries, which drew population away, but the colonial economy had grown to such an extent since the 1850s that the stimulus provided by the earlier gold finds could not be repeated. Severe drought in eastern Australia from the mid-1890s until 1903 caused the pastoral industry to contract. Yet, as we have seen, technological innovation also created opportunities for other rural producers, who were now heavily supported by government with little direct involvement by foreign investors. The final phase of rural expansion, with its associated public investment in rural (and increasingly urban) infrastructure continued until the end of the 1920s. Yields declined, however, as farmers moved onto the most marginal land. The terms of trade also deteriorated with the oversupply of several commodities in world markets after the First World War. As a result, the burden of servicing foreign debt rose once again. Australia’s position as a capital importer and exporter of natural resources meant that the Great Depression arrived early. From late 1929, the closure of overseas capital markets and collapse of export prices forced the Federal Government to take drastic measures to protect the balance of payments. The falls in investment and income transmitted the contraction to the rest of the economy. By 1932, average monthly unemployment amongst trade union members was over 22 percent. Although natural resource industries continued to have enduring importance as earners of foreign exchange, the Depression finally ended the long period in which land settlement and technical innovation had together provided a secure foundation for economic growth.

Manufacturing and the Protected Economy, 1891-1973

The ‘Australian Settlement’

This section overlaps considerably in time with the previous one, which surveyed the growth of a colonial economy based on the exploitation of natural resources during the nineteenth century. The overlap is a convenient way of approaching the two most important developments in Australian economic history between Federation and the 1970s: the enormous increase in government regulation after 1901 and, closely linked to this, the expansion of domestic manufacturing, which from the Second World War became the most dynamic part of the Australian economy.

The creation of the Commonwealth of Australia on 1 January 1901 broadened the opportunities for public intervention in private markets. The new Federal Government was given clearly-defined but limited powers over obviously ‘national’ matters like customs duties. The rest, including many powers affecting economic development and social welfare, remained with the states. The most immediate economic consequence was the abolition of inter-colonial tariffs and the establishment of a single Australian market. But the Commonwealth also soon set about transferring to the national level several institutions with which different colonies had experimented during the 1890s. These included arrangements for the compulsory arbitration of industrial disputes by government tribunals, which also had the power to fix wages, and a discriminatory ‘white Australia’ immigration policy designed to exclude non-Europeans from the labor market. Both were partly responses to organized labor’s electoral success during the 1890s. Urban business and professional interests had always been represented in colonial legislatures; during the 1910s, rural producers also formed their own political parties. Subsequently, state and federal governments were typically formed by either the Australian Labor Party or coalitions of urban conservatives and the Country Party. The constituencies they each represented were thus able to influence the regulatory structure to protect themselves against the full impact of market outcomes, whether in the form of import competition, volatile commodity prices or uncertain employment conditions. The institutional arrangements they created have been described as the ‘Australian settlement’ because they balanced competing producer interests and arguably provided a stable framework for economic development until the 1970s, despite the inevitable costs.

The Growth of Manufacturing

An important part of the ‘Australian settlement’ was the imposition of a uniform federal tariff and its eventual elaboration into a system of ‘protection all round’. The original intended beneficiaries were manufacturers and their employees; indeed, when the first protectionist tariff was introduced in 1907, its operation was linked to the requirement that employers pay their workers ‘fair and reasonable wages’. Manufacturing’s actual contribution to economic growth before Federation has been controversial. The population influx of the 1850s widened opportunities for import-substitution but the best evidence suggests that manufacturing grew slowly as the industrial workforce increased (see table 1). Production was small-scale and confined largely to the processing of rural products and raw materials; assembly and repair-work; or the manufacture of goods for immediate consumption (e.g. soap and candle-making, brewing and distilling). Clothing and textile output was limited to a few lines. For all manufacturing, growth was restrained by the market’s small size and the limited opportunities for technical change it afforded.

After Federation, production was stimulated by several factors: rural expansion, the increasing use of agricultural machinery and refrigeration equipment, and the growing propensity of farm incomes to be spent locally. The removal of inter-colonial tariffs may also have helped. The statistical evidence indicates that between 1901 and the outbreak of the First World War manufacturing grew faster than the economy as a whole, while output per worker increased. But manufacturers also aspired mainly to supply the domestic market and expended increasing energy on retaining privileged access. Tariffs rose considerably between the two world wars. Some sectors became more capital-intensive, particularly with the establishment of a local steel industry, the beginnings of automobile manufacture, and the greater use of electricity. But, except during the first half of the 1920s, there was little increase in labor productivity, and the inter-war expansion of textile manufacturing reflected the heavy bias towards import substitution. Not until the Second World War and after did manufacturing growth accelerate and extend to those sectors most characteristic of an advanced industrial economy (table 5). Amongst these were automobiles, chemicals, electrical and electronic equipment, and iron and steel. Growth was sustained during the 1950s by similar factors to those operating in other countries during the ‘long boom’, including a growing stream of American direct investment, access to new and better technology, and stable conditions of full employment.

Table 5
Manufacturing and the Australian Economy, 1913-1949

(1938-39 prices)

Year       Manufacturing share of GDP (%)   Manufacturing, annual growth (%)   GDP, annual growth (%)
1913/14    21.9                             –                                  –
1928/29    23.6                             2.6                                2.1
1948/49    29.8                             3.4                                2.2

Calculated from Haig (2001), Table A2. Rates of growth are average annual rates of change since the preceding date shown in the first column.

Manufacturing peaked in the mid-1960s at about 28 percent of national output (measured in 1968-69 prices) but natural resource industries remained the most important suppliers of exports. Since the 1920s, over-supply in world markets and the need to compensate farmers for manufacturing protection had meant that virtually all rural industries, with the exception of wool, had been drawn into a complicated system of subsidies, price controls and market interventions at both federal and state levels. The post-war boom in the world economy increased demand for commodities, benefiting rural producers but also creating new opportunities for Australian miners. Most important of all, the first surge of breakneck growth in East Asia opened a vast new market for iron ore, coal and other mining products. Britain’s significance as a trading partner had declined markedly since the 1950s. By the end of the 1960s, Japan had overtaken it as Australia’s largest customer, while the United States was now the main provider of imports.

The mining bonanza contributed to the boom conditions experienced generally after 1950. The Federal Government played its part by using the full range of macroeconomic policies that were becoming increasingly familiar in other western countries to secure stability and full employment. It encouraged high immigration, relaxing the entry criteria to allow in large numbers of southern Europeans, who added directly to the workforce and also brought knowledge and experience. With state governments, the Commonwealth increased expenditure on education significantly, effectively entering the field for the first time after 1945. Access to secondary education was widened with the abandonment of fees in government schools, and federal finance secured an enormous expansion of university places, especially after 1960. Some weaknesses remained. Enrolment rates after primary school were below those in many industrial countries and funding for technical education was poor. Despite this, the Australian population’s rising levels of education and skill continued to be important additional sources of growth. Finally, although government advisers expressed misgivings, industry policy remained determinedly interventionist. While state governments competed to attract manufacturing investment with tax and other incentives, by the 1960s protection had reached its highest level, with Australia playing virtually no part in the General Agreement on Tariffs and Trade (GATT), despite being an original signatory. The effects of rising tariffs since 1900 were evident in the considerable decline in Australia’s openness to trade (table 6). Yet, as the post-war boom approached its end, the country still relied upon commodity exports and foreign investment to purchase the manufactures it was unable to produce itself. The impossibility of sustaining growth in this way was already becoming clear, even though the full implications would only be felt during the decades to come.

Table 6
Trade (Exports Plus Imports)
as a Share of GDP, Current Prices, %

1900/01    44.9
1928/29    36.9
1938/39    32.7
1964/65    33.3
1972/73    29.5

Calculated from Vamplew (1987), ANA 119-129.

Liberalization and Structural Change, 1973-2005

From the beginning of the 1970s, instability in the world economy and weakness at home ended Australia’s experience of the post-war boom. During the following decades, manufacturing’s share in output (table 7) and employment fell, while the long-term relative decline of commodity prices meant that natural resources could no longer be relied on to cover the cost of imports, let alone the long-standing deficits in payments for services, migrant remittances and interest on foreign debt. Until the early 1990s, Australia also suffered from persistent inflation and rising unemployment (which remained permanently higher; see figure 1). As a consequence, per capita incomes fluctuated during the 1970s, and the economy contracted in absolute terms during 1982-83 and 1990-91.

Even before the 1970s, new sources of growth and rising living standards had been needed, but the opportunities for economic change were restricted by the elaborate regulatory structure that had evolved since Federation. During the 1970s themselves, policy and outlook were essentially defensive and backward-looking, despite calls for reform and some willingness to alter the tariff. Governments sought to protect employment in established industries, while dependence on mineral exports actually increased as a result of the commodity booms at the decade’s beginning and end. By the 1980s, however, it was clear that the country’s existing institutions were failing and that fundamental reform was required.

Table 7
The Australian Economy, 1974-2004

A. Percentage shares of value-added, constant prices

                              1974   1984   1994   2002
Agriculture                    4.4    4.3    3.0    2.7
Manufacturing                 18.1   15.2   13.3   11.8
Other industry, inc. mining   14.2   14.0   14.6   14.4
Services                      63.4   66.4   69.1   71.1

B. Per capita GDP, annual average rate of growth %, constant prices

1973-84 1.2
1984-94 1.7
1994-2004 2.5

Calculated from World Bank, World Development Indicators (Sept. 2005).

Figure 1
Unemployment, 1971-2005, percent

Source: Reserve Bank of Australia (1988); Reserve Bank of Australia, G07Hist.xls. Survey data at August. The method of data collection changed in 1978.

The catalyst was the resumption of the relative fall in commodity prices that had been under way since the Second World War, which meant that the cost of purchasing manufactured goods inexorably rose for primary producers. The decline had been temporarily reversed by the oil shocks of the 1970s but, from the 1980/81 financial year until the decade’s end, the value of Australia’s merchandise imports exceeded that of merchandise exports in every year but two. The overall deficit on the current account, measured as a proportion of GDP, also became permanently higher, averaging around 4.7 percent. During the 1930s, deflation had been followed by the further closing of the Australian economy. There was no longer much scope for this. Manufacturing had stagnated since the 1960s, suffering especially from the inflation of wage and other costs during the 1970s. It was particularly badly affected by the recession of 1982-83, when unemployment rose to almost ten percent, its highest level since the Great Depression. In 1983, a new federal Labor Government led by Bob Hawke sought to engineer a recovery through an ‘Accord’ with the trade union movement which aimed at creating employment by holding down real wages. But under Hawke and his Treasurer, Paul Keating (who warned colorfully that otherwise the country risked becoming a ‘banana republic’), Labor also started to introduce broader reforms to increase the efficiency of Australian firms by improving their access to foreign finance and exposing them to greater competition. Costs would fall and exports of more profitable manufactures would increase, reducing the economy’s dependence on commodities. During the 1980s and 1990s, the reforms deepened and widened, extending to state governments and continuing with the election of a conservative Liberal-National Party government under John Howard in 1996, as each act of deregulation invited further measures to consolidate the reforms and increase their effectiveness. Key reforms included the floating of the Australian dollar and the deregulation of the financial system; the progressive removal of protection of most manufacturing and agriculture; the dismantling of the centralized system of wage-fixing; taxation reform; and the promotion of greater competition and better resource use through privatization and the restructuring of publicly-owned corporations, the elimination of government monopolies, and the deregulation of sectors like transport and telecommunications. In contrast with the 1930s, the prospects of further domestic reform were improved by an increasingly favorable international climate. Australia contributed by joining other nations in the Cairns Group to negotiate reductions of agricultural protection during the Uruguay round of GATT negotiations and by promoting regional liberalization through the Asia Pacific Economic Cooperation (APEC) forum.

Table 8
Exports and Openness, 1983-2004

        Goods exports (% of total)                 Services          Exports + imports
        Rural   Resource   Manuf.   Other          (% of total)      (% of GDP)
1983      30        34        9       3                24                 26
1989      23        37       11       5                24                 27
1999      20        34       17       4                24                 37
2004      18        33       19       6                23                 39

Calculated from: Reserve Bank of Australia, G10Hist.xls and H03Hist.xls; World Bank, World Development Indicators (Sept. 2005). Chain volume measures, except shares of GDP, 1983, which are at current prices.

The extent to which institutional reform had successfully brought about long-term structural change was still not clear at the end of the century. Recovery from the 1982-83 recession was based upon a strong revival of employment. By contrast, the uninterrupted growth experienced since 1992 arose from increases in the combined productivity of workers and capital. If this persisted, it marked a historic change in the sources of growth, from reliance on the accumulation of capital and the increase of the workforce to improvements in the efficiency of both. From the 1990s, the Australian economy also became more open (table 8). Manufactured goods increased their share of exports, while rural products continued to decline. Yet, although growth was more broadly-based, rapid and sustained (table 7), the country continued to experience large trade and current account deficits, which were augmented by the considerable increase of foreign debt after financial deregulation during the 1980s. Unemployment also failed to return to its pre-1974 level of around 2 percent, although much of the permanent rise occurred during the mid to late 1970s. In 2005, it remained at around 5 percent (figure 1). Institutional reform clearly contributed to these changes in economic structure and performance, but they were also influenced by other factors, including falling transport costs, the communications and information revolutions, the greater openness of the international economy, and the remarkable burst of economic growth during the century’s final decades in southeast and east Asia, above all China. Reform was also complemented by policies to provide the skills needed in a technologically-sophisticated, increasingly service-oriented economy. Retention rates in the last years of secondary education doubled during the 1980s, followed by a sharp increase of enrolments in technical colleges and universities. By 2002, total expenditure on education as a proportion of national income had caught up with the average of member countries of the OECD (table 9). Shortages were nevertheless beginning to be experienced in the engineering and other skilled trades, raising questions about some priorities and the diminishing relative financial contribution of government to tertiary education.

Table 9
Tertiary Enrolments and Education Expenditure, 2002

                 Tertiary enrolments (gross, %)   Education expenditure (% of GDP)
Australia        63.22                            6.0
OECD             61.68                            5.8
United States    70.67                            7.2

Source: World Bank, World Development Indicators (Sept. 2005); OECD (2005). Gross enrolments are total enrolments, regardless of age, as a proportion of the population in the relevant official age group. OECD enrolments are for fifteen high-income members only.

Summing Up: The Australian Economy in a Wider Context

Virtually since the beginning of European occupation, the Australian economy had provided the original British colonizers, generations of migrants, and the descendants of both with a remarkably high standard of living. Towards the end of the nineteenth century, this was by all measures the highest in the world (see table 10). After 1900, national income per member of the population slipped behind that of several countries, but continued to compare favorably with most. In 2004, Australia was ranked behind only Norway and Sweden in the United Nations’ Human Development Index. Economic historians have differed over the sources of growth that made this possible. Butlin emphasized the significance of local factors like the unusually high rate of urbanization and the expansion of domestic manufacturing. In important respects, however, Australia was subject to the same forces as other European settler societies in New Zealand and Latin America, and its development bore striking similarities to theirs. From the 1820s, its economy grew as one frontier of an expanding western capitalism. With its close institutional ties to, and complementarities with, the most dynamic parts of the world economy, it drew capital and migrants from them, supplied them with commodities, and shared the benefits of their growth. Like other settler societies, it sought population growth as an end in itself and, from the turn of the twentieth century, aspired to the creation of a national manufacturing base. Finally, when openness to the world economy appeared to threaten growth and living standards, governments intervened to regulate and protect with broader social objectives in mind. But there were also striking contrasts with other settler economies, notably those in Latin America like Argentina, with which it has been frequently compared. In particular, Australia responded to successive challenges to growth by finding new opportunities for wealth creation with a minimum of political disturbance, social conflict or economic instability, while sharing a rising national income as widely as possible.

Table 10
Per capita GDP in Australia, United States and Argentina
(1990 international dollars)

Year    Australia    United States    Argentina
1870        3,641            2,457        1,311
1890        4,433            3,396        2,152
1950        7,493            9,561        4,987
1998       20,390           27,331        9,219

Sources: Australia: GDP from Haig (2001) as converted in Maddison (2003); all other data from Maddison (1995) and (2001).

From the mid-twentieth century, Australia’s experience also resembled that of many advanced western countries. This included the post-war willingness to use macroeconomic policy to maintain growth and full employment; and, after the 1970s, the abandonment of much government intervention in private markets while at the same time retaining strong social services and seeking to improve education and training. Australia also experienced the relative decline of manufacturing, the permanent rise of unemployment, and the transition to a more service-based economy typical of high-income countries. By the beginning of the new millennium, services accounted for over 70 percent of national income (table 7). Australia remained vulnerable as an exporter of commodities and importer of capital, but its endowment of natural resources and the skills of its population were also creating opportunities. The country was again favorably positioned to take advantage of growth in the most dynamic parts of the world economy, particularly China. With the final abandonment of the White Australia policy during the 1970s, it had also started to integrate more closely with its region. This was further evidence of the capacity to change that allowed Australians to face the future with confidence.

References:

Anderson, Kym. “Australia in the International Economy.” In Reshaping Australia’s Economy: Growth with Equity and Sustainability, edited by John Nieuwenhuysen, Peter Lloyd and Margaret Mead, 33-49. Cambridge: Cambridge University Press, 2001.

Blainey, Geoffrey. The Rush that Never Ended: A History of Australian Mining, fourth edition. Melbourne: Melbourne University Press, 1993.

Borland, Jeff. “Unemployment.” In Reshaping Australia’s Economy: Growth with Equity and Sustainable Development, edited by John Nieuwenhuysen, Peter Lloyd and Margaret Mead, 207-228. Cambridge: Cambridge University Press, 2001.

Butlin, N. G. Australian Domestic Product, Investment and Foreign Borrowing 1861-1938/39. Cambridge: Cambridge University Press, 1962.

Butlin, N.G. Economics and the Dreamtime, A Hypothetical History. Cambridge: Cambridge University Press, 1993.

Butlin, N.G. Forming a Colonial Economy: Australia, 1810-1850. Cambridge: Cambridge University Press, 1994.

Butlin, N.G. Investment in Australian Economic Development, 1861-1900. Cambridge: Cambridge University Press, 1964.

Butlin, N. G., A. Barnard and J. J. Pincus. Government and Capitalism: Public and Private Choice in Twentieth Century Australia. Sydney: George Allen and Unwin, 1982.

Butlin, S. J. Foundations of the Australian Monetary System, 1788-1851. Sydney: Sydney University Press, 1968.

Chapman, Bruce, and Glenn Withers. “Human Capital Accumulation: Education and Immigration.” In Reshaping Australia’s Economy: Growth with Equity and Sustainability, edited by John Nieuwenhuysen, Peter Lloyd and Margaret Mead, 242-267. Cambridge: Cambridge University Press, 2001.

Dowrick, Steve. “Productivity Boom: Miracle or Mirage?” In Reshaping Australia’s Economy: Growth with Equity and Sustainability, edited by John Nieuwenhuysen, Peter Lloyd and Margaret Mead, 19-32. Cambridge: Cambridge University Press, 2001.

Economist. “Has he got the ticker? A survey of Australia.” 7 May 2005.

Haig, B. D. “Australian Economic Growth and Structural Change in the 1950s: An International Comparison.” Australian Economic History Review 18, no. 1 (1978): 29-45.

Haig, B.D. “Manufacturing Output and Productivity 1910 to 1948/49.” Australian Economic History Review 15, no. 2 (1975): 136-61.

Haig, B.D. “New Estimates of Australian GDP: 1861-1948/49.” Australian Economic History Review 41, no. 1 (2001): 1-34.

Haig, B. D., and N. G. Cain. “Industrialization and Productivity: Australian Manufacturing in the 1920s and 1950s.” Explorations in Economic History 20, no. 2 (1983): 183-98.

Jackson, R. V. Australian Economic Development in the Nineteenth Century. Canberra: Australian National University Press, 1977.

Jackson, R.V. “The Colonial Economies: An Introduction.” Australian Economic History Review 38, no. 1 (1998): 1-15.

Kelly, Paul. The End of Certainty: The Story of the 1980s. Sydney: Allen and Unwin, 1992.

Macintyre, Stuart. A Concise History of Australia. Cambridge: Cambridge University Press, 1999.

McCarthy, J. W. “Australian Capital Cities in the Nineteenth Century.” In Urbanization in Australia: The Nineteenth Century, edited by J. W. McCarthy and C. B. Schedvin, 9-39. Sydney: Sydney University Press, 1974.

McLean, I.W. “Australian Economic Growth in Historical Perspective.” The Economic Record 80, no. 250 (2004): 330-45.

Maddison, Angus. Monitoring the World Economy 1820-1992. Paris: OECD, 1995.

Maddison, Angus. The World Economy: A Millennial Perspective. Paris: OECD, 2001.

Maddison, Angus. The World Economy: Historical Statistics. Paris: OECD, 2003.

Meredith, David, and Barrie Dyster. Australia in the Global Economy: Continuity and Change. Cambridge: Cambridge University Press, 1999.

Nicholas, Stephen, editor. Convict Workers: Reinterpreting Australia’s Past. Cambridge: Cambridge University Press, 1988.

OECD. Education at a Glance 2005 – Tables. OECD, 2005 [cited 9 February 2006]. Available from http://www.oecd.org/document/11/0,2340,en_2825_495609_35321099_1_1_1_1,00.html.

Pope, David, and Glenn Withers. “The Role of Human Capital in Australia’s Long-Term Economic Growth.” Paper presented to 24th Conference of Economists, Adelaide, 1995.

Reserve Bank of Australia. “Australian Economic Statistics: 1949-50 to 1986-7: I Tables.” Occasional Paper No. 8A (1988).

Reserve Bank of Australia. Current Account – Balance of Payments – H1 [cited 29 November 2005]. Available from http://www.rba.gov.au/Statistics/Bulletin/H01bhist.xls.

Reserve Bank of Australia. Gross Domestic Product – G10 [cited 29 November 2005]. Available from http://www.rba.gov.au/Statistics/Bulletin/G10hist.xls.

Reserve Bank of Australia. Unemployment – Labour Force – G1 [cited 2 February 2006]. Available from http://www.rba.gov.au/Statistics/Bulletin/G07hist.xls.

Schedvin, C. B. Australia and the Great Depression: A Study of Economic Development and Policy in the 1920s and 1930s. Sydney: Sydney University Press, 1970.

Schedvin, C.B. “Midas and the Merino: A Perspective on Australian Economic History.” Economic History Review 32, no. 4 (1979): 542-56.

Sinclair, W. A. The Process of Economic Development in Australia. Melbourne: Longman Cheshire, 1976.

United Nations Development Programme. Human Development Index [cited 29 November 2005]. Available from http://hdr.undp.org/statistics/data/indicators.cfm?x=1&y=1&z=1.

Vamplew, Wray, ed. Australians: Historical Statistics. Australians: A Historical Library, edited by Alan D. Gilbert and K. S. Inglis. Sydney: Fairfax, Syme and Weldon Associates, 1987.

White, Colin. Mastering Risk: Environment, Markets and Politics in Australian Economic History. Melbourne: Oxford University Press, 1992.

World Bank. World Development Indicators ESDS International, University of Manchester, September 2005 [cited 29 November 2005]. Available from http://www.esds.ac.uk/International/Introduction.asp.

Citation: Attard, Bernard. “The Economic History of Australia from 1788: An Introduction”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL: http://eh.net/encyclopedia/the-economic-history-of-australia-from-1788-an-introduction/

Historical Anthropometrics

Timothy Cuff, Westminster College

Historical anthropometrics is the study of patterns in human body size and their correlates over time. While social researchers, public health specialists and physical anthropologists have long utilized anthropometric measures as indicators of well-being, only within the past three decades have historians begun to use such data extensively. Adult stature is a cumulative indicator of net nutritional status over the growth years, and thus reflects command over food and access to healthful surroundings. Since expenditures for these items comprised such a high percentage of family income for historical communities, mean stature can be used to examine changes in a population’s economic circumstances over time and to compare the well-being of different groups with similar genetic height potential. Anthropometric measures are available for portions of many national populations as far back as the early 1700s. While these data often serve as complements to standard economic indicators, in some cases they provide the only means of assessing historical economic well-being, as “conventional” measures such as per capita GDP, wage and price indices, and income inequality measures have been notoriously spotty and problematic to develop. Research findings based on anthropometric data have contributed to the scholarly debates over mortality trends, the nature of slavery, and the outcomes of industrialization and economic development. Height has been the primary indicator utilized thus far; other indicators include height-standardized weight indices, birth weight, and age at menarche. Potentially even more important, historical anthropometrics broadens the understanding of “well-being” beyond the one-dimensional “ruler” of income, providing another lens through which the quality of historical life can be viewed.

This article:

  • provides a brief background of the field including a history of human body measurement and analysis and a description of the biological foundations for historical anthropometrics,
  • describes the current state of the field (along with methodological issues) and future directions, and
  • provides a selective bibliography.

Anthropometrics: Historical and Bio-Medical Background

The Evolution of Body Measurement and Analysis in Context

The measurement and description of the human form in the West date back to the artists of classical civilizations, but the rationale for systematic, large-scale body measurement and record keeping emerged out of the needs of early modern military organizations. By the mid-eighteenth century, height commonly provided a means of assigning men to military units and of identifying them within those units, and the procedures for measuring individuals entering military service were well established. The military’s need to identify recruits has provided most historical measurements of young men.

Scientific curiosity in the eighteenth century also spurred development of the first textbooks on human growth, although they were more concerned with growth patterns throughout life than with stature differences across groups or over time. In the nineteenth century, class differences in height were easily observable in England. The moral outrage generated by the “tiny children” (Charles Dickens’ “Oliver Twists”), along with the view that medicine had a preventive as well as a curative function, meant that anthropometry was directed primarily at the poor, especially children toiling in the factories of English and French industrial cities. Later, fear in Britain over the “degeneration” of its men and their potential as an effective fighting force provided motivation for large-scale anthropometric surveys, as did efforts evolving out of the child-welfare movement. The early twentieth century saw the establishment of a series of longitudinal population surveys (which follow individuals as they age) in North America and in Europe. In some cases this work was directed toward the generation of growth standards, while other efforts evaluated social-class differences among children. Such studies can be seen as transitional steps between contemporary and historical anthropometrics. Since the 1950s, anthropometry has been utilized for a variety of purposes in both the developed and underdeveloped world. Population groups have been measured in order to refine growth standards, to monitor the nutritional status of individuals and populations during famines and political disturbances, and to evaluate the effectiveness of economic development programs.

Anthropometric studies today can be classified as one of three types. Auxologists perform basic research, collecting body measurements over the human life cycle to further detail standards of physical development for twenty-first century populations. The second focus, a continuation of nineteenth-century work, documents the living standards of children, often in support of regulatory legislation or government aid policies. The third direction is historical anthropometrics. Economists, historians, and anthropologists specializing in this field seek to assess, in physical terms, the well-being of previous societies and the factors which influenced it.

Human Growth and Development: The Biological Foundations of Historical Anthropometrics

While historical anthropometric research is a relatively recent development, an extensive body of medical literature relating nutrition and epidemiological conditions to physical growth provides a strong theoretical underpinning. Bio-medical literature, along with the World Health Organization, describes mean stature as one of the best measures of overall health conditions within a society.

Final attained height and height by age both result from a complex interaction of genetic endowment and environmental effects. At the level of the individual, genetics is a strong but not exclusive influence on the determination of final height and of growth patterns. Genetics is most important when net nutrition is optimal. However, when evaluating differences among groups of people in sub-optimal nutritional circumstances, environmental influences predominate.

The same nutritional regime can result in different final stature for particular individuals, because of genetic variation in the ability to continue growing in the face of adverse nutritional circumstances, epidemiological environments, or work requirements. However, the genetic height potential of most Europeans, Africans, and North Americans of European or African ancestry is comparable; i.e., under equivalent environmental circumstances the groups have achieved nearly identical mean adult stature. For example, in many parts of rural Africa, mean adult heights today are similar to those of Africans of 150 years ago, while well-fed urban Africans attain final heights similar to current-day Europeans and North Americans of European descent. Differences in nutritional status do result in wide variation in adult height even within populations of the same genetic make-up. For example, individuals from higher socio-economic classes tend to be taller than their lower class counterparts whether in impoverished third-world countries or in the developed nations.

Height is the most commonly utilized, but not the only, anthropometric indicator of nutritional status. The growth profile is another. Environmental conditions, while affecting the timing of growth (the ages at which accelerations and decelerations in growth rates occur), do not affect the overall pattern (the sequence in which growth/maturation events occur). The body seems to be self-stabilizing, postponing growth until caloric levels will support it and maintaining genetically programmed body proportions more rigidly than size potential. While final adult height and length of the growth period are not absolutely linked, populations which stop growing earlier usually, although not universally, end up being taller. Age at menarche, birth weight, and weight-for-height are also useful. Age at menarche (i.e. the first occurrence of menstruation) is not a measure of physical size, but of sexual maturation. Menarche generally occurs earlier among well-nourished women. Average menarcheal age in the developed West is about 13 years, while in the middle of the nineteenth century it was between 15 and 16 years among European women. Areas which have not experienced nutritional improvement over the past century have not witnessed decreases in the age at menarche. Infant birth weight, an indicator of long-term maternal nutritional status, is influenced by the mother’s diet, work intensity, quality of health care, maternal size and the number of children she has delivered, as well as the mother’s health practices. The level of economic inequality and social class status are also correlated with birth weight variation, although these variables reflect some of the factors noted above. However, because the mother’s diet and health status are such strong influences on birth weight, it provides another useful means of monitoring women’s well-being. Height-for-weight indices, particularly the body mass index (BMI), have seen some use by anthropometric historians. Contemporary bio-medical research which links BMI levels and mortality risk hints at the promise which this measure might hold for historians. However, the limited availability of weight measurements before the mid-nineteenth century will limit the studies which can be produced.
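Since the body mass index recurs in this literature, its standard definition may be helpful (this is the conventional formula, not one specific to any study cited here):

$$\mathrm{BMI} \;=\; \frac{\text{weight in kilograms}}{(\text{height in meters})^{2}}$$

For example, a 70 kg (154 lb) adult standing 1.75 m (5 ft 9 in) tall has a BMI of 70 / 1.75² ≈ 22.9.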

Improvements in net nutritional status, both across wide segments of the population in developed countries and within urban areas of less-developed countries (LDCs), are generally accepted as the most salient influence on growth patterns and final stature. The widely experienced improvement in net nutrition apparent in most of the developed world across most of the twentieth century, and more recently in the “modern” sector of some LDCs, has led to a secular trend: a sustained, unidirectional movement toward greater stature and faster maturation. Before the twentieth century, height cycling without a distinct direction was the dominant historical pattern. (Two other sources of stature increase have been hypothesized but have garnered little support among the medical community: the increased practice of infantile smallpox vaccination and heterosis (hybrid vigor), i.e. varietal cross-breeding within a species which produces offspring who are larger or stronger than either parent.)

The Definition and Determination of Nutritional Status

“Nutritional status” is a term critical to an understanding of anthropometrics. It encompasses more than simply diet, i.e. the intake of calories and nutrients, and is thus distinct from the more common term “nutrition.” While nutrition refers to the quantity and quality of food inputs to the human biological system, it makes no reference to the amounts needed for healthy functioning, which depend on the demands placed on the individual. Nutritional status, or synonymously “net nutrition,” refers to the netting out of nutrient intake against the demands upon those nutrients. While work intensity is the most obvious demand, it is just one of many. Energy is required to resist infection. Pregnancy adds caloric and nutrient demands, as does breast-feeding. Calories expended in any of these fashions are available neither for basal metabolism nor for growth. The difference between nutrition and nutritional status/net nutrition is important for anthropometrics, because it is the latter, not the former, for which auxological measurements are a proxy.

Human biologists and medical scientists generally agree that within genetically similar populations net nutrition is the primary determinant of adult physical stature. Height, as Bielicki notes, is “socially induced variation.” Figure 1 indicates the numerous channels of influence on the final adult stature of any individual. Anthropometric indicators reflect the relative ease or difficulty of acquiring sufficient nutrients to provide for growth in excess of the immediate needs of the body. Nutritional status and physical stature clearly are composite measures of well-being linked to economic processes. However, the link is mediated through a variety of social circumstances, some volitional, others not. Hence, anthropometric historians must evaluate each situation within its own economic, cultural, and historical context.

In earlier societies, and in some less developed countries today, access to nutrients was determined primarily by control of arable land. As markets for food developed and urban living became predominant, access to nutrients came to depend, for increasing percentages of the population, upon the ability to purchase food, i.e. on real income. Additionally, food allocation within the family is not determined by markets but by intra-household bargaining as well as by tastes and custom. For example, in some cultures households distribute food resources so as to ensure nutritional adequacy for those family members engaged in income- or resource-generating activity in order to maximize earning power. The handful of studies which include historical anthropometric data for women reveal that stature trends by gender do not always move in concert. Rather, in periods of declining nutritional status, women often exhibited a reduction in stature levels before such changes appeared among males. This is somewhat paradoxical because biologists generally argue that women’s growth trajectories are more resistant to a diminution in nutritional status than are those of men. Though too little historical research has been done on this issue to speak with certainty, the pattern might imply that, in periods of nutritional stress, women bore the initial brunt of deprivation.

Other cultural practices, including the high status accorded to the use of certain foods, such as white flour, polished rice, tea or coffee may promote greater consumption of nutritionally less valuable foods among those able to afford them. This would tend to reduce the resultant stature differences by income. Access to nutrients also depends upon other individual choices. A small landholder might decide to market much of his farm’s high-value, high-protein meat and dairy products, reducing his family’s consumption of these nutritious food products in order to maximize money income. However, while material welfare would increase, biological welfare, knowingly or unknowingly, would decline.

Disease-exposure variation occurs as a result of some factors under the individual’s control and other factors which are determined at the societal level. Pathogen prevalence and potency and the level of community sanitation are critical factors which are not directly affected by individual decision making. However, housing and occupation are often individually chosen and do help to determine the extent of disease exposure. Once transportation improvements allow housing segregation based on socio-economic status to occur within large urban areas, residence location can become an important influence. However, prior to such segregation, for example in the mid-nineteenth-century United States, urban childhood mortality levels were more influenced by the number of children in a family than by parental occupation or socio-economic status. The close proximity of the homes of the wealthy and the poor seems to have created a common level of exposure to infectious agents and equally poor sanitary conditions for children of all economic classes.

Work intensity, another factor determining nutritional status, is a function of the age at which youth enter the labor force, educational attainment, the physical exertion needed in a chosen occupation, and the level of technology. There are obvious feedback effects from current nutritional status to future nutritional status. A low level of nutritional status today might hinder full-time labor-force participation, and result in low incomes, poor housing, and substandard food consumption in subsequent periods as well, thereby reinforcing the cycle of nutritional inadequacy.

Historical Anthropometrics

Early Developments in the Field

Le Roy Ladurie’s studies of nineteenth-century French soldiers, published in the late 1960s and early 1970s, are recognized as the first works in the spirit of modern historical anthropometrics. He documented that stature among French recruits varied with their socio-economic characteristics. In the U.S., the research was carried forward in the late 1970s, much of it based on nineteenth-century records of U.S. slaves transported from the upper to the lower South. Studies of Caribbean slaves followed.

In the 1980s numerous anthropometric works were generated in connection with a National Bureau of Economic Research (NBER) directed study of American and European mortality trends from 1650 to the present, coordinated by Robert W. Fogel. Motivated in great part by the desire to evaluate Thomas McKeown’s hypothesis that improvements in nutrition were the critical component in mortality declines in the seventeenth through the nineteenth centuries, the project has led to the creation of numerous large anthropometric data bases. These have been the starting point for the analysis of trends in physical stature and net nutritional status on both sides of the Atlantic. While most historical anthropometric studies published in the U.S. during the early and mid-1980s were either outgrowths of the NBER project or were conducted by students of Robert Fogel, such as Richard Steckel and John Komlos, mortality trends were no longer the sole focus of historical anthropometrics. Anthropometric statistics were used to analyze the effect of industrialization on the populations experiencing it, as well as the characteristics of slavery in the United States. The data sources were primarily military records or documents relating to slaves. As the 1980s became the 1990s, the geographic range of stature studies moved beyond Europe and North America to include Asia, Australia, and Africa. Other data sources were utilized. These included records from schools and utopian communities, certificates of freedom for manumitted slaves, voter registration cards, newspaper advertisements for runaway slaves and indentured servants, insurance applications, and a variety of prison inmate records. The number of anthropometric historians also expanded considerably.

Findings to Date

Major achievements to date in historical anthropometrics include 1) the determination of the main outlines of the trend in physical stature in Europe and North America between the eighteenth and twentieth centuries, and 2) the emergence of several well-supported, although still debated, hypotheses pertaining to the relationship between height and the economic and social developments which accompanied modern economic growth in these centuries.

Historical research on human height has indicated how much healthier the New World environment was compared to that of Europe. Europeans who immigrated to North America, on average, obtained a net nutritional status far better than that which was possible for them to attain in their place of birth. Eighteenth century North Americans attained mean heights not achieved by Europeans until the twentieth century. The combination of lower population density, lower levels of income inequality, and greater food resources bestowed a great benefit upon those growing up in North America. This advantage is evident not only in adult heights but also in the earlier timing of the adolescent growth spurt, as well as the earlier attainment of final height.

Table 1
Mean Heights of Adult Males (in inches)

Population                          Dates of measurement    Mean height (inches)
North America, European ancestry    1775-1783               68.1
                                    1861-1865               68.5
                                    1943-1944               68.1
North America, African ancestry     1811-1861               67.0
                                    1943-1944               67.9
Europe: Hungary                     1813-1835               64.2
Europe: England                     1816-1821               65.8
Europe: Sweden                      1843-1886               66.3

Sources: U.S. whites, 1775-1783: Kenneth L. Sokoloff and Georgia C. Villaflor, “The Early Achievement of Modern Stature in America,” Social Science History 6 (1982): 453-481. U.S. whites, 1861-65: Robert Margo and Richard Steckel, “Heights of Native-Born Whites during the Antebellum Period,” Journal of Economic History 43 (1983): 167-174. U.S. whites and blacks, 1943-44: Bernard D. Karpinos, “Height and Weight of Selective Service Registrants Processed for Military Service during World War II,” Human Biology 40 (1958): 292-321, Table 5. U.S. blacks, 1811-1861: Robert Margo and Richard Steckel, “The Height of American Slaves: New Evidence on Slave Nutrition and Health,” Social Science History 6 (1982): 516-538, Table 1. Hungary: John Komlos. Nutrition and Economic Development in the Eighteenth Century Habsburg Monarchy, Princeton: Princeton University Press, 1989, Table 2.1, 57. Britain: Roderick Floud, Kenneth Wachter, and Annabel Gregory, Height, Health, and History: Nutritional Status in the United Kingdom, 1750-1980, Cambridge: Cambridge University Press, 1990, Table 4.1, 148. Sweden: Lars G. Sandberg and Richard Steckel, “Overpopulation and Malnutrition Rediscovered: Hard Times in 19th-Century Sweden,” Explorations in Economic History 25 (1988): 1-19, Table 2, 7.

Note: Dates refer to dates of measurement.

Stature Cycles in Europe and America

The early finding that there was not a unidirectional upward trend in stature since the 1700s startled researchers, whose expectations were based on recent experience. Extrapolating backward, Floud, Wachter, and Gregory note that such surprise was misplaced, for if the twentieth century’s rate of height increase had been occurring for several centuries, medieval Europeans would have been dwarfs or midgets. Instead, in Europe cycles in height were evident. Though smaller in amplitude than in Europe, stature cycling was a feature of the American experience, as well. At the time of the American Revolution, the Civil War, and World War II, the mean height of adult, native-born white males was a fraction over 68 inches (Table 1), but there was some variation in between these periods with a small decline in the years before the Civil War and perhaps another one from 1860 into the 1880s. Just before the turn of the twentieth century, mean stature began its relatively uninterrupted increase which continues to the present day. These findings are based primarily on military records drawn from the early national army, Civil War forces, West Point Cadets, and the Ohio National Guard, although other data sets show similar trends. The free black population seems to have experienced a downturn in physical stature very similar to that of whites in the pre-Civil War period. However, an exception to the antebellum diminution in nutritional status has been found among slave men.

Per Capita Income and Height

In addition to the cycling in height, anthropometric historians have documented that the intuitively anticipated positive correlation between mean height and per capita income holds at the national level in the twentieth century. Steckel has shown that, in cross-national comparison, the correlation between height and per capita income is as high as .84 to .90. However, since per capita income is highly correlated with a series of other variables that also affect height, the exact pathway through which income affects height is not fully clear. Among the factors which help to explain the variation are better diet, medicine, improvements in sanitary infrastructure, longer schooling, more sedentary life, and better housing. Intense work regimes and psycho-social stress, both of which affect growth negatively, might also be mitigated by greater per capita income. However, prior to the twentieth century the relationship between height and income was not monotonic. U.S. troops during the American Revolution were nearly as tall as U.S. soldiers sent to Europe and Japan in the 1940s, despite the fact that per capita income in the earlier period was substantially below that in the latter. Similarly, while per capita income in the U.S. in the late 1770s was below that of the British, the American troops had a height advantage of several inches over their British counterparts in the War of Independence.

Height and Income Inequality

The level of income inequality also has a powerful influence on mean heights. Steckel’s analysis of data for the twentieth century indicates that a 0.1 decrease in the Gini coefficient (indicating greater income equality) is associated with a gain in mean stature of about 3.7 cm (1.5 inches). In societies with great inequality, increases in per capita income have little effect on average stature if the gains accrue primarily to the wealthier segments of the society. Conversely, even without changes in average national per capita income, a reduction in inequality can have similar positive impact upon the stature and health of those at the lower rungs of the income ladder.
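Steckel’s estimate can be restated as a simple linear approximation (an illustrative summary of the relationship as reported above, not his full regression, which also controls for per capita income):

$$\Delta\,\text{mean height} \;\approx\; -37\ \text{cm} \times \Delta\,\text{Gini}$$

Thus a fall in the Gini coefficient from, say, 0.50 to 0.40 (a change of −0.1) implies a gain in mean stature of roughly 37 × 0.1 = 3.7 cm, or about 1.5 inches.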

The high level of social inequality at the onset of modern economic growth in England is exemplified by the substantial disparity between the heights of students of the Sandhurst Royal Military Academy, an elite institution, and of the Marine Society, a home for destitute boys in London. The difference in mean height at age fourteen exceeded three inches in favor of the gentry, and in some years the gap was even greater. Komlos has documented similar findings elsewhere: regardless of location, boys from “prestigious military schools in England, France, and Germany were much taller than the population at large.” A similar pattern existed in the nineteenth-century U.S., although the social gap there was minuscule compared to that prevailing in the Old World. Stature also varied by occupational group. In eighteenth- and nineteenth-century Europe and North America, white-collar and professional workers tended to be significantly taller than laborers and unskilled workers. Farmers, however, being close to the source of nutrients and having fewer interactions with urban disease pools, tended to be the tallest, though their advantage had disappeared by the twentieth century.

Regional and Rural-Urban Differences

Floud, Wachter, and Gregory have shown that, in early nineteenth-century Britain, regional variation in stature dwarfed occupational differences. In 1815, Scotsmen, rural and urban, as well as the Irish, were about half an inch taller than the non-London urban English of the day. The rural English were slightly shorter, on average, than Englishmen born in small and medium-sized towns. Londoners, however, had a mean height almost one-third of an inch less than that of other urban dwellers in England and more than three-quarters of an inch below that of the Irish or the Scots. A similar pattern held among convicts transported to New South Wales, Australia, except that the stature of the rural English was well above the average for all other English transported convicts. Floud, Wachter, and Gregory show a trend of convergence in height among these groups after 1800. The tendency for sparsely populated rural areas in the nineteenth century to be home to the tallest individuals was apparent from the Habsburg Monarchy to Scotland, and in the remote northern regions of late nineteenth-century Sweden and Japan as well. In colonial America the rural-urban gradient did not exist. As cities grew, the rural-born began to display a stature advantage over their urban brethren. This divergence persisted into the nineteenth century and disappeared in the early twentieth century, when the urban-born gained a height advantage.

The Early-Industrial-Growth and Antebellum Puzzles

These patterns of stature variation have been put into a framework in both the European and the American contexts, known respectively as the “early-industrial-growth puzzle” and the “Antebellum puzzle.” The common finding is that in the early stages of industrialization and/or market integration, even with rising per capita incomes, the biological well-being of the populations undergoing such change does not necessarily improve immediately. Rather, for at least some portions of the population, biological well-being declined during this period of economic growth. Explanations for these puzzles are still being investigated and include rising income inequality; the wider spread of disease through more thoroughly developed transportation and marketing systems and through urban growth; the rising real price of food as population growth outstripped the agricultural system’s ability to provide; and the choice of farmers to market rather than consume high-value, high-protein crops.

Slave Heights

Research on slave heights has provided important insight into the living standards of these bound laborers. Large differences in stature have been documented between slaves on the North American mainland and those in the Caribbean. Adult mainland slaves, both women and men, were approximately two inches taller than those in the West Indies throughout the eighteenth and nineteenth centuries. Steckel argues that the growth pattern and infant mortality rates of U.S. slave children indicate that they were moderately to severely malnourished, with mean heights for four- to nine-year-olds below the second percentile of modern growth standards and with mortality rates twice those estimated for the entire United States population. Although below the fifth percentile throughout childhood, as adults these slaves were relatively tall by nineteenth-century standards, reaching about the twenty-fifth percentile of today’s height distribution and standing taller than most European populations of the time.

Height’s Correlation with Other Biological Indicators

The evaluation of McKeown’s hypothesis that much of the modern decline in mortality rates could be traced to improvements in nutrition (food intake) was one of the early rationales for the modern study of historical stature. Subsequent work has presented evidence for the parallel cycling of height and life expectancy in the United States during the nineteenth century. The relationship between the body-mass index, morbidity, and mortality risk within historical populations has also been documented. Similarly, Sandberg and Steckel’s Swedish data show that stature trends and childhood mortality rates moved in parallel in the mid-nineteenth century.
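
For reference, the body-mass index that underlies Waaler-style morbidity and mortality curves is simply weight divided by the square of height. A minimal helper, with purely illustrative values:

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body-mass index: weight in kilograms over height in meters, squared."""
    return weight_kg / height_m ** 2

# Illustrative only: a 65 kg man of 1.73 m.
print(round(bmi(65.0, 1.73), 1))  # ~ 21.7
```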

Economic and social history are not the only two fields which have felt historical anthropometrics’ impact. North American slave height-by-age profiles developed by Steckel have been used by auxologists to exemplify the range of possible growth patterns among humans. Based on findings within the biological sciences, historical studies of stature have come full circle and are providing those same sciences with new data on human physical potential.

Methodological Issues

Accuracy problems in military-based data sets arise predominantly from carelessness on the part of the measurer or from intentional misreporting, rather than from a lack of standard measuring practice. Inadequate concern for accuracy shows up most often as heaping (height observations rounded to whole feet, six-inch increments, or even-numbered inches) and as an absence of fractional measurements. These “rounding” errors tend to be self-canceling. Of greater concern is intentional misreporting of either height or age, because minimum stature and age restrictions were often applied to military recruits. Young men, eager to discover the “romance” of military life or to receive the bounty that sometimes accompanied enlistment, were not above slightly fabricating their age. Recruiting officers, hoping to meet their assigned quotas quickly, might have been tempted to round measurements up to the minimum height requirement. Hence, it is not uncommon to find heaping at the age or stature minima.
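
One simple screen for heaping is to tabulate where reported heights fall: under careful measurement, fractional-inch reports should be common and odd whole inches should appear roughly as often as even ones, so excess mass at even inches or at whole feet flags rounding. A minimal sketch, using an invented series:

```python
# Screening an (invented) series of recorded heights for heaping.
import numpy as np

heights = np.array([66, 68, 70, 72, 72, 72, 67.5, 68, 66, 72,
                    70, 71, 68, 72, 66, 69, 72, 70, 68, 72])  # inches

whole = heights == np.floor(heights)        # reports made in whole inches
even = (heights % 2 == 0) & whole           # reports at even whole inches
print("share of whole-inch reports:", whole.mean())
print("share at even inches:", even.mean())
print("share at exactly 72 in (6 ft):", (heights == 72).mean())
```

Even-inch shares well above half of the whole-inch reports, or a spike at exactly six feet, would suggest rounding by the measurers.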

For anthropometric historians, the issue of the representativeness of the population under study is similar to that facing any social historian, but several specific caveats apply to military samples. In time of peace, military recruits tend to be less representative of the general population than are wartime armies. The military, with fewer demands for personnel, can be more selective, often instituting more stringent height minima, and occasionally maxima, for recruits. Such policies, as well as the self-interested behaviors noted above, require those who would use military data sets to evaluate, and potentially adjust, the data to account for observations missing through left- or right-tail truncation. A series of techniques has been developed to address such difficulties, although there is still debate over the most appropriate one. Other data sets exhibit selectivity biases of different natures. Prison registers clearly do not provide a random sample of the population; the filter, however, is not based on size or a taste for “exciting” work, but on the propensity for criminal activity and on the enforcement practices of the judicial system. The representativeness of anthropometric samples can also be affected by prior selection by the Grim Reaper. Within Afro-Caribbean slave populations in Trinidad, death rates were significantly higher for shorter individuals (at all ages) than for taller ones, so that a select group of more robust and taller individuals remained alive for eventual measurement.
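
One widely used way to handle left-tail truncation from a minimum height requirement is to fit a truncated normal distribution by maximum likelihood, recovering the mean and dispersion of the underlying population from the observed, truncated sample. The sketch below is a generic illustration of that idea on simulated data, not a reproduction of any specific published routine; all numbers are hypothetical.

```python
# Recovering population height parameters from a sample left-truncated
# at a (hypothetical) 64-inch military minimum, by maximum likelihood.
import numpy as np
from scipy import stats
from scipy.optimize import minimize

rng = np.random.default_rng(0)
true_mu, true_sigma, minimum = 68.0, 2.6, 64.0  # hypothetical, in inches

# Simulate recruits: only men at or above the minimum are ever observed.
population = rng.normal(true_mu, true_sigma, 100_000)
observed = population[population >= minimum]

def neg_log_lik(params, x, cutoff):
    """Negative log-likelihood of a normal left-truncated at `cutoff`."""
    mu, sigma = params
    if sigma <= 0:
        return np.inf
    # Each density is renormalized by the probability mass above the cutoff.
    return -(stats.norm.logpdf(x, mu, sigma)
             - stats.norm.logsf(cutoff, mu, sigma)).sum()

fit = minimize(neg_log_lik, x0=[observed.mean(), observed.std()],
               args=(observed, minimum), method="Nelder-Mead")
print("naive sample mean:", observed.mean())          # biased upward
print("truncated-ML estimates (mu, sigma):", fit.x)   # near 68.0, 2.6
```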

One difficulty faced by anthropometric historians is the association of this research, more imagined than real, with previous misuses of body measurement. Nineteenth-century American phrenologists used skull shape and size as a means of determining intelligence and as a way of justifying the enslavement of African-Americans. The Bertillon approach to evaluating prison inmates included the measurement and classification of lips, ears, feet, nose, and limbs in an effort to discern a genetic or racial basis for criminality. The Nazis attempted to breed the perfect race by eliminating what they perceived to be physically “inferior” peoples. Each, understandably, has made many squeamish about the use of body measurements as an index of social development. Further, while the biological research which supports historical anthropometrics is scientifically well founded and fully justifies the approach, care must be exercised to ensure that the impression is not given that researchers are searching for, or promoting, an “aristocracy of the tall.” Being tall is not necessarily better in all circumstances, although recent work does indicate that a series of social and economic advantages accrue to the tall. However, for populations enduring an ongoing sub-optimal net nutritional regime, an increase in mean height does signify improvement in the net nutritional level, and thus in the general level of physical well-being. Untangling the factors responsible for change in this social indicator is complicated, and height is not a complete proxy for the quality of life. It does, however, provide a valuable means of assessing biological well-being in the past and the influence of social and economic developments on health.

Future Directions

Historical anthropometrics is maturing. Over the past several years a series of state-of-the-field articles and anthologies of critical works have been written or compiled. Each summarizes past accomplishments, consolidates isolated findings into more generalized conclusions, and/or points out the next steps for researchers. In 2004, the editors of Social Science History devoted an entire volume to anthropometric history, drawing upon both current work and the remembrances of many of the field’s early and prominent researchers, and including an integrative essay by Komlos and Baten. Anthropometric history now has its own journal: John Komlos, who has established a center for historical anthropometrics in Munich, created Economics and Human Biology, “devoted to the exploration of the effect of socio-economic processes on human beings as biological organisms.” Early issues highlight the wide geographic, temporal, and conceptual range of historical anthropometric studies. Another project which shows the great range of current effort is Richard Steckel’s work with anthropologists to characterize very long-term patterns in the movement of mean human height. This collaboration has already produced The Backbone of History: Health and Nutrition in the Western Hemisphere, a compilation of essays documenting, from anthropological evidence, the biological well-being of New World populations beginning in 5000 B.C. Its findings, consistent with those of some other recent anthropological studies, indicate a decline in health status for members of Western Hemisphere cultures in the pre-Columbian period as these societies began the transition from economies based on hunting and gathering to ones relying more heavily on settled agriculture. Steckel has been working to expand this approach to Europe via a collaborative and interdisciplinary project, funded in part by the U.S. National Science Foundation, titled “A History of Health in Europe from the Late Paleolithic Era to the Present.”

Yet even with these impressive steps, continued work, similar to early efforts in the field, is still needed. Expanding the number and types of samples is an important step in the confirmation and consolidation of early results. One of the field’s ongoing frustrations is that, except for slave records, few data sets contain physical measurements for large numbers of females. To date, female slaves and ex-slaves, some late nineteenth-century U.S. college women, and transported female convicts are the primary sources of female historical stature. Generalization of research findings to entire populations is hindered by the small amount of data on females and by the knowledge, from the data that are extant, that stature trends for the two sexes do not mimic each other. Similarly, upper-class samples of either sex are not common. Future efforts should be directed at locating samples which contain data on these two understudied groups.

As Riley noted, the problem which anthropometric historians seek to resolve is not the identification of likely influences on stature. The biological sciences have provided that theoretical framework. The task at hand is to determine the relative weight of the various influences or, in Fogel’s terms, to perform “an accounting exercise of particularly complicated nature, which involves measuring not only the direct effect of particular factors but also their indirect effects and their interactions with other factors.”

More localized studies, with sample sizes adequate for statistical analysis, are needed. These will allow the determination of the social, economic, and demographic factors most closely associated with variation in human height. Other key areas for future investigation include the functional consequences of the differences in biological well-being proxied by height, including differences in labor productivity and life expectancy. Even with the strides that have been made, skepticism about the approach remains in some corners. To combat this, researchers must be careful to stress repeatedly what anthropometric indicators proxy, what their limits are, and how knowledge of anthropometric trends can appropriately inform our understanding of economic and social history as well as social policy. The field promises many future insights into the nature of, and influences on, historical human well-being, and thus clues about how human well-being, the focus of economics generally, can be more fully and more widely advanced.

Selected Bibliography

Survey/Overview Publications

Engerman, Stanley. “The Standard of Living Debate in International Perspective: Measures and Indicators.” In Health and Welfare during Industrialization, edited by Richard H. Steckel and Roderick Floud, 17-46. Chicago: University of Chicago Press, 1997.

Floud, Roderick, and Bernard Harris. “Health, Height, and Welfare: Britain 1700-1980.” In Health and Welfare during Industrialization, edited by Richard H. Steckel and Roderick Floud, 91-126. Chicago: University of Chicago Press, 1997.

Floud, Roderick, Kenneth Wachter, and Annabelle Gregory. “The Heights of Europeans since 1750: A New Source for European Economic History.” In Stature, Living Standards, and Economic Development: Essays in Anthropometric History, edited by John Komlos, 10-24. Chicago: University of Chicago Press, 1994.

Floud, Roderick, Kenneth Wachter, and Annabelle Gregory. Height, Health, and History: Nutritional Status in the United Kingdom, 1750-1980. Cambridge: Cambridge University Press, 1990.

Fogel, Robert W. “Nutrition and the Decline in Mortality since 1700: Some Preliminary Findings.” In Long-Term Factors in American Economic Growth, edited by Stanley Engerman and Robert Gallman, 439-527. Chicago: University of Chicago Press, 1987.

Haines, Michael R. “Growing Incomes, Shrinking People – Can Economic Development Be Hazardous to Your Health? Historical Evidence for the United States, England, and the Netherlands in the Nineteenth Century.” Social Science History 28 (2004): 249-70.

Haines, Michael R., Lee A. Craig, and Thomas Weiss. “The Short and the Dead: Nutrition, Mortality, and the ‘Antebellum Puzzle’ in the United States.” Journal of Economic History 63 (June 2003): 382-413.

Harris, Bernard. “Health, Height, History: An Overview of Recent Developments in Anthropometric History.” Social History of Medicine 7 (1994): 297-320.

Harris, Bernard. “The Height of Schoolchildren in Britain, 1900-1950.” In Stature, Living Standards, and Economic Development: Essays in Anthropometric History, edited by John Komlos, 25-38. Chicago: University of Chicago Press, 1994.

Komlos, John, and Jörg Baten. The Biological Standard of Living in Comparative Perspective: Proceedings of a Conference Held in Munich, January 18-23, 1997. Stuttgart: Franz Steiner Verlag, 1999.

Komlos, John, and Jörg Baten. “Looking Backward and Looking Forward: Anthropometric Research and the Development of Social Science History.” Social Science History 28 (2004): 191-210.

Komlos, John, and Timothy Cuff. Classics of Anthropometric History: A Selected Anthology. St. Katharinen, Germany: Scripta Mercaturae, 1998.

Komlos, John. “Anthropometric History: What Is It?” Magazine of History (Spring 1992): 3-5.

Komlos, John. Stature, Living Standards, and Economic Development: Essays in Anthropometric History. Chicago: University of Chicago Press, 1994.

Komlos, John. The Biological Standard of Living in Europe and America 1700-1900: Studies in Anthropometric History. Aldershot: Variorum Press, 1995.

Komlos, John. The Biological Standard of Living on Three Continents: Further Explorations in Anthropometric History. Boulder: Westview Press, 1995.

Steckel, Richard H., and J.C. Rose. The Backbone of History: Health and Nutrition in the Western Hemisphere. New York: Cambridge University Press, 2002.

Steckel, Richard H., and Roderick Floud. Health and Welfare during Industrialization. Chicago: University of Chicago Press, 1997.

Steckel, Richard. “Height, Living Standards, and History.” Historical Methods 24 (1991): 183-87.

Steckel, Richard. “Stature and Living Standards in the United States.” In American Economic Growth and Standards of Living before the Civil War, edited by Robert E. Gallman and John J. Wallis, 265-310. Chicago: University of Chicago Press, 1992.

Steckel, Richard. “Stature and the Standard of Living.” Journal of Economic Literature 33 (1995): 1903-40.

Steckel, Richard. “A History of the Standard of Living in the United States.” In EH.Net Encyclopedia, edited by Robert Whaples, http://www.eh.net/encyclopedia/contents/steckel.standard.living.us.php

Seminal Articles in Historical Anthropometrics

Aron, Jean-Paul, Paul Dumont, and Emmanuel Le Roy Ladurie. Anthropologie du Conscrit Francais. Paris: Mouton, 1972.

Eltis, David. “Nutritional Trends in Africa and the Americas: Heights of Africans, 1819-1839.” Journal of Interdisciplinary History 12 (1982): 453-75.

Engerman, Stanley. “The Height of U.S. Slaves.” Local Population Studies 16 (1976): 45-50.

Floud, Roderick, and Kenneth Wachter. “Poverty and Physical Stature: Evidence on the Standard of Living of London Boys, 1770-1870.” Social Science History 6 (1982): 422-52.

Fogel, Robert W. “Physical Growth as a Measure of the Economic Well-being of Populations: The Eighteenth and Nineteenth Centuries.” In Human Growth: A Comprehensive Treatise, second edition, volume 3, edited by F. Falkner and J.M. Tanner, 263-281. New York: Plenum, 1986.

Fogel, Robert W., Stanley Engerman, Roderick Floud, Gerald Friedman, Robert Margo, Kenneth Sokoloff, Richard Steckel, James Trussell, Georgia Villaflor and Kenneth Wachter. “Secular Changes in American and British Stature and Nutrition.” Journal of Interdisciplinary History 14 (1983): 445-81.

Fogel, Robert W., Stanley L. Engerman, and James Trussell. “Exploring the Uses of Data on Height: The Analysis of Long-Term Trends in Nutrition, Labor Welfare, and Labor Productivity.” Social Science History 6 (1982): 401-21.

Friedman, Gerald C. “The Heights of Slaves in Trinidad.” Social Science History 6 (1982): 482-515.

Higman, Barry W. “Growth in Afro-Caribbean Slave Populations.” American Journal of Physical Anthropology 50 (1979): 373-85.

Komlos, John. “The Height and Weight of West Point Cadets: Dietary Change in Antebellum America.” Journal of Economic History 47 (1987): 897-927.

Le Roy Ladurie, Emmanuel, N. Bernageau, and Y. Pasquet. “Le Conscrit et l’ordinateur: Perspectives de recherches sur les archives militaires du XIXième siècle français.” Studi Storici 10 (1969): 260-308.

Le Roy Ladurie, Emmanuel. “The Conscripts of 1868: A Study of the Correlation between Geographical Mobility, Delinquency and Physical Stature and Other Aspects of the Situation of the Young Frenchmen Called to Do Military Service That Year.” In The Territory of the Historian. Translated by Ben and Sian Reynolds. Chicago: University of Chicago Press, 1979.

Margo, Robert and Richard Steckel. “Heights of Native Born Whites during the Antebellum Period.” Journal of Economic History 43 (1983): 167-74.

Margo, Robert and Richard Steckel. “The Height of American Slaves: New Evidence on Slave Nutrition and Health.” Social Science History 6 (1982): 516-38.

Steckel, Richard. “Height and per Capita Income.” Historical Methods 16 (1983): 1-7.

Steckel, Richard. “Slave Height Profiles from Coastwise Manifests.” Explorations in Economic History 16 (1979): 363-80.

Articles Addressing Methodological Issues

Heintel, Markus, Lars Sandberg, and Richard Steckel. “Swedish Historical Heights Revisited: New Estimation Techniques and Results.” In The Biological Standard of Living in Comparative Perspective, edited by John Komlos and Jörg Baten, 449-58. Stuttgart: Franz Steiner, 1998.

Komlos, John, and Joo Han Kim. “Estimating Trends in Historical Heights.” Historical Methods 23 (1990): 116-20.

Riley, James C. “Height, Nutrition, and Mortality Risk Reconsidered.” Journal of Interdisciplinary History 24 (1994): 465-92.

Steckel, Richard. “Percentiles of Modern Height: Standards for Use in Historical Research.” Historical Methods 29 (1996): 157-66.

Wachter, Kenneth, and James Trussell. “Estimating Historical Heights.” Journal of the American Statistical Association 77 (1982): 279-303.

Wachter, Kenneth. “Graphical Estimation of Military Heights.” Historical Methods 14 (1981): 31-42.

Publications Providing Bio-Medical Background for Historical Anthropometrics

Bielicki, T. “Physical Growth as a Measure of the Economic Well-being of Populations: The Twentieth Century.” In Human Growth: A Comprehensive Treatise, second edition, volume 3, edited by F. Falkner and J.M. Tanner, 283-305. New York: Plenum, 1986.

Bogin, Barry. Patterns of Human Growth. Cambridge: Cambridge University Press, 1988.

Eveleth, Phyllis B. “Population Differences in Growth: Environmental and Genetic Factors.” In Human Growth: A Comprehensive Treatise, second edition, volume 3, edited by F. Falkner and J.M. Tanner, 221-39. New York: Plenum, 1986.

Eveleth, Phyllis B. and James M. Tanner. Worldwide Variation in Human Growth. Cambridge: Cambridge University Press, 1976.

Tanner, James M. “Growth as a Target-Seeking Function: Catch-up and Catch-down Growth in Man.” In Human Growth: A Comprehensive Treatise, second edition, volume 1, edited by F. Falkner and J.M. Tanner, 167-80. New York: Plenum, 1986.

Tanner, James M. “The Potential of Auxological Data for Monitoring Economic and Social Well-Being.” Social Science History 6 (1982): 571-81.

Tanner, James M. A History of the Study of Human Growth. Cambridge: Cambridge University Press, 1981.

World Health Organization. “Use and Interpretation of Anthropometric Indicators of Nutritional Status.” Bulletin of the World Health Organization 64 (1986): 929-41.

Predecessors to Historical Anthropometrics

Bowles, G. T. New Types of Old Americans at Harvard and at Eastern Women’s Colleges. Cambridge, MA: Harvard University Press, 1952.

Damon, Albert. “Secular Trend in Height and Weight within Old American Families at Harvard, 1870-1965.” American Journal of Physical Anthropology 29 (1968): 45-50.

Damon, Albert. “Stature Increase among Italian-Americans: Environmental, Genetic, or Both?” American Journal of Physical Anthropology 23 (1965): 401-08.

Gould, Benjamin A. Investigations in the Military and Anthropological Statistics of American Soldiers. New York: Hurd and Houghton [for the U.S. Sanitary Commission], 1869.

Karpinos, Bernard D. “Height and Weight of Selective Service Registrants Processed for Military Service during World War II.” Human Biology 40 (1958): 292-321.

Publications Focused on Nonstature-Based Anthropometric Measures

Brudevoll, J.E., K. Liestol, and L. Walloe. “Menarcheal Age in Oslo during the Last 140 Years.” Annals of Human Biology 6 (1979): 407-16.

Cuff, Timothy. “The Body Mass Index Values of Nineteenth Century West Point Cadets: A Theoretical Application of Waaler’s Curves to a Historical Population.” Historical Methods 26 (1993): 171-83.

Komlos, John. “The Age at Menarche in Vienna.” Historical Methods 22 (1989): 158-63.

Tanner, James M. “Trend towards Earlier Menarche in London, Oslo, Copenhagen, the Netherlands, and Hungary.” Nature 243 (1973): 95-96.

Trussell, James, and Richard Steckel. “The Age of Slaves at Menarche and Their First Birth.” Journal of Interdisciplinary History 8 (1978): 477-505.

Waaler, Hans Th. “Height, Weight, and Mortality: The Norwegian Experience.” Acta Medica Scandinavica, supplement 679, 1984.

Ward, W. Peter, and Patricia C. Ward. “Infant Birth Weight and Nutrition in Industrializing Montreal.” American Historical Review 89 (1984): 324-45.

Ward, W. Peter. Birth Weight and Economic Growth: Women’s Living Standards in the Industrializing West. Chicago: University of Chicago Press, 1993.

Articles with a Non-western Geographic Focus

Cameron, Noel. “Physical Growth in a Transitional Economy: The Aftermath of South African Apartheid.” Economics and Human Biology 1 (2003): 29-42.

Eltis, David. “Welfare Trends among the Yoruba in the Early Nineteenth Century: The Anthropometric Evidence.” Journal of Economic History 50 (1990): 521-40.

Greulich, W.W. “Some Secular Changes in the Growth of American-born and Native Japanese Children.” American Journal of Physical Anthropology 45 (1976): 553-68.

Morgan, Stephen. “Biological Indicators of Change in the Standard of Living in China during the Twentieth Century.” In The Biological Standard of Living in Comparative Perspective, edited by John Komlos and Jörg Baten, 7-34. Stuttgart: Franz Steiner, 1998.

Nicholas, Stephen, Robert Gregory, and Sue Kimberley. “The Welfare of Indigenous and White Australians, 1890-1955.” In The Biological Standard of Living in Comparative Perspective, edited by John Komlos and Jörg Baten, 35-54. Stuttgart: Franz Steiner, 1998.

Salvatore, Ricardo D. “Stature, Nutrition, and Regional Convergence: The Argentine Northwest in the First Half of the Twentieth Century.” Social Science History 28 (2004): 297-324.

Shay, Ted. “The Level of Living in Japan, 1885-1938: New Evidence.” In The Biological Standard of Living on Three Continents: Further Explorations in Anthropometric History, edited by John Komlos, 173-201. Boulder: Westview Press, 1995.

Articles with a North American Focus

Craig, Lee, and Thomas Weiss. “Nutritional Status and Agricultural Surpluses in the Antebellum United States.” In The Biological Standard of Living in Comparative Perspective, edited by John Komlos and Jörg Baten, 190-207. Stuttgart: Franz Steiner, 1998.

Komlos, John, and Peter Coclanis. “On the ‘Puzzling’ Antebellum Cycle of the Biological Standard of Living: The Case of Georgia.” Explorations in Economic History 34 (1997): 433-59.

Komlos, John. “Shrinking in a Growing Economy? The Mystery of Physical Stature during the Industrial Revolution.” Journal of Economic History 58 (1998): 779-802.

Komlos, John. “Toward an Anthropometric History of African-Americans: The Case of the Free Blacks in Antebellum Maryland.” In Strategic Factors in Nineteenth Century American Economic History: A Volume to Honor Robert W. Fogel, edited by Claudia Goldin and Hugh Rockoff, 267-329. Chicago: University of Chicago Press, 1992.

Murray, John. “Standards of the Present for People of the Past: Height, Weight, and Mortality among Men of Amherst College, 1834-1949.” Journal of Economic History 57 (1997): 585-606.

Murray, John. “Stature among Members of a Nineteenth Century American Shaker Commune.” Annals of Human Biology 20 (1993): 121-29.

Steckel, Richard. “A Peculiar Population: The Nutrition, Health, and Mortality of American Slaves from Childhood to Maturity.” Journal of Economic History 46 (1986): 721-41.

Steckel, Richard. “Health and Nutrition in the American Midwest: Evidence from the Height of Ohio National Guardsmen, 1850-1910.” In Stature, Living Standards, and Economic Development: Essays in Anthropometric History, edited by John Komlos, 153-70. Chicago: University of Chicago Press, 1994.

Steckel, Richard. “The Health and Mortality of Women and Children.” Journal of Economic History 48 (1988): 333-45.

Steegmann, A. Theodore Jr. “18th Century British Military Stature: Growth Cessation, Selective Recruiting, Secular Trends, Nutrition at Birth, Cold and Occupation.” Human Biology 57 (1985): 77-95.

Articles with a European Focus

Baten, Jörg. “Economic Development and the Distribution of Nutritional Resources in Bavaria, 1797-1839.” Journal of Income Distribution 9 (2000): 89-106.

Baten, Jörg. “Climate, Grain Production, and Nutritional Status in Southern Germany during the XVIIIth Century.” Journal of European Economic History 30 (2001): 9-47.

Baten, Jörg, and John Murray. “Heights of Men and Women in Nineteenth-Century Bavaria: Economic, Nutritional, and Disease Influences.” Explorations in Economic History 37 (2000): 351-69.

Komlos, John. “Stature and Nutrition in the Habsburg Monarchy: The Standard of Living and Economic Development in the Eighteenth Century.” American Historical Review 90 (1985): 1149-61.

Komlos, John. “The Nutritional Status of French Students.” Journal of Interdisciplinary History 24 (1994): 493-508.

Komlos, John. “The Secular Trend in the Biological Standard of Living in the United Kingdom, 1730-1860.” Economic History Review 46 (1993): 115-44.

Nicholas, Stephen and Deborah Oxley. “The Living Standards of Women during the Industrial Revolution, 1795-1820.” Economic History Review 46 (1993): 723-49.

Nicholas, Stephen and Richard Steckel. “Heights and Living Standards of English Workers during the Early Years of Industrialization, 1770-1815.” Journal of Economic History 51 (1991): 937-57.

Oxley, Deborah. “Living Standards of Women in Prefamine Ireland.” Social Science History 28 (2004): 271-95.

Riggs, Paul. “The Standard of Living in Scotland, 1800-1850.” In Stature, Living Standards, and Economic Development: Essays in Anthropometric History, edited by John Komlos, 60-75. Chicago: University of Chicago Press, 1994.

Sandberg, Lars G. “Soldier, Soldier, What Made You Grow So Tall? A Study of Height, Health and Nutrition in Sweden, 1720-1881.” Economy and History 23 (1980): 91-105.

Steckel, Richard H. “New Light on the ‘Dark Ages’: The Remarkably Tall Stature of Northern European Men during the Medieval Era.” Social Science History 28 (2004): 211-30.

Citation: Cuff, Timothy. “Historical Anthropometrics”. EH.Net Encyclopedia, edited by Robert Whaples. August 29, 2004. URL http://eh.net/encyclopedia/historical-anthropometrics/
