
Sweden – Economic Growth and Structural Change, 1800-2000

Lennart Schön, Lund University

This article presents an overview of Swedish economic growth performance in international and statistical perspective, together with an account of the major trends in Swedish economic development during the nineteenth and twentieth centuries.1

Modern economic growth in Sweden took off in the middle of the nineteenth century and in international comparative terms Sweden has been rather successful during the past 150 years. This is largely thanks to the transformation of the economy and society from agrarian to industrial. Sweden is a small economy that has been open to foreign influences and highly dependent upon the world economy. Thus, successive structural changes have put their imprint upon modern economic growth.

Swedish Growth in International Perspective

The century-long period from the 1870s to the 1970s comprises the most successful part of Swedish industrialization and growth. On a per capita basis the Japanese economy performed equally well (see Table 1). The neighboring Scandinavian countries also grew rapidly, but at a somewhat slower rate than Sweden. Sweden clearly outpaced growth in the rest of industrial Europe and in the U.S. Growth in the world economy as a whole, as measured by Maddison, was slower still.

Table 1 Annual Economic Growth Rates per Capita in Industrial Nations and the World Economy, 1871-2005

Period Sweden Rest of Nordic Countries Rest of Western Europe United States Japan World Economy
1871/1875-1971/1975 2.4 2.0 1.7 1.8 2.4 1.5
1971/1975-2001/2005 1.7 2.2 1.9 2.0 2.2 1.6

Note: Rest of Nordic countries = Denmark, Finland and Norway. Rest of Western Europe = Austria, Belgium, Britain, France, Germany, Italy, the Netherlands, and Switzerland.

Source: Maddison (2006); Krantz/Schön (forthcoming 2007); World Bank, World Development Indicator 2000; Groningen Growth and Development Centre, www.ggdc.com.

The Swedish advance in a global perspective is illustrated in Figure 1. In the mid-nineteenth century the Swedish average income level was close to the average global level (as measured by Maddison). In a European perspective Sweden was a rather poor country. By the 1970s, however, the Swedish income level was more than three times higher than the global average and among the highest in Europe.

Figure 1. Swedish GDP per Capita in Relation to World GDP per Capita, 1870-2004 (nine-year moving averages)
Sources: Maddison (2006); Krantz/Schön (forthcoming 2007).

Note: The annual variation in world production between Maddison’s benchmark years of 1870, 1913 and 1950 is estimated from his annual country series.

To some extent this was a catch-up story. Sweden was able to take advantage of technological and organizational advances made in Western Europe and North America. Furthermore, resource-rich Scandinavian countries such as Sweden and Finland had been rather disadvantaged as long as agriculture was the main source of income. The shift to industry expanded the resource base, and industrial development – directed to a growing domestic market but even more to a widening world market – became the main lever of growth from the late nineteenth century.

Catch-up is not the whole story, though. In many industrial areas Swedish companies took a position at the technological frontier from an early point in time. Thus, in certain sectors there was also forging ahead,2 quickening the pace of structural change in the industrializing economy. Furthermore, during a century of fairly rapid growth new conditions have arisen that have required profound adaptation and a renewal of entrepreneurial activity as well as of economic policies.

The slowdown in Swedish growth from the 1970s may be considered in this perspective. While in most other countries growth from the 1970s fell only relative to the growth rates of the post-war Golden Age, Swedish growth fell clearly below its historical long-run trend. It also fell to a very low level internationally. The 1970s certainly meant the end of a number of successful growth trajectories of the industrial society. At the same time new growth forces appeared with the electronic revolution, as well as with the advance of a more service-based economy. This structural change may have hit the Swedish economy harder than most other industrial capitalist economies. Sweden was forced into a transformation of its industrial economy and of its political economy in the 1970s and the 1980s that was more profound than in most other Western economies.

A Statistical Overview, 1800-2000

Swedish economic development since 1800 may be divided into six periods with different growth trends, as well as different compositions of growth forces.

Table 2 Annual Growth Rates in per Capita Production, Total Investments, Foreign Trade and Population in Sweden, 1800-2000

Period Per capita GDP Investments Foreign Trade Population
1800-1840 0.6 0.3 0.7 0.8
1840-1870 1.2 3.0 4.6 1.0
1870-1910 1.7 3.0 3.3 0.6
1910-1950 2.2 4.2 2.0 0.5
1950-1975 3.6 5.5 6.5 0.6
1975-2000 1.4 2.1 4.3 0.4
1800-2000 1.9 3.4 3.8 0.7

Source: Krantz/Schön (forthcoming 2007).

In the first decades of the nineteenth century the agricultural sector dominated and growth was slow in all respects except population. There was still per capita growth, but to some extent this was a recovery from the low levels reached during the Napoleonic Wars. The acceleration during the next period, around the mid-nineteenth century, is marked in all respects. Investments and foreign trade became very dynamic ingredients with the onset of industrialization, and they were to remain so during the following periods as well. Up to the 1970s per capita growth rates increased in each successive period. From an international perspective it is most notable that per capita growth rates increased even in the interwar period, despite the slowdown in foreign trade. The interwar period is crucial for the long-run relative success of Swedish economic growth. The culmination in the post-war period, with high growth rates in investments and in foreign trade, stands out, as does the deceleration in all respects in the late twentieth century.

An analysis in a traditional growth accounting framework reveals a long-term pattern with certain periodic similarities (see Table 3). Total factor productivity (TFP) growth increased over time up to the 1970s, only to fall back to its long-run level in the last decades. This deceleration in productivity growth may be looked upon either as a failure of the “Swedish Model” to accommodate new growth forces or as another case of the “productivity paradox” that accompanied the information technology revolution.3

Table 3 Total Factor Productivity (TFP) Growth and Relative Contribution of Capital, Labor and TFP to GDP Growth in Sweden, 1840-2000

Period TFP Growth (% per year) Capital (% of GDP growth) Labor (% of GDP growth) TFP (% of GDP growth)
1840-1870 0.4 55 27 18
1870-1910 0.7 50 18 32
1910-1950 1.0 39 24 37
1950-1975 2.1 45 7 48
1975-2000 1.0 44 1 55
1840-2000 1.1 45 16 39

Source: See Table 2.

In terms of contribution to overall growth, TFP has increased its share in every period. The TFP share was low in the 1840s, but there was a very marked increase with the onset of modern industrialization from the 1870s. In relative terms TFP reached its highest level so far in the period from the 1970s, indicating an increasing role for human capital, technology and knowledge in economic growth. The role of capital accumulation was markedly more pronounced in early industrialization, with the build-up of a modern infrastructure and with urbanization, but capital still retained much of its importance during the twentieth century. Its contribution to growth during the post-war Golden Age was significant, with very high levels of material investment. At the same time TFP growth culminated with positive structural shifts, as well as with increased knowledge intensity complementary to the investments. Labor has in quantitative terms progressively reduced its role in economic growth. One should observe, however, the relatively large importance of labor in Swedish economic growth during the interwar period. This was largely due to demographic factors and to the employment situation, which will be commented upon further below.
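The decomposition behind Table 3 can be sketched with a standard growth-accounting identity. Assuming a capital share α (the actual weights used by Krantz and Schön are not reported here), GDP growth splits into factor contributions and a TFP residual:

\[
g_{Y} = \alpha\, g_{K} + (1-\alpha)\, g_{L} + g_{TFP}, \qquad \text{relative TFP contribution} = \frac{g_{TFP}}{g_{Y}}.
\]

For 1950-1975, for example, TFP growth of 2.1 percent set against total GDP growth of about 4.3 percent (Table 4) gives 2.1/4.3 ≈ 0.49, consistent with the 48 percent share reported above.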

In the first decades of the nineteenth century, growth was still led by the primary production of agriculture, accompanied by services and transport. Secondary production in manufacturing and building was, by contrast, largely stagnant. From the 1840s the industrial sector accelerated, increasingly supported by transport and communications, as well as by private services. The sectoral shift from agriculture to industry became more pronounced at the turn of the twentieth century, when industry and transportation boomed while agricultural growth decelerated into subsequent stagnation. In the post-war period the volume of services, both private and public, increased strongly, although it still did not outpace industry. From the 1970s the focus shifted to private services and to transport and communications, indicating fundamentally new prerequisites of growth.

Table 4 Growth Rates by Sector, 1800-2000

Period Agriculture Industry and Handicraft Transport and Communications Building Private Services Public Services GDP
1800-1840 1.5 0.3 1.1 -0.1 1.4 1.5 1.3
1840-1870 2.1 3.7 1.8 2.4 2.7 0.8 2.3
1870-1910 1.0 5.0 3.9 1.3 2.7 1.0 2.3
1910-1950 0.0 3.5 4.9 1.4 2.2 2.2 2.7
1950-1975 0.4 5.1 4.4 3.8 4.3 4.0 4.3
1975-2000 -0.4 1.9 2.6 -0.8 2.2 0.2 1.8
1800-2000 0.9 3.8 3.7 1.8 2.7 1.7 2.6

Source: See Table 2.

Note: Private services are exclusive of dwelling services.

Growth and Transformation in the Agricultural Society of the Early Nineteenth Century

During the first half of the nineteenth century the agricultural sector and the rural society dominated the Swedish economy. Thus, more than three-quarters of the population were occupied in agriculture while roughly 90 percent lived in the countryside. Many non-agrarian activities such as the iron industry, the saw mill industry and many crafts as well as domestic, religious and military services were performed in rural areas. Although growth was slow, a number of structural and institutional changes occurred that paved the way for future modernization.

Most important was the transformation of agriculture. From the late eighteenth century the commercialization of the primary sector intensified. Particularly during the Napoleonic Wars, the domestic market for foodstuffs widened. The population increase, in combination with the temporary decrease in imports, stimulated enclosures and the reclamation of land, the introduction of new crops and new methods and, above all, a greater degree of market orientation. In the decades after the war the traditional Swedish trade deficit in grain even shifted to a surplus, with increasing exports of oats, primarily to Britain.

Concomitant with the agricultural transformation were a number of infrastructural and institutional changes. Domestic transportation costs were reduced through investments in canals and roads. Trade of agricultural goods was liberalized, reducing transaction costs and integrating the domestic market even further. Trading companies became more effective in attracting agricultural surpluses for more distant markets. In support of the agricultural sector new means of information were introduced by, for example, agricultural societies that published periodicals on innovative methods and on market trends. Mortgage societies were established to supply agriculture with long term capital for investments that in turn intensified the commercialization of production.

All these elements meant a profound institutional change in the sense that the price mechanism became much more effective in directing human behavior. Furthermore, a greater interest was instilled in information and in its main instrument, literacy. Traditionally, popular literacy had been upheld by the church and mainly devoted to knowledge of the primary Lutheran texts. In the new economic environment, literacy was secularized and transformed into a more functional literacy, marked by the advent of schools for public education in the 1840s.

The Breakthrough of Modern Economic Growth in the Mid-nineteenth Century

In the decades around the middle of the nineteenth century new dynamic forces appeared that accelerated growth. Most notably, foreign trade expanded by leaps and bounds in the 1850s and 1860s. With new export sectors, industrial investments increased. Furthermore, railways became the most prominent element of a new infrastructure, and their construction introduced a new ingredient in Swedish growth: heavy capital imports.

The upswing in industrial growth in Western Europe during the 1850s, in combination with demand induced by the Crimean War, led to a particularly strong expansion in Swedish exports, with sharp price increases for three staple goods – bar iron, wood and oats. Charcoal-based Swedish bar iron had been the traditional export good and had completely dominated Swedish exports until the mid-nineteenth century. Bar iron met, however, increasingly strong competition from the British and continental iron and steel industries, and Swedish exports had stagnated in the first half of the nineteenth century. The upswing in international demand, following the diffusion of industrialization and railway construction, gave an impetus to the modernization of Swedish steel production in the following decades.

The saw mill industry was a genuinely new export industry that grew dramatically in the 1850s and 1860s. Until then, the vast forests of Sweden had been regarded mainly as a fuel resource for the iron industry, for household heating and for local residential construction. With sharp price increases on the Western European market from the 1840s and 1850s, the resources of the sparsely populated northern part of Sweden suddenly became valuable. A formidable explosion of saw mill construction at the mouths of the rivers along the northern coastline followed. Within a few decades Swedish merchants, as well as Norwegian, German, British and Dutch merchants, became saw mill owners running large-scale capitalist enterprises at the fringe of European civilization.

Less dramatic but equally important was the sudden expansion of Swedish oat exports. The market for oats appeared mainly in Britain, where short-distance transportation in rapidly growing urban centers increased the fleet of horses. Swedish oats thus became an important energy resource in the decades around the mid-nineteenth century. For Sweden this had a special significance, since oats could be cultivated on rather barren and marginal soils, with which Sweden was richly endowed. The market for oats, with strongly rising prices, thereby further stimulated the commercialization of agriculture and the diffusion of new methods. The effect was reinforced by the fact that oats grown for the market substituted for local flax production – which also thrived on barren soils – while domestic linen was increasingly supplanted by factory-produced cotton goods.

The Swedish economy was able to respond to the impetus from Western Europe during these decades, to diffuse the new influences through the economy and to integrate them very successfully into its development. The barriers to change seem to have been weak. This is partly explained by the prior transformation of agriculture and the evolution of market institutions in the rural economy. People reacted to the price mechanism. New social classes of commercial peasants, capitalists and wage laborers had emerged in an era of domestic market expansion, increased regional specialization and population growth.

The composition of export goods also contributed to broad participation in export production and to a wide diffusion of export income. Iron, wood and oats implied both a regional and a social distribution of the gains. The value of previously marginal resources, such as soils in the south and forests in the north, was inflated. The technology was simple and labor intensive in industry, forestry, agriculture and transportation. The demand for unskilled labor increased strongly, which was to put an imprint upon Swedish wage development in the second half of the nineteenth century. Commercial houses and industrial companies made profits, but export income was distributed to many segments of the population.

The integration of the Swedish economy was further reinforced through initiatives taken by the State. The parliamentary decision in the 1850s to construct the railway trunk lines meant, first, a more direct involvement by the State in the development of a modern infrastructure and, second, new principles of finance, since the State had to rely upon capital imports. At the same time markets for goods, labor and capital were liberalized, and integration both within Sweden and with the world market deepened. The Swedish adoption of the Gold Standard in 1873 put a final stamp on this institutional development.

A Second Industrial Revolution around 1900

In the late nineteenth century, particularly in the 1880s, international competition became fiercer for agriculture and the early industrial branches. The integration of world markets led to falling prices and stagnating demand for Swedish staple goods such as iron, sawn wood and oats. Profits were squeezed and expansion thwarted. On the other hand, new markets arose. Increasing wages intensified mechanization both in agriculture and in industry, and demand increased for more sophisticated machinery and equipment. At the same time consumer demand shifted towards better foodstuffs – such as milk, butter and meat – and towards more fabricated industrial goods.

The decades around the turn of the twentieth century meant a profound structural change in the composition of Swedish industrial expansion that was crucial for long term growth. New and more sophisticated enterprises were founded and expanded particularly from the 1890s, in the upswing after the Baring Crisis.

The new enterprises were closely related to the so called Second Industrial Revolution in which scientific knowledge and more complex engineering skills were main components. The electrical motor became especially important in Sweden. A new development block was created around this innovation that combined engineering skills in companies such as ASEA (later ABB) with a large demand in energy-intensive processes and with the large supply of hydropower in Sweden.4 Financing the rapid development of this large block engaged commercial banks, knitting closer ties between financial capital and industry. The State, once again, engaged itself in infrastructural development in support of electrification, still resorting to heavy capital imports.

A number of innovative industries were founded in this period, all related to increased demand for mechanization and engineering skills. Companies such as AGA, ASEA, Ericsson, Separator (AlfaLaval) and SKF have been labeled “enterprises of genius,” and all were associated with renowned inventors and innovators. This was, of course, not an entirely Swedish phenomenon. These branches developed simultaneously on the Continent, particularly in nearby Germany, and in the U.S., and knowledge and innovative stimulus were diffused among these economies. The question is rather why this new development became so strong in Sweden that, within a relatively short period of time, new industries were able to supplant old resource-based industries as the main driving forces of industrialization.

Traditions of engineering skills were certainly important. They had developed in old heavy industrial branches such as the iron and steel industries and were stimulated further by State initiatives such as railway construction and, more directly, the founding of the Royal Institute of Technology. Beyond that, economic development in the second half of the nineteenth century fundamentally changed relative factor prices and the profitability of allocating resources to different lines of production.

The relative increase in the wages of unskilled labor had been stimulated by the composition of early exports in Sweden. This was much reinforced by two components in the further development – emigration and capital imports.

Within approximately the same period, 1850-1910, the Swedish economy received a huge amount of capital, mainly from Germany and France, while delivering an equally huge amount of labor, primarily to the U.S. Thus, Swedish relative factor prices changed dramatically. Swedish interest rates remained rather high compared to leading European countries until 1910, due to a continuously large demand for capital in Sweden, but relative wages rose persistently (see Table 5). As in the rest of Scandinavia, wage increases were much stronger than GDP growth in Sweden, indicating a shift in income distribution in favor of labor, particularly unskilled labor, during this period of increased world market integration.

Table 5 Annual Increase in Real Wages of Unskilled Labor and Annual GDP Growth per Capita, 1870-1910

Country Annual real wage increase, 1870-1910 Annual GDP growth per capita, 1870-1910
Sweden 2.8 1.7
Denmark and Norway 2.6 1.3
France, Germany and Great Britain 1.1 1.2
United States 1.1 1.6

Sources: Wages from Williamson (1995); GDP growth see Table 1.

Relative profitability fell in traditional industries, which exploited rich natural resources and cheap labor, while more sophisticated industries were favored. But the causality runs both ways. Had this structural shift with the growth of new and more profitable industries not occurred, the Swedish economy would not have been able to sustain the wage increase.5

Accelerated Growth in the War-stricken Period, 1910-1950

The most notable feature of long-term Swedish growth is the acceleration in growth rates during the period 1910-1950, which in Europe at large was full of problems and catastrophes.6 Swedish per capita production grew at 2.2 percent annually, while growth in the rest of Scandinavia was somewhat below 2 percent and growth in the rest of Europe hovered around 1 percent. The Swedish acceleration was based mainly on three pillars.

First, the structure created at the end of the nineteenth century was very viable, with considerable long term growth potential. It consisted of new industries and new infrastructures that involved industrialists and financial capitalists, as well as public sector support. It also involved industries meeting a relatively strong demand in war times, as well as in the interwar period, both domestically and abroad.

Second, the First World War meant an immense financial bonus to the Swedish market. A huge export surplus at inflated prices during the war led to the domestication of the Swedish national debt. This in turn further capitalized the Swedish financial market, lowering interest rates and facilitating subsequent innovative activity in industry. A domestic money market arose that provided the State with new instruments for economic policy, instruments that were to become important for the implementation of the new social democratic “Keynesian” policies of the 1930s.

Third, demographic development favored the Swedish economy in this period. The share of the economically active age group, 15-64, grew substantially. This was partly because prior emigration had reduced the size of the cohorts that would now have become old-age pensioners. Comparatively low mortality among young people during the 1910s, as well as the end of mass emigration, further enhanced the share of the active population. Both the labor market and domestic demand were stimulated, in particular during the 1930s when the household-forming age group of 25-30 years increased.

The augmented labor supply would have increased unemployment had it not been combined with the richer supply of capital and innovative industrial development that met elastic demand both domestically and in Europe.

Thus, a richer supply of both capital and labor stimulated the domestic market in a period when international market integration deteriorated. Above all it stimulated the development of mass production of consumption goods based upon the innovations of the Second Industrial Revolution. Significant new enterprises that emanated from the interwar period – such as Volvo, SAAB, Electrolux, Tetra Pak and IKEA – were very much related to the new logic of the industrial society.

The Golden Age of Growth, 1950-1975

The Swedish economy was clearly part of the European Golden Age of growth, although the Swedish acceleration from the 1950s was less pronounced than in the rest of Western Europe, which to a much larger extent had been plagued by wars and crises.7 The Swedish post-war period was characterized primarily by two phenomena – the full fruition of development blocks based upon the great innovations of the late nineteenth century (the electrical motor and the combustion engine) and the cementing of the “Swedish Model” for the welfare state. These two phenomena were highly complementary.

The Swedish Model had basically two components. One was a greater public responsibility for social security and for the creation and preservation of human capital. This led to a rapid increase in the supply of public services in the realms of education, health and children’s day care, as well as to increases in social security programs and in public saving for transfers to pensioners. The consequence was high taxation. The other component was the regulation of labor and capital markets. This was the most ingenious part of the model, constructed to sustain growth in the industrial society and to increase equality in combination with the social security programs and taxation.

The labor market program was the result of negotiations between the trade unions and the employers’ organization. It was labeled “solidaristic wage policy” and had two elements. One was to achieve equal wages for equal work, regardless of individual companies’ ability to pay. The other was to raise the wage level in low-paid areas and thus to compress the wage distribution. The aim of the program was actually to speed up the structural rationalization of industry and to eliminate less productive companies and branches; labor was to be transferred to the most productive, export-oriented sectors, while at the same time income was to be distributed more equally. A drawback of the solidaristic wage policy from an egalitarian point of view was that profits soared in the productive sectors, since wage increases there were held back. However, capital market regulations hindered the conversion of high profits into very high incomes for shareholders. Profits were taxed very lightly if they were converted into further investments within the company (the timing of the use of these funds was controlled by the State as part of its stabilization policy) but heavily if distributed to shareholders. The result was that investments within existing profitable companies were supported, and actually subsidized, while the mobility of capital dwindled and activity on the stock market fell.

As long as the export sectors grew, the program worked well.8 Companies founded in the late nineteenth century and in the interwar period developed into successful multinationals in engineering with machinery, auto industries and shipbuilding, as well as in resource-based industries of steel and paper. The expansion of the export sector was the main force behind the high growth rates and the productivity increases but the sector was strongly supported by public investments or publicly subsidized investments in infrastructure and residential construction.

Hence, during the Golden Age of growth the development blocks around electrification and motorization matured in a broad modernization of society, in which mass consumption and mass production were supported by social programs, investment programs and labor market policy.

Crisis and Restructuring from the 1970s

In the 1970s and early 1980s a number of industries – such as steel works, pulp and paper, shipbuilding, and mechanical engineering – ran into crisis. New global competition, changing consumer behavior and profound innovative renewal, especially in microelectronics, made some of the industrial pillars of the Swedish Model crumble. At the same time the disadvantages of the old model became more apparent. It put obstacles in the way of flexibility and entrepreneurial initiative and it reduced individual incentives for mobility. Thus, while the Swedish Model had fostered the rationalization of existing industries well adapted to the post-war period, it did not support a more profound transformation of the economy.

One should not exaggerate the obstacles to transformation, though. The Swedish economy remained very open in the markets for goods and many services, and the pressure to transform increased rapidly. During the 1980s a far-reaching structural change took place within industry as well as in economic policy, engaging both private and public actors. Shipbuilding was almost completely discontinued, pulp mills were integrated into modernized paper works, the steel industry was concentrated and specialized, and mechanical engineering was digitalized. New and more knowledge-intensive growth industries appeared in the 1980s, such as IT-based telecommunications, pharmaceuticals and biotechnology, as well as new service industries.

During the 1980s some of the constituent components of the Swedish Model were weakened or eliminated. Centralized negotiations and the solidaristic wage policy disappeared. Regulations in the capital market were dismantled under the pressure of increasing international capital flows, simultaneously with a forceful revival of the stock market. The expansion of public sector services came to an end, and the taxation system was reformed with a reduction of marginal tax rates. Thus, Swedish economic policy and the welfare system became more aligned with the European mainstream, which facilitated Sweden’s application for membership and eventual entry into the European Union in 1995.

It is also clear that the period from the 1970s to the early twenty-first century comprises two growth trends, before and after 1990 respectively. During the 1970s and 1980s, growth in Sweden was very slow and marked by the great structural problems that the Swedish economy had to cope with. The slow growth prior to 1990 does not signify stagnation in a real sense, but rather the transformation of industrial structures and the reformulation of economic policy, which did not immediately result in a speed-up of growth but rather in imbalances and bottlenecks that took years to eliminate. From the 1990s up to 2005 Swedish growth accelerated quite forcefully in comparison with most Western economies.9 Thus, the 1980s may be considered a Swedish case of “the productivity paradox,” with innovative renewal but with a delayed acceleration of productivity and growth from the 1990s – although a delayed productivity effect of a more profound transformation and radical innovative behavior is not paradoxical.

Table 6 Annual Growth Rates per Capita, 1971-2005

Period Sweden Rest of Nordic Countries Rest of Western Europe United States World Economy
1971/1975-1991/1995 1.2 2.1 1.8 1.6 1.4
1991/1995-2001/2005 2.4 2.5 1.7 2.1 2.1

Sources: See Table 1.

The recent acceleration in growth may also indicate that some of the basic traits from early industrialization still pertain to the Swedish economy – an international attitude in a small open economy fosters transformation and adaptation of human skills to new circumstances as a major force behind long term growth.

References

Abramovitz, Moses. “Catching Up, Forging Ahead and Falling Behind.” Journal of Economic History 46, no. 2 (1986): 385-406.

Dahmén, Erik. “Development Blocks in Industrial Economics.” Scandinavian Economic History Review 36 (1988): 3-14.

David, Paul A. “The Dynamo and the Computer: An Historical Perspective on the Modern Productivity Paradox.” American Economic Review 80, no. 2 (1990): 355-61.

Eichengreen, Barry. “Institutions and Economic Growth: Europe after World War II.” In Economic Growth in Europe since 1945, edited by Nicholas Crafts and Gianni Toniolo. New York: Cambridge University Press, 1996.

Krantz, Olle and Lennart Schön. Swedish Historical National Accounts, 1800-2000. Lund: Almqvist and Wiksell International, forthcoming 2007.

Maddison, Angus. The World Economy, Volumes 1 and 2. Paris: OECD, 2006.

Schön, Lennart. “Development Blocks and Transformation Pressure in a Macro-Economic Perspective: A Model of Long-Cyclical Change.” Skandinaviska Enskilda Banken Quarterly Review 20, no. 3-4 (1991): 67-76.

Schön, Lennart. “External and Internal Factors in Swedish Industrialization.” Scandinavian Economic History Review 45, no. 3 (1997): 209-223.

Schön, Lennart. En modern svensk ekonomisk historia: Tillväxt och omvandling under två sekel (A Modern Swedish Economic History: Growth and Transformation in Two Centuries). Stockholm: SNS, 2000.

Schön, Lennart. “Total Factor Productivity in Swedish Manufacturing in the Period 1870-2000.” In Exploring Economic Growth: Essays in Measurement and Analysis: A Festschrift for Riitta Hjerppe on Her Sixtieth Birthday, edited by S. Heikkinen and J.L. van Zanden. Amsterdam: Aksant, 2004.

Schön, Lennart. “Swedish Industrialization 1870-1930 and the Heckscher-Ohlin Theory.” In Eli Heckscher, International Trade, and Economic History, edited by Ronald Findlay et al. Cambridge, MA: MIT Press, 2006.

Svennilson, Ingvar. Growth and Stagnation in the European Economy. Geneva: United Nations Economic Commission for Europe, 1954.

Temin, Peter. “The Golden Age of European Growth Reconsidered.” European Review of Economic History 6, no. 1 (2002): 3-22.

Williamson, Jeffrey G. “The Evolution of Global Labor Markets since 1830: Background Evidence and Hypotheses.” Explorations in Economic History 32, no. 2 (1995): 141-96.

Citation: Schön, Lennart. “Sweden – Economic Growth and Structural Change, 1800-2000.” EH.Net Encyclopedia, edited by Robert Whaples. February 10, 2008. URL http://eh.net/encyclopedia/sweden-economic-growth-and-structural-change-1800-2000/

The 1929 Stock Market Crash

Harold Bierman, Jr., Cornell University

Overview

The 1929 stock market crash is conventionally said to have occurred on Thursday the 24th and Tuesday the 29th of October. These two dates have been dubbed “Black Thursday” and “Black Tuesday,” respectively. On September 3, 1929, the Dow Jones Industrial Average reached a record high of 381.2. At the end of the market day on Thursday, October 24, the market was at 299.5 — a 21 percent decline from the high. On this day the market fell 33 points — a drop of 9 percent — on trading that was approximately three times the normal daily volume for the first nine months of the year. By all accounts, there was a selling panic. By November 13, 1929, the market had fallen to 199. By the time the crash was completed in 1932, following an unprecedentedly large economic depression, stocks had lost nearly 90 percent of their value.

The events of Black Thursday are normally defined to be the start of the stock market crash of 1929-1932, but the series of events leading to the crash started before that date. This article examines the causes of the 1929 stock market crash. While no consensus exists about its precise causes, the article will critique some arguments and support a preferred set of conclusions. It argues that one of the primary causes was the attempt by important people and the media to stop market speculators. A second probable cause was the great expansion of investment trusts, public utility holding companies, and the amount of margin buying, all of which fueled the purchase of public utility stocks, and drove up their prices. Public utilities, utility holding companies, and investment trusts were all highly levered using large amounts of debt and preferred stock. These factors seem to have set the stage for the triggering event. This sector was vulnerable to the arrival of bad news regarding utility regulation. In October 1929, the bad news arrived and utility stocks fell dramatically. After the utilities decreased in price, margin buyers had to sell and there was then panic selling of all stocks.

The Conventional View

The crash helped bring on the depression of the thirties and the depression helped to extend the period of low stock prices, thus “proving” to many that the prices had been too high.

Laying the blame for the “boom” on speculators was common in 1929. Thus, immediately upon learning of the crash of October 24, John Maynard Keynes (Moggridge, 1981, p. 2 of Vol. XX) wrote in the New York Evening Post (25 October 1929) that “The extraordinary speculation on Wall Street in past months has driven up the rate of interest to an unprecedented level.” And the Economist, when stock prices reached their low for the year, repeated the theme that the U.S. stock market had been too high (November 2, 1929, p. 806): “there is warrant for hoping that the deflation of the exaggerated balloon of American stock values will be for the good of the world.” The key phrases in these quotations are “exaggerated balloon of American stock values” and “extraordinary speculation on Wall Street.” Likewise, President Herbert Hoover saw increasing stock market prices leading up to the crash as a speculative bubble manufactured by the mistakes of the Federal Reserve Board. “One of these clouds was an American wave of optimism, born of continued progress over the decade, which the Federal Reserve Board transformed into the stock-exchange Mississippi Bubble” (Hoover, 1952). Thus, the common viewpoint was that stock prices were too high.

There is much to criticize in conventional interpretations of the 1929 stock market crash, however. (Even the name is inexact. The largest losses to the market did not come in October 1929 but rather in the following two years.) In December 1929, many expert economists, including Keynes and Irving Fisher, felt that the financial crisis had ended and by April 1930 the Standard and Poor 500 composite index was at 25.92, compared to a 1929 close of 21.45. There are good reasons for thinking that the stock market was not obviously overvalued in 1929 and that it was sensible to hold most stocks in the fall of 1929 and to buy stocks in December 1929 (admittedly this investment strategy would have been terribly unsuccessful).

Were Stocks Obviously Overpriced in October 1929?
Debatable — Economic Indicators Were Strong

From 1925 to the third quarter of 1929, common stocks increased in value by 120 percent in four years, a compound annual growth of 21.8%. While this is a large rate of appreciation, it is not obvious proof of an “orgy of speculation.” The decade of the 1920s was extremely prosperous and the stock market with its rising prices reflected this prosperity as well as the expectation that the prosperity would continue.
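As a quick check, the compound annual rate implied by a 120 percent gain over four years is

\[
(1 + 1.20)^{1/4} - 1 = 2.20^{0.25} - 1 \approx 0.218,
\]

that is, roughly 21.8 percent per year, matching the figure in the text.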

The fact that the stock market lost 90 percent of its value from 1929 to 1932 indicates that the market, at least using one criterion (actual performance of the market), was overvalued in 1929. John Kenneth Galbraith (1961) implies that there was a speculative orgy and that the crash was predictable: “Early in 1928, the nature of the boom changed. The mass escape into make-believe, so much a part of the true speculative orgy, started in earnest.” Galbraith had no difficulty in 1961 identifying the end of the boom in 1929: “On the first of January of 1929, as a matter of probability, it was most likely that the boom would end before the year was out.”

Compare this position with the fact that Irving Fisher, one of the leading economists in the U.S. at the time, was heavily invested in stocks and was bullish before and after the October sell offs; he lost his entire wealth (including his house) before stocks started to recover. In England, John Maynard Keynes, possibly the world’s leading economist during the first half of the twentieth century, and an acknowledged master of practical finance, also lost heavily. Paul Samuelson (1979) quotes P. Sergeant Florence (another leading economist): “Keynes may have made his own fortune and that of King’s College, but the investment trust of Keynes and Dennis Robertson managed to lose my fortune in 1929.”

Galbraith’s ability to ‘forecast’ the market turn is not shared by all. Samuelson (1979) admits that: “playing as I often do the experiment of studying price profiles with their dates concealed, I discovered that I would have been caught by the 1929 debacle.” For many, the collapse from 1929 to 1933 was neither foreseeable nor inevitable.

The stock price increases leading up to October 1929 were not driven solely by fools or speculators. There were also intelligent, knowledgeable investors who were buying or holding stocks in September and October 1929. Moreover, leading economists, both then and now, could neither anticipate nor explain the October 1929 decline of the market. Thus, the conviction that stocks were obviously overpriced is somewhat of a myth.

The nation’s total real income rose from 1921 to 1923 by 10.5% per year, and from 1923 to 1929, it rose 3.4% per year. The 1920s were, in fact, a period of real growth and prosperity. For the period of 1923-1929, wholesale prices went down 0.9% per year, reflecting moderate stable growth in the money supply during a period of healthy real growth.

Examining the manufacturing situation in the United States prior to the crash is also informative. Irving Fisher’s Stock Market Crash and After (1930) offers much data indicating that there was real growth in the manufacturing sector. The evidence presented goes a long way toward explaining Fisher’s optimism regarding the level of stock prices. What Fisher saw was rapidly increasing manufacturing efficiency (output per worker), along with rising manufacturing output and use of electricity.

The financial fundamentals of the markets were also strong. During 1928, the price-earnings ratio for 45 industrial stocks increased from approximately 12 to approximately 14. It was over 15 in 1929 for industrials and then decreased to approximately 10 by the end of 1929. While not low, these price-earnings (P/E) ratios were by no means out of line historically. Values in this range would be considered reasonable by most market analysts today. For example, the P/E ratio of the S & P 500 in July 2003 reached a high of 33 and in May 2004 the high was 23.

The rise in stock prices was not uniform across all industries. The stocks that went up the most were in industries where the economic fundamentals indicated there was cause for large amounts of optimism. They included airplanes, agricultural implements, chemicals, department stores, steel, utilities, telephone and telegraph, electrical equipment, oil, paper, and radio. These were reasonable choices for expectations of growth.

To put the P/E ratios of 10 to 15 in perspective, note that government bonds in 1929 yielded 3.4%. Industrial bonds of investment grade were yielding 5.1%. Consider that an interest rate of 5.1% represents a 1/(0.051) = 19.6 price-earnings ratio for debt.
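Applying the same reciprocal logic to the 3.4 percent government bond yield gives

\[
\frac{1}{0.034} \approx 29.4,
\]

so stocks priced at P/E ratios of 10 to 15 carried earnings yields of roughly 7 to 10 percent, well above the yields then available on either government or investment-grade industrial bonds.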

In 1930, the Federal Reserve Bulletin reported production in 1920 at an index of 87.1 The index went down to 67 in 1921, then climbed steadily (except for 1924) until it reached 125 in 1929. This is an annual growth rate in production of 3.1%. During the period commodity prices actually decreased. The production record for the ten-year period was exceptionally good.

Factory payrolls in September were at an index of 111 (an all-time high). In October the index dropped to 110, which beat all previous months and years except for September 1929. The factory employment measures were consistent with the payroll index.

The September unadjusted measure of freight car loadings was at 121 — also an all-time record.2 In October the loadings dropped to 118, which was a performance second only to September’s record measure.

J.W. Kendrick (1961) shows that the period 1919-1929 had an unusually high rate of change in total factor productivity. The annual rate of change of 5.3% for 1919-1929 for the manufacturing sector was more than twice the 2.5% rate of the second best period (1948-1953). Farming productivity change for 1919-1929 was second only to the period 1929-1937. Overall, the period 1919-1929 easily took first place for productivity increases, handily beating the six other time periods studied by Kendrick (all of them prior to 1961), with an annual productivity change measure of 3.7%. This was outstanding economic performance — performance which normally would justify stock market optimism.

In the first nine months of 1929, 1,436 firms announced increased dividends. In 1928 the number was only 955, and in 1927 it was 755. In September 1929 dividend increases were announced by 193 firms, compared with 135 the year before. The financial news from corporations was very positive in September and October 1929.

The May issue of the National City Bank of New York Newsletter indicated that the earnings statements of surveyed firms for the first quarter showed a 31% increase compared to the first quarter of 1928. The August issue showed that for 650 firms the increase for the first six months of 1929, compared to 1928, was 24.4%. In September, the results were expanded to 916 firms, with a 27.4% increase. The earnings for the third quarter for 638 firms were calculated to be 14.1% larger than in 1928. This is evidence that the general level of business activity and reported profits were excellent at the end of September 1929 and the middle of October 1929.

Barrie Wigmore (1985) researched 1929 financial data for 135 firms. The market price as a percentage of year-end book value was 420% using the high prices and 181% using the low prices. However, the return on equity for the firms (using the year-end book value) was a high 16.5%. The dividend yield was 2.96% using the high stock prices and 5.9% using the low stock prices.

Article after article from January to October in business magazines carried news of outstanding economic performance. E.K. Berger and A.M. Leinbach, two staff writers of the Magazine of Wall Street, wrote in June 1929: “Business so far this year has astonished even the perennial optimists.”

To summarize: There was little hint of a severe weakness in the real economy in the months prior to October 1929. There is a great deal of evidence that in 1929 stock prices were not out of line with the real economics of the firms that had issued the stock. Leading economists were betting that common stocks in the fall of 1929 were a good buy. Conventional financial reports of corporations gave cause for optimism relative to the 1929 earnings of corporations. Price-earnings ratios, dividend amounts and changes in dividends, and earnings and changes in earnings all gave cause for stock price optimism.

Table 1 shows the average of the highs and lows of the Dow Jones Industrial Index for 1922 to 1932.

Table 1
Dow-Jones Industrials Index: Average of Yearly Lows and Highs
1922 91.0
1923 95.6
1924 104.4
1925 137.2
1926 150.9
1927 177.6
1928 245.6
1929 290.0
1930 225.8
1931 134.1
1932 79.4

Sources: 1922-1929 measures are from the Stock Market Study, U.S. Senate, 1955, pp. 40, 49, 110, and 111; 1930-1932 Wigmore, 1985, pp. 637-639.

Using the information in Table 1, from 1922 to 1929 stocks rose in value by 218.7%. This is equivalent to an 18% annual growth rate in value over the seven years. From 1929 to 1932 stocks lost 73% of their value (different indices, measured at different times, would give different measures of the increase and decrease). The price increases were large, but not beyond comprehension. The price decreases taken to 1932 were consistent with the fact that by 1932 there was a worldwide depression.
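These figures follow directly from the Table 1 averages:

\[
\frac{290.0}{91.0} \approx 3.19 \;(\text{a rise of about 219 percent}), \qquad 3.19^{1/7} \approx 1.18,
\]
\[
1 - \frac{79.4}{290.0} \approx 0.73,
\]

that is, roughly 18 percent annual growth from 1922 to 1929 and a loss of roughly 73 percent of value from 1929 to 1932.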

If we take the September 1929 high of 386 and the 1929 year-end value of 248.5, the market lost 36% of its value during that four-month period. Most of us, if we had held stock in September 1929, would not have sold early in October. In fact, if I had had money to invest, I would have purchased after the major break on Black Thursday, October 24. (I would have been sorry.)

Events Precipitating the Crash

Although it can be argued that the stock market was not overvalued, there is evidence that many feared that it was overvalued — including the Federal Reserve Board and the United States Senate. By 1929, there were many who felt the market price of equity securities had increased too much, and this feeling was reinforced daily by the media and statements by influential government officials.

What precipitated the October 1929 crash?

My research minimizes the importance of several candidates that are frequently cited by others (see Bierman 1991, 1998, 1999, and 2001).

  • The market did not fall just because it was too high — as argued above it is not obvious that it was too high.
  • The actions of the Federal Reserve, while not always wise, cannot be directly identified with the October stock market crashes in an important way.
  • The Smoot-Hawley tariff, while looming on the horizon, was not cited by the news sources in 1929 as a factor, and was probably not important to the October 1929 market.
  • The Hatry Affair in England was not material for the New York Stock Exchange and the timing did not coincide with the October crashes.
  • Business activity news in October was generally good and there were very few hints of a coming depression.
  • Short selling and bear raids were not large enough to move the entire market.
  • Fraud and other illegal or immoral acts were not material, despite the attention they have received.

Barsky and DeLong (1990, p. 280) stress the importance of fundamentals rather than fads or fashions. “Our conclusion is that major decade-to-decade stock market movements arise predominantly from careful re-evaluation of fundamentals and less so from fads or fashions.” The argument below is consistent with their conclusion, but with one major exception. In September 1929, the market value of one segment of the market, the public utility sector, should be judged against the fundamentals existing at the time, and those fundamentals seem to have changed considerably in October 1929.

A Look at the Financial Press

On Thursday, October 3, 1929, the Washington Post exclaimed in a page 1 headline, “Stock Prices Crash in Frantic Selling.” The New York Times of October 4 headed a page 1 article with “Year’s Worst Break Hits Stock Market.” The article on the first page of the Times cited three contributing factors:

  • A large broker loan increase was expected (the article stated that the loans increased, but the increase was not as large as expected).
  • The statement by Philip Snowden, England’s Chancellor of the Exchequer, that described America’s stock market as a “speculative orgy.”
  • Weakening of margin accounts making it necessary to sell, which further depressed prices.

While the 1928 and 1929 financial press focused extensively and excessively on broker loans and margin account activity, the statement by Snowden is the only unique relevant news event on October 3. The October 4 (p. 20) issue of the Wall Street Journal also reported the remark by Snowden that there was “a perfect orgy of speculation.” Also, on October 4, the New York Times made another editorial reference to Snowden’s American speculation orgy. It added that “Wall Street had come to recognize its truth.” The editorial also quoted Secretary of the Treasury Mellon that investors “acted as if the price of securities would infinitely advance.” The Times editor obviously thought there was excessive speculation, and agreed with Snowden.

The stock market went down on October 3 and October 4, but almost all reported business news was very optimistic. The primary negative news item was the statement by Snowden regarding the amount of speculation in the American stock market. The market had been subjected to a barrage of statements throughout the year that there was excessive speculation and that the level of stock prices was too high. There is a possibility that the Snowden comment reported on October 3 was the push that started the boulder down the hill, but there were other events that also jeopardized the level of the market.

On August 8, the Federal Reserve Bank of New York had increased the rediscount rate from 5 to 6%. On September 26 the Bank of England raised its discount rate from 5.5 to 6.5%. England was losing gold as a result of investment in the New York Stock Exchange and wanted to decrease this investment. The Hatry Case also happened in September. It was first reported on September 29, 1929. Both the collapse of the Hatry industrial empire and the increase in the investment returns available in England resulted in shrinkage of English investment (especially the financing of broker loans) in the United States, adding to the market instability in the beginning of October.

Wednesday, October 16, 1929

On Wednesday, October 16, stock prices again declined. The Washington Post (October 17, p. 1) reported “Crushing Blow Again Dealt Stock Market.” Remember, the start of the stock market crash is conventionally identified with Black Thursday, October 24, but there were price declines on October 3, 4, and 16.

The news reports of the Post on October 17 and subsequent days are important since they were Associated Press (AP) releases, thus broadly read throughout the country. The Associated Press reported (p. 1) “The index of 20 leading public utilities computed for the Associated Press by the Standard Statistics Co. dropped 19.7 points to 302.4 which contrasts with the year’s high established less than a month ago.” This index had also dropped 18.7 points on October 3 and 4.3 points on October 4. The Times (October 17, p. 38) reported, “The utility stocks suffered most as a group in the day’s break.”

The economic news after the price drops of October 3 and October 4 had been good. But the deluge of bad news regarding public utility regulation seems to have truly upset the market. On Saturday, October 19, the Washington Post headlined (p. 13) “20 Utility Stocks Hit New Low Mark” and (Associated Press) “The utility shares again broke wide open and the general list came tumbling down almost half as far.” The October 20 issue of the Post had another relevant AP article (p. 12) “The selling again concentrated today on the utilities, which were in general depressed to the lowest levels since early July.”

An evaluation of the October 16 break in the New York Times on Sunday, October 20 (pp. 1 and 29) gave the following favorable factors:

  • stable business condition
  • low money rates (5%)
  • good retail trade
  • revival of the bond market
  • buying power of investment trusts
  • largest short interest in history (this is the total dollar value of stock sold where the investors do not own the stock they sold)

The following negative factors were described:

  • undigested investment trusts and new common stock shares
  • increase in broker loans
  • some high stock prices
  • agricultural prices lower
  • nervous market

The negative factors were not very upsetting to an investor if one was optimistic that the real economic boom (business prosperity) would continue. The Times failed to consider the impact on the market of the news concerning the regulation of public utilities.

Monday, October 21, 1929

On Monday, October 21, the market went down again. The Times (October 22) identified the causes to be

  • margin sellers (buyers on margin being forced to sell)
  • foreign money liquidating
  • skillful short selling

The same newspaper carried an article about a talk by Irving Fisher (p. 24): “Fisher says prices of stocks are low.” Fisher also defended investment trusts as offering investors diversification, and thus reduced risk. He was reminded by a person attending the talk that in May he had “pointed out that predicting the human behavior of the market was quite different from analyzing its economic soundness.” Fisher was better with fundamentals than with market psychology.

Wednesday, October 23, 1929

On Wednesday, October 23 the market tumbled. The Times headline (October 24, p. 1) said “Prices of Stocks Crash in Heavy Liquidation.” The Washington Post (p. 1) had “Huge Selling Wave Creates Near-Panic as Stocks Collapse.” Out of a total market value of $87 billion the market declined $4 billion — a 4.6% drop. If the events of the next day (Black Thursday) had not occurred, October 23 would have gone down in history as a major stock market event. But October 24 was to make the “Crash” of October 23 become merely a “Dip.”

The Times lamented (October 24, p. 38) “There was hardly a single item of news which might be construed as bearish.”

Thursday, October 24, 1929

Thursday, October 24 (Black Thursday) was a 12,894,650 share day (the previous record was 8,246,742 shares on March 26, 1929) on the NYSE. The headline on page one of the Times (October 25) was “Treasury Officials Blame Speculation.”

The Times (p. 41) moaned that the cost of call money had been 20% in March and the price break in March was understandable. (A call loan is a loan payable on demand of the lender.) Call money on October 24 cost only 5%. There should not have been a crash. The Friday Wall Street Journal (October 25) gave New York bankers credit for stopping the price decline with $1 billion of support.

The Washington Post (October 26, p. 1) reported “Market Drop Fails to Alarm Officials.” The “officials” were all in Washington. The rest of the country seemed alarmed. On October 25, the market gained. President Hoover made a statement on Friday regarding the excellent state of business, but then added how building and construction had been adversely “affected by the high interest rates induced by stock speculation” (New York Times, October 26, p. 1). A Times editorial (p. 16) quoted Snowden’s “orgy of speculation” again.

Tuesday, October 29, 1929

The Sunday, October 27 edition of the Times had a two-column article “Bay State Utilities Face Investigation.” It implied that regulation in Massachusetts was going to be less friendly towards utilities. Stocks again went down on Monday, October 28. There were 9,212,800 shares traded (3,000,000 in the final hour). The Times on Tuesday, October 29 again carried an article on the New York public utility investigating committee being critical of the rate making process. October 29 was “Black Tuesday.” The headline the next day was “Stocks Collapse in 16,410,030 Share Day” (October 30, p. 1). Stocks lost nearly $16 billion in the month of October or 18% of the beginning of the month value. Twenty-nine public utilities (tabulated by the New York Times) lost $5.1 billion in the month, by far the largest loss of any of the industries listed by the Times. The value of the stocks of all public utilities went down by more than $5.1 billion.

An Interpretive Overview of Events and Issues

My interpretation of these events is that the statement by Snowden, Chancellor of the Exchequer, indicating the presence of a speculative orgy in America is likely to have triggered the October 3 break. Public utility stocks had been driven up by an explosion of investment trust formation and investing. The trusts, to a large extent, bought stock on margin with funds loaned not by banks but by “others.” These funds were very sensitive to any market weakness. Public utility regulation was being reviewed by the Federal Trade Commission, New York City, New York State, and Massachusetts, and these reviews were watched by the other regulatory commissions and by investors. The sell-off of utility stocks from October 16 to October 23 weakened prices and created “margin selling” and withdrawal of capital by the nervous “other” money. Then on October 24, the selling panic happened.

There are three topics that require expansion. First, there is the setting of the climate concerning speculation that may have led to the possibility of relatively specific issues being able to trigger a general market decline. Second, there are investment trusts, utility holding companies, and margin buying that seem to have resulted in one sector being very over-levered and overvalued. Third, there are the public utility stocks that appear to be the best candidate as the actual trigger of the crash.

Contemporary Worries of Excessive Speculation

During 1929, the public was bombarded with statements of outrage by public officials regarding the speculative orgy taking place on the New York Stock Exchange. If the media say something often enough, a large percentage of the public may come to believe it. By October 29 the overall opinion was that there had been excessive speculation and the market had been too high. Galbraith (1961), Kindleberger (1978), and Malkiel (1996) all clearly accept this assumption. The Federal Reserve Bulletin of February 1929 states that the Federal Reserve would restrain the use of “credit facilities in aid of the growth of speculative credit.”

In the spring of 1929, the U.S. Senate adopted a resolution stating that the Senate would support legislation “necessary to correct the evil complained of and prevent illegitimate and harmful speculation” (Bierman, 1991).

The President of the Investment Bankers Association of America, Trowbridge Callaway, gave a talk in which he spoke of “the orgy of speculation which clouded the country’s vision.”

Adolph Casper Miller, an outspoken member of the Federal Reserve Board from its beginning, described 1929 as “this period of optimism gone wild and cupidity gone drunk.”

Myron C. Taylor, head of U.S. Steel, described “the folly of the speculative frenzy that lifted securities to levels far beyond any warrant of supporting profits.”

Herbert Hoover’s becoming president in March 1929 was a very significant event. He was a good friend and neighbor of Adolph Miller (see above), and Miller reinforced Hoover’s fears. Hoover was an aggressive foe of speculation. For example, he wrote, “I sent individually for the editors and publishers of major newspapers and magazines and requested them systematically to warn the country against speculation and the unduly high price of stocks.” Hoover then pressured Secretary of the Treasury Andrew Mellon and Governor of the Federal Reserve Board Roy Young “to strangle the speculative movement.” In his memoirs (1952) he titled his Chapter 2 “We Attempt to Stop the Orgy of Speculation,” reflecting Snowden’s influence.

Buying on Margin

Margin buying during the 1920s was not controlled by the government. It was controlled by brokers interested in their own well-being. The average margin requirement was 50% of the stock price prior to October 1929. On selected stocks, it was as high as 75%. When the crash came, no major brokerage firm was bankrupted, because the brokers managed their finances in a conservative manner. At the end of October, margin requirements were lowered to 25%.

Brokers’ loans received a lot of attention in England, as they did in the United States. The Financial Times reported the level and the changes in the amount regularly. For example, the October 4 issue indicated that on October 3 broker loans reached a record high as money rates dropped from 7.5% to 6%. By October 9, money rates had dropped further, to below 6%. Thus, investors prior to October 24 had relatively easy access to funds at the lowest rate since July 1928.

The Financial Times (October 7, 1929, p. 3) reported that the President of the American Bankers Association was concerned about the level of credit for securities and had given a talk in which he stated, “Bankers are gravely alarmed over the mounting volume of credit being employed in carrying security loans, both by brokers and by individuals.” The Financial Times was also concerned with the buying of investment trusts on margin and the lack of credit to support the bull market.

My conclusion is that the margin buying was a likely factor in causing stock prices to go up, but there is no reason to conclude that margin buying triggered the October crash. Once the selling rush began, however, the calling of margin loans probably exacerbated the price declines. (A calling of margin loans requires the stock buyer to contribute more cash to the broker or the broker sells the stock to get the cash.)

Investment Trusts

By 1929, investment trusts were very popular with investors. These trusts were the 1929 version of closed-end mutual funds. In recent years, seasoned closed-end mutual funds have generally sold at a discount to their fundamental value. The fundamental value is the sum of the market values of the fund’s components (securities in the portfolio). In 1929, the investment trusts sold at a premium — i.e., higher than the value of the underlying stocks. Malkiel concludes (p. 51) that this “provides clinching evidence of wide-scale stock-market irrationality during the 1920s.” However, Malkiel also notes (p. 442) that “as of the mid-1990’s, Berkshire Hathaway shares were selling at a hefty premium over the value of assets it owned.” Warren Buffett is the guiding force behind Berkshire Hathaway’s great success as an investor. If we were to conclude that rational investors would currently pay a premium for Warren Buffett’s expertise, then we should reject the conclusion that the 1929 market was obviously irrational. We have current evidence that rational investors will pay a premium for what they consider to be superior money management skills.
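As a purely illustrative aside, the premium or discount of a closed-end fund is simply its share price measured against the per-share market value of its portfolio. The short Python sketch below uses hypothetical numbers, not 1929 data:

    # Illustrative sketch only: hypothetical figures, not historical data.
    # A closed-end fund's premium (or discount) compares its share price
    # with the per-share market value of the securities it holds (its NAV).

    def premium(share_price, nav_per_share):
        """Premium over net asset value as a fraction; negative = discount."""
        return share_price / nav_per_share - 1.0

    print(f"{premium(30.0, 20.0):+.0%}")  # +50%: a 1929-style trust at a premium
    print(f"{premium(18.0, 20.0):+.0%}")  # -10%: a seasoned fund at a discount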

There were $1 billion of investment trusts sold to investors in the first eight months of 1929, compared to $400 million in all of 1928. The Economist reported that this was important (October 12, 1929, p. 665): “Much of the recent increase is to be accounted for by the extraordinary burst of investment trust financing.” In September alone $643 million was invested in investment trusts (Financial Times, October 21, p. 3). While the two sets of numbers (from the Economist and the Financial Times) are not exactly comparable, both sets of numbers indicate that investment trusts had become very popular by October 1929.

The common stocks of trusts that had used debt or preferred stock leverage were particularly vulnerable to the stock price declines. For example, the Goldman Sachs Trading Corporation was highly levered with preferred stock and the value of its common stock fell from $104 a share to less than $3 in 1933. Many of the trusts were levered, but the leverage of choice was not debt but rather preferred stock.

In concept, investment trusts were sensible. They offered expert management and diversification. Unfortunately, in 1929 a diversification of stocks was not going to be a big help given the universal price declines. Irving Fisher on September 6, 1929 was quoted in the New York Herald Tribune as stating: “The present high levels of stock prices and corresponding low levels of dividend returns are due largely to two factors. One, the anticipation of large dividend returns in the immediate future; and two, reduction of risk to investors largely brought about through investment diversification made possible for the investor by investment trusts.”

If a researcher could find out the composition of the portfolio of a couple of dozen of the largest investment trusts as of September-October 1929 this would be extremely helpful. Seven important types of information that are not readily available but would be of interest are:

  • The percentage of the portfolio that was public utilities.
  • The extent of diversification.
  • The percentage of the portfolios that was NYSE firms.
  • The investment turnover.
  • The ratio of market price to net asset value at various points in time.
  • The amount of debt and preferred stock leverage used.
  • Who bought the trusts and how long they held.

The ideal information to establish whether market prices are excessively high compared to intrinsic values is to have both the prices and well-defined intrinsic values at the same moment in time. For a normal financial security, this is impossible since the intrinsic values are not objectively well defined. There are two exceptions. DeLong and Shleifer (1991) followed one path, very cleverly choosing to study closed-end mutual funds. Some of these funds were traded on the stock market, and the market values of the securities in the funds’ portfolios are a very reasonable estimate of the intrinsic value. DeLong and Shleifer state (1991, p. 675):

“We use the difference between prices and net asset values of closed-end mutual funds at the end of the 1920s to estimate the degree to which the stock market was overvalued on the eve of the 1929 crash. We conclude that the stocks making up the S&P composite were priced at least 30 percent above fundamentals in late summer, 1929.”

Unfortunately (p. 682) “portfolios were rarely published and net asset values rarely calculated.” It was only after the crash that investment trusts routinely began to reveal their net asset values. In the third quarter of 1929 (p. 682), “three types of event seemed to trigger a closed-end fund’s publication of its portfolio.” The three events were (1) listing on the New York Stock Exchange (most of the trusts were not listed), (2) the start-up of a new closed-end fund (whose stock price reflects selling pressure), and (3) shares selling at a discount from net asset value (in September 1929 most trusts were not selling at a discount, so including any that were introduces a bias). After 1929, some trusts revealed 1929 net asset values. Thus, DeLong and Shleifer lacked the amount and quality of information that would have allowed definite conclusions. In fact, if investors also lacked information regarding portfolio composition, we would have to place investment trusts in a unique investment category in which investment decisions were made without reliable financial statements. If investors in the third quarter of 1929 did not know the current net asset value of investment trusts, this fact is itself significant.

The closed-end funds were an attractive vehicle to study since the market for investment trusts in 1929 was large and growing rapidly. In August and September alone over $1 billion of new funds were launched. DeLong and Shleifer found the premiums of price over value to be large — the median was about 50% in the third quarter of 1929 (p. 678). But they worried about the validity of their study because funds were not selected randomly.

DeLong and Shleifer had limited data (pp. 698-699). For example, for September 1929 there were two observations, for August 1929 there were five, and for July there were nine. The nine funds observed in July 1929 had the following premia: 277%, 152%, 48%, 22%, 18% (2 times), 8% (3 times). Given that closed-end funds tend to sell at a discount, the positive premiums are interesting. Given the conventional perspective in 1929 that financial experts could manage money better than a person not plugged into the Street, it is not surprising that some investors were willing to pay for expertise and to buy shares in investment trusts. Thus, a premium for investment trusts does not imply the same premium for other stocks.

The Public Utility Sector

In addition to investment trusts, intrinsic values are usually well defined for regulated public utilities. The general rule applied by regulatory authorities is to allow utilities to earn a “fair return” on an allowed rate base. The fair return is defined to be equal to a utility’s weighted average cost of capital. There are several reasons why a public utility can earn more or less than a fair return, but the target set by the regulatory authority is the weighted average cost of capital.

Thus, if a utility has an allowed equity rate base of $X and is allowed to earn a return of r (rX in dollar terms), then after one year the firm’s equity will be worth X + rX, or (1 + r)X, which has a present value of X. (This assumes that r is the return required by the market as well as the return allowed by regulators.) Thus, the present value of the equity is equal to the present rate base, and the stock price should be equal to the rate base per share. Given the nature of public utility accounting, the book value of a utility’s stock is approximately equal to the rate base.
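Written out as a one-line restatement of the arithmetic above (under the stated assumption that the allowed return equals the market’s required return r):

    PV = \frac{X + rX}{1 + r} = \frac{(1 + r)X}{1 + r} = X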

There can be time periods when the utility earns more (or less) than the allowed return. The reasons for this include regulatory lag, changes in efficiency, changes in the weather, and changes in the mix and number of customers. Also, the cost of equity may differ from the allowed return because regulators estimated it inaccurately or because capital market conditions changed. Thus, the stock price may differ from the book value, but one would not expect the stock price to stay very far from the book value per share for very long. There should be a tendency for the stock price to revert to the book value for a public utility supplying an essential service where there is no effective competition and the rate commission is effectively allowing a fair return to be earned.

In 1929, public utility stock prices were in excess of three times their book values. Consider, for example, the following measures (Wigmore, 1985, p. 39) for five operating utilities.

Firm                             1929 Price-Earnings Ratio (at High Price for Year)   Market Price/Book Value
Commonwealth Edison              35                                                    3.31
Consolidated Gas of New York     39                                                    3.34
Detroit Edison                   35                                                    3.06
Pacific Gas & Electric           28                                                    3.30
Public Service of New Jersey     35                                                    3.14

Sooner or later this price bubble had to break unless the regulatory authorities were to decide to allow the utilities to earn more than a fair return, or an infinite stream of greater fools existed. The decision made by the Massachusetts Public Utility Commission in October 1929 applicable to the Edison Electric Illuminating Company of Boston made clear that neither of these improbable events was going to happen (see below).

The utilities bubble did burst. Between the end of September and the end of November 1929, industrial stocks fell by 48%, railroads by 32% and utilities by 55% — thus utilities dropped the furthest from the highs. A comparison of the beginning of the year prices and the highest prices is also of interest: industrials rose by 20%, railroads by 19%, and utilities by 48%. The growth in value for utilities during the first nine months of 1929 was more than twice that of the other two groups.

The following high and low prices for 1929 for a typical set of public utilities and holding companies illustrate how severely public utility prices were hit by the crash (New York Times, 1 January 1930 quotations.)

1929
Firm                                 High Price   Low Price   Low Price Divided by High Price
American Power & Light               175 3/8      64 1/4      .37
American Superpower                  71 1/8       15          .21
Brooklyn Gas                         248 1/2      99          .44
Buffalo, Niagara & Eastern Power     128          61 1/8      .48
Cities Service                       68 1/8       20          .29
Consolidated Gas Co. of N.Y.         183 1/4      80 1/8      .44
Electric Bond and Share              189          50          .26
Long Island Lighting                 91           40          .44
Niagara Hudson Power                 30 3/4       11 1/4      .37
Transamerica                         67 3/8       20 1/4      .30

Singling out one segment of the market as the cause of a general break is not obviously correct. But the combination of an overpriced utility segment, investment trusts, and a portion of the market that had purchased on margin appears to be a viable explanation. In addition, as of September 1, 1929, the utilities industry represented $14.8 billion of value, or 18% of the value of the outstanding shares on the NYSE. Thus, utilities were a large sector, capable of exerting a powerful influence on the overall market. Moreover, many contemporaries pointed to the utility sector as an important force in triggering the market decline.

The October 19, 1929 issue of the Commercial and Financial Chronicle identified the main depressing influences on the market to be the indications of a recession in steel and the refusal of the Massachusetts Department of Public Utilities to allow Edison Electric Illuminating Company of Boston to split its stock. The explanations offered by the Department — that the stock was not worth its price and the company’s dividend would have to be reduced — made the situation worse.

The Washington Post (October 17, p. 1), in explaining the October 16 market declines (an Associated Press release), reported, “Professional traders also were obviously distressed at the printed remarks regarding inflation of power and light securities by the Massachusetts Public Utility Commission in its recent decision.”

Straws That Broke the Camel’s Back?

Edison Electric of Boston

On August 2, 1929, the New York Times reported that the Directors of the Edison Electric Illuminating Company of Boston had called a meeting of stockholders to obtain authorization for a stock split. The stock went up to a high of $440. Its book value was $164 (the ratio of price to book value was 2.6, which was less than many other utilities).

On Saturday (October 12, p. 27) the Times reported that on Friday the Massachusetts Department of Public Utilities had rejected the stock split. The heading said “Bars Stock Split by Boston Edison. Criticizes Dividend Policy. Holds Rates Should Not Be Raised Until Company Can Reduce Charge for Electricity.” Boston Edison lost 15 points for the day even though the decision was released after the Friday closing. The high for the year was $440 and the stock closed at $360 on Friday.

The Massachusetts Department of Public Utilities (New York Times, October 12, p. 27) did not want to imply to investors that this was the “forerunner of substantial increases in dividends.” They stated that the expectation of increased dividends was not justified, offered “scathing criticisms of the company” (October 16, p. 42) and concluded “the public will take over such utilities as try to gobble up all profits available.”

On October 15, the Boston City Council advised the mayor to initiate legislation for public ownership of Edison; on October 16, the Department announced it would investigate the level of rates being charged by Edison; and on October 19, it set the dates for the inquiry. On Tuesday, October 15 (p. 41), there was a discussion in the Times of the Massachusetts decision in the column “Topic in Wall Street.” It “excited intense interest in public utility circles yesterday and undoubtedly had effect in depressing the issues of this group. The decision is a far-reaching one and Wall Street expressed the greatest interest in what effect it will have, if any, upon commissions in other States.”

Boston Edison had closed at 360 on Friday, October 11, before the announcement was released. It dropped 61 points at its low on Monday (October 14) but closed at 328, a loss of 32 points.

On October 16 (p. 42), the Times reported that Governor Allen of Massachusetts was launching a full investigation of Boston Edison including “dividends, depreciation, and surplus.”

One major factor leading to the price break for public utilities was the ruling by the Massachusetts Public Utility Commission. The only specific action was that it refused to permit the Edison Electric Illuminating Company of Boston to split its stock. Standard financial theory predicts that the primary effect of a stock split is to reduce the per-share price proportionally while leaving the total value unchanged; thus the denial of the split was not, by itself, economically significant, and the split should have been easy to grant. But the Commission made it clear it had additional messages to communicate. For example, the Financial Times (October 16, 1929, p. 7) reported that the Commission advised the company to “reduce the selling price to the consumer.” Boston was paying $.085 per kilowatt-hour and Cambridge only $.055. There were also rumors of public ownership and a shifting of control. The next day (October 17), the Times reported (p. 3) “The worst pressure was against Public Utility shares” and the headline read “Electric Issue Hard Hit.”

Public Utility Regulation in New York

Massachusetts was not alone in challenging the profit levels of utilities. The Federal Trade Commission, New York City, and New York State were all challenging the status of public utility regulation. New York Governor Franklin D. Roosevelt appointed a committee on October 8 to investigate the regulation of public utilities in the state. The Committee stated, “this inquiry is likely to have far-reaching effects and may lead to similar action in other States.” Both the October 17 and October 19 issues of the Times carried articles regarding the New York investigative committee. Professor Bonbright, a Roosevelt appointee, described the regulatory process as a “vicious system” (October 19, p. 21) that ignored consumers. The Chairman of the Public Service Commission, testifying before the Committee, wanted more control over utility holding companies, especially over management fees and other transfers.

The New York State Committee also noted the increasing importance of investment trusts: “mention of the influence of the investment trust on utility securities is too important for this committee to ignore” (New York Times, October 17, p. 18). They conjectured that the trusts had $3.5 billion to invest, and “their influence has become very important” (p. 18).

In New York City, Mayor Jimmy Walker was fighting graft accusations with statements that his administration would fight aggressively against rate increases, thereby proving, he implied, that he had not accepted bribes (New York Times, October 23). It is reasonable to conclude that the October 16 break was related to the news from Massachusetts and New York.

On October 17, the New York Times (p. 18) reported that the Committee on Public Service Securities of the Investment Banking Association warned against “speculative and uninformed buying.” The Committee published a report in which it asked for care in buying shares in utilities.

On Black Thursday, October 24, the market panic began. The market dropped from 305.87 to 272.32 (a drop of about 34 points, or 11%) before closing at 299.47. The declines were led by the motor stocks and public utilities.

The Public Utility Multipliers and Leverage

Public utilities were a very important segment of the stock market, and even more importantly, any change in public utility stock values resulted in larger changes in equity wealth. In 1929, there were three potentially important multipliers that meant that any change in a public utility’s underlying value would result in a larger value change in the market and in the investor’s value.

Consider the following hypothetical values for a public utility:

Book value per share of the utility: $50.00
Market price per share: $162.50 (see note 2)
Market price of the investment trust holding the stock (assuming a 100% premium over market value): $325.00

If the utility’s $112.50 market price premium over book value were eliminated and the trust’s premium disappeared as well, the investment trust stock would sell at $50. The combined loss in market value of the investment trust stock and the utility stock would then be $387.50: the $112.50 loss in the underlying utility stock plus the $275 reduction in the investment trust stock (from $325 to $50). The public utility holding companies were, in fact, even more vulnerable to a stock price change, since their ratio of price to book value averaged 4.44 (Wigmore, p. 43). The $387.50 loss in market value is the combined loss to investors holding the utility’s stock directly and those holding the investment trust.

For simplicity, this discussion has assumed the trust held all the holding company stock. The effects shown would be reduced if the trust held only a fraction of the stock. However, this discussion has also assumed that no debt or margin was used to finance the investment. Assume the individual investors invested only $162.50 of their money and borrowed $162.50 to buy the investment trust stock costing $325. If the utility stock went down from $162.50 to $50 and the trust still sold at a 100% premium, the trust would sell at $100 and the investors would have lost 100% of their investment since the investors owe $162.50. The vulnerability of the margin investor buying a trust stock that has invested in a utility is obvious.
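The arithmetic of the two preceding paragraphs can be sketched briefly in Python (a hypothetical illustration using the same figures, with the trust assumed to hold the utility stock one-for-one):

    # Hypothetical figures from the example above; the trust is assumed to
    # hold the utility stock one-for-one at a 100% premium.
    book_value    = 50.00                 # utility book value per share
    utility_price = 162.50                # market price, 3.25 x book value
    trust_price   = 2 * utility_price     # 325.00, a 100% premium

    # If the utility falls to book value and the trust premium vanishes:
    utility_loss = utility_price - book_value   # 112.50
    trust_loss   = trust_price - book_value     # 275.00
    print(utility_loss + trust_loss)            # 387.50 combined loss of value

    # A 50% margin buyer of the trust share: $162.50 cash, $162.50 borrowed.
    loan = trust_price / 2
    # If the utility drops to $50 and the trust still carries a 100% premium,
    # the trust share is worth $100 -- less than the loan, so the buyer's
    # entire $162.50 stake is gone (and the position is $62.50 under water).
    print(2 * book_value - loan)                # -62.50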

These highly levered non-operating utilities offered an opportunity for speculation. The holding company typically owned 100% of the operating companies’ stock, and both entities were levered (there could be more than two levels of leverage). There were also holding companies that owned holding companies (e.g., Ebasco). Wigmore (p. 43) lists nine of the largest public utility holding companies: on average, the ratio of their 1929 low price to their high price was 33%. These holding company stocks were even more volatile than those of the operating utilities.

The amount of leverage (both debt and preferred stock) used in the utility sector may have been enormous, but we cannot tell for certain. Assume that a utility purchases an asset that costs $1,000,000 and that asset is financed with 40% stock ($400,000). A utility holding company owns the utility stock and is also financed with 40% stock ($160,000). A second utility holding company owns the first and it is financed with 40% stock ($64,000). An investment trust owns the second holding company’s stock and is financed with 40% stock ($25,600). An investor buys the investment trust’s common stock using 50% margin and investing $12,800 in the stock. Thus, the $1,000,000 utility asset is financed with $12,800 of equity capital.
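A minimal Python sketch of this hypothetical financing chain (40 percent equity at each tier, 50 percent margin for the final investor) is shown below:

    # Hypothetical leverage chain from the paragraph above: each tier
    # finances its holding with 40% stock, and the investor uses 50% margin.
    asset_cost = 1_000_000
    equity = asset_cost
    for tier in ("operating utility", "holding company 1",
                 "holding company 2", "investment trust"):
        equity = equity * 40 // 100   # 400,000 -> 160,000 -> 64,000 -> 25,600
    investor_cash = equity // 2       # the investor buys on 50% margin
    print(investor_cash)              # 12800 of investor equity behind the $1,000,000 asset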

When the large amount of leverage is combined with the inflated prices of the public utility stock, both holding company stocks, and the investment trust, the problem is even more dramatic. Continuing the above example, assume the $1,000,000 asset is again financed with $600,000 of debt and $400,000 of common stock, but the common stock has a $1,200,000 market value. The first utility holding company has $720,000 of debt and $480,000 of common stock. The second holding company has $288,000 of debt and $192,000 of stock. The investment trust has $115,200 of debt and $76,800 of stock. The investor uses $38,400 of margin debt. The $1,000,000 asset is thus supporting $1,761,600 of debt, and the investor’s $38,400 of equity is very much in jeopardy.
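Tallying the debt in this second, market-value version of the example (again purely hypothetical figures) reproduces the totals in the paragraph above:

    # Market-value version: the utility's common stock trades at 3x its
    # $400,000 book value; each higher tier again finances its holding 60/40.
    debts = [600_000]                      # the operating utility's own debt
    stock_value = 1_200_000                # market value of the utility's common
    for tier in range(3):                  # two holding companies, then the trust
        debts.append(stock_value * 60 // 100)   # 720,000; 288,000; 115,200
        stock_value = stock_value * 40 // 100   # equity passed up to the next tier
    debts.append(stock_value // 2)              # 38,400 of investor margin debt
    print(sum(debts))                           # 1761600 of debt on the $1,000,000 asset
    print(stock_value // 2)                     # 38400 of investor equity at risk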

Conclusions and Lessons

Although no consensus has been reached on the causes of the 1929 stock market crash, the evidence cited above suggests that fear of speculation helped push the stock market to the brink of collapse. It is possible that Hoover’s aggressive campaign against speculation, combined with an overpriced public utility sector hit by the Massachusetts Public Utility Commission’s decision and statements and with vulnerable margin investors, triggered the October selling panic and the consequences that followed.

An important first event may have been Lord Snowden’s reference to the speculative orgy in America. The resulting decline in stock prices weakened margin positions. When several governmental bodies indicated that public utilities in the future were not going to be able to justify their market prices, the decreases in utility stock prices resulted in margin positions being further weakened resulting in general selling. At some stage, the selling panic started and the crash resulted.

What can we learn from the 1929 crash? There are many lessons, but a handful seem to be most applicable to today’s stock market.

  • There is a delicate balance between optimism and pessimism regarding the stock market. Statements and actions by government officials can affect the sensitivity of stock prices to events. Call a market overpriced often enough, and investors may begin to believe it.
  • The fact that stocks can lose 40% of their value in a month and 90% over three years suggests the desirability of diversification (including assets other than stocks). Remember, some investors lose all of their investment when the market falls 40%.
  • A levered investment portfolio amplifies the swings of the stock market. Some investment securities have leverage built into them (e.g., stocks of highly levered firms, options, and stock index futures).
  • A series of seemingly undramatic events may establish a setting for a wide price decline.
  • A segment of the market can experience bad news and a price decline that infects the broader market. In 1929, it seems to have been public utilities. In 2000, high technology firms were candidates.
  • Interpreting events and assigning blame is unreliable if there has not been an adequate passage of time and opportunity for reflection and analysis — and is difficult even with decades of hindsight.
  • It is difficult to predict a major market turn with any degree of reliability. It is impressive that in September 1929, Roger Babson predicted the collapse of the stock market, but he had been predicting a collapse for many years. Also, even Babson recommended diversification and was against complete liquidation of stock investments (Financial Chronicle, September 7, 1929, p. 1505).
  • Even a market that is not excessively high can collapse. Both market psychology and the underlying economics are relevant.

References

Barsky, Robert B. and J. Bradford DeLong. “Bull and Bear Markets in the Twentieth Century,” Journal of Economic History 50, no. 2 (1990): 265-281.

Bierman, Harold, Jr. The Great Myths of 1929 and the Lessons to be Learned. Westport, CT: Greenwood Press, 1991.

Bierman, Harold, Jr. The Causes of the 1929 Stock Market Crash. Westport, CT: Greenwood Press, 1998.

Bierman, Harold, Jr. “The Reasons Stock Crashed in 1929.” Journal of Investing (1999): 11-18.

Bierman, Harold, Jr. “Bad Market Days.” World Economics (2001): 177-191.

Commercial and Financial Chronicle, 1929 issues.

Committee on Banking and Currency. Hearings on Performance of the National and Federal Reserve Banking System. Washington, 1931.

DeLong, J. Bradford and Andrei Shleifer. “The Stock Market Bubble of 1929: Evidence from Closed-end Mutual Funds.” Journal of Economic History 51, no. 3 (1991): 675-700.

Federal Reserve Bulletin, February, 1929.

Fisher, Irving. The Stock Market Crash and After. New York: Macmillan, 1930.

Galbraith, John K. The Great Crash, 1929. Boston: Houghton Mifflin, 1961.

Hoover, Herbert. The Memoirs of Herbert Hoover. New York: Macmillan, 1952.

Kendrick, John W. Productivity Trends in the United States. Princeton University Press, 1961.

Kindleberger, Charles P. Manias, Panics, and Crashes. New York: Basic Books, 1978.

Malkiel, Burton G. A Random Walk Down Wall Street. New York: Norton, 1975 and 1996.

Moggridge, Donald. The Collected Writings of John Maynard Keynes, Volume XX. New York: Macmillan, 1981.

New York Times, 1929 and 1930.

Rappoport, Peter and Eugene N. White, “Was There a Bubble in the 1929 Stock Market?” Journal of Economic History 53, no. 3 (1993): 549-574.

Samuelson, Paul A. “Myths and Realities about the Crash and Depression.” Journal of Portfolio Management (1979): 9.

Senate Committee on Banking and Currency. Stock Exchange Practices. Washington, 1928.

Siegel, Jeremy J. “The Equity Premium: Stock and Bond Returns since 1802.” Financial Analysts Journal 48, no. 1 (1992): 28-46.

Wall Street Journal, October 1929.

Washington Post, October 1929.

Wigmore, Barry A. The Crash and Its Aftermath: A History of Securities Markets in the United States, 1929-1933. Westport, CT: Greenwood Press, 1985.

1 1923-25 average = 100.

2 Based on a price-to-book-value ratio of 3.25 (Wigmore, p. 39).

Citation: Bierman, Harold. “The 1929 Stock Market Crash”. EH.Net Encyclopedia, edited by Robert Whaples. March 26, 2008. URL http://eh.net/encyclopedia/the-1929-stock-market-crash/

Slavery in the United States

Jenny Bourne, Carleton College

Slavery is fundamentally an economic phenomenon. Throughout history, slavery has existed where it has been economically worthwhile to those in power. The principal example in modern times is the U.S. South. Nearly 4 million slaves with a market value estimated to be between $3.1 and $3.6 billion lived in the U.S. just before the Civil War. Masters enjoyed rates of return on slaves comparable to those on other assets; cotton consumers, insurance companies, and industrial enterprises benefited from slavery as well. Such valuable property required rules to protect it, and the institutional practices surrounding slavery display a sophistication that rivals modern-day law and business.

THE SPREAD OF SLAVERY IN THE U.S.

Not long after Columbus set sail for the New World, the French and Spanish brought slaves with them on various expeditions. Slaves accompanied Ponce de Leon to Florida in 1513, for instance. But a far greater proportion of slaves arrived in chains in crowded, sweltering cargo holds. The first dark-skinned slaves in what was to become British North America arrived in Virginia — perhaps stopping first in Spanish lands — in 1619 aboard a Dutch vessel. From 1500 to 1900, approximately 12 million Africans were forced from their homes to go westward, with about 10 million of them completing the journey. Yet very few ended up in the British colonies and young American republic. By 1808, when the trans-Atlantic slave trade to the U.S. officially ended, only about 6 percent of African slaves landing in the New World had come to North America.

Slavery in the North

Colonial slavery had a slow start, particularly in the North. The proportion there never got much above 5 percent of the total population. Scholars have speculated as to why, without coming to a definite conclusion. Some surmise that indentured servants were fundamentally better suited to the Northern climate, crops, and tasks at hand; some claim that anti-slavery sentiment provided the explanation. At the time of the American Revolution, fewer than 10 percent of the half million slaves in the thirteen colonies resided in the North, working primarily in agriculture. New York had the greatest number, with just over 20,000. New Jersey had close to 12,000 slaves. Vermont was the first Northern region to abolish slavery when it became an independent republic in 1777. Most of the original Northern colonies implemented a process of gradual emancipation in the late eighteenth and early nineteenth centuries, requiring the children of slave mothers to remain in servitude for a set period, typically 28 years. Other regions above the Mason-Dixon line ended slavery upon statehood early in the nineteenth century — Ohio in 1803 and Indiana in 1816, for instance.

TABLE 1
Population of the Original Thirteen Colonies, selected years by type

State            1750 White  1750 Black  1790 White  1790 Free Nonwhite  1790 Slave  1810 White  1810 Free Nonwhite  1810 Slave  1860 White  1860 Free Nonwhite  1860 Slave
Connecticut      108,270     3,010       232,236     2,771               2,648       255,179     6,453               310         451,504     8,643               -
Delaware         27,208      1,496       46,310      3,899               8,887       55,361      13,136              4,177       90,589      19,829              1,798
Georgia          4,200       1,000       52,886      398                 29,264      145,414     1,801               105,218     591,550     3,538               462,198
Maryland         97,623      43,450      208,649     8,043               103,036     235,117     33,927              111,502     515,918     83,942              87,189
Massachusetts    183,925     4,075       373,187     5,369               -           465,303     6,737               -           1,221,432   9,634               -
New Hampshire    26,955      550         141,112     630                 157         182,690     970                 -           325,579     494                 -
New Jersey       66,039      5,354       169,954     2,762               11,423      226,868     7,843               10,851      646,699     25,318              -
New York         65,682      11,014      314,366     4,682               21,193      918,699     25,333              15,017      3,831,590   49,145              -
North Carolina   53,184      19,800      289,181     5,041               100,783     376,410     10,266              168,824     629,942     31,621              331,059
Pennsylvania     116,794     2,872       317,479     6,531               3,707       786,804     22,492              795         2,849,259   56,956              -
Rhode Island     29,879      3,347       64,670      3,484               958         73,214      3,609               108         170,649     3,971               -
South Carolina   25,000      39,000      140,178     1,801               107,094     214,196     4,554               196,365     291,300     10,002              402,406
Virginia         129,581     101,452     442,117     12,866              292,627     551,534     30,570              392,518     1,047,299   58,154              490,865
United States    934,340     236,420     2,792,325   58,277              681,777     4,486,789   167,691             1,005,685   12,663,310  361,247             1,775,515

Source: Historical Statistics of the U.S. (1970), Franklin (1988).

Slavery in the South

Throughout colonial and antebellum history, U.S. slaves lived primarily in the South. Slaves comprised less than a tenth of the total Southern population in 1680 but grew to a third by 1790. At that date, 293,000 slaves lived in Virginia alone, making up 42 percent of all slaves in the U.S. at the time. South Carolina, North Carolina, and Maryland each had over 100,000 slaves. After the American Revolution, the Southern slave population exploded, reaching about 1.1 million in 1810 and over 3.9 million in 1860.

TABLE 2
Population of the South 1790-1860 by type

Year White Free Nonwhite Slave
1790 1,240,454 32,523 654,121
1800 1,691,892 61,575 851,532
1810 2,118,144 97,284 1,103,700
1820 2,867,454 130,487 1,509,904
1830 3,614,600 175,074 1,983,860
1840 4,601,873 207,214 2,481,390
1850 6,184,477 235,821 3,200,364
1860 8,036,700 253,082 3,950,511

Source: Historical Statistics of the U.S. (1970).

Slave Ownership Patterns

Despite their numbers, slaves typically comprised a minority of the local population. Only in antebellum South Carolina and Mississippi did slaves outnumber free persons. Most Southerners owned no slaves and most slaves lived in small groups rather than on large plantations. Less than one-quarter of white Southerners held slaves, with half of these holding fewer than five and fewer than 1 percent owning more than one hundred. In 1860, the average number of slaves residing together was about ten.

TABLE 3
Slaves as a Percent of the Total Population
selected years, by Southern state

State             1750 Black/total pop.   1790 Slave/total pop.   1810 Slave/total pop.   1860 Slave/total pop.
Alabama           -                       -                       -                       45.12
Arkansas          -                       -                       -                       25.52
Delaware          5.21                    15.04                   5.75                    1.60
Florida           -                       -                       -                       43.97
Georgia           19.23                   35.45                   41.68                   43.72
Kentucky          -                       16.87                   19.82                   19.51
Louisiana         -                       -                       -                       46.85
Maryland          30.80                   32.23                   29.30                   12.69
Mississippi       -                       -                       -                       55.18
Missouri          -                       -                       -                       9.72
North Carolina    27.13                   25.51                   30.39                   33.35
South Carolina    60.94                   43.00                   47.30                   57.18
Tennessee         -                       -                       17.02                   24.84
Texas             -                       -                       -                       30.22
Virginia          43.91                   39.14                   40.27                   30.75
Overall           37.97                   33.95                   33.25                   32.27

Sources: Historical Statistics of the United States (1970), Franklin (1988).

TABLE 4
Holdings of Southern Slaveowners
by states, 1860

State   Total slaveholders   Held 1 slave   Held 2 slaves   Held 3 slaves   Held 4 slaves   Held 5 slaves   Held 1-5 slaves   Held 100-499 slaves   Held 500+ slaves
AL 33,730 5,607 3,663 2,805 2,329 1,986 16,390 344 -
AR 11,481 2,339 1,503 1,070 894 730 6,536 65 1
DE 587 237 114 74 51 34 510 - -
FL 5,152 863 568 437 365 285 2,518 47 -
GA 41,084 6,713 4,335 3,482 2,984 2,543 20,057 211 8
KY 38,645 9,306 5,430 4,009 3,281 2,694 24,720 7 -
LA 22,033 4,092 2,573 2,034 1,536 1,310 11,545 543 4
MD 13,783 4,119 1,952 1,279 1,023 815 9,188 16 -
MS 30,943 4,856 3,201 2,503 2,129 1,809 14,498 315 1
MO 24,320 6,893 3,754 2,773 2,243 1,686 17,349 4 -
NC 34,658 6,440 4,017 3,068 2,546 2,245 18,316 133 -
SC 26,701 3,763 2,533 1,990 1,731 1,541 11,558 441 8
TN 36,844 7,820 4,738 3,609 3,012 2,536 21,715 47 -
TX 21,878 4,593 2,874 2,093 1,782 1,439 12,781 54 -
VA 52,128 11,085 5,989 4,474 3,807 3,233 28,588 114 -
TOTAL 393,967 78,726 47,244 35,700 29,713 24,886 216,269 2,341 22

Source: Historical Statistics of the United States (1970).

Rapid Natural Increase in U.S. Slave Population

How did the U.S. slave population increase nearly fourfold between 1810 and 1860, given the demise of the trans-Atlantic trade? They enjoyed an exceptional rate of natural increase. Unlike elsewhere in the New World, the South did not require constant infusions of immigrant slaves to keep its slave population intact. In fact, by 1825, 36 percent of the slaves in the Western hemisphere lived in the U.S. This was partly due to higher birth rates, which were in turn due to a more equal ratio of female to male slaves in the U.S. relative to other parts of the Americas. Lower mortality rates also figured prominently. Climate was one cause; crops were another. U.S. slaves planted and harvested first tobacco and then, after Eli Whitney’s invention of the cotton gin in 1793, cotton. This work was relatively less grueling than the tasks on the sugar plantations of the West Indies and in the mines and fields of South America. Southern slaves worked in industry, did domestic work, and grew a variety of other food crops as well, mostly under less abusive conditions than their counterparts elsewhere. For example, the South grew half to three-quarters of the corn crop harvested between 1840 and 1860.

INSTITUTIONAL FRAMEWORK

Central to the success of slavery are political and legal institutions that validate the ownership of other persons. A Kentucky court acknowledged the dual character of slaves in Turner v. Johnson (1838): “[S]laves are property and must, under our present institutions, be treated as such. But they are human beings, with like passions, sympathies, and affections with ourselves.” To construct slave law, lawmakers borrowed from laws concerning personal property and animals, as well as from rules regarding servants, employees, and free persons. The outcome was a set of doctrines that supported the Southern way of life.

The English common law of property formed a foundation for U.S. slave law. The French and Spanish influence in Louisiana — and, to a lesser extent, Texas — meant that Roman (or civil) law offered building blocks there as well. Despite certain formal distinctions, slave law as practiced differed little from common-law to civil-law states. Southern state law governed roughly five areas: slave status, masters’ treatment of slaves, interactions between slaveowners and contractual partners, rights and duties of noncontractual parties toward others’ slaves, and slave crimes. Federal law and laws in various Northern states also dealt with matters of interstate commerce, travel, and fugitive slaves.

Interestingly enough, just as slave law combined elements of other sorts of law, so too did it yield principles that eventually applied elsewhere. Lawmakers had to consider the intelligence and volition of slaves as they crafted laws to preserve property rights. Slavery therefore created legal rules that could potentially apply to free persons as well as to those in bondage. Many legal principles we now consider standard in fact had their origins in slave law.

Legal Status Of Slaves And Blacks

By the end of the seventeenth century, the status of blacks — slave or free — tended to follow the status of their mothers. Generally, “white” persons were not slaves but Native and African Americans could be. One odd case was the offspring of a free white woman and a slave: the law often bound these people to servitude for thirty-one years. Conversion to Christianity could set a slave free in the early colonial period, but this practice quickly disappeared.

Skin Color and Status

Southern law largely identified skin color with status. Those who appeared African or of African descent were generally presumed to be slaves. Virginia was the only state to pass a statute that actually classified people by race: essentially, it considered those with one quarter or more black ancestry as black. Other states used informal tests in addition to visual inspection: one-quarter, one-eighth, or one-sixteenth black ancestry might categorize a person as black.

Even if blacks proved their freedom, they enjoyed little higher status than slaves except, to some extent, in Louisiana. Many Southern states forbade free persons of color from becoming preachers, selling certain goods, tending bar, staying out past a certain time of night, or owning dogs, among other things. Federal law denied black persons citizenship under the Dred Scott decision (1857). In this case, Chief Justice Roger Taney also determined that visiting a free state did not free a slave who returned to a slave state, nor did traveling to a free territory ensure emancipation.

Rights And Responsibilities Of Slave Masters

Southern masters enjoyed great freedom in their dealings with slaves. North Carolina Chief Justice Thomas Ruffin expressed the sentiments of many Southerners when he wrote in State v. Mann (1829): “The power of the master must be absolute, to render the submission of the slave perfect.” By the nineteenth century, household heads had far more physical power over their slaves than over their employees. In part, the differences in allowable punishment had to do with the substitutability of other means of persuasion. Instead of physical coercion, antebellum employers could legally withhold all wages if a worker did not complete all agreed-upon services. No such alternate mechanism existed for slaves.

Despite the respect Southerners held for the power of masters, the law — particularly in the thirty years before the Civil War — limited owners somewhat. Southerners feared that unchecked slave abuse could lead to theft, public beatings, and insurrection. People also thought that hungry slaves would steal produce and livestock. But masters who treated slaves too well, or gave them freedom, caused consternation as well. The preamble to Delaware’s Act of 1767 conveys one prevalent view: “[I]t is found by experience, that freed [N]egroes and mulattoes are idle and slothful, and often prove burdensome to the neighborhood wherein they live, and are of evil examples to slaves.” Accordingly, masters sometimes fell afoul of the criminal law not only when they brutalized or neglected their slaves, but also when they indulged or manumitted slaves. Still, prosecuting masters was extremely difficult, because often the only witnesses were slaves or wives, neither of whom could testify against male heads of household.

Law of Manumission

One area that changed dramatically over time was the law of manumission. The South initially allowed masters to set their slaves free because this was an inherent right of property ownership. During the Revolutionary period, some Southern leaders also believed that manumission was consistent with the ideology of the new nation. Manumission occurred only rarely in colonial times, increased dramatically during the Revolution, then diminished after the early 1800s. By the 1830s, most Southern states had begun to limit manumission. Allowing masters to free their slaves at will created incentives to emancipate only unproductive slaves. Consequently, the community at large bore the costs of young, old, and disabled former slaves. The public might also run the risk of having rebellious former slaves in its midst.

Antebellum U.S. Southern states worried considerably about these problems and eventually enacted restrictions on the age at which slaves could be free, the number freed by any one master, and the number manumitted by last will. Some required former masters to file indemnifying bonds with state treasurers so governments would not have to support indigent former slaves. Some instead required former owners to contribute to ex-slaves’ upkeep. Many states limited manumissions to slaves of a certain age who were capable of earning a living. A few states made masters emancipate their slaves out of state or encouraged slaveowners to bequeath slaves to the Colonization Society, which would then send the freed slaves to Liberia. Former slaves sometimes paid fees on the way out of town to make up for lost property tax revenue; they often encountered hostility and residential fees on the other end as well. By 1860, most Southern states had banned in-state and post-mortem manumissions, and some had enacted procedures by which free blacks could voluntarily become slaves.

Other Restrictions

In addition to constraints on manumission, laws restricted other actions of masters and, by extension, slaves. Masters generally had to maintain a certain ratio of white to black residents upon plantations. Some laws barred slaves from owning musical instruments or bearing firearms. All states refused to allow slaves to make contracts or testify in court against whites. About half of Southern states prohibited masters from teaching slaves to read and write although some of these permitted slaves to learn rudimentary mathematics. Masters could use slaves for some tasks and responsibilities, but they typically could not order slaves to compel payment, beat white men, or sample cotton. Nor could slaves officially hire themselves out to others, although such prohibitions were often ignored by masters, slaves, hirers, and public officials. Owners faced fines and sometimes damages if their slaves stole from others or caused injuries.

Southern law did encourage benevolence, at least if it tended to supplement the lash and shackle. Court opinions in particular indicate the belief that good treatment of slaves could enhance labor productivity, increase plantation profits, and reinforce sentimental ties. Allowing slaves to control small amounts of property, even if statutes prohibited it, was an oft-sanctioned practice. Courts also permitted slaves small diversions, such as Christmas parties and quilting bees, despite statutes that barred slave assemblies.

Sale, Hire, And Transportation Of Slaves

Sales of Slaves

Slaves were freely bought and sold across the antebellum South. Southern law offered greater protection to slave buyers than to buyers of other goods, in part because slaves were complex commodities with characteristics not easily ascertained by inspection. Slave sellers were responsible for their representations, required to disclose known defects, and often liable for unknown defects, as well as bound by explicit contractual language. These rules stand in stark contrast to the caveat emptor doctrine applied in antebellum commodity sales cases. In fact, they more closely resemble certain provisions of the modern Uniform Commercial Code. Sales law in two states stands out. South Carolina was extremely pro-buyer, presuming that any slave sold at full price was sound. Louisiana buyers enjoyed extensive legal protection as well. A sold slave who later manifested an incurable disease or vice — such as a tendency to escape frequently — could generate a lawsuit that entitled the purchaser to nullify the sale.

Hiring Out Slaves

Slaves faced the possibility of being hired out by their masters as well as being sold. Although scholars disagree about the extent of hiring in agriculture, most concur that hired slaves frequently worked in manufacturing, construction, mining, and domestic service. Hired slaves and free persons often labored side by side. Bond and free workers both faced a legal burden to behave responsibly on the job. Yet the law of the workplace differed significantly for the two: generally speaking, employers were far more culpable in cases of injuries to slaves. The divergent law for slave and free workers does not necessarily imply that free workers suffered. Empirical evidence shows that nineteenth-century free laborers received at least partial compensation for the risks of jobs. Indeed, the tripartite nature of slave-hiring arrangements suggests why antebellum laws appeared as they did. Whereas free persons had direct work and contractual relations with their bosses, slaves worked under terms designed by others. Free workers arguably could have walked out or insisted on different conditions or wages. Slaves could not. The law therefore offered substitute protections. Still, the powerful interests of slaveowners also may mean that they simply were more successful at shaping the law. Postbellum developments in employment law — North and South — in fact paralleled earlier slave-hiring law, at times relying upon slave cases as legal precedents.

Public Transportation

Public transportation also figured into slave law: slaves suffered death and injury aboard common carriers as well as traveled as legitimate passengers and fugitives. As elsewhere, slave-common carrier law both borrowed from and established precedents for other areas of law. One key doctrine originating in slave cases was the “last-clear-chance rule.” Common-carrier defendants that had failed to offer slaves — even negligent slaves — a last clear chance to avoid accidents ended up paying damages to slaveowners. Slaveowner plaintiffs won several cases in the decade before the Civil War when engineers failed to warn slaves off railroad tracks. Postbellum courts used slave cases as precedents to entrench the last-clear-chance doctrine.

Slave Control: Patrollers And Overseers

Society at large shared in maintaining the machinery of slavery. In place of a standing police force, Southern states passed legislation to establish and regulate county-wide citizen patrols. Essentially, Southern citizens took upon themselves the protection of their neighbors’ interests as well as their own. County courts had local administrative authority; court officials appointed three to five men per patrol from a pool of white male citizens to serve for a specified period. Typical patrol duty ranged from one night per week for a year to twelve hours per month for three months. Not all white men had to serve: judges, magistrates, ministers, and sometimes millers and blacksmiths enjoyed exemptions. So did those in the higher ranks of the state militia. In many states, courts had to select from adult males under a certain age, usually 45, 50, or 60. Some states allowed only slaveowners or householders to join patrols. Patrollers typically earned fees for captured fugitive slaves and exemption from road or militia duty, as well as hourly wages. Keeping order among slaves was the patrollers’ primary duty. Statutes set guidelines for appropriate treatment of slaves and often imposed fines for unlawful beatings. In rare instances, patrollers had to compensate masters for injured slaves. For the most part, however, patrollers enjoyed quasi-judicial or quasi-executive powers in their dealings with slaves.

Overseers commanded considerable control as well. The Southern overseer was the linchpin of the large slave plantation. He ran daily operations and served as a first line of defense in safeguarding whites. The vigorous protests against drafting overseers into military service during the Civil War reveal their significance to the South. Yet slaves were too valuable to be left to the whims of frustrated, angry overseers. Injuries caused to slaves by overseers’ cruelty (or “immoral conduct”) usually entitled masters to recover civil damages. Overseers occasionally confronted criminal charges as well. Brutality by overseers naturally generated responses by their victims; at times, courts reduced murder charges to manslaughter when slaves killed abusive overseers.

Protecting The Master Against Loss: Slave Injury And Slave Stealing

Whether they liked it or not, many Southerners dealt daily with slaves. Southern law shaped these interactions among strangers, awarding damages more often for injuries to slaves than injuries to other property or persons, shielding slaves more than free persons from brutality, and generating convictions more frequently in slave-stealing cases than in other criminal cases. The law also recognized more offenses against slaveowners than against other property owners because slaves, unlike other property, succumbed to influence.

Just as assaults of slaves generated civil damages and criminal penalties, so did stealing a slave to sell him or help him escape to freedom. Many Southerners considered slave stealing worse than killing fellow citizens. In marked contrast, selling a free black person into slavery carried almost no penalty.

The counterpart to helping slaves escape — picking up fugitives — also created laws. Southern states offered rewards to defray the costs of capture or passed statutes requiring owners to pay fees to those who caught and returned slaves. Some Northern citizens worked hand-in-hand with their Southern counterparts, returning fugitive slaves to masters either with or without the prompting of law. But many Northerners vehemently opposed the peculiar institution. In an attempt to stitch together the young nation, the federal government passed the first fugitive slave act in 1793. To circumvent its application, several Northern states passed personal liberty laws in the 1840s. Stronger federal fugitive slave legislation then passed in 1850. Still, enough slaves fled to freedom — perhaps as many as 15,000 in the decade before the Civil War — with the help (or inaction) of Northerners that the profession of “slave-catching” evolved. This occupation was often highly risky — enough so that such men could not purchase life insurance coverage — and just as often highly lucrative.

Slave Crimes

Southern law governed slaves as well as slaveowners and their adversaries. What few due process protections slaves possessed stemmed from desires to grant rights to masters. Still, slaves faced harsh penalties for their crimes. When slaves stole, rioted, set fires, or killed free people, the law sometimes had to subvert the property rights of masters in order to preserve slavery as a social institution.

Slaves, like other antebellum Southern residents, committed a host of crimes ranging from arson to theft to homicide. Other slave crimes included violating curfew, attending religious meetings without a master’s consent, and running away. Indeed, a slave was not permitted off his master’s farm or business without his owner’s permission. In rural areas, a slave was required to carry a written pass to leave the master’s land.

Southern states erected numerous punishments for slave crimes, including prison terms, banishment, whipping, castration, and execution. In most states, the criminal law for slaves (and blacks generally) was noticeably harsher than for free whites; in others, slave law as practiced resembled that governing poorer white citizens. Particularly harsh punishments applied to slaves who had allegedly killed their masters or who had committed rebellious acts. Southerners considered these acts of treason and resorted to immolation, drawing and quartering, and hanging.

MARKETS AND PRICES

Market prices for slaves reflect their substantial economic value. Scholars have gathered slave prices from a variety of sources, including censuses, probate records, plantation and slave-trader accounts, and proceedings of slave auctions. These data sets reveal that prime field hands went for four to six hundred dollars in the U.S. in 1800, thirteen to fifteen hundred dollars in 1850, and up to three thousand dollars just before Fort Sumter fell. Even controlling for inflation, the prices of U.S. slaves rose significantly in the six decades before South Carolina seceded from the Union. By 1860, Southerners owned close to $4 billion worth of slaves. Slavery remained a thriving business on the eve of the Civil War: Fogel and Engerman (1974) projected that by 1890 slave prices would have increased on average more than 50 percent over their 1860 levels. No wonder the South rose in armed resistance to protect its enormous investment.
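
The arithmetic behind the aggregate figure is easy to check. Below is a minimal sketch: the slave count is the 1860 census figure, while the $1,000 economy-wide average price and the $500 and $1,800 benchmark prices for prime hands are illustrative assumptions consistent with the ranges quoted above.

    # Back-of-the-envelope check on the aggregate value cited above.
    # The slave count is the 1860 census figure; the $1,000 average price
    # is an illustrative assumption (prime hands cost far more, children
    # and the elderly far less).
    slave_count_1860 = 3_953_760
    avg_price_1860 = 1_000                # dollars; assumed average over all slaves

    total_value = slave_count_1860 * avg_price_1860
    print(f"Implied aggregate value: ${total_value:,}")       # about $3.95 billion, close to the $4 billion cited

    # Implied nominal growth of prime-field-hand prices, 1800-1860, using a
    # $500 midpoint for 1800 and an assumed $1,800 benchmark for 1860.
    price_1800, price_1860 = 500, 1_800
    annual_growth = (price_1860 / price_1800) ** (1 / 60) - 1
    print(f"Implied price growth: {annual_growth:.1%} per year")   # about 2.2% per year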

Slave markets existed across the antebellum U.S. South. Even today, one can find stone markers like the one next to the Antietam battlefield, which reads: “From 1800 to 1865 This Stone Was Used as a Slave Auction Block. It has been a famous landmark at this original location for over 150 years.” Private auctions, estate sales, and professional traders facilitated easy exchange. Established dealers like Franklin and Armfield in Virginia, Woolfolk, Saunders, and Overly in Maryland, and Nathan Bedford Forrest in Tennessee prospered alongside itinerant traders who operated in a few counties, buying slaves for cash from their owners, then moving them overland in coffles to the lower South. Over a million slaves were taken across state lines between 1790 and 1860 with many more moving within states. Some of these slaves went with their owners; many were sold to new owners. In his monumental study, Michael Tadman (1989) found that slaves who lived in the upper South faced a very real chance of being sold for profit. From 1820 to 1860, he estimated that an average of 200,000 slaves per decade moved from the upper to the lower South, most via sales. A contemporary newspaper, The Virginia Times, calculated that 40,000 slaves were sold in the year 1830.

Determinants of Slave Prices

The prices paid for slaves reflected two economic factors: the characteristics of the slave and the conditions of the market. Important individual features included age, sex, childbearing capacity (for females), physical condition, temperament, and skill level. In addition, the supply of slaves, demand for products produced by slaves, and seasonal factors helped determine market conditions and therefore prices.

Age and Price

Prices for both male and female slaves tended to follow similar life-cycle patterns. In the U.S. South, infant slaves sold for a positive price because masters expected them to live long enough to make the initial costs of raising them worthwhile. Prices rose through puberty as productivity and experience increased. In nineteenth-century New Orleans, for example, prices peaked at about age 22 for females and age 25 for males. Girls cost more than boys up to their mid-teens. The genders then switched places in terms of value. In the Old South, boys aged 14 sold for 71 percent of the price of 27-year-old men, whereas girls aged 14 sold for 65 percent of the price of 27-year-old men. After the peak age, prices declined slowly for a time, then fell off rapidly as the aging process caused productivity to fall. Compared to full-grown men, women were worth 80 to 90 percent as much. One characteristic in particular set some females apart: their ability to bear children. Fertile females commanded a premium. The mother-child link also proved important for pricing in a different way: people sometimes paid more for intact families.


[Figure: slave prices by age. Source: Fogel and Engerman (1974).]
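
The ratios quoted above can be strung into a rough age-price profile. A minimal sketch follows, taking a hypothetical $1,000 price for a 27-year-old man as the benchmark; the dollar figures that result are purely illustrative, and only the relative values come from the text.

    # Rough price profile implied by the relative values cited above.
    # The $1,000 benchmark for a 27-year-old man is a hypothetical figure.
    benchmark_price = 1_000               # assumed price of a 27-year-old man

    ratios = {
        "boy, age 14":  0.71,             # 71% of a prime man's price
        "girl, age 14": 0.65,             # 65% of a prime man's price
        "man, age 27":  1.00,             # benchmark
        "grown woman":  0.85,             # midpoint of the 80-90% range
    }

    for description, ratio in ratios.items():
        print(f"{description:15s} ${benchmark_price * ratio:>6,.0f}")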

Other Characteristics and Price

Skills, physical traits, mental capabilities, and other qualities also helped determine a slave’s price. Skilled workers sold for premiums of 40-55 percent whereas crippled and chronically ill slaves sold for deep discounts. Slaves who proved troublesome — runaways, thieves, layabouts, drunks, slow learners, and the like — also sold for lower prices. Taller slaves cost more, perhaps because height acts as a proxy for healthiness. In New Orleans, light-skinned females (who were often used as concubines) sold for a 5 percent premium.

Fluctuations in Supply

Prices for slaves fluctuated with market conditions as well as with individual characteristics. U.S. slave prices fell around 1800 as the Haitian revolution sparked the movement of slaves into the Southern states. Less than a decade later, slave prices climbed when the international slave trade was banned, cutting off legal external supplies. Interestingly enough, among those who supported the closing of the trans-Atlantic slave trade were several Southern slaveowners. Why this apparent anomaly? Because the resulting reduction in supply drove up the prices of slaves already living in the U.S. and, hence, their masters’ wealth. U.S. slaves had high enough fertility rates and low enough mortality rates to reproduce themselves, so Southern slaveowners did not worry about having too few slaves to go around.

Fluctuations in Demand

Demand helped determine prices as well. The demand for slaves derived in part from the demand for the commodities and services that slaves provided. Changes in slave occupations and variability in prices for slave-produced goods therefore created movements in slave prices. As slaves replaced increasingly expensive indentured servants in the New World, their prices went up. In the period 1748 to 1775, slave prices in British America rose nearly 30 percent. As cotton prices fell in the 1840s, Southern slave prices also fell. But, as the demand for cotton and tobacco grew after about 1850, the prices of slaves increased as well.

Interregional Price Differences

Differences in demand across regions led to temporary regional price differences, which in turn meant large movements of slaves. Yet because planters experienced greater stability among their workforces when entire plantations moved, 84 percent of slaves were taken to the lower South in this way rather than being sold piecemeal.

Time of Year and Price

Demand sometimes had to do with the time of year a sale took place. For example, slave prices in the New Orleans market were 10 to 20 percent higher in January than in September. Why? September was a busy time of year for plantation owners: the opportunity cost of their time was relatively high. Prices had to be relatively low for them to be willing to travel to New Orleans during harvest time.

Expectations and Prices

One additional demand factor loomed large in determining slave prices: the expectation of continued legal slavery. As the American Civil War progressed, prices dropped dramatically because people could not be sure that slavery would survive. In New Orleans, prime male slaves sold on average for $1381 in 1861 and for $1116 in 1862. Burgeoning inflation meant that real prices fell considerably more. By war’s end, slaves sold for a small fraction of their 1860 price.


[Figure: New Orleans slave prices during the Civil War. Source: data supplied by Stanley Engerman and reported in Walton and Rockoff (1994).]
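
The effect of inflation can be made concrete by deflating the nominal prices. The sketch below does so with hypothetical Confederate price-index values; the index numbers are assumptions chosen only to illustrate the calculation, not measured data.

    # Converting nominal slave prices into real (constant-1861-dollar) prices.
    # The nominal prices are those cited above; the price-index values are
    # hypothetical, included only to show how wartime inflation erodes real
    # prices even when nominal prices fall modestly.
    nominal_prices = {1861: 1381, 1862: 1116}
    price_index    = {1861: 100, 1862: 300}       # assumed index, 1861 = 100

    for year, nominal in nominal_prices.items():
        real = nominal * price_index[1861] / price_index[year]
        print(f"{year}: nominal ${nominal:,}   real (1861 dollars) ${real:,.0f}")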

PROFITABILITY, EFFICIENCY, AND EXPLOITATION

That slavery was profitable seems almost obvious. Yet scholars have argued furiously about this matter. On one side stand antebellum writers such as Hinton Rowan Helper and Frederick Law Olmsted, many antebellum abolitionists, and contemporary scholars like Eugene Genovese (at least in his early writings), who speculated that American slavery was unprofitable, inefficient, and incompatible with urban life. On the other side are scholars who have marshaled masses of data to support their contention that Southern slavery was profitable and efficient relative to free labor and that slavery suited cities as well as farms. These researchers stress the similarity between slave markets and markets for other sorts of capital.

Consensus That Slavery Was Profitable

This battle has largely been won by those who claim that New World slavery was profitable. Much like other businessmen, New World slaveowners responded to market signals: adjusting crop mixes, reallocating slaves to more profitable tasks, hiring out idle slaves, and selling slaves for profit. One well-known episode suggests that free workers themselves believed urban slavery worked all too well: in 1847, employees of the Tredegar Iron Works in Richmond, Virginia, went out on their first strike to protest the use of slave labor at the Works.

Fogel and Engerman’s Time on the Cross

Carrying the banner of the “slavery was profitable” camp is Nobel laureate Robert Fogel. Perhaps the most controversial book ever written about American slavery is Time on the Cross, published in 1974 by Fogel and co-author Stanley Engerman. These men were among the first to use modern statistical methods, computers, and large datasets to answer a series of empirical questions about the economics of slavery. To find profit levels and rates of return, they built upon the work of Alfred Conrad and John Meyer, who in 1958 had calculated similar measures from data on cotton prices, physical yield per slave, demographic characteristics of slaves (including expected lifespan), maintenance and supervisory costs, and (in the case of females) number of children. To estimate the relative efficiency of farms, Fogel and Engerman devised an index of “total factor productivity,” which measured the output per average unit of input on each type of farm. They included in this index controls for quality of livestock and land and for age and sex composition of the workforce, as well as amounts of output, labor, land, and capital.
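
A total factor productivity index of the kind described here divides output by a weighted combination of inputs, so that farms with different input mixes can be compared on a single scale. The sketch below is a minimal illustration of that idea using made-up farm data and assumed factor shares; it is not a reconstruction of Fogel and Engerman's actual data, weights, or quality controls.

    # A minimal total-factor-productivity (TFP) index: output divided by a
    # geometric average of inputs weighted by (assumed) factor shares.
    # The farm figures and the shares are hypothetical illustrations.
    def tfp(output, labor, land, capital, shares=(0.6, 0.25, 0.15)):
        s_labor, s_land, s_capital = shares
        composite_input = (labor ** s_labor) * (land ** s_land) * (capital ** s_capital)
        return output / composite_input

    # Two hypothetical farms with identical measured inputs but different output.
    free_farm  = tfp(output=100, labor=10, land=200, capital=50)
    slave_farm = tfp(output=153, labor=10, land=200, capital=50)

    # With identical inputs, relative TFP is simply the output ratio: 1.53,
    # i.e. the slave farm appears 53 percent more "efficient" on this measure.
    print(f"Relative efficiency: {slave_farm / free_farm:.2f}")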

Time on the Cross generated praise — and considerable criticism. A major critique appeared in 1976 as a collection of articles entitled Reckoning with Slavery. Although some contributors took umbrage at the tone of the book and denied that it broke new ground, others focused on flawed and insufficient data and inappropriate inferences. Despite its shortcomings, Time on the Cross inarguably brought people’s attention to a new way of viewing slavery. The book also served as a catalyst for much subsequent research. Even Eugene Genovese, long an ardent proponent of the belief that Southern planters had held slaves for their prestige value, finally acknowledged that slavery was probably a profitable enterprise. Fogel himself refined and expanded his views in a 1989 book, Without Consent or Contract.

Efficiency Estimates

Fogel and Engerman’s research led them to conclude that investments in slaves generated high rates of return, masters held slaves for profit motives rather than for prestige, and slavery thrived in cities and rural areas alike. They also found that antebellum Southern farms were 35 percent more efficient overall than Northern ones and that slave farms in the New South were 53 percent more efficient than free farms in either North or South. This would mean that a slave farm that was otherwise identical to a free farm (in terms of the amount of land, livestock, machinery and labor used) would produce output worth 53 percent more than the free farm. On the eve of the Civil War, slavery flourished in the South and generated a rate of economic growth comparable to that of many European countries, according to Fogel and Engerman. They also discovered that, because slaves constituted a considerable portion of individual wealth, masters fed and treated their slaves reasonably well. Although some evidence indicates that infant and young slaves suffered much worse conditions than their freeborn counterparts, teenaged and adult slaves lived in conditions similar to, and sometimes better than, those enjoyed by many free laborers of the same period.

Transition from Indentured Servitude to Slavery

One potent piece of evidence supporting the notion that slavery provides pecuniary benefits is this: slavery replaces other labor when it becomes relatively cheaper. In the early U.S. colonies, for example, indentured servitude was common. As the demand for skilled servants (and therefore their wages) rose in England, the cost of indentured servants went up in the colonies. At the same time, second-generation slaves became more productive than their forebears because they spoke English and did not have to adjust to life in a strange new world. Consequently, the balance of labor shifted away from indentured servitude and toward slavery.

Gang System

The value of slaves arose in part from the value of labor generally in the antebellum U.S. Scarce factors of production command economic rent, and labor was by far the scarcest available input in America. Moreover, a large proportion of the reward to owning and working slaves resulted from innovative labor practices. Certainly, the use of the “gang” system in agriculture contributed to profits in the antebellum period. In the gang system, groups of slaves performed synchronized tasks under the watchful overseer’s eye, much like parts of a single machine. Masters found that treating people like machinery paid off handsomely.

Antebellum slaveowners experimented with a variety of other methods to increase productivity. They developed an elaborate system of “hand ratings” in order to improve the match between the slave worker and the job. Hand ratings categorized slaves by age and sex and rated their productivity relative to that of a prime male field hand. Masters also capitalized on the native intelligence of slaves by using them as agents to receive goods, keep books, and the like.

Use of Positive Incentives

Masters offered positive incentives to make slaves work more efficiently. Slaves often had Sundays off. Slaves could sometimes earn bonuses in cash or in kind, or quit early if they finished tasks quickly. Some masters allowed slaves to keep part of the harvest or to work their own small plots. In places, slaves could even sell their own crops. To prevent stealing, however, many masters limited the products that slaves could raise and sell, confining them to corn or brown cotton, for example. In antebellum Louisiana, slaves even had under their control a sum of money called a peculium. This served as a sort of working capital, enabling slaves to establish thriving businesses that often benefited their masters as well. Yet these practices may have helped lead to the downfall of slavery, for they gave slaves a taste of freedom that left them longing for more.

Slave Families

Masters profited from reproduction as well as production. Southern planters encouraged slaves to have large families because U.S. slaves lived long enough — unlike those elsewhere in the New World — to generate more revenue than cost over their lifetimes. But researchers have found little evidence of slave breeding; instead, masters encouraged slaves to live in nuclear or extended families for stability. Lest one think sentimentality triumphed on the Southern plantation, one need only recall the willingness of most masters to sell if the bottom line was attractive enough.

Profitability and African Heritage

One element that contributed to the profitability of New World slavery was the African heritage of slaves. Africans, more than indigenous Americans, were accustomed to the discipline of agricultural practices and knew metalworking. Some scholars surmise that Africans, relative to Europeans, could better withstand tropical diseases and, unlike Native Americans, also had some exposure to the European disease pool.

Ease of Identifying Slaves

Perhaps the most distinctive feature of Africans, however, was their skin color. Because they looked different from their masters, their movements were easy to monitor. Denying slaves education, property ownership, contractual rights, and other things enjoyed by those in power was simple: one needed only to look at people to ascertain their likely status. Using color was a low-cost way of distinguishing slaves from free persons. For this reason, the colonial practices that freed slaves who converted to Christianity quickly faded away. Deciphering true religious beliefs is far more difficult than establishing skin color. Other slave societies have used distinguishing marks like brands or long hair to denote slaves, yet color is far more immutable and therefore better as a cheap way of keeping slaves separate. Skin color, of course, can also serve as a racist identifying mark even after slavery itself disappears.

Profit Estimates

Slavery never generated superprofits, because people always had the option of putting their money elsewhere. Nevertheless, investment in slaves offered a rate of return — about 10 percent — that was comparable to returns on other assets. Slaveowners were not the only ones to reap rewards, however. So too did cotton consumers who enjoyed low prices and Northern entrepreneurs who helped finance plantation operations.
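
The roughly 10 percent figure comes from treating a slave purchase like any other capital investment: a price paid up front against a stream of future net earnings, as in the Conrad and Meyer approach described earlier. The sketch below illustrates the calculation; the purchase price, annual net earnings, and working life are hypothetical numbers chosen only for illustration.

    # Internal rate of return on a hypothetical slave purchase, in the spirit
    # of the capital-asset calculations described above. All figures assumed.
    def npv(rate, price, net_earnings, years):
        """Net present value: pay `price` now, receive `net_earnings` per year
        (revenue minus maintenance and supervision) for `years` years."""
        return -price + sum(net_earnings / (1 + rate) ** t for t in range(1, years + 1))

    def irr(price, net_earnings, years, lo=0.0, hi=1.0, tol=1e-6):
        """Rate that sets NPV to zero, found by bisection."""
        while hi - lo > tol:
            mid = (lo + hi) / 2
            if npv(mid, price, net_earnings, years) > 0:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2

    # Hypothetical: $900 purchase price, $100 net earnings a year, 25-year horizon.
    print(f"Implied annual return: {irr(900, 100, 25):.1%}")   # roughly 10 percent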

Exploitation Estimates

So slavery was profitable; was it an efficient way of organizing the workforce? On this question, considerable controversy remains. Slavery might well have profited masters, but only because they exploited their chattel. What is more, slavery could have locked people into a method of production and way of life that might later have proven burdensome.

Fogel and Engerman (1974) claimed that slaves kept about ninety percent of what they produced. Because these scholars also found that agricultural slavery produced relatively more output for a given set of inputs, they argued that slaves may actually have shared in the overall material benefits resulting from the gang system. Other scholars contend that slaves in fact kept less than half of what they produced and that slavery, while profitable, certainly was not efficient. On the whole, current estimates suggest that the typical slave received only about fifty percent of the extra output that he or she produced.
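
The expropriation rate at issue in these estimates is the share of the value a slave produced that the owner kept rather than returned as food, clothing, shelter, and medical care. A minimal sketch with hypothetical annual figures (the dollar amounts are assumptions for illustration only):

    # Expropriation (exploitation) rate: the share of a slave's annual output
    # kept by the owner rather than returned as maintenance. Figures assumed.
    value_of_output = 100.0       # hypothetical value produced in a year, dollars
    maintenance     = 50.0        # hypothetical food, clothing, shelter, medical care

    expropriation_rate = (value_of_output - maintenance) / value_of_output
    print(f"Share kept by owner: {expropriation_rate:.0%}")              # 50%
    print(f"Share received by the slave: {1 - expropriation_rate:.0%}")  # 50%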

Did Slavery Retard Southern Economic Development?

Gavin Wright (1978) called attention as well to the difference between the short run and the long run. He noted that slaves accounted for a very large proportion of most masters’ portfolios of assets. Although slavery might have seemed an efficient means of production at a point in time, it tied masters to a certain system of labor which might not have adapted quickly to changed economic circumstances. This argument has some merit. Although the South’s growth rate compared favorably with that of the North in the antebellum period, a considerable portion of wealth was held in the hands of planters. Consequently, commercial and service industries lagged in the South. The region also had far less rail transportation than the North. Yet many plantations used the most advanced technologies of the day, and certain innovative commercial and insurance practices appeared first in transactions involving slaves. What is more, although the South fell behind the North and Great Britain in its level of manufacturing, it compared favorably to other advanced countries of the time. In sum, no clear consensus emerges as to whether the antebellum South created a standard of living comparable to that of the North or, if it did, whether it could have sustained it.

Ultimately, the South’s system of law, politics, business, and social customs strengthened the shackles of slavery and reinforced racial stereotyping. As such, it was undeniably evil. Yet, because slaves constituted valuable property, their masters had ample incentives to take care of them. And, by protecting the property rights of masters, slave law necessarily sheltered the persons embodied within. In a sense, the apologists for slavery were right: slaves sometimes fared better than free persons because powerful people had a stake in their well-being.

Conclusion: Slavery Cannot Be Seen As Benign

But slavery cannot be thought of as benign. In terms of material conditions, diet, and treatment, Southern slaves may have fared as well in many ways as the poorest class of free citizens. Yet the root of slavery is coercion. By its very nature, slavery involves involuntary transactions. Slaves are property, whereas free laborers are persons who make choices (at times constrained, of course) about the sort of work they do and the number of hours they work.

The behavior of former slaves after abolition clearly reveals that they cared strongly about the manner of their work and valued their non-work time more highly than masters did. Even the most benevolent former masters in the U.S. South found it impossible to entice their former chattels back into gang work, even with large wage premiums. Nor could they persuade women back into the labor force: many female ex-slaves simply chose to stay at home. In the end, perhaps slavery is an economic phenomenon only because slave societies fail to account for the incalculable costs borne by the slaves themselves.

REFERENCES AND FURTHER READING

For studies pertaining to the economics of slavery, see particularly Aitken, Hugh, editor. Did Slavery Pay? Readings in the Economics of Black Slavery in the United States. Boston: Houghton-Mifflin, 1971.

Barzel, Yoram. “An Economic Analysis of Slavery.” Journal of Law and Economics 20 (1977): 87-110.

Conrad, Alfred H., and John R. Meyer. The Economics of Slavery and Other Studies. Chicago: Aldine, 1964.

David, Paul A., Herbert G. Gutman, Richard Sutch, Peter Temin, and Gavin Wright. Reckoning with Slavery: A Critical Study in the Quantitative History of American Negro Slavery. New York: Oxford University Press, 1976.

Fogel, Robert W. Without Consent or Contract. New York: Norton, 1989.

Fogel, Robert W., and Stanley L. Engerman. Time on the Cross: The Economics of American Negro Slavery. New York: Little, Brown, 1974.

Galenson, David W. Traders, Planters, and Slaves: Market Behavior in Early English America. New York: Cambridge University Press, 1986.

Kotlikoff, Laurence. “The Structure of Slave Prices in New Orleans, 1804-1862.” Economic Inquiry 17 (1979): 496-518.

Ransom, Roger L., and Richard Sutch. One Kind of Freedom: The Economic Consequences of Emancipation. New York: Cambridge University Press, 1977.

Ransom, Roger L., and Richard Sutch. “Capitalists Without Capital.” Agricultural History 62 (1988): 133-160.

Vedder, Richard K. “The Slave Exploitation (Expropriation) Rate.” Explorations in Economic History 12 (1975): 453-57.

Wright, Gavin. The Political Economy of the Cotton South: Households, Markets, and Wealth in the Nineteenth Century. New York: Norton, 1978.

Yasuba, Yasukichi. “The Profitability and Viability of Slavery in the U.S.” Economic Studies Quarterly 12 (1961): 60-67.

For accounts of slave trading and sales, see
Bancroft, Frederic. Slave Trading in the Old South. New York: Ungar, 1931.

Tadman, Michael. Speculators and Slaves. Madison: University of Wisconsin Press, 1989.

For discussion of the profession of slave catchers, see
Campbell, Stanley W. The Slave Catchers. Chapel Hill: University of North Carolina Press, 1968.

To read about slaves in industry and urban areas, see
Dew, Charles B. Slavery in the Antebellum Southern Industries. Bethesda: University Publications of America, 1991.

Goldin, Claudia D. Urban Slavery in the American South, 1820-1860: A Quantitative History. Chicago: University of Chicago Press, 1976.

Starobin, Robert. Industrial Slavery in the Old South. New York: Oxford University Press, 1970.

For discussions of masters and overseers, see
Oakes, James. The Ruling Race: A History of American Slaveholders. New York: Knopf, 1982.

Roark, James L. Masters Without Slaves. New York: Norton, 1977.

Scarborough, William K. The Overseer: Plantation Management in the Old South. Baton Rouge: Louisiana State University Press, 1966.

On indentured servitude, see
Galenson, David. “Rise and Fall of Indentured Servitude in the Americas: An Economic Analysis.” Journal of Economic History 44 (1984): 1-26.

Galenson, David. White Servitude in Colonial America: An Economic Analysis. New York: Cambridge University Press, 1981.

Grubb, Farley. “Immigrant Servant Labor: Their Occupational and Geographic Distribution in the Late Eighteenth Century Mid-Atlantic Economy.” Social Science History 9 (1985): 249-75.

Menard, Russell R. “From Servants to Slaves: The Transformation of the Chesapeake Labor System.” Southern Studies 16 (1977): 355-90.

On slave law, see
Fede, Andrew. “Legal Protection for Slave Buyers in the U.S. South.” American Journal of Legal History 31 (1987).

Finkelman, Paul. An Imperfect Union: Slavery, Federalism, and Comity. Chapel Hill: University of North Carolina Press, 1981.

Finkelman, Paul. Slavery, Race, and the American Legal System, 1700-1872. New York: Garland, 1988.

Finkelman, Paul, ed. Slavery and the Law. Madison: Madison House, 1997.

Flanigan, Daniel J. The Criminal Law of Slavery and Freedom, 1800-68. New York: Garland, 1987.

Morris, Thomas D. Southern Slavery and the Law, 1619-1860. Chapel Hill: University of North Carolina Press, 1996.

Schafer, Judith K. Slavery, The Civil Law, and the Supreme Court of Louisiana. Baton Rouge: Louisiana State University Press, 1994.

Tushnet, Mark V. The American Law of Slavery, 1810-60: Considerations of Humanity and Interest. Princeton: Princeton University Press, 1981.

Wahl, Jenny B. The Bondsman’s Burden: An Economic Analysis of the Common Law of Southern Slavery. New York: Cambridge University Press, 1998.

Other useful sources include
Berlin, Ira, and Philip D. Morgan, eds. The Slave’s Economy: Independent Production by Slaves in the Americas. London: Frank Cass, 1991.

Berlin, Ira, and Philip D. Morgan, eds. Cultivation and Culture: Labor and the Shaping of Slave Life in the Americas. Charlottesville: University Press of Virginia, 1993.

Elkins, Stanley M. Slavery: A Problem in American Institutional and Intellectual Life. Chicago: University of Chicago Press, 1976.

Engerman, Stanley, and Eugene Genovese. Race and Slavery in the Western Hemisphere: Quantitative Studies. Princeton: Princeton University Press, 1975.

Fehrenbacher, Don. Slavery, Law, and Politics. New York: Oxford University Press, 1981.

Franklin, John H. From Slavery to Freedom. New York: Knopf, 1988.

Genovese, Eugene D. Roll, Jordan, Roll. New York: Pantheon, 1974.

Genovese, Eugene D. The Political Economy of Slavery: Studies in the Economy and Society of the Slave South. Middletown, CT: Wesleyan, 1989.

Hindus, Michael S. Prison and Plantation. Chapel Hill: University of North Carolina Press, 1980.

Margo, Robert, and Richard Steckel. “The Heights of American Slaves: New Evidence on Slave Nutrition and Health.” Social Science History 6 (1982): 516-538.

Phillips, Ulrich B. American Negro Slavery: A Survey of the Supply, Employment and Control of Negro Labor as Determined by the Plantation Regime. New York: Appleton, 1918.

Stampp, Kenneth M. The Peculiar Institution: Slavery in the Antebellum South. New York: Knopf, 1956.

Steckel, Richard. “Birth Weights and Infant Mortality Among American Slaves.” Explorations in Economic History 23 (1986): 173-98.

Walton, Gary, and Hugh Rockoff. History of the American Economy. Orlando: Harcourt Brace, 1994, chapter 13.

Whaples, Robert. “Where Is There Consensus among American Economic Historians?” Journal of Economic History 55 (1995): 139-154.

Data can be found at
U.S. Bureau of the Census, Historical Statistics of the United States, 1970, collected in ICPSR study number 0003, “Historical Demographic, Economic and Social Data: The United States, 1790-1970,” located at http://fisher.lib.virginia.edu/census/.

Citation: Bourne, Jenny. “Slavery in the United States”. EH.Net Encyclopedia, edited by Robert Whaples. March 26, 2008. URL http://eh.net/encyclopedia/slavery-in-the-united-states/

The International Natural Rubber Market, 1870-1930

Zephyr Frank, Stanford University, and Aldo Musacchio, Ibmec São Paulo

Overview of the Rubber Market, 1870-1930

Natural rubber was first used by the indigenous peoples of the Amazon basin for a variety of purposes. By the middle of the eighteenth century, Europeans had begun to experiment with rubber as a waterproofing agent. In the early nineteenth century, rubber was used to make waterproof shoes (Dean, 1987). The best source of latex, the milky fluid from which natural rubber products were made, was Hevea brasiliensis, which grew predominantly in the Brazilian Amazon (but also in the Amazonian regions of Bolivia and Peru). Thus, by geographical accident, the first period of rubber’s commercial history, from the late 1700s through 1900, was centered in Brazil; the second period, from roughly 1910 on, was increasingly centered in East Asia as the result of plantation development. The first century of rubber was typified by relatively low levels of production, high wages, and very high prices; the period following 1910 was one of rapidly increasing production, low wages, and falling prices.

Uses of Rubber

The early uses of the material were quite limited. Initially the problem with natural rubber was its sensitivity to temperature changes, which altered its shape and consistency. In 1839 Charles Goodyear developed the process of vulcanization, which modified rubber so that it could withstand extreme temperatures. It was then that natural rubber became suitable for producing hoses, tires, industrial bands, sheets, shoes, shoe soles, and other products. What initially set off the “Rubber Boom,” however, was the popularization of the bicycle. The boom was then accentuated after 1900 by the development of the automobile industry and the expansion of the tire industry to produce car tires (Weinstein, 1983; Dean 1987).

Brazil’s Initial Advantage and High-Wage Cost Structure

Until the turn of the twentieth century, Brazil and the other countries that share the Amazon basin (Bolivia, Venezuela, and Peru) were the only exporters of natural rubber. Brazil sold almost ninety percent of the total rubber traded in the world. The fundamental fact that explains Brazil’s entry into and domination of natural rubber production during the period 1870 through roughly 1913 is that most of the world’s rubber trees grew naturally in the Amazon region of Brazil. The Brazilian rubber industry developed a high-wage cost structure as the result of labor scarcity and lack of competition in the early years of rubber production. Since there were no credit markets to finance the journeys of workers from other parts of Brazil to the Amazon, workers paid for their passage with loans from their future employers. Much like indentured servitude in the colonial United States, these loans were repaid with work once the laborers were established in the Amazon basin. Another factor that increased the costs of producing rubber was that most provisions for tappers in the field had to be shipped in from outside the region at great expense (Barham and Coomes, 1994). This made Brazilian production very expensive compared to the future plantations in Asia. Nevertheless, Brazil’s system of production worked well as long as two conditions were met: first, that the demand for rubber did not grow too quickly, for wild rubber production could not expand rapidly owing to labor and environmental constraints; and second, that no competition based on a more efficient arrangement of the factors of production existed. As can be seen in Figure 1, Brazil dominated the natural rubber market until the first decade of the twentieth century.

Between 1900 and 1913, these conditions ceased to hold. First, the demand for rubber skyrocketed [see Figure 2], providing a huge incentive for other producers to enter the market. Prices had been high before, but Brazilian supply had been quite capable of meeting demand; now, prices were high and demand appeared insatiable. Plantations, which had been possible since the 1880s, now became a reality mainly in the colonies of Southeast Asia. Because Brazil was committed to a high-wage, labor-scarce production regime, it was unable to counter the entry of Asian plantations into the market it had dominated for half a century.

Southeast Asian Plantations Develop a Low-Cost, Labor-Intensive Alternative

In Asia, the British and Dutch drew upon their superior stocks of capital and vast pools of cheap colonial labor to transform rubber collection into a low-cost, labor-intensive industry. Investment per tapper in Brazil was reportedly 337 pounds sterling circa 1910; in the low-cost Asian plantations, investment was estimated at just 210 pounds per worker (Dean, 1987). Not only were Southeast Asian tappers cheaper, they were potentially eighty percent more productive (Dean, 1987).
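
Taken together, these two figures imply a large gap in capital tied up per unit of rubber. A minimal sketch of the arithmetic, normalizing a Brazilian tapper's annual output to one unit (the normalization is an assumption; only the ratios matter):

    # Capital tied up per unit of rubber output, Brazil versus Asian plantations,
    # using the figures cited above. Output per Brazilian tapper is normalized
    # to 1, so only the ratios are meaningful.
    investment_per_worker = {"Brazil": 337, "Asia": 210}   # pounds sterling, c. 1910
    output_per_worker     = {"Brazil": 1.0, "Asia": 1.8}   # Asian tappers ~80% more productive

    for region in investment_per_worker:
        cost = investment_per_worker[region] / output_per_worker[region]
        print(f"{region}: {cost:.0f} pounds of capital per unit of output")
    # Brazil: 337; Asia: 117 -- roughly a third of the Brazilian figure.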

Ironically, the new plantation system proved equally susceptible to uncertainty and competition. Unexpected sources of uncertainty arose in the technological development of automobile tires. In spite of colonialism, the British and Dutch were unable to collude to control production, and prices plummeted after 1910. When the British did attempt to restrict production in the 1920s, the United States attempted to set up plantations in Brazil and the Dutch were happy to take market share. Yet it was too late for Brazil: the cost structure of Southeast Asian plantations could not be matched. In a sense, then, the game was no longer worth the candle: in order to compete in rubber production, Brazil would have needed significantly lower wages, which would only have been possible with a vastly expanded transport network and domestic agricultural sector in the hinterland of the Amazon basin. Such an expensive solution made no economic sense in the 1910s and 1920s, when coffee and nascent industrialization in São Paulo offered much more promising prospects.

Natural Rubber Extraction and Commercialization: Brazil

Rubber Tapping in the Amazon Rainforest

One disadvantage Brazilian rubber producers faced was that the organization of production depended on the distribution of Hevea brasiliensis trees in the forest. The owner (or often the lease concessionaire) of a large plot of land would hire tappers to gather rubber by gouging the tree trunk with an axe. In Brazil, the usual practice was to make a large cut in the tree and attach a small bowl to collect the latex that flowed from the trunk. Typically, tappers worked two “rows” of trees, alternating between them from one day to the next. Each “row” comprised several circular paths through the forest, each with more than 100 trees. Rubber could only be collected during the tapping season (August to January), and the living conditions of tappers were hard. As the need for rubber expanded, tappers had to be sent deeper into the Amazon rainforest to look for unexplored land with more productive trees. Tappers established their shacks close to the rivers because rubber, once smoked, was sent by boat to Manaus (capital of the state of Amazonas) or to Belém (capital of the state of Pará), both entrepôts for rubber exports to Europe and the U.S.[1]

Competition or Exploitation? Tappers and Seringalistas

After collecting the rubber, tappers would go back to their shacks and smoke the resin in order to make balls of partially filtered and purified rough rubber that could be sold at the ports. There is much discussion about the commercialization of the product. Weinstein (1983) argues that the seringalista, the employer of the rubber tapper, controlled the transportation of rubber to the ports, where he sold it, often in exchange for goods that could be sold (at a large markup) back to the tapper. In this economy money was scarce, and the “wages” of tappers, or seringueiros, depended on the current price of rubber; the usual agreement was for tappers to split the gross profits with their patrons. These wages were most commonly paid in goods, such as cigarettes, food, and tools. According to Weinstein (1983), the seringalistas overpriced these goods to extract larger profits from the seringueiros’ work. Barham and Coomes (1994), on the other hand, argue that the structure of the market in the Amazon was less closed and that independent traders would travel around the basin in small boats, willing to exchange goods for rubber. Poor monitoring by employers and an absent state facilitated these under-the-counter transactions, which allowed tappers to get better pay for their work.

Exporting Rubber

From the ports, rubber passed mainly into the hands of Brazilian, British, and American exporters. Contrary to what Weinstein (1983) argued, Brazilian producers or local merchants from the interior could choose to send the rubber on consignment to a New York commission house rather than selling it to an exporter in the Amazon (Shelley, 1918). Rubber was taken, like other commodities, to ports in Europe and the U.S. to be distributed to the industries that bought large amounts of the product on the London or New York commodity exchanges. A large part of the rubber produced was traded at these exchanges, but tire manufacturers and other large consumers also made direct purchases from distributors in the country of origin.[2]

Rubber Production in Southeast Asia

Seeds Smuggled from Brazil to Britain

The Hevea brasiliensis, the most important type of rubber tree, was an Amazonian species. This is why the countries of the Amazon basin were the main producers of rubber at the beginning of the international rubber trade. How, then, did British and Dutch colonies in Southeast Asia end up dominating the market? Brazil tried to prevent Hevea brasiliensis seeds from being exported, since the Brazilian government knew that as long as Brazil remained the main producer of rubber, profits from the rubber trade were assured. Protecting property rights in seeds proved a futile exercise. In 1876, the Englishman and aspiring author and rubber expert Henry Wickham smuggled 70,000 seeds to London, a feat for which he earned Brazil’s eternal opprobrium and an English knighthood. After experimentation with the seeds, 2,800 plants were raised at the Royal Botanic Gardens at Kew, in London, and then shipped to the Peradeniya Gardens in Ceylon. In 1877 a case of 22 plants reached Singapore and was planted at the Singapore Botanical Garden, and in the same year the first plant arrived in the Malay States. Since rubber trees needed six to eight years to become mature enough to yield good rubber, tapping began in the 1880s.

Scientific Research to Maximize Yields

In order to develop rubber extraction in the Malay States, more scientific intervention was needed. In 1888, H. N. Ridley was appointed director of the Singapore Botanical Garden and began experimenting with tapping methods. The final result of these experiments with different tapping methods in Southeast Asia was the discovery of how to extract rubber in such a way that the tree would maintain a high yield for a long period of time. Rather than making a deep gouge in the rubber tree with an axe, as in Brazil, Southeast Asian tappers scraped the trunk of the tree by making a series of overlapping Y-shaped cuts, such that at the bottom there was a channel ending in a collecting receptacle. According to Akers (1912), the tapping techniques used in Asia ensured that the trees could be exploited for longer periods, because the Brazilian technique scarred the tree’s bark and lowered yields over time.

Rapid Commercial Development and the Automobile Boom

Commercial planting in the Malay States began in 1895. The development of large-scale plantations was slow at first because of a lack of capital; investors did not become interested in plantations until the prospects for rubber improved radically with the spectacular development of the automobile industry. By 1905, European capitalists were sufficiently interested in investing in large-scale plantations in Southeast Asia to plant some 38,000 acres of trees. Between 1905 and 1911 the annual increase was over 70,000 acres, and, by the end of 1911, the acreage in the Malay States reached 542,877 (Baxendale, 1913). The expansion of plantations was made possible by the increasingly sophisticated organization of such enterprises. Joint-stock companies were created to exploit the land grants, and capital was raised through stock issues on the London Stock Exchange. The high returns during the first years (1906-1910) made investors ever more optimistic, and capital flowed in large amounts. Plantations depended on a very disciplined system of labor and an intensive use of land.

Malaysia’s Advantages over Brazil

In addition to the intensive use of land, the production system in Malaysia had several economic advantages over that of Brazil. First, in the Malay States there was no specific tapping season, unlike Brazil where the rain did not allow tappers to collect rubber during six months of the year. Second, health conditions were better on the plantations, where rubber companies typically provided basic medical care and built infirmaries. In Brazil, by contrast, yellow fever and malaria made survival harder for rubber tappers who were dispersed in the forest and without even rudimentary medical attention. Finally, better living conditions and the support of the British and Dutch colonial authorities helped to attract Indian labor to the rubber plantations. Japanese and Chinese labor also immigrated to the plantations in Southeast Asia in response to relatively high wages (Baxendale, 1913).

Initially, demand for rubber was associated with specialized industrial components (belts and gaskets, etc.), consumer goods (golf balls, shoe soles, galoshes, etc.), and bicycle tires. Prior to the development of the automobile as a mass-marketed phenomenon, the Brazilian wild rubber industry was capable of meeting world demand and, furthermore, it was impossible for rubber producers to predict the scope and growth of the automobile industry prior to the 1900s. Thus, as Figure 3 indicates, growth in demand, as measured by U.K. imports, was not particularly rapid in the period 1880-1899. There was no reason to believe, in the early 1880s, that demand for rubber would explode as it did in the 1890s. Even as demand rose in the 1890s with the bicycle craze, the rate of increase was not beyond the capacity of wild rubber producers in Brazil and elsewhere (see figure 3). High rubber prices did not induce rapid increases in production or plantation development in the nineteenth century. In this context, Brazil developed a reasonably efficient industry based on its natural resource endowment and limited labor and capital sources.

In the first three decades of the twentieth century, major changes in both supply and demand created unprecedented uncertainty in rubber markets. On the supply side, Southeast Asian rubber plantations transformed the cost structure and capacity of the industry. On the demand side, and directly inducing plantation development, automobile production and associated demand for rubber exploded. Then, in the 1920s, competition and technological advance in tire production led to another shift in the market with profound consequences for rubber producers and tire manufacturers alike.

Rapid Price Fluctuations and Output Lags

Figure 1 shows the annual fluctuations of the price of Rubber Smoked Sheet type 1 (RSS1) in London. The movements from 1906 to 1910 were very volatile on a monthly basis as well, complicating forecasts and making it hard for producers to decide how to react to market signals. Even though information on prices and quantities in the markets was published every month in the major rubber journals, producers did not have a good idea of what was going to happen in the long run. If prices were high today, they wanted to expand the area planted, but since it took six to eight years for trees to yield good rubber, they would have to wait to see the result of the expansion in production many years and price swings later. Since many producers reacted in the same way, periods of overproduction six to eight years after a price rise were common.[3] Overproduction meant low prices, but since investments were mostly sunk (the costs of preparing the land, planting the trees and bringing in the workers could not be recovered, and these resources could not easily be shifted to other uses), the market tended to stay oversupplied for long periods of time.
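
The six-to-eight-year lag between planting and tapping gives rubber supply a cobweb-like dynamic: producers respond to today's price, but the resulting output arrives years later, by which time the market may already be oversupplied. The simulation below is a stylized sketch of that mechanism; the linear demand and supply schedules, the parameter values, and the fixed seven-year lag are illustrative assumptions, not estimates from the historical data.

    # Stylized cobweb dynamics for a crop with a long planting-to-harvest lag.
    # Producers plant in response to the current price; the resulting supply
    # arrives LAG years later. All parameters are illustrative assumptions.
    LAG = 7            # years from planting to tappable trees (6-8 in the text)
    YEARS = 40

    a, b = 200.0, 1.0  # inverse demand: price = a - b * quantity
    c, d = 20.0, 0.6   # supply planted today, harvested LAG years later: q = c + d * price

    prices = [100.0] * LAG          # assume a flat price history before year 0
    quantities = []

    for t in range(YEARS):
        # Supply arriving this year was planted LAG years ago at the then-current price.
        q = c + d * prices[t]
        p = max(a - b * q, 0.0)     # market-clearing price given this year's supply
        quantities.append(q)
        prices.append(p)

    for t in range(0, YEARS, 5):
        print(f"year {t:2d}: quantity {quantities[t]:6.1f}  price {prices[t + LAG]:6.1f}")

With these parameters the price overshoots and then oscillates with a period of roughly twice the planting lag before settling down, which is the mechanism behind the recurring episodes of overproduction described above.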


The years 1905 and 1906 marked historic highs for rubber prices, surpassed only briefly in 1909 and 1910. The area planted in rubber throughout Asia grew from 15,000 acres in 1901 to 433,000 acres in 1907; these plantings matured circa 1913, and cultivated rubber surpassed Brazilian wild rubber in volume exported.[4] The growth of the Asian rubber industry soon swamped Brazil’s market share and drove prices well below pre-boom levels. After the major price peak of 1910, prices plummeted and followed a downward trend throughout the 1920s. By 1921, the bottom had dropped out of the market, and Malaysian rubber producers were induced by the British colonial authorities to enter into a scheme to restrict production. Plantations received export coupons that set quotas limiting the supply of rubber. The restriction did not affect prices until 1924, when consumption outstripped production and prices started to rise rapidly. The scheme’s success was short-lived, however, because competition from the Dutch plantations in Southeast Asia and elsewhere drove prices down again by 1926. The plan was officially ended in 1928.[5]

Automobiles’ Impact on Rubber Demand

In order to understand the boom in rubber production, it is fundamental to look at the automobile industry. Cars had originally been adapted from horse-drawn carriages; some ran on wooden wheels, some on metal, some shod as it were in solid rubber. In any case, the ride at the speeds cars were soon capable of was impossible to bear. The pneumatic tire was quickly adopted from the bicycle, and the automobile tire industry was born — soon to account for well over half of rubber company sales in the United States where the vast majority of automobiles were manufactured in the early years of the industry.[6] The amount of rubber required to satisfy demand for automobile tires led first to a spike in rubber prices; second, it led to the development of rubber plantations in Asia.[7]

The connection between automobiles, plantations, and the rubber tire industry was explicit and obvious to observers at the time. Harvey Firestone, son of the founder of the company, put it this way:

It was not until 1898 that any serious attention was paid to plantation development. Then came the automobile, and with it the awakening on the part of everybody that without rubber there could be no tires, and without tires there could be no automobiles. (Firestone, 1932, p. 41)

Thus the emergence of a strong consuming sector linked to the automobile was necessary before plantations became an attractive investment. High prices alone had not been enough: the average price of rubber from 1880-1884 was 401 pounds sterling per ton, and from 1900 to 1904, when the first plantations were beginning to be set up, the average price was 459 pounds sterling per ton. Asian plantations were thus developed both in response to high rubber prices and to what everyone could see was an exponentially growing source of demand in automobiles. Previous consumers of rubber had not shown the kind of dynamism needed to spur entry by plantations into the natural rubber market, even though prices were very high throughout most of the second half of the nineteenth century.

Producers Need to Forecast Future Supply and Demand Conditions

Rubber producers made decisions about production and planting during the period 1900-1912 with the aim of reaping windfall profits rather than with an eye to the long-run sustainability of their business. High prices were an incentive for all to increase production, but increasing production through additional acreage could mean losses for everyone in the future, because too much supply would drive prices down. Moreover, current prices could not guarantee profits when investment decisions had to be made six or more years in advance, as was the case in plantation production: in order to invest in plantations, investors had to predict the future interaction of supply and demand. Demand, although high and apparently relatively price inelastic, was not entirely predictable. It was predictable enough, however, for planters to expand acreage in rubber in Asia at a dramatic rate. Planters were often uncertain as to the aggregate level of supply: new plantations were constantly coming into production, while others were entering decline or bankruptcy. Thus their investments could yield a great deal in the short run, but if everyone reacted in the same way, prices were driven down and profits fell as well. This is what happened in the 1920s, after all the acreage expansion of the first two decades of the century.

Demand Growth Unexpectedly Slows in the 1920s

Plantings between 1912 and 1916 were destined to come into production during a period in which growth in the automobile industry leveled off significantly owing to the recession of 1920-21. Making matters worse for rubber producers, major advances in tire technology further dampened demand: the change from corded to balloon tires, for example, increased average tire tread mileage from 8,000 to 15,000 miles.[8] The shift from corded to balloon tires decreased demand for natural rubber even as the automobile industry recovered from recession in the early 1920s. In addition, better design of tire casings circa 1920 led to the growth of the retreading industry, which brought further savings on rubber. Finally, better techniques in cotton weaving lowered friction and heat and further extended tire life.[9] As rubber supplies increased and demand decreased and became more price inelastic, prices plummeted: neither demand nor price proved predictable over the long run, and suppliers paid a stiff price for overextending themselves during the boom years. Rubber tire manufacturers suffered the same fate: competition and technology (which they themselves introduced) pushed prices downward and, at the same time, flattened demand (Allen, 1936).[10]
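
The effect of the tread-mileage improvement on rubber demand is simple to quantify. A minimal sketch follows, holding miles driven and rubber per tire fixed; both normalizations are assumptions, and only the ratio of tread mileages comes from the text.

    # Effect of longer-lived tires on replacement demand for rubber.
    # Miles driven per year and rubber per tire are normalized assumptions;
    # only the ratio of tread mileages (from the text) matters.
    miles_per_year = 10_000
    rubber_per_tire = 1.0          # arbitrary units

    for tread_miles in (8_000, 15_000):
        tires_needed = miles_per_year / tread_miles
        rubber_used = tires_needed * rubber_per_tire
        print(f"{tread_miles:,}-mile tires: {rubber_used:.2f} units of rubber per year")
    # Moving from 8,000- to 15,000-mile treads cuts replacement demand per mile
    # driven by nearly half (8,000 / 15,000 = 0.53).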

Now, if one looks at the price of rubber and the rate of growth in demand as measured by imports in the 1920s, it is clear that the industry was over-invested in capacity. The consequences of technological change were dramatic for tire manufacturer profits as well as for rubber producers.

Conclusion

The natural rubber trade underwent several radical transformations over the period 1870 to 1930. First, prior to 1910, it was associated with high costs of production and high prices for final goods; most rubber was produced, during this period, by tapping rubber trees in the Amazon region of Brazil. After 1900, and especially after 1910, rubber was increasingly produced on low-cost plantations in Southeast Asia. The price of rubber fell with plantation development and, at the same time, the volume of rubber demanded by car tire manufacturers expanded dramatically. Uncertainty, in terms of both supply and demand, (often driven by changing tire technology) meant that natural rubber producers and tire manufacturers both experienced great volatility in returns. The overall evolution of the natural rubber trade and the related tire manufacture industry was toward large volume, low-cost production in an internationally competitive environment marked by commodity price volatility and declining levels of profit as the industry matured.

References

Akers, C. E. Report on the Amazon Valley: Its Rubber Industry and Other Resources. London: Waterlow & Sons, 1912.

Allen, Hugh. The House of Goodyear. Akron: Superior Printing, 1936.

Alves Pinto, Nelson Prado. Política Da Borracha No Brasil. A Falência Da Borracha Vegetal. São Paulo: HUCITEC, 1984.

Babcock, Glenn D. History of the United States Rubber Company. Indiana: Bureau of Business Research, 1966.

Barham, Bradford, and Oliver Coomes. “The Amazon Rubber Boom: Labor Control, Resistance, and Failed Plantation Development Revisited.” Hispanic American Historical Review 74, no. 2 (1994): 231-57.

Barham, Bradford, and Oliver Coomes. Prosperity’s Promise: The Amazon Rubber Boom and Distorted Economic Development. Boulder: Westview Press, 1996.

Barham, Bradford, and Oliver Coomes. “Wild Rubber: Industrial Organisation and the Microeconomics of Extraction during the Amazon Rubber Boom (1860-1920).” Journal of Latin American Studies 26, no. 1 (1994): 37-72.

Baxendale, Cyril. “The Plantation Rubber Industry.” India Rubber World, 1 January 1913.

Blackford, Mansel, and K. Austin Kerr. BFGoodrich. Columbus: Ohio State University Press, 1996.

Brazil. Instituto Brasileiro de Geografia e Estatística. Anuário Estatístico Do Brasil. Rio de Janeiro: Instituto Brasileiro de Geografia e Estatística, 1940.

Dean, Warren. Brazil and the Struggle for Rubber: A Study in Environmental History. Cambridge: Cambridge University Press, 1987.

Drabble, J. H. Rubber in Malaya, 1876-1922. Oxford: Oxford University Press, 1973.

Firestone, Harvey Jr. The Romance and Drama of the Rubber Industry. Akron: Firestone Tire and Rubber Co., 1932.

Santos, Roberto. História Econômica Da Amazônia (1800-1920). São Paulo: T.A. Queiroz, 1980.

Schurz, William Lytle, O. D. Hargis, Curtis Fletcher Marbut, and C. B. Manifold. Rubber Production in the Amazon Valley. U.S. Bureau of Foreign and Domestic Commerce (Dept. of Commerce), Trade Promotion Series: Crude Rubber Survey, no. 4, no. 28. Washington: Govt. Print. Office, 1925.

Shelley, Miguel. “Financing Rubber in Brazil.” India Rubber World, 1 July 1918.

Weinstein, Barbara. The Amazon Rubber Boom, 1850-1920. Stanford: Stanford University Press, 1983.


Notes:

[1] Rubber tapping in the Amazon basin is described in Weinstein (1983), Barham and Coomes (1994), Stanfield (1998), and in several articles published in India Rubber World, the main journal on rubber trading. See, for example, the explanation of tapping in the October 1, 1910 issue, or “The Present and Future of the Native Havea Rubber Industry” in the January 1, 1913 issue. For a detailed analysis of the rubber industry by region in Brazil by contemporary observers, see Schurz et al. (1925).

[2] Newspapers such as The Economist and the London Times included sections on rubber trading, with weekly or monthly reports on market conditions, prices, and other information. For the dealings between tire manufacturers and distributors in Brazil and Malaysia, see Firestone (1932).

[3] Using cross-correlations of production and prices, we found that changes in production at time t were correlated with price changes in t-6 and t-8 (years). This is only weak evidence because these correlations are not statistically significant.
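
A minimal sketch of how such a lagged cross-correlation check might be run in Python with pandas is given below. This is our illustration only, not the authors' code; the series names, the data-loading step, and the ten-year maximum lag are hypothetical placeholders.

import pandas as pd

def lagged_correlations(production: pd.Series, price: pd.Series, max_lag: int = 10) -> pd.Series:
    # Correlate year-on-year changes in production at time t with
    # price changes at t - k, for k = 0, 1, ..., max_lag.
    d_prod = production.diff()
    d_price = price.diff()
    return pd.Series({k: d_prod.corr(d_price.shift(k)) for k in range(max_lag + 1)})

# Hypothetical usage with annual series indexed by year, e.g. 1870-1930:
# prod = pd.Series([...], index=range(1870, 1931))
# price = pd.Series([...], index=range(1870, 1931))
# print(lagged_correlations(prod, price))  # inspect the peaks around k = 6 and k = 8

The statistical significance of each coefficient would still have to be assessed separately (under simple white-noise assumptions, a rough standard error is 1/sqrt(n)), which is the caveat the note raises.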

[4] Drabble (1973), 213, 220. The expansion in acreage was accompanied by a boom in company formation.

[5] Drabble (1973), 192-199. This was the so-called Stevenson Committee restriction, which lasted from 1922 to 1926. The plan essentially limited the amount of rubber each planter could export, assigning quotas through coupons.

[6] Pneumatic tires were first adapted to automobiles in 1896; Dunlop’s pneumatic bicycle tire was introduced in 1888. The great advantage of these tires over solid rubber was that they generated far less friction (extending tread life), cushioned the ride, and allowed for higher speeds.

[7] Early histories of the rubber industry tended to blame Brazilian “monopolists” for holding up supply and reaping windfall profits, see, e.g., Allen (1936), 116-117. In fact, rubber production in Brazil was far from monopolistic; other reasons account for supply inelasticity.

[8] Blackford and Kerr (1996), p. 88.

[9] The so-called “supertwist” weave allowed for the manufacture of larger, more durable tires, especially for trucks. Allen (1936), pp. 215-216.

[10] Allen (1936), p. 320.

Citation: Frank, Zephyr and Aldo Musacchio. “The International Natural Rubber Market, 1870-1930”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/the-international-natural-rubber-market-1870-1930/

The Economics of the American Revolutionary War

Ben Baack, Ohio State University

By the time of the onset of the American Revolution, Britain had attained the status of a military and economic superpower. The thirteen American colonies were one part of a global empire generated by the British in a series of colonial wars beginning in the late seventeenth century and continuing on to the mid eighteenth century. The British military establishment increased relentlessly in size during this period as it engaged in the Nine Years War (1688-97), the War of Spanish Succession (1702-13), the War of Austrian Succession (1739-48), and the Seven Years War (1756-63). These wars brought considerable additions to the British Empire. In North America alone the British victory in the Seven Years War resulted in France ceding to Britain all of its territory east of the Mississippi River as well as all of Canada and Spain surrendering its claim to Florida (Nester, 2000).

Given the sheer magnitude of the British military and its empire, the actions taken by the American colonists for independence have long fascinated scholars. Why did the colonists want independence? How were they able to achieve a victory over what was at the time the world’s preeminent military power? What were the consequences of achieving independence? These and many other questions have engaged the attention of economic, legal, military, political, and social historians. In this brief essay we will focus only on the economics of the Revolutionary War.

Economic Causes of the Revolutionary War

Prior to the conclusion of the Seven Years War there was little, if any, reason to believe that one day the American colonies would undertake a revolution in an effort to create an independent nation-state. As a part of the empire the colonies were protected from foreign invasion by the British military. In return, the colonists paid relatively few taxes and could engage in domestic economic activity without much interference from the British government. For the most part the colonists were only asked to adhere to regulations concerning foreign trade. In a series of acts passed by Parliament during the seventeenth century, the Navigation Acts required that all trade within the empire be conducted on ships which were constructed, owned and largely manned by British citizens. Certain enumerated goods, whether exported or imported by the colonies, had to be shipped through England regardless of the final port of destination.

Western Land Policies

Economic incentives for independence significantly increased in the colonies as a result of a series of critical land policy decisions made by the British government. The Seven Years’ War had originated in a contest between Britain and France over control of the land from the Appalachian Mountains to the Mississippi River. During the 1740s the British government pursued a policy of promoting colonial land claims in, as well as settlement of, this area, which was at the time French territory. With the ensuing conflict of land claims both nations resorted to the use of military force, which ultimately led to the onset of the war. At the conclusion of the war, as a result of one of many concessions made by France in the 1763 Treaty of Paris, Britain acquired all the contested land west of its colonies to the Mississippi River. It was at this point that the British government began to implement a fundamental change in its western land policy.

Britain now reversed its long-time position of encouraging colonial claims to land and settlement in the west. The essence of the new policy was to establish British control of the former French fur trade in the west by excluding any settlement there by the Americans. Implementation led to the development of three new areas of policy: constructing the new rules of exclusion, enforcing those rules, and financing the cost of their enforcement. First, the rules of exclusion were set out under the terms of the Proclamation of 1763, whereby colonists were not allowed to settle in the west. This action legally nullified the claims to land in the area by a host of individual colonists, land companies, as well as colonies. Second, enforcement of the new rules was delegated to the standing army of about 7,500 regulars newly stationed in the west. This army for the most part occupied former French forts, although some new ones were built. Among other things, this army was charged with keeping Americans out of the west as well as returning to the colonies any Americans who were already there. Third, financing of the cost of the enforcement was to be accomplished by levying taxes on the Americans. Thus, Americans were being asked to finance a British army which was charged with keeping Americans out of the west (Baack, 2004).

Tax Policies

Of all the potential options available for funding the new standing army in the west, why did the British decide to tax their American colonies? The answer is fairly straightforward. First of all, the victory over the French in the Seven Years’ War had come at a high price. Domestic taxes had been raised substantially during the war and total government debt had increased nearly twofold (Brewer, 1989). In addition, taxes were significantly higher in Britain than in the colonies. One estimate suggests the per capita tax burden in the colonies ranged from two to four percent of that in Britain (Palmer, 1959). And finally, the voting constituencies of the members of parliament were in Britain not the colonies. All things considered, Parliament viewed taxing the colonies as the obvious choice.

Accordingly, a series of tax acts were passed by Parliament, the revenue from which was to be used to help pay for the standing army in America. The first was the Sugar Act of 1764. Proposed by England’s Prime Minister, the act lowered tariff rates on non-British products from the West Indies as well as strengthened their collection. It was hoped this would reduce the incentive for smuggling and thereby increase tariff revenue (Bullion, 1982). The following year Parliament passed the Stamp Act, which imposed a tax commonly used in England. It required stamps for a broad range of legal documents as well as newspapers and pamphlets. While the colonial stamp duties were less than those in England, they were expected to generate enough revenue to finance a substantial portion of the cost of the new standing army. The same year, passage of the Quartering Act imposed essentially a tax in kind by requiring the colonists to provide British military units with housing, provisions, and transportation. In 1767 the Townshend Acts imposed tariffs upon a variety of imported goods and established a Board of Customs Commissioners in the colonies to collect the revenue.

Boycotts

While the Americans could do little about the British army stationed in the west, they could do something about the new British taxes. American opposition to these acts was expressed initially in a variety of peaceful forms. While they did not have representation in Parliament, the colonists did attempt to exert some influence in it through petition and lobbying. However, it was the economic boycott that became by far the most effective means of altering the new British economic policies. In 1765 representatives from nine colonies met at the Stamp Act Congress in New York and organized a boycott of imported English goods. The boycott was so successful in reducing trade that English merchants lobbied Parliament for the repeal of the new taxes. Parliament soon responded to the political pressure. During 1766 it repealed both the Stamp and Sugar Acts (Johnson, 1997). In response to the Townshend Acts of 1767, a second major boycott started in 1768 in Boston and New York and subsequently spread to other cities, leading Parliament in 1770 to repeal all of the Townshend duties except the one on tea. In addition, Parliament decided at the same time not to renew the Quartering Act.

With these actions taken by Parliament, the Americans appeared to have successfully overturned the new British postwar tax agenda. However, Parliament had not given up what it believed to be its right to tax the colonies. On the same day it repealed the Stamp Act, Parliament passed the Declaratory Act stating that the British government had the full power and authority to make laws governing the colonies in all cases whatsoever, including taxation. Legislation, not principles, had been overturned.

The Tea Act

Three years after the repeal of the Townshend duties British policy was once again to emerge as an issue in the colonies. This time the American reaction was not peaceful. It all started when Parliament for the first time granted an exemption from the Navigation Acts. In an effort to assist the financially troubled British East India Company Parliament passed the Tea Act of 1773, which allowed the company to ship tea directly to America. The grant of a major trading advantage to an already powerful competitor meant a potential financial loss for American importers and smugglers of tea. In December a small group of colonists responded by boarding three British ships in the Boston harbor and throwing overboard several hundred chests of tea owned by the East India Company (Labaree, 1964). Stunned by the events in Boston, Parliament decided not to cave in to the colonists as it had before. In rapid order it passed the Boston Port Act, the Massachusetts Government Act, the Justice Act, and the Quartering Act. Among other things these so-called Coercive or Intolerable Acts closed the port of Boston, altered the charter of Massachusetts, and reintroduced the demand for colonial quartering of British troops. Once done Parliament then went on to pass the Quebec Act as a continuation of its policy of restricting the settlement of the West.

The First Continental Congress

Many Americans viewed all of this as a blatant abuse of power by the British government. Once again a call went out for a colonial congress to sort out a response. On September 5, 1774 delegates appointed by the colonies met in Philadelphia for the First Continental Congress. Drawing upon the successful manner in which previous acts had been overturned the first thing Congress did was to organize a comprehensive embargo of trade with Britain. It then conveyed to the British government a list of grievances that demanded the repeal of thirteen acts of Parliament. All of the acts listed had been passed after 1763 as the delegates had agreed not to question British policies made prior to the conclusion of the Seven Years War. Despite all the problems it had created, the Tea Act was not on the list. The reason for this was that Congress decided not to protest British regulation of colonial trade under the Navigation Acts. In short, the delegates were saying to Parliament take us back to 1763 and all will be well.

The Second Continental Congress

What happened then was a sequence of events that led to a significant increase in the degree of American resistance to British policies. Before the Congress adjourned in October the delegates voted to meet again in May of 1775 if Parliament did not meet their demands. Confronted by the extent of the American demands, the British government decided it was time to impose a military solution to the crisis. Boston was occupied by British troops. In April a military confrontation occurred at Lexington and Concord. Within a month the Second Continental Congress was convened. Here the delegates decided to fundamentally change the nature of their resistance to British policies. Congress authorized a continental army and undertook the purchase of arms and munitions. To pay for all of this it established a continental currency. With previous political efforts by the First Continental Congress to form an alliance with Canada having failed, the Second Continental Congress took the extraordinary step of instructing its new army to invade Canada. In effect, these actions were those of an emerging nation-state. In October, as American forces closed in on Quebec, the King of England in a speech to Parliament declared that the colonists, having formed their own government, were now fighting for their independence. It was to be only a matter of months before Congress formally declared it.

Economic Incentives for Pursuing Independence: Taxation

Given the nature of British colonial policies, scholars have long sought to evaluate the economic incentives the Americans had in pursuing independence. In this effort economic historians initially focused on the period following the Seven Years War up to the Revolution. It turned out that making a case for the avoidance of British taxes as a major incentive for independence proved difficult. The reason was that many of the taxes imposed were later repealed. The actual level of taxation appeared to be relatively modest. After all, the Americans soon after adopting the Constitution taxed themselves at far higher rates than the British had prior to the Revolution (Perkins, 1988). Rather it seemed the incentive for independence might have been the avoidance of the British regulation of colonial trade. Unlike some of the new British taxes, the Navigation Acts had remained intact throughout this period.

The Burden of the Navigation Acts

One early attempt to quantify the economic effects of the Navigation Acts was by Thomas (1965). Building upon the previous work of Harper (1942), Thomas employed a counterfactual analysis to assess what would have happened to the American economy in the absence of the Navigation Acts. To do this he compared American trade under the Acts with that which would have occurred had America been independent following the Seven Years War. Thomas then estimated the loss of both consumer and producer surplus to the colonies as a result of shipping enumerated goods indirectly through England. These burdens were partially offset by his estimated value of the benefits of British protection and various bounties paid to the colonies. The outcome of his analysis was that the Navigation Acts imposed a net burden of less than one percent of colonial per capita income. From this he concluded the Acts were an unlikely cause of the Revolution. A long series of subsequent works questioned various parts of his analysis but not his general conclusion (Walton, 1971). The work of Thomas also appeared to be consistent with the observation that the First Continental Congress had not demanded in its list of grievances the repeal of either the Navigation Acts or the Sugar Act.
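
Thomas’s accounting can be summarized schematically as follows; the notation is ours, not his, and is offered only to make the structure of the counterfactual explicit:

\[
\text{net burden per capita} \;=\; \frac{(\Delta CS + \Delta PS) - (B_{\text{protection}} + B_{\text{bounties}})}{\text{colonial population}} \;<\; 0.01 \times \text{per capita income},
\]

where \(\Delta CS\) and \(\Delta PS\) are the estimated losses of consumer and producer surplus from routing enumerated goods through England, and the \(B\) terms are the offsetting benefits of British protection and bounties.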

American Expectations about Future British Policy

Did this mean then that the Americans had few if any economic incentives for independence? Upon further consideration, economic historians realized that perhaps more important to the colonists were not the past and present burdens but rather the expected future burdens of continued membership in the British Empire. The Declaratory Act made it clear the British government had not given up what it viewed as its right to tax the colonists. This was despite the fact that up to 1775 the Americans had employed a variety of protest measures including lobbying, petitions, boycotts, and violence. The confluence of not having representation in Parliament while confronting an aggressive new British tax policy designed to raise their relatively low taxes may have made it reasonable for the Americans to expect a substantial increase in the level of taxation in the future (Gunderson, 1976; Reid, 1978). Furthermore, a recent study has argued that in 1776 not only did the future burdens of the Navigation Acts clearly exceed those of the past, but a substantial portion would have been borne by those who played a major role in the Revolution (Sawers, 1992). Seen in this light, the economic incentive for independence would have been avoiding the potential future costs of remaining in the British Empire.

The Americans Undertake a Revolution

1776-77

British Military Advantages

The American colonies had both strengths and weaknesses in terms of undertaking a revolution. The colonial population of well over two million was nearly one third of that in Britain (McCusker and Menard, 1985). The growth in the colonial economy had generated a remarkably high level of per capita wealth and income (Jones, 1980). Yet the hurdles confronting the Americans in achieving independence were indeed formidable. The British military had an array of advantages. With virtual control of the Atlantic, its navy could attack anywhere along the American coast at will and could provide logistical support for the army without much interference. A large core of experienced officers commanded a highly disciplined and well-drilled army in the large-unit tactics of eighteenth-century European warfare. By these measures the American military would have great difficulty in defeating the British. Its navy was small. The Continental Army had relatively few officers proficient in large-unit military tactics. Lacking both the numbers and the discipline of its adversary, the American army was unlikely to be able to meet the British army on equal terms on the battlefield (Higginbotham, 1977).

British Financial Advantages

In addition, the British were in a better position than the Americans to finance a war. A tax system was in place that had provided substantial revenue during previous colonial wars. Also for a variety of reasons the government had acquired an exceptional capacity to generate debt to fund wartime expenses (North and Weingast, 1989). For the Continental Congress the situation was much different. After declaring independence Congress had set about defining the institutional relationship between it and the former colonies. The powers granted to Congress were established under the Articles of Confederation. Reflecting the political environment neither the power to tax nor the power to regulate commerce was given to Congress. Having no tax system to generate revenue also made it very difficult to borrow money. According to the Articles the states were to make voluntary payments to Congress for its war efforts. This precarious revenue system was to hamper funding by Congress throughout the war (Baack, 2001).

Military and Financial Factors Determine Strategy

It was within these military and financial constraints that the war strategies of the British and the Americans were developed. In terms of military strategy, both of the contestants realized that America was simply too large for the British army to occupy all of the cities and countryside. This being the case, the British decided initially that they would try to impose a naval blockade and capture major American seaports. Having already occupied Boston, the British during 1776 and 1777 took New York, Newport, and Philadelphia. With plenty of room to maneuver his forces and unable to match those of the British, George Washington chose to engage in a war of attrition. The purpose was twofold. First, by not engaging in an all-out offensive Washington reduced the probability of losing his army. Second, over time the British might tire of the war.

Saratoga

Frustrated by the lack of a conclusive victory, the British altered their strategy. During 1777 a plan was devised to cut off New England from the rest of the colonies, contain the Continental Army, and then defeat it. An army was assembled in Canada under the command of General Burgoyne and then sent south along the Hudson River. It was to link up with an army sent from New York City. Unfortunately for the British, the plan totally unraveled: in October Burgoyne’s army was defeated at the battle of Saratoga and forced to surrender (Ketchum, 1997).

The American Financial Situation Deteriorates

With the victory at Saratoga the military side of the war had improved considerably for the Americans. However, the financial situation was seriously deteriorating. The states to this point had made no voluntary payments to Congress. At the same time the continental currency had to compete with a variety of other currencies for resources. The states were issuing their own individual currencies to help finance expenditures. Moreover the British in an effort to destroy the funding system of the Continental Congress had undertaken a covert program of counterfeiting the Continental dollar. These dollars were printed and then distributed throughout the former colonies by the British army and agents loyal to the Crown (Newman, 1957). Altogether this expansion of the nominal money supply in the colonies led to a rapid depreciation of the Continental dollar (Calomiris, 1988, Michener, 1988). Furthermore, inflation may have been enhanced by any negative impact upon output resulting from the disruption of markets along with the destruction of property and loss of able-bodied men (Buel, 1998). By the end of 1777 inflation had reduced the specie value of the Continental to about twenty percent of what it had been when originally issued. This rapid decline in value was becoming a serious problem for Congress in that up to this point almost ninety percent of its revenue had been generated from currency emissions.

1778-83

British Invasion of the South

The British defeat at Saratoga had a profound impact upon the nature of the war. The French government, still upset by its defeat by the British in the Seven Years War and encouraged by the American victory, signed a treaty of alliance with the Continental Congress in early 1778. Fearing a new war with France, the British government sent a commission to negotiate a peace treaty with the Americans. The commission offered to repeal all of the legislation applying to the colonies passed since 1763. Congress rejected the offer. The British response was to give up its efforts to suppress the rebellion in the North and in turn organize an invasion of the South. The new southern campaign began with the taking of the port of Savannah in December. Pursuing their southern strategy, the British won major victories at Charleston and Camden during the spring and summer of 1780.

Worsening Inflation and Financial Problems

As the American military situation deteriorated in the South so did the financial circumstances of the Continental Congress. Inflation continued as Congress and the states dramatically increased the rate of issuance of their currencies. At the same time the British continued to pursue their policy of counterfeiting the Continental dollar. In order to deal with inflation some states organized conventions for the purpose of establishing wage and price controls (Rockoff, 1984). With few contributions coming from the states and a currency rapidly losing its value, Congress resorted to authorizing the army to confiscate whatever it needed to continue the war effort (Baack, 2001, 2008).

Yorktown

Fortunately for the Americans, the British military effort collapsed before the funding system of Congress did. In a combined effort during the fall of 1781, French and American forces trapped the British southern army under the command of Cornwallis at Yorktown, Virginia. Under siege by superior forces, the British army surrendered on October 19. The British government had now suffered not only the defeat of its northern strategy at Saratoga but also the defeat of its southern campaign at Yorktown. Following Yorktown, Britain suspended its offensive military operations against the Americans. The war was over. All that remained was the political maneuvering over the terms for peace.

The Treaty of Paris

The Revolutionary War officially concluded with the signing of the Treaty of Paris in 1783. Under the terms of the treaty the United States was granted independence and British troops were to evacuate all American territory. While commonly viewed by historians through the lens of political science, the Treaty of Paris was indeed a momentous economic achievement by the United States. The British ceded to the Americans all of the land east of the Mississippi River which they had taken from the French during the Seven Years War. The West was now available for settlement. To the extent the Revolutionary War had been undertaken by the Americans to avoid the costs of continued membership in the British Empire, the goal had been achieved. As an independent nation the United States was no longer subject to the regulations of the Navigation Acts. There was no longer to be any economic burden from British taxation.

The Formation of a National Government

When you start a revolution you have to be prepared for the possibility you might win. This means being prepared to form a new government. When the Americans declared independence their experience of governing at a national level was indeed limited. In 1765 delegates from various colonies had met for about eighteen days at the Stamp Act Congress in New York to sort out a colonial response to the new stamp duties. Nearly a decade passed before delegates from colonies once again got together to discuss a colonial response to British policies. This time the discussions lasted seven weeks at the First Continental Congress in Philadelphia during the fall of 1774. The primary action taken at both meetings was an agreement to boycott trade with England. After having been in session only a month, delegates at the Second Continental Congress for the first time began to undertake actions usually associated with a national government. However, when the colonies were declared to be free and independent states Congress had yet to define its institutional relationship with the states.

The Articles of Confederation

Following the Declaration of Independence, Congress turned to deciding the political and economic powers it would be given as well as those granted to the states. After more than a year of debate among the delegates the allocation of powers was articulated in the Articles of Confederation. Only Congress would have the authority to declare war and conduct foreign affairs. It was not given the power to tax or regulate commerce. The expenses of Congress were to be made from a common treasury with funds supplied by the states. This revenue was to be generated from exercising the power granted to the states to determine their own internal taxes. It was not until November of 1777 that Congress approved the final draft of the Articles. It took over three years for the states to ratify the Articles. The primary reason for the delay was a dispute over control of land in the West as some states had claims while others did not. Those states with claims eventually agreed to cede them to Congress. The Articles were then ratified and put into effect on March 1, 1781. This was just a few months before the American victory at Yorktown. The process of institutional development had proved so difficult that the Americans fought almost the entire Revolutionary War with a government not sanctioned by the states.

Difficulties in the 1780s

The new national government that emerged from the Revolution confronted a host of issues during the 1780s. The first major one to be addressed by Congress was what to do with all of the land acquired in the West. Starting in 1784 Congress passed a series of land ordinances that provided for land surveys, sales of land to individuals, and the institutional foundation for the creation of new states. These ordinances opened the West for settlement. While this was a major accomplishment by Congress, other issues remained unresolved. Having repudiated its own currency and possessing no power of taxation, Congress did not have an independent source of revenue to pay off its domestic and foreign debts incurred during the war. Since the Continental Army had been demobilized, no protection was being provided for settlers in the West or against foreign invasion. Domestic trade was being increasingly disrupted during the 1780s as more states began to impose tariffs on goods from other states. Unable to resolve these and other issues, Congress endorsed a proposed plan to hold a convention to meet in Philadelphia in May of 1787 to revise the Articles of Confederation.

Rather than amend the Articles, the delegates to the convention voted to replace them entirely with a new form of national government under the Constitution. There are of course many ways to assess the significance of this truly remarkable achievement. One is to view the Constitution as an economic document. Among other things the Constitution specifically addressed many of the economic problems that confronted Congress during and after the Revolutionary War. Drawing upon lessons learned in financing the war, no state under the Constitution would be allowed to coin money or issue bills of credit. Only the national government could coin money and regulate its value. Punishment was to be provided for counterfeiting. The problems associated with the states contributing to a common treasury under the Articles were overcome by giving the national government the coercive power of taxation. Part of the revenue was to be used to pay for the common defense of the United States. No longer would states be allowed to impose tariffs as they had done during the 1780s. The national government was now given the power to regulate both foreign and interstate commerce. As a result the nation was to become a common market. There is a general consensus among economic historians today that the economic significance of the ratification of the Constitution was to lay the institutional foundation for long run growth. From the point of view of the former colonists, however, it meant they had succeeded in transferring the power to tax and regulate commerce from Parliament to the new national government of the United States.

Tables

Table 1 Continental Dollar Emissions (1775-1779)

Year of Emission Nominal Dollars Emitted (000) Annual Emission As Share of Total Nominal Stock Emitted Specie Value of Annual Emission (000) Annual Emission As Share of Total Specie Value Emitted
1775 $6,000 3% $6,000 15%
1776 19,000 8 15,330 37
1777 13,000 5 4,040 10
1778 63,000 26 10,380 25
1779 140,500 58 5,270 13
Total $241,500 100% $41,020 100%

Source: Bullock (1895), 135.
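
The implied depreciation can be read directly off Table 1; the back-of-the-envelope calculation below is ours, not Bullock’s:

\[
\text{implied specie value per nominal dollar, 1777} \;\approx\; \frac{4{,}040}{13{,}000} \;\approx\; 0.31,
\]

i.e. roughly 31 cents of specie per Continental dollar emitted during 1777 on an annual-average basis, which is consistent with the roughly twenty percent year-end figure cited in the text, since the currency continued to depreciate through the year.
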
Table 2 Currency Emissions by the States (1775-1781)

Year of Emission Nominal Dollars Emitted (000) Year of Emission Nominal Dollars Emitted (000)
1775 $4,740 1778 $9,118
1776 13,328 1779 17,613
1777 9,573 1780 66,813
1781 123,376
Total $27,641 Total $216,376

Source: Robinson (1969), 327-28.

References

Baack, Ben. “Forging a Nation State: The Continental Congress and the Financing of the War of American Independence.” Economic History Review 54, no.4 (2001): 639-56.

Baack, Ben. “British versus American Interests in Land and the War of American Independence.” Journal of European Economic History 33, no. 3 (2004): 519-54.

Baack, Ben. “America’s First Monetary Policy: Inflation and Seigniorage during the Revolutionary War.” Financial History Review 15, no. 2 (2008): 107-21.

Baack, Ben, Robert A. McGuire, and T. Norman Van Cott. “Constitutional Agreement during the Drafting of the Constitution: A New Interpretation.” Journal of Legal Studies 38, no. 2 (2009): 533-67.

Brewer, John. The Sinews of Power: War, Money and the English State, 1688-1783. London: Cambridge University Press, 1989.

Buel, Richard. In Irons: Britain’s Naval Supremacy and the American Revolutionary Economy. New Haven: Yale University Press, 1998.

Bullion, John L. A Great and Necessary Measure: George Grenville and the Genesis of the Stamp Act, 1763-1765. Columbia: University of Missouri Press, 1982.

Bullock, Charles J. “The Finances of the United States from 1775 to 1789, with Especial Reference to the Budget.” Bulletin of the University of Wisconsin 1, no. 2 (1895): 117-273.

Calomiris, Charles W. “Institutional Failure, Monetary Scarcity, and the Depreciation of the Continental.” Journal of Economic History 48, no. 1 (1988): 47-68.

Egnal, Mark. A Mighty Empire: The Origins of the American Revolution. Ithaca: Cornell University Press, 1988.

Ferguson, E. James. The Power of the Purse: A History of American Public Finance, 1776-1790. Chapel Hill: University of North Carolina Press, 1961.

Gunderson, Gerald. A New Economic History of America. New York: McGraw-Hill, 1976.

Harper, Lawrence A. “Mercantilism and the American Revolution.” Canadian Historical Review 23 (1942): 1-15.

Higginbotham, Don. The War of American Independence: Military Attitudes, Policies, and Practice, 1763-1789. Bloomington: Indiana University Press, 1977.

Jensen, Merrill, editor. English Historical Documents: American Colonial Documents to 1776. New York: Oxford University Press, 1969.

Johnson, Allen S. A Prologue to Revolution: The Political Career of George Grenville (1712-1770). New York: University Press, 1997.

Jones, Alice H. Wealth of a Nation to Be: The American Colonies on the Eve of the Revolution. New York: Columbia University Press, 1980.

Ketchum, Richard M. Saratoga: Turning Point of America’s Revolutionary War. New York: Henry Holt and Company, 1997.

Labaree, Benjamin Woods. The Boston Tea Party. New York: Oxford University Press, 1964.

Mackesy, Piers. The War for America, 1775-1783. Cambridge: Harvard University Press, 1964.

McCusker, John J. and Russell R. Menard. The Economy of British America, 1607-1789. Chapel Hill: University of North Carolina Press, 1985.

Michener, Ron. “Backing Theories and the Currencies of Eighteenth-Century America: A Comment.” Journal of Economic History 48, no. 3 (1988): 682-92.

Nester, William R. The First Global War: Britain, France, and the Fate of North America, 1756-1775. Westport: Praeger, 2000.

Newman, E. P. “Counterfeit Continental Currency Goes to War.” The Numismatist 1 (January, 1957): 5-16.

North, Douglass C., and Barry R. Weingast. “Constitutions and Commitment: The Evolution of Institutions Governing Public Choice in Seventeenth-Century England.” Journal of Economic History 49, no. 4 (1989): 803-32.

O’Shaughnessy, Andrew Jackson. An Empire Divided: The American Revolution and the British Caribbean. Philadelphia: University of Pennsylvania Press, 2000.

Palmer, R. R. The Age of Democratic Revolution: A Political History of Europe and America. Vol. 1. Princeton: Princeton University Press, 1959.

Perkins, Edwin J. The Economy of Colonial America. New York: Columbia University Press, 1988.

Reid, Joseph D., Jr. “Economic Burden: Spark to the American Revolution?” Journal of Economic History 38, no. 1 (1978): 81-100.

Robinson, Edward F. “Continental Treasury Administration, 1775-1781: A Study in the Financial History of the American Revolution.” Ph.D. diss., University of Wisconsin, 1969.

Rockoff, Hugh. Drastic Measures: A History of Wage and Price Controls in the United States. Cambridge: Cambridge University Press, 1984.

Sawers, Larry. “The Navigation Acts Revisited.” Economic History Review 45, no. 2 (1992): 262-84.

Thomas, Robert P. “A Quantitative Approach to the Study of the Effects of British Imperial Policy on Colonial Welfare: Some Preliminary Findings.” Journal of Economic History 25, no. 4 (1965): 615-38.

Tucker, Robert W. and David C. Hendrickson. The Fall of the First British Empire: Origins of the War of American Independence. Baltimore: Johns Hopkins Press, 1982.

Walton, Gary M. “The New Economic History and the Burdens of the Navigation Acts.” Economic History Review 24, no. 4 (1971): 533-42.

Citation: Baack, Ben. “Economics of the American Revolutionary War”. EH.Net Encyclopedia, edited by Robert Whaples. November 13, 2001 (updated August 5, 2010). URL http://eh.net/encyclopedia/the-economics-of-the-american-revolutionary-war/

Economic History of Portugal

Luciano Amaral, Universidade Nova de Lisboa

Main Geographical Features

Portugal is the south-westernmost country of Europe. With the approximate shape of a vertical rectangle, it has a maximum north-south length of 561 km and a maximum east-west width of 218 km, and is delimited (in its north-south range) by the parallels 37° and 42° N, and (in its east-west range) by the meridians 6° and 9.5° W. To the west, it faces the Atlantic Ocean, which separates it from the American continent by a few thousand kilometers. To the south, it still faces the Atlantic, but the distance to Africa is only a few hundred kilometers. To the north and the east, it shares land frontiers with Spain, and both countries constitute the Iberian Peninsula, a landmass separated from France, and thus from the rest of the continent, by the Pyrenees. Two Atlantic archipelagos are also part of Portugal: the Azores – nine islands in the same latitudinal range as mainland Portugal, but much further west, with a longitude between 25° and 31° W – and Madeira – two islands to the southwest of the mainland, 16° and 17° W, 32.5° and 33° N.

Climate in mainland Portugal is of the temperate sort. Due to its southern position and proximity to the Mediterranean Sea, the country’s weather still presents some Mediterranean features. Temperature is, on average, higher than in the rest of the continent. Thanks to its elongated form, Portugal displays a significant variety of landscapes and sometimes brisk climatic changes for a country of such relatively small size. Following a classical division of the territory, it is possible to identify three main geographical regions: a southern half – with practically no mountains and a very hot and dry climate – and a northern half subdivided into two other vertical sub-halves – with a north-interior region, mountainous, cool but relatively dry, and a north-coast region, relatively mountainous, cool and wet. Portugal’s population is close to 10,000,000, in an area of about 92,000 square kilometers (35,500 square miles).

The Period before the Creation of Portugal

We can only talk of Portugal as a more or less clearly identified and separate political unit (although still far from a defined nation) from the eleventh or twelfth centuries onwards. The geographical area which constitutes modern Portugal was not, of course, an eventless void before that period. But scarcity of space allows only a brief examination of the earlier period, concentrating on its main legacy to future history.

Roman and Visigothic Roots

That legacy is overwhelmingly marked by the influence of the Roman Empire. Portugal owes to Rome its language (a descendant of Latin) and main religion (Catholicism), as well as its primary juridical and administrative traditions. Interestingly enough, little of the Roman heritage passed directly to the period of existence of Portugal as a proper nation. Momentous events filtered the transition. Romans first arrived in the Iberian Peninsula around the third century B.C., and kept their rule until the fifth century of the Christian era. Then, they succumbed to the so-called “barbarian invasions.” Of the various peoples that then roamed the Peninsula, certainly the most influential were the Visigoths, a people of Germanic origin. The Visigoths may be ranked as the second most important force in the shaping of future Portugal. The country owes them the monarchical institution (which lasted until the twentieth century), as well as the preservation both of Catholicism and (although substantially transformed) parts of Roman law.

Muslim Rule

The most spectacular episode following Visigothic rule was the Muslim invasion of the eighth century. Islam ruled the Peninsula from then until the fifteenth century, although it occupied an increasingly smaller area from the ninth century onwards, as the Christian Reconquista started repelling it with growing efficiency. Muslim rule set the area on a path different from the rest of Western Europe for a few centuries. However, apart from some ethnic traits bequeathed to its people, a few words in its lexicon, and certain agricultural, manufacturing and sailing techniques and knowledge (the latter of significant importance to the Portuguese naval discoveries), nothing of the magnitude of the Roman heritage was left in the peninsula by Islam. This is particularly true of Portugal, where Muslim rule was less effective and shorter than in the south of Spain. Perhaps the most important legacy of Muslim rule was, precisely, its tolerance towards the Roman heritage. Representative of that tolerance was the existence during the Muslim period of an ethnic group, the so-called moçárabe or mozarabe population, made up of long-established residents who lived within Muslim communities, accepted Muslim rule, and mixed with Muslim peoples, but still kept their language and religion, i.e. some form of Latin and the Christian creed.

Modern Portugal is a direct result of the Reconquista, the Christian fight against Muslim rule in the Iberian Peninsula. That successful fight was followed by the period when Portugal as a nation came to existence. The process of creation of Portugal was marked by the specific Roman-Germanic institutional synthesis that constituted the framework of most of the country’s history.

Portugal from the Late Eleventh Century to the Late Fourteenth Century

Following the Muslim invasion, a small group of Christians kept their independence, settling in a northern area of the Iberian Peninsula called Asturias. Their resistance to Muslim rule rapidly transformed into an offensive military venture. During the eighth century a significant part of northern Iberia was recovered for Christianity. This frontier, roughly cutting the peninsula in two halves, held firm until the eleventh century. Then the crusaders came, mostly from France and Germany, inserting the area into the overall European crusade movement. By the eleventh century, the original Asturian unit had been divided into two kingdoms, Leon and Navarra, which in turn were subdivided into three new political units: Castile, Aragon and the Condado Portucalense. The Condado Portucalense (the political unit at the origin of future Portugal) resulted from a donation, made in 1096, by the Leonese king to a Crusader coming from Burgundy (France), Count Henry. He did not claim the title of king, a step that would be taken only by his son, Afonso Henriques (generally accepted as the first king of Portugal), in the first half of the twelfth century.

Condado Portucalense as the King’s “Private Property”

Such political units as the various peninsular kingdoms of that time must be seen as entities differing in many respects from current nations. Not only did their peoples not possess any clear “national consciousness,” but also the kings themselves did not rule them based on the same sort of principle we tend to attribute to current rulers (either democratic, autocratic or any other sort). Both the Condado Portucalense and Portugal were understood by their rulers as something still close to “private property” – the use of quotes here is justified by the fact that private property, in the sense we give to it today, was a non-existent notion then. We must, nevertheless, stress this as the moment in which Portuguese rulers started seeing Portugal as a political unit separate from the remaining units in the area.

Portugal as a Military Venture

Such novelty was strengthened by the continuing war against Islam, which still occupied most of the center and south of what later became Portugal. This is a crucial fact about Portugal in its infancy, and one that helps explain the most important episode in Portuguese history, the naval discoveries: the country in those days was largely a military venture against Islam. As, in that fight, the kingdom expanded to the south, it did so separately from the other Christian kingdoms existing in the peninsula. And these ended up constituting the two main negative forces for Portugal’s definition as an independent country, i.e. Islam and the remaining Iberian Christian kingdoms. The country achieved a clear geographical definition quite early in its history, more precisely in 1249, when King Afonso III conquered the Algarve from Islam. Remarkably for a continent marked by so much permanent frontier redesign, Portugal acquired then its current geographical shape.

The military nature of the country’s growth gave rise to two of its most important characteristics in early times: Portugal was throughout this entire period a frontier country, and one where the central authority was unable to fully control the territory in its entirety. This latter fact, together with the reception of the Germanic feudal tradition, shaped the nature of the institutions then established in the country. This was particularly important in understanding the land donations made by the crown. These were crucial, for they brought a dispersion of central powers, devolved to local entities, as well as a delegation of powers we would today call “public” to entities we would call “private.” Donations were made in favor of three sorts of groups: noble families, religious institutions and the people in general of particular areas or cities. They resulted mainly from the needs of the process of conquest: noblemen were soldiers, and the crown’s concession of the control of a certain territory was both a reward for their military feats as well as an expedient way of keeping the territory under control (even if in a more indirect way) in a period when it was virtually impossible to directly control the full extent of the conquered area. Religious institutions were crucial in the Reconquista, since the purpose of the whole military effort was to eradicate the Muslim religion from the country. Additionally, priests and monks were full military participants in the process, not limiting their activity to studying or preaching. So, as the Reconquista proceeded, three sorts of territories came into existence: those under direct control of the crown, those under the control of local seigneurs (which subdivided into civil and ecclesiastical) and the communities.

Economic Impact of the Military Institutional Framework

This was an institutional framework that had a direct economic impact. The crown’s donations were not comparable to anything we would nowadays call private property. The land’s donation had attached to it the ability conferred on the beneficiary to a) exact tribute from the population living in it, b) impose personal services or reduce peasants to serfdom, and c) administer justice. This is a phenomenon that is typical of Europe until at least the eighteenth century, and is quite representative of the overlap between the private and public spheres then prevalent. The crown felt it was entitled to give away powers we would nowadays call public, such as those of taxation and administering justice, and beneficiaries from the crown’s donations felt they were entitled to them. As a further limit to full private rights, the land was donated under certain conditions, restricting the beneficiaries’ power to divide, sell or buy it. They managed those lands, thus, in a manner entirely dissimilar from a modern enterprise. And the same goes for actual farmers, those directly toiling the land, since they were sometimes serfs, and even when they were not, had to give personal services to seigneurs and pay arbitrary tributes.

Unusually Tight Connections between the Crown and High Nobility

Much of the history of Portugal until the nineteenth century revolves around the tension between these three layers of power – the crown, the seigneurs and the communities. The main trend in that relationship was, however, in the direction of an increased weight of central power over the others. This is already visible in the first centuries of existence of the country. In a process that may look paradoxical, that increased weight was accompanied by an equivalent increase in seigneurial power at the expense of the communities. This gave rise to a uniquely Portuguese institution, which would be of extreme importance for the development of the Portuguese economy (as we will later see): the extremely tight connection between the crown and the high nobility. As a matter of fact, very early in the country’s history, the Portuguese nobility and Church became much dependent on the redistributive powers of the crown, in particular in what concerns land and the tributes associated with it. This led to an apparently contradictory process, in which at the same time as the crown was gaining ascendancy in the ruling of the country, it also gave away to seigneurs some of those powers usually considered as being public in nature. Such was the connection between the crown and the seigneurs that the intersection between private and public powers proved to be very resistant in Portugal. That intersection lasted longer in Portugal than in other parts of Europe, and consequently delayed the introduction in the country of the modern notion of property rights. But this is something to be developed later, and to fully understand it we must go through some further episodes of Portuguese history. For now, we must note the novelty brought by these institutions. Although they can be seen as unfriendly to property rights from a nineteenth- and twentieth-century vantage point, they represented in fact a first, although primitive and incomplete, definition of property rights of a certain sort.

Centralization and the Evolution of Property

As the crown’s centralization of power proceeded in the early history of the country, some institutions, such as serfdom and the settling of colonies, gave way to contracts that granted fuller personal and property rights to farmers. Serfdom was not exceptionally widespread in early Portugal – and tended to disappear from the thirteenth century onwards. More common was the settling of colonies, an arrangement in which settlers were simple toilers of the land, having to pay significant tributes to either the king or seigneurs, but having no rights to buy or sell the land. From the thirteenth century onwards, as the king and the seigneurs began encroaching on the kingdom’s land and the military situation got calmer, serfdom and settling contracts were increasingly substituted by contracts of the copyhold type. When compared with current concepts of private property, copyhold included serious restrictions on the full use of private property. Yet it represented an improvement when compared to the prior legal forms of land use. In the end, private property as we understand it today began its dissemination through the country at this time, although in a form we would still consider primitive. This, to a large extent, repeats, with one to two centuries of delay, the evolution that had already occurred in the core of “feudal Europe,” i.e. the Franco-Germanic world and its extension to the British Isles.

Movement toward an Exchange Economy

Precisely as in that core “feudal Europe,” such institutional change brought a first moment of economic growth to the country – of course, there are no consistent figures for economic activity in this period, and, consequently, this claim is based entirely on more or less superficial evidence pointing in that direction. The institutional change just noted was accompanied by a change in the way noblemen and the Church understood their possessions. As the national territory became increasingly sheltered from the destruction of war, seigneurs became less interested in military activity and conquest, and more interested in the good management of the land they already owned. Accompanying that, some vague principles of specialization also appeared. Some of those possessions were thus significantly transformed into agricultural firms devoted, to a certain extent, to selling on the market. One should not, of course, exaggerate the importance acquired by the exchange of goods in this period. Most of the economy continued to be of a non-exchange or (at best) barter character. But the signs of change were important, as a certain part of the economy (small as it was) led the way to future more widespread changes. Not by chance, this is the period when we have evidence of the first signs of monetization of the economy, certainly a momentous change (even if initially small in scale), corresponding to an entirely new framework for economic relations.

These essential changes are connected with other aspects of the country’s evolution in this period. First, the war at the frontier (rather than within the territory) seems to have had a positive influence on the rest of the economy. The military front was manned by a large number of soldiers, who needed a constant supply of various goods, and this stimulated a significant part of the economy. Also, as the conquest enlarged the territory under the Portuguese crown’s control, the king’s court became ever more complex, thus creating one more pole of demand. Additionally, together with the enlargement of territory came the insertion into the economy of various cities previously under Muslim control (such as the future capital, Lisbon, after 1147). All this was accompanied by a widespread movement of what we might call internal colonization, whose main purpose was to farm previously uncultivated agricultural land. This is also the time of the first signs of contact of Portuguese merchants with foreign markets, and of foreign merchants with Portuguese markets. There are various signs of the presence of Portuguese merchants in British, French and Flemish ports, and vice versa. Most Portuguese exports were of a typically Mediterranean nature, such as wine, olive oil, salt, fish and fruits, and imports were mainly of grain and textiles. The economy thus became more complex, and it is only natural that, to accompany such changes, the notions of property, management and “firm” changed in such a way as to accommodate the new evolution. The suggestion has been made that the success of the Christian Reconquista depended to a significant extent on the economic success of those innovations.

Role of the Crown in Economic Reforms

Of additional importance for the increasing sophistication of the economy is the role played by the crown as an institution. From the thirteenth century onwards, the rulers of the country showed a growing interest in having a well-organized economy able to grant them an abundant tax base. Kings such as Afonso III (ruling from 1248 until 1279) and D. Dinis (1279-1325) became famous for their economic reforms. Monetary reforms, fiscal reforms, the promotion of foreign trade, and the promotion of local fairs and markets (extraordinarily important institutions for exchange in medieval times) all point to an increased awareness on the part of Portuguese kings of the relevance of promoting a proper environment for economic activity. Again, we should not exaggerate the importance of that awareness. Portuguese kings were still significantly (although not entirely) arbitrary rulers, able with one decision to destroy years of economic hard work. But changes were occurring, and some of them pointed in a direction favorable to economic improvement.

As mentioned above, the definition of Portugal as a separate political entity had two main negative elements: Islam as occupier of the Iberian Peninsula and the centralization efforts of the other political entities in the same area. The first element faded as the Portuguese Reconquista, by the mid-thirteenth century, reached the southernmost point in the territory of what is today’s Portugal. The conflict (either latent or open) with the remaining kingdoms of the peninsula was kept alive much longer. As the early centuries of the second millennium unfolded, a major centripetal force emerged in the peninsula, the kingdom of Castile. Castile progressively became the most successful centralizing political unit in the area. That success reached a first climactic moment in the second half of the fifteenth century, during the reign of Ferdinand and Isabella, and a second one by the end of the sixteenth century, with the annexation of Portugal by the Spanish king, Phillip II. Much of the effort of Portuguese kings went into keeping Portugal independent of those other kingdoms, particularly Castile. But sometimes they envisaged something different, such as an Iberian union with Portugal as its true political head. It was one of those episodes that led to a major moment both for the centralization of power in the Portuguese crown within the Portuguese territory and for the successful separation of Portugal from Castile.

Ascent of John I (1385)

It started during the reign of King Ferdinand of Portugal, in the last decades of the fourteenth century. Through various maneuvers to unite Portugal and Castile (which included war and the promotion of diverse coups), Ferdinand ended up marrying his daughter to the man who would later become king of Castile. Ferdinand was, however, generally unsuccessful in his attempts to unite the crowns under his own leadership, and when he died in 1383 the king of Castile (thanks to his marriage to Ferdinand’s daughter) became the legitimate heir to the Portuguese crown. This was Ferdinand’s dream in reverse. The crowns would unite, but not under Portugal. The prospect of peninsular unity under Castile was not necessarily loathed by a large part of the Portuguese elites, particularly sections of the aristocracy, which viewed Castile as a much more noble-friendly kingdom. This was not, however, a unanimous sentiment, and a strong reaction followed, led by other parts of the same elite, in order to keep the Portuguese crown in the hands of a Portuguese king, separate from Castile. A war with Castile and intimations of civil war ensued, and in the end Portugal’s independence was preserved. The man chosen to succeed Ferdinand, under a new dynasty, was the bastard son of Peter I (Ferdinand’s father), who became John I in 1385.

This was a crucial episode, not simply because of the change of dynasty, imposed against the legitimate heir to the throne, but also because of the success in centralizing power in the Portuguese crown and, as a consequence, in separating Portugal from Castile. Such separation also led Portugal to lose interest in further political adventures concerning Castile and to switch its attention to the Atlantic. It was the exploration of this path that led to the most singular period in Portuguese history, one during which Portugal reached heights of importance in the world that find no match in either its earlier or later history. This period is the Discoveries, a process that started during John I’s reign, in particular under the forceful direction of the king’s sons, most famous among them the mythical Henry, the Navigator. The 1383-85 crisis and John’s victory can thus be seen as the founding moment of the Portuguese Discoveries.

The Discoveries and the Apex of Portuguese International Power

The Discoveries are generally presented as the first great moment of world capitalism, with markets all over the world becoming connected under European leadership. Although true, this is a largely post hoc perspective, for the Discoveries became a big commercial adventure only about halfway into the story. Before that, the aims of the Discoveries’ protagonists were mostly of another sort.

The Conquest of Ceuta

An interesting way to gain a fuller picture of the Discoveries is to study the Portuguese contribution to them. Portugal was the pioneer of transoceanic navigation, discovering lands and sea routes formerly unknown to Europeans, and starting trades and commercial routes that linked Europe to other continents in a totally unprecedented fashion. But, at the start, the aims of the whole venture were entirely different. The event generally chosen to date the beginning of the Portuguese discoveries is the conquest of Ceuta – a city across the Straits of Gibraltar from Spain – in 1415. In itself such a voyage would not differ much from other attempts made in the Mediterranean Sea from the twelfth century onwards by various European travelers. The main purpose of all these attempts was to control navigation in the Mediterranean, in what constitutes a classic fight between Christianity and Islam. Other objectives of Portuguese travelers were the will to find the mythical Prester John – a supposed Christian king surrounded by Islam; there are reasons to suppose that the legend of Prester John is associated with the real existence of the Coptic Christians of Ethiopia – and to reach, directly at the source, the gold of Sudan. Despite this latter objective, religious reasons prevailed over others in spurring the first Portuguese efforts at overseas expansion. This should not surprise us, however, for Portugal had since its birth been, precisely, an expansionist political unit under a religious heading. The jump across the sea to North Africa was little more than the continuation of that expansionist drive. Here we must understand Portugal’s position as determined by two elements, one general to the whole European continent, the other more specific. The first is that the expansion of Portugal in the Middle Ages coincided with the general expansion of Europe, and Portugal was very much a part of that process. The second is that, by being part of the process, Portugal was (by geographical accident) at its forefront: Portugal (and Spain) stood in the first line of attack and defense against Islam. The conquest of Ceuta, by Henry, the Navigator, is hence a part of that story of confrontation with Islam.

Exploration from West Africa to India

The first efforts of Henry along the West African coast and on the Atlantic high seas can be placed within this same framework. The explorations along the African coast had two main objectives: to gain a keener perception of how far south Islam’s strength went, and to surround Morocco, both in order to attack Islam on a wider shore and to find alternative ways of reaching Prester John. These objectives rested, of course, on geographical ignorance, as the coastline Portuguese navigators eventually found was much longer than the one Henry expected to find. In these efforts, Portuguese navigators went increasingly south but also, mainly due to accidental changes of direction, west. Such westbound deviations led to the discovery, in the first decades of the fifteenth century, of three archipelagos: the Canaries, Madeira (and Porto Santo) and the Azores. But the major navigational feat of this period was the passage of Cape Bojador in 1434, after which the whole western coast of the African continent was opened to exploration and, increasingly (and here is the novelty), commerce. As Africa revealed its riches, mostly gold and slaves, these ventures began to acquire a more strictly economic meaning. All this kept spurring the Portuguese to go further south and, when they reached the southernmost tip of the African continent, to round it and go east. And so they did. Bartolomeu Dias rounded the Cape of Good Hope in 1488, and ten years later Vasco da Gama circumnavigated Africa to reach India by sea. By the time of Vasco da Gama’s journey, the autonomous economic importance of intercontinental trade was well established.

Feitorias and Trade with West Africa, the Atlantic Islands and India

As the second half of the fifteenth century unfolded, Portugal created a complex trade structure connecting India and the African coast to Portugal and, from there, to the north of Europe. This consisted of a network of trading posts (feitorias) along the African coast, from which goods were shipped to Portugal and then re-exported to Flanders, where a further Portuguese feitoria was opened. This trade was based on such African goods as gold, ivory, red peppers, slaves and other less important items. As various authors have noted, this was in some ways a continuation of the pattern of trade created during the Middle Ages, in the sense that Portugal was able to diversify it by adding new goods to its traditional exports (wine, olive oil, fruits and salt). The Portuguese maintained a virtual monopoly of these African commercial routes until the early sixteenth century. The only threats to that trade structure came from pirates originating in Britain, Holland, France and Spain. One further element of this trade structure was the Atlantic islands (Madeira, the Azores and the African archipelagos of Cape Verde and São Tomé). These islands contributed such goods as wine, wheat and sugar. After the sea route to India was discovered and the Portuguese were able to establish regular connections with India, the trading structure of the Portuguese empire became more complex. The Portuguese now began bringing spices, precious stones, silk and woods from India, again based on a network of feitorias established there. The maritime route to India acquired extreme importance to Europe precisely at this time, since the Ottoman Empire was then able to block the traditional overland-Mediterranean route that had supplied the continent with Indian goods.

Control of Trade by the Crown

One crucial aspect of the Portuguese Discoveries is the high degree of control exerted by the crown over the whole venture. The first episodes in the early fifteenth century, under Henry the Navigator (as well as the first exploratory trips along the African coast), were entirely directed by the crown. Then, as the activity became more profitable, it was first liberalized and then rented out (in toto) to merchants, who were required to pay the crown a significant share of their profits. Finally, when the full Indo-African network was consolidated, the crown controlled directly the largest share of the trade (although never monopolizing it), participated in “public-private” joint ventures, or imposed heavy tributes on traders. The grip of the crown tightened as the empire grew in size and complexity. Until the early sixteenth century, the empire consisted mainly of a network of trading posts. No serious attempt was made by the Portuguese crown to exert a significant degree of territorial control over the various areas constituting the empire.

The Rise of a Territorial Empire

This changed with the growth of trade from India and Brazil. As India was transformed into a platform for trade not only around Africa but also within Asia, a tendency developed (in particular under Afonso de Albuquerque, in the early sixteenth century) to create an administrative structure in the territory. This was not particularly successful. An administrative structure was indeed created, but it remained forever incipient. A relatively more complex administrative structure would only appear in Brazil. Until the middle of the sixteenth century, Brazil was relatively ignored by the crown. But with the success of the system of sugar cane plantations in the Atlantic islands, the Portuguese crown decided to transplant it to Brazil. Although political power was initially controlled by a group of seigneurs to whom the crown donated certain areas of the territory, the system became increasingly centralized as time went on. This is clearly visible in the creation, in 1549, of the post of governor-general of Brazil, directly answerable to the crown.

Portugal Loses Its Expansionary Edge

Until the early sixteenth century, Portugal capitalized on being the pioneer of European expansion. It monopolized African and, initially, Indian trade. But by that time changes were taking place. Two significant events mark the change in political tide. The first was the increasing assertiveness of the Ottoman Empire in the Eastern Mediterranean, which coincided with a new bout of Islamic expansionism – ultimately bringing the Mughal dynasty to India – as well as with the re-opening of the Mediterranean route for Indian goods. This put pressure on Portuguese control over Indian trade. Not only were Portuguese positions on the subcontinent now directly threatened by Islamic rulers, but the profits from Indian trade also started declining. This is certainly one of the reasons why Portugal redirected its imperial interests to the South Atlantic, particularly Brazil – the other reasons being the growing demand for sugar in Europe and the success of the sugar cane plantation system in the Atlantic islands. The second event marking the change in tide was the increased assertiveness of imperial Spain, both within Europe and overseas. Spain, under the Habsburgs (mostly Charles V and Phillip II), exerted a dominance over the European continent unprecedented since Roman times. This was complemented by the beginning of the exploration of the American continent (from the Caribbean to Mexico and the Andes), again putting pressure on the Portuguese empire overseas. What is more, this is the period when not only Spain but also Britain, Holland and France acquired navigational and commercial skills equivalent to those of the Portuguese, thus competing with them in some of their more traditional routes and trades. By the middle of the sixteenth century, Portugal had definitively lost its expansionary edge. And this would come to a tragic conclusion with the death of the heirless King Sebastian in North Africa in 1578 and the loss of political independence to Spain, under Phillip II, in 1580.

Empire and the Role, Power and Finances of the Crown

The first century of empire brought significant political consequences for the country. As noted above, the Discoveries were directed by the crown to a very large extent. As such, they constituted one further step in the affirmation of Portugal as a separate political entity in the Iberian Peninsula. Empire created a political and economic sphere in which Portugal could remain independent from the rest of the peninsula. It thus contributed to the definition of what we might call “national identity.” Additionally, empire significantly enhanced the crown’s redistributive power. To benefit from the profits of transoceanic trade, or to reach a position in the imperial hierarchy or even within the national hierarchy proper, candidates had to turn to the crown. As it controlled imperial activities, the crown became a huge employment agency, capable of attracting the efforts of most of the national elite. The empire was thus transformed into an extremely important instrument for the crown’s centralization of power. It has already been mentioned that much of the political history of Portugal from the Middle Ages to the nineteenth century revolves around the tension between the centripetal power of the crown and the centrifugal powers of the aristocracy, the Church and the local communities. The imperial episode constituted precisely a major step in the centralization of the crown’s power. The way such centralization occurred was, however, peculiar, and that would bring crucial consequences for the future. Various authors have noted how, despite the growing centralizing power of the crown, the aristocracy was able to keep its local powers, thanks to the significant taxing and judicial autonomy it possessed in the lands under its control. This is largely true, but as other authors have noted, it occurred with the crown acting as an intermediary agent. The Portuguese aristocracy had from early times been much less independent of the crown than its counterparts in most parts of Western Europe, and this situation was accentuated during the days of empire. As we have seen above, the crown directed the Reconquista in a way that enabled it to control and redistribute (through the famous donations) most of the land that was conquered. In those early medieval days it was, thus, service to the crown that made noblemen eligible to benefit from land donations. It is undoubtedly true that by donating land the crown was also giving away (at least partially) the monopoly of taxing and judging. But what is crucial here is its significant intermediary power. With empire, that power increased again. And once more a large part of the aristocracy became dependent on the crown to acquire political and economic power. The empire became, furthermore, the crown’s main means of finance. Receipts from trade activities related to the empire (whether profits, tariffs or other taxes) never fell below 40 percent of the crown’s total receipts until the nineteenth century, and they fell that low only briefly, in the empire’s worst days. Most of the time, those receipts amounted to 60 or 70 percent of the crown’s total receipts.

Other Economic Consequences of the Empire

Such a role for the crown’s receipts was one of the most important consequences of empire. Thanks to it, tax receipts from internal economic activity became largely unnecessary for the functioning of national government, something that was to have deep consequences precisely for that internal activity. This was not, however, the only economic consequence of empire. One of the most important was, obviously, the enlargement of the country’s trade base. Thanks to empire, the Portuguese (and Europe, through the Portuguese) gained access to vast sources of precious metals, stones, tropical goods (such as fruit, sugar, tobacco, rice, potatoes, maize, and more), raw materials and slaves. Portugal used these goods to enlarge its pattern of comparative advantage, which helped it penetrate European markets, while at the same time enlarging the volume and variety of its imports from Europe. Such a process of specialization along lines of comparative advantage was, however, very incomplete. As noted above, the crown exerted a high degree of control over the trade activity of empire, and as a consequence many institutional factors interfered to prevent Portugal (and its imperial complex) from fully following those principles. In the end, in economic terms, the empire was inefficient – something to be contrasted, for instance, with its Dutch equivalent, which was much more geared to commercial success and based on clearer efficiency-oriented management methods. By so thoroughly controlling imperial trade, the crown became a sort of barrier between the empire’s riches and the national economy. Much of what was earned in imperial activity was spent either on maintaining the empire or on the crown’s clientele. Consequently, the spreading of the gains from imperial trade to the rest of the economy was highly centralized in the crown. A highly visible effect of this phenomenon was the extraordinary growth and size of the country’s capital, Lisbon. In the sixteenth century Lisbon was the fifth largest city in Europe, and from the sixteenth to the nineteenth century it was always in the top ten, a remarkable feat for a country with as small a population as Portugal’s. It was also the symptom of a much inflated bureaucracy living on the gains of empire, as well as of the limited degree to which those gains spread through the rest of the economy.

Portuguese Industry and Agriculture

The rest of the economy did, indeed, remain largely untouched by this imperial manna. Most of industry was unaffected by it; the only visible impact of empire on the sector was the fostering of naval construction and repair, together with their ancillary activities. Most of industry kept functioning according to old standards, far from the impact of transoceanic prosperity. Much the same happened with agriculture. Although it benefited from the introduction of new crops (mostly maize, but also potatoes and rice), Portuguese agriculture did not benefit significantly from the income stream arising from imperial trade, particularly where we might expect it to have served as a source of investment. Maize constituted an important technological innovation with a very significant impact on the productivity of Portuguese agriculture, but its cultivation was concentrated in the northwestern part of the country, leaving the rest of the sector untouched.

Failure of a Modern Land Market to Develop

One very important consequence of empire for agriculture and, hence, for the economy was the preservation of the property structure inherited from the Middle Ages, namely that resulting from the crown’s donations. The empire once again enhanced the crown’s power to attract talent and, consequently, to donate land. Donations were regulated by official documents called Cartas de Foral, in which the tributes due to the beneficiaries were specified. During the time of the empire, the conditions governing donations changed in a way that reveals an increased monarchical power: donations were made for long periods (for instance, one life), but the land could not be sold or divided (and, thus, no part of it could be sold separately), and renewal required confirmation by the crown. The rules of donation, by prohibiting the buying, selling and partition of land, were thus a major obstacle to the existence not only of a land market, but also of a clear definition of property rights, as well as of freedom in the management of land use.

Additionally, various tributes were due to the beneficiaries. Some were in kind, some in money; some were fixed, others proportional to the product of the land. This arrangement dissociated land ownership from the appropriation of the land’s product, since the land was ultimately the crown’s. Furthermore, the actual beneficiaries (thanks to the donations’ rules) had little freedom in the management of the donated land. Although selling the land in such circumstances was forbidden to the beneficiaries, renting it was not, and several beneficiaries did so. A new dissociation between ownership and appropriation of product was thus introduced. Although in these donations some tributes were paid by freeholders, most were paid by copyholders. Copyhold granted its signatories the use of land in perpetuity or in lives (one to three), but did not allow them to sell it. This introduced a further dissociation between ownership, appropriation of the land’s product and its management. Although it could not be sold, land under copyhold could be ceded in “sub-copyhold” contracts – a replication of the original contract under identical conditions. This obviously added a new complication to the system. As should be clear by now, such a “baroque” system created an accumulation of layers of rights over the land, as different people could exert different rights over it, with each layer of rights limited by the others and sometimes conflicting with them in an intricate way. A major consequence of all this was the limited freedom the various holders of rights had in the management of their assets.

High Levels of Taxation in Agriculture

A second direct consequence of the system was the complicated juxtaposition of tributes on agricultural product. The land and its product in Portugal in those days were loaded with tributes (a sort of taxation). This explains one recent historian’s claim (admittedly exaggerated) that, in that period, those who owned the land did not toil it, and those who toiled it did not own it. We must distinguish these tributes from rent payments proper, as rent contracts are freely signed by the two (or more) parties taking part in them. The tributes discussed here represented, in reality, an imposition, which makes the word taxation appropriate to describe them. This is one further result of the already mentioned feature of the institutional framework of the time: the difficulty of distinguishing between the private and the public spheres.

Besides the tributes just described, other tributes also weighed on the land. Some were, again, of a nature we would today call private, others of a more clearly defined public nature. The former were the tributes due to the Church, the latter the taxes proper, due explicitly as such to the crown. The main tribute due to the Church was the tithe. In theory, the tithe was a tenth of farmers’ production and was to be paid directly to certain religious institutions. In practice, it was not always a tenth of production, nor did the Church always receive it directly, as its collection was in many cases rented out to various other agents. Nevertheless, it was an important tribute to be paid by producers in general. The taxes due to the crown were the sisa (an indirect tax on consumption) and the décima (an income tax). As far as we know, these tributes weighed on average much less than the seigneurial tributes. Still, when added to them, they accentuated the high level of taxation or para-taxation typical of the Portuguese economy of the time.

Portugal under Spanish Rule, Restoration of Independence and the Eighteenth Century

Spanish Rule of Portugal, 1580-1640

The death of King Sebastian in North Africa, during a military mission in 1578, left the Portuguese throne with no direct heir. There were, however, various indirect candidates in line, thanks to the many kinship links established by the Portuguese royal family to other European royal and aristocratic families. Among them was Phillip II of Spain. He would eventually inherit the Portuguese throne, although only after invading the country in 1580. Between 1578 and 1580 leaders in Portugal tried unsuccessfully to find a “national” solution to the succession problem. In the end, resistance to the establishment of Spanish rule was extremely light.

Initial Lack of Resistance to Spanish Rule

To understand why resistance was so mild one must bear in mind the nature of such political units as the Portuguese and Spanish kingdoms at the time. These kingdoms were not the equivalent of contemporary nation-states. They had separate identities, evident in such things as different languages, different cultural histories, and different institutions, but this did not amount to being nations. The crown itself, seen as an institution, still retained many features of a “private” venture. Of course, to some extent it represented the materialization of the kingdom and its “people,” but (by the standards of current political concepts) its definition remained much more ambiguous. Furthermore, Phillip II promised to adopt a set of rules allowing for extensive autonomy: the Portuguese crown would be “aggregated” to the Spanish crown, although not “absorbed,” “associated” or even “integrated” with it. According to those rules, Portugal was to keep its separate identity as a crown and as a kingdom. All positions in the Portuguese government were to be given to Portuguese subjects; Portuguese was the only language allowed in official matters in Portugal; and positions in the Portuguese empire were to be given only to the Portuguese.

The implementation of such rules depended largely on the willingness of the Portuguese nobility, Church and high-ranking officials to accept them. As there were no major popular revolts that could pressure these groups to decide otherwise, they did not have much difficulty in accepting them. In reality, they saw the new situation as an opportunity for greater power. After all, Spain was then the largest and most powerful political unit in Europe, with vast extensions throughout the world. To participate in such a venture under conditions of great autonomy was seen as an excellent opening.

Resistance to Spanish Rule under Phillip IV

The autonomous status was kept largely untouched until the third decade of the seventeenth century, i.e., until Phillip IV’s reign (1621-1640, in Portugal). This was a reign marked by an important attempt at centralization of power under the Spanish crown. A major impulse for this was Spain’s participation in the Thirty Years War. Simply put, the financial stress caused by the war forced the crown not only to increase fiscal pressure on the various political units under it but also to try to control them more closely. This led to serious efforts at revoking the autonomous status of Portugal (as well as other European regions of the empire). And it was as a reaction to those attempts that many Portuguese aristocrats and important personalities led a movement to recover independence. This movement must, again, be interpreted with care, paying attention to the political concepts of the time. This was not an overtly national reaction, in today’s sense of the word “national.” It was mostly a reaction from certain social groups that felt a threat to their power by the new plans of increased centralization under Spain. As some historians have noted, the 1640 revolt should be best understood as a movement to preserve the constitutional elements of the framework of autonomy established in 1580, against the new centralizing drive, rather than a national or nationalist movement.

Although that was the original intent of the movement, the fact is that, progressively, the new Portuguese dynasty (whose first monarch was John IV, 1640-1656) proceeded to an unprecedented centralization of power in the hands of the Portuguese crown. This means that, even if the original intent of the leaders of the 1640 revolt was to keep the autonomy prevalent both under pre-1580 Portuguese rule and under post-1580 Spanish rule, the final result of their action was to favor centralization in the Portuguese crown, and thus to help define Portugal as a clearly separate country. Again, we should be careful not to interpret this new bout of centralization in the seventeenth and eighteenth centuries as the creation of a national state and of a modern government. Many of the intermediate groups (in particular the Church and the aristocracy) kept their powers largely intact, even powers we would nowadays call public (such as taxation, justice and policing). But there is no doubt that the crown significantly increased its redistributive power, and the nobility and the Church had, increasingly, to rely on service to the crown to keep most of their powers.

Consequences of Spanish Rule for the Portuguese Empire

The period of Spanish rule had significant consequences for the Portuguese empire. Owing to its integration into the Spanish empire, Portuguese colonial territories became a legitimate target for all of Spain’s enemies. The European countries with imperial strategies (in particular Britain, the Netherlands and France) no longer saw Portugal as a countervailing ally in their struggle with Spain, and consequently mounted serious assaults on Portuguese overseas possessions. One further element of the geopolitical landscape of the period heightened competitors’ willingness to attack Portugal: Holland’s process of separation from the Spanish empire. Spain was not only a large overseas empire but also an enormous European one, of which Holland was a part until the 1560s. Holland saw the Portuguese section of the Iberian empire as its weakest link and, accordingly, attacked it in a fairly systematic way. The Dutch attacks on Portuguese colonial possessions ranged from America (Brazil) to Africa (São Tomé and Angola) to Asia (India, several points in Southeast Asia, and Indonesia), and in the course of them several Portuguese territories were conquered, mostly in Asia. Portugal, however, managed to keep most of its African and American territories.

The Shift of the Portuguese Empire toward the Atlantic

When it regained independence, Portugal had to realign its external position in accordance with the new context. Interestingly enough, all those rivals that had attacked the country’s possessions during Spanish rule initially supported its separation. France was the most decisive partner in the first efforts to regain independence. Later (in the 1660s, in the final years of the war with Spain) Britain assumed that role. This inaugurated an essential feature of Portuguese external relations: from then on Britain became the most consistent Portuguese foreign partner. In the 1660s such a move was connected to the re-orientation of the Portuguese empire. What had until then been the center of the empire (its eastern part – India and the rest of Asia) lost importance. This was due first to the renewal of activity along the Mediterranean route, which threatened the sea route to India, and second to the fact that the eastern empire was the part in which the Portuguese had ceded the most territory during Spanish rule, in particular to the Netherlands. Portugal kept most of its positions in both Africa and America, and this part of the world was to acquire extreme importance in the seventeenth and eighteenth centuries. In the last decades of the seventeenth century, Portugal was able to develop numerous trades centered mostly on Brazil (although some of the Atlantic islands also participated), involving sugar, tobacco and tropical woods, all sent to the growing market for luxury goods in Europe, to which was added a growing and prosperous trade in slaves from West Africa to Brazil.

Debates over the Role of Brazilian Gold and the Methuen Treaty

The range of goods in Atlantic trade acquired an important addition with the discovery of gold in Brazil in the late seventeenth century. It is the increased importance of gold in Portuguese trade relations that helps explain one of the most important diplomatic moments in Portuguese history, the Methuen Treaty (also called the Queen Anne Treaty), signed between Britain and Portugal in 1703. Many Portuguese economists and historians have blamed the treaty for Portugal’s inability to achieve modern economic growth during the eighteenth and nineteenth centuries. It must be remembered that the treaty stipulated that tariffs on imports of Portuguese wine be reduced in Britain (favoring it explicitly over French wine), while, as a counterpart, Portugal had to eliminate all prohibitions on imports of British wool textiles (even if tariffs were left in place). Some historians and economists have seen this as Portugal’s renunciation of a national industrial sector in favor of specializing in agricultural goods for export. As proof, such scholars present figures for the balance of trade between Portugal and Britain after 1703, with the former country exporting mainly wine and the latter textiles, and a widening trade deficit. Other authors, however, have shown that what mostly financed this trade (and the deficit) was not wine but the newly discovered Brazilian gold. Could gold, then, be the culprit for preventing Portuguese economic growth? Most historians now reject the hypothesis. The problem lay not in a particular treaty signed in the early eighteenth century but in the structural conditions required for the economy to grow – a question dealt with further below.

Portuguese historiography currently tends to see the Methuen Treaty mostly in the light of Portuguese diplomatic relations in the seventeenth and eighteenth centuries. On this view, the treaty chiefly marks the definitive alignment of Portugal within the British sphere. The treaty was signed during the War of the Spanish Succession, a war that divided Europe in a most dramatic manner. As the Spanish crown was left without a successor in 1700, the countries of Europe were led to support different candidates. The diplomatic choice ended up being polarized between Britain, on the one side, and France, on the other. Increasingly, Portugal was led to prefer Britain, the country that granted more protection to the prosperous Portuguese Atlantic trade. As Britain also had an interest in this alignment (due to the important Portuguese colonial possessions), the treaty was economically beneficial to Portugal, contrary to what some of the older historiography tended to believe. In fact, in simple trade terms, the treaty was a good bargain for both countries, each having been given preferential treatment for certain of its more typical goods.

Brazilian Gold’s Impact on Industrialization

It is this sequence of events that has led several economists and historians to blame gold for the Portuguese inability to industrialize in the eighteenth and nineteenth centuries. Recent historiography, however, has questioned this interpretation. The manufactures that gold allegedly smothered were dedicated to the production of luxury goods and, consequently, directed to a small market that had nothing to do (in either the nature of the market or the technology involved) with the sectors typical of European industrialization. Had they continued, it is very doubtful that they would ever have developed into a full industrial spurt of the kind then under way in Britain. The problem lay elsewhere, as we will see below.

Prosperity in the Early 1700s Gives Way to Decline

Be that as it may, the first half of the eighteenth century was a period of unquestionable prosperity for Portugal, thanks mostly to gold, but also to the recovery of the remaining trades (both tropical and from the mainland). Such prosperity is most visible in the reign of King John V (1706-1750), generally seen as the Portuguese equivalent of the reign of France’s Louis XIV. Palaces and monasteries of great dimensions were built, and the king’s court acquired a pomp and grandeur not seen before or since, all financed largely by Brazilian gold. By the mid-eighteenth century, however, it all began to falter. Gold remittances began to decline in the 1750s. A new crisis began, compounded by the dramatic 1755 earthquake, which destroyed a large part of Lisbon and other cities. This new crisis was at the root of a political project aiming at a vast renaissance of the country. It was the first in a series of such projects, all of which, significantly, occurred in the wake of traumatic events related to empire. The new project is associated with the reign of King Joseph I (1750-1777), in particular with the policies of his prime minister, the Marquis of Pombal.

Centralization under the Marquis of Pombal

The thread linking the most important political measures taken by the Marquis of Pombal is the reinforcement of state power. A major element in this connection was his confrontation with certain noble and Church representatives. The most spectacular episodes in this respect were, first, the execution of an entire noble family and, second, the expulsion of the Jesuits from national soil. This is sometimes taken to represent an outright hostile policy towards both aristocracy and Church. It is better seen, however, as an attempt to integrate aristocracy and Church into the state, thus undermining their autonomous powers. In reality, what the Marquis did was to use the power to confer noble titles, as well as the Inquisition, as means to centralize and increase state power. As a matter of fact, one of the most important instruments of recruitment for state functions during the Marquis’ rule was the promise of noble titles. And the Inquisition’s functions also changed, from being mainly a religious court, mostly dedicated to the prosecution of Jews, to becoming a sort of civil political police. The Marquis’ centralizing policy covered a wide range of matters, in particular those most significant to state power. Internal policing was reinforced, with the creation of new police institutions directly coordinated by the central government. The collection of taxes became more efficient, through an institution more similar to a modern Treasury than any earlier body. Improved collection also applied to tariffs and to profits from colonial trade.

The centralization of power by the government had significant repercussions for certain aspects of the relationship between state and civil society. Although the Marquis’ rule is frequently pictured as violent, it included measures generally considered “enlightened.” Such is the case of the abolition of the distinction between “New Christians” and Christians (New Christians were Jews converted to Catholicism, who as such suffered a certain degree of segregation, constituting an intermediate category between Jews and Christians proper). Another very important political measure by the Marquis was the abolition of slavery in mainland Portugal (even if slavery continued to be used in the colonies and the slave trade continued to prosper, there is no questioning the importance of the measure).

Economic Centralization under the Marquis of Pombal

The Marquis applied his centralizing drive to economic matters as well. This happened first in agriculture, with the creation of a monopolizing company for trade in Port wine. It continued in colonial trade, where the method applied was the same, that is, the creation of companies monopolizing trade for certain products or regions of the empire. Later, interventionism extended to manufacturing. Such interventionism was essentially determined by the international trade crisis that affected many colonial goods, the most important among them gold. As the country faced a new international payments crisis, the Marquis reverted to protectionism and subsidization of various industrial sectors. Again, as such state support was essentially devoted to traditional, low-tech, industries, this policy failed to boost Portugal’s entry into the group of countries that first industrialized.

Failure to Industrialize

The country would never be the same after the Marquis’ tenure. The “modernization” of state power and his various policies left a profound mark on the Portuguese polity. They were not enough, however, to create the conditions necessary for Portugal to enter a process of industrialization. In reality, most of the structural impediments to modern growth were left untouched or were aggravated by the Marquis’ policies. This is particularly true of the relationship between central power and peripheral (aristocratic) powers. The Marquis continued the tradition, exacerbated during the fifteenth and sixteenth centuries, of liberally conferring noble titles on members of the court. Again, this accentuated the confusion between the public and the private spheres, with a particular incidence (for what concerns us here) on the definition of property and property rights. The granting of a noble title by the crown on many occasions implied a donation of land. The beneficiary of the donation was entitled to collect tributes from the population living in the territory but was forbidden to sell it and, sometimes, even to rent it. This meant such beneficiaries were not true owners of the land; the land could not exactly be called their property. This lack of private rights was, however, compensated by the granting of such “public” rights as the ability to collect tributes – a sort of tax. Beneficiaries of donations were, thus, neither true landowners nor true state representatives. And the same went for the crown. By giving away many of the powers we tend to call public today, the crown was acting as if it could dispose of land under its administration in the same manner as private property. But since this was not entirely private property, in doing so the crown was also conceding public powers to agents we would today call private. Such confusion did not help the creation of either a true entrepreneurial class or a state dedicated to the protection of private property rights.

The whole property structure described above was preserved, even after the reforming efforts of the Marquis of Pombal. The system of donations as a method of payment for positions held at the king’s court, together with the juxtaposition of various sorts of tributes due either to the crown or to local powers, allowed for the perpetuation of a situation in which the private and the public spheres were not clearly separated. Consequently, property rights were not well defined. If there is a crucial reason for Portugal’s impaired economic development, these are the factors to which we should pay attention. Next, we turn to the nineteenth and twentieth centuries, to see how difficult the dismantling of such an institutional structure proved to be and how it affected the growth potential of the Portuguese economy.

Suggested Reading:

Birmingham, David. A Concise History of Portugal. Cambridge: Cambridge University Press, 1993.

Boxer, C.R. The Portuguese Seaborne Empire, 1415-1825. New York: Alfred A. Knopf, 1969.

Godinho, Vitorino Magalhães. “Portugal and Her Empire, 1680-1720.” The New Cambridge Modern History, Vol. VI. Cambridge: Cambridge University Press, 1970.

Oliveira Marques, A.H. History of Portugal. New York: Columbia University Press, 1972.

Wheeler, Douglas. Historical Dictionary of Portugal. London: Scarecrow Press, 1993.

Citation: Amaral, Luciano. “Economic History of Portugal”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/economic-history-of-portugal/

Public Sector Pensions in the United States

Lee A. Craig, North Carolina State University

Introduction

Although employer-provided retirement plans are a relatively recent phenomenon in the private sector, dating from the late nineteenth century, public sector plans go back much further in history. From the Roman Empire to the rise of the early-modern nation state, rulers and legislatures have provided pensions for the workers who administered public programs. Military pensions, in particular, have a long history, and they have often been used as a key element to attract, retain, and motivate military personnel. In the United States, pensions for disabled and retired military personnel predate the signing of the U.S. Constitution.

Like military pensions, pensions for loyal civil servants date back centuries. Prior to the nineteenth century, however, these pensions were typically handed out on a case-by-case basis; except for the military, there were few if any retirement plans or systems with well-defined rules for qualification, contributions, funding, and so forth. Most European countries maintained some type of formal pension system for their public sector workers by the late nineteenth century. Although a few U.S. municipalities offered plans prior to 1900, most public sector workers were not offered pensions until the first decades of the twentieth century. Teachers, firefighters, and police officers were typically the first non-military workers to receive a retirement plan as part of their compensation.

By 1930, pension coverage in the public sector was relatively widespread in the United States, with all federal workers being covered by a pension and an increasing share of state and local employees included in pension plans. In contrast, pension coverage in the private sector during the first three decades of the twentieth century remained very low, perhaps as low as 10 to 12 percent of the labor force (Clark, Craig, and Wilson 2003). Even today, pension coverage is much higher in the public sector than it is in the private sector. Over 90 percent of public sector workers are covered by an employer-provided pension plan, whereas only about half of the private sector work force is covered (Employee Benefit Research Institute 1997).

It should be noted that although today the term “pension” generally refers to cash payments received after the termination of one’s working years, typically in the form of an annuity, historically, a much wider range of retiree benefits, survivor’s annuities, and disability benefits were also referred to as pensions. In the United States, for example, the initial army and navy pension systems were primarily disability plans. However, disability was often liberally defined and included superannuation or the inability to perform regular duties due to infirmities associated with old age. In fact, every disability plan created for U.S. war veterans eventually became an old-age pension plan, and the history of these plans often reflected broader economic and social trends.

Early Military Pensions

Ancient Rome

Military pensions date from antiquity. Almost from its founding, the Roman Republic offered pensions to its successful military personnel; however, these payments, which often took the form of land or special appropriations, were generally ad hoc and typically based on the machinations of influential political cliques. As a result, on more than one occasion, a pension served as little more than a bribe to incite soldiers to serve as the personal troops of the politicians who secured the pension. No small amount of the turmoil accompanying the Republic’s decline can be attributed to this flaw in Roman public finance.

After establishing the Empire, Augustus, who knew a thing or two about the politics and economics of military issues, created a formal pension plan (13 BC): veteran legionnaires were to receive a pension upon the completion of sixteen years in a legion and four years in the military reserves. This was a true retirement plan designed to reward and mollify veterans returning from Rome’s frontier campaigns. The original Augustan pension suffered from the fact that it was paid from general revenues (and Augustus’ own generous contributions), and in 5 AD (6 AD according to some sources), Augustus established a special fund (aerarium militare) from which retiring soldiers were paid. Although the length of service was also increased from sixteen years on active duty to twenty (and five years in the reserves), the pension system was explicitly funded through a five percent tax on inheritances and a one percent tax on all transactions conducted through auctions — essentially a sales tax. Retiring legionnaires were to receive 3,000 denarii; centurions received considerably larger stipends (Crook 1996). In the first century AD, a lump-sum payment of 3,000 denarii would have represented a substantial amount of money — at least by working class standards. A single denarius equaled roughly a day’s wage for a common laborer; so at an eight percent discount rate (Homer and Sylla 1991), the pension would have yielded an annuity of roughly 66 to 75 percent of a laborer’s annual earnings. Curiously, the basic parameters of the Augustan pension system look much like those of modern public sector pension plans. Although the state pension system perished with Rome, the key features — twenty to twenty-five years of service to qualify and a “replacement rate” of 66 to 75 percent — would reemerge more than a thousand years later to become benchmarks for modern public sector plans.
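
As a rough illustration of the replacement-rate arithmetic above, the following sketch treats the 3,000-denarii lump sum as a perpetuity at the eight percent discount rate; the figure of roughly 320 to 365 paid working days per year (at one denarius per day) is an assumption made here to show how the 66 to 75 percent range can arise.

```python
# Back-of-the-envelope check of the Augustan pension's replacement rate.
# The lump sum and discount rate come from the text; the number of paid
# working days per year is an assumption made for illustration.
lump_sum = 3_000            # denarii paid to a retiring legionnaire
discount_rate = 0.08        # rate cited from Homer and Sylla (1991)

annuity = lump_sum * discount_rate    # perpetuity income: 240 denarii per year

for working_days in (320, 365):       # assumed paid days per year
    annual_wage = 1 * working_days    # one denarius per day for a laborer
    print(f"{working_days} paid days/year -> "
          f"replacement rate {annuity / annual_wage:.0%}")
# Prints roughly 75% and 66%, matching the range quoted in the text.
```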

Early-modern Europe

The Roman pension system collapsed, or perhaps withered away is the better term, with Rome itself, and for nearly a thousand years military service throughout Western Civilization was based on personal allegiance within a feudal hierarchy. During the Middle Ages, there were no military pensions strictly comparable to the Roman system, but with the establishment of the nation state came the reemergence of standing armies led by professional soldiers. Like the legions of Imperial Rome, these armies owed their allegiance to a state rather than to a person. The establishment of standardized systems of military pensions followed very shortly thereafter, beginning as early as the sixteenth century in England. During its 1592-93 session, Parliament established “reliefe for Souldiours … [who] adventured their lives and lost their limbs or disabled their bodies” in the service of the Crown (quoted in Clark, Craig, and Wilson 2003, p. 29). Annual pensions were not to exceed ten pounds for “private soldiers,” or twenty pounds for a “lieutenant.” Although one must be cautious in the use of income figures and exchange rates from that era, an annuity of ten pounds would have roughly equaled fifty gold dollars (at subsequent exchange rates), which was the equivalent of per capita income a century or so later, making the pension generous by contemporary standards.

These pensions were nominally disability payments, not retirement pensions, though governments often awarded the latter on a case-by-case basis, and by the eighteenth century all of the other early-modern Great Powers — France, Austria, Spain, and Prussia — maintained some type of military pension for their officer castes. These public pensions were not universally popular. Indeed, they were often viewed as little more than spoils. Samuel Johnson famously described a public pension as “generally understood to mean pay given to a state-hireling for treason to his country” (quoted in Clark, Craig, and Wilson 2003, 29). By the early nineteenth century, Britain, France, Prussia, and Spain all had formal retirement plans for their military personnel. The benchmark for these plans was the British “half-pay” system, in which retired, disabled or otherwise unemployed officers received roughly fifty percent of their base pay. This was fairly lucrative compared to the annuities received by their continental counterparts.

Military Pensions in the United States

Prior to the American Revolution, Britain’s American colonies provided pensions to disabled men who were injured defending the colonists and their property from the French, the Spanish, and the natives. During the Revolutionary War the colonies extended this coverage to the members of their militias. Several colonies maintained navies, and they also offered pensions to their naval personnel. Independent of the actions of the colonial legislatures, the Continental Congress established pensions for its army (1776) and naval forces (1775). U.S. military pensions have been provided continuously, in one form or another, ever since.

Revolutionary War Era

Although initially these were all strictly disability plans, in order to keep the troops in the field during the crucial months leading up to the Battle of Yorktown (1781), Congress authorized the payment of a life annuity, equal to one-half base pay, to all officers remaining in the service for the duration of the Revolution. It was not long before Congress and the officers in question realized that the national government’s cash-flow situation and the present value of its future revenues were insufficient to meet this promise. Ultimately, the leaders of the disgruntled officers met at Newburgh, New York and pressed their demands on Congress, and in the spring of 1783, Congress converted the life annuities to a fixed-term payment equal to full pay for five years. Even these more limited obligations were not fully paid to qualifying veterans, and only the direct intervention of George Washington defused a potential coup (Ferguson 1961; Middlekauff 1982). The Treaty of Paris was signed in September of 1783, and the Continental Army was furloughed shortly thereafter. The officers’ pension claims were subsequently met to a degree by special interest-bearing “commutation certificates” — bonds, essentially. It took another eight years before the Constitution and Alexander Hamilton’s financial reforms placed the new federal government in a position to honor these obligations through the issuance of the new (consolidated) federal debt. However, because of the country’s precarious financial situation between the Revolution and the consolidation of the debt, many embittered officers sold their “commutation” bonds in the secondary market at a steep discount.

In addition to a “regular” army pension plan, every war from the Revolution through the Indian Wars of the late nineteenth century saw the creation of a pension plan for the veterans of that particular war. Although every one of those plans was initially a disability plan, they were all eventually converted into an old-age pension plan — though this conversion often took a long time. The Revolutionary War plan became a general retirement plan in 1832 — 49 years after the Treaty of Paris ended the war. At that time every surviving veteran of the Revolutionary War received a pension equal to 100 percent of his base pay at the end of the war. Similarly, it was 56 years after the War of 1812 before survivors of that war were given retirement pensions.

Severance Pay

As for a retirement plan for the “regular” army, there was none until the Civil War; however, soldiers who were discharged after 1800 were given three months’ pay as severance. Officers were initially offered the same severance package as enlisted personnel, but in 1802, officers began receiving one month’s pay for each year of service over three years. Hence an officer with twelve years of service earning, say, $40 a month could, theoretically, convert his severance into an annuity, which at a six percent rate of interest would pay $2.40 a month, or less than $30 a year. This was substantially less than a prime farmhand could expect to earn and a pittance compared to that of, say, a British officer. Prior to the onset of the War of 1812, Congress supplemented these disability and severance packages with a type of retirement pension. Any soldier who enlisted for five years and who was honorably discharged would receive, in addition to his three months’ severance, 160 acres of land from the so-called military reserve. If he was killed in action or died in the service, his widow or heir(s) would receive the same benefit. The reservation price of public land at that time was $2.00 per acre ($1.64 for cash). So, the severance package would have been worth roughly $350, which, annuitized at six percent, would have yielded less than $2.00 a month in perpetuity. This was an ungenerous settlement by almost any standard. Of course, in a nation of small farmers, 160 acres might have represented a good start for a young cash-poor farmhand just out of the army.
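
To make the arithmetic concrete, the following is a minimal Python sketch of the annuitization described above. It is only an illustration: the function name is mine, the $480 officer lump sum is simply the value implied by the article’s $2.40-a-month figure, and $350 is the article’s rough valuation of the enlisted land-plus-severance package.

```python
# A minimal sketch of the annuitization arithmetic described in the text.
# The figures are taken from (or implied by) the article, not from any statute.

def perpetual_annuity(lump_sum, annual_rate):
    """Annual income from converting a lump sum into a perpetuity."""
    return lump_sum * annual_rate

officer_lump_sum = 480.0   # implied by the article's $2.40-a-month example
enlisted_package = 350.0   # 160 acres at roughly $2.00 plus three months' pay

print(perpetual_annuity(officer_lump_sum, 0.06) / 12)   # ~2.40 per month
print(perpetual_annuity(enlisted_package, 0.06) / 12)   # 1.75 per month
```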

The Army Develops a Retirement Plan

The Civil War resulted in a fundamental change in this system. Seeking the power to cull the active list of officers, the Lincoln administration persuaded Congress to pass the first general army retirement law. All officers could apply for retirement after 40 years of service, and a formal retirement board could retire any officer (after 40 years of service) who was deemed incapable of field service. A limit was placed on the number of officers who could be retired in this manner. Congress amended the law several times over the next few decades, with the key changes coming in 1870 and 1882. Taken together, these acts established 30 years as the minimum service requirement, 75 percent of base pay as the standard pension, and age 64 as the mandatory retirement age. This was the basic army pension plan until 1920, when Congress established the “up-or-out” policy in which an officer who was not deemed to be on track for promotion was retired. Such an officer was to receive a retirement benefit equal to 2.5 percent of base pay multiplied by years of service, not to exceed 75 percent of his base pay at the time of retirement. Although the maximum was reduced to 60 percent in 1924, it was subsequently increased back to 75 percent, and the service requirement was reduced to 20 years. With those adjustments, this remains the basic plan for military personnel to this day (Hustead and Hustead 2001).
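
The benefit formula that emerged from these acts is easy to state compactly. The Python sketch below is only an illustration of the 2.5-percent-per-year rule with its 75 percent cap; the function and parameter names are mine, not drawn from any statute.

```python
# A minimal sketch of the "up-or-out" benefit formula described above:
# 2.5 percent of base pay per year of service, capped at 75 percent.
# Function and parameter names are illustrative, not statutory.

def retirement_benefit(base_pay, years_of_service, rate=0.025, cap=0.75):
    """Benefit under a 2.5-percent-per-year-of-service formula with a cap."""
    return base_pay * min(rate * years_of_service, cap)

print(retirement_benefit(100.0, 30))  # 75.0 -- the 75 percent cap binds
print(retirement_benefit(100.0, 25))  # 62.5 -- e.g., 25 years of service
```

The same arithmetic reappears in the navy’s 1916 plan discussed below, under which a commander retiring with 25 years of service received 62.5 percent of base pay.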

Except for the disability plans that were eventually converted to old-age pensions, prior to 1885 the army retirement plan was available only to commissioned officers; however, in that year Congress created the first systematic retirement plan for enlisted personnel in the U.S. Army. Like the officers’ plan, it permitted retirement upon the completion of 30 years of service at 75 percent of base pay. With the subsequent reduction in the minimum service requirement to 20 years, the enlisted plan merged with that for officers.

Naval Pensions

Until after World War I, the army and the navy maintained separate pension plans for their officers. The Continental Navy created a pension plan for its officers and seamen in 1775, even before an army plan was established. In the following year the navy plan was merged with the first army pension plan, and it too was eventually converted to a retirement plan for surviving veterans in 1832. The first disability pension plan for “regular” navy personnel was created in 1799. Officers’ benefits were not to exceed half-pay, while those for seamen and marines were not to exceed $5.00 a month, which was roughly 33 percent of an unskilled seaman’s base pay or 25 percent of that of a hired laborer in the private sector.

Except for the eventual conversion of the war pensions to retirement plans, there was no formal retirement plan for naval personnel until 1855. In that year Congress created a review board composed of five officers from each of the following ranks: captain, commander, and lieutenant. The board was to identify superannuated officers or those generally found to be unfit for service, and at the discretion of the Secretary of the Navy, the officers were to be placed on the reserve list at half-pay, subject to the approval of the President. Before the plan had much impact the Civil War intervened, and in 1861 Congress established the essential features of the navy retirement plan, which were to remain in effect throughout the rest of the century. Like the army plan, retirement could occur in one of two ways: either a retirement board could find the officer incapable of continuing on active duty, or after 40 years of service an officer could apply for retirement. In either case, officers on the retired list remained subject to recall; they were entitled to wear their uniforms; they were subject to the Articles of War and courts-martial; and they received 75 percent of their base pay. However, just as with the army, certain constraints on the length of the retired list limited the effectiveness of the act.

In 1899, largely at the urging of then Assistant Secretary of the Navy Theodore Roosevelt, the navy adopted a rather Byzantine scheme for identifying and forcibly retiring officers deemed unfit to continue on active duty. Retirement (or “plucking”) boards were responsible for identifying those to be retired. Officers could avoid the ignominy of forced retirement by volunteering to retire, and there was a ceiling on the number who could be retired by the boards. In addition, all officers retired under this plan were to receive 75 percent of the sea pay of the next rank above that which they held at the time of retirement. (This last feature was amended in 1912, and officers simply received three-fourths of the pay of the rank in which they retired.) During the expansion of the navy leading up to America’s participation in World War I, the plan was further amended, and in 1915 the president was authorized, with the advice and consent of the Senate, to reinstate any officer involuntarily retired under the 1899 act.

Still, the navy continued to struggle with its superannuated officers. In 1908, Congress finally granted naval officers the right to retire voluntarily at 75 percent of their active-duty pay upon the completion of 30 years of service. In 1916, navy pension rules were again altered, and this time a basic principle – “up or out” (with a pension) – was established, a principle which continues to this day. There were four basic components that differentiated the new navy pension plan from earlier ones. First, promotion to the ranks of rear admiral, captain, and commander was based on the recommendations of a promotion board. Prior to that time, promotions were based solely on seniority. Second, the officers on the active list were to be distributed among the ranks according to percentages that were not to exceed certain limits; thus, there was a limit placed on the number of officers who could be promoted to a certain rank. Third, age limits were placed on officers in each grade. Officers who reached a certain age in a certain rank were retired with pay equal to 2.5 percent of base pay multiplied by the number of years in service, with the maximum not to exceed 75 percent of their final active-duty pay. For example, a commander who reached age 50 and who had not been selected for promotion to captain would be placed on the retired list. If he had served 25 years, then he would receive 62.5 percent of his base pay upon retirement. Finally, the act also imposed the same mandatory retirement provision on naval personnel as the 1882 act (amended in 1890) imposed on army personnel, with age 64 being established as the universal age of retirement in the armed forces of the United States.

These plans applied to naval officers only; however, in 1867 Congress authorized the retirement of seamen and marines who had served 20 or more years and who had become infirm as a result of old age. These veterans would receive one-half their base pay for life. In addition, the act allowed any seaman or marine who had served 10 or more years and subsequently become disabled to apply to the Secretary of the Navy for a “suitable amount of relief” up to one-half base pay from the navy’s pension fund (see below). In 1899, the retirement act of 1885, which covered enlisted army personnel, was extended to enlisted navy personnel, with a few minor differences, which were eliminated in 1907. From that year, all enlisted personnel in both services were entitled to retire voluntarily at 75 percent of their pay and other allowances after 30 years of service, subsequently reduced to 20 years.

Funding U.S. Military Pensions

The history of pensions, particularly public sector pensions, cannot be easily separated from the history of pension finance. The creation of a pension plan simultaneously creates pension liabilities, and the parameters of the plan establish the size and the timing of those liabilities. U.S. Army pensions have always been funded on a “pay-as-you-go” basis from the general revenues of the U.S. Treasury. Thus army pensions have always been simply one more liability of the federal government. Despite the occasional accounting gimmick, the general revenues and obligations of the federal government are highly fungible, and so discussing the actuarial properties of the U.S. Army pension plan is like discussing the actuarial properties of the Department of Agriculture or the salaries of F.B.I. agents. However, until well into the twentieth century, this was not the case with navy pensions. They were long paid from a specific fund established separately from the general accounts of the Treasury, and thus their history is quite different from that of the army’s pensions.

From its inception in 1775, the navy’s pension plan for officers and seamen was financed with monies from the sale of captured prizes — enemy ships and those of other states carrying contraband. This funding mechanism meant that the flow of revenues needed to finance the navy’s pension liabilities was very erratic over time, fluctuating with the fortunes of war and peace. To manage these monies, the Continental Congress (and later the U.S. Congress) established the navy pension fund and allowed the trustees of this fund to invest the monies in a wide range of assets, including private equities. The history of the management of this pension fund illustrates many of the problems that can arise when public pension monies are used to purchase private assets. These include the loss of a substantial proportion of its assets on bad investments in private equities, the treasury’s bailout of the fund for these losses, and investment decisions that were influenced by political pressure. In addition there is evidence of gross malfeasance on the part of the agents of the fund, including trading on their own accounts, insider trading, and outright fraud.

Excluding a brief interlude just prior to the Civil War, the navy pension fund had a colorful history lasting nearly one hundred and fifty years. Between its establishment in 1775 and 1842, it went bankrupt no fewer than three times, being bailed out by Congress each time. By 1842, there was little opportunity to continue to replenish the fund with fresh prize monies, and Congress, temporarily as it turned out, converted navy pensions to a pay-as-you-go system, like army pensions. With the onset of the Civil War, the Union Navy’s blockade of Confederate ports created new prize opportunities; the fund was reestablished, and navy pensions were once again paid from it. The fund subsequently accumulated an enormous balance. Like the antebellum losses of the fund, its postbellum surplus became something of a political football, and after much acrimonious debate, Congress took much of the fund’s balance and turned it over to the treasury. Still, the remnants of the fund persisted into the 1930s (Clark, Craig, and Wilson 2003).

Federal Civil Service Pensions

Like military pensions, pensions for loyal civil servants date back centuries; however, pension plans are of a more recent vintage, generally dating from the nineteenth century in Europe. In the United States, the federal government did not adopt a universal pension plan for civilian employees until 1920. This is not to say that there were no federal pensions before 1920. Pensions were available for some retiring civil servants, but Congress created them on a case-by-case basis. In the year before the federal pension plan went into effect, for example, there were 1,467 special acts of Congress either granting a new pension (912) or increasing the payments on old pensions (555) (Clark, Craig, and Wilson 2003). This process was as inefficient as it was capricious. Ending this system became a key objective of Congressional reforms.

The movement to create public sector pension plans at the turn of the twentieth century reflected the broader growth of the welfare state, particularly in Europe. Many progressives envisioned the nascent European “cradle-to-grave” programs as the precursor of a better society, one with a new social covenant between the state and its people. Old-age pensions would fill the last step before the grave. Although the ultimate goal of this movement, universal old-age pensions, would not be realized until the creation of the social security system during the Great Depression, the initial objective was to have the government supply old-age security to its own workers. To support the movement in the United States, proponents of universal old-age pensions pointed out that by the early twentieth century, thirty-two countries around the world, including most of the European states and many regimes considered to be reactionary on social issues, had some type of old-age pension for their non-military public employees. If the Russians could humanely treat their superannuated civil servants, the argument went, why couldn’t the United States?

Establishing the Civil Service System

In the United States, the key to the creation of a civil service pension plan was the creation of a civil service. Prior to the late nineteenth century, the vast majority of federal employees were patronage employees — that is, they served at the pleasure of an elected or appointed official. With the tremendous growth of the number of such employees in the nineteenth century, the costs of the patronage system eventually outweighed the benefits derived from it. For example, over the century as a whole the number of post offices grew from 906 to 44,848; federal revenues grew from $3 million to over $400 million; and non-military employment went from 1,000 to 100,000. Indeed, the federal labor force nearly doubled in the 1870s alone (Johnson and Libecap 1994). The growth rates of these indicators of the size of the public sector are large even when compared to the dramatic fourteen-fold increase in U.S. population between 1800 and 1900. As a result, in 1883 Congress passed the Pendleton Act, which created the federal civil service; the act passed largely, though not entirely, along party lines. As the party in power, the Republicans saw the conversion of federal employment from patronage to “merit” as an opportunity to gain the lifetime loyalty of an entire cohort of federal workers. In other words, by converting patronage jobs to civil service jobs, the party in power attempted to create lifetime tenure for its patronage workers. Of course, once in their civil service jobs, protected from the harshest effects of the market and the spoils system, federal workers simply did not want to retire — or, put another way, many tended to retire on the job — and thus the conversion from patronage to civil service led to an abundance of superannuated federal workers. Thus began the quest for a federal pension plan.

Passage of the Federal Employees Retirement Act

A bill providing pensions for non-military employees of the federal government was introduced in every session of Congress between 1900 and 1920. Representatives of workers’ groups, the executive branch, the United States Civil Service Commission and inquiries conducted by congressional committees all requested or recommended the adoption of retirement plans for civil-service employees. While the political dynamics between these parties were often subtle and complex, the campaigns culminated in the passage of the Federal Employees Retirement Act on May 22, 1920 (Craig 1995). The key features of the original act of 1920 included:

  • All classified civil service employees qualified for a pension after reaching age 70 and rendering at least 15 years of service. Mechanics, letter carriers, and post office clerks were eligible for a pension after reaching age 65, and railway clerks qualified at age 62.
  • The ages at which employees qualified were also mandatory retirement ages. An employee could, however, be retained for two years beyond the mandatory age if his department head and the head of the Civil Service Commission approved.
  • All eligible employees were required to contribute two and one-half percent of their salaries or wages towards the payment of pensions.
  • The pension benefit was determined by the number of years of service. Class A employees were those who had served 30 or more years. Their benefit was 60 percent of their average annual salary during the last ten years of service. The benefits were scaled down through Class F employees (at least 15 years but less than 18 years of service). They received 30 percent of their average annual salary during the last ten years of service.

Although subsequently revised, this plan remains one of the two main civil service pension plans in the United States, and it served as something of a model for many subsequent pension plans. The other, newer federal plan, established in 1983, is a hybrid. That is, it has a traditional defined benefit component, a defined contribution component, and a Social Security component (Hustead and Hustead 2001).

State and Local Pensions

Decades before the states or the federal government provided civilian workers with a pension plan, several large American cities established plans for at least some of their employees. Until the first decades of the twentieth century, however, these plans were generally limited to three groups of employees: police officers, firefighters, and teachers. New York City established the first such plan for its police officers in 1857. Like the early military plans, the New York City police pension plan was a disability plan until a retirement feature was added in 1878 (Mitchell et al. 2001). Only a few other (primarily large) cities joined New York with a plan before 1900. In contrast, municipal workers in Austria-Hungary, Belgium, France, Germany, the Netherlands, Spain, Sweden, and the United Kingdom were covered by retirement plans by 1910 (Squier 1912).

Despite the relatively late start, the subsequent growth of such plans in the United States was rapid. By 1916, 159 cities had a plan for one or more of these groups of workers, and 21 of those cities included other municipal employees in some type of pension coverage (Monthly Labor Review, 1916). In 1917, 85 percent of cities with 100,000 or more residents paid some form of police pension, as did 66 percent of those with populations between 50,000 and 100,000, and 50 percent of cities with populations between 30,000 and 50,000 had some pension liability (James 1921). These figures do not mean that all of these cities had a formal retirement plan. They only indicate that a city had at least $1 of pension liability. This liability could have been from a disability pension, a forced savings plan, or a discretionary pension. Still, by 1928, the Monthly Labor Review (April, 1928) could characterize police and fire plans as “practically universal”. At that time, all cities with populations of over 400,000 had a pension plan for either police officers or firefighters or both. Only one did not have a plan for police officers, and only one did not have a plan for firefighters. Several of those cities also had plans for their other municipal employees, and some cities maintained pension plans for their public school teachers separately from state teachers’ plans, which are reviewed below.

Eventually, some states also began to establish pension plans for state employees; however, initially these plans were primarily limited to teachers. Massachusetts established the first retirement pension plan for general state employees in 1911. The plan required workers to pay up to 5 percent of their salaries into a trust fund. Benefits were payable upon retirement. Workers were eligible to retire at age 60, and retirement was mandatory at age 70. At the time of retirement, the state purchased an annuity equal to twice the accumulated value (with interest) of the employee’s contributions. The calculation of the appropriate interest rate was, in many cases, not straightforward. Sometimes market rates or yields from a portfolio of assets were employed; sometimes a rate was simply established by legislation (see below). The Massachusetts plan initially became something of a model for subsequent public-sector pensions, but it was soon replaced by what became the standard public-sector defined benefit plan, much like the federal plan described above, in which the pension annuity was based on years of service and end-of-career earnings. Curiously, the Massachusetts plan resembled in some respects what have more recently been referred to as cash balance plans — hybrid plans that contain elements of both defined benefit and defined contribution plans.
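
As a rough illustration of how a Massachusetts-style benefit would have been computed, the Python sketch below accumulates a constant employee contribution at a fixed interest rate and then doubles the balance at retirement, as the 1911 plan prescribed. The salary, contribution rate, career length and interest rate are illustrative assumptions, not historical data.

```python
# A minimal sketch of the Massachusetts-style benefit: the state purchased an
# annuity worth twice the employee's accumulated contributions (with interest).
# All numerical inputs below are invented for illustration.

def accumulated_contributions(annual_salary, contribution_rate, years, interest):
    """Future value of a constant annual contribution compounded at `interest`."""
    balance = 0.0
    for _ in range(years):
        balance = balance * (1.0 + interest) + annual_salary * contribution_rate
    return balance

balance = accumulated_contributions(annual_salary=1000.0,
                                    contribution_rate=0.05,
                                    years=30,
                                    interest=0.03)
annuity_purchase = 2.0 * balance   # the state matched the accumulated value
print(round(balance, 2), round(annuity_purchase, 2))
```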

Relative to the larger municipalities, the states were, in general, quite slow to adopt pension plans for their employees. As late as 1929, only six states had anything like a civil service pension plan for their (non-teacher) employees (Millis and Montgomery 1938). The record shows that pensions for state and local civil servants are, for the most part, twentieth-century developments. However, after individual municipalities began adopting plans for their teachers in the early twentieth century, the states moved fairly aggressively in the 1910s and 1920s to create or consolidate plans for the rest of their teachers. By the late 1920s, 21 states had formal retirement plans for their public school teachers (Clark, Craig, and Wilson 2003). On the one hand, this summary of state and local pension plans suggests that of all of the political units in the United States, the states themselves were the slowest to create pension plans for their civil service workers. However, this observation is slightly misleading. In 1930, 40 percent of all state and local employees were schoolteachers, and the 21 states that maintained a plan for their teachers included the most populous states at the time. While public sector pensions at the state and local level were far from universal by the 1920s, they did cover a substantial proportion of public sector workers, and that proportion was growing rapidly in the early decades of the twentieth century.

Funding State and Local Pensions

No discussion of public sector pension plans would be complete without addressing the way in which the various plans were funded. The term “funded pension” is often used to mean a pension plan that had a specific source of revenues dedicated to pay for the plan’s liabilities. Historically, most public sector pension plans required some contribution from the employees covered by the plan, and in a sense, this contribution “funded” the plan; however, the term “funded” is more often taken to mean that the pension plan receives a stream of public funds from a specific source, such as a share of property tax revenues. In addition, the term “actuarially sound” is often used to describe a pension plan in which the present value of tangible assets roughly equals the present value of expected liabilities. Whereas one would logically expect an actuarially sound plan to be a funded plan, indeed a “fully funded” plan, a funded plan need not be actuarially sound, because it is possible that the flow of funds is simply too small to cover the liabilities.
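
The distinction drawn here can be illustrated with a simple present-value comparison: a plan may receive a dedicated revenue stream (and so be “funded”) even though the present value of that stream falls short of the present value of the promised benefits (and so is not actuarially sound). In the Python sketch below, all cash flows and the discount rate are invented for illustration.

```python
# A minimal sketch of "funded" versus "actuarially sound," using invented numbers.

def present_value(cash_flows, rate):
    """Discount a list of future annual cash flows back to today."""
    return sum(cf / (1.0 + rate) ** (t + 1) for t, cf in enumerate(cash_flows))

dedicated_revenue = [50.0] * 30    # funded: a fixed annual earmark for 30 years
promised_benefits = [120.0] * 30   # benefits promised to retirees over 30 years

pv_assets = present_value(dedicated_revenue, 0.04)
pv_liabilities = present_value(promised_benefits, 0.04)
print(pv_assets < pv_liabilities)  # True: funded, but not actuarially sound
```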

Many early state and local plans were not funded at all, and fewer still were actuarially sound. Of course, in another sense, public sector pension plans are implicitly funded to the extent that they are backed by the coercive powers of the state. Through their monopoly of taxation, financially solvent and militarily successful states can rely on their tax bases to fund their pension liabilities. Although this is exactly how most of the early state and local plans were ultimately financed, this is not what is typically meant by the term “funded plan”. Still, an important part of the history of state and local pensions revolves around exactly what happened to the funds (mostly employee contributions) that were maintained on behalf of the public sector workers.

Although the maintenance and operation of the state and local pension funds varied greatly during this early period, most plans required a contribution from workers, and this contribution was to be deposited in a so-called “annuity fund.” The assets of the fund were to be “invested” in various ways. In some cases the funds were invested “in accordance with the laws of the state governing the investment of savings bank funds.” In others the investments of the fund were to be credited “regular interest”, which was defined as “the rate determined by the retirement board, and shall be substantially that which is actually earned by the fund of the retirement association.” This “rate” varied from state to state. In Connecticut, for example, it was literally a realized rate, that is, a market rate. In Massachusetts, it was initially set at 3 percent by the retirement board, but subsequently it became a realized rate, which turned out to be roughly 4 percent in the late 1910s. In Pennsylvania, the law set the rate at 4 percent. In addition, all three states created a “pension fund”, which contained the state’s contribution to the workers’ retirement annuity. In Connecticut and Massachusetts, this fund simply consisted of “such amounts as shall be appropriated by the general assembly from time to time.” In other words, the state’s share of the pension was on a “pay-as-you-go” basis. In Pennsylvania, however, the state actually contributed 2.8 percent of a teacher’s salary semi-annually to the state pension fund (Clark, Craig, and Wilson 2003).

By the late 1920s some states were basing their contributions to their teachers’ pension fund on actuarial calculations. The first states to adopt such plans were New Jersey, Ohio, and Vermont (Studenski 1920). What this meant in practice was that the state essentially estimated its expected future liability based on a worker’s experience, age, earnings, life expectancy, and so forth, and then deposited that amount into the pension fund. This was originally referred to as a “scientific” pension plan. These were truly funded and actuarially sound defined benefit plans.

As noted, several of the early plans paid an annuity based on the performance of the pension fund. The return on the fund’s portfolio is important because it would ultimately determine the soundness of the funding scheme and, in some cases, the actual annuity the worker would receive. Even the funded, defined benefit plans based the worker’s and the employer’s contributions on expected earnings on the invested funds. How did these early state and local pension funds manage the assets they held? Several state plans restricted their funds to holding only those assets that could be held by state-chartered mutual savings banks. Typically, these banks could hold federal, state, or local government debt. In most states, they could usually hold debt issued by private corporations and occasionally private equities. In the first half of the twentieth century, there were 19 states that chartered mutual savings banks. They were overwhelmingly in the Northeast, Midwest, and Far West — the same regions in which state and local pension plans were most prevalent. However, in most cases the corporate securities were limited to those on a so-called “legal list,” which was supposed to contain only the safest corporate investments. Admission to the legal list was based on a compilation of corporate assets, earnings, dividends, prior default records and so forth. The objective was to provide a list that consisted of the bluest of blue chip corporate securities. In the early decades of the twentieth century, these lists were dominated by railroad and public-utility issues (Hickman 1958). States, such as Massachusetts, that did not restrict investments to those held by mutual savings banks placed similar limits on state pension funds. Massachusetts limited investments to those that could be made in state-established “sinking funds”. Ohio explicitly limited its pension funds to U.S. debt, Ohio state debt, and the debt of any “county, village, city, or school district of the state of Ohio” (Studenski 1920).

Collectively, the objective of these restrictions was risk minimization — though the economics of that choice is not as simple as it might appear. Cities and states that invested in their own municipal bonds faced an inherent moral hazard. Specifically, public employees might be forced to contribute a proportion of their earnings to their pension funds. If the city then purchased debt at par from itself for the pension fund when that debt might, for various reasons, not circulate at par on the open market, then the city could be tempted to go to the pension fund rather than the market for funds. This process would tend to insulate the city from the discipline of the market, which would in turn tend to cause the city to over-invest in activities financed in this way. Thus, the pension funds, actually the workers themselves, would essentially be forced to subsidize other city operations. In practice, the main beneficiaries would have been the contractors whose activities were funded by the workers’ pension funds. At the time, these would have included largely sewer, water, and road projects. The Chicago police pension fund offers an example of the problem. An audit of the fund in 1912 reported: “It is to be regretted that there are no complete statistical records showing the operation of this fund in the city of Chicago.” As a recent history of pensions noted, “It is hard to imagine that the records were simply misplaced by accident” (Clark, Craig, and Wilson 2003, p. 213). Thus, like the agents of the U.S. Navy pension fund, the agents of these municipal and state funds faced a moral hazard that scholars are still analyzing more than a century later.

References

Clark, Robert L., Lee A. Craig, and Jack W. Wilson. A History of Public Sector Pensions. Philadelphia: University of Pennsylvania Press, 2003.

Craig, Lee A. “The Political Economy of Public-Private Compensation Differentials: The Case of Federal Pensions.” Journal of Economic History 55 (1995): 304-320.

Crook, J. A. “Augustus: Power, Authority, Achievement.” In The Cambridge Ancient History, edited by Alan K. Bowman, Edward Champlin, and Andrew Lintott. Cambridge: Cambridge University Press, 1996.

Employee Benefit Research Institute. EBRI Databook on Employee Benefits. Washington, D. C.: EBRI, 1997.

Ferguson, E. James. Power of the Purse: A History of American Public Finance. Chapel Hill, NC: University of North Carolina Press, 1961.

Hustead, Edwin C., and Toni Hustead. “Federal Civilian and Military Retirement Systems.” In Pensions in the Public Sector, edited by Olivia S. Mitchell and Edwin C. Hustead, 66-104. Philadelphia: University of Pennsylvania Press, 2001.

James, Herman G. Local Government in the United States. New York: D. Appleton & Company, 1921.

Johnson, Ronald N., and Gary D. Libecap. The Federal Civil Service System and the Problem of Bureaucracy. Chicago: University of Chicago Press, 1994.

Middlekauff, Robert. The Glorious Cause: The American Revolution, 1763-1789. New York: Oxford University Press, 1982.

Millis, Harry A., and Royal E. Montgomery. Labor’s Risk and Social Insurance. New York: McGraw-Hill, 1938.

Mitchell, Olivia S., David McCarthy, Stanley C. Wisniewski, and Paul Zorn. “Developments in State and Local Pension Plans.” In Pensions in the Public Sector, edited by Olivia S. Mitchell and Edwin C. Hustead. Philadelphia: University of Pennsylvania Press, 2001.

Monthly Labor Review, various issues.

Squier, Lee Welling. Old Age Dependency in the United States. New York: Macmillan, 1912.

Studenski, Paul. Teachers’ Pension Systems in the United States: A Critical and Descriptive Study. New York: D. Appleton and Company, 1920.

Citation: Craig, Lee. “Public Sector Pensions in the United States”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2003. URL http://eh.net/encyclopedia/public-sector-pensions-in-the-united-states/

The Economic History of Norway

Ola Honningdal Grytten, Norwegian School of Economics and Business Administration

Overview

Norway, with its population of 4.6 million on the northern flank of Europe, is today one of the wealthiest nations in the world, measured both in GDP per capita and in capital stock. On the United Nations Human Development Index, Norway has been among the top three countries for several years, and in some years the top-ranked nation. Huge stocks of natural resources combined with a skilled labor force and the adoption of new technology made Norway a prosperous country during the nineteenth and twentieth centuries.

Table 1 shows rates of growth in the Norwegian economy from 1830 to the present using inflation-adjusted gross domestic product (GDP). This article splits the economic history of Norway into two major phases — before and after the nation gained its independence in 1814.

Table 1
Phases of Growth in the Real Gross Domestic Product of Norway, 1830-2003

(annual growth rates as percentages)

Year GDP GDP per capita
1830-1843 1.91 0.86
1843-1875 2.68 1.59
1875-1914 2.02 1.21
1914-1945 2.28 1.55
1945-1973 4.73 3.81
1973-2003 3.28 2.79
1830-2003 2.83 2.00

Source: Grytten (2004b)

Before Independence

The Norwegian economy was traditionally based on local farming communities combined with other industries, chiefly fishing, hunting and timber, along with a merchant fleet engaged in both domestic and international trade. Due to topography and climatic conditions, the communities in the north and the west were more dependent on fish and foreign trade than the communities in the south and east, which relied mainly on agriculture. Agricultural output, fish catches and wars were decisive for the swings in the economy prior to independence. This is reflected in Figure 1, which reports a consumer price index for Norway from 1516 to the present.

The peaks in this figure mark the sixteenth-century Price Revolution (1530s to 1590s), the Thirty Years War (1618-1648), the Great Nordic War (1700-1721), the Napoleonic Wars (1800-1815), the only period of hyperinflation in Norway — World War I (1914-1918) — and the stagflation period, i.e. high rates of inflation combined with a slowdown in production, in the 1970s and early 1980s.

Figure 1
Consumer Price Index for Norway, 1516-2003 (1850 = 100).

Source: Grytten (2004a)

During the last decades of the eighteenth century the Norwegian economy flourished in a first era of liberalism. Foreign trade in fish and timber had already been important for the Norwegian economy for centuries, and now the merchant fleet was growing rapidly. Bergen, on the west coast, was the major city, with a Hanseatic office and one of the Nordic countries’ largest ports for domestic and foreign trade.

When Norway gained its independence from Denmark in 1814, after a close union lasting 417 years, it was a typically egalitarian country with a high degree of self-sufficiency in agriculture, fisheries and hunting. According to the population censuses of 1801 and 1815, more than ninety percent of the population of 0.9 million lived in rural areas, mostly on small farms.

After Independence (1814)

Figure 2 shows the annual development of GDP by expenditure (in fixed 2000 prices) from 1830 to 2003. The series reveals fairly steady growth with few large fluctuations. However, economic growth as a more or less continuous process started in the 1840s. We can also conclude that the growth process slowed down during the last three decades of the nineteenth century. The years 1914-1945 were more volatile than any other period, while growth was impressive and steady from 1945 until the mid-1970s and slower thereafter.

Figure 2
Gross Domestic Product for Norway by Expenditure Category
(in 2000 Norwegian Kroner)

Source: Grytten (2004b)

Stagnation and Institution Building, 1814-1843

The newborn state lacked its own institutions, industrial entrepreneurs and domestic capital. However, due to its huge stocks of natural resources and its geographical closeness to the sea and to the United Kingdom, the new state, linked to Sweden in a loose royal union, seized its opportunities after some decades. By 1870 it had become a relatively wealthy nation. Measured in GDP per capita Norway was well over the European average, in the middle of the West European countries, and in fact, well above Sweden.

During the first decades after its independence from Denmark, the new state struggled with the international recession after the Napoleonic wars, deflationary monetary policy, and protectionism from the UK.

The Central Bank of Norway was founded in 1816, and a national currency, the spesidaler, pegged to silver, was introduced. The daler depreciated heavily during the first troubled years of recession in the 1820s.

The Great Boom, 1843-1875

After the Norwegian spesidaler attained its par value against silver in 1842, Norway saw a period of significant economic growth lasting up to the mid-1870s. This impressive growth was matched by only a few other countries. The growth process was driven largely by high productivity growth in agriculture and by the success of the foreign sector. The adoption of new structures and technology, along with a shift from arable to livestock production, raised labor productivity in agriculture by about 150 percent between 1835 and 1910. Exports of timber and fish and, in particular, maritime services achieved high growth rates. In fact, Norway became a major power in shipping services during this period, accounting for about seven percent of the world merchant fleet in 1875. Norwegian sailing vessels carried goods all over the world at low freight rates.

The success of the Norwegian foreign sector can be explained by a number of factors. Liberalization of world trade and high international demand secured a market for Norwegian goods and services. In addition, Norway had vast stocks of fish and timber along with maritime skills. According to recent calculations, GDP per capita grew at an annual rate of 1.6 percent from 1843 to 1876, well above the European average. At the same time the Norwegian annual rate of growth for exports was 4.8 percent. The first modern large-scale manufacturing industry in Norway emerged in the 1840s, when textile plants and mechanized industry were established. A second wave of industrialization took place in the 1860s and 1870s. Following the rapid productivity growth in agriculture, the food processing and dairy industries showed high growth in this period.

During this great boom, capital was imported mainly from Britain, but also from Sweden, Denmark and Germany, the four most important Norwegian trading partners at the time. In 1536 the King of Denmark and Norway had chosen the Lutheran faith as the state religion. In consequence of the Reformation, reading became compulsory, and Norway thus acquired a generally skilled and independent labor force. The constitution of 1814 also cleared the way for liberalism and democracy. The puritan revivals of the nineteenth century created a business environment that fostered entrepreneurship, domestic capital formation and a productive labor force. In the western and southern parts of the country these puritan movements are still strong, both in daily life and within business.

Relative Stagnation with Industrialization, 1875-1914

Norway’s economy was hit hard during the “depression” from the mid-1870s to the early 1890s. GDP stagnated, particularly during the 1880s, and prices fell until 1896. This stagnation is mirrored in the large-scale emigration from Norway to North America in the 1880s. At its peak in 1882, as many as 28,804 persons, 1.5 percent of the population, left the country. All in all, 250,000 emigrated in the period 1879-1893, equal to 60 percent of the birth surplus. Only Ireland had higher emigration rates than Norway between 1836 and 1930, when 860,000 Norwegians left the country.

The long slowdown can largely be explained by Norway’s dependence on the international economy, and in particular on the United Kingdom, which experienced slower economic growth than the other major economies of the time. As a result of the international slowdown, Norwegian exports contracted in several years, but expanded in others. A second reason for the slowdown in Norway was the introduction of the international gold standard. Norway adopted gold in January 1874, and due to the trade deficit, lack of gold and lack of capital, the country experienced a huge contraction in gold reserves and in the money stock. The deflationary effect strangled the economy. Going onto the gold standard caused the appreciation of the Norwegian currency, the krone, as gold became relatively more expensive compared to silver. A third explanation of Norway’s economic problems in the 1880s is the transformation from sailing to steam vessels. By 1875 Norway had the fourth-largest merchant fleet in the world. However, due to lack of capital and technological skills, the transformation from sail to steam was slow. Norwegian ship owners found a niche in cheap second-hand sailing vessels. However, their market was diminishing, and finally, when the Norwegian steam fleet surpassed the sailing fleet in size in 1907, Norway was no longer a major maritime power.

A short boom occurred from the early 1890s to 1899. Then a crash in the Norwegian building industry led to a major financial crash and stagnation in GDP per capita from 1900 to 1905. Thus, from the middle of the 1870s until 1905, Norway performed relatively poorly. Measured in GDP per capita, Norway, like Britain, experienced a significant stagnation relative to most western economies.

After 1905, when Norway gained full independence from Sweden, a heavy wave of industrialization took place. In the 1890s the fish preserving and cellulose and paper industries had started to grow rapidly. From 1905, when Norsk Hydro was established, manufacturing industry connected to hydroelectric power took off. It is argued, quite convincingly, that if there was an industrial breakthrough in Norway, it must have taken place during the years 1905-1920. However, the primary sector, with its labor-intensive agriculture and increasingly more capital-intensive fisheries, was still the biggest sector.

Crises and Growth, 1914-1945

Officially Norway was neutral during World War I. However, in terms of the economy, the government clearly took the side of the British and their allies. Through several treaties Norway gave privileges to the allied powers, which protected the Norwegian merchant fleet. During the war’s first years, Norwegian ship owners profited from the war, and the economy boomed. From 1917, when Germany declared unrestricted submarine warfare, Norway took heavy losses. A recession replaced the boom.

Norway suspended gold redemption in August 1914, and due to inflationary monetary policy during the war and in the first couple of years afterward, demand was very high. When the war came to an end, this excess demand was met by a positive shift in supply. Thus Norway, like other Western countries, experienced a significant boom in the economy from the spring of 1919 to the early autumn of 1920. The boom was followed by high inflation, trade deficits, currency depreciation and an overheated economy.

The international postwar recession, beginning in autumn 1920, hit Norway more severely than most other countries. In 1921 GDP per capita fell by eleven percent, a decline exceeded only in the United Kingdom. There are two major reasons for the devastating effect of the post-war recession. In the first place, as a small open economy, Norway was more sensitive to international recessions than most other countries. This was particularly the case because the recession hit the country’s most important trading partners, the United Kingdom and Sweden, so hard. Secondly, the combination of a strong and mostly pro-cyclical inflationary monetary policy from 1914 to 1920 and thereafter a hard deflationary policy made the crisis worse (Figure 3).

Figure 3
Money Aggregates for Norway, 1910-1930

Source: Klovland (2004a)

In fact, up to May 1928 Norway pursued a protracted, though not consistently applied, deflationary monetary policy aimed at restoring the par value of the krone (NOK). In consequence, another recession hit the economy during the middle of the 1920s. Hence, Norway was one of the worst performers in the western world in the 1920s. This can best be seen in the number of bankruptcies, a huge financial crisis and mass unemployment. Bank losses amounted to seven percent of GDP in 1923. Total unemployment rose from about one percent in 1919 to more than eight percent in 1926 and 1927. In manufacturing it reached more than 18 percent in the same years.

Despite a rapid boom and success in the whaling industry and shipping services, the country never saw a convincing recovery before the Great Depression hit Europe in the late summer of 1930. The worst year for Norway was 1931, when GDP per capita fell by 8.4 percent. This, however, was due not only to the international crisis, but also to a massive and violent labor conflict that year. According to the implicit GDP deflator, prices fell by more than 63 percent from 1920 to 1933.

All in all, however, the depression of the 1930s was milder and shorter in Norway than in most western countries. This was partly due to the deflationary monetary policy of the 1920s, which had forced Norwegian companies to become more efficient in order to survive. However, it was probably more important that Norway left gold as early as September 27th, 1931, only a week after the United Kingdom. Those countries that left gold early, and thereby employed a more inflationary monetary policy, were the best performers in the 1930s. Among them were Norway and its most important trading partners, the United Kingdom and Sweden.

During the recovery period, Norway in particular saw growth in manufacturing output, exports and import substitution. This can to a large extent be explained by currency depreciation. Also, when the international merchant fleet contracted during the drop in international trade, the Norwegian fleet grew rapidly, as Norwegian ship owners were pioneers in the transformation from steam to diesel engines, tramp to line freights and into a new expanding niche: oil tankers.

The primary sector was still the largest in the economy during the interwar years. Both fisheries and agriculture struggled with overproduction problems, however. These were dealt with by introducing market controls and cartels, partly controlled by the industries themselves and partly by the government.

The business cycle reached its bottom in late 1932. Despite a relatively rapid recovery and significant growth both in GDP and in employment, unemployment stayed high, and reached 10-11 percent on an annual basis from 1931 to 1933 (Figure 4).

Figure 4
Unemployment Rate and Public Relief Work
as a Percent of the Work Force, 1919-1939

Source: Hodne and Grytten (2002)

The standard of living became poorer in the primary sector, among those employed in domestic services, and for the underemployed and unemployed and their households. However, due to the strong deflation, which made consumer prices fall by more than 50 percent from autumn 1920 to summer 1933, employees in manufacturing, construction and crafts experienced an increase in real wages. Unemployment stayed persistently high due to huge growth in the labor supply, as a result of immigration restrictions imposed by North American countries from the 1920s onwards.

Denmark and Norway were both victims of a German surprise attack on the 9th of April 1940. After two months of fighting, the Allied troops in Norway surrendered on June 7th, and the Norwegian royal family and government escaped to Britain.

From then until the end of the war there were two Norwegian economies: the domestic, German-controlled economy and the foreign, Norwegian- and Allied-controlled economy. The foreign economy was primarily established on the basis of the huge Norwegian merchant fleet, which was again among the biggest in the world, accounting for more than seven percent of total world tonnage. Ninety percent of this floating capital escaped the Germans. The ships were united into one state-controlled company, Nortraship, which earned money to finance the foreign economy. The domestic economy, however, struggled with a significant fall in production, inflationary pressure and rationing of important goods, which three million Norwegians had to share with 400,000 Germans occupying the country.

Economic Planning and Growth, 1945-1973

After the war the challenge was to reconstruct the economy and re-establish political and economic order. The Labor Party, in office from 1935, seized the opportunity to establish strict social democratic rule, with a growing public sector and widespread centralized economic planning. Norway at first declined the U.S. offer of financial aid after the war. However, due to a lack of hard currency, it accepted the Marshall aid program. By receiving 400 million dollars from 1948 to 1952, Norway was one of the biggest per capita recipients.

As part of the reconstruction efforts Norway joined the Bretton Woods system, GATT, the IMF and the World Bank. Norway also chose to become a member of NATO and the United Nations, and in 1960 it became a founding member of the European Free Trade Association (EFTA). In 1958 Norway made the krone convertible to the U.S. dollar, as many other western countries did with their currencies.

The years from 1950 to 1973 are often called the golden era of the Norwegian economy. GDP per capita grew at an annual rate of 3.3 percent. Foreign trade grew even faster, unemployment barely existed and the inflation rate was stable. This has often been attributed to the large public sector and good economic planning. The Nordic model, with its huge public sector, has been said to be a success in this period. A closer look, nevertheless, reveals that the Norwegian growth rate in the period was lower than that of most western nations. The same is true for Sweden and Denmark. The Nordic model delivered social security and evenly-distributed wealth, but it did not necessarily give very high economic growth.

Figure 5
Public Sector as a Percent of GDP, 1900-1990

Source: Hodne and Grytten (2002)

Petroleum Economy and Neoliberalism, 1973 to the Present

After the Bretton Woods system fell apart (between August 1971 and March 1973) and the oil price shock hit in autumn 1973, most developed economies went into a period of prolonged recession and slow growth. In 1969 Phillips Petroleum had discovered petroleum resources at the Ekofisk field, which was defined as part of the Norwegian continental shelf. This enabled Norway to run a countercyclical financial policy during the stagflation period of the 1970s. Thus, economic growth was higher and unemployment lower than in most other western countries. However, since the countercyclical policy focused on branch and company subsidies, Norwegian firms soon learned to adapt to policy makers rather than to the markets. Hence, neither productivity nor business structure had the incentives to keep pace with changes in international markets.

Norway lost significant competitive power, and large-scale deindustrialization took place, despite efforts to save manufacturing industry. Another reason for deindustrialization was the huge growth in the profitable petroleum sector. Persistently high oil prices from autumn 1973 to the end of 1985 pushed labor costs upward, through spillover effects from high wages in the petroleum sector. High labor costs made the Norwegian foreign sector less competitive. Thus, Norway saw deindustrialization at a more rapid pace than most of her largest trading partners. Due to the petroleum sector, however, Norway experienced high growth rates in all three of the last decades of the twentieth century, bringing the country to the top of the world GDP per capita list at the dawn of the new millennium. Nevertheless, Norway had economic problems both in the eighties and in the nineties.

In 1981 a conservative government replaced Labor, which had been in power for most of the post-war period. Norway had already joined the international wave of credit liberalization, and the new government gave fuel to this policy. However, alongside the credit liberalization, parliament still ran a policy that prevented market forces from setting interest rates; instead they were set by politicians, in contradiction of the liberalization policy. The level of interest rates was an important part of the political game for power, and thus rates were set significantly below the market level. In consequence, a substantial credit boom developed in the early 1980s and continued into the late spring of 1986. As a result, Norway had monetary expansion and an artificial boom, which created an overheated economy. When oil prices fell dramatically from December 1985 onwards, the trade surplus suddenly turned into a huge deficit (Figure 6).

Figure 6
North Sea Oil Prices and Norway’s Trade Balance, 1975-2000

Source: Statistics Norway

The conservative-center government was forced to adopt a tighter fiscal policy, which the new Labor government pursued from May 1986. Interest rates were persistently high as the government now tried to run a credible fixed-exchange-rate policy. In the summer of 1990 the Norwegian krone was officially pegged to the ECU. When the international wave of currency speculation reached Norway during autumn 1992, the central bank finally had to suspend the fixed exchange rate and later devalue.

In consequence of these years of monetary expansion and subsequent contraction, most western countries experienced financial crises. The crisis was relatively severe in Norway. Housing prices slid, consumers could not pay their bills, and bankruptcies and unemployment reached new heights. The state took over most of the larger commercial banks to avoid a total financial collapse.

After the suspension of the ECU peg and the subsequent devaluation, Norway enjoyed growth until 1998, thanks to optimism, an international boom and high petroleum prices. Then the Asian financial crisis rattled the Norwegian stock market. At the same time petroleum prices fell rapidly, due to internal problems among the OPEC countries, and the krone depreciated. The fixed exchange rate policy had to be abandoned and the government adopted inflation targeting. Along with the changes in monetary policy, the center coalition government was also able to pursue a tighter fiscal policy. At the same time interest rates were high. As a result, Norway escaped the overheating process of 1993-1997 without any devastating effects. Today the country has a strong and sound economy.

The petroleum sector is still very important in Norway, and in this respect the historical tradition of raw material dependency has had its renaissance. Unlike many other countries rich in raw materials, however, Norway has been able to turn its natural resources into one of the most prosperous economies in the world. Important factors in this achievement are an educated work force, the adoption of advanced technology used in other leading countries, stable and reliable institutions, and democratic rule.

Citation: Grytten, Ola. “The Economic History of Norway”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/the-economic-history-of-norway/

An Economic History of New Zealand in the Nineteenth and Twentieth Centuries

John Singleton, Victoria University of Wellington, New Zealand

Living standards in New Zealand were among the highest in the world between the late nineteenth century and the 1960s. But New Zealand’s economic growth was very sluggish between 1950 and the early 1990s, and most Western European countries, as well as several in East Asia, overtook New Zealand in terms of real per capita income. By the early 2000s, New Zealand’s GDP per capita was in the bottom half of the developed world.

Table 1
Per Capita GDP in New Zealand Compared with the United States and Australia
(in 1990 international dollars)

Year US Australia New Zealand NZ as % of US NZ as % of Australia
1840 1588 1374 400 25 29
1900 4091 4013 4298 105 107
1950 9561 7412 8456 88 114
2000 28129 21540 16010 57 74

Source: Angus Maddison, The World Economy: Historical Statistics. Paris: OECD, 2003, pp. 85-7.

Over the second half of the twentieth century, argue Greasley and Oxley (1999), New Zealand seemed in some respects to have more in common with Latin American countries than with other advanced western nations. As well as a snail-like growth rate, New Zealand followed highly protectionist economic policies between 1938 and the 1980s. (In absolute terms, however, New Zealanders continued to be much better off than their Latin American counterparts.) Maddison (1991) put New Zealand in a middle-income group of countries, including the former Czechoslovakia, Hungary, Portugal, and Spain.

Origins and Development to 1914

When Europeans (mainly Britons) started to arrive in Aotearoa (New Zealand) in the early nineteenth century, they encountered a tribal society. Maori tribes made a living from agriculture, fishing, and hunting. Internal trade was conducted on the basis of gift exchange. Maori did not hold to the Western concept of exclusive property rights in land. The idea that land could be bought and sold was alien to them. Most early European residents were not permanent settlers. They were short-term male visitors involved in extractive activities such as sealing, whaling, and forestry. They traded with Maori for food, sexual services, and other supplies.

Growing contact between Maori and the British was difficult to manage. In 1840 the British Crown and some Maori signed the Treaty of Waitangi. The treaty, though subject to various interpretations, to some extent regularized the relationship between Maori and Europeans (or Pakeha). At roughly the same time, the first wave of settlers arrived from England to set up colonies including Wellington and Christchurch. Settlers were looking for a better life than they could obtain in overcrowded and class-ridden England. They wished to build a rural and largely self-sufficient society.

For some time, only the Crown was permitted to purchase land from Maori. This land was then either resold or leased to settlers. Many Maori felt – and many still feel – that they were forced to give up land, effectively at gunpoint, in return for a pittance. Perhaps they did not always grasp that land, once sold, was lost forever. Conflict over land led to intermittent warfare between Maori and settlers, especially in the 1860s. There was brutality on both sides, but the Europeans on the whole showed more restraint in New Zealand than in North America, Australia, or Southern Africa.

Maori actually required less land in the nineteenth century because their numbers were falling, possibly by half between the late eighteenth and late nineteenth centuries. By the 1860s, Maori were outnumbered by British settlers. The introduction of European diseases, alcohol, and guns contributed to the decline in population. Increased mobility and contact between tribes may also have spread disease. The Maori population did not begin to recover until the twentieth century.

Gold was discovered in several parts of New Zealand (including Thames and Otago) in the mid-nineteenth century, but the introduction of sheep farming in the 1850s gave a more enduring boost to the economy. Australian and New Zealand wool was in high demand in the textile mills of Yorkshire. Sheep farming necessitated the clearing of native forests and the planting of grasslands, which changed the appearance of large tracts of New Zealand. This work was expensive, and easy access to the London capital market was critical. Economic relations between New Zealand and Britain were strong, and remained so until the 1970s.

Between the mid-1870s and mid-1890s, New Zealand was adversely affected by weak export prices, and in some years there was net emigration. But wool prices recovered in the 1890s, just as new exports – meat and dairy produce – were coming to prominence. Until the advent of refrigeration in the early 1880s, New Zealand did not export meat and dairy produce. After the introduction of refrigeration, however, New Zealand foodstuffs found their way onto the dinner tables of working-class families in Britain, though not those of the middle and upper classes, who could afford fresh produce.

In comparative terms, the New Zealand economy was in its heyday in the two decades before 1914. New Zealand (though not its Maori shadow, Aotearoa) was a wealthy, dynamic, and egalitarian society. The total population in 1914 was slightly above one million. Exports consisted almost entirely of land-intensive pastoral commodities. Manufactures loomed large in New Zealand’s imports. High labor costs, and the absence of scale economies in the tiny domestic market, hindered industrialization, though there was some processing of export commodities and imports.

War, Depression and Recovery, 1914-38

World War One disrupted agricultural production in Europe, and created a robust demand for New Zealand’s primary exports. Encouraged by high export prices, New Zealand farmers borrowed and invested heavily between 1914 and 1920. Land changed hands at very high prices. Unfortunately, the early twenties brought the start of a prolonged slump in international commodity markets. Many farmers struggled to service and repay their debts.

The global economic downturn, beginning in 1929-30, was transmitted to New Zealand by the collapse in commodity prices on the London market. Farmers bore the brunt of the depression. At the trough, in 1931-32, net farm income was negative. Declining commodity prices increased the already onerous burden of servicing and repaying farm mortgages. Meat freezing works, woolen mills, and dairy factories were caught in the spiral of decline. Farmers had less to spend in the towns. Unemployment rose, and some of the urban jobless drifted back to the family farm. The burden of external debt, the bulk of which was in sterling, rose dramatically relative to export receipts. But a protracted balance of payments crisis was avoided, since the demand for imports fell sharply in response to the drop in incomes. The depression was not as serious in New Zealand as in many industrial countries. Prices were more flexible in the primary sector and in small business than in modern, capital-intensive industry. Nevertheless, the experience of depression profoundly affected New Zealanders’ attitudes towards the international economy for decades to come.

At first, there was no reason to expect that the downturn in 1929-30 was the prelude to the worst slump in history. As tax and customs revenue fell, the government trimmed expenditure in an attempt to balance the budget. Only in 1931 was the severity of the crisis realized. Further cuts were made in public spending. The government intervened in the labor market, securing an order for an all-round reduction in wages. It pressured and then forced the banks to reduce interest rates. The government sought to maintain confidence and restore prosperity by helping farms and other businesses to lower costs. But these policies did not lead to recovery.

Several factors contributed to the recovery that commenced in 1933-34. The New Zealand pound was devalued by 14 percent against sterling in January 1933. As most exports were sold for sterling, which was then converted into New Zealand pounds, the income of farmers was boosted at a stroke of the pen. Devaluation increased the money supply. Once economic actors, including the banks, were convinced that the devaluation was permanent, there was an increase in confidence and in lending. Other developments played their part. World commodity prices stabilized, and then began to pick up. Pastoral output and productivity continued to rise. The 1932 Ottawa Agreements on imperial trade strengthened New Zealand’s position in the British market at the expense of non-empire competitors such as Argentina, and prefigured an increase in the New Zealand tariff on non-empire manufactures. As was the case elsewhere, the recovery in New Zealand was not the product of a coherent economic strategy. When beneficial policies were adopted it was as much by accident as by design.

Once underway, however, New Zealand’s recovery was comparatively rapid and persisted over the second half of the thirties. A Labour government, elected towards the end of 1935, nationalized the central bank (the Reserve Bank of New Zealand). The government instructed the Reserve Bank to create advances in support of its agricultural marketing and state housing schemes. It became easier to obtain borrowed funds.

An Insulated Economy, 1938-1984

A balance of payments crisis in 1938-39 was met by the introduction of administrative restrictions on imports. Labour had not been prepared to deflate or devalue – the former would have increased unemployment, while the latter would have raised working class living costs. Although intended as a temporary expedient, the direct control of imports became a distinctive feature of New Zealand economic policy until the mid-1980s.

The doctrine of “insulationism” was expounded during the 1940s. Full employment was now the main priority. In the light of disappointing interwar experience, there were doubts about the ability of the pastoral sector to provide sufficient work for New Zealand’s growing population. There was a desire to create more industrial jobs, even though there seemed no prospect of achieving scale economies within such a small country. Uncertainty about export receipts, the need to maintain a high level of domestic demand, and the competitive weakness of the manufacturing sector, appeared to justify the retention of quantitative import controls.

After 1945, many Western countries retained controls over current account transactions for several years. When these controls were relaxed and then abolished in the fifties and early sixties, the anomalous nature of New Zealand’s position became more visible. Although successive governments intended to liberalize, in practice they achieved little, except with respect to trade with Australia.

The collapse of the Korean War commodity boom, in the early 1950s, marked an unfortunate turning point in New Zealand’s economic history. International conditions were unpropitious for the pastoral sector in the second half of the twentieth century. Despite the aspirations of GATT, the United States, Western Europe and Japan restricted agricultural imports, especially of temperate foodstuffs, subsidized their own farmers and, in the case of the Americans and the Europeans, dumped their surpluses in third markets. The British market, which remained open until 1973, when the United Kingdom was absorbed into the EEC, was too small to satisfy New Zealand. Moreover, even the British resorted to agricultural subsidies. Compared with the price of industrial goods, the price of agricultural produce tended to weaken over the long term.

Insulation was a boon to manufacturers, and New Zealand developed a highly diversified industrial structure. But competition was ineffectual, and firms were able to pass cost increases on to the consumer. Import barriers induced many British, American, and Australian multinationals to establish plants in New Zealand. The protected industrial economy did have some benefits. It created jobs – there was full employment until the 1970s – and it increased the stock of technical and managerial skills. But consumers and farmers were deprived of access to cheaper – and often better quality – imported goods. Their interests and welfare were neglected. Competing demand from protected industries also raised the costs of farm inputs, including labor power, and thus reduced the competitiveness of New Zealand’s key export sector.

By the early 1960s, policy makers had realized that New Zealand was falling behind in the race for greater prosperity. The British food market was under threat, as the Macmillan government began a lengthy campaign to enter the protectionist EEC. New Zealand began to look for other economic partners, and the most obvious candidate was Australia. In 1901, New Zealand had declined to join the new federation of Australian colonies. Thus it had been excluded from the Australian common market. After lengthy negotiations, a partial New Zealand-Australia Free Trade Agreement (NAFTA) was signed in 1965. Despite initial misgivings, many New Zealand firms found that they could compete in the Australian market, where tariffs against imports from the rest of the world remained quite high. But this had little bearing on their ability to compete with European, Asian, and North American firms. NAFTA was given renewed impetus by the Closer Economic Relations (CER) agreement of 1983.

Between 1973 and 1984, New Zealand governments were overwhelmed by a group of inter-related economic crises, including two serious supply shocks (the oil crises), rising inflation, and increasing unemployment. Robert Muldoon, the National Party (conservative) prime minister between 1975 and 1984, pursued increasingly erratic macroeconomic policies. He tightened government control over the economy in the early eighties. There were dramatic fluctuations in inflation and in economic growth. In desperation, Muldoon imposed a wage and price freeze in 1982-84. He also mounted a program of large-scale investments, including the expansion of a steel works, and the construction of chemical plants and an oil refinery. By means of these investments, he hoped to reduce the import bill and secure a durable improvement in the balance of payments. But the “Think Big” strategy failed – the projects were inadequately costed, and inherently risky. Although Muldoon’s intention had been to stabilize the economy, his policies had the opposite effect.

Economic Reform, 1984-2000

Muldoon’s policies were discredited, and in 1984 the Labour Party came to power. All other economic strategies having failed, Labour resolved to deregulate and restore the market process. (This seemed very odd at the time.) Within a week of the election, virtually all controls over interest rates had been abolished. Financial markets were deregulated, and, in March 1985, the New Zealand dollar was floated. Other changes followed, including the sale of public sector trading organizations, the reduction of tariffs and the elimination of import licensing. However, reform of the labor market was not completed until the early 1990s, by which time National (this time without Muldoon or his policies) was back in office.

Once credit was no longer rationed, there was a large increase in private sector borrowing, and a boom in asset prices. Numerous speculative investment and property companies were set up in the mid-eighties. New Zealand’s banks, which were not used to managing risk in a deregulated environment, scrambled to lend to speculators in an effort not to miss out on big profits. Many of these ventures turned sour, especially after the 1987 share market crash. Banks were forced to reduce their lending, to the detriment of sound as well as unsound borrowers.

Tight monetary policy and financial deregulation led to rising interest rates after 1984. The New Zealand dollar appreciated strongly. Farmers bore the initial brunt of high borrowing costs and a rising real exchange rate. Manufactured imports also became more competitive, and many inefficient firms were forced to close. Unemployment rose in the late eighties and early nineties. The early 1990s were marked by an international recession, which was particularly painful in New Zealand, not least because of the high hopes raised by the post-1984 reforms.

An economic recovery began towards the end of 1991. With a brief interlude in 1998, strong growth persisted for the remainder of the decade. Confidence was gradually restored to the business sector. Unemployment began to recede. After a lengthy time lag, the economic reforms seemed to be paying off for the majority of the population.

Large structural changes took place after 1984. Factors of production switched out of the protected manufacturing sector, and were drawn into services. Tourism boomed as the relative cost of international travel fell. The face of the primary sector also changed, and the wine industry began to penetrate world markets. But not all manufacturers struggled. Some firms adapted to the new environment and became more export-oriented. For instance, a small engineering company, Scott Technology, became a world leader in the provision of equipment for the manufacture of refrigerators and washing machines.

Annual inflation was reduced to low single digits by the early nineties. Price stability was locked in through the 1989 Reserve Bank Act. This legislation gave the central bank operational autonomy, while compelling it to focus on the achievement and maintenance of price stability rather than other macroeconomic objectives. The Reserve Bank of New Zealand was the first central bank in the world to adopt a regime of inflation targeting. The 1994 Fiscal Responsibility Act committed governments to sound finance and the reduction of public debt.

By 2000, New Zealand’s population was approaching four million. Overall, the reforms of the eighties and nineties were responsible for creating a more competitive economy. New Zealand’s economic decline relative to the rest of the OECD was halted, though it was not reversed. In the nineties, New Zealand enjoyed faster economic growth than either Germany or Japan, an outcome that would have been inconceivable a few years earlier. But many New Zealanders were not satisfied. In particular, they were galled that their closest neighbor, Australia, was growing even faster. Australia, however, was an inherently much wealthier country with massive mineral deposits.

Assessment

Several explanations have been offered for New Zealand’s relatively poor economic performance during the twentieth century.

Wool, meat, and dairy produce were the foundations of New Zealand’s prosperity in Victorian and Edwardian times. After 1920, however, international market conditions were generally unfavorable to pastoral exports. New Zealand had the wrong comparative advantage to enjoy rapid growth in the twentieth century.

Attempts to diversify were only partially successful. High labor costs and the small size of the domestic market hindered the efficient production of standardized labor-intensive goods (e.g. garments) and standardized capital-intensive goods (e.g. autos). New Zealand might have specialized in customized and skill-intensive manufactures, but the policy environment was not conducive to the promotion of excellence in niche markets. Between 1938 and the 1980s, Latin American-style trade policies fostered the growth of a ramshackle manufacturing sector. Only in the late eighties did New Zealand decisively reject this regime.

Geographical and geological factors also worked to New Zealand’s disadvantage. Australia drew ahead of New Zealand in the 1960s, following the discovery of large mineral deposits for which there was a big market in Japan. Staple theory suggests that developing countries may industrialize successfully by processing their own primary products, instead of by exporting them in a raw state. Canada had coal and minerals, and became a significant industrial power. But New Zealand’s staples of wool, meat and dairy produce offered limited downstream potential.

Canada also took advantage of its proximity to the U.S. market, and access to U.S. capital and technology. American-style institutions in the labor market, business, education and government became popular in Canada. New Zealand and Australia relied on arguably inferior British-style institutions. New Zealand was a long way from the world’s economic powerhouses, and it was difficult for its firms to establish and maintain contact with potential customers and collaborators in Europe, North America, or Asia.

Clearly, New Zealand’s problems were not all of its own making. The elimination of agricultural protectionism in the northern hemisphere would have given a huge boost to the New Zealand economy. On the other hand, in the period between the late 1930s and mid-1980s, New Zealand followed inward-looking economic policies that hindered economic efficiency and flexibility.

References

Bassett, Michael. The State in New Zealand, 1840-1984. Auckland: Auckland University Press, 1998.

Belich, James. Making Peoples: A History of the New Zealanders from Polynesian Settlement to the End of the Nineteenth Century. Auckland: Penguin, 1996.

Condliffe, John B. New Zealand in the Making. London: George Allen & Unwin, 1930.

Dalziel, Paul. “New Zealand’s Economic Reforms: An Assessment.” Review of Political Economy 14, no. 2 (2002): 31-46.

Dalziel, Paul and Ralph Lattimore. The New Zealand Macroeconomy: Striving for Sustainable Growth with Equity. Melbourne: Oxford University Press, fifth edition, 2004.

Easton, Brian. In Stormy Seas: The Post-War New Zealand Economy. Dunedin: University of Otago Press, 1997.

Endres, Tony and Ken Jackson. “Policy Responses to the Crisis: Australasia in the 1930s.” In Capitalism in Crisis: International Responses to the Great Depression, edited by Rick Garside, 148-65. London: Pinter, 1993.

Evans, Lewis, Arthur Grimes, and Bryce Wilkinson (with David Teece). “Economic Reform in New Zealand 1984-95: The Pursuit of Efficiency.” Journal of Economic Literature 34, no. 4 (1996): 1856-1902.

Gould, John D. The Rake’s Progress: the New Zealand Economy since 1945. Auckland: Hodder and Stoughton, 1982.

Greasley, David and Les Oxley. “A Tale of Two Dominions: Comparing the Macroeconomic Records of Australia and Canada since 1870.” Economic History Review 51, no. 2 (1998): 294-318.

Greasley, David and Les Oxley. “Outside the Club: New Zealand’s Economic Growth, 1870-1993.” International Review of Applied Economics 14, no. 2 (1999): 173-92.

Greasley, David and Les Oxley. “Regime Shift and Fast Recovery on the Periphery: New Zealand in the 1930s.” Economic History Review 55, no. 4 (2002): 697-720.

Hawke, Gary R. The Making of New Zealand: An Economic History. Cambridge: Cambridge University Press, 1985.

Jones, Steve R.H. “Government Policy and Industry Structure in New Zealand, 1900-1970.” Australian Economic History Review 39, no. 3 (1999): 191-212.

Mabbett, Deborah. Trade, Employment and Welfare: A Comparative Study of Trade and Labour Market Policies in Sweden and New Zealand, 1880-1980. Oxford: Clarendon Press, 1995.

Maddison, Angus. Dynamic Forces in Capitalist Development. Oxford: Oxford University Press, 1991.

Maddison, Angus. The World Economy: Historical Statistics. Paris: OECD, 2003.

McKinnon, Malcolm. Treasury: 160 Years of the New Zealand Treasury. Auckland: Auckland University Press in association with the Ministry for Culture and Heritage, 2003.

Schedvin, Boris. “Staples and Regions of the Pax Britannica.” Economic History Review 43, no. 4 (1990): 533-59.

Silverstone, Brian, Alan Bollard, and Ralph Lattimore, editors. A Study of Economic Reform: The Case of New Zealand. Amsterdam: Elsevier, 1996.

Singleton, John. “New Zealand: Devaluation without a Balance of Payments Crisis.” In The World Economy and National Economies in the Interwar Slump, edited by Theo Balderston, 172-90. Basingstoke: Palgrave, 2003.

Singleton, John and Paul L. Robertson. Economic Relations between Britain and Australasia, 1945-1970. Basingstoke: Palgrave, 2002.

Ville, Simon. The Rural Entrepreneurs: A History of the Stock and Station Agent Industry in Australia and New Zealand. Cambridge: Cambridge University Press, 2000.

Citation: Singleton, John. “New Zealand in the Nineteenth and Twentieth Centuries”. EH.Net Encyclopedia, edited by Robert Whaples. February 10, 2008. URL http://eh.net/encyclopedia/an-economic-history-of-new-zealand-in-the-nineteenth-and-twentieth-centuries/

Money in the American Colonies

Ron Michener, University of Virginia

“There certainly can’t be a greater Grievance to a Traveller, from one Colony to another, than the different values their Paper Money bears.” An English visitor, circa 1742 (Kimber, 1998, p. 52).

The monetary arrangements in use in America before the Revolution were extremely varied. Each colony had its own conventions, tender laws, and coin ratings, and each issued its own paper money. The monetary system within each colony evolved over time, sometimes dramatically, as when Massachusetts abolished the use of paper money within her borders in 1750 and returned to a specie standard. Any encyclopedia-length overview of the subject will, unavoidably, need to generalize, and few generalizations about the colonial monetary system are immune to criticism because counterexamples can usually be found somewhere in the historical record. Those readers who find their interest piqued by this article would be well advised to continue their study of the subject by consulting the more detailed discussions available in Brock (1956, 1975, 1992), Ernst (1973), and McCusker (1978).

Units of Account

In the colonial era the unit of account and the medium of exchange were distinct in ways that now seem strange. An example from modern times suggests how the ancient system worked. Nowadays race horses are auctioned in England using guineas as the unit of account, although the guinea coin has long since disappeared. It is understood by all who participate in these auctions that payment is made according to the rule that one guinea equals 21s. Guineas are the unit of account, but the medium of exchange accepted in payment is something else entirely. The unit of account and medium of exchange were similarly disconnected in colonial times (Adler, 1900).

The units of account in colonial times were pounds, shillings, and pence (1£ = 20s., 1s. = 12d.).1 These pounds, shillings, and pence, however, were local units, such as New York money, Pennsylvania money, Massachusetts money, or South Carolina money and should not be confused with sterling. To do so is comparable to treating modern Canadian dollars and American dollars as interchangeable simply because they are both called “dollars.” All the local currencies were less valuable than sterling.2 A Spanish piece of eight, for instance, was worth 4 s. 6 d. sterling at the British mint. The same piece of eight, on the eve of the Revolution, would have been treated as 6 s. in New England, as 8 s. in New York, as 7 s. 6 d. in Philadelphia, and as 32 s. 6 d. in Charleston (McCusker, 1978).
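These ratings imply fixed par relationships between sterling and each local money. As a worked illustration (derived solely from the piece-of-eight values just quoted, and ignoring the market fluctuations around par that merchants actually faced):

\[
\frac{7\text{ s. }6\text{ d. (Pennsylvania)}}{4\text{ s. }6\text{ d. (sterling)}} = \frac{90}{54} \approx 1.67,
\qquad\text{so } £100\text{ sterling} \approx £166\ 13\text{ s. }4\text{ d. Pennsylvania money.}
\]
\[
\frac{6\text{ s. (New England)}}{4\text{ s. }6\text{ d. (sterling)}} = \frac{72}{54} \approx 1.33,
\qquad\text{so } £100\text{ sterling} \approx £133\ 6\text{ s. }8\text{ d. New England money.}
\]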

Colonists assigned local currency values, in these pounds, shillings and pence, to the foreign specie coins that circulated among them. The same foreign specie coins (most notably the Spanish dollar) continued to be legal tender in the United States in the first half of the nineteenth century and made up a considerable portion of the circulating specie (Andrews, 1904, pp. 327-28; Michener and Wright, 2005, p. 695). Because the decimal divisions of the dollar so familiar to us today were a newfangled innovation in the early Republic, and because the same coins continued to circulate, the traditional units of account were only gradually abandoned. Lucius Elmer, in his account of the early settlement of Cumberland County, New Jersey, describes how “Accounts were generally kept in this State in pounds, shillings, and pence, of the 7 s. 6 d. standard, until after 1799, in which year a law was passed requiring all accounts to be kept in dollars or units, dimes or tenths, cents or hundredths, and mills or thousandths. For several years, however, aged persons inquiring the price of an article in West Jersey or Philadelphia, required to told the value in shillings and pence, they not being able to keep in mind the newly-created cents or their relative value . . . So lately as 1820 some traders and tavern keepers in East Jersey kept their accounts in [New] York currency.”3 About 1820, John Quincy Adams (1822) surveyed the progress that had been made in familiarizing the public with the new units:

“It is now nearly thirty years since our new monies of account, our coins, and our mint, have been established. The dollar, under its new stamp, has preserved its name and circulation. The cent has become tolerably familiarized to the tongue, wherever it has been made by circulation familiar to the hand. But the dime having been seldom, and the mille never presented in their material images to the people, have remained . . . utterly unknown. . . . Even now, at the end of thirty years, ask a tradesman, or shopkeeper, in any of our cities, what is a dime or mille, and the chances are four in five that he will not understand your question. But go to New York and offer in payment the Spanish coin, the unit of the Spanish piece of eight [one reale], and the shop or market-man will take it for a shilling. Carry it to Boston or Richmond, and you shall be told it is not a shilling, but nine pence. Bring it to Philadelphia, Baltimore, or the City of Washington, and you shall find it recognized for an eleven-penny bit; and if you ask how that can be, you shall learn that, the dollar being of ninety-pence, the eight part of it is nearer to eleven than to any other number . . .4 And thus we have English denominations most absurdly and diversely applied to Spanish coins; while our own lawfully established dime and mille remain, to the great mass of the people, among the hidden mysteries of political economy – state secrets.”5
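Adams’s examples are straightforward arithmetic on the colonial dollar ratings quoted earlier (8 s. in New York, 6 s. in New England and Virginia, 7 s. 6 d. in Philadelphia); the following worked figures are added here only for illustration. The one-real piece, one-eighth of a dollar, came to

\[
\text{New York: } \tfrac{96\text{ d.}}{8} = 12\text{ d.} = 1\text{ s.};\qquad
\text{Boston or Richmond: } \tfrac{72\text{ d.}}{8} = 9\text{ d.};\qquad
\text{Philadelphia: } \tfrac{90\text{ d.}}{8} = 11\tfrac{1}{4}\text{ d.}
\]

hence the “shilling,” the “nine pence,” and the “eleven-penny bit” of Adams’s complaint.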

It took many more decades for the colonial unit of account to disappear completely. Elmer’s account (Elmer, 1869, p. 137) reported that “Even now, in New York, and in East Jersey, where the eighth of a dollar, so long the common coin in use, corresponded with the shilling of account, it is common to state the price of articles, not above two or three dollars, in shillings, as for instance, ten shillings rather than a dollar and a quarter.”

Not only were the unit of account and medium of exchange disconnected in an unfamiliar manner, but terms such as money and currency did not mean precisely the same thing in colonial times that they do today. In colonial times, “money” and “currency” were practically synonymous and signified whatever was conventionally used as a medium of exchange. The word “currency” today refers narrowly to paper money, but that wasn’t so in colonial times. “The Word, Currency,” Hugh Vance wrote in 1740, “is in common Use in the Plantations . . . and signifies Silver passing current either by Weight or Tale. The same Name is also applicable as well to Tobacco in Virginia, Sugars in the West Indies &c. Every thing at the Market-Rate may be called a Currency; more especially that most general Commodity, for which Contracts are usually made. And according to that Rule, Paper-Currency must signify certain Pieces of Paper, passing current in the Market as Money” (Vance, 1740, CCR III, pp. 396, 431).

Failure to appreciate that the unit of account and medium of exchange were quite distinct in colonial times, and that a familiar term like “currency” had a subtly different meaning, can lead unsuspecting historians astray. They often assume that a phrase such as “£100 New York money” or “£100 New York currency” necessarily refers to £100 of the bills of credit issued by New York. In fact, it simply means £100 of whatever was accepted as money in New York, according to the valuations prevailing in New York.6 Such subtle misunderstandings have led some historians to overestimate the ubiquity of paper money in colonial America.

Means of Payment – Book Credit

While simple “cash-and-carry” transactions sometimes occurred, most purchases involved at least short-term book credit; Henry Laurens wrote that before the Revolution it had been “the practice to give credit for one and more years for 7/8th of the whole traffic” (Burnet, 1923, vol. 2, pp. 490-1). The buyer would receive goods and be debited on the seller’s books for an agreed amount in the local money of account. The debt would be extinguished when the buyer paid the seller either in the local medium of exchange or in equally valued goods or services acceptable to the seller. When it was mutually agreeable, the debt could be and often was paid in ways that nowadays seem very unorthodox – with the delivery of chickens, or a week’s work fixing fences on land owned by the seller. The debt might be paid at one remove, by the buyer fixing fences on land owned by someone to whom the seller was himself indebted. Accounts would then be settled among the individuals involved. Account books testify to the pervasiveness of this system, termed “bookkeeping barter” by Baxter. Baxter examined the accounts of John Hancock and his uncle Thomas Hancock, both prominent Boston merchants, whose business dealings naturally involved an atypically large amount of cash. Even these gentlemen managed most of their transactions in such a way that no cash ever changed hands (Baxter, 1965; Plummer, 1942; Soltow, 1965, pp. 124-55; Forman, 1969).

An astonishing array of goods and services therefore served by mutual consent at some time or other to extinguish debt. Whether these goods ought all to be classified as “money” is doubtful; they certainly lacked the liquidity and universal acceptability in exchange that ordinarily defines money. At certain times and in certain colonies, however, specific commodities came to be so widely used in transactions that they might appropriately be termed money. Specie, of course, was such a commodity, but its worldwide acceptance as money made it special, so it is convenient to set it aside for a moment and focus on the others.

Means of Payment – Commodity Money

At various times and places in the colonies such items as tobacco, rice, sugar, beaver skins, wampum, and country pay all served as money. These items were generally accorded a special monetary status by various acts of colonial legislatures. Whether the legislative fiat was essential in monetizing these commodities or whether it simply acknowledged the existing state of affairs is open to question. Sugar was used in the British Caribbean, tobacco was used in the Chesapeake, and rice in South Carolina, each being the central product of their respective plantation economies. Wampum signifies the stringed shells used by the Indians as money before the arrival of European settlers. Wampum and beaver skins were commonly used as money in the northern colonies in the early stages of settlement when the fur trade and Indian trade were still mainstays of the local economy (Nettels, 1928, 1934; Fernow, 1893; Massey, 1976; Brock, 1975, pp. 9-18).

Country pay is more complicated. Where it was used, country pay consisted of a hodgepodge of locally produced agricultural commodities that had been monetized by the colonial legislature. A list of commodities, such as Indian corn, beef, and pork, was assigned specific monetary values (so many s. per bushel or barrel), and debtors were permitted by statute to pay certain debts with their choice of these commodities at nominal values set by the colonial legislature.7 In some instances country pay was declared a legal tender for all private debts, although contracts explicitly requiring another form of payment might be exempted (Gottfried, 1936; Judd, 1905, pp. 94-96). Sometimes country pay was only a legal tender in payment of obligations to the colonial or town governments. Even where country pay was a legal tender only in payment of taxes, it was often used in private transactions and even served as a unit of account. Probate inventories from colonial Connecticut, where country pay was widely used, are generally denominated in country pay (Main and Main, 1988).8

There were predictable difficulties where commodity money was used. A pound in “country pay” was simply not worth a pound in cash even as that cash was valued locally. The legislature sometimes overvalued agricultural commodities in setting their nominal prices. Even when the legislature’s prices were not biased in favor of debtors, the debtor still had the power to select the particular commodity tendered and had some discretion over the quality of that commodity. In late seventeenth-century Massachusetts the rule of thumb used to convert country pay to cash was that three pounds in country pay were worth two pounds cash (Republicæ, 1731, pp. 376, 390).9 Even this formula seems to have overvalued country pay. When a group of men seeking to rent a farm in Connecticut offered Boston merchant Thomas Bannister £22 of country pay in 1700, Bannister hesitated. It appears Bannister wanted to be paid £15 per annum in cash. Country pay was “a very uncertain thing,” he wrote. Some years £22 in country pay might be worth £10, some years £12, but he did not expect to see a day when it would fetch fifteen.10 (Even at the three-for-two rule of thumb, £22 in country pay came to only about £14 13 s. in cash, and Bannister evidently expected less.) Savvy merchants such as Bannister paid careful attention to the terms of payment. An unwary trader could easily be cheated. Just such an incident occurs in the comic satirical poem “The Sotweed Factor.” Sotweed is slang for tobacco, and a factor was a person in America representing a British merchant. Set in late seventeenth-century Maryland, the poem is a first-person account of the tribulations and humiliations a newly-arrived Briton suffers while seeking to enter the tobacco trade. The Briton agrees with a Quaker merchant to exchange his trade goods for ten thousand weight of oronoco tobacco in cask and ready to ship. When the Quaker fails to deliver any tobacco, the aggrieved factor sues him at the Annapolis court, only to discover that his attorney is a quack who divides his time between pretending to be a lawyer and pretending to be a doctor and that the judges have to be called away from their Punch and Rum at the tavern to hear his case. The verdict?

The Byast Court without delay,
Adjudg’d my Debt in Country Pay:
In Pipe staves, Corn, or Flesh of Boar,
Rare Cargo for the English Shoar.

Thus ruined the poor factor sails away never to return. A footnote to the reader explains “There is a Law in this Country, the Plaintiff may pay his Debt in Country pay, which consists in the produce of the Plantation” (Cooke, 1708).

By the middle of the eighteenth century commodity money had essentially disappeared in northern port cities, but still lingered in the hinterlands and plantation colonies. A pamphlet written in Boston in 1740 observed, “Look into our British Plantations, and you’ll see [commodity] Money still in Use, As, Tobacco in Virginia, Rice in South Carolina, and Sugars in the Islands; they are the chief Commodities, used as the general Money, Contracts are made for them, Salaries and Fees of Office are paid in them, and sometimes they are made a lawful Tender at a yearly assigned Rate by publick Authority, even when Silver was promised” (Vance, 1740, CCR III, p. 396). North Carolina was an extreme case. Country pay there continued as a legal tender even in private debts. The system was amended in 1754 and 1764 to require rated commodities to be delivered to government warehouses and be judged of acceptable quality, at which point warehouse certificates were issued to the value of the goods (at mandated, not market, prices); these certificates were a legal tender (Bullock, 1969, pp. 126-7, 157).

Means of Payment – Bills of Credit

Cash came in two forms: full-bodied specie coins (usually Spanish or Portuguese) and paper money known as “bills of credit.” Bills of credit were notes issued by provincial governments that were similar in many ways to modern paper money: they were issued in convenient denominations, were often a legal tender in the payment of debts, and routinely passed from man to man in transactions.11 Bills of credit were ordinarily put into circulation in one of two ways. The most common method was for the colony to issue bills to pay its debts. Bills of credit were originally designed as a kind of tax-anticipation scrip, similar to that used by many localities in the United States during the Great Depression (Harper, 1948). Therefore when bills of credit were issued to pay for current expenditures a colony would ordinarily levy taxes over the next several years sufficient to call the bills in so they might be destroyed.12 A second method was for the colony to lend newly printed bills on land security at attractive interest rates. The agency established to make these loans was known as a “land bank” (Thayer, 1953).13 Bills of credit were denominated in the £., s., and d. of the colony of issue, and therefore were usually the only form of money in circulation that was actually denominated in the local unit of account.14

Sometimes even the bills of credit issued in a colony were not denominated in the local unit of account. In 1764 Maryland redeemed its Maryland-pound-denominated bills of credit and in 1767 issued new dollar-denominated bills of credit. Nonetheless, Maryland pounds, not dollars, remained the predominant unit of account in Maryland up to the Revolution (Michener and Wright, 2006a, p. 34; Grubb, 2006a, pp. 66-67; Michener and Wright, 2006c, p. 264). The most striking example occurred in New England. Massachusetts, Connecticut, New Hampshire, and Rhode Island all had, long before the 1730s, emitted paper money known as “old tenor” bills of credit, and “old tenor” had become the most commonly used unit of account in New England. The old tenor bills of all four colonies passed interchangeably and at par with one another throughout New England.

Beginning in 1737, Massachusetts introduced a new kind of paper money known as “new tenor.” New tenor can be thought of as a monetary reform that ultimately failed to address the underlying issues. It also served as a way of evading a restriction the Board of Trade had placed on the Governor of Massachusetts that limited him to emissions of not more than £30,000. The Massachusetts assembly declared each pound of the new tenor bills to be worth £3 in old tenor bills. What actually happened is that old tenor (abbreviated in records of the time as “O.T.”) continued to be the unit of account in New England and, so long as the old bills continued to circulate, a decreasing portion of the medium of exchange. Each new tenor bill was reckoned at three times its face value in old tenor terms. This was just the beginning of the confusion, for yet newer Massachusetts “new tenor” emissions were created, and the original “new tenor” emission became known as the “middle tenor.”15 The new “new tenor” bills emitted by Massachusetts were accounted in old tenor terms at four times their face value. These bills, like the old ones, circulated across colony borders throughout New England. As if this were not complicated enough, New Hampshire, Rhode Island, and Connecticut all created new tenor emissions of their own, and the factors used to convert these new tenor bills into old tenor terms varied across colonies (Davis, 1970; Brock, 1975; McCusker, 1978, pp. 131-137). Connecticut, for instance, had a new tenor emission such that each new tenor bill was worth 3½ times its face value in old tenor (Connecticut, vol. 8, pp. 359-60; Brock, 1975, pp. 45-6). “They have a variety of paper currencies in the [New England] provinces; viz., that of New Hampshire, the Massachusetts, Rhode Island, and Connecticut,” bemoaned an English visitor, “all of different value, divided and subdivided into old and new tenors, so that it is a science to know the nature and value of their moneys, and what will cost a stranger some study and application” (Hamilton, 1907, p. 179). Throughout New England, however, Old Tenor remained the unit of account. “The Price of [provisions sold at Market],” a contemporary pamphlet noted, “has been constantly computed in Bills of the old Tenor, ever since the Emission of the middle and new Tenor Bills, just as it was before their Emission, and with no more Regard to or Consideration of either the middle or new Tenor Bills, than if they had never been emitted” (Enquiry, 1744, CCR IV, p. 174). This occurred despite the fact that by 1750 only an inconsiderable portion of the bills of credit in circulation were denominated in old tenor.
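The bookkeeping these multipliers required is easy to summarize. The following sketch is illustrative only: it uses just the conversion factors stated above (Massachusetts’s middle and new tenors, Connecticut’s new tenor) and omits the other colonies’ factors, which varied and are not given here.

```python
# Illustrative sketch: reckon a bill's face value in the old tenor unit of account.
# Multipliers are only those stated in the text; this is not a complete catalogue
# of New England tenors.

OLD_TENOR_MULTIPLIER = {
    ("Massachusetts", "old tenor"): 1.0,
    ("Massachusetts", "middle tenor"): 3.0,  # the original 1737 "new tenor" emission
    ("Massachusetts", "new tenor"): 4.0,     # the later Massachusetts "new tenor" bills
    ("Connecticut", "new tenor"): 3.5,
}

def to_old_tenor(colony: str, tenor: str, face_value_pounds: float) -> float:
    """Express a bill's face value (in pounds) in old tenor terms."""
    return face_value_pounds * OLD_TENOR_MULTIPLIER[(colony, tenor)]

# A £10 Massachusetts new tenor bill passed in accounts as £40 old tenor:
assert to_old_tenor("Massachusetts", "new tenor", 10) == 40.0
```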

For the most part, bills of credit were fiat money. Although a colony’s treasurer would often consent to exchange these bills for other forms of cash in the treasury, there was rarely a provision in the law stating that holders of bills of credit had a legally binding claim on the government for a fixed sum in specie, and treasurers were sometimes unable to accommodate people who wished to exchange money (Nicholas, 1912, p. 257; The New York Mercury, January 27, 1759, November 24, 1760).17 The form of the bills themselves was sometimes misleading in this respect. It was not uncommon for the bills to be inscribed with an explicit statement that the bill was worth a certain sum in silver. This was often no more than an expression of the assembly’s hope, at the time of issuance, of how the bills would circulate.18 Colonial courts sometimes allowed inhabitants to pay less to royal officials and proprietors by valuing bills of credit used to pay fees, dues, and quit rents according to their “official” rather than actual specie values. (Michener and Wright, 2006c, p. 258, fn. 5; Hart, 2005, pp. 269-71).

Maryland’s paper money was unique. Maryland’s paper money – unlike that of other colonies – gave the possessor an explicit legal claim on a valuable asset. Maryland had levied a tax and invested the proceeds of the tax in London. It issued bills of credit promising a fixed sum in sterling bills of exchange at predetermined dates, to be drawn on the colony’s balance in London. The colony’s accrued balances in London were adequate to fund the redemption, and when redemption dates arrived in 1748 and 1764 the sums due were paid in full so the colony’s pledge was considered credible.

Maryland’s paper money was unique in other ways as well. Its first emission was put into circulation in a novel fashion. Of the £90,000 emitted in 1733, £42,000 was lent to inhabitants, while the other £48,000 was simply given away, at the rate of £1.5 per taxable (McCusker, 1978, pp. 190-196; Brock, 1975, chapter 8; Lester, 1970, chapter 5). Maryland’s paper money was so peculiar that it is unrepresentative of the colonial experience. This was recognized even by contemporaries. Hugh Vance, in the Postscript to his Inquiry into the Nature and Uses of Money, dismissed Maryland as “intirely out of the Question; their Bills being on the Foot of promissory Notes” (Vance, 1740, CCR III, p. 462).

In 1690, Massachusetts was the first colony to issue bills of credit (Felt, 1839, pp. 49-52; Davis, 1970, vol. 1, chapter 1; Goldberg, 2009).19 The bills were issued to pay soldiers returning from a failed military expedition against Quebec. Over time, the rest of the colonies followed suit. The last holdout was Virginia, which issued its first bills of credit in 1755 to defray expenses associated with its entry into the French and Indian War (Brock, 1975, chapter 9). The common denominator here is wartime finance, and it is worthwhile to recognize that the vast majority of the bills of credit issued in the colonies were issued during wartime to pay for pressing military expenditures. Peacetime issues did occur and are in some respects quite interesting as they seem to have been motivated in part by a desire to stimulate the economy (Lester, 1970). However, peacetime emissions are dwarfed by those that occurred in war.20 Some historians enamored of the land bank system, whereby newly emitted bills were lent to landowners in order to promote economic development, have stressed the economic development aspect of colonial emissions – particularly those of Pennsylvania – while minimizing the military finance aspect (Schweitzer, 1989, pp. 313-4). The following graph, however, illustrates the fundamental importance of war finance; the dramatic spike marks the French and Indian War (Brock, 1992, Tables 4, 6).

[Graph: bills of credit in circulation in the colonies, showing the dramatic spike of the French and Indian War. Source: Brock (1992), Tables 4, 6.]

That bills in circulation peaked in 1760 reflects the fact that Quebec fell in 1759 and Montreal in 1760, so that the land war in North America was effectively over by 1760.

Because bills were disproportionately emitted for wartime finance, it is not surprising that the colonies whose currencies depreciated due to over-issue were those that shared a border with a hostile neighbor – the New England colonies bordering French Canada and the Carolinas bordering Spanish Florida.21 The colonies from New York to Virginia were buffered by their neighbors and therefore issued no more than modest amounts of paper money until they were drawn into the French and Indian War, by which time their economies were large enough to temporarily absorb the issues.

It is important not to confuse the bills of credit issued by a colony with the bills of credit circulating in that colony. “Under the circumstances of America before the war,” a Maryland resident wrote in 1787, “there was a mutual tacit consent that the paper of each colony should be received by its neighbours” (Hanson, 1787, p. 24).22 Between 1710 and 1750, the currencies of Massachusetts, Connecticut, New Hampshire, and Rhode Island passed indiscriminately and at par with one another in everyday transactions throughout New England (Brock, 1975, pp. 35-6). Although not quite so integrated a currency area as New England, the colonies of New York, Pennsylvania, New Jersey, and Delaware each had bills of credit circulating within their neighbors’ borders (McCusker, 1978, pp. 169-70, 181-182). In the early 1760s, Pennsylvania money was the primary medium of exchange in Maryland (Maryland Gazette, September 15, 1763; Hazard, 1852, Eighth Series, vol. VII, p. 5826; McCusker, 1978, p. 193). In 1764 one quarter of South Carolina’s bills of credit circulated in North Carolina and Georgia (Ernst, 1973, p. 106). Where the currencies of neighboring colonies were of equal value, as was the case in New England between 1710 and 1750, bills of credit of neighboring colonies could be credited and debited in book accounts at face value. When this was not the case, as when Pennsylvania, Connecticut, or New Jersey bills of credit were used to pay a debt in New York, an adjustment had to be made to convert these sums to New York money. The conversion was usually based on the par values assigned to Spanish dollars by each colony. Indeed, this was also how merchants generally handled intercolonial exchange transactions (McCusker, 1978, p. 123). For example, on the eve of the Revolution a Spanish dollar was rated at 7 s. 6 d. in Pennsylvania money and at 8 s. in New York money. The ratio of eight to seven and a half being equal to 1.06666, Pennsylvania bills of credit were accepted in New York at a 6 and 2/3% advance (Stevens, 1867, pp. 10-11, 18). Connecticut rated the Spanish dollar at 6 s., and because the ratio of eight to six is 1.333, Connecticut bills of credit were accepted at a one-third advance in New York (New York Journal, July 13, 1775). New Jersey’s paper money was a peculiar exception to this rule. By the custom of New York’s merchants, New Jersey bills of credit were accepted for thirty years or more at an advance of one penny in the shilling, or 8 and 1/3%, even though New Jersey rated the Spanish dollar at 7 s. 6 d., just as Pennsylvania did. The practice was controversial in New York, and the advance was finally reduced to the “logical” 6 and 2/3% advance by an act of the New York assembly in 1774.23
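The par-based rule just described is simple arithmetic: the advance on one colony’s bills in another equals the ratio of the two colonies’ dollar ratings, minus one. The following minimal Python sketch uses only the ratings quoted in this paragraph and is not a complete table of colonial ratings.

```python
# Illustrative sketch of par-based intercolonial conversion.
# Ratings are pence of local currency per Spanish dollar, as quoted above
# for the eve of the Revolution.

DOLLAR_RATING_PENCE = {
    "New York": 8 * 12,          # 8 s.
    "Pennsylvania": 7 * 12 + 6,  # 7 s. 6 d.
    "New Jersey": 7 * 12 + 6,    # 7 s. 6 d. (custom nonetheless gave its bills an 8 1/3% advance in New York until 1774)
    "Connecticut": 6 * 12,       # 6 s.
}

def advance_percent(destination: str, origin: str) -> float:
    """Advance at which origin-colony bills passed in the destination colony, at par ratings."""
    return (DOLLAR_RATING_PENCE[destination] / DOLLAR_RATING_PENCE[origin] - 1) * 100

print(f"Pennsylvania bills in New York: {advance_percent('New York', 'Pennsylvania'):.2f}%")  # 6.67
print(f"Connecticut bills in New York:  {advance_percent('New York', 'Connecticut'):.2f}%")   # 33.33
```

As the New Jersey case shows, merchant custom could override this arithmetic for decades at a time.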

Means of Payment – Foreign Specie Coins

Specie coins were the other kind of cash that commonly circulated in the colonies. Few specie coins were minted in the colonies. Massachusetts coined silver “pine tree shillings” between 1652 and the closing of the mint in the early 1680s. This was the only mint of any size or duration in the colonies, although minting of small copper coins and tokens did occur at a number of locations (Jordan, 2002; Mossman, 1993). Colonial coinage is interesting numismatically, but economically it was too slight to be of consequence. Most circulating specie was minted abroad. The gold and silver coins circulating in the colonies were generally of Spanish or Portuguese origin. Among the most important of these coins were the Portuguese Johannes and moidore (more formally, the moeda d’ouro) and the Spanish dollar and pistole. The Johanneses were gold coins, 8 escudos (12,800 reis) in denomination; their name derived from the obverse of the coin, which bore the bust of Johannes V. Minted in Portugal and Brazil they were commonly known in the colonies as “joes.” The fractional denominations were 4 escudo and 2 escudo coins of the same origin. The 4 escudo (6,400 reis) coin, or “half joe,” was one of the most commonly used coins in the late colonial period. The moidore was another Portuguese gold coin, 4,000 reis in denomination. That these coins were being used as a medium of exchange in the colonies is not so peculiar as it might appear. Raphael Solomon (1976, p. 37) noted that these coins “played a very active part in international commerce, flowing in and out of the major seaports in both the Eastern and Western Hemispheres.” In the late colonial period the mid-Atlantic colonies began selling wheat and flour to Spain and Portugal “for which in return, they get hard cash” (Lydon, 1965; Virginia Gazette, January 12, 1769; Brodhead, 1853, vol. 8, p. 448).

The Spanish dollar and its fractional parts were, in McCusker’s (1978, p. 7) words, “the premier coin of the Atlantic world in the seventeenth and eighteenth centuries.” Well known and widely circulated throughout the world, its preeminence in colonial North America accounts for the fact that the United States uses dollars, rather than pounds, as its unit of account. The Spanish pistole was the Spanish gold coin most often encountered in America. While these coins were the most common, many others also circulated there (Solomon, 1976; McCusker, 1978, pp. 3-12).

Alongside the well-known gold and silver coins were various copper coins, most notably the English half-pence, that served as small change in the colonies. Fractional parts of the Spanish dollar and the pistareen, a small silver coin of base alloy, were also commonly used as change.24

None of these foreign specie coins were denominated in local currency units, however. One needed a rule to determine what a particular coin, such as a Spanish dollar, was worth in the £., s., and d. of local currency. Because foreign specie coins were in circulation long before any of the colonies issued paper money, setting a rating on these coins amounted to picking a numeraire for the economy; that is, it defined what one meant by a pound of local currency. The ratings attached to individual coins were not haphazard: they were designed to reflect the relative weight and purity of the bullion in each coin as well as the ratio of gold to silver prices prevailing in the wider world.

In the early years of colonization these coin values were set by the colonial assemblies (Nettels, 1934, chap. 9; Solomon, 1976, pp. 28-29; John Hemphill, 1964, chapter 3). In 1700 Pennsylvania passed an act raising the rated value of its coins, causing the Governor of Maryland to complain to the Board of Trade of the difficulties this created in Maryland. He sought the Board’s permission for Maryland to follow suit. When the Board investigated the matter it concluded that the “liberty taken in many of your Majesty’s Plantations, to alter the rates of their coins as often as they think fit, does encourage an indirect practice of drawing the money from one Plantation to another, to the undermining of each other’s trade.” In response they arranged for the disallowance of the Pennsylvania act and a royal proclamation to put an end to the practice.25

Queen Anne’s proclamation, issued in 1704, prohibited a Spanish dollar of 17½ dwt. from passing for more than 6 s. in the colonies. Other current foreign silver coins were rated proportionately and similarly prohibited from circulating at a higher value. This particular rating of coins became known as “proclamation money.”26 It might seem peculiar that the proclamation did not dictate that the colonies adopt the same ratings as prevailed in England. The Privy Council, however, had incautiously approved a Massachusetts act passed in 1697 rating Spanish dollars at 6 s., and Attorney General Edward Northey felt the act could not be nullified by proclamation. This induced the Board of Trade to adopt the rating of the Massachusetts act.27

Had the proclamation been put into operation, its effects would have been extremely deflationary, because in most colonies coins were already passing at higher rates. When the proclamation reached America, only Barbados attempted to enforce it. In New York, Governor Lord Cornbury suspended its operation and wrote the Board of Trade that he could not enforce it while it was being ignored in neighboring colonies, as New York would be “ruined beyond recovery” if he did so (Brodhead, 1853, vol. 4, pp. 1131-1133; Brock, 1975, chapter 4). A chorus of such responses led the Board of Trade to take the matter to Parliament in hopes of enforcing a uniform compliance throughout America (House of Lords, 1921, pp. 302-3). On April 1, 1708, Parliament passed “An Act for ascertaining the Rates of foreign Coins in her Majesty’s Plantations in America” (Ruffhead, vol. 4, pp. 324-5). The act reiterated the restrictions embodied in Queen Anne’s Proclamation, and declared that anyone “accounting, receiving, taking, or paying the same contrary to the Directions therein contained, shall suffer six Months Imprisonment . . . and shall likewise forfeit the Sum of ten Pounds for every such Offence . . .”

The “Act for ascertaining the Rates of foreign Coins” never achieved its desired aim. In the colonies it was largely ignored, and business continued to be conducted just as if the act had never been passed. Pennsylvania, it was true, went through a show of complying, but even that lapsed after a while (Brock, 1975, chapter 4). What the act did do, however, was push the process of coin rating into the shadows, because it was no longer possible to address it in an open way by legislative enactment. Laws that passed through colonial legislatures (certain charter and proprietary colonies excepted) were routinely reviewed by the Privy Council and, if found to be inconsistent with British law, were declared null and void.

Two avenues remained open to alter coin ratings – private agreements among merchants that would not be subject to review in London, and a legislative enactment so stealthy as to slip through review unnoticed. New York was the first to succeed using stealth. In November 1709 it emitted bills of credit “for Tenn thousand Ounces of Plate or fourteen Thousand Five hundred & fourty five Lyon Dollars” (Lincoln, 1894, vol. 1, chap. 207, pp. 695-7). The Lyon dollar was an obscure silver coin that had escaped being explicitly mentioned in the enumeration of allowable values that had accompanied Queen Anne’s proclamation. Since New York had rated the Lyon dollar at 5 s. 6 d. fifteen years earlier, it was generally supposed that that rating was still in force (Solomon, 1976, p. 30). The value of silver implied in the law’s title is 8 s. an ounce – a value higher than allowed by Parliament. Until 1723, New York’s emission acts contained clauses designed to rate an ounce of silver at 8 s. The act in 1714, for instance, tediously enumerated the denominations of the bills to be printed, in language such as “Five Hundred Sixty-eight Bills, of Twenty-five Ounces of Plate, or Ten Pounds value each” (Lincoln, 1894, vol. 1, chap. 280, p. 819). When the Board of Trade finally realized what New York was up to, it was too late: the earlier laws had already been confirmed. When the Board wrote Governor Hunter to complain, he replied, in part, “Tis not in the power of men or angels to beat the people of this Continent out of a silly notion of their being gainers by the Augmentation of the value of Plate” (Brodhead, vol. 5, p. 476). These colony laws were still thought to be in force in the late colonial period. Gaine’s New York Pocket Almanack for 1760 states that “Spanish Silver . . . here ‘tis fixed by Law at 8 s. per Ounce, but is often sold and bought from 9 s. to 9 s. and 3 d.”
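The silver rating implied by the 1709 act's title can be recovered with a line of arithmetic (my own check, not a calculation found in the sources): 14,545 Lyon dollars at the old rating of 5 s. 6 d. apiece come to very nearly 80,000 s., or 8 s. for each of the 10,000 ounces of plate.

```python
# Back out the silver rating implied by the title of New York's 1709 emission act.
ounces_of_plate = 10_000
lyon_dollars = 14_545
lyon_dollar_rating_shillings = 5.5   # 5 s. 6 d., New York's old rating of the Lyon dollar

implied_rating = lyon_dollars * lyon_dollar_rating_shillings / ounces_of_plate
print(f"Implied rating: {implied_rating:.2f} s. per ounce of plate")   # ~8.00
```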

In 1753 Maryland also succeeded using stealth, including revised coin ratings inconsistent with Queen Anne’s proclamation in “An Act for Amending the Staple of Tobacco, for Preventing Fraud in His Majesty’s Customs, and for the Limitation of Officer’s Fees” (McCusker, 1978, p. 192).

The most common subterfuge was for a colony’s merchants to meet and agree on coin ratings. Once the merchants agreed on such ratings, the colonial courts appear to have deferred to them, which is not surprising in light of the fact that many judges and legislators were drawn from the merchants’ ranks (e.g. Horle, 1991). These private agreements effectively nullified not only the act of Parliament but also local statutes, such as those rating silver in New York at 8 s. an ounce. Records of many such agreements have survived.28 There is also testimony that these agreements were commonplace. Lewis Morris remarked that “It is a common practice … [for] the merchants to put what value they think fit upon Gold and Silver coynes current in the Plantations.” When the Philadelphia merchants published a notice in the Pennsylvania Gazette of September 16, 1742 enumerating the values they had agreed to put on foreign gold and silver coins, only the brazenness of the act came as a surprise to Morris. “Tho’ I believe by the merchants private Agreements amongst themselves they have allwaies done the same thing since the Existence of A paper currency, yet I do not remember so publick an instance of defying an act of parliament” (Morris, 1993, vol. 3, pp. 260-262, 273). These agreements, when backed by a strong consensus among merchants, seem to have been effective. Decades later, Benjamin Franklin (1959, vol. 14, p. 232) recollected how the agreement that had offended Morris “had a great Effect in fixing the Value and Rates of our Gold and Silver.”

After the New York Chamber of Commerce was founded in 1768, merchant deliberations on these agreements were recorded. During this period, the coin ratings in effect in New York were routinely published in almanacs, particularly Gaine’s New-York Pocket Almanack. When the New York Chamber of Commerce resolved to change the rating of coins and the minimum allowable weight for guineas, the almanac values changed immediately to reflect those adopted by the Chamber (Stevens, 1867, pp. 56-7, 69).29

[Table: coin ratings reproduced from The New-York Pocket Almanack for the Year 1771]

The coin rating table above, reproduced from The New-York Pocket Almanack for the Year 1771, shows how coin rating worked in practice in the late colonial period. (Note the reference to the deliberations of the Chamber of Commerce.) It shows, for instance, that if you tendered a half joe in payment of a debt in Pennsylvania, you would be credited with having paid £3 Pennsylvania money. If the same half joe were tendered in payment of a debt in New York, you would be credited with having paid £3 4 s. New York money. In Connecticut it would have been £2 8 s. Connecticut money.30
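These ratings are mutually consistent in the sense noted in footnote 30: each colony's valuation of the half joe works out to eight Spanish dollars at that colony's own rating of the dollar. A small check, using only the figures given above (the code is my own illustration):

```python
# Each colony's rating of the half joe, expressed in Spanish dollars at that
# colony's rating of the dollar (all shilling values from the text and table above).
half_joe_shillings = {"Pennsylvania": 60, "New York": 64, "Connecticut": 48}   # £3, £3 4 s., £2 8 s.
dollar_shillings   = {"Pennsylvania": 7.5, "New York": 8.0, "Connecticut": 6.0}

for colony, rating in half_joe_shillings.items():
    print(f"{colony}: half joe = {rating / dollar_shillings[colony]:.1f} Spanish dollars")   # 8.0 in each case
```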

The colonists possessed no central bank, and colonial treasurers, however willing they might have been to exchange paper for specie, sometimes found themselves without the means to do so. That these coin ratings were successfully maintained for decades on end was a testament to the public’s faith in the bills of credit, a faith that left people willing to exchange specie for bills voluntarily at the established rate. Writing in 1786 and attempting to explain why New Jersey’s colonial bills of credit had retained their value, “Eugenio” attributed their success to the fact that the currency possessed what he called “the means of instant realization at value.” This awkward phrase signified that the bills were instantly convertible at par. “Eugenio” went on to explain why:

“It is true that government did not raise a sum of coin and deposit the same in the treasury to exchange the bills on demand; but the faith of the government, the opinion of the people, and the security of the fund formerly by a well-timed and steady policy, went so hand in hand and so concurred to support each other, that the people voluntarily and without the least compulsion threw all their gold and silver, not locking up a shilling, into circulation concurrently with the bills; whereby the whole coin of the government became forthwith upon an emission of paper, a bank of deposit at every man’s door for the instant realization or immediate exchange of his bill into gold or silver. This had a benign and equitable, a persuasive, a satisfactory, and an extensive influence. If any one doubted the validity or price of his bill, his neighbor immediately removed his doubts by exchanging it without loss into gold or silver. If any one for a particular purpose needed the precious metals, his bill procured them at the next door, without a moment’s delay or a penny’s diminution. So high was the opinion of the people raised, that often an advance was given for paper on account of the convenience of carriage. In the market as well as in the payment of debts, the paper and the coin possessed a voluntary, equal, and concurrent circulation, and no special contract was made which should be paid or whether they should be received at a difference. By this instant realization and immediate exchange, the government had all the gold and silver in the community as effectually in their hands as if those precious metals had all been locked up in their treasury. By this realization and exchange they could extend credit to any degree it was required. The people could not be induced to entertain a doubt of their paper, because the government had never failed them in a single instance, either in war or in peace (New Jersey Gazette, January 30, 1786).”

Insofar as colonial bills of credit were convertible on demand into specie at the rated specie value of coins, there is no mystery as to why those bills of credit maintained their value. How merchants maintained and enforced such accords, however, is much harder to document. Some economists are incredulous that private associations of merchants could accomplish the feat. The best evidence on this question can be found in a pamphlet by a disgruntled inhabitant complaining of the actions of a merchants’ association in Antigua (Anon., 1740), which provides a tantalizing glimpse of the methods merchants used.

Means of Payment – Private Debt Instruments

This leaves private debt instruments, such as bank notes, bills of exchange, notes of hand, and shop notes. It is sometimes asserted that there were no banks in colonial America, but this is something of an overstatement. Several experiments were made, and several embryonic private banks actually got notes into circulation. Andrew McFarland Davis devoted an entire volume to banking in colonial New England (Davis, 1970, vol. 2; Perkins, 1991). Perhaps the most successful bank of the era was established in South Carolina in 1731. It apparently issued notes totaling £50,000 South Carolina money and operated successfully for a decade.31 However, the banks that did exist did not last long enough, or put enough notes into circulation, for us to be especially concerned about them.

Bills of exchange were similar to checks. A hypothetical example will illustrate how they functioned. The process of creating a bill of exchange began when someone obtained a balance on account overseas (in the case of the colonies, that place was often London). Suppose a Virginia tobacco producer consigned his tobacco to be sold in England, with the sterling proceeds to remain temporarily in the hands of a London merchant. The Virginia planter could then draw on those funds, by writing a bill of exchange payable in London. Suppose further that the planter drew a bill of exchange on his London correspondent, and sold it to a Virginia merchant, who then transmitted it to London to pay a balance due on imported dry goods. When the bill of exchange reached London, the dry goods wholesaler who received it would call on the London merchant holding the funds in order to receive the payment specified in the bill of exchange.

Bills of exchange were widely used in foreign trade, and were the preferred and most common method for paying debts due overseas. Because of the nature of the trade they financed, bills of exchange were usually in large denominations. Also, because bills of exchange were drawn on particular people or institutions overseas, there was an element of risk involved. Perhaps the person drawing the bill was writing a bad check, or perhaps the person on whom the bill was drawn was himself a deadbeat. One needed to be confident of the reputations of the parties involved when purchasing a bill of exchange. Perhaps because of their large denominations and the asymmetric information problems involved, bills of exchange played a limited role as a medium of exchange in the inland economy (McCusker, 1978, especially pp. 20-21).

Small-denomination IOUs, called “notes of hand,” were widespread, and these were typically denominated in local currency units. For the most part, these were not designed to circulate as a medium of exchange. When someone purchased goods from a shopkeeper on credit, the shopkeeper would generally get a “note of hand” as a receipt. In the court records in the Connecticut archives, one can find the case files for countless colonial-era cases where an individual was sued for nonpayment of a small debt.32 The court records generally include a note of hand entered as evidence to prove the debt. Notes of hand were sometimes proffered to third parties in payment of debt, however, particularly if the issuer was a person of acknowledged creditworthiness (Mather, 1691, p. 191). Some individuals of modest means created notes of hand in small denominations and attempted to circulate them as a medium of exchange; in Pennsylvania in 1768, a newspaper account stated that 10% of the cash offered in the retail trade consisted of such notes (Pennsylvania Chronicle, October 12, 1768; Kimber, 1998, p. 53). Indeed, many private banking schemes, such as the Massachusetts merchants’ bank, the New Hampshire merchants’ bank, the New London Society, and the Land Bank of 1740, were modeled on private notes of hand, and each consisted of an association designed to circulate such notes on a large scale. For the most part, however, notes of hand lacked the universal acceptability that would have unambiguously qualified them as money.

Shop notes were “notes of hand” of a particular type and seem to have been especially widespread in colonial New England. The twentieth-century analogue to shop notes would be scrip issued by an employer that could be used for purchases at the company store.33 Shop notes were I.O.U.s of local shopkeepers, redeemable through the shopkeeper. Such an I.O.U. might promise, for example, £6 in local currency value, half in money and half in goods (Weeden, 1891, vol. 2, p. 589; Ernst, 1990). Hugh Vance described the origins of shop notes in a 1740 pamphlet:

“… by the best Information I can have from Men of Credit then living, the Fact is truly this, viz. about the Year 1700, Silver-Money became exceedingly scarce, and the Trade so embarassed, that we begun to go into the Use of Shop-Goods, as the Money. The Shopkeepers told the Tradesmen, who had Draughts upon them from the Merchants for all Money, that they could not pay all in Money (and very truly) and so by Degrees brought the Tradesmen into the Use of taking Part in Shop-Goods; and likewise the Merchants, who must always follow the natural Course of Trade, were forced into the Way of agreeing with Tradesmen, Fishermen, and others; and also with the Shopkeepers, to draw Bills for Part and sometimes for all Shop-Goods (Vance, 1740, CCR III, pp. 390-91).”

Vance’s account seems accurate in all respects save one. Merchants played an active role in introducing shop notes into circulation. By the 1740s shop notes had been much abused, and it was disingenuous of Vance (himself a merchant) to suggest that merchants had had the system thrust upon them by shopkeepers. Merchants used shop notes to expedite sales and returns. The merchant might contact a shopkeeper and a shipbuilder. The shipbuilder would build a ship for the merchant, the ship to be sent to England and sold as a way of making returns. In exchange the merchant would provide the builder with shop notes and the shopkeeper with imported goods. The builder used the shop notes to pay his workers. The shop notes, in turn, were redeemed at the shop of the shopkeeper when presented to him by workers (Boston Weekly Postboy, December 8, 1740). Thomas Fitch tried to interest an English partner in just such a scheme in 1710:

“Realy it’s extream difficult to raise money here, for goods are generally Sold to take 1/2 money & 1/2 goods again out of the buyers Shops to pay builders of Ships [etc?] which is a great advantage in the readier if not higher sale of goods, as well as that it procures the Return; Wherefore if we sell goods to be paid in money we must give long time or they will not medle (Fitch, 1711, to Edward Warner, November 22, 1710).”

Like other substitutes for cash, shop notes were seldom worth their stated values. A 1736 pamphlet, for instance, reported wages to be 6s in bills of credit, or 7s if paid in shop notes (Anonymous, 1736, p. 143). One reason shop notes failed to remain at par with cash is that shopkeepers often refused to redeem them except with merchandise of their own choosing. Another abuse was to interpret money to mean British goods; half money, half goods often meant no money at all.34

Controversies

Colonial bills of credit were controversial when they were first issued, and have remained controversial to this day. Those who have wanted to highlight the evils of inflation have focused narrowly on the colonies where the bills of credit depreciated most dramatically – New England and the Carolinas, with New England receiving special attention because of the wealth of material that exists concerning its history. When Hillsborough drafted a report for the Board of Trade intended to support the abolition of legal tender paper money in the colonies, he rested his argument on the inflationary experiences of these colonies (printed in Whitehead, 1885, vol. IX, pp. 405-414). Those who have wanted to defend the use of bills of credit in the colonies have focused on the Middle colonies, where inflation was practically nonexistent. This tradition dates back at least to Benjamin Franklin (1959, vol. 14, pp. 77-87), who drafted a reply to the Board of Trade’s report in an effort to persuade Parliament to repeal the Currency Act of 1764. Nineteenth-century authors, such as Bullock (1969) and Davis (1970), tended to follow Hillsborough’s lead, whereas twentieth-century authors, such as Ferguson (1953) and Schweitzer (1987), followed Franklin’s.

Changing popular attitudes towards inflation have helped to rehabilitate the colonists. Whereas inflation in earlier centuries was rare, and even the mild inflation suffered in England between 1797 and 1815 was sufficient to stir a political uproar, the twentieth century has become inured to inflation. Even in colonial New England between 1711 and 1749, which was thought to have done a disgraceful job in managing its bills of credit, peacetime inflation was only about 5% per annum. Inflation during King George’s War was about 35% per annum.35

Nineteenth-century economists were guilty of overgeneralizing based on the unrepresentative inflationary experiences and associated debtor-creditor conflicts that occurred in a few colonies. Some twentieth-century economists, however, have swung too far in the other direction by generalizing on the basis of the success of the system in the Middle colonies and by attributing the benign outcomes there to the fundamental soundness of the system and its sagacious management. It would be closer to the truth, I believe, to note that the virtuous restraint exhibited by the Middle colonies was imposed upon them. Emissions in these colonies were sometimes vetoed by royal authorities and frequently stymied by instructions issued to royal or proprietary governors. The success of the Middle colonies owes much to the simple fact that they did not exert themselves in war to the extent that their New England neighbors did and that they were not permitted to freely issue bills of credit in peacetime.

A recent controversy has developed over the correct answer to the question – Why did some bills of credit depreciate, while others did not? Many early writers took it for granted that the price level in a colony would vary proportionally with the number of bills of credit the colony issued. This assumption was mocked by Ernst (1973, chapter 1) and devastated by West (1978). West performed simple regressions relating the quantity of bills of credit outstanding to price indices where such data exist. For most colonies he found no correlation between these variables. This was particularly striking because in the Middle colonies there was a dramatic increase in the quantity of bills of credit outstanding during the French and Indian War, and a dramatic decrease afterwards. Yet this large fluctuation seemed to have little effect on the purchasing power of those bills of credit as measured by prices of bills of exchange and the imperfect commodity price indices we possess. Only in New England in the first half of the eighteenth century did there seem to be a strong correlation between bills of credit outstanding and prices and exchange rates. Officer (2005) examined the New England episode and concluded that the quantity theory provides an adequate explanation in this instance, making the contrast with many other colonies (most notably, the Middle colonies) even more remarkable.

Seizing on West’s results, Bruce Smith suggested that they disproved the quantity theory of money and provided evidence in favor of an alternative theory of money based on theoretical models of Wallace and Sargent, which Smith characterized as the “backing theory.”36 According to Smith (1985a, p. 534), the redemption provisions enacted when bills of credit were introduced into circulation on tax and loan funds were what prevented them from depreciating. “Just as the value of privately issued liabilities depends on the issuers’ balance sheet,” he wrote, “the same is true for government liabilities. Thus issues of money which are accompanied by increases in the (expected) discounted present value of the government’s revenues need not be inflationary.” One obvious problem with this theory is that the New England bills of credit which did depreciate were issued in exactly the same way. Smith’s answer was that the New England colonies administered their tax and loan funds poorly, and that this poor administration accounted for the inflation experienced there.

Others who did not wholly agree with Smith – especially his sweeping refutation of the quantity theory – nonetheless pointed to the redemption provisions in explaining why bills of credit often retained their value (Wicker, 1985; Bernholz, 1988; Calomiris, 1988; Sumner, 1993; Rousseau, 2007). Of those who assigned credit to the redemption provisions, however, only Smith grappled with the key question; namely, why essentially identical redemption provisions failed to prevent inflation elsewhere.

Crediting careful administration of tax and loan funds for the steady value of some colonial currencies, and haphazard administration for the depreciation of others, looks superficially appealing. The experiences of Pennsylvania and Rhode Island, generally thought to be the most and least successful issuers of colonial bills of credit, fit the hypothesis nicely. However, when one examines other cases, the hypothesis breaks down. Connecticut was generally credited with administering her bills of credit very carefully, yet they depreciated in lockstep with those of her New England neighbors for forty years (Brock, 1975, pp. 43-47). Virginia’s bills of credit retained their value even though Virginia’s colonial treasurer was discovered to have embezzled a sum equal to nearly half of Virginia’s total outstanding bills of credit and returned the bills to circulation (Michener, 1987, p. 247). North Carolina’s bills of credit held their value well in the late colonial period despite tax administration so notoriously corrupt that it led to an armed revolt (Michener, 1987, pp. 248-9; Ernst, 1973, p. 221).

A competing explanation has been offered by Michener (1987, 1988), Brock (1992), McCallum (1992), and Michener and Wright (2006b). According to this explanation, the coin rating system operating in the colonies meant they were effectively on a specie standard with a de facto fixed par of exchange. Provided emissions of paper money did not exceed the amount needed for domestic purposes (“normal real balances,” in McCallum’s terminology), some specie would remain in circulation, prices would remain stable, and the fixed par could be maintained. Where emissions exceeded this bound, specie would disappear from circulation and exchange rates would float freely, no longer tethered to the fixed par. Further emissions would cause inflation.37 This was said to account for inflation in New England after 1712, where specie did, in fact, completely disappear from circulation (Hutchinson, 1936, vol. 2, p. 154; Michener, 1987, pp. 288-94). If this explanation is correct, it would suggest that emissions of bills of credit ought to be offset by specie outflows, ceteris paribus.
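The mechanism can be sketched in a few lines of code. What follows is a stylized illustration of the argument as summarized above, not a reproduction of McCallum's or anyone else's formal model; the quantities are hypothetical.

```python
# Stylized sketch of the "specie circulated at rated values" mechanism: while bills
# of credit fall short of the cash balances the public wants to hold, specie fills
# the gap and the price level stays anchored to the coin ratings; once bills exceed
# that bound, specie disappears and further emissions raise prices.
def specie_and_prices(bills_of_credit: float, desired_balances: float):
    """Return (specie in circulation, price index) for a given emission.

    `desired_balances` is the cash the public wishes to hold at the rated-specie
    price level (index = 1.0). All figures are hypothetical.
    """
    if bills_of_credit <= desired_balances:
        return desired_balances - bills_of_credit, 1.0          # specie fills the gap; prices anchored
    return 0.0, bills_of_credit / desired_balances              # specie gone; further emissions inflate

for emission in (50, 100, 150, 300):                            # hypothetical emissions, in £ thousands
    specie, price_index = specie_and_prices(emission, desired_balances=200)
    print(f"bills £{emission}k -> specie £{specie:.0f}k, price index {price_index:.2f}")
```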

Critics of the “specie circulated at rated values” explanation have frequently disregarded the ceteris paribus qualification and maintained that the theory implies specie flows always ought to be highly negatively correlated with changes in the quantity of bills of credit. This amounts to assuming the quantity of money demanded per capita in colonial America was nearly constant. If this were a valid test of the theory, one would be forced to reject it, because the specie stock fell little, if at all, in the Middle colonies in 1755-1760 as bills of credit increased, and when bills of credit began to decrease after 1760, specie became scarcer.

The flaw in critics’ reasoning, in my opinion, is that it rests on three unwarranted assumptions: first, that the demand for money, narrowly defined to mean bills of credit plus specie, was very stable despite the widespread use of bookkeeping barter; second, that the absence of evidence of large interest rate fluctuations is evidence of the absence of large interest rate fluctuations (Smith, 1985b, pp. 1193, 1198; Letwin, 1982, p. 466); and third, that the opportunity cost of holding money is adequately measured by the nominal interest rate.38

With respect to the first point, colonial wars significantly influenced the demand for money. During peacetime, most transactions were handled by means of book credit. During major wars, however, many men served in the militia. Men in military service were paid in cash and taken far from the community in which their creditworthiness was commonly known, reducing both their need for book credit and their ability to obtain it. Moreover, the real possibility that even civilian customers might soon find themselves in the militia and gone from the local community, possibly forever, would have given a shopkeeper pause before advancing book credit. In each of the major colonial wars there is evidence suggesting an increase in cash real balances that could be attributed to the war’s impact on the book credit system. The increase in real money balances during the French and Indian War and the subsequent decrease can be largely accounted for in this way.

With respect to the second point, fluctuations in the money supply are compatible even with a stable demand for money if periods when money is scarce are also periods when interest rates are high, as is also suggested by the historical record.39 It is true that the maximum interest rates specified in colonial usury laws are stable, generally in the range of 6%-8% per annum, often a bit lower late in the colonial era than at its beginning. This has been taken as evidence that colonial interest rates were stable. However, we know that these usury laws were commonly evaded and that market rates were often much higher (Wright, 2002, pp. 19-26). Some indication of how much higher became evident in the summer of 1768, when the Privy Council unexpectedly struck down New Hampshire’s usury law.40 News of the disallowance did not reach New Hampshire until the end of the year, at which time New Hampshire, having sunk the bills of credit issued to finance the French and Indian War during the five-year interval permitted by the Currency Act of 1751, was in the throes of a liquidity crisis.41 Governor Wentworth reported to the Lords of Trade that “Interest arose to 30 p. Ct. within six days of the repeal of the late Act.”42 By contrast, when cash was plentiful in Pennsylvania at the height of the French and Indian War, Pennsylvania’s “wealthy people were catching at every opportunity of letting out their money on good security, on common interest [that is, seven per cent].”43

With respect to the third point, the received theory that the nominal interest rate measures the opportunity cost of holding real money balances is derived from models in which individuals are free to borrow and lend at the nominal interest rate. Insofar as lenders respected the usury ceilings, borrowers were unable to borrow freely at the nominal interest rate. Recent work on moral hazard and adverse selection suggests that even private unregulated lenders, forced to make loans in an environment characterized by seriously asymmetric information, would be wise to ration loans by charging less than market-clearing rates and limiting allowed borrowing. The creditworthiness of individuals was more difficult to determine in colonial times than today, and asymmetric information problems were rife. Under such circumstances, even an unregulated market rate of interest (if we had such data, which we don’t) would understate the opportunity cost of holding money for constrained borrowers.

The debate over why some colonial bills of credit depreciated while others did not has spilled over into a related question: how much cash [i.e., paper money plus specie] circulated in the American colonies, and how much of it was bills of credit and how much specie? Clearly, if there was hardly any specie anywhere in colonial America, the concomitant circulation of specie at fixed rates could scarcely account for the stable purchasing power of bills of credit.

Determining how much cash circulated in the colonies is no easy matter, because the amount of specie in circulation is so hard to determine. The issue is further complicated by the fact that the total amount of cash in circulation fluctuated considerably from year to year, depending on such things as the demand for colonial staples and the magnitude of British military expenditure in the colonies (Sachs, 1957; Hemphill, 1964). The mix of bills of credit and specie in circulation was also highly variable. In the Middle colonies – and much of the most contentious debate involves the Middle colonies – the quantity of bills of credit in circulation was very modest (both absolutely and in per-capita terms) before the French and Indian War. The quantity exploded to cover military expenditures during the French and Indian War, and then fell again following 1760, until by the late colonial period, the quantity outstanding was once again very modest. Pennsylvania’s experience is not atypical of the Middle colonies. In 1754, on the eve of the French and Indian War, only £81,500 in Pennsylvania bills of credit were in circulation. At the height of the conflict, in 1760, this had increased to £446,158, but by 1773 the sum had been reduced to only £135,006 (Brock, 1992, Table 6). Any conclusion about the importance of bills of credit in the colonial money supply has to be carefully qualified because it will depend on the year in question.

Traditionally, economic historians have focused their attention on the eve of the Revolution, and particularly on 1774, because of Alice Hanson Jones’s extensive study of 1774 probate records. Even with the inquiry dramatically narrowed, estimates have varied widely. McCusker and Menard (1985, p. 338), citing Alexander Hamilton for authority, estimated that just before the Revolution the “current cash” totaled 30 million dollars. Of the 30 million dollars, Hamilton said 8 million consisted of specie (27%). On the basis of this authority, Smith (1985a, p. 538; 1988, p. 22) has maintained that specie was a comparatively minor component in the colonial money supply.

Hamilton was arguing in favor of banks when he made this oft-cited estimate, and his purpose in presenting it was to show that the circulation was capable of absorbing a great deal of paper money, which ought to make us wonder whether his estimate might have been biased by his political agenda. Whether biased, or simply misinformed, Hamilton clearly got his facts wrong.

All estimates of the quantity of colonial bills of credit in circulation – including those of Brock (1975, 1992) that have been relied on by recent authors on all sides of the debate – lead inescapably to the conclusion that in 1774 there were very few bills of credit left outstanding, nowhere near the 22 million dollars implied by Hamilton. Calculations along these lines were first performed by Ratchford (1941, pp. 24-25), who estimated the total quantity of bills of credit outstanding in each colony on the eve of the Revolution, added up the local £., s., and d. of all the colonies (a true case of adding apples and oranges), converted the sum to dollars by valuing dollars at 6 s. each, and concluded that the total was equal to about $5.16 million.

Ratchford’s method of summing local pounds and then converting to dollars is incorrect because local pounds did not have a uniform value across colonies. Since dollars were commonly rated at more than 6 s., his procedure resulted in an inflated estimate. We can correct this error by using McCusker’s (1978) data on 1774 exchange rates to convert local currency to sterling for each colony, obtain a sum in pounds sterling, and then convert to dollars using the rated value of the dollar in pounds sterling, 4½ s. Four and a half s. was very near the dollar’s value in London bullion markets in 1774, so no appreciable error arises from using the rated value. Doing so reduces Ratchford’s estimate to $3.42 million. Replacing Ratchford’s estimates of currency outstanding in New York, New Jersey, Pennsylvania, Virginia, and South Carolina with apparently superior data published by Brock (1975, 1992) reduces the total to $2.93 million. Even allowing for some imprecision in the data, this simply can’t be reconciled with Hamilton’s apparently mythical $22 million in paper money!
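The corrected procedure amounts to a two-step conversion: divide each colony's total by its 1774 exchange rate (local pounds per pound sterling, from McCusker) to obtain sterling, then divide the sterling sum by £0.225, the rated 4½ s. dollar. The sketch below shows the mechanics only; the colony figures in it are placeholders, not Ratchford's or Brock's numbers.

```python
# Sketch of the corrected conversion: local currency -> sterling -> dollars.
# The per-colony figures are placeholders for illustration only.
DOLLAR_IN_STERLING_POUNDS = 4.5 / 20          # rated dollar of 4 1/2 s. = £0.225 sterling

bills_outstanding = {
    # colony: (local pounds outstanding, local pounds per £1 sterling in 1774)
    "Colony A": (100_000, 1.333),             # hypothetical
    "Colony B": (135_000, 1.667),             # hypothetical
}

sterling_total = sum(local / rate for local, rate in bills_outstanding.values())
dollars_total = sterling_total / DOLLAR_IN_STERLING_POUNDS
print(f"£{sterling_total:,.0f} sterling = {dollars_total:,.0f} dollars")
```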

How much current cash was there in the colonies in 1774? Alice Hanson Jones’s extensive research into probate records gives an independent estimate of the money supply. Jones (1980, table 5.2) estimated that per capita cash-holding in the Middle colonies in 1774 was £1.8 sterling, and that the entire money supply of the thirteen colonies was slightly more than 12 million dollars.44 McCallum (1992) proposed another way to estimate total money balances in the colonies. McCallum started with the few episodes where historians generally agree paper money entirely displaced specie, making the total money supply measurable. He used money balances in these episodes as a basis for estimating money balances in other colonies by deriving approximate measures of the variability of money holdings over colonies and over time. Given the starkly different methodologies, it is remarkable that McCallum’s approach yields an answer practically indistinguishable from Jones’s.45

Various contemporary estimates, including estimates by Pelatiah Webster, Noah Webster, and Lord Sheffield, also suggest the total colonial money supply in 1774 was ten to twelve million dollars, mostly in specie (Michener 1988, p. 687; Elliot, 1845, p. 938). If we tentatively accept that the total money supply in the American colonies in 1774 was about twelve million dollars, and that only three million dollars worth of bills of credit remained outstanding, then fully 75% of the prewar money supply must have been in specie.

Even this may be an underestimate. Colonial probate inventories are notoriously incomplete, and the usual presumption is that Jones’s estimates are likely to be downwardly biased. Two examples not involving money illustrate the general problem. In Jones’s collection of inventories, over 20% of the estates did not include any clothes (Lindert, 1981, p. 657). In an independent survey of Surry County, Virginia probate records, Anna Hawley (1987, pp. 27-8) noted that only 34% of the estates listed hoes despite the fact that the region’s staple crops, corn and tobacco, had to be hoed several times a year.

In Jones’s 1774 database, an amazing 70% of all estates were devoid of money. While the widespread use of credit made it possible to do without money in most transactions, it is likely that some estates contained cash that does not appear in probate inventories. Peter Lindert (1981, p. 658) surmised that “cash was simply allocated informally among survivors even before probate took place.” McCusker and Menard (1985, p. 338, fn. 14) concurred, noting that “cash would have been one of the things most likely to have been distributed outside the usual probate proceedings.” If Jones actually underestimated cash holdings in 1774, the implication would be that more than 75% of the prewar money supply must have been specie.

That most of the cash circulating in the colonies in 1774 must have been specie seems like an inescapable conclusion. The issue has been clouded, however, by the existence of many contradictory and internally inconsistent estimates in the literature, to which Smith (1988, p. 22) drew attention by using them to defend his contention that specie was relatively unimportant.

The first such estimate was made by Roger Weiss (1970, p. 779), who computed the ratio of paper money to total money in the Middle colonies, using Jones’s probate data to estimate total money balances as has been done here; he arrived at a considerably smaller fraction of specie in the money supply. There is a simple explanation for this puzzling result: Weiss, whose article was published in 1970, based his analysis on Jones’s 1968 dissertation rather than her 1980 book. In her dissertation, Jones (1968, Tables 3 and 4, pp. 50-51) estimated the money supply in the three Middle colonies at £2.0 local currency per free white capita. Since £1 local currency was worth about £0.6 sterling, Weiss began with an estimated total money supply of £1.2 sterling per free white capita (equal to £1.13 per capita), rather than Jones’s more recent estimate of £1.8 sterling per capita.

Another authority is Letwin (1982, p. 467), who estimated that more than 60% of the money supply of Pennsylvania in 1775 was paper. Letwin used the Historical Statistics of the United States for his money supply data, and a casual back-of-the-envelope estimate that nominal balances in Pennsylvania were £700,000 in 1775 to conclude that 63% of Pennsylvania’s money supply was paper money. However, the data in Historical Statistics of the United States are known to be incorrect: Using Letwin’s back-of-the-envelope estimate, but redoing the calculation using Brock’s estimates of paper money in circulation, gives the result that in 1775 only 45.5% of Pennsylvania’s money supply was paper money; for 1774 the figure is 31%.46

That good-faith attempts to estimate the stock of specie in the colonies in 1774 have given rise to such wildly varying and inconsistent estimates gives some indication of the task that remains to be accomplished.47 Many hints about how the specie stock varied over time in colonial America can be found in newspapers, legislative records, pamphlets, and correspondence. Organizing those fragments of evidence and interpreting them is going to require great skill and will probably have to be done colony by colony. In addition, if the key to the purchasing power of colonial currency lies in the ratings attached to coins, as I believe it does, then more attention will have to be paid in the future to tracking how those ratings evolved over time. Our knowledge at the moment is very fragmentary, probably because the politics of paper money has so engrossed the attention of historians that few people have attached much significance to coin ratings.

Economic historian Farley Grubb has proposed (2003, 2004, 2007) that the composition of the medium of exchange in colonial America and the early Republic can be determined from the unit of account used in arm’s length transactions, such as rewards offered in runaway ads and prices recorded in indentured servant contract registrations. If, for instance, a runaway reward is offered in pounds, shillings and pence, it means (Grubb argues) that colonial or state bills of credit were the medium of exchange used, while dollar rewards in such ads would imply silver. Grubb then uses contract registrations in the early Republic (2003, 2007) and runaway ads in colonial Pennsylvania (2004) to develop time series for hitherto unmeasurable components of the money supply and draws many striking conclusions from them. I believe Grubb is proceeding on a mistaken premise. Reversing Grubb’s procedure and using runaway ads in the early Republic and contract registrations in colonial Pennsylvania yields dramatically different results, which suggests the method is not useful. I have participated in this contentious published debate (see Michener and Wright 2005, 2006a, 2006c and Grubb 2003, 2004, 2006a, 2006b, 2007) and will leave it to the reader to draw his or her own conclusions.

Notes:

1. Beginning in 1767, Maryland issued bills of credit denominated in dollars (McCusker, 1978, p. 194).

2. For a number of years, Georgia money was an exception to this rule (McCusker, 1978, pp. 227-8).

3. Elmer (1869, p. 137). Similarly, historian Robert Shalhope (Shalhope, 2003, pp. 140, 142, 147, 290) documents a Vermont farmer who continued to reckon, at least some of the time, in New York currency (i.e. 8 shillings = $1) well into the 1820s.

4. To clarify: In New York, a dollar was rated at eight shillings, hence one reale, an eighth of a dollar, was one shilling. In Richmond and Boston, the dollar was rated at six shillings, or 72 pence, one eighth of which is 9 pence. In Philadelphia and Baltimore, the dollar was rated at seven shillings six pence, or ninety pence, and an eighth of a dollar would be 11.25 pence.

5. In 1822, for example, P. T. Barnum, then a young man from Connecticut making his first visit to New York, paid too much for a brace of oranges because of confusion over the unit of account. “I was told,” he later related, “[the oranges] were four pence apiece [as Barnum failed to realise, in New York there were 96 pence to the dollar], and as four pence in Connecticut was six cents, I offered ten cents for two oranges, which was of course readily taken; and thus, instead of saving two cents, as I thought, I actually paid two cents more than the price demanded” (Barnum, 1886, p. 18).

6. One way to see the truth of this statement is to examine colonial records predating the emission of colonial bills of credit. Virginia pounds are referred to long before Virginia issued its first bills of credit in 1755. See, for example, Pennsylvania Gazette, September 20, 1736, quoting Votes of the House of Burgesses in Virginia, August 30, 1736 or the Pennsylvania Gazette, May 29, 1746, quoting a runaway ad that mentions “a bond from a certain Fielding Turner to William Williams, for 42 pounds Virginia currency.” Advertisements in the Philadelphia newspapers in 1720 promise rewards for the return of runaway servants and slaves in Pennsylvania pounds, even though Pennsylvania did not issue its first bills of credit until 1723. The contemporary meaning of “currency” sheds light on otherwise confusing statements, such as an ad in the Pennsylvania Gazette, May 12, 1763, where the advertiser offered a reward for the recovery of £460 “New York currency” that was stolen from him and then parenthetically noted “the greatest part of said Money was in Jersey Bills.”

7. For an example of a complete list, see Felt (1839, pp. 82-83).

8. Further discussion of country pay in Connecticut can be found in Bronson (1865, pp. 23-4).

9. Weiss (1974, pp. 580-85) cites a passage from an 1684 court case that appears to contradict this discount. However, inspecting the court records shows that the initial debt consisted of 34s. 5d. in money to which the court added 17s. 3d. to cover the difference between money and country pay, a ratio of pay to money of exactly 3 to 2 (Massachusetts, 1961, pp. 303-4). Other good illustrations of the divergence of cash and country pay prices can be found in Knight (1935, pp. 40-1) and Judd (1905, pp. 95-6). The multiple price system was not limited to Massachusetts and Connecticut (Coulter, 1944, p. 107).

10. Thomas Bannister to Mr. Joseph Thomson, March 8, 1699/1700 in (Bannister, 1708).

11. In New York, for instance, early issues were legal tender, but the Currency Act of 1764 put a halt to new issues of legal tender paper money; the legal tender status of practically all existing issues expired in 1768. After prolonged and contentious negotiation with imperial authorities, the Currency Act of 1770 permitted New York to issue paper money that was a legal tender in payments to the colonial government, but not in private transactions. New York made its first issue under the terms of the Currency Act of 1770 in early 1771 (Ernst, 1973).

12. Ordinarily, but not always. For instance, in 1731 South Carolina reissued £106,500 in bills of credit without creating any tax fund with which to redeem them (Nettels, 1934, pp. 261-2; Brock, 1975, p. 123). The Board of Trade repeatedly pressured the colony to create a tax fund for this purpose, but without success. That no tax funds had been earmarked to redeem these bills was common knowledge, but it did not make the bills less acceptable as a medium of exchange, or adversely affect their value. The episode contradicts the common supposition that the promise of future redemption played a key role in determining the value of colonial currencies.

13. Once the bills of credit were placed in circulation, no distinction was made between them based on how they were originally issued. It is not as if one could only pay taxes with bills of the first sort, or repay mortgages with bills of the second sort. Many colonies, to save the cost of printing, would reuse worn but serviceable notes. A bill originally issued on loan, upon returning to the colonial treasury, might be reissued on tax funds; often it would have been impossible, even in principle, for an individual to examine the bills in his possession and deduce the funds ostensibly backing them.

14. In the second half of the seventeenth century Massachusetts operated a mint that issued silver coins denominated in the local unit of account (Jordan, 2002). On the eve of the Revolution, Virginia obtained official permission to have copper coins minted for use in Virginia (Davis, 1970, vol. 1, chapter 2; Newman, 1956).

15. The Massachusetts government, unable to honor redemption promises made when the first new tenor emission was created, decided in 1742 to revalue these bills from three to one to four to one relative to old tenor as compensation. When Massachusetts returned to a specie standard, the remaining middle tenor bills were redeemed at four to one (Davis, 1970; McCusker, 1978, p. 133).

16. New and old tenors have led to much confusion. In the Boston Weekly News Letter, July 1, 1742, there is an ad pertaining to someone who mistakenly passed Rhode Island New Tenor in Boston at three to one, when it was supposed to be valued at four to one. Modern-day historians have also occasionally been misled. An excellent example can be found in Patterson (1961, p. 27). Patterson believed he had unearthed evidence of outrageous fraud during the Massachusetts currency reform, whereas he had, in fact, simply failed to convert a sum in an official document stated in new tenor terms into appropriate old tenor terms. Sufro (1976, p. 247), following Patterson, made similar accusations based on a similar misunderstanding of New England’s monetary units.

17. That colonial treasurers did not unfailingly provide this service is implicit in statements found in merchant letters complaining of how difficult it sometimes became to convert paper money to specie (Beekman to Evan and Francis Malbone, March 10, 1769, White, 1956, p. 522).

18. Nathaniel Appleton (1748) preached a sermon excoriating the province of Massachusetts Bay for flagrantly failing to keep the promises inscribed on the face of its bills of credit.

19. Goldberg (2009) uses circumstantial evidence to suggest that Massachusetts was engaged in a “monetary ploy to fool the king” when it made its first emissions. In Goldberg’s telling of the tale, the king had been furious about the Massachusetts mint, and officially issuing paper money that was a full legal tender would have been a “colossal mistake” because it would have endangered the colony’s effort to obtain a new charter, which was essential to confirm the land grants the colony had already made. The alleged ploy Goldberg discovered was a provision passed shortly afterwards: “Ordered that all country pay with one third abated shall pass as current money to pay all country’s debts at the same prices set by this court.” Since those with a claim on the Treasury were going to be tendered either paper money or country pay, and since Goldberg interprets this as requiring those creditors to accept either 3 pounds in paper money or 2 pounds in country pay, the provision was, in Goldberg’s estimation, a way of forcing the paper money on the populace at a one third discount. The shortchanging of the public creditors, through some mechanism never adequately explained, was supposedly sufficient to make the new paper money a de facto legal tender.

There are several problems with Goldberg’s analysis. Jordan (2002, pp. 36-45) has recently written the definitive history of the Massachusetts mint, and he minutely reviews the evidence pertaining to the Massachusetts mint and British reaction to it. He concludes that “there was no concerted effort by the king and his ministers to crush the Massachusetts mint.” In 1692 Massachusetts obtained a new charter and passed a law making the bills of credit a legal tender. The new charter required Massachusetts to submit all its laws to London for review, yet the imperial authorities quietly ratified the legal tender law, even though they were fully empowered to veto it, which seems very peculiar if the legal tender status of the bills was as unpopular with the King and his ministers as Goldberg maintains. The smoking gun Goldberg cites appears to me to be no more than a statement of the “three pounds of country pay equals two pounds cash” rule that prevailed in Massachusetts in the late seventeenth century. In his argument, Goldberg tacitly assumes that a pound of country pay was equal in value to a pound of hard money; he observes that the new bills of credit initially circulated at a one third discount (with respect to specie) and that this might have arisen because recipients (according to his interpretation) were offered only two pounds of country pay in lieu of three pounds of bills of credit (Goldberg, p. 1102). However, because country pay itself was worth, at most, two thirds of its nominal value in specie, by Goldberg’s reasoning paper money should have been at a discount of at least five ninths with respect to specie.

The paper money era in Massachusetts brought forth approximately fifty pamphlets and hundreds of newspaper articles and public debates in the Assembly, none of which confirm Goldberg’s inference.

20. The role bills of credit played as a means of financing government expenditures is discussed in Ferguson (1953).

21. Georgia was not founded until 1733, and one reason for its founding was to create a military buffer to protect the Carolinas from the Spanish in Florida.

22. Grubb (2004, 2006a, 2006b) argues that bills of credit did not commonly circulate across colony borders. Michener and Wright (2006a, 2006c) dispute Grubb’s analysis and provide (Michener and Wright 2006a, pp. 12-13, 24-30) additional evidence of the phenomenon.

23. Poor Thomas Improved: Being More’s Country Almanack for … 1768 gives as a rule that “To reduce New-Jersey Bills into York Currency, only add one penny to every shilling, and the Sum is determined.” (McCusker, 1978, pp. 170-71; Stevens, 1867, pp. 151-3, 160-1, 168, 185-6, 296; Lincoln, 1894, vol. 5, Chapter 1654, pp. 638-9.)

24. In two articles, John R. Hanson (1979, 1980) argued that bills of credit were important to the colonial economy because they provided much-needed small denomination money. His analysis, however, completely ignores the presence of half-pence, pistareens, and fractional denominations of the Spanish dollar. The Spanish minted halves, quarters, eighths, and sixteenths of the dollar, which circulated in the colonies (Solomon, 1976, pp. 31-32). For a good introduction to small change in the colonies, see Andrews (1886), Newman (1976), Mossman (1993, pp. 105-142), and Kays (2001).

25. Council of Trade and Plantations to the Queen, November 23, 1703, in Calendar of State Papers, 1702-1703, entry #1299. Brock, 1975, chap. 4.

26. This, it should be noted, is what British authorities meant by “proclamation money.” Since salaries of royal officials, fees, quit rents, etc. were often denominated in proclamation money, colonial courts often found a rationale to attach their own interpretation to “proclamation money” so as to reduce the real value of such salaries and fees. In New York, for example, eight shillings in New York’s bills of credit were ostensibly worth one ounce of silver, although by the late colonial period they were actually worth less. This valuation made each seven pounds of New York bills of credit worth, in principle, six pounds in proclamation money. The New York courts used that fact to establish the rule that seven pounds in New York currency could pay a debt of six pounds proclamation money. This rule allowed New Yorkers to pay less in real terms than was contemplated by the British (Hart, 2005, pp. 269-71).
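The seven-to-six correspondence is straightforward arithmetic if one assumes the usual proclamation rating of a full-weight (17½ dwt.) piece of eight at 6 s., which works out to roughly 6 s. 10 d. per ounce of silver:

\[
\frac{8\ \text{s. per ounce (New York)}}{6\ \text{s.} \times \tfrac{20}{17.5}\ \text{per ounce (proclamation)}} = \frac{8}{48/7} = \frac{7}{6}.
\]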

27. Brock (1975). The text of the proclamation can be found in the Boston News-Letter, December 11, 1704. To be precise, the Proclamation rate was actually in slight contradiction to that in the Massachusetts law, which had rated a piece of eight weighing 17 dwt. at 6 s. See Brock (1975, p. 133, fn. 7).

28. This contention has engendered considerable controversy, but the evidence for it seems to me both considerable and compelling. Apart from evidence cited in the text, see for Massachusetts, Michener (1987, p. 291, fn. 54), Wait Winthrop to Samuel Reade, March 5, 1708 and Wait Winthrop to Samuel Reade, October 22, 1709 in Winthrop (1892, pp. 165, 201); for South Carolina see South Carolina Gazette, May 14, 1753; August 13, 1744; and Manigault (1969, p. 188); for Pennsylvania see Pennsylvania Gazette, April 2, 1730, December 3, 1767, February 15, 1775, March 8, 1775; for St. Kitts see Roberdeau to Hyndman & Thomas, October 16, 1766, in Roberdeau (1771); for Antigua, see Anonymous (1740).

29. The Chamber of Commerce adopted its measure in October 1769, apparently too late in the year to appear in the “1770” almanacs, which were printed and sold in late 1769. The 1771 almanacs, printed in 1770, include the revised coin ratings.

30. Note that the relative ratings of the half joe are aligned with the ratings of the dollar. For example, the ratio of the New York value of the half joe to the Pennsylvania value is 64 s./60 s. = 1.066666, and the ratio of the New York value of the half joe to the Connecticut value is 64 s./48 s. = 1.3333.
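These ratios match what one would expect from the customary late-colonial dollar ratings, assuming the usual values of 8 s. (96 d.) in New York, 7 s. 6 d. (90 d.) in Pennsylvania, and 6 s. (72 d.) in Connecticut:

\[
\frac{96\ \text{d.}}{90\ \text{d.}} \approx 1.0667, \qquad \frac{96\ \text{d.}}{72\ \text{d.}} \approx 1.3333;
\]

equivalently, each colony rated the half joe at eight times its rating of the dollar (64 s. = 8 × 8 s., 60 s. = 8 × 7.5 s., 48 s. = 8 × 6 s.).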

31. This bank has been largely overlooked, but is well documented. Letter of a Merchant in South Carolina to Alexander Cumings, Charlestown, May 23, 1730, South Carolina Public Records, Vol XIV, pp. 117-20; Anonymous (1734); Easterby (1951, [March 5, 1736/37] vol. 1, pp. 309-10); Governor Johnson to the Board of Trade in Calendar of State Papers, 1731, entry 488, p. 342; Whitaker (1741, p. 25); and Vance (1740, p. 463).

32. I base this on my own experience reviewing the contents of RG3 Litchfield County Court Files, Box 1 at the Connecticut State Library.

33. Though they are best documented in New England, Benjamin Franklin (1729, CCR II, p. 340) mentions their use in Pennsylvania.

34. See Douglass (1740, CCR III, pp. 328-329) and Vance (1740, CCR III, pp. 328-329). Douglass and Vance disagreed on all the substantive issues, so their agreement on this point is especially noteworthy. See also Boston Weekly News-Letter, Feb. 12-19, 1741.

35. Data on New England prices during this period are very limited, but annual data exist for wheat prices and silver prices. Regressing the log of these prices on time yields an annual rate of price growth of approximately the magnitude mentioned in the text. The price data leave much to be desired, and the inflation estimates should be understood as simply a crude characterization. However, the exercise does show that New England’s peacetime inflation during this era was not so extreme as to shock modern sensibilities.
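For readers who want to see the mechanics of the calculation, the procedure amounts to an ordinary least squares fit of log price on the year; the slope is the estimated annual growth rate. The sketch below uses made-up numbers purely for illustration, not the actual wheat or silver price series:

import numpy as np

# Hypothetical annual price observations (the real series are the scattered
# New England wheat and silver prices of the period).
years = np.array([1714, 1716, 1719, 1723, 1727, 1730, 1735, 1740])
prices = np.array([8.0, 8.6, 9.5, 11.0, 13.2, 15.0, 19.0, 25.0])

# The slope of log(price) regressed on time approximates the average
# annual rate of price growth (continuous compounding).
slope, intercept = np.polyfit(years, np.log(prices), deg=1)
print(f"Estimated annual inflation: {slope:.1%}")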

36. Smith (1985a, 1985b). The quantity theory holds that the price level is determined by the supply and demand for money – loosely, how much money is chasing how many goods. Smith’s version of the backing theory is summarized by the passage quoted from his article.
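In its textbook form (the standard formulation, not Smith’s own notation), the quantity-theory intuition is usually summarized by the equation of exchange,

\[
M V = P Y,
\]

where M is the money stock, V its velocity of circulation, P the price level, and Y real output; with V and Y roughly stable, a larger M implies a proportionally higher P.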

37. John Adams explained this very clearly in a letter written June 22, 1780 to Vergennes (Wharton, vol. 3, p. 811). Adams’s “certain sum” and McCallum’s “normal real balances” are essentially the same, although Adams is speaking in nominal and McCallum in real terms.

A certain sum of money is necessary to circulate among the society in order to carry on their business. This precise sum is discoverable by calculation and reducible to certainty. You may emit paper or any other currency for this purpose until you reach this rule, and it will not depreciate. After you exceed this rule it will depreciate, and no power or act of legislation hitherto invented will prevent it. In the case of paper, if you go on emitting forever, the whole mass will be worth no more than that was which was emitted within the rule.

38. One of the principal observations Smith (1985b, p. 1198) makes in dismissing the possible importance of interest rate fluctuations is “it is known that sterling bills of exchange did not circulate at a discount.” Sterling bills were payable at a future date, and Smith presumably means that sterling bills should have been discounted if interest made an appreciable difference in their market value. Sterling bills, however, were discounted. These bills were not payable at a particular fixed date, but rather a certain number of days after they were first presented for payment. For example, a bill might be payable “at sixty days’ sight,” meaning that once the bill was presented (in London, for example, to the person upon whom it was drawn) the person would have sixty days in which to make payment. Not all bills were drawn at the same sight, and sight periods of 30, 60, and 90 days were all common. Bills payable sooner sold at higher prices, and bills could be and sometimes were discounted in London to obtain quicker payment (McCusker, 1978, p. 21, especially fn. 25; David Vanhorne to Nicholas Browne and Co., October 3, 1766, Brown Papers, P-V2, John Carter Brown Library). In the early Federal period many newspapers published extensive prices current that included prices of bills drawn on 30, 60, and 90 days’ sight.
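To see why sight mattered, consider a stylized present-value calculation with an assumed short-term interest rate of 6 percent per annum (the rate is illustrative, not a documented market quotation):

\[
\frac{£100}{1 + 0.06 \times \tfrac{60}{365}} \approx £99.0
\qquad\text{versus}\qquad
\frac{£100}{1 + 0.06 \times \tfrac{30}{365}} \approx £99.5,
\]

so a £100 bill at 60 days’ sight should have fetched roughly ten shillings less than an otherwise identical bill at 30 days’ sight.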

39. Franklin (1729) wrote a tract on colonial currency, in which he maintained as one of his propositions that “A great Want of Money in any Trading Country, occasions Interest to be at a very high Rate.” An anonymous referee warned that when colonists complained of a “want of money,” they were not complaining of a lack of a circulating medium per se, but were expressing a desire for more credit at lower interest rates. I do not entirely agree with the referee. I believe many colonists, like Franklin, reasoned like modern-day Keynesians, and believed high interest rates and scarce credit were caused by an inadequate money supply. For more on this subject, see Wright (2002, chapter 1).

40. Public Record Office, CO 5/947, August 13, 1768, pp. 18-23.

41. New Hampshire Gazette and Historical Chronicle, January 13, 1769.

42. Public Record Office, Wentworth to Hillsborough, CO 5/936, July 3, 1769.

43. Pennsylvania Chronicle, and Universal Advertiser, December 28, 1767.

44. This should be understood to be paper money and specie equal in value to 12 million dollars, not 12 million Spanish dollars. The fraction of specie in the money supply cannot be directly estimated from probate records. Jones (1980, p. 132) found that “whether the cash was in coin or paper was rarely stated.”

45. McCallum deflated money balances by the free white population rather than the total population. Using population estimates to put the numbers on a comparable basis reveals how close McCallum’s estimates are to those of Jones. For example, McCallum’s estimate for the Middle colonies, converted to a per-capita basis, is approximately £1.88 sterling.

46. This incident illustrates how mistakes about colonial currency are propagated and seem never to die out. Henry Phillips’s 1865 book presented data on Pennsylvania bills of credit outstanding. One of his major “findings” was that Pennsylvania retired only £25,000 between 1760 and 1769. This was a mistake: Brock (1992, table 6) found that £225,247 had been retired over the same period. Because of the retirements Phillips missed, he overestimated the quantity of Pennsylvania bills of credit in circulation in the late colonial period by 50 to 100%. Lester (1939, pp. 88, 108) used Phillips’s series; Ratchford (1941) obtained his data from Lester. Through Ratchford, Phillips’s series found its way into Historical Statistics of the United States.

47. Benjamin Allen Hicklin (2007) maintains that generations of historians have exaggerated the scarcity of specie in seventeenth and early eighteenth century Massachusetts. Hicklin’s analysis illustrates the unsettled state of our knowledge about colonial specie stocks.

References:

Adams, John Q. “Report upon Weights and Measures.” Reprinted in The North American Review, Boston: Oliver Everett, vol. 14 (New Series, Vol. 5) (1822), pp. 190-230.

Adler, Simon L. Money and Money Units in the American Colonies, Rochester NY: Rochester Historical Society, 1900.

Andrew, A. Piatt. “The End of the Mexican Dollar.” Quarterly Journal of Economics, vol. 18, no. 3 (1904), pp. 321-56.

Andrews, Israel W. “McMaster on our Early Money,” Magazine of Western History, vol. 4 (1886), pp. 141-52.

Anonymous. An Essay on Currency, Charlestown, South Carolina: Printed and sold by Lewis Timothy, 1734.

Anonymous. Two Letters to Mr. Wood on the Coin and Currency in the Leeward Islands, &c. London: Printed for J. Millan, 1740.

Anonymous. “The Melancholy State of this Province Considered,” Boston, 1736, reprinted in Andrew McFarland Davis (ed.), Colonial Currency Reprints, Boston: The Prince Society, 1911, vol III, pp. 135-147.

Appleton, Nathaniel. The Cry of Oppression, Boston: J. Draper, 1748.

Bannister, Thomas. Thomas Bannister letter book, 1695-1708, MSS, Newport Historical Society, Newport, RI.

Barnum, Phineas T. The Life of P.T. Barnum, Buffalo: The Courier Company Printers, 1886.

Baxter, William. The House of Hancock, New York: Russell and Russell, Inc., 1965.

Bernholz, Peter. “Inflation, Monetary Regime and the Financial Asset Theory of Money,” Kyklos, vol. 41, fasc. 1 (1988), pp. 5-34.

Brodhead, John R. Documents Relative to the Colonial History of the State of New York, Albany, NY: Weed Parsons, Printers, 1853.

Brock, Leslie V. Manuscript for a book on Currency, Brock Collection, Accession number 10715, microfilm reel #M1523, Alderman Library special collections, University of Virginia, circa 1956. This book was to be the sequel to Currency of the American Colonies, carrying the story to 1775.

Brock, Leslie V. The Currency of the American Colonies, 1700-1764, New York: Arno Press, 1975.

Brock, Leslie V. “The Colonial Currency, Prices, and Exchange Rates,” Essays in History, vol. 34 (1992), 70-132. This article contains the best available data on colonial bills of credit in circulation.

Bronson, Henry. “A Historical Account of Connecticut Currency, Colonial Money, and Finances of the Revolution,” Printed in New Haven Colony Historical Papers, New Haven, vol. 1, 1865.

Bullock, Charles J. Essays on the Monetary History of the United States, New York: Greenwood Press, 1969.

Burnett, Edmund C. Letters of Members of the Continental Congress, Carnegie Institution of Washington Publication no. 299, Papers of the Dept. of Historical Research, Gloucester, MA: P. Smith, 1963.

Calomiris, Charles W. “Institutional Failure, Monetary Scarcity, and the Depreciation of the Continental,” Journal of Economic History, 48 (1988), pp. 47-68.

Cooke, Ebenezer. The Sot-weed Factor Or, A Voyage To Maryland. A Satyr. In which Is describ’d, the laws, government, courts And constitutions of the country, and also the buildings, feasts, frolicks, entertainments And drunken humours of the inhabitants of that part of America. In burlesque verse, London: B. Bragg, 1708.

Connecticut. Public Records of the Colony of Connecticut [1636-1776], Hartford CT: Brown and Parsons, 1850-1890.

Coulter, Calvin Jr. The Virginia Merchant, Ph. D. dissertation, Princeton University, 1944.

Davis, Andrew McFarland. Currency and Banking in the Province of the Massachusetts Bay, New York: Augustus M. Kelley, 1970.

Douglass, William. “A Discourse concerning the Currencies of the British Plantations in America &c.” Boston, 1739, reprinted in Andrew McFarland Davis (ed.), Colonial Currency Reprints, Boston: The Prince Society, 1911, vol. III, pp. 307-356.

Easterby, James H. et al. The Journal of the Commons House of Assembly, Columbia: Historical Commission of South Carolina, 1951-.

Elliot, Jonathan. The Funding System of the United States and of Great Britain, Washington, D.C.: Blair and River, 1845.

Elmer, Lucius Q. C. History of the Early Settlement and Progress of Cumberland County, New Jersey; and of the Currency of this and the Adjoining Colonies. Bridgeport, N.J.: George F. Nixon, Publisher, 1869.

Enquiry into the State of the Bills of Credit of the Province of the Massachusetts-Bay in New-England: In a Letter from a Gentleman in Boston to a Merchant in London. Boston, 1743/4, reprinted in Andrew McFarland Davis (ed.), Colonial Currency Reprints, Boston: The Prince Society, 1911, vol. IV, pp. 149-209.

Ernst, Joseph A. Money and Politics in America, 1755-1775, Chapel Hill, NC: University of North Carolina Press, 1973.

Ernst, Joseph A. “The Labourers Have been the Greatest Sufferers; the Truck System in Early Eighteenth-Century Massachusetts,” in Merchant Credit and Labour Strategies in Historical Perspective, Rosemary E. Ommer, ed., Fredericton, New Brunswick: Acadiensis Press, 1990.

Felt, Joseph B. Historical Account of Massachusetts Currency. New York: Burt Franklin, 1968, reprint of 1839 edition.

Ferguson, James E. “Currency Finance, An Interpretation of Colonial Monetary Practices,” William and Mary Quarterly, 10, no. 2 (April 1953): 153-180.

Fernow, Berthold. “Coins and Currency in New-York,” The Memorial History of New York, New York, 1893, vol. 4, pp. 297-343.

Fitch, Thomas. Thomas Fitch letter book, 1703-1711, MSS, American Antiquarian Society, Worcester, MA.

Forman, Benno M. “The Account Book of John Gould, Weaver, of Topsfield, Massachusetts,” Essex Institute Historical Collections, vol. 105, no. 1 (1969), pp. 36-49.

Franklin, Benjamin. “A Modest Enquiry into the Nature and Necessity of a Paper Currency,” Philadelphia, 1729, reprinted in Andrew McFarland Davis (ed.), Colonial Currency Reprints, Boston: The Prince Society, 1911, vol. II, p. 340.

Franklin, Benjamin, The Papers of Benjamin Franklin, Leonard W. Labaree (ed.), New Haven, CT: Yale University Press, 1959.

Goldberg, Dror. “The Massachusetts Paper Money of 1690,” Journal of Economic History, vol. 69, no. 4 (2009), pp. 1092-1106.

Gottfried, Marion H. “The First Depression in Massachusetts,” New England Quarterly, vol. 9, no. 4 (1936), pp. 655-678.

Great Britain. Public Record Office. Calendar of State Papers, Colonial Series, London: Her Majesty’s Stationery Office, 44 vols., 1860-1969.

Grubb, Farley W. “Creating the U.S. Dollar Currency Union, 1748-1811: A Quest for Monetary Stability or a Usurpation of State Sovereignty for Personal Gain?” American Economic Review, vol. 93, no. 5 (2003), pp. 1778-98.

Grubb, Farley W. “The Circulating Medium of Exchange in Colonial Pennsylvania, 1729-1775: New Estimates of Monetary Composition, Performance, and Economic Growth,” Explorations in Economic History, vol. 41, no. 4 (2004), pp. 329-360.

Grubb, Farley W. “Theory, Evidence, and Belief—The Colonial Money Puzzle Revisited: Reply to Michener and Wright.” Econ Journal Watch, vol. 3, no. 1, (2006a), pp. 45-72.

Grubb, Farley W. “Benjamin Franklin and Colonial Money: A Reply to Michener and Wright—Yet Again.” Econ Journal Watch, vol. 3, no. 3 (2006b), pp. 484-510.

Grubb, Farley W. “The Constitutional Creation of a Common Currency in the U.S.: Monetary Stabilization versus Merchant Rent Seeking.” In Lars Jonung and Jurgen Nautz, eds., Conflict Potentials in Monetary Unions, Stuttgart: Franz Steiner Verlag, 2007, pp. 19-50.

Hamilton, Alexander. Hamilton’s Itinerarium, Albert Bushnell Hart (ed.), St. Louis, MO: William Bixby, 1907.

Hanson, Alexander C. Remarks on the proposed plan of an emission of paper, and on the means of effecting it, addressed to the citizens of Maryland, by Aristides, Annapolis: Frederick Green, 1787.

Hanson, John R., II. “Money in the Colonial American Economy: An Extension,” Economic Inquiry, vol. 17 (April 1979), pp. 281-86.

Hanson, John R., II. “Small Notes in the American Economy,” Explorations in Economic History, vol. 17 (1980), pp. 411-20.

Harper, Joel W. C. Scrip and other forms of local money, Ph. D. dissertation, University of Chicago, 1948.

Hart, Edward H. Almost a Hero: Andrew Elliot, the King’s Moneyman in New York, 1764-1776. Unionville, N.Y.: Royal Fireworks Press, 2005.

Hawley, Anna. “The Meaning of Absence: Household Inventories in Surry County, Virginia, 1690-1715,” in Peter Benes (ed.) Early American Probate Inventories, Dublin Seminar for New England Folklore: Annual Proceedings, 1987.

Hazard, Samuel et al. (eds.). Pennsylvania Archives, Philadelphia: Joseph Severns, 1852.

Hemphill, John II. Virginia and the English Commercial System, 1689-1733, Ph. D. diss., Princeton University, 1964.

Horle, Craig et al. (eds.). Lawmaking and Legislators in Pennsylvania: A Biographical Dictionary. Philadelphia: University of Pennsylvania Press, 1991-.

House of Lords. The Manuscripts of the House of Lords, 1706-1708, Vol. VII (New Series), London: His Majesty’s Stationery Office, 1921.

Hutchinson, Thomas. The History of the Province of Massachusetts Bay, Cambridge, MA: Harvard University Press, 1936.

Jones, Alice Hanson. Wealth Estimates for the American Middle Colonies, 1774, Ph.D. diss., University of Chicago, 1968.

Jones, Alice Hanson, Wealth of a Nation to Be, New York: Columbia University Press, 1980.

Jordan, Louis. John Hull, the Mint and the Economics of Massachusetts Coinage, Lebanon, NH: University Press of New England, 2002.

Judd, Sylvester. History of Hadley, Springfield, MA: H.R. Huntting & Co., 1905.

Kays, Thomas A. “When Cross Pistareens Cut their Way through the Tobacco Colonies,” The Colonial Newsletter, April 2001, pp. 2169-2199.

Kimber, Edward, Itinerant Observations in America, (Kevin J. Hayes, ed.), Newark, NJ: University of Delaware Press, 1998.

Knight, Sarah K. The Journal of Madam Knight, New York: Peter Smith, 1935.

Lester, Richard A. Monetary Experiments: Early American and Recent Scandinavian, New York: Augustus Kelley, 1970.

Letwin, William. “Monetary Practice and Theory of the North American Colonies during the 17th and 18th Centuries,” in Barbagli Bagnoli (ed.), La Moneta Nell’economia Europea, Secoli XIII-XVIII, Florence, Italy: Le Monnier, 1981, pp. 439-69.

Lincoln, Charles Z. The Colonial Laws of New York, Vol. V, Albany: James B. Lyon, State Printer, 1894.

Lindert, Peter H. “An Algorithm for Probate Sampling,” Journal of Interdisciplinary History, vol. 11, (1981).

Lydon, James G. “Fish and Flour for Gold: Southern Europe and the Colonial American Balance of Payments,” Business History Review, 39 (Summer 1965), pp. 171-183.

Main, Gloria T. and Main, Jackson T. “Economic Growth and the Standard of Living in Southern New England, 1640-1774,” Journal of Economic History, vol. 48 (March 1988), pp. 27-46.

Manigault, Peter. “The Letterbook of Peter Manigault, 1763-1773,” Maurice A. Crouse (ed.), South Carolina Historical Magazine, vol. 70, no. 3 (July 1969), pp. 177-95.

Massachusetts. Courts (Hampshire Co.). Colonial justice in western Massachusetts, 1639-1702; the Pynchon court record, an original judges’ diary of the administration of justice in the Springfield courts in the Massachusetts Bay Colony. Edited by Joseph H. Smith. Cambridge: Harvard University Press, 1961.

Massey, J. Earl. “Early Money Substitutes,” in Eric P. Newman and Richard G. Doty (eds.), Studies on Money in Early America, New York: American Numismatic Society, 1976, pp. 15-24.

Mather, Cotton. “Some Considerations on the Bills of Credit now passing in New-England,” Boston, 1691, reprinted in Andrew McFarland Davis (ed.), Colonial Currency Reprints, Boston: The Prince Society, 1911, vol. I, pp. 189-95.

McCallum, Bennett. “Money and Prices in Colonial America: A New Test of Competing Theories,” Journal of Political Economy, vol. 100 (1992), pp. 143-61.

McCusker, John J. Money and Exchange in Europe and America, 1600-1775: A Handbook, Williamsburg, VA: University of North Carolina Press, 1978.

McCusker, John J. and Menard, Russell R. The Economy of British America, 1607-1789, Chapel Hill, N.C.: University of North Carolina Press, 1985.

Michener, Ronald. “Fixed Exchange Rates and the Quantity Theory in Colonial America,” Carnegie-Rochester Conference Series on Public Policy, vol. 27 (1987), pp. 245-53.

Michener, Ron. “Backing Theories and the Currencies of Eighteenth-Century America: A Comment,” Journal of Economic History, 48 (1988), pp. 682-92.

Michener, Ronald W. and Robert E. Wright. “State ‘Currencies’ and the Transition to the U.S. Dollar: Clarifying Some Confusions,” American Economic Review, vol. 95, no. 3 (2005), pp. 682-703.

Michener, Ronald W. and Robert E. Wright. “Miscounting Money of Colonial America.” Econ Journal Watch, vol. 3, no. 1 (2006a), pp. 4-44.

Michener, Ronald W. and Robert E. Wright. “Development of the U.S. Monetary Union,” Financial History Review, vol. 13, no. 1 (2006b), pp. 19-41.

Michener, Ronald W. and Robert E. Wright. “Farley Grubb’s Noisy Evasions on Colonial Money: A Rejoinder,” Econ Journal Watch, vol. 3, no. 2 (2006c), pp. 1-24.

Morris, Lewis. The Papers of Lewis Morris, Eugene R. Sheridan (ed.), Newark, NJ: New Jersey Historical Society, 1993.

Mossman, Philip L. Money of the American Colonies and Confederation, New York: American Numismatic Society, 1993, pp. 105-142.

Nettels, Curtis P. “The Beginnings of Money in Connecticut,” Transactions of the Wisconsin Academy of Sciences, Arts, and Letters, vol. 23, (January 1928), pp. 1-28.

Nettels, Curtis P. The Money Supply of the American Colonies before 1720, Madison: University of Wisconsin Press, 1934.

Newman, Eric P. “Coinage for Colonial Virginia,” Numismatic Notes and Monographs, No. 135, New York: The American Numismatic Society, 1956.

Newman, Eric P. “American Circulation of English and Bungtown Halfpence,” in Eric P. Newman and Richard G. Doty (eds.) Studies on Money in Early America, New York: The American Numismatic Society, 1976, pp. 134-72.

Nicholas, Robert C. “Paper Money in Colonial Virginia,” The William and Mary Quarterly, vol. 20 (1912), pp. 227-262.

Officer, Lawrence H. “The Quantity Theory in New England, 1703-1749: New Data to Analyze an Old Question,” Explorations in Economic History, vol. 42, no. 1 (2005), pp. 101-121.

Patterson, Stephen Everett. Boston Merchants and the American Revolution to 1776, Masters thesis, University of Wisconsin, 1961.

Phillips, Henry. Historical Sketches of the Paper Currency of the American Colonies, original 1865, reprinted New York: Burt Franklin, 1969.

Plummer, Wilbur C. “Consumer Credit in Colonial Pennsylvania,” The Pennsylvania Magazine of History and Biography, LXVI (1942), pp. 385-409.

Ratchford, Benjamin U. American State Debts, Durham, N.C.: Duke University Press, 1941.

Reipublicæ, Amicus. “Trade and Commerce Inculcated; in a Discourse,” (1731). Reprinted in Andrew McFarland Davis, Colonial Currency Reprints, vol. 2, pp. 360-428.

Roberdeau, Daniel. Daniel Roberdeau letter book, 1764-1771, MSS, Pennsylvania Historical Society, Philadelphia, PA.

Rousseau, Peter L. “Backing, the Quantity Theory, and the Transition to the U.S. Dollar, 1723-1850,” American Economic Review, vol. 97, no. 2 (2007), pp. 266-270.

Ruffhead, Owen. (ed.) The Statutes at Large, from the Magna Charta to the End of the last Parliament, 1761, 18 volumes., London: Mark Basket, 1763-1800.

Sachs, William S. The Business Outlook in the Northern Colonies, 1750-1775, Ph. D. Dissertation, Columbia University, 1957.

Schweitzer, Mary M. Custom and Contract: Household, Government, and the Economy in Colonial Pennsylvania, New York: Columbia University Press, 1987.

Schweitzer, Mary M. “State-Issued Currency and the Ratification of the U.S. Constitution,” Journal of Economic History, 49 (1989), pp. 311-22.

Shalhope, Robert E. A Tale of New England: the Diaries of Hiram Harwood, Vermont Farmer, 1810–1837, Baltimore: Johns Hopkins University Press, 2003.

Smith, Bruce. “American Colonial Monetary Regimes: The Failure of the Quantity Theory and Some Evidence in Favor of an Alternate View,” The Canadian Journal of Economics, 18 (1985a), pp. 531-64.

Smith, Bruce. “Some Colonial Evidence on Two Theories of Money: Maryland and the Carolinas,” Journal of Political Economy, 93 (1985b), pp. 1178-1211.

Smith, Bruce. “The Relationship between Money and Prices: Some Historical Evidence Reconsidered,” Federal Reserve Bank of Minneapolis Quarterly Review, vol 12, no. 3 (1988), pp. 19-32.

Solomon, Raphael E. “Foreign Specie Coins in the American Colonies,” in Eric P. Newman and Richard G. Doty (eds.), Studies on Money in Early America, New York: The American Numismatic Society, 1976, pp. 25-42.

Soltow, James H. The Economic Role of Williamsburg, Charlottesville, VA: University of Virginia Press, 1965.

South Carolina. Public Records of South Carolina, manuscript transcripts of the South Carolina material in the British Public Record office, at Historical Commission of South Carolina.

Stevens, John A., Jr. Colonial Records of the New York Chamber of Commerce, 1768-1784, New York: John F. Trow & Co., 1867.

Sufro, Joel A. Boston in Massachusetts Politics 1730-1760, Ph.D. dissertation, University of Wisconsin, 1976.

Sumner, Scott. “Colonial Currency and the Quantity Theory of Money: A Critique of Smith’s Interpretation,” Journal of Economic History, 53 (1993), pp. 139-45.

Thayer, Theodore. “The Land Bank System in the American Colonies,” Journal of Economic History, vol. 13 (Spring 1953), pp. 145-59.

Vance, Hugh. An Inquiry into the Nature and Uses of Money, Boston, 1740, reprinted in Andrew McFarland Davis, Colonial Currency Reprints, Boston: The Prince Society, 1911, vol. III, pp. 365-474.

Weeden, William B. Economic and Social History of New England, Boston, MA: Houghton, Mifflin, 1891.

Weiss, Roger. “Issues of Paper Money in the American Colonies, 1720-1774,” Journal of Economic History, 30 (1970), pp. 770-784.

West, Roger C. “Money in the Colonial American Economy,” Economic Inquiry, vol. 16 (1985), pp. 1-15.

Wharton, Francis (ed.). The Revolutionary Diplomatic Correspondence of the United States, Washington, D.C.: Government Printing Office, 1889.

Whitaker, Benjamin. The Chief Justice’s Charge to the Grand Jury for the Body of this Province, Charlestown, South Carolina: Printed by Peter Timothy, 1741.

White, Phillip L. Beekman Mercantile Papers, 1746-1799, New York: New York Historical Society, 1956.

Whitehead, William A. et al. (eds.). Documents Relating to the Colonial, Revolutionary and Post-Revolutionary History of the State of New Jersey, Newark: Daily Advertising Printing House, 1880-1949.

Wicker, Elmus. “Colonial Monetary Standards Contrasted: Evidence from the Seven Years War,” Journal of Economic History, 45 (1985), pp. 869-84.

Winthrop, Wait. “Winthrop Papers,” Collections of the Massachusetts Historical Society, Series 6, Vol 5, Boston: Massachusetts Historical Society, 1892.

Wright, Robert E. Hamilton Unbound: Finance and the Creation of the American Republic, Westport, Connecticut: Greenwood Press, 2002.

Citation: Michener, Ron. “Money in the American Colonies”. EH.Net Encyclopedia, edited by Robert Whaples. June 8, 2003, revised January 13, 2011. URL http://eh.net/encyclopedia/money-in-the-american-colonies/