
Military Spending Patterns in History

Jari Eloranta, Appalachian State University

Introduction

Determining adequate levels of military spending and sustaining the burden of conflicts have been among the key fiscal problems in history. Ancient societies were usually less complicated in terms of the administrative, fiscal, technological, and material demands of warfare. The most pressing problem was frequently the adequate maintenance of supply routes for the armed forces. At the same time, these societies were by and large subsistence economies, so they could not extract massive resources for such ventures, at least until the arrival of the Roman and Byzantine Empires. The emerging nation states of the early modern period were much better equipped to fight wars. On the one hand, frequent wars, new gunpowder technologies, and the commercialization of warfare forced them to consolidate resources for the needs of warfare. On the other hand, rulers had to – slowly but surely – give up some of their sovereignty in order to secure the credit they required, both domestically and abroad. The Dutch and the British were masters at this, with the latter amassing an empire that spanned the globe by the eve of the First World War.

The early modern expansion of Western European states started to challenge other regimes all over the world, made possible by their military and naval supremacy as well as, later on, by their industrial prowess. The age of total war in the nineteenth and twentieth centuries finally pushed these states to adopt more and more efficient fiscal systems and enabled some of them to dedicate more than half of their GDP to the war effort during the world wars. Even so, although military spending was regularly the biggest item in the budget for most states before the twentieth century, it still represented only a modest share of their GDP. The Cold War period again saw high relative spending levels, due to the enduring rivalry between the West and the Communist Bloc. Finally, the collapse of the Soviet Union alleviated some of these tensions and lowered aggregate military spending in the world. Newer security challenges such as terrorism and various interstate rivalries have since pushed the world towards growing overall military spending.

This article will first elaborate on some of the research trends in studying military spending and the multitude of theories attempting to explain the importance of warfare and military finance in history. This survey will be followed by a chronological sweep, starting with the military spending of the ancient empires and ending with a discussion of the current behavior of states in the post-Cold War international system. By necessity, this chronological review will be selective at best, given the enormity of the time period in question and the complexity of the topic at hand.

Theoretical Approaches

Military spending is a key phenomenon for understanding various aspects of economic history: the cost, funding, and burden of conflicts; the creation of nation states; and, in general, the increased role of government in everyone’s lives, especially since the nineteenth century. Certain characteristic approaches can nonetheless be distinguished in the way different disciplines (mainly history, economics, and political science) have studied this complex topic. Historians, especially diplomatic and military historians, have been keen on studying the origins of the two world wars and certain other massive conflicts. Nonetheless, many historical studies of war and societies have analyzed developments at an elusive macro level, often without much elaboration of the quantitative evidence behind their assumptions about the effects of military spending. For example, Paul Kennedy argued in his famous The Rise and Fall of the Great Powers: Economic Change and Military Conflict from 1500 to 2000 (1989) that military spending by hegemonic states eventually becomes excessive and a burden on their economies, finally leading to economic ruin. This argument has been criticized by many economists and historians, since it lacks adequate quantitative evidence for the posited interaction between military spending and economic growth.[2] Quite frequently, as in the classic studies by A.J.P. Taylor and many more recent works, historians have been more interested in foreign policy decision-making, alliances, and the question of “blame” on the road towards major conflicts[3] than in whether reliable quantitative evidence can be mustered to support or disprove the key arguments. Economic historians, in turn, have not been particularly interested in the long-term economic impacts of military spending. Usually their interest has centered on the economics of global conflicts — of which a good example of recent work combining the theoretical aspects of economics with historical case studies is The Economics of World War II, a compilation edited by Mark Harrison — as well as the immediate short-term economic impacts of wartime mobilization.[4]

The study of defense economics and of military spending patterns as such is related to the immense expansion of military budgets and military establishments in the Cold War era. It involves the application of the methods and tools of economics to the study of issues arising from such a huge expansion. At least three aspects of defense economics set it apart from other fields of economics: 1) the actors (both private and public, for example in contracting); 2) the theoretical challenges introduced by the interaction of different institutional and organizational arrangements, both in budgeting and in allocation procedures; and 3) the nature of military spending as a tool for destruction as well as for providing security.[5] One of the shortcomings in the study of defense economics has been, at least so far, the lack of interest in periods before the Second World War.[6] For example, how much has the overall military burden (military expenditures as a percentage of GDP) of nation states changed over the last couple of centuries? Or, how large a financial burden did the Thirty Years’ War (1618-1648) impose on the participating Great Powers?

A “typical” defense economist (see especially Sandler and Hartley (1995)) would, drawing on public good theories, model and attempt to explain the military spending behavior of states (essentially the demand for military spending) with the following base equation:

ME_it = f(PRICE_it, INCOME_it, SPILLINS_it, THREATS_it, STRATEGY_it)        (1)

In Equation 1, ME represents military expenditures by state i in year t, PRICE the price of military goods (affected by technological changes as well), INCOME most commonly the real GDP of the state in question, SPILLINS the impact of friendly states’ military spending (for example in an alliance), THREATS the impact of hostile states’ or alliances’ military expenditures, and STRATEGY the constraints imposed by changes in the overall strategic parameters of a nation. Most commonly, a higher price for military goods lowers military spending; higher income tends to increase ME (like during the industrial revolutions); alliances often lower ME due to the free riding tendencies of most states; threats usually increase military spending (and sometimes spur on arms races); and changes in the overall defensive strategy of a nation can affect ME in either direction, depending on the strategic framework implemented. While this model may be suitable for the study of, for example, the Cold War period, it fails to capture many other important explanatory factors, such as the influence of various organizations and interest groups in the budgetary processes as well as the impact of elections and policy-makers in general. For example, interest groups can get policy-makers to ignore price increases (on, for instance, domestic military goods), and election years usually alter (or focus) the behavior of elected officials.
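To make the structure of Equation 1 more concrete, the following sketch fits a log-linear version of such a demand equation with ordinary least squares. All variables, coefficient values, and data are simulated assumptions for illustration only, not estimates from the literature cited here.

```python
# Minimal sketch of estimating a demand equation like (1) on simulated data.
# Everything here (series, coefficients, the doctrinal-shift dummy) is hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n_years = 50

price    = rng.normal(0.0, 0.1, n_years)                                    # relative price of military goods
income   = np.linspace(10.0, 11.0, n_years) + rng.normal(0, 0.05, n_years)  # log real GDP
spillins = rng.normal(5.0, 0.2, n_years)                                    # log of allies' military spending
threats  = rng.normal(5.5, 0.3, n_years)                                    # log of rivals' military spending
strategy = (np.arange(n_years) > 25).astype(float)                          # dummy for a strategic shift

# Generate log military expenditure with the signs suggested in the text:
# higher prices lower ME, income and threats raise it, spill-ins lower it (free riding).
log_me = (-0.8 * price + 1.2 * income - 0.3 * spillins
          + 0.5 * threats + 0.2 * strategy + rng.normal(0, 0.05, n_years))

# Ordinary least squares via numpy's least-squares solver
X = np.column_stack([np.ones(n_years), price, income, spillins, threats, strategy])
beta, *_ = np.linalg.lstsq(X, log_me, rcond=None)

for name, b in zip(["const", "PRICE", "INCOME", "SPILLINS", "THREATS", "STRATEGY"], beta):
    print(f"{name:>9s}: {b:+.2f}")
```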

The peace sciences, in turn, form a broader school of thought that overlaps with defense economics; here research has focused on finding the causal factors behind the most destructive conflicts. One of the most significant of such interdisciplinary efforts has been the Correlates of War (COW) project, which started in the spring of 1963. This project and the researchers loosely associated with it, not to mention its importance in producing comparative statistics, have had a major impact on the study of conflicts.[7] As Daniel S. Geller and J. David Singer have noted, the number of territorial states in the global system has ranged from fewer than 30 after the Napoleonic Wars to nearly 200 at the end of the twentieth century, and it is essential to test the various indicators collected by peace scientists against the historical record until theoretical premises can be confirmed or rejected.[8] In fact, a typical feature of most studies of this type is that they focus on finding the sets of variables that might predict major wars and other conflicts, in a way similar to the historians’ origins-of-wars approach, whereas studies investigating the military spending behavior of monads (single states), dyads (pairs of states), or systems in particular are quite rare. Moreover, even though some cycle theorists and conflict scientists have been interested in the formation of modern nation states and the respective system of states since 1648, they have not expressed any real interest in pre-modern societies and warfare.[9]

Nevertheless, these contributions have much to offer to the study of the long-run dynamics of military spending, state formation, and warfare. According to Charles Tilly, there are four broad approaches to the study of the relationships between war and power: 1) the statist; 2) the geopolitical; 3) the world system; and 4) the mode of production approach. The statist approach presents war, international relations, and state formation chiefly as a consequence of events within particular states. The geopolitical analysis is centered on the argument that state formation responds strongly to the current system of relations among states. The world system approach, à la Wallerstein, is mainly rooted in the idea that the different paths of state formation are influenced by the division of resources in the world system. In the mode of production framework, the way that production is organized determines the outcome of state formation. None of these approaches, as Tilly has pointed out, is adequate in its purest form in explaining state formation, international power relations, and economic growth as a whole.[10] Tilly himself maintains that coercion (a monopoly of violence held by rulers and their ability to wield coercion externally as well) and capital (the means of financing warfare) were the key elements in the European ascendancy to world domination in the early modern era. Warfare, state formation, and technological supremacy were all interrelated fundamentals of the same process.[11]

How can these theories of state behavior at the system level be linked to the analysis of military spending? According to George Modelski and William R. Thompson, proponents of Kondratieff waves and long cycles as explanatory forces in the development of world leadership patterns, the key aspect in a state’s ascendancy to prominence in such models is naval power; i.e., a state’s ability to vie for world political leadership, colonization, and domination in trade.[12] One of the less explored aspects in most studies of hegemonic patterns is the military expenditure component in the competition between states for military and economic leadership in the system. It is often argued, for example, that uneven economic growth causes nations to compete for economic and military prowess. The leading nation(s) thus have to dedicate increasing resources to armaments in order to maintain their position, while the other states, the so-called followers, can benefit from greater investments in other areas of economic activity. Therefore, the follower states act as free-riders in the international system stabilized by the hegemon. A built-in assumption in this hypothesized development pattern is that military spending eventually becomes harmful to economic development, a notion that has often been challenged in empirical studies.[13]

Overall, the assertion arising from such a framework is that economic development and military spending are closely interdependent, with military spending being the driving force behind economic cycles. Moreover, based on this development pattern, it has been suggested that a country’s poor economic performance is linked to the “wasted” economic resources represented by military expenditures. However, as recent studies have shown, economic development is often more significant in explaining military spending than vice versa. The development of the U.S. economy since the Second World War certainly does not display the type of hegemonic decline predicted by Kennedy.[14] The aforementioned development pattern can be paraphrased as the so-called war chest hypothesis. As some of the hegemonic theorists reviewed above suggest, economic prosperity might be a necessary prerequisite for war and expansion. Thus, as Brian M. Pollins and Randall L. Schweller have indicated, economic growth would induce rising government expenditures, which in turn would enable higher military spending — therefore military expenditures would be “caused” by economic growth at a certain time lag.[15] In order for military spending to hinder economic performance, it would have to crowd out most other areas of the economy, as is often the case during wartime.
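The timing claim in the war chest hypothesis, that military spending follows economic growth with a lag, can be illustrated with a simple lead-lag correlation check. The series, the three-year lag, and the coefficients below are simulated assumptions, not historical findings.

```python
# Illustrative check of the "war chest" timing claim: does GDP growth lead
# military spending growth? The data below are simulated, not historical.
import numpy as np

rng = np.random.default_rng(1)
T, lag = 80, 3                       # hypothetical 3-year lag

gdp_growth = rng.normal(0.02, 0.01, T)
# Military spending growth responds to GDP growth three years earlier, plus noise
me_growth = np.empty(T)
me_growth[:lag] = rng.normal(0.02, 0.01, lag)
me_growth[lag:] = 0.9 * gdp_growth[:-lag] + rng.normal(0, 0.005, T - lag)

def lagged_corr(x, y, k):
    """Correlation between x(t-k) and y(t)."""
    return np.corrcoef(x[:-k], y[k:])[0, 1] if k > 0 else np.corrcoef(x, y)[0, 1]

for k in range(0, 6):
    print(f"corr(GDP growth at t-{k}, ME growth at t) = {lagged_corr(gdp_growth, me_growth, k):+.2f}")
```

In this simulated series the correlation peaks at the three-year lag, which is the kind of pattern Pollins and Schweller's argument would lead one to look for in real data.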

There have been relatively few credible attempts to model the military (or budgetary) spending behavior of states based on their long-run regime characteristics. Here I am going to focus on three in particular: 1) the Webber-Wildavsky model of budgeting; 2) the Richard Bonney model of fiscal systems; and 3) the Niall Ferguson model of interaction between public debts and forms of government. Carolyn Webber and Aaron Wildavsky maintain essentially that each political culture generates its characteristic budgetary objectives: productivity in market regimes, redistribution in sects (specific groups dissenting from an established authority), and more complex procedures in hierarchical regimes.[16] Thus, according to them, the budgetary consequences arising from the chosen regime can be divided into four categories: despotism, state capitalism, American individualism, and social democracy. All of these in turn have implications for the respective regimes’ revenue and spending needs.

This model, however, is essentially a static one. It does not provide clues as to why nations’ behavior may change over time. Richard Bonney has addressed this problem in his writings, mainly on the early modern states.[17] He has emphasized that states’ revenue and tax collection systems, the backbone of any militarily successful nation state, have evolved over time. For example, in most European states the government became the arbiter of disputes and the defender of certain basic rights in society by the early modern period. During the Middle Ages, the European fiscal systems were relatively backward and autarchic, with mostly predatory rulers (or roving bandits, as Mancur Olson has termed them).[18] In Bonney’s model this would be the stage of the so-called tribute state. Next in the evolution came, respectively, the domain state (with stationary bandits providing some public goods), the tax state (with more reliance on credit and revenue collection), and finally the fiscal state (embodying more complex fiscal and political structures). A superpower like Great Britain in the nineteenth century, in fact, had to be a fiscal state to be able to dominate the world, given all the burdens that went with an empire.[19]

While both of the models mentioned above provide important clues as to how and why nations have prepared fiscally for wars, the most complete account of this process (along with Charles Tilly’s framework covered earlier) has been provided by Niall Ferguson.[20] He has maintained that wars have shaped all the most relevant institutions of modern economic life: tax-collecting bureaucracies, central banks, bond markets, and stock exchanges. Moreover, he argues that the invention of public debt instruments has gone hand-in-hand with more democratic forms of government and military supremacy – hence, the so-called Dutch or British model. These types of regimes have also been the most efficient economically, which has in turn reinforced the success of this fiscal regime model. In fact, military expenditures may have been the principal cause of fiscal innovation for most of history. Ferguson’s model highlights the importance, for a state’s survival among its challengers, of adopting the right types of institutions and technology, along with a sufficient helping of external ambitions. All in all, I would summarize the required model, combining elements from the various frameworks, as being evolutionary, with regimes at different stages having different priorities and burdens imposed by military spending, depending also on their position in the international system. A successful ascendancy to a leadership position required higher expenditures, a substantial navy, fiscal and political structures conducive to increasing the availability of credit, and recurring participation in international conflicts.

Military Spending and the Early Empires

For most societies since the ancient river valley civilizations, military exertions and the means by which to finance them have been crucial problems of governance. A centralized ability to plan and control spending was lacking in most governments until the nineteenth century. In fact, among the ancient civilizations, financial administration and the government were inseparable. Governments were organized on a hierarchical basis, with the rulers having supreme control over military decisions. Taxes were often paid in kind to support the rulers, which made it more difficult to monitor and utilize the revenues for military campaigns over great distances. For these agricultural economies, victory in war usually yielded lavish tribute to supplement royal wealth and helped to maintain the army and control the population. Thus, the support of large military forces and expeditions, contingent on food and supplies, was an ancient government’s principal expense and problem. Dependence on distant, often external suppliers of food limited the expansion of these empires. Fiscal management in turn was usually cumbersome and costly, and all of the ancient governments were internally unstable and vulnerable to external incursions.[21]

Soldiers, however, often supplemented their supplies by looting enemy territory. The optimal size of an ancient empire was determined by the efficiency of tax collection and allocation, resource extraction, and its transportation system. Moreover, the supply of metal and weaponry, though important, was seldom the only critical variable for the military success of an ancient empire. There were, however, important turning points in this respect, such as the introduction of bronze weaponry, starting in Mesopotamia about 3500 B.C. The use of chariots and the introduction of iron weaponry in the eastern parts of Asia Minor about 1200 B.C. (although the subsequent spread of this technology was fairly slow and gathered momentum only from about 1000 B.C. onwards) each opened a new phase in warfare: chariot warfare because of the hierarchical structures needed to field chariot armies, and iron because of the superior efficiency and cheapness of iron armaments.[22]

The river valley civilizations, nonetheless, paled in comparison with the military might and economy of one of the most efficient military behemoths of all time: the Roman Empire. Military spending was the largest item of public spending throughout Roman history. All Roman governments, like Athens during the time of Pericles, had problems in gathering enough revenue. For this reason, in the third century A.D. Roman citizenship was extended to all residents of the empire in order to raise revenue, as only citizens paid taxes. There were also other constraints on spending, such as technological, geographic, and other productivity concerns. Direct taxation was, however, regarded as a dishonor, to be resorted to only in times of crisis. Thus, taxation during most of the empire remained moderate, consisting of extraordinary levies (akin to the so-called liturgies of ancient Athens) during such episodes. During the first two centuries of the empire, the Roman army had about 150,000 to 160,000 legionnaires, in addition to 150,000 other troops, and in this period soldiers’ wages began to increase rapidly to ensure the army’s loyalty. In both republican and imperial Rome, military wages accounted for more than half of state revenue. The demands of the empire became more and more extensive during the third and fourth centuries A.D., as the internal decline of the empire became more evident and Rome’s external challengers became stronger. The limited use of direct taxes and the commonness of tax evasion, for example, could not fulfill the fiscal demands of the crumbling empire. Armed forces were in turn used to maintain internal order. Societal unrest, inflation, and external incursions finally brought the Roman Empire, at least in the West, to an end.[23]

Warfare and the Rise of European Supremacy

During the Middle Ages, following the decentralized era of barbarian invasions, a varied system of European feudalism emerged, in which feudal lords often provided protection to communities in return for service or payment. From the Merovingian era onwards, soldiers became more specialized professionals, with expensive horses and equipment. By the Carolingian era, military service had become largely the prerogative of an aristocratic elite. Prior to 1000 A.D., the command system was preeminent in mobilizing human and material resources for large-scale military enterprises, mostly on a contingency basis.[24] The isolated European societies, with the exception of the Byzantine Empire, paled in comparison with the splendor and accomplishments of the empires in China and the Muslim world. In terms of science and inventions, too, the Europeans were no match for these empires until the early modern period. Moreover, it was not until the twelfth century and the Crusades that the feudal kings needed to supplement their ordinary revenues to finance large armies. Internal discontent in the Middle Ages often led to an expansionary drive, as the spoils of war helped calm the elite — for example, the French kings had to establish firm taxing power in the fourteenth century out of military necessity. The political ambitions of medieval kings, however, still relied on revenue strategies that catered to short-term deficits, which made long-term credit and prolonged military campaigns difficult.[25]

Innovations in warfare and technologies invented in China and the Islamic world permeated Europe with a delay, such as the use of pikes in the fourteenth century and the gunpowder revolution of the fifteenth century, which in turn permitted armies to attack and defend larger territories. This also made possible a commercialization of warfare in Europe in the fourteenth and fifteenth centuries, as feudal armies had to give way to professional mercenary forces. Accordingly, medieval states had to increase their taxation levels and improve tax collection to support the growing costs of warfare and the maintenance of larger standing armies. Equally, the age of commercialized warfare was accompanied by the rising importance of sea power as European states began to build their overseas empires (as opposed, for example, to the isolationist turn of Ming China in the fifteenth century). States such as Portugal, the Netherlands, and England, respectively, became the “systemic leaders” due to their extensive fleets and commercial expansion in the period before the Napoleonic Wars. These states were also economically cohesive, thanks to internal waterways and their small geographic size. The early winners in the fight for world leadership, such as England, were greatly aided by the availability of inexpensive credit, which enabled them to mobilize limited resources effectively to meet military expenses. Their rise was of course preceded by the naval exploration and empire-building of many successful European states, especially Spain, both in Europe and around the globe.[26]

This shift from command to commercialized warfare, and from short-term arrangements to more permanent military management systems, can be seen in the English case. In the period 1535-1547, the English defense share (military expenditures as a percentage of central government expenditures) averaged 29.4 percent, with large fluctuations from year to year. In the period 1685-1813, however, the mean English defense share was 74.6 percent, never dropping below 55 percent. The newly-emerging nation states began to develop more centralized and productive revenue-expenditure systems, the goal of which was to enhance the state’s power, especially in the absolutist era. This also reflected the growing cost and scale of warfare: during the Thirty Years’ War, between 100,000 and 200,000 men fought under arms, whereas roughly half a century later 450,000 to 500,000 men fought on both sides in the War of the Spanish Succession. The numbers notwithstanding, the Thirty Years’ War was a conflict comparable to the world wars in terms of destruction. For example, Charles Tilly has estimated that battle deaths exceeded two million. Henry Kamen, in turn, has emphasized the mass-scale destruction and economic dislocation the war caused in the German lands, especially to the civilian population.[27]
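Since this article relies on two different ratios, the defense share (military spending relative to central government spending) and the military burden (military spending relative to GDP), a small worked computation may help keep them apart. The figures used below are hypothetical, not the English data cited above.

```python
# The two ratios used throughout this article, computed on hypothetical figures.
military_expenditure = 12.0           # e.g., million units of currency
central_govt_expenditure = 16.0
gdp = 400.0

defense_share   = military_expenditure / central_govt_expenditure   # share of the budget
military_burden = military_expenditure / gdp                        # share of the economy

print(f"Defense share:   {defense_share:.1%}")    # -> 75.0%
print(f"Military burden: {military_burden:.1%}")  # -> 3.0%
```

The example also shows why a very high defense share (typical of early modern states) can coexist with a modest military burden: government budgets were small relative to the overall economy.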

With the increasing scale of armed conflicts in the seventeenth century, the participants became more and more dependent on access to long-term credit, because whichever government ran out of money had to surrender first. For example, even though the causes of Spain’s supposed decline in the seventeenth century are still disputed, it can be said that the lack of royal credit and the poor management of government finances resulted in heavy deficit spending as military exertions followed one after another. The Spanish Crown therefore defaulted repeatedly during the sixteenth and seventeenth centuries, and these defaults on several occasions forced Spain to seek an end to its military activities. Even so, Spain remained one of the most important Great Powers of the period, and was able to sustain its massive empire mostly intact until the nineteenth century.[28]

What about other country cases – can they shed further light on the importance of military spending and warfare in early modern economic and political development? A key question for France, for example, was the financing of its military exertions. According to Richard Bonney, the cost of France’s armed forces in its era of “national greatness” was stupendous, with expenditure on the army in the period 1708-1714 averaging 218 million livres, whereas during the Dutch War of 1672-1678 it had averaged only 99 million in nominal terms. This was due both to growth in the size of the army and the navy and to the decline in the purchasing power of the French livre. The overall burden of war, however, remained roughly similar in this period: war expenditures accounted for roughly 57 percent of total expenditure in 1683 and about 52 percent in 1714. Moreover, as for all the main European monarchies, it was the expenditure on war that brought fiscal change in France, especially after the Napoleonic Wars. Between 1815 and 1913, there was a 444 percent increase in French public expenditure and a consolidation of the emerging fiscal state. This also embodied a change in the French credit market structure.[29]

A success story, and in a way a predecessor of the British model, was the Dutch state in this period. As Marjolein ’t Hart has noted, domestic investors were instrumental in supporting the new-born state, which was able to borrow the money it needed from the credit markets, thus providing stability in public finances even during crises. This financial regime lasted until the end of the eighteenth century. Here again we can observe the intermarriage of military spending and the availability of credit, essentially the basic logic in the Ferguson model. One of the key features of the Dutch success in the seventeenth century was the ability to pay soldiers relatively promptly. The Dutch case also underlines the primacy of military spending in state budgets and the burden it involved for the early modern states. As we can see in Figure 1, the defense share of the Dutch region of Groningen remained consistently around 80 to 90 percent until the mid-seventeenth century, and then declined, at least temporarily, during periods of peace.[30]

Figure 1

Groningen’s Defense Share (Military Spending as a Percentage of Central Government Expenditures), 1596-1795

Source: L. van der Ent, et al. European State Finance Database. ESFD, 1999 [cited 1.2.2001]. Available from: http://www.le.ac.uk/hi/bon/ESFDB/frameset.html.

Subsequently, in the eighteenth century, with rapid population growth in Europe, armies also grew in size, especially the Russian army. In Western Europe, the mounting intensity of warfare, exemplified by the Seven Years War (1756-1763), finally culminated in the French Revolution and Napoleon’s conquests and defeat (1792-1815). The new style of warfare brought on by the Revolutionary Wars, with conscription and war of attrition as new elements, can be seen in the growth of army sizes. For example, the French army grew over 3.5 times in size from 1789 to 1793 – up to 650,000 men. Similarly, the British army grew from 57,000 men in 1783 to 255,000 in 1816. The Russian army reached the massive size of 800,000 men in 1816, and Russia kept its armed forces at similar levels in the nineteenth century. However, Great Power wars declined in number (see Table 1), as did their average duration. Yet some of the conflicts of the industrial era became massive and deadly events, drawing most parts of the world into what were essentially European quarrels.

Table 1

Wars Involving the Great Powers

Century | Number of wars | Average duration of wars (years) | Proportion of years war was underway (percent)
16th | 34 | 1.6 | 95
17th | 29 | 1.7 | 94
18th | 17 | 1.0 | 78
19th | 20 | 0.4 | 40
20th | 15 | 0.4 | 53

Source: Charles Tilly. Coercion, Capital, and European States, AD 990-1990. Cambridge, Mass: Basil Blackwell, 1990.

The Age of Total War and Industrial Revolutions

With the new kind of mobilization, which became more or less a permanent state of affairs in the nineteenth century, centralized governments required new methods of finance. The nineteenth century brought reforms such as centralized public administration, reliance on specific, balanced budgets, innovations in public banking and public debt management, and reliance on direct taxation for revenue. For the first time in history, these reforms were also supported by the spread of industrialization and rising productivity. The nineteenth century was also the century of the industrialization of war, starting at mid-century and quickly gathering speed. By the 1880s, military engineering began to forge ahead of even civil engineering. A revolution in transportation with steamships and railroads also made massive, long-distance mobilizations possible, as shown by the Prussian example against the French in 1870-1871.[31]

The demands posed by these changes on state finances and economies differed. In the French case, the defense share stayed roughly the same, a little over 30 percent, throughout the nineteenth and early twentieth centuries, whereas the military burden increased by about one percentage point, to 4.2 percent. In the UK case, the mean defense share declined by two percentage points, to 36.7 percent, in 1870-1913 compared to the early nineteenth century. The strength of the British economy, however, meant that the military burden actually declined slightly, to 2.6 percent, a figure similar to Germany’s in the same period. For most countries the period leading up to the First World War meant higher military burdens than that, such as Japan’s 6.1 percent. However, the United States, the new economic leader by the closing decades of the century, spent on average a meager 0.7 percent of its GDP for military purposes, a trend that continued throughout the interwar period as well (a military burden of 1.2 percent). As seen in Figure 2, the military burdens incurred by the Great Powers also varied in terms of timing, suggesting different reactions to external and internal pressures. Nonetheless, the aggregate, systemic real military spending of the period showed a clear upward trend for the entire period. Moreover, the impact of the Russo-Japanese War was immense for the total (real) spending of the sixteen states represented in the figure below, due to the fact that both countries were Great Powers and Russian military expenditures alone were massive. The unexpected defeat of the Russians, along with the arrival of the dreadnoughts, unleashed an intensive arms race.[32]

Figure 2

Military Burdens of Four Great Powers and Aggregate Real Military Expenditure (ME) for Sixteen Countries, 1870-1913

Sources: See Jari Eloranta, “Struggle for Leadership? Military Spending Behavior of the Great Powers, 1870-1913,” Appalachian State University, Department of History, unpublished manuscript, 2005b, which also describes the constructed system of states and the methods involved in converting the expenditures into a common currency (using exchange rates and purchasing power parities), always a controversial exercise.
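The caveat in the source note, that converting military expenditures into a common currency with exchange rates or purchasing power parities is controversial, can be illustrated with a toy comparison. The countries, spending totals, and conversion rates below are invented for illustration only.

```python
# Converting national military spending to a common currency: the choice between
# market exchange rates and purchasing power parities (PPP) can change the comparison.
# All figures and rates below are hypothetical.
spending_local = {"Country A": 500.0, "Country B": 2000.0}   # in local currency units
exchange_rate  = {"Country A": 1.0,   "Country B": 8.0}      # local units per US dollar
ppp_rate       = {"Country A": 1.0,   "Country B": 4.0}      # local units per "international" dollar

for country, me in spending_local.items():
    at_fx  = me / exchange_rate[country]
    at_ppp = me / ppp_rate[country]
    print(f"{country}: {at_fx:7.1f} (exchange rate) vs {at_ppp:7.1f} (PPP)")
```

In this toy case Country B looks half as large as Country A at market exchange rates but equally large at PPP, which is why the choice of conversion method matters for cross-country comparisons of military spending.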

With the beginning of the First World War in 1914, this military potential was unleashed in Europe with horrible consequences, as most of the nations anticipated a quick victory but ended up fighting a war of attrition in the trenches. Mankind had finally, even officially, entered the age of total war.[33] It has been estimated that about nine million combatants and twelve million civilians died during the so-called Great War, with property damage especially heavy in France, Belgium, and Poland. According to Rondo Cameron and Larry Neal, the direct financial losses arising from the Great War were about 180-230 billion 1914 U.S. dollars, whereas the indirect losses of property and capital rose to over 150 billion dollars.[34] According to the most recent estimates, the economic losses arising from the war could be as high as 692 billion 1938 U.S. dollars.[35] But how much of their resources did the participants have to mobilize, and what were the human costs of the war?

As Table 2 displays, the French military burden was fairly high, as were the size of its military forces and its battle deaths relative to population. France thus mobilized the most resources in the war and, consequently, suffered the greatest losses. The mobilization by Germany was also quite efficient, because almost the entire state budget was used to support the war effort. On the other hand, the United States participated in the war only briefly, and its personnel losses in the conflict were relatively small, as were its economic burdens. In comparison, the massive population reserves of Russia allowed it to sustain fairly high personnel losses, quite similar to the Soviet experience in the Second World War.

Table 2

Resource Mobilization by the Great Powers in the First World War

Country (years in the war) | Average military burden (percent of GDP) | Average defense share of government spending (percent) | Military personnel as a percentage of population | Battle deaths as a percentage of population
France (1914-1918) | 43 | 77 | 11 | 3.5
Germany (1914-1918) | .. | 91 | 7.3 | 2.7
Russia (1914-1917) | .. | .. | 4.3 | 1.4
UK (1914-1918) | 22 | 49 | 7.3 | 2.0
US (1917-1918) | 7 | 47 | 1.7 | 0.1

Sources: Historical Statistics of the United States, Colonial Times to 1970, Washington, DC: U.S. Bureau of Census, 1975; Louis Fontvieille. Evolution et croissance de l’Etat Français: 1815-1969, Economies et sociétés, Paris: Institut de Sciences Mathematiques et Economiques Appliquees, 1976; B. R. Mitchell. International Historical Statistics: Europe, 1750-1993, fourth edition, Basingstoke: Macmillan Academic and Professional, 1998a; E. V. Morgan, Studies in British Financial Policy, 1914-1925, London: Macmillan, 1952; J. David Singer and Melvin Small. National Material Capabilities Data, 1816-1985. Ann Arbor, MI: Inter-university Consortium for Political and Social Research, 1993. See also Jari Eloranta, “Sotien taakka: Makrotalouden ongelmat ja julkisen talouden kipupisteet maailmansotien jälkeen (The Burden of Wars: The Problems of Macro Economy and Public Sector after the World Wars),” in Kun sota on ohi, edited by Petri Karonen and Kerttu Tarjamo (forthcoming), 2005a.

In the interwar period, the pre-existing tendencies to continue social programs and support new bureaucracies made it difficult for the participants to cut their public expenditure, leading to a displacement of government spending to a slightly higher level in many countries. Public spending, especially in the 1920s, was in turn very static, plagued by budgetary immobility and standoffs, especially in Europe. This meant that although defense shares dropped noticeably in many countries, except in the authoritarian regimes, the respective military burdens stayed at similar levels or even increased — for example, the French military burden rose to a mean level of 7.2 percent in this period. In Great Britain also, the mean defense share dropped to 18.0 percent, although the mean military burden actually increased compared to the pre-war period, despite the military expenditure cuts and the “Ten-Year Rule” of the 1920s. For these countries, the mid-1930s marked the beginning of intense rearmament, whereas some of the authoritarian regimes had begun earlier in the decade. Germany under Hitler increased its military burden from 1.6 percent in 1933 to 18.9 percent in 1938, a rearmament program combining creative financing with the promise of both guns and butter for the Germans. Mussolini was not quite as successful in his efforts to realize a new Roman Empire, with a military burden fluctuating between four and five percent in the 1930s (5.0 percent in 1938). The Japanese rearmament drive was perhaps the most impressive, with a military burden as high as 22.7 percent and a defense share of over 50 percent in 1938. For many countries, such as France and Russia, the rapid pace of technological change in the 1930s rendered many of the earlier armaments obsolete only two or three years later.[36]

Figure 3
Military Burdens of Denmark, Finland, France, and the UK, 1920-1938

Source: Jari Eloranta, “External Security by Domestic Choices: Military Spending as an Impure Public Good among Eleven European States, 1920-1938,” Dissertation, European University Institute, 2002.

There were differences between democracies as well, as seen in Figure 3. Finland’s behavior was similar to that of the UK and France, i.e., it was part of the so-called high-spending group among European democracies. This was also similar to the actions of most East European states. Denmark was among the low-spending group, perhaps due to the futility of trying to defend its borders amidst probable conflicts involving the giants to its south, France and Germany. Overall, the democracies maintained fairly steady military burdens throughout the period. Their rearmament was, however, much slower than the effort amassed by most autocracies. This is also amply displayed in Figure 4.

Figure 4
Military Burdens of Germany, Italy, Japan, and Russia/USSR, 1920-1938

Sources: Eloranta (2002), see especially appendices for the data sources. There are severe limitations and debates related to, for example, the German (see e.g. Werner Abelshauser, “Germany: Guns, Butter, and Economic Miracles,” in The Economics of World War II: Six Great Powers in International Comparison, edited by Mark Harrison, 122-176, Cambridge: Cambridge University Press, 2000) and the Soviet data (see especially R. W. Davies, “Soviet Military Expenditure and the Armaments Industry, 1929-33: A Reconsideration,” Europe-Asia Studies 45, no. 4 (1993): 577-608, as well as R. W. Davies and Mark Harrison. “The Soviet Military-Economic Effort under the Second Five-Year Plan, 1933-1937,” Europe-Asia Studies 49, no. 3 (1997): 369-406).

In the ensuing conflict, the Second World War, the initial phase from 1939 to early 1942 favored the Axis as far as strategic and economic potential was concerned. After that, the war of attrition, with the United States and the USSR joining the Allies, turned the tide in favor of the Allies. For example, in 1943 the Allied total GDP was 2,223 billion international dollars (in 1990 prices), whereas the Axis accounted for only 895 billion. The impact of the Second World War was also much more profound for the participants’ economies. For example, Great Britain at the height of the First World War incurred a military burden of about 27 percent, whereas the military burden it sustained consistently throughout the Second World War was over 50 percent.[37]

Table 3

Resource Mobilization by the Great Powers in the Second World War

Country (years in the war) | Average military burden (percent of GDP) | Average defense share of government spending (percent) | Military personnel as a percentage of population | Battle deaths as a percentage of population
France (1939-1945) | .. | .. | 4.2 | 0.5
Germany (1939-1945) | 50 | .. | 6.4 | 4.4
Soviet Union (1939-1945) | 44 | 48 | 3.3 | 4.4
UK (1939-1945) | 45 | 69 | 6.2 | 0.9
USA (1941-1945) | 32 | 71 | 5.5 | 0.3

Sources: Singer and Small (1993); Stephen Broadberry and Peter Howlett, “The United Kingdom: ‘Victory at All Costs’,” in The Economics of World War II: Six Great Powers in International Comparisons, edited by Mark Harrison (Cambridge University Press, 1998); Mark Harrison. “The Economics of World War II: An Overview,” in The Economics of World War II: Six Great Powers in International Comparisons, edited by Mark Harrison (Cambridge: Cambridge University Press, 1998a); Mark Harrison, “The Soviet Union: The Defeated Victor,” in The Economics of World War II: Six Great Powers in International Comparison, edited by Mark Harrison, 268-301 (Cambridge: Cambridge University Press, 2000); Mitchell (1998a); B.R. Mitchell. International Historical Statistics: The Americas, 1750-1993, fourth edition, London: Macmillan, 1998b. The Soviet defense share only applies to years 1940-1945, whereas the military burden applies to 1940-1944. These two measures are not directly comparable, since the former is measured in current prices and the latter in constant prices.

As Table 3 shows, the greatest military burden was most likely incurred by Germany, even though the other Great Powers experienced similar levels. Only the massive economic resources of the United States made possible its lower military burden. The UK and the United States also mobilized their central/federal government expenditures efficiently for the military effort. In this sense the Soviet Union fared the worst, and additionally its share of military personnel in the population was relatively small compared to the other Great Powers. On the other hand, the economic and demographic resources that the Soviet Union possessed ultimately ensured its survival during the German onslaught. In the aggregate, the largest personnel losses were incurred by Germany and the Soviet Union, in fact many times those of the other Great Powers.[38] In comparison with the First World War, the second was even more destructive and lethal, and the aggregate economic losses from the war exceeded 4,000 billion 1938 U.S. dollars. After the war, European industrial and agricultural production amounted to only half of the 1938 total.[39]

The Atomic Age and Beyond

The Second World War also brought a new role for the United States in world politics, a military-political leadership role warranted by its dominant economic status, established over fifty years earlier. With the establishment of NATO in 1949, a formidable defense alliance was formed among the capitalist countries. The USSR, risen to new prominence as a result of the war, established the Warsaw Pact in 1955 to counter these efforts. The war also meant a change in the public spending and taxation levels of most Western nations. The introduction of welfare states brought the OECD average for government expenditure from just under 30 percent of GDP in the 1950s to over 40 percent in the 1970s. Military spending levels followed suit and peaked during the early Cold War. The American military burden rose above 10 percent in 1952-1954, and the United States has retained a high mean value for the post-war period of 6.7 percent. Great Britain and France followed the American example after the Korean War.[40]

The Cold War embodied a relentless armaments race between the two superpowers, with nuclear weapons now the main investment item (see Figure 5). The USSR, according to some figures, spent about 60 to 70 percent of the American level in the 1950s, and actually spent more than the United States in the 1970s. Nonetheless, the United States maintained a massive advantage over the Soviets in terms of nuclear warheads. Figures collected by SIPRI (the Stockholm International Peace Research Institute), however, suggest an enduring yet dwindling lead for the US even in the 1970s. On the other hand, the same figures point to a 2-to-1 lead in favor of the NATO countries over the Warsaw Pact members in the 1970s and early 1980s. Part of this armaments race was due to technological advances that led to increases in the cost per soldier — it has been estimated that these advances produced a mean annual increase in real costs of around 5.5 percent in the post-war period. Nonetheless, spending on personnel and their maintenance has remained the biggest spending item for most countries.
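As a rough arithmetic check of what a mean annual real cost increase of about 5.5 percent per soldier implies when compounded over the post-war decades, consider the sketch below (the 40-year horizon is an arbitrary illustration, not a figure from the sources cited).

```python
# Compounding a mean annual real cost increase of about 5.5 percent per soldier.
import math

growth = 0.055
doubling_time = math.log(2) / math.log(1 + growth)
forty_year_factor = (1 + growth) ** 40

print(f"Doubling time at 5.5% per year: {doubling_time:.1f} years")   # ~12.9 years
print(f"Cumulative increase over 40 years: {forty_year_factor:.1f}x") # ~8.5x
```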

Figure 5

Military Burdens (MILBUR) of the United States and the United Kingdom, and Soviet Military Spending as a Percentage of US Military Spending (ME), 1816-1993

Sources: References to the economic data can be found in Jari Eloranta, “National Defense,” in The Oxford Encyclopedia of Economic History, edited by Joel Mokyr, 30-33 (Oxford: Oxford University Press, 2003b). ME (Military Expenditure) data from Singer and Small (1993), supplemented with the SIPRI (available from: http://www.sipri.org/) data for 1985-1993. Details are available from the author upon request. Exchange rates from Global Financial Data (Online databank), 2003. Available from http://www.globalfindata.com/. The same caveats apply to the underlying currency conversion methods as in Figure 2.

One often-cited outcome of this Cold War arms race is the so-called military-industrial complex (MIC), a term usually referring to the influence that the military and industry have on each other’s policies. The more nefarious connotation refers to the unduly large influence that military producers might have over the public sector’s acquisitions, and foreign policy in particular, in such a collusive relationship. In fact, the origins of this type of interaction can be found further back in history. As Paul Koistinen has emphasized, the First World War was a watershed in business-government relationships, since businessmen were often brought into government to make supply decisions during this total conflict. Most governments, as a matter of fact, needed the expertise of the core business elites during the world wars. In the United States some form of MIC came into existence before 1940. Similar developments can be seen in other countries before the Second World War, for example in the Soviet Union. The Cold War simply reinforced these tendencies.[41] Findings by Robert Higgs, for example, establish that the financial performance of the leading defense contracting companies was, on average, much better than that of comparable large corporations during the period 1948-1989. Nonetheless, his findings do not support the normative conclusion that the profits of defense contractors were “too high.”[42]

World spending levels began a slow decline from the 1970s onwards, with the Reagan years being an exception for the US. In 1986, the US military burden was 6.5 percent, whereas in 1999 it was down to 3.0 percent. In France, the military burden declined from its post-war peak levels in the 1950s to a mean of 3.6 percent in the period 1977-1999. This has mostly been the outcome of the reduction in tensions between the rival groups and the downfall of the USSR and the communist regimes in Eastern Europe. The USSR was spending almost as much on its armed forces as the United States up until the mid-1980s, and the Soviet military burden was still 12.3 percent in 1990. Under the Russian Federation, with a declining GDP, this level dropped rapidly, to 3.2 percent in 1998. Similarly, other nations have scaled down their military spending since the late 1980s and the 1990s. For example, German military spending in constant US dollars was over 52 billion in 1991, whereas by 1999 it had declined to less than 40 billion. In the French case, the decline was from a little over 52 billion in 1991 to below 47 billion in 1999, with the military burden decreasing from 3.6 percent to 2.8 percent.[43]

Overall, according to the SIPRI figures, there was a reduction of about one-third in real terms in world military spending in 1989-1996, with some fluctuation and even a small increase since then. In the global scheme, world military expenditure is still highly concentrated in a few countries, with the 15 major spenders accounting for 80 percent of the world total in 1999. The newest military spending estimates (see e.g. http://www.sipri.org/) put world military expenditures on a growth trend once again, due to new threats such as international terrorism and the conflicts related to it. In terms of absolute figures, the United States still dominates world military spending, with a 47 percent share of the world total in 2003. The U.S. spending total becomes less impressive when purchasing power parities are utilized. Nonetheless, the United States has entered the third millennium as the world’s only real superpower – a role that it sometimes embraces awkwardly. Whereas the United States was an absent hegemon in the late nineteenth and the first half of the twentieth century, it now has to maintain its presence in many parts of the world, sometimes despite objections from the other players in the international system.[44]

Conclusions

Warfare has played a crucial role in the evolution of human societies. Ancient societies were usually less complicated in terms of the administrative, fiscal, technological, and material demands of warfare. The most pressing problem was commonly the maintenance of adequate supplies for the armed forces during prolonged campaigns. This also put constraints on the size and expansion of the early empires, at least until the introduction of iron weaponry. The Romans, for example, were able to sustain a large, geographically diverse empire for a long period of time. The disjointed Middle Ages splintered the European societies into smaller communities, in which so-called roving bandits ruled, at least until the arrival of more organized military forces from the tenth century onwards. At the same time, the empires in China and the Muslim world developed into cradles of civilization in terms of scientific discoveries and military technologies.

The geographic and economic expansion of early modern European states started to challenge other regimes all over the world, made possible in part by their military and naval supremacy as well as their industrial prowess later on. The age of total war and revolutions in the nineteenth and twentieth centuries finally pushed these states to adopt more and more efficient fiscal systems and enabled some of them to dedicate more than half of their GDP to the war effort during the world wars. Even though military spending was regularly the biggest item in the budget for most states before the twentieth century, it still represented only a modest share of their respective GDPs. The Cold War period again saw high relative spending levels, due to the enduring rivalry between the West and the Communist bloc. Finally, the collapse of the Soviet Union alleviated some of these tensions and lowered aggregate military spending in the world, if only temporarily. Newer security challenges such as terrorism and various interstate rivalries have again pushed the world towards a growth path in terms of overall military spending.

The cost of warfare has increased especially since the early modern period. The adoption of new technologies and massive standing armies, in addition to the increase in “bang for the buck” (namely, the destructive effect of military investments), have kept military expenditures in a central role within modern fiscal regimes. Although the growth of welfare states in the twentieth century has forced some tradeoffs between “guns and butter,” the two types of spending have usually been complementary rather than competing. Thus, the size and spending of governments have increased. Even though the growth in welfare spending has abated somewhat since the 1980s, according to Peter Lindert welfare states will most likely still experience at least modest expansion in the future. Nor is it likely that military spending will be displaced as a major spending item in national budgets. Various international threats and the lack of international cooperation will ensure that military spending remains the main competitor to social expenditures.[45]


[1] I thank several colleagues for their helpful comments, especially Mark Harrison, Scott Jessee, Mary Valante, Ed Behrend, David Reid, as well as an anonymous referee and EH.Net editor Robert Whaples. The remaining errors and interpretations are solely my responsibility.

[2] See Paul Kennedy, The Rise and Fall of the Great Powers: Economic Change and Military Conflict from 1500 to 2000 (London: Fontana, 1989). Kennedy calls this type of approach, following David Landes, “large history.” On criticism of Kennedy’s “theory,” see especially Todd Sandler and Keith Hartley, The Economics of Defense, ed. Mark Perlman, Cambridge Surveys of Economic Literature (Cambridge: Cambridge University Press, 1995) and the studies listed in it. Other examples of long-run explanations can be found in, e.g., Maurice Pearton, The Knowledgeable State: Diplomacy, War, and Technology since 1830 (London: Burnett Books: Distributed by Hutchinson, 1982) and William H. McNeill, The Pursuit of Power: Technology, Armed Force, and Society since A.D. 1000 (Chicago: University of Chicago Press, 1982).

[3] Jari Eloranta, “Kriisien ja konfliktien tutkiminen kvantitatiivisena ilmiönä: Poikkitieteellisyyden haaste suomalaiselle sotahistorian tutkimukselle (The Study of Crises and Conflicts as Quantitative Phenomenon: The Challenge of Interdisciplinary Approaches to Finnish Study of Military History),” in Toivon historia – Toivo Nygårdille omistettu juhlakirja, ed. Kalevi Ahonen, et al. (Jyväskylä: Gummerus Kirjapaino Oy, 2003a).

[4] See Mark Harrison, ed., The Economics of World War II: Six Great Powers in International Comparisons (Cambridge, UK: Cambridge University Press, 1998b). Classic studies of this type are Alan Milward’s works on the European war economies; see e.g. Alan S. Milward, The German Economy at War (London: Athlon Press, 1965) and Alan S. Milward, War, Economy and Society 1939-1945 (London: Allen Lane, 1977).

[5] Sandler and Hartley, The Economics of Defense, xi; Jari Eloranta, “Different Needs, Different Solutions: The Importance of Economic Development and Domestic Power Structures in Explaining Military Spending in Eight Western Democracies during the Interwar Period” (Licentiate Thesis, University of Jyväskylä, 1998).

[6] See Jari Eloranta, “External Security by Domestic Choices: Military Spending as an Impure Public Good among Eleven European States, 1920-1938” (Dissertation, European University Institute, 2002) for details.

[7] Ibid.

[8] Daniel S. Geller and J. David Singer, Nations at War: A Scientific Study of International Conflict, vol. 58, Cambridge Studies in International Relations (Cambridge: Cambridge University Press, 1998), e.g. 1-7.

[9] See e.g. Jack S. Levy, “Theories of General War,” World Politics 37, no. 3 (1985). For an overview, see especially Geller and Singer, Nations at War: A Scientific Study of International Conflict. A classic study of war from the holistic perspective is Quincy Wright, A Study of War (Chicago: University of Chicago Press, 1942). See also Geoffrey Blainey, The Causes of War (New York: Free Press, 1973). On rational explanations of conflicts, see James D. Fearon, “Rationalist Explanations for War,” International Organization 49, no. 3 (1995).

[10] Charles Tilly, Coercion, Capital, and European States, AD 990-1990 (Cambridge, MA: Basil Blackwell, 1990), 6-14.

[11] For more, see especially ibid., Chapters 1 and 2.

[12] George Modelski and William R. Thompson, Leading Sectors and World Powers: The Coevolution of Global Politics and Economics, Studies in International Relations (Columbia, SC: University of South Carolina Press, 1996), 14-40. George Modelski and William R. Thompson, Seapower in Global Politics, 1494-1993 (Houndmills, UK: Macmillan Press, 1988).

[13] Kennedy, The Rise and Fall of the Great Powers: Economic Change and Military Conflict from 1500 to 2000, xiii. On specific criticism, see e.g. Jari Eloranta, “Military Competition between Friends? Hegemonic Development and Military Spending among Eight Western Democracies, 1920-1938,” Essays in Economic and Business History XIX (2001).

[14] Eloranta, “External Security by Domestic Choices: Military Spending as an Impure Public Good among Eleven European States, 1920-1938,” Sandler and Hartley, The Economics of Defense.

[15] Brian M. Pollins and Randall L. Schweller, “Linking the Levels: The Long Wave and Shifts in U.S. Foreign Policy, 1790- 1993,” American Journal of Political Science 43, no. 2 (1999), e.g. 445-446. E.g. Alex Mintz and Chi Huang, “Guns versus Butter: The Indirect Link,” American Journal of Political Science 35, no. 1 (1991) suggest an indirect (negative) growth effect via investment at a lag of at least five years.

[16] Carolyn Webber and Aaron Wildavsky, A History of Taxation and Expenditure in the Western World (New York: Simon and Schuster, 1986).

[17] He outlines most of the following in Richard Bonney, “Introduction,” in The Rise of the Fiscal State in Europe c. 1200-1815, ed. Richard Bonney (Oxford: Oxford University Press, 1999b).

[18] Mancur Olson, “Dictatorship, Democracy, and Development,” American Political Science Review 87, no. 3 (1993).

[19] On the British Empire, see especially Niall Ferguson, Empire: The Rise and Demise of the British World Order and the Lessons for Global Power (New York: Basic Books, 2003). Ferguson has also tackled the issue of a possible American empire in a more polemical Niall Ferguson, Colossus: The Price of America’s Empire (New York: Penguin Press, 2004).

[20] Ferguson outlines his analytical framework most concisely in Niall Ferguson, The Cash Nexus: Money and Power in the Modern World, 1700-2000 (New York: Basic Books, 2001), especially Chapter 1.

[21] Webber and Wildavsky, A History of Taxation and Expenditure in the Western World, 39-67. See also McNeill, The Pursuit of Power: Technology, Armed Force, and Society since A.D. 1000.

[22] McNeill, The Pursuit of Power: Technology, Armed Force, and Society since A.D. 1000, 9-12.

[23] Webber and Wildavsky, A History of Taxation and Expenditure in the Western World.

[24] This interpretation of early medieval warfare and societies, including the concept of feudalism, has been challenged in more recent military history literature. See especially John France, “Recent Writing on Medieval Warfare: From the Fall of Rome to c. 1300,” Journal of Military History 65, no. 2 (2001).

[25] Webber and Wildavsky, A History of Taxation and Expenditure in the Western World, McNeill, The Pursuit of Power: Technology, Armed Force, and Society since A.D. 1000. See also Richard Bonney, ed., The Rise of the Fiscal State in Europe c. 1200-1815 (Oxford: Oxford University Press, 1999c).

[26] Ferguson, The Cash Nexus: Money and Power in the Modern World, 1700-2000, Tilly, Coercion, Capital, and European States, AD 990-1990, Jari Eloranta, “National Defense,” in The Oxford Encyclopedia of Economic History, ed. Joel Mokyr (Oxford: Oxford University Press, 2003b). See also Modelski and Thompson, Seapower in Global Politics, 1494-1993.

[27] Tilly, Coercion, Capital, and European States, AD 990-1990, 165, Henry Kamen, “The Economic and Social Consequences of the Thirty Years’ War,” Past and Present (April 1968).

[28] Eloranta, “National Defense,” Henry Kamen, Empire: How Spain Became a World Power, 1492-1763, 1st American ed. (New York: HarperCollins, 2003), Douglass C. North, Institutions, Institutional Change, and Economic Performance (New York.: Cambridge University Press, 1990).

[29] Richard Bonney, “France, 1494-1815,” in The Rise of the Fiscal State in Europe c. 1200-1815, ed. Richard Bonney (Oxford: Oxford University Press, 1999a). War expenditure percentages (for the seventeenth and eighteenth centuries) were calculated using the so-called Forbonnais (and Bonney) database(s), available from European State Finance Database: http://www.le.ac.uk/hi/bon/ESFDB/RJB/FORBON/forbon.html and should be considered only illustrative.

[30] Marjolein ’t Hart, “The United Provinces, 1579-1806,” in The Rise of the Fiscal State in Europe c. 1200-1815, ed. Richard Bonney (Oxford: Oxford University Press, 1999). See also Ferguson, The Cash Nexus.

[31] See especially McNeill, The Pursuit of Power.

[32] Eloranta, “External Security by Domestic Choices: Military Spending as an Impure Public Good Among Eleven European States, 1920-1938,” Eloranta, “National Defense.” See also Ferguson, The Cash Nexus. On the military spending patterns of Great Powers in particular, see J. M. Hobson, “The Military-Extraction Gap and the Wary Titan: The Fiscal Sociology of British Defence Policy 1870-1914,” Journal of European Economic History 22, no. 3 (1993).

[33] The practice of total war, of course, is as old as civilizations themselves, ranging from the Punic Wars to the more modern conflicts. Here total war refers to the twentieth-century connotation of this term, embodying the use of all economic, political, and military might of a nation to destroy another in war. Therefore, even though the destruction of Carthage certainly qualifies as an action of total war, it is only in the nineteenth and twentieth centuries that this type of warfare and strategic thinking comes to full fruition. For example, the famous ancient military genius Sun Tzu advocated caution and planning in warfare, rather than using all means possible to win a war: “Thus, those skilled in war subdue the enemy’s army without battle. They capture his cities without assaulting them and overthrow his state without protracted operations.” Sun Tzu, The Art of War (Oxford: Oxford University Press, 1963), 79. With the ideas put forth by Clausewitz (see Carl von Clausewitz, On War (London: Penguin Books, 1982), e.g. Book Five, Chapter II) in the nineteenth century, the French Revolution, and Napoleon, the nature of warfare began to change. Clausewitz’s absolute war did not go as far as prescribing indiscriminate slaughter or other ruthless means to subdue civilian populations, but it did contribute to the new understanding of the means of warfare and military strategy in the industrial age. The generals and despots of the twentieth century drew their own conclusions, and thus total war came to include not only subjugating the domestic economy to the needs of the war effort but also propaganda, destruction of civilian (economic) targets, and genocide.

[34] Rondo Cameron and Larry Neal, A Concise Economic History of the World: From Paleolithic Times to the Present, 4th ed. (Oxford: The Oxford University Press, 2003), 339. Thus, the estimate in e.g. Eloranta, “National Defense” is a hypothetical minimum estimate originally expressed in Gerard J. de Groot, The First World War (New York: Palgrave, 2001).

[35] See Table 13 in Stephen Broadberry and Mark Harrison, “The Economics of World War I: An Overview,” in The Economics of World War I, ed. Stephen Broadberry and Mark Harrison (Cambridge: Cambridge University Press, forthcoming 2005). The figures are, as the authors point out, only tentative.

[36] Eloranta, “External Security by Domestic Choices: Military Spending as an Impure Public Good Among Eleven European States, 1920-1938”, Eloranta, “National Defense”, Webber and Wildavsky, A History of Taxation and Expenditure in the Western World.

[37] Eloranta, “National Defense”.

[38] Mark Harrison, “The Economics of World War II: An Overview,” in The Economics of World War II: Six Great Powers in International Comparisons, ed. Mark Harrison (Cambridge, UK: Cambridge University Press, 1998a), Eloranta, “National Defense.”

[39] Cameron and Neal, A Concise Economic History of the World, Harrison, “The Economics of World War II: An Overview,” Broadberry and Harrison, “The Economics of World War I: An Overview.” Again, the same caveats apply to the Harrison-Broadberry figures as disclaimed earlier.

[40] Eloranta, “National Defense”.

[41] Mark Harrison, “Soviet Industry and the Red Army under Stalin: A Military-Industrial Complex?” Les Cahiers du Monde russe 44, no. 2-3 (2003), Paul A.C. Koistinen, The Military-Industrial Complex: A Historical Perspective (New York: Praeger Publishers, 1980).

[42] Robert Higgs, “The Cold War Economy: Opportunity Costs, Ideology, and the Politics of Crisis,” Explorations in Economic History 31, no. 3 (1994); Ruben Trevino and Robert Higgs, “Profits of U.S. Defense Contractors,” Defense Economics 3, no. 3 (1992): 211-18.

[43] Eloranta, “National Defense”.

[44] See more Eloranta, “Military Competition between Friends? Hegemonic Development and Military Spending among Eight Western Democracies, 1920-1938.”

[45] For more, see especially Ferguson, The Cash Nexus; Peter H. Lindert, Growing Public: Social Spending and Economic Growth since the Eighteenth Century, 2 vols., vol. 1 (Cambridge: Cambridge University Press, 2004). On tradeoffs, see e.g. David R. Davis and Steve Chan, “The Security-Welfare Relationship: Longitudinal Evidence from Taiwan,” Journal of Peace Research 27, no. 1 (1990), Herschel I. Grossman and Juan Mendoza, “Butter and Guns: Complementarity between Economic and Military Competition,” Economics of Governance, no. 2 (2001), Alex Mintz, “Guns Versus Butter: A Disaggregated Analysis,” The American Political Science Review 83, no. 4 (1989), Mintz and Huang, “Guns versus Butter: The Indirect Link,” Kevin Narizny, “Both Guns and Butter, or Neither: Class Interests in the Political Economy of Rearmament,” American Political Science Review 97, no. 2 (2003).

Citation: Eloranta, Jari. “Military Spending Patterns in History”. EH.Net Encyclopedia, edited by Robert Whaples. September 16, 2005. URL http://eh.net/encyclopedia/military-spending-patterns-in-history/

Labor Unions in the United States

Gerald Friedman, University of Massachusetts at Amherst

Unions and Collective Action

In capitalist labor markets, which developed in the nineteenth century in the United States and Western Europe, workers exchange their time and effort for wages. But even while laboring under the supervision of others, wage earners have never been slaves, because they have recourse against abuse. They can quit to seek better employment. Or they are free to join with others to take collective action, forming political movements or labor unions. By the end of the nineteenth century, labor unions and labor-oriented political parties had become major forces influencing wages and working conditions. This article explores the nature and development of labor unions in the United States. It reviews the growth and recent decline of the American labor movement and makes comparisons with the experience of foreign labor unions to clarify particular aspects of the history of labor unions in the United States.

Unions and the Free-Rider Problem

Quitting, or exit, is straightforward, a simple act for individuals unhappy with their employment. By contrast, collective action, such as forming a labor union, is always difficult because it requires that individuals commit themselves to produce “public goods” enjoyed by all, including those who “free ride” rather than contribute to the group effort. If the union succeeds, free riders receive the same benefits as do activists; but if it fails, the activists suffer while those who remained outside lose nothing. Because individualist logic leads workers to “free ride,” unions cannot grow by appealing to individual self-interest (Hirschman, 1970; 1982; Olson, 1966; Gamson, 1975).

Union Growth Comes in Spurts

Free riding is a problem for all collective movements, including Rotary Clubs, the Red Cross, and the Audubon Society. But unionization is especially difficult because unions must attract members against the opposition of often-hostile employers. Workers who support unions sacrifice money and risk their jobs, even their lives. Success comes only when large numbers simultaneously follow a different rationality. Unions must persuade whole groups to abandon individualism to throw themselves into the collective project. Rarely have unions grown incrementally, gradually adding members. Instead, workers have joined unions en masse in periods of great excitement, attracted by what the French sociologist Emile Durkheim labeled “collective effervescence” or the joy of participating in a common project without regard for individual interest. Growth has come in spurts, short periods of social upheaval punctuated by major demonstrations and strikes when large numbers see their fellow workers publicly demonstrating a shared commitment to the collective project. Union growth, therefore, is concentrated in short periods of dramatic social upheaval; in the thirteen countries listed in Tables 1 and 2, 67 percent of growth comes in only five years, and over 90 percent in only ten years. As Table 3 shows, in these thirteen countries, unions grew by over 10 percent a year in years with the greatest strike activity but by less than 1 percent a year in the years with the fewest strikers (Friedman, 1999; Shorter and Tilly, 1974; Zolberg, 1972).

Table 1
Union Members per 100 Nonagricultural Workers, 1880-1985: Selected Countries

Year Canada US Austria Denmark France Italy Germany Netherlands Norway Sweden UK Australia Japan
1880 n.a. 1.8 n.a. n.a. n.a. n.a. n.a. n.a. n.a. n.a. n.a. n.a. n.a.
1900 4.6 7.5 n.a. 20.8 5.0 n.a. 7.0 n.a. 3.4 4.8 12.7 n.a. n.a.
1914 8.6 10.5 n.a. 25.1 8.1 n.a. 16.9 17.0 13.6 9.9 23.0 32.8 n.a.
1928 11.6 9.9 41.7 39.7 8.0 n.a. 32.5 26.0 17.4 32.0 25.6 46.2 n.a.
1939 10.9 20.7 n.a. 51.8 22.4 n.a. n.a. 32.5 57.0 53.6 31.6 39.2 n.a.
1947 24.6 31.4 64.6 55.9 40.0 n.a. 29.1 40.4 55.1 64.6 44.5 52.9 45.3
1950 26.3 28.4 62.3 58.1 30.2 49.0 33.1 43.0 58.4 67.7 44.1 56.0 46.2
1960 28.3 30.4 63.4 64.4 20.0 29.6 37.1 41.8 61.5 73.0 44.2 54.5 32.2
1975 35.6 26.4 58.5 66.6 21.4 50.1 38.2 39.1 60.5 87.2 51.0 54.7 34.4
1985 33.7 18.9 57.8 82.2 14.5 51.0 39.3 28.6 65.3 103.0 44.2 51.5 28.9

Note: This table shows the unionization rate, the share of nonagricultural workers belonging to unions, in different countries in different years, 1880-1985. Because union membership often includes unemployed and retired union members, it may exceed the number of employed workers, giving a unionization rate greater than 100 percent.
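Expressed as a formula (a minimal sketch of the definition in the note above; the symbols are introduced here only for illustration and do not appear in the source), the rate for country c in year t is

\[ u_{c,t} = 100 \times \frac{\text{union members}_{c,t}}{\text{nonagricultural workers}_{c,t}} , \]

and because the numerator can count unemployed and retired members, it can exceed the number of employed workers, so the rate can exceed 100 (as with Sweden’s 103.0 in 1985).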

Table 2
Union Growth in Peak and Other Years

Country; Years; Membership Growth: Top 5 Years, Top 10 Years, All Years; Share of Growth (%): 5 Years, 10 Years; Excess Growth (%): 5 Years, 10 Years
Australia 83 720 000 1 230 000 3 125 000 23.0 39.4 17.0 27.3
Austria 52 5 411 000 6 545 000 3 074 000 176.0 212.9 166.8 194.4
Canada 108 855 000 1 532 000 4 028 000 21.2 38.0 16.6 28.8
Denmark 85 521 000 795 000 1 883 000 27.7 42.2 21.8 30.5
France 92 6 605 000 7 557 000 2 872 000 230.0 263.1 224.5 252.3
Germany 82 10 849 000 13 543 000 9 120 000 119.0 148.5 112.9 136.3
Italy 38 3 028 000 4 671 000 3 713 000 81.6 125.8 68.4 99.5
Japan 43 4 757 000 6 692 000 8 983 000 53.0 74.5 41.3 51.2
Netherlands 71 671 000 1 009 000 1 158 000 57.9 87.1 50.9 73.0
Norway 85 304 000 525 000 1 177 000 25.8 44.6 19.9 32.8
Sweden 99 633 000 1 036 000 3 859 000 16.4 26.8 11.4 16.7
UK 96 4 929 000 8 011 000 8 662 000 56.9 92.5 51.7 82.1
US 109 10 247 000 14 796 000 22 293 000 46.0 66.4 41.4 57.2
Total 1043 49 530 000 67 942 000 73 947 000 67.0 91.9 60.7 79.4

Note: This table shows that most union growth comes in a few years. Union membership growth (net of membership losses) has been calculated for each country for each year. Years were then sorted for each country according to membership growth. This table reports growth for each country for the five and the ten years with the fastest growth and compares this with total growth over all years for which data are available. Excess growth has been calculated as the difference between the share of growth in the top five or ten years and the share that would have come in these periods if growth had been distributed evenly across all years.
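As a worked check of this calculation (the symbols are introduced here only for illustration and do not appear in the source), take the United States row: growth in the five fastest years was 10,247,000 out of total growth of 22,293,000 over 109 years, so

\[ \text{Share}_{5} = 100 \times \frac{10{,}247{,}000}{22{,}293{,}000} \approx 46.0\%, \qquad \text{Even share}_{5} = 100 \times \frac{5}{109} \approx 4.6\%, \qquad \text{Excess}_{5} = 46.0 - 4.6 = 41.4\%, \]

which matches the figures reported in the table.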

Note that years of rapid growth are not necessarily contiguous. There can be more growth in years of rapid growth than over the entire period. This is because some growth is only temporary, as years of rapid growth are followed by years of decline.

Sources: Bain and Price (1980): 39, Visser (1989)

Table 3
Impact of Strike Activity on Union Growth
Average Union Membership Growth in Years Sorted by Proportion of Workers Striking

Country; Striker-Rate Quartile: Lowest, Second, Third, Highest; Change (Highest minus Lowest)
Australia 5.1 2.5 4.5 2.7 -2.4
Austria 0.5 -1.9 0.4 2.4 1.9
Canada 1.3 1.9 2.3 15.8 14.5
Denmark 0.3 1.1 3.0 11.3 11.0
France 0.0 2.1 5.6 17.0 17.0
Germany -0.2 0.4 1.3 20.3 20.5
Italy -2.2 -0.3 2.3 5.8 8.0
Japan -0.2 5.1 3.0 4.3 4.5
Netherlands -0.9 1.2 3.5 6.3 7.2
Norway 1.9 4.3 8.6 10.3 8.4
Sweden 2.5 3.2 5.9 16.9 14.4
UK 1.7 1.7 1.9 3.4 1.7
US -0.5 0.6 2.1 19.9 20.4
Total: Average 0.72 1.68 3.42 10.49 9.78

Note: This table shows that, except in Australia, unions grew fastest in years with large numbers of strikers. The proportion of workers striking was calculated for each country for each year as the number of strikers divided by the nonagricultural labor force. Years were then sorted into quartiles, each including one-fourth of the years, according to this striker rate statistic. The average annual union membership growth rate was then calculated for each quartile as the mean of the growth rates in the years in that quartile.
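In symbols (a sketch of the procedure described in the note; the notation is introduced here only for illustration), the striker rate for country c in year t is

\[ s_{c,t} = \frac{\text{strikers}_{c,t}}{\text{nonagricultural labor force}_{c,t}} . \]

Years are ranked by this rate and divided into four quartiles, each cell reports the mean annual union membership growth rate over the years in that quartile, and the Change column is the highest-quartile mean minus the lowest-quartile mean (for the United States, for example, 19.9 - (-0.5) = 20.4).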

Rapid Union Growth Provokes a Hostile Reaction

These periods of rapid union growth end because social upheaval provokes a hostile reaction. Union growth leads employers to organize, to discover their own collective interests. Emulating their workers, they join together to discharge union activists, to support each other in strikes, and to demand government action against unions. This rising opposition ends periods of rapid union growth, beginning a new phase of decline followed by longer periods of stagnant membership. The weakest unions formed during the union surge succumb to the post-boom reaction; but if enough unions survive they leave a movement larger and broader than before.

Early Labor Unions, Democrats and Socialists

Guilds

Before modern labor unions, guilds united artisans and their employees. Craftsmen did the work of early industry, “masters” working beside “journeymen” and apprentices in small workplaces. Throughout the cities and towns of medieval Europe, guilds regulated production by setting minimum prices and quality, and capping wages, employment, and output. Controlled by independent craftsmen, “masters” who employed journeymen and trained apprentices, guilds regulated industry to protect the comfort and status of the masters. Apprentices and journeymen benefited from guild restrictions only when they advanced to master status.

Guild power was gradually undermined in the early-modern period. Employing workers outside the guild system, including rural workers and semiskilled workers in large urban workplaces, merchants transformed medieval industry. By the early 1800s, few could anticipate moving up to become master artisans or to own their own establishments. Instead, facing the prospect of a lifetime of wage labor punctuated by periods of unemployment, some wage earners began to seek a collective regulation of their individual employment (Thompson, 1966; Scott, 1974; Dawley, 1976; Sewell, 1980; Wilentz, 1984; Blewett, 1988).

The labor movement within the broader movement for democracy

This new wage-labor regime led to the modern labor movement. Organizing propertyless workers who were laboring for capitalists, organized labor formed one wing of a broader democratic movement struggling for equality and for the rights of commoners (Friedman, 1998). Within the broader democratic movement for legal and political equality, labor fought the rise of a new aristocracy that controlled the machinery of modern industry just as the old aristocracy had monopolized land. Seen in this light, the fundamental idea of the labor movement, that employees should have a voice in the management of industry, is comparable to the demand that citizens should have a voice in the management of public affairs. Democratic values do not, by any means, guarantee that unions will be fair and evenhanded to all workers. In the United States, by reserving good jobs for their members, unions of white men sometimes contributed to the exploitation of women and nonwhites. Democracy only means that exploitation will be carried out at the behest of a political majority rather than at the whim of an individual capitalist (Roediger, 1991; Arnesen, 2001; Foner, 1974; 1979; Milkman, 1985).

Craft unions’ strategy

Workers formed unions to voice their interests against their employers, and also against other workers. Rejecting broad alliances along class lines, alliances uniting workers on the basis of their lack of property and their common relationship with capitalists, craft unions followed a narrow strategy, uniting workers with the same skill against both the capitalists and workers in different trades. By using their monopoly of knowledge of the work process to restrict access to the trade, craft unions could command a strong bargaining position that was enhanced by alliances with other craftsmen to finance long strikes. A narrow craft strategy was followed by the first successful unions throughout Europe and America, especially in small urban shops using technologies that still depended on traditional specialized skills, including printers, furniture makers, carpenters, gold beaters and jewelry makers, iron molders, engineers, machinists, and plumbers. Craft unions’ characteristic action was the small, local strike, the concerted withdrawal of labor by a few workers critical to production. Typically, craft unions would present a set of demands to local employers on a “take-it-or-leave-it” basis; either the employers accepted the demands or they fought a contest of strength to determine whether they could do without the skilled workers for longer than the workers could manage without their jobs.

The craft strategy offered little to the great masses of workers. Because it depended on restricting access to trades, it could not be applied by common laborers, who were untrained, nor by semi-skilled employees in modern mass-production establishments whose employers trained them on the job. Shunned by craft unions, most women and African-Americans in the United States were crowded into nonunion occupations. Some sought employment as strikebreakers in occupations otherwise monopolized by craft unions controlled by white, native-born males (Washington, 1913; Whatley, 1993).

Unions among unskilled workers

To form unions, the unskilled needed a strategy of the weak that would utilize their numbers rather than specialized knowledge and accumulated savings. Inclusive unions have succeeded but only when they attract allies among politicians, state officials, and the affluent public. Sponsoring unions and protecting them from employer repression, allies can allow organization among workers without specialized skills. When successful, inclusive unions can grow quickly in mass mobilization of common laborers. This happened, for example, in Germany at the beginning of the Weimar Republic, during the French Popular Front of 1936-37, and in the United States during the New Deal of the 1930s. These were times when state support rewarded inclusive unions for organizing the unskilled. The bill for mass mobilization usually came later. Each boom was followed by a reaction against the extensive promises of the inclusive labor movement when employers and conservative politicians worked to put labor’s genie back in the bottle.

Solidarity and the Trade Unions

Unionized occupations of the late 1800s

By the late nineteenth century, trade unions had gained a powerful position in several skilled occupations in the United States and elsewhere. Outside of mining, craft unions were formed among well-paid skilled craft workers — workers whom historian Eric Hobsbawm labeled the “labor aristocracy” (Hobsbawm, 1964; Geary, 1981). In 1892, for example, nearly two-thirds of British coal miners were union members, as were a third of machinists, millwrights and metal workers, cobblers and shoe makers, glass workers, printers, mule spinners, and construction workers (Bain and Price, 1980). French miners had formed relatively strong unions, as had skilled workers in the railroad operating crafts, printers, jewelry makers, cigar makers, and furniture workers (Friedman, 1998). Cigar makers, printers, furniture workers, and some construction and metal craftsmen took the lead in early German unions (Kocka, 1986). In the United States, there were about 160,000 union members in 1880, of whom 120,000 belonged to craft unions, such as carpenters, engineers, furniture makers, stone-cutters, iron puddlers and rollers, printers, and several railroad crafts. Another 40,000 belonged to “industrial” unions organized without regard for trade. About half of these were coal miners; most of the rest belonged to the Knights of Labor (KOL) (Friedman, 1999).

The Knights of Labor

In Europe, these craft organizations were to be the basis of larger, mass unions uniting workers without regard for trade or, in some cases, industry (Ansell, 2001). This process began in the United States in the 1880s when craft workers in the Knights of Labor reached out to organize more broadly. Formed by skilled male, native-born garment cutters in 1869, the Knights of Labor would seem an odd candidate to mobilize the mass of unskilled workers. But from a few Philadelphia craft workers, the Knights grew to become a national and even international movement. Membership reached 20,000 in 1881 and grew to 100,000 in 1885. Then, in 1886, when successful strikes on some western railroads attracted a mass of previously unorganized unskilled workers, the KOL grew to a peak membership of a million workers. For a brief time, the Knights of Labor was a general movement of the American working class (Ware, 1929; Voss, 1993).

The KOL became a mass movement with an ideology and program that united workers without regard for occupation, industry, race or gender (Hattam, 1993). Never espousing Marxist or socialist doctrines, the Knights advanced an indigenous form of popular American radicalism, a “republicanism” that would overcome social problems by extending democracy to the workplace. Valuing citizens according to their work, their productive labor, the Knights were true heirs of earlier bourgeois radicals. Open to all producers, including farmers and other employers, they excluded only those seen to be parasitic on the labor of producers — liquor dealers, gamblers, bankers, stock manipulators and lawyers. Welcoming all others without regard for race, gender, or skill, the KOL was the first American labor union to attract significant numbers of women, African-Americans, and the unskilled (Foner, 1974; 1979; Rachleff, 1984).

The KOL’s strategy

In practice, most KOL local assemblies acted like craft unions. They bargained with employers, conducted boycotts, and called members out on strike to demand higher wages and better working conditions. But unlike craft unions that depended on the bargaining leverage of a few strategically positioned workers, the KOL’s tactics reflected its inclusive and democratic vision. Without a craft union’s resources or control over labor supply, the Knights sought to win labor disputes by widening them to involve political authorities and the outside public able to pressure employers to make concessions. Activists hoped that politicizing strikes would favor the KOL because its large membership would tempt ambitious politicians while its members’ poverty drew public sympathy.

In Europe, a strategy like that of the KOL succeeded in promoting the organization of inclusive unions. But it failed in the United States. Comparing the strike strategies of trade unions and the Knights provides insight into the survival and eventual success of the trade unions and their confederation, the American Federation of Labor (AFL), in late-nineteenth-century America. Seeking to transform industrial relations, local assemblies of the KOL struck frequently with large but short strikes involving skilled and unskilled workers. The Knights’ industrial leverage depended on political and social influence. It could succeed where trade unions would not go because the KOL strategy utilized numbers, the one advantage held by common laborers. But this strategy could succeed only where political authorities and the outside public might sympathize with labor. Later industrial and regional unions tried the same strategy, conducting short but large strikes. By demonstrating sufficient numbers and commitment, French and Italian unions, for example, would win from state officials concessions they could not force from recalcitrant employers (Shorter and Tilly, 1974; Friedman, 1998). But compared with the small strikes conducted by craft unions, “solidarity” strikes must walk a fine line, aggressive enough to draw attention but not so threatening as to provoke a hostile reaction from the authorities. Such a reaction doomed the KOL.

The Knights’ collapse in 1886

In 1886, the Knights became embroiled in a national general strike demanding an eight-hour workday, the world’s first May Day. This led directly to the collapse of the KOL. The May Day strike wave in 1886 and the bombing at Haymarket Square in Chicago provoked a “red scare” of historic proportions driving membership down to half a million in September 1887. Police in Chicago, for example, broke up union meetings, seized union records, and even banned the color red from advertisements. The KOL responded politically, sponsoring a wave of independent labor parties in the elections of 1886 and supporting the Populist Party in 1890 (Fink, 1983). But even relatively strong showings by these independent political movements could not halt the KOL’s decline. By 1890, its membership had fallen by half again, and it fell to under 50,000 members by 1897.

Unions and radical political movements in Europe in the late 1800s

The KOL spread outside the United States, attracting an energetic following in Canada, the United Kingdom, France, and other European countries. Industrial and regional unionism fared better in these countries than in the United States. Most German unionists belonged to industrial unions allied with the Social Democratic Party. Under Marxist leadership, the unions and the party formed a centralized labor movement to maximize labor’s political leverage. English union membership was divided between members of a stable core of craft unions and a growing membership in industrial and regional unions based in mining, cotton textiles, and transportation. Allied with political radicals, these industrial and regional unions formed the backbone of the Labor Party, which held the balance of power in British politics after 1906.

The most radical unions were found in France. By the early 1890s, revolutionary syndicalists controlled the national union center, the Confédération générale du travail (or CGT), which they tried to use as a base for a revolutionary general strike in which the workers would seize economic and political power. Consolidating craft unions into industrial and regional unions (the Bourses du travail), syndicalists conducted large strikes designed to demonstrate labor’s solidarity. Paradoxically, the syndicalists’ large strikes were effective because they provoked friendly government mediation. In the United States, state intervention was fatal for labor because government and employers usually united to crush labor radicalism. But in France, officials were more concerned to maintain a center-left coalition with organized labor against reactionary employers opposed to the Third Republic. State intervention helped French unionists to win concessions beyond any they could win with economic leverage. A radical strategy of inclusive industrial and regional unionism could succeed in France because the political leadership of the early Third Republic needed labor’s support against powerful economic and social groups who would replace the Republic with an authoritarian regime. Reminded daily of the importance of republican values and the coalition that sustained the Republic, French state officials promoted collective bargaining and labor unions. Ironically, it was the support of liberal state officials that allowed French union radicalism to succeed, and allowed French unions to grow faster than American unions and to organize the semi-skilled workers in the large establishments of France’s modern industries (Friedman, 1997; 1998).

The AFL and American Exceptionalism

By 1914, unions outside the United States had found that broad organization reduced the availability of strike breakers, advanced labor’s political goals, and could lead to state intervention on behalf of the unions. The United States was becoming exceptional, the only advanced capitalist country without a strong, united labor movement. The collapse of the Knights of Labor cleared the way for the AFL. Formed in 1881 as the Federation of Organized Trades and Labor Unions, the AFL was organized to uphold the narrow interests of craft workers against the general interests of common laborers in the KOL. In practice, AFL craft unions were little labor monopolies, able to win concessions because of their control over uncommon skills and because their narrow strategy did not frighten state officials. Many early AFL leaders, notably the AFL’s founding president Samuel Gompers and P. J. McGuire of the Carpenters, had been active in radical political movements. But after 1886, they learned to reject political involvements for fear that radicalism might antagonize state officials or employers and provoke repression.

AFL successes in the early twentieth-century

Entering the twentieth century, the AFL appeared to have a winning strategy. Union membership rose sharply in the late 1890s, doubling between 1896 and 1900 and again between 1900 and 1904. Fewer than 5 percent of nonagricultural wage earners belonged to labor unions in 1895, but this share rose to 7 percent in 1900 and 13 percent in 1904, including over 21 percent of industrial wage earners (workers outside of commerce, government, and the professions). Half of coal miners in 1904 belonged to an industrial union (the United Mine Workers of America), but otherwise, most union members belonged to craft organizations, including nearly half the printers, and a third of cigar makers, construction workers and transportation workers. As shown in Table 4, other pockets of union strength included skilled workers in the metal trades, leather, and apparel. These craft unions had demonstrated their economic power, raising wages by around 15 percent and reducing hours worked (Friedman, 1991; Mullin, 1993).

Table 4
Unionization rates by industry in the United States, 1880-2000

Industry 1880 1910 1930 1953 1974 1983 2000
Agriculture Forestry Fishing 0.0 0.1 0.4 0.6 4.0 4.8 2.1
Mining 11.2 37.7 19.8 64.7 34.7 21.1 10.9
Construction 2.8 25.2 29.8 83.8 38.0 28.0 18.3
Manufacturing 3.4 10.3 7.3 42.4 37.2 27.9 14.8
Transportation Communication Utilities 3.7 20.0 18.3 82.5 49.8 46.4 24.0
Private Services 0.1 3.3 1.8 9.5 8.6 8.7 4.8
Public Employment 0.3 4.0 9.6 11.3 38.0 31.1 37.5
All Private 1.7 8.7 7.0 31.9 22.4 18.4 10.9
All 1.7 8.5 7.1 29.6 24.8 20.4 14.1

Note: This table shows the unionization rate, the share of workers belonging to unions, in different industries in the United States, 1880-2000.

Sources: 1880 and 1910: Friedman (1999): 83; 1930: Union membership from Wolman (1936); employment from United States, Bureau of the Census (1932); 1953: Troy (1957); 1974, 1986, 2000: United States, Current Population Survey.

Limits to the craft strategy

Even at this peak, the craft strategy had clear limits. Craft unions succeeded only in a declining part of American industry among workers still performing traditional tasks where training was through apprenticeship programs controlled by the workers themselves. By contrast, there were few unions in the rapidly growing industries employing semi-skilled workers. Nor was the AFL able to overcome racial divisions and state opposition to organize in the South (Friedman, 2000; Letwin, 1998). Compared with the KOL in the early 1880s, or with France’s revolutionary syndicalist unions, American unions were weak in steel, textiles, chemicals, paper and metal fabrication using technologies without traditional craft skills. AFL strongholds included construction, printing, cigar rolling, apparel cutting and pressing, and custom metal engineering, trades that employed craft workers in relatively small establishments little changed from 25 years earlier (see Table 4).

Dependent on skilled craftsmen’s economic leverage, the AFL was poorly organized to battle large, technologically dynamic corporations. For a brief time, the revolutionary Industrial Workers of the World (IWW), formed in 1905, organized semi-skilled workers in some mass production industries. But by 1914, it too had failed. It was state support that forced powerful French employers to accept unions. Without such assistance, no union strategy could force large American employers to accept unions.

Unions in the World War I Era

The AFL and World War I

For all its limits, it must be acknowledged that the AFL and its craft affiliates survived after their rivals flared up and died. The AFL formed a solid union movement among skilled craftsmen that, with favorable circumstances, could form the core of a broader union movement like those that developed in Europe after 1900. During World War I, the Wilson administration endorsed unionization and collective bargaining in exchange for union support for the war effort. AFL affiliates used state support to organize mass-production workers in shipbuilding, metal fabrication, meatpacking, and steel, doubling union membership between 1915 and 1919. But when Federal support ended with the war, employers mobilized to crush the nascent unions. The post-war union collapse has been attributed to the AFL’s failings. The larger truth is that American unions needed state support to overcome the entrenched power of capital. The AFL did not fail because of its deficient economic strategy; it failed because it had an ineffective political strategy (Friedman, 1998; Frank, 1994; Montgomery, 1987).

International effects of World War I

War gave labor extraordinary opportunities. Combatant governments rewarded pro-war labor leaders with positions in the expanded state bureaucracy and support for collective bargaining and unions. Union growth also reflected economic conditions when wartime labor shortages strengthened the bargaining position of workers and unions. Unions grew rapidly during and immediately after the war. British unions, for example, doubled their membership between 1914 and 1920, to enroll eight million workers, almost half the nonagricultural labor force (Bain and Price, 1980; Visser, 1989). Union membership tripled in Germany and Sweden, doubled in Canada, Denmark, the Netherlands, and Norway, and almost doubled in the United States (see Table 5 and Table 1). For twelve countries, membership grew by 121 percent between 1913 and 1920, including 119 percent growth in seven combatant countries and 160 percent growth in five neutral states.

Table 5
Impact of World War I on Union Membership Growth
Membership Growth in Wartime and After

12 Countries 7 Combatants 5 Neutrals
War-Time 1913 12 498 000 11 742 000 756 000
1920 27 649 000 25 687 000 1 962 000
Growth 1913-20: 121% 119% 160%
Post-war 1920 27 649 000
1929 18 149 000
Growth 1920-29: -34%

Shift toward the revolutionary left

Even before the war, frustration with the slow pace of social reform had led to a shift towards the revolutionary socialist and syndicalist left in Germany, the United Kingdom, and the United States (Nolan, 1981; Montgomery, 1987). In Europe, frustrations with rising prices, declining real wages and working conditions, and anger at catastrophic war losses fanned the flames of discontent into a raging conflagration. Compared with pre-war levels, the number of strikers rose ten or even twenty times after the war, including 2.5 million strikers in France in 1919 and 1920, compared with 200,000 strikers in 1913, 13 million German strikers, up from 300,000 in 1913, and 5 million American strikers, up from under 1 million in 1913. British Prime Minister Lloyd George warned in March 1919 that “The whole of Europe is filled with the spirit of revolution. There is a deep sense not only of discontent, but of anger and revolt among the workmen . . . The whole existing order in its political, social and economic aspects is questioned by the masses of the population from one end of Europe to the other” (quoted in Cronin, 1983: 22).

Impact of Communists

Inspired by the success of the Bolshevik revolution in Russia, revolutionary Communist Parties were organized throughout the world to promote revolution by organizing labor unions, strikes, and political protest. Communism was a mixed blessing for labor. The Communists included some of labor’s most dedicated activists and organizers who contributed greatly to union organization. But Communist help came at a high price. Secretive, domineering, intolerant of opposition, the Communists divided unions between their dwindling allies and a growing collection of outraged opponents. Moreover, they galvanized opposition, depriving labor of needed allies among state officials and the liberal bourgeoisie.

The “Lean Years”: Welfare Capitalism and the Open Shop

Aftermath of World War I

As with most great surges in union membership, the postwar boom was self-limiting. Helped by a sharp post-war economic contraction, employers and state officials ruthlessly drove back the radical threat, purging their workforce of known union activists and easily absorbing futile strikes during a period of rising unemployment. Such campaigns drove membership down by a third, from a peak of 26 million members in eleven countries in 1920 to fewer than 18 million in 1924. In Austria, France, Germany, and the United States, labor unrest contributed to the election of conservative governments; in Hungary, Italy, and Poland it led to the installation of anti-democratic dictatorships that ruthlessly crushed labor unions. Economic stagnation, state repression, and anti-union campaigns by employers prevented any union resurgence through the rest of the 1920s. By 1929, unions in these eleven countries had added only 30,000 members, one-fifth of one percent.

Injunctions and welfare capitalism

The 1920s was an especially dark period for organized labor in the United States, where weaknesses visible before World War I became critical failures. Labor’s opponents used fear of Communism to foment a post-war red scare that targeted union activists for police and vigilante violence. Hundreds of foreign-born activists were deported, and mobs led by the American Legion and the Ku Klux Klan broke up union meetings and destroyed union offices (see, for example, Frank, 1994: 104-5). Judges added law to the campaign against unions. Ignoring the intent of the Clayton Anti-Trust Act (1914), they used anti-trust law and injunctions against unions, forbidding activists from picketing or publicizing disputes, holding signs, or even enrolling new union members. Employers competed for their workers’ allegiance, offering paternalist welfare programs and systems of employee representation as substitutes for independent unions. They sought to build a nonunion industrial relations system around welfare capitalism (Cohen, 1990).

Stagnation and decline

After the promises of the war years, the defeat of postwar union drives in mass production industries like steel and meatpacking inaugurated a decade of union stagnation and decline. Membership fell by a third between 1920 and 1924. Unions survived only in the older trades where employment was usually declining. By 1924, they were almost completely eliminated from the dynamic industries of the second industrial revolution, including steel, automobiles, consumer electronics, chemicals, and rubber manufacture.

New Deals for Labor

Great Depression

The nonunion industrial relations system of the 1920s might have endured and produced a docile working class organized in company unions (Brody, 1985). But the welfare capitalism of the 1920s collapsed when the Great Depression of the 1930s exposed its weaknesses and undermined political support for the nonunion open shop. Between 1929 and 1933, real national income in the United States fell by one third, nonagricultural employment fell by a quarter, and unemployment rose from under 2 million in 1929 to 13 million in 1933, a quarter of the civilian labor force. Economic decline was nearly as great elsewhere, raising unemployment to over 15 percent in Austria, Canada, Germany, and the United Kingdom (Maddison, 1991: 260-61). Only the Soviet Union, with its authoritarian political economy, was largely spared the scourge of unemployment and economic collapse — a point emphasized by Communists throughout the 1930s and later. Depression discredited the nonunion industrial relations system by forcing welfare capitalists to renege on promises to stabilize employment and to maintain wages. Then, by ignoring protests from members of employee representation plans, welfare capitalists further exposed the fundamental weakness of their system. Lacking any independent support, paternalist promises had no standing but depended entirely on the variable good will of employers. And sometimes that was not enough (Cohen, 1990).

Depression-era political shifts

Voters, too, lost confidence in employers. The Great Depression discredited the old political economy. Even before Franklin Roosevelt’s election as President of the United States in 1932, American states enacted legislation restricting the rights of creditors and landlords, restraining the use of the injunction in labor disputes, and providing expanded relief for the unemployed (Ely, 1998; Friedman, 2001). European voters abandoned centrist parties, embracing extremists of both left and right, Communists and Fascists. In Germany, the Nazis won, but Popular Front governments uniting Communists and socialists with bourgeois liberals assumed power in other countries, including Sweden, France and Spain. (The Spanish Popular Front was overthrown by a Fascist rebellion that installed a dictatorship led by Francisco Franco.) Throughout there was an impulse to take public control over the economy because free market capitalism and orthodox finance had led to disaster (Temin, 1990).

Economic depression lowers union membership when unemployed workers drop their membership and employers use their stronger bargaining position to defeat union drives (Bain and Elsheikh, 1976). Indeed, union membership fell with the onset of the Great Depression but, contradicting the usual pattern, membership rebounded sharply after 1932 despite high unemployment, rising by over 76 percent in ten countries by 1938 (see Table 6 and Table 1). The fastest growth came in countries with openly pro-union governments. In France, where the Socialist Léon Blum led a Popular Front government, and in the United States, during Franklin Roosevelt’s New Deal, membership rose by 160 percent between 1933 and 1938. But membership grew by 33 percent in eight other countries even without openly pro-labor governments.

Table 6
Impact of the Great Depression and World War II on Union Membership Growth

11 Countries (no Germany) 10 Countries (no Austria)
Depression 1929 12 401 000 11 508 000
1933 11 455 000 10 802 000
Growth 1929-33 -7.6% -6.1%
Popular Front Period 1933 10 802 000
1938 19 007 000
Growth 1933-38 76.0%
Second World War 1938 19 007 000
1947 35 485 000
Growth 1938-47 86.7%

French unions and the Matignon agreements

French union membership rose from under 900,000 in 1935 to over 4,500,000 in 1937. The Popular Front’s victory in the elections of June 1936 precipitated a massive strike wave and the occupation of factories and workplaces throughout France. Remembered in movie, song and legend, the factory occupations were a nearly spontaneous uprising of French workers that brought France’s economy to a halt. Contemporaries were struck by the extraordinarily cheerful feelings that prevailed, the “holiday feeling” and sense that the strikes were a new sort of non-violent revolution that would overturn hierarchy and replace capitalist authoritarianism with true social democracy (Phillippe and Dubief, 1993: 307-8). After Blum assumed office, he brokered the Matignon agreements, named after the premier’s official residence in Paris. Union leaders and heads of France’s leading employer associations agreed to end the strikes and occupations in exchange for wage increases of around 15 percent, a 40-hour workweek, annual vacations, and union recognition. Codified in statute by the Popular Front government, the agreements gave French unions new rights and protections from employer repression. Only then did workers flock into unions. In a few weeks, French unions gained four million members, with the fastest growth in the new industries of the second industrial revolution. Unions in metal fabrication and chemicals grew by 1,450 percent and 4,000 percent respectively (Magraw, 1992: 2, 287-88).

French union leader Léon Jouhaux hailed the Matignon agreements as “the greatest victory of the workers’ movement.” It included lasting gains, including annual vacations and shorter workweeks. But Simone Weil described the strikers of May 1936 as “soldiers on leave,” and they were soon returned to work. Regrouping, employers discharged union activists and attacked the precarious unity of the Popular Front government. Fighting an uphill battle against renewed employer resistance, the Popular Front government fell before it could build a new system of cooperative industrial relations. Contained, French unions were unable to maintain their momentum towards industrial democracy. Membership fell by a third in 1937-39.

The National Industrial Recovery Act

A different union paradigm was developed in the United States. Rather than treating unions as vehicles for a democratic revolution, the New Deal sought to integrate organized labor into a reformed capitalism that recognized capitalist hierarchy in the workplace, using unions only to promote macroeconomic stabilization by raising wages and consumer spending (Brinkley, 1995). Included as part of a program for economic recovery was section 7(a) of the National Industrial Recovery Act (NIRA) giving “employees . . . the right to organize and bargain collectively through representatives of their own choosing . . . free from the interference, restraint, or coercion of employers.” AFL leader William Green pronounced this a “charter of industrial freedom,” and workers rushed into unions in a wave unmatched since the Knights of Labor in 1886. As with the KOL, the greatest increase came among the unskilled. Coal miners, southern textile workers, northern apparel workers, Ohio tire makers, Detroit automobile workers, aluminum, lumber and sawmill workers all rushed into unions. For the first time in fifty years, American unions gained a foothold in mass production industries.

AFL’s lack of enthusiasm

Promises of state support brought common laborers into unions. But once there, the new unionists received little help from aging AFL leaders. Fearing that the new unionists’ impetuous zeal and militant radicalism would provoke repression, AFL leaders tried to scatter the new members among contending craft unions with archaic craft jurisdictions. The new unionists were swept up in the excitement of unity and collective action but a half-century of experience had taught the AFL’s leadership to fear such enthusiasms.

The AFL dampened the union boom of 1933-34, but, again, the larger problem was not with the AFL’s flawed tactics but with its lack of political leverage. Doing little to enforce the promises of Section 7(a), the Federal government left employers free to ignore the law. Some flatly prohibited union organization; others formally honored the law but established anemic employee representation plans while refusing to deal with independent unions (Irons, 2000). By 1935, almost as many industrial establishments had employer-dominated employee-representation plans (27 percent) as had unions (30 percent). The greatest number had no labor organization at all (43 percent).

Birth of the CIO

Implacable management resistance and divided leadership killed the early New Deal union surge. It died even before the NIRA was ruled unconstitutional in 1935. Failure provoked rebellion within the AFL. Led by John L. Lewis of the United Mine Workers, eight national unions launched a campaign for industrial organization as the Committee for Industrial Organization. After Lewis punched Carpenters’ Union leader William L. Hutcheson on the floor of the AFL convention in 1935, the Committee became an independent Congress of Industrial Organizations (CIO). Including many Communist activists, CIO committees fanned out to organize workers in steel, automobiles, retail trade, journalism and other industries. Building effectively on local rank-and-file militancy, including sit-down strikes in automobiles, rubber, and other industries, the CIO quickly won contracts from some of the strongest bastions of the open shop, including United States Steel and General Motors (Zieger, 1995).

The Wagner Act

Creative strategy and energetic organizing helped. But the CIO owed its lasting success to state support. After the failure of the NIRA, New Dealers sought another way to strengthen labor as a force for economic stimulus. This led to the enactment in 1935 of the National Labor Relations Act, also known as the “Wagner Act.” The Wagner Act established a National Labor Relations Board charged to enforce employees’ “right to self-organization, to form, join, or assist labor organizations, to bargain collectively through representatives of their own choosing, and to engage in concerted activities for the purpose of collective bargaining or other mutual aid or protection.” It provided for elections to choose union representation and required employers to negotiate “in good faith” with their workers’ chosen representatives. Shifting labor conflict from strikes to elections and protecting activists from dismissal for their union work, the Act lowered the cost to individual workers of supporting collective action. It also put the Federal government’s imprimatur on union organization.

Crucial role of rank-and-file militants and state government support

Appointed by President Roosevelt, the first NLRB was openly pro-union, viewing the Act’s preamble as a mandate to promote organization. By 1945, the Board had supervised 24,000 union elections involving some 6,000,000 workers, leading to the unionization of nearly 5,000,000 workers. Still, the NLRB was not responsible for the period’s union boom. The Wagner Act had no direct role in the early CIO years because it was ignored for two years until its constitutionality was established by the Supreme Court in National Labor Relations Board v. Jones and Laughlin Steel Company (1937). Furthermore, the election procedure’s gross contribution of 5,000,000 members was less than half of the period’s net union growth of 11,000,000 members. More important than the Wagner Act were crucial union victories over prominent open shop employers in cities like Akron, Ohio, Flint, Michigan, and among Philadelphia-area metal workers. Dedicated rank-and-file militants and effective union leadership were crucial in these victories. As important was the support of pro-New Deal local and state governments. The Roosevelt landslides of 1934 and 1936 brought to office liberal Democratic governors and mayors who gave crucial support to the early CIO. Placing a right to collective bargaining above private property rights, liberal governors and other elected officials in Michigan, Ohio, Pennsylvania and elsewhere refused to send police to evict sit-down strikers who had seized control of factories. This state support allowed the minority of workers who actively supported unionization to use force to overcome the passivity of the majority of workers and the opposition of the employers. The Open Shop of the 1920s was not abandoned; it was overwhelmed by an aggressive, government-backed labor movement (Gall, 1999; Harris, 2000).

World War II

Federal support for union organization was also crucial during World War II. Again, war helped unions, both because it eliminated unemployment and because state officials backed unions in order to win labor’s support for the war effort. Established to minimize labor disputes that might disrupt war production, the National War Labor Board instituted a labor truce in which unions exchanged a no-strike pledge for employer recognition. During World War II, employers conceded union security and “maintenance of membership” rules requiring workers to pay their union dues. Acquiescing to government demands, employers accepted the institutionalization of the American labor movement, guaranteeing unions a steady flow of dues to fund an expanded bureaucracy, new benefit programs, and even to raise funds for political action. After growing from 3.5 to 10.2 million members between 1935 and 1941, unions added another 4 million members during the war. “Maintenance of membership” rules prevented free riders even more effectively than had the factory takeovers and violence of the late 1930s. With millions of members and money in the bank, labor leaders like Sidney Hillman and Philip Murray had the ear of business leaders and official Washington. Large, established, and respected: American labor had made it, part of a reformed capitalism committed to both property and prosperity.

Even more than the First World War, World War II promoted unions and social change. A European civil war, it divided the continent not only between warring countries but, within countries, between those, usually on the political right, who favored fascism over liberal parliamentary government and those who defended democracy. Before the war, left and right contended over the appeasement of Nazi Germany and fascist Italy; during the war, many businesses and conservative politicians collaborated with the German occupation against a resistance movement dominated by the left. Throughout Europe, victory over Germany was a triumph for labor that led directly to the entry of socialists and Communists into government.

Successes and Failures after World War II

Union membership exploded during and after the war, nearly doubling between 1938 and 1946. By 1947, unions had enrolled a majority of nonagricultural workers in Scandinavia, Australia, and Italy, and over 40 percent in most other European countries (see Table 1). Accumulated depression-era and wartime grievances sparked a post-war strike wave that included over 6 million strikers in France in 1948, 4 million in Italy in 1949 and 1950, and 5 million in the United States in 1946. In Europe, popular unrest led to a dramatic political shift to the left. The Labour Party government elected in the United Kingdom in 1945 established a new National Health Service and nationalized mining, the railroads, and the Bank of England. A center-left post-war coalition government in France expanded the national pension system and nationalized the Bank of France, Renault, and other companies associated with the wartime Vichy regime. Throughout Europe, the share of national income devoted to social services jumped dramatically, as did the share of income going to the working classes.

European unions and the state after World War II

Unions and the political left were stronger everywhere in post-war Europe, but in some countries labor’s position deteriorated quickly. With the onset of the Cold War, the popular front uniting Communists, socialists, and bourgeois liberals dissolved in France, Italy, and Japan, and labor’s management opponents recovered state support. In these countries, union membership dropped after 1947 and unions remained on the defensive for over a decade in a largely adversarial industrial relations system. Elsewhere, notably in countries with weak Communist movements, such as in Scandinavia but also in Austria, Germany, and the Netherlands, labor was able to compel management and state officials to accept strong and centralized labor movements as social partners. In these countries, stable industrial relations allowed cooperation between management and labor to raise productivity and to open new markets for national companies. High union density and centralization allowed Scandinavian and German labor leaders to negotiate incomes policies with governments and employers, restraining wage inflation in exchange for stable employment, investment, and wages linked to productivity growth. Such policies could not be instituted in countries with weaker and less centralized labor movements, including France, Italy, Japan, the United Kingdom, and the United States, because their unions had not been accepted as bargaining partners by management and they lacked the centralized authority to enforce incomes policies and productivity bargains (Alvarez, Garrett, and Lange, 1992).

Europe since the 1960s

Even where European labor was weakest, in France or Italy in the 1950s, unions were stronger than before World War II. Working with entrenched socialist and labor political parties, European unions were able to maintain high wages, restrictions on managerial autonomy, and social security. The wave of popular unrest in the late 1960s and early 1970s carried most European unions to new heights, briefly bringing membership to over 50 percent of the labor force in the United Kingdom and in Italy, and bringing socialists into the government in France, Germany, Italy, and the United Kingdom. Since 1980, union membership has declined somewhat and there has been some retrenchment in the welfare state. But the essentials of European welfare states and labor relations have remained (Western, 1997; Golden and Pontusson, 1992).

Unions begin to decline in the US

It was after World War II that American Exceptionalism became most valid, when the United States emerged as the advanced, capitalist democracy with the weakest labor movement. The United States was the only advanced capitalist democracy where unions went into prolonged decline right after World War II. At 35 percent, the unionization rate in 1945 was the highest in American history, but even then it was lower than in most other advanced capitalist economies. It has been falling since. The post-war strike wave, including three million strikers in 1945 and five million in 1946, was the largest in American history, but it did little to enhance labor’s political position or bargaining leverage. Instead, it provoked a powerful reaction among employers and others suspicious of growing union power. A concerted drive by the CIO to organize the South, “Operation Dixie,” failed dismally in 1946. Unable to overcome private repression, racial divisions, and the pro-employer stance of southern local and state governments, the CIO was defeated, leaving the South as a nonunion, low-wage domestic enclave and a bastion of anti-union politics (Griffith, 1988). Then, in 1946, a conservative Republican majority was elected to Congress, dashing hopes for a renewed, post-war New Deal.

The Taft-Hartley Act and the CIO’s Expulsion of Communists

Quickly, labor’s wartime dreams turned to post-war nightmares. The Republican Congress amended the Wagner Act, enacting the Taft-Hartley Act in 1947 to give employers and state officials new powers against strikers and unions. The law also required union leaders to sign a non-Communist affidavit as a condition for union participation in NLRB-sponsored elections. This loyalty oath divided labor during a time of weakness. With its roots in radical politics and an alliance of convenience between Lewis and the Communists, the CIO was torn by the new Red Scare. Hoping to appease the political right, the CIO majority in 1949 expelled ten Communist-led unions with nearly a third of the organization’s members. This marked the end of the CIO’s expansive period. Shorn of its left, the CIO lost its most dynamic and energetic organizers and leaders. Worse, the expulsions plunged the CIO into a civil war: non-Communist affiliates raided locals belonging to the “Communist-led” unions, fatally distracting both sides from the CIO’s original mission to organize the unorganized and empower the dispossessed. By breaking with the Communists, the CIO’s leadership signaled that it had accepted its place within a system of capitalist hierarchy. Little reason remained for the CIO to remain independent. In 1955 it merged with the AFL to form the AFL-CIO.

The Golden Age of American Unions

Without the revolutionary aspirations now associated with the discredited Communists, America’s unions settled down to bargain over wages and working conditions without challenging such managerial prerogatives as decisions about prices, production, and investment. Some labor leaders, notably James Hoffa of the Teamsters but also local leaders in construction and service trades, abandoned all higher aspirations, using their unions for purely personal financial gain. Allying themselves with organized crime, they used violence to maintain their power over employers and their own rank-and-file membership. Others, including former CIO leaders like Walter Reuther of the United Auto Workers, continued to push the envelope of legitimate bargaining topics, building challenges to capitalist authority at the workplace. But even the UAW was unable to force major managerial prerogatives onto the bargaining table.

The quarter century after 1950 formed a ‘golden age’ for American unions. Established unions found a secure place at the bargaining table with America’s leading firms in such industries as autos, steel, trucking, and chemicals. Periodically negotiated contracts exchanged good wages for cooperative workplace relations. Negotiated rules provided a system of civil authority at work, with regulations for promotion and layoffs and procedures giving workers opportunities to voice grievances before neutral arbitrators. Wages rose steadily, by over 2 percent per year, and union workers earned a comfortable 20 percent more than nonunion workers of similar age, experience, and education. Wages grew faster in Europe, but American wages were higher and growth was rapid enough to narrow the gap between rich and poor, and between management salaries and worker wages. Unions also won a growing list of benefit programs: medical and dental insurance, paid holidays and vacations, supplemental unemployment insurance, and pensions. Competition for workers forced many nonunion employers to match the benefit packages won by unions, but unionized employers provided benefits worth over 60 percent more than those given to nonunion workers (Freeman and Medoff, 1984; Hirsch and Addison, 1986).

Impact of decentralized bargaining in the US

In most of Europe, strong labor movements limited the wage and benefit advantages of union membership by forcing governments to extend union gains to all workers in an industry regardless of union status. By compelling nonunion employers to match union gains, this limited the competitive penalty borne by unionized firms. By contrast, decentralized bargaining and weak unions in the United States created large union wage differentials that put unionized firms at a competitive disadvantage, encouraging them to seek out nonunion labor and localities. A stable and vocal workforce with more experience and training did raise unionized firms’ labor productivity by 15 percent or more above the level of nonunion firms, and some scholars have argued that unionized workers thereby earn much of their wage gain. Others, however, find little productivity gain for unionized workers after account is taken of the greater use of machinery and other nonlabor inputs by unionized firms (compare Freeman and Medoff, 1984 and Hirsch and Addison, 1986). But even unionized firms with higher labor productivity were usually more conscious of the wages and benefits paid to union workers than of unionization’s productivity benefits.

Unions and the Civil Rights Movement

Post-war unions remained politically active. European unions were closely associated with political parties: Communists in France and Italy, socialists or labor parties elsewhere. In practice, notwithstanding revolutionary pronouncements, even the Communists’ political agenda came to resemble that of unions in the United States: liberal reform, including a commitment to full employment and the redistribution of income towards workers and the poor (Boyle, 1998). Golden-age unions were also at the forefront of campaigns to extend individual rights. The major domestic political issue of the post-war United States, civil rights, was troubling for many unions because of racially exclusionary provisions and practices of their own. Nonetheless, in the 1950s and 1960s, the AFL-CIO strongly supported the civil rights movement, funded civil rights organizations, and lobbied in support of civil rights legislation. The AFL-CIO pushed unions to open their ranks to African-American workers, even at the expense of losing affiliates in states like Mississippi. Seizing the opportunity created by the civil rights movement, some unions gained members among nonwhites. The feminist movement of the 1970s created new challenges for the masculine and sometimes misogynist labor movement. But here, too, the search for members and a desire to remove sources of division eventually brought organized labor to the forefront. The AFL-CIO supported the Equal Rights Amendment and began to promote women to leadership positions.

Shift of unions to the public sector

In no other country have women and members of racial minorities assumed such prominent positions in the labor movement as they have in the United States. The movement of African-Americans and women into leadership positions in the late-twentieth-century labor movement was accelerated by a shift in the membership structure of the American union movement. Maintaining their strength in traditional, masculine occupations in manufacturing, construction, mining, and transportation, European unions remained predominantly male. In the United States, union decline in these industries, combined with growth in heavily female public sector employment, led to the feminization of the American labor movement. Union membership began to decline in the private sector in the United States immediately after World War II. Between 1953 and 1983, for example, the unionization rate fell from 42 percent to 28 percent in manufacturing, by nearly half in transportation, and by over half in construction and mining (see Table 4). By contrast, after 1960, public sector workers won new opportunities to form unions. Because women and racial minorities form a disproportionate share of these public sector workers, increasing union membership there has changed the American labor movement’s racial and gender composition. Women comprised only 19 percent of American union members in the mid-1950s, but their share rose to 40 percent by the late 1990s. By then, the most unionized workers were no longer the white male skilled craftsmen of old. Instead, they were nurses, parole officers, government clerks, and, most of all, school teachers.

Union Collapse and Union Avoidance in the US

Outside the United States, unions grew through the 1970s and, despite some decline since the 1980s, European and Canadian unions remain large and powerful. The United States is different. Union decline since World War II has brought the United States private-sector labor movement down to early twentieth-century levels. As a share of the nonagricultural labor force, union membership fell from its 1945 peak of 35 percent to under 30 percent in the early 1970s. From there, decline became a general rout. In the 1970s, rising unemployment, increasing international competition, and the movement of industry to the nonunion South and to rural areas undermined the bargaining position of many American unions, leaving them vulnerable to a renewed management offensive. Returning to pre-New Deal practices, some employers established new welfare and employee representation programs, hoping to lure workers away from unions (Heckscher, 1987; Jacoby, 1997). Others returned to pre-New Deal repression. By the early 1980s, union avoidance had become an industry. Anti-union consultants and lawyers openly counseled employers on how to use labor law to evade unions. Findings of employers’ unfair labor practices in violation of the Wagner Act tripled in the 1970s; by the 1980s, the NLRB was reinstating over 10,000 workers a year who had been illegally discharged for union activity, nearly one for every twenty who voted for a union in an NLRB election (Weiler, 1983). By the 1990s, the unionization rate in the United States had fallen to under 14 percent, including only 9 percent of private-sector workers and 37 percent of those in the public sector. Unions now have minimal impact on wages or working conditions for most American workers.

Nowhere else have unions collapsed as they have in the United States. With a unionization rate dramatically below that of other countries, including Canada, the United States has achieved exceptional status (see Table 7). There remains great interest in unions among American workers; where employers do not resist, unions thrive. In the public sector, and among those private employers where workers have a free choice to join a union, workers are as likely to organize as they ever were, and as likely as workers anywhere. In the past, as after 1886 and in the 1920s, when American employers broke unions, unions revived once a government committed to workplace democracy sheltered them from employer repression. If we see another such government, we may yet see another union revival.

Table 7
Union Membership Rates for the United States and Six Other Leading Industrial Economies, 1970 to 1990

1970 1980 1990
U.S.: Unionization Rate: All industries 30.0 24.7 17.6
U.S.: Unionization Rate: Manufacturing 41.0 35.0 22.0
U.S.: Unionization Rate: Financial services 5.0 4.0 2.0
Six Countries: Unionization Rate: All industries 37.1 39.7 35.3
Six Countries: Unionization Rate: Manufacturing 38.8 44.0 35.2
Five Countries: Unionization Rate: Financial services 23.9 23.8 24.0
Ratio: U.S./Six Countries: All industries 0.808 0.622 0.499
Ratio: U.S./Six Countries: Manufacturing 1.058 0.795 0.626
Ratio: U.S./Five Countries: Financial services 0.209 0.168 0.083

Note: The unionization rate reported is the number of union members out of 100 workers in the specified industry. The ratio shown is the unionization rate for the United States divided by the unionization rate for the other countries. The six countries are Canada, France, Germany, Italy, Japan, and the United Kingdom. Data on union membership in financial services in France are not available.
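For example, reading the 1970 column for all industries from the table above, the ratio is the U.S. rate of 30.0 divided by the six-country rate of 37.1, or about 0.808.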

Source: Visser (1991): 110.

References

Alvarez, R. Michael, Geoffrey Garrett and Peter Lange. “Government Partisanship, Labor Organization, and Macroeconomic Performance,” American Political Science Review 85 (1992): 539-556.

Ansell, Christopher K. Schism and Solidarity in Social Movements: The Politics of Labor in the French Third Republic. Cambridge: Cambridge University Press, 2001.

Arnesen, Eric, Brotherhoods of Color: Black Railroad Workers and the Struggle for Equality. Cambridge, MA: Harvard University Press, 2001.

Bain, George S., and Farouk Elsheikh. Union Growth and the Business Cycle: An Econometric Analysis. Oxford: Basil Blackwell, 1976.

Bain, George S. and Robert Price. Profiles of Union Growth: A Comparative Statistical Portrait of Eight Countries. Oxford: Basil Blackwell, 1980.

Bernard, Phillippe and Henri Dubief. The Decline of the Third Republic, 1914-1938. Cambridge: Cambridge University Press, 1993.

Blewett, Mary H. Men, Women, and Work: Class, Gender and Protest in the New England Shoe Industry, 1780-1910. Urbana, IL: University of Illinois Press, 1988.

Boyle, Kevin, editor. Organized Labor and American Politics, 1894-1994: The Labor-Liberal Alliance. Albany, NY: State University of New York Press, 1998.

Brinkley, Alan. The End of Reform: New Deal Liberalism in Recession and War. New York: Alfred A. Knopf, 1995.

Brody, David. Workers in Industrial America: Essays on the Twentieth-Century Struggle. New York: Oxford University Press, 1985.

Cazals, Rémy. Avec les ouvriers de Mazamet dans la grève et l’action quotidienne, 1909-1914. Paris: Maspero, 1978.

Cohen, Lizabeth. Making A New Deal: Industrial Workers in Chicago, 1919-1939. Cambridge: Cambridge University Press, 1990.

Cronin, James E. Industrial Conflict in Modern Britain. London: Croom Helm, 1979.

Cronin, James E. “Labor Insurgency and Class Formation.” In Work, Community, and Power: The Experience of Labor in Europe and America, 1900-1925, edited by James E. Cronin and Carmen Sirianni. Philadelphia: Temple University Press, 1983.

Cronin, James E. and Carmen Sirianni, editors. Work, Community, and Power: The Experience of Labor in Europe and America, 1900-1925. Philadelphia: Temple University Press, 1983.

Dawley, Alan. Class and Community: The Industrial Revolution in Lynn. Cambridge, MA: Harvard University Press, 1976.

Ely, James W., Jr. The Guardian of Every Other Right: A Constitutional History of Property Rights. New York: Oxford, 1998.

Fink, Leon. Workingmen’s Democracy: The Knights of Labor and American Politics. Urbana, IL: University of Illinois Press, 1983.

Fink, Leon. “The New Labor History and the Powers of Historical Pessimism: Consensus, Hegemony, and the Case of the Knights of Labor.” Journal of American History 75 (1988): 115-136.

Foner, Philip S. Organized Labor and the Black Worker, 1619-1973. New York: International Publishers, 1974.

Foner, Philip S. Women and the American Labor Movement: From Colonial Times to the Eve of World War I. New York: Free Press, 1979.

Frank, Dana. Purchasing Power: Consumer Organizing, Gender, and the Seattle Labor Movement, 1919-1929. Cambridge: Cambridge University Press, 1994.

Freeman, Richard and James Medoff. What Do Unions Do? New York: Basic Books, 1984.

Friedman, Gerald. “Dividing Labor: Urban Politics and Big-City Construction in Late-Nineteenth Century America.” In Strategic Factors in Nineteenth-Century American Economic History, edited by Claudia Goldin and Hugh Rockoff, 447-64. Chicago: University of Chicago Press, 1991.

Friedman, Gerald. “Revolutionary Syndicalism and French Labor: The Rebels Behind the Cause.” French Historical Studies 20 (Spring 1997).

Friedman, Gerald. State-Making and Labor Movements: France and the United States 1876-1914. Ithaca, NY: Cornell University Press, 1998.

Friedman, Gerald. “New Estimates of United States Union Membership, 1880-1914.” Historical Methods 32 (Spring 1999): 75-86.

Friedman, Gerald. “The Political Economy of Early Southern Unionism: Race, Politics, and Labor in the South, 1880-1914.” Journal of Economic History 60, no. 2 (2000): 384-413.

Friedman, Gerald. “The Sanctity of Property in American Economic History” (manuscript, University of Massachusetts, July 2001).

Gall, Gilbert. Pursuing Justice: Lee Pressman, the New Deal, and the CIO. Albany, NY: State University of New York Press, 1999.

Gamson, William A. The Strategy of Social Protest. Homewood, IL: Dorsey Press, 1975.

Geary, Richard. European Labour Protest, 1848-1939. New York: St. Martin’s Press, 1981.

Golden, Miriam and Jonas Pontusson, editors. Bargaining for Change: Union Politics in North America and Europe. Ithaca, NY: Cornell University Press, 1992.

Griffith, Barbara S. The Crisis of American Labor: Operation Dixie and the Defeat of the CIO. Philadelphia: Temple University Press, 1988.

Harris, Howell John. Bloodless Victories: The Rise and Fall of the Open Shop in the Philadelphia Metal Trades, 1890-1940. Cambridge: Cambridge University Press, 2000.

Hattam, Victoria C. Labor Visions and State Power: The Origins of Business Unionism in the United States. Princeton: Princeton University Press, 1993.

Heckscher, Charles C. The New Unionism: Employee Involvement in the Changing Corporation. New York: Basic Books, 1987.

Hirsch, Barry T. and John T. Addison. The Economic Analysis of Unions: New Approaches and Evidence. Boston: Allen and Unwin, 1986.

Hirschman, Albert O. Exit, Voice and Loyalty: Responses to Decline in Firms, Organizations, and States. Cambridge, MA, Harvard University Press, 1970.

Hirschman, Albert O. Shifting Involvements: Private Interest and Public Action. Princeton: Princeton University Press, 1982.

Hobsbawm, Eric J. Labouring Men: Studies in the History of Labour. London: Weidenfeld and Nicolson, 1964.

Irons, Janet. Testing the New Deal: The General Textile Strike of 1934 in the American South. Urbana, IL: University of Illinois Press, 2000.

Jacoby, Sanford. Modern Manors: Welfare Capitalism Since the New Deal. Princeton: Princeton University Press, 1997.

Katznelson, Ira and Aristide R. Zolberg, editors. Working-Class Formation: Nineteenth-Century Patterns in Western Europe and the United States. Princeton: Princeton University Press, 1986.

Kocka, Jurgen. “Problems of Working-Class Formation in Germany: The Early Years, 1800-1875.” In Working-Class Formation: Nineteenth-Century Patterns in Western Europe and the United States, edited by Ira Katznelson and Aristide R. Zolberg, 279-351. Princeton: Princeton University Press, 1986.

Letwin, Daniel. The Challenge of Interracial Unionism: Alabama Coal Miners, 1878-1921. Chapel Hill: University of North Carolina Press, 1998.

Maddison, Angus. Dynamic Forces in Capitalist Development: A Long-Run Comparative View. Oxford: Oxford University Press, 1991.

Magraw, Roger. A History of the French Working Class, two volumes. London: Blackwell, 1992.

Milkman, Ruth. Women, Work, and Protest: A Century of United States Women’s Labor. Boston: Routledge and Kegan Paul, 1985.

Montgomery, David. The Fall of the House of Labor: The Workplace, the State, and American Labor Activism, 1865-1920. Cambridge: Cambridge University Press, 1987.

Mullin, Debbie Dudley. “The Porous Umbrella of the AFL: Evidence From Late Nineteenth-Century State Labor Bureau Reports on the Establishment of American Unions.” Ph.D. diss., University of Virginia, 1993.

Nolan, Mary. Social Democracy and Society: Working-Class Radicalism in Dusseldorf, 1890-1920. Cambridge: Cambridge University Press, 1981.

Olson, Mancur. The Logic of Collective Action: Public Goods and the Theory of Groups. Cambridge, MA: Harvard University Press, 1971.

Perlman, Selig. A Theory of the Labor Movement. New York: MacMillan, 1928.

Rachleff, Peter J. Black Labor in the South, 1865-1890. Philadelphia: Temple University Press, 1984.

Roediger, David. The Wages of Whiteness: Race and the Making of the American Working Class. London: Verso, 1991.

Scott, Joan. The Glassworkers of Carmaux: French Craftsmen in Political Action in a Nineteenth-Century City. Cambridge, MA: Harvard University Press, 1974.

Sewell, William H. Jr. Work and Revolution in France: The Language of Labor from the Old Regime to 1848. Cambridge: Cambridge University Press, 1980.

Shorter, Edward and Charles Tilly. Strikes in France, 1830-1968. Cambridge: Cambridge University Press, 1974.

Temin, Peter. Lessons from the Great Depression. Cambridge, MA: MIT Press, 1990.

Thompson, Edward P. The Making of the English Working Class. New York: Vintage, 1966.

Troy, Leo. Distribution of Union Membership among the States, 1939 and 1953. New York: National Bureau of Economic Research, 1957.

United States, Bureau of the Census. Census of Occupations, 1930. Washington, DC: Government Printing Office, 1932.

Visser, Jelle. European Trade Unions in Figures. Boston: Kluwer, 1989.

Voss, Kim. The Making of American Exceptionalism: The Knights of Labor and Class Formation in the Nineteenth Century. Ithaca, NY: Cornell University Press, 1993.

Ware, Norman. The Labor Movement in the United States, 1860-1895: A Study in Democracy. New York: Vintage, 1929.

Washington, Booker T. “The Negro and the Labor Unions.” Atlantic Monthly (June 1913).

Weiler, Paul. “Promises to Keep: Securing Workers Rights to Self-Organization Under the NLRA.” Harvard Law Review 96 (1983).

Western, Bruce. Between Class and Market: Postwar Unionization in the Capitalist Democracies. Princeton: Princeton University Press, 1997.

Whatley, Warren. “African-American Strikebreaking from the Civil War to the New Deal.” Social Science History 17 (1993), 525-58.

Wilentz, Robert Sean. Chants Democratic: New York City and the Rise of the American Working Class, 1788-1850. Oxford: Oxford University Press, 1984.

Wolman, Leo. Ebb and Flow in Trade Unionism. New York: National Bureau of Economic Research, 1936.

Zieger, Robert. The CIO, 1935-1955. Chapel Hill: University of North Carolina Press, 1995.

Zolberg, Aristide. “Moments of Madness.” Politics and Society 2 (Winter 1972): 183-207.

Citation: Friedman, Gerald. “Labor Unions in the United States”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/labor-unions-in-the-united-states/

Harold Adams Innis

Robin Neill, University of Prince Edward Island

Harold Innis has been called “the first Canadian-born social scientist to achieve an international reputation” and “the father of Canadian Economic History.” He was the second president of the Economic History Association (1942-1944) and the fifty-fourth president of the American Economic Association (1951). He has been credited with joint authorship of the Staple Theory of Canadian Economic Development (W.T. Easterbrook, 1967, p. 261). In a backhanded posthumous compliment, a Keynesian said of him that he led the Canadian economics profession down the wrong path for fifteen years.

Innis’s influence in Canadian social science was pervasive in the pre-Keynesian period. His studies of the fur trade, the cod fisheries, and the mining and forest frontiers broke new ground, and provided an economic underpinning for the Laurentian School of Canadian historians. His students, W.T. Easterbrook, Hugh G.J. Aitken, Albert Faucher, and two of the then famous four Saskatonians, Vernon C. Fowke and Kenneth A.H. Buckley, are still cited in current Canadian economic history texts. The building-of-the-Canadian-nation histories that typified the Laurentian School have lost some of their appeal. Regional and community histories are now more frequently celebrated. Close reading of Innis, and particularly of Fowke (R.F. Neill, 1999), however, shows the two to have made a greater contribution in this regard than one would surmise from reading the general texts that draw on their work.

Innis’s influence in economic history in general has been considerable. His reworking of the “vent for surplus” theory of economic development, that is the “staple,” “primary products” or “export base” theory of economic development, was extended by Douglass North in applications to regional development in the United States, and to the experience of what were then called underdeveloped countries. Subsequently it was elaborated in generalized export-base models used to describe the experience of newly industrializing countries.

Innis’s contribution to historical economics, we have to assume, was noted. His success in the profession would indicate that it was. But that sort of Old Institutional, historical theorizing fell out of fashion after the Second World War. Neither Innis’s “cyclonics” nor J.M. Clark’s “non-Euclidean economics” had any formal standing in the period following the general acceptance of Keynesian macroeconomic theory. Nonetheless, Innis had some influence beyond economic history. His most celebrated student, Harry G. Johnson, referred back to Innis as his “greatest teacher in economics” (Johnson and Johnson, 1978, p. 234).

The studies of communication media that characterized the so-called “later Innis” were not understood by, or, better, were beyond the grasp of, economists preoccupied with positivistic testing of neoclassical, neo-Keynesian, and Monetarist-New Classical hypotheses. The root of the media studies can be traced back to the work of nineteenth-century historical economists, such as J.K. Ingram, who had much to say about “the prevalent mode of thinking” that shaped the nature of economic theory in any given period (Ingram, 1888, pp. 2-3). Innis’s studies of communication media were an attempt to specify one causal factor in changes in the prevalent mode of thinking. His approach gave him grounds for assessing the economics profession itself.

He was not alone in this. J.J. Spengler, whose work also emerged from 1930s discussion of the nature of economics, likewise adopted an “external” approach to the history of economics (Spengler, 1940). This approach has had considerable acceptance among historians of economic thought, and it has been taken up by intellectual historians in general. Indeed, it became highly fashionable following Michel Foucault’s discussion of the biased information environments that he called “epistemes,” and following Jacques Derrida’s emphasis on the linguistic context of all knowledge, both of which were related to analyses of prevalent modes of thinking.

Harold Adams Innis was born on November 5, 1894, in Otterville, Ontario, the first born of William Anson and Mary (Adams) Innis. His parents worked a hundred-acre farm outside of Otterville in Oxford County. At age eleven Harold was admitted to the Otterville high school. Two years later, in the fall of 1908, he began commuting twenty miles to the Woodstock Collegiate Institute. After graduation, he taught grade school for a year and then registered at McMaster University in Hamilton at the western end of Lake Ontario. The First World War interrupted his education. Upon graduating from McMaster, in the spring of 1916, he enlisted in the Canadian Army. By Christmas his group, the 69th Battery, was on the front in France. By the end of July, Innis had been wounded and sent to England for convalescence. During his stay in England he studied for a Master’s degree through a wartime institution called Khaki College. On arrival back in Canada he passed the examination for an M.A. in Economics. Disappointment over what he had learned was a major motivation in his enrolling in the doctoral program at the University of Chicago, a Baptist institution appropriate for one raised strictly in that faith.

When Innis arrived at Chicago there was considerable dissent in the United States with respect to the tenets of neoclassical economic theory. Its fundamental assumptions were being questioned by the Institutionalist Thorstein Veblen and by his student and colleague, John R. Commons. Some of the controversy was brought to Innis’s attention by his mentors, C.W. Wright and C.S. Duncan, but his most effective contact with current economic thought was through Frank H. Knight, who was then an instructor at Chicago. Knight’s skepticism captured Innis’s imagination and drew him into a small, informal group, including Carter Goodrich, Morris Copeland, W.B. Smith, J.W. Angel, and, of course, Knight himself. Their discussions focused on the nature and implications of Veblen’s critique of received economic doctrine.

Innis returned to Canada in 1920 to take a position in the Department of Political Economy at the University of Toronto. With the exception of its redoubtable Head, James Mavor, the Department was young and aware that it had the economics of Canada still to discover. Mavor had attempted an introduction to Canadian economic history, but had left it unfinished. C.R. Fay, the economic historian, was at Toronto in those years, and was aware that there was something to be done. He and Innis became lifelong friends in their mutual endeavor to see it done. V.W. Bladen, recently arrived from Oxford, was pulled into the effort by Innis, who insisted that Bladen could not understand the economics of Canada unless he personally visited every part of it.

The first fifteen of Innis’s years at Toronto were a difficult but fruitful time. He was not always understood, and, at one point, he was withdrawn from teaching a course because he pursued its subject “along too radical lines.” Still, his efforts began to produce results with the 1930 publication of his own introduction to Canadian economic history, The Fur Trade in Canada. Following the 1929 stock market crash, the Canadian Political Science Association was reestablished. Innis was deeply involved. A year earlier, with the help of the Bladens, he had initiated a periodical, Contributions to Canadian Economics. The publication provided a medium for the Canadian Political Science Association, and its success in that capacity was a major factor in the Association’s decision to launch the Canadian Journal of Economics and Political Science. Innis’s contributions to the literature on Canadian economic history, and his involvement in the institutionalization of economics, brought public recognition. In 1934 he was elected a fellow of the Royal Society of Canada. He was promoted to the rank of full professor in 1936. He was an invited member of the Nova Scotia Royal Commission of Economic Enquiry in 1933. In 1937 he was appointed Head of the Department of Political Economy at the University of Toronto, and he remained Head until his death in 1952. From 1947 until 1952 he was Dean of Graduate Studies at Toronto, and had, in the meantime, been a member of a Federal Royal Commission on Transportation. These public appointments say much for his influence on the economics profession in Canada, but they are not the end of it. He took a personal interest in the politics of the Department of Economics and Political Science at the University of Saskatchewan, which was headed by his student and close friend George Britnell. Perhaps his greatest influence was exercised through Canada’s Social Science Research Council, of which he was Chairman in 1945-46 and Chairman of the Grants-in-Aid Committee for its first nine years. Funds then available to assist research in the social sciences were minuscule by later standards, but none were allocated without Innis’s concurrence. He met regularly with Anne Bezanson, another sometime president of the EHA, who represented the Carnegie Foundation. Together they pored over names and projects related to social science research in Canada. In recommending reorganization of the Canadian Social Science Research Council in 1968, Mabel Timlin stated that in the beginning elaborate organization was not needed because Innis knew everyone.

For all his involvement in the institutionalization of economics in Canada, Innis did not withdraw from contacts in the United States. He was involved in the founding of the Economic History Association and the launching of the Journal of Economic History. He was the Association’s second president, and was deeply involved with the Committee on Research in Economic History, sponsored by the Social Science Research Council of the United States. These activities brought Innis into close contact with the American economic historians Arthur H. Cole, Anne Bezanson, Robert B. Warren, and Earl J. Hamilton. At the same time Innis continued his interest in the general debates over the nature of economics in the United States, reviving his interaction with Frank Knight and eventually leading to his presidency of the American Economic Association in 1951. Innis has been the only president of the Economic History Association or the American Economic Association never to become an American citizen.

The lines of cleavage in the 1930s American debate over the nature of economics are now being clarified (Yonay, 1998; Morgan and Rutherford, 1998). One was drawn over the extent to which the values of elites should direct government economic policy. Another was drawn over the role of values in social science in general, but particularly in economics. With respect to these cleavages, Innis found himself in opposition to Frank Underhill and the socialist League for Social Reconstruction, which was active at the University of Toronto. Knight opposed the interventionist economics of the New Deal “brains trust” economist Rexford Guy Tugwell. Neither Innis nor Knight was well disposed towards the rise of Keynesian macroeconomics. Innis found it to be too interventionist, given what he thought to be the unreliable state of the economics on which it was based. Perhaps it was for this reason that, from 1943 to 1947, Innis had an open invitation from the University of Chicago, where other, now famous, dissenters were gathering (Kitch, 1983).

Harold Innis died November 8, 1952. He was at the peak of his career. He had been invited to give the Beit Lectures in Imperial History at Oxford in 1949. While in England he was invited to give the Cust Lecture at Nottingham, and he spoke at the University of London. His thesis was, perhaps, not clearly presented, and not well received. Still, he continued to develop it over the succeeding years, leaving behind a body of writing well ahead of its time in intellectual history, and far removed from contemporary paradigms in economics.

Selected Publications of Harold Innis: Books and Collections of Articles

A History of the Canadian Pacific Railway. London: P.S. King, 1923; Toronto: University of Toronto Press, 1971.

The Fur Trade in Canada: An Introduction to Canadian Economic History. New Haven: Yale University Press, 1930.

Peter Pond: Fur Trader and Adventurer. Toronto, 1930.

Select Documents in Canadian Economic History, Volume 1 (1497-1783), Volume 2 (1783-1885), co-edited with A.R.M. Lower. Toronto: University of Toronto Press, 1929 and 1933.

Problems of Staple Production in Canada. Toronto: University of Toronto Press, 1933.

Settlement and the Mining Frontier. Toronto: University of Toronto Press, 1936.

The Cod Fisheries: The History of an International Economy. New Haven: Yale University Press, 1940.

Political Economy and the Modern State. Toronto: University of Toronto Press, 1946.

Empire and Communications. Oxford: Clarendon Press, 1950.

The Bias of Communication. Toronto: University of Toronto Press, 1951.

Changing Concepts of Time. Toronto: University of Toronto Press, 1952.

Essays in Canadian Economic History (M.Q. Innis, editor). Toronto: University of Toronto Press, 1956.

The Idea File of Harold Adams Innis, (introduced and edited by William Christian). Toronto: University of Toronto Press, 1980.

Innis on Russia: The Russian Diary and Other Writings (edited with a preface by William Christian). Toronto: University of Toronto Press, 1981.

Selected Writings about Innis: Biographical, Bibliographical, and Interpretative

Barnes, T.J. “Focus: A Geographical Appreciation of Harold A. Innis.” Canadian Geographer. 37 (1993): 352-364.

Creighton, Donald. Harold Adams Innis: Portrait of a Scholar. Toronto: University of Toronto Press, 1957.

Havelock, E.A. “Harold Innis: A Man of His Times” and “Harold Innis: The Philosophical Historian.” Et cetera 38 (1981): 242-268.

Neill, Robin. A New Theory of Value: The Canadian Economics of Harold Adams Innis. Toronto: University of Toronto Press, 1972.

Neill, Robin. “Rationality and the Information Environment: A Reassessment of the Work of Harold Adams Innis.” Journal of Canadian Studies 22 (1987-88): 78-92.

Patterson, Graeme. History and Communications: Harold Innis, Marshall McLuhan, the Interpretation of History. Toronto: University of Toronto Press, 1990.

Stamps, Judith. Unthinking Modernity: Innis, McLuhan, and the Frankfurt School. Kingston and Montreal: McGill-Queen’s University Press, 1995.

Additional References: Relevant to the Presented Interpretation

Ingram, J.K. A History of Political Economy. New York: Augustus M. Kelley (1888, 1967).

Johnson, E.S. and Johnson, H.G. In the Shadow of Keynes. Oxford: Basil Blackwell, 1978.

Kitch, E.W. “Fire of Truth: A Remembrance of Law and Economics at Chicago, 1932-1970.” Journal of Law and Economics 26 (1983): 163-233.

Morgan, M.S. and Rutherford, M., editors. From Interwar Pluralism to Postwar Neoclassicism. Durham, NC: Duke University Press, 1998.

Neill, R.F. “Economic Historiography in the 1950s: The Saskatchewan School.” Journal of Canadian Studies 34 (1999): 243-260.

Spengler, J.J. “Sociological Presuppositions in Economic Theory.” Southern Economic Journal 7 (1940): 131-157.

Yonay, Y.P. The Struggle over the Soul of Economics. Princeton: Princeton University Press, 1998.

Citation: Neill, Robin. “Harold Adams Innis”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/harold-adams-innis/

The Economic History of Indonesia

Jeroen Touwen, Leiden University, Netherlands

Introduction

In recent decades, Indonesia has been viewed as one of Southeast Asia’s successful highly performing and newly industrializing economies, following the trail of the Asian tigers (Hong Kong, Singapore, South Korea, and Taiwan) (see Table 1). Although Indonesia’s economy grew with impressive speed during the 1980s and 1990s, it experienced considerable trouble after the financial crisis of 1997, which led to significant political reforms. Today Indonesia’s economy is recovering but it is difficult to say when all its problems will be solved. Even though Indonesia can still be considered part of the developing world, it has a rich and versatile past, in the economic as well as the cultural and political sense.

Basic Facts

Indonesia is situated in Southeast Asia and consists of a large archipelago between the Indian Ocean and the Pacific Ocean, with more than 13,000 islands. The largest islands are Java, Kalimantan (the southern part of the island of Borneo), Sumatra, Sulawesi, and Papua (formerly Irian Jaya, which is the western part of New Guinea). Indonesia’s total land area measures 1.9 million square kilometers (750,000 square miles). This is three times the area of Texas, almost eight times the area of the United Kingdom, and roughly fifty times the area of the Netherlands. Indonesia has a tropical climate, but since there are large stretches of lowland and numerous mountainous areas, the climate varies from hot and humid to more moderate in the highlands. Apart from fertile land suitable for agriculture, Indonesia is rich in a range of natural resources, varying from petroleum, natural gas, and coal, to metals such as tin, bauxite, nickel, copper, gold, and silver. Indonesia’s population is about 230 million (2002), of which the largest share (roughly 60%) lives in Java.

Table 1

Indonesia’s Gross Domestic Product per Capita

Compared with Several Other Asian Countries (in 1990 dollars)

Year Indonesia Philippines Thailand Japan
1900 745 1,033 812 1,180
1913 904 1,066 835 1,385
1950 840 1,070 817 1,926
1973 1,504 1,959 1,874 11,439
1990 2,516 2,199 4,645 18,789
2000 3,041 2,385 6,335 20,084

Source: Angus Maddison, The World Economy: A Millennial Perspective, Paris: OECD Development Centre Studies 2001, 206, 214-215. For year 2000: University of Groningen and the Conference Board, GGDC Total Economy Database, 2003, http://www.eco.rug.nl/ggdc.

Important Aspects of Indonesian Economic History

“Missed Opportunities”

Anne Booth has characterized the economic history of Indonesia with the somewhat melancholy phrase “a history of missed opportunities” (Booth 1998). One may compare this with J. Pluvier’s history of Southeast Asia in the twentieth century, which is entitled A Century of Unfulfilled Expectations (Breda 1999). The missed opportunities refer to the fact that, despite its rich natural resources and great variety of cultural traditions, the Indonesian economy has underperformed during large periods of its history. A more cyclical view would lead one to speak of several ‘reversals of fortune.’ Several times the Indonesian economy seemed to promise a continuation of favorable economic development and ongoing modernization (for example, Java in the late nineteenth century, Indonesia in the late 1930s or in the early 1990s). But for various reasons Indonesia time and again suffered severe setbacks that cut short further expansion. These setbacks often originated in the internal institutional or political spheres (either after independence or in colonial times), although external influences such as the 1930s Depression also took their toll on the vulnerable export economy.

“Unity in Diversity”

In addition, one often reads about “unity in diversity.” This is not only a political slogan repeated at various times by the Indonesian government itself, but it can also be applied to the heterogeneity in the national features of this very large and diverse country. Logically, the political problems that arise from such a heterogeneous nation state have had their (negative) effects on the development of the national economy. The most striking contrast is between densely populated Java and the sparsely populated Outer Islands, which Java has long dominated politically and economically. But within Java and within the various Outer Islands as well, one encounters a rich cultural diversity. Economic differences between the islands persist. Nevertheless, for centuries, flourishing and enterprising interregional trade has fostered regional integration within the archipelago.

Economic Development and State Formation

State formation can be viewed as a condition for an emerging national economy. This process essentially started in Indonesia in the nineteenth century, when the Dutch colonized an area largely similar to present-day Indonesia. Colonial Indonesia was called ‘the Netherlands Indies.’ The term ‘(Dutch) East Indies’ was mainly used in the seventeenth and eighteenth centuries and included trading posts outside the Indonesian archipelago.

Although Indonesian national historiography sometimes refers to a presumed 350 years of colonial domination, it is exaggerated to interpret the arrival of the Dutch in Bantam in 1596 as the starting point of Dutch colonization. It is more reasonable to say that colonization started in 1830, when the Java War (1825-1830) was ended and the Dutch initiated a bureaucratic, centralizing polity in Java without further restraint. From the mid-nineteenth century onward, Dutch colonization did shape the borders of the Indonesian nation state, even though it also incorporated weaknesses in the state: ethnic segmentation of economic roles, unequal spatial distribution of power, and a political system that was largely based on oppression and violence. This, among other things, repeatedly led to political trouble, before and after independence. Indonesia ceased being a colony on 17 August 1945 when Sukarno and Hatta proclaimed independence, although full independence was acknowledged by the Netherlands only after four years of violent conflict, on 27 December 1949.

The Evolution of Methodological Approaches to Indonesian Economic History

The economic history of Indonesia analyzes a range of topics, varying from the characteristics of the dynamic exports of raw materials and the dualist economy in which both Western and Indonesian entrepreneurs participated, to the strong measure of regional variation in the economy. While in the past Dutch historians traditionally focused on the colonial era (inspired by the rich colonial archives), from the 1960s and 1970s onward an increasing number of scholars (including many Indonesian, but also Australian and American, scholars) started to study post-war Indonesian events in connection with the colonial past. In the course of the 1990s attention gradually shifted from the identification and exploration of new research themes towards synthesis and attempts to link economic development with broader historical issues. In 1998 the excellent first book-length survey of Indonesia’s modern economic history was published (Booth 1998). The stress on synthesis and lessons is also present in a new textbook on the modern economic history of Indonesia (Dick et al. 2002). This highly recommended textbook aims at a juxtaposition of three themes: globalization, economic integration, and state formation. Globalization affected the Indonesian archipelago even before the arrival of the Dutch. The period of the centralized, military-bureaucratic state of Soeharto’s New Order (1966-1998) was only the most recent wave of globalization. A national economy emerged gradually from the 1930s as the Outer Islands (a collective name which refers to all islands outside Java and Madura) reoriented towards industrializing Java.

Two research traditions have become especially important in the study of Indonesian economic history during the past decade. One is a highly quantitative approach, culminating in reconstructions of Indonesia’s national income and national accounts over a long period of time, from the late nineteenth century up to today (Van der Eng 1992, 2001). The other research tradition highlights the institutional framework of economic development in Indonesia, both as a colonial legacy and as it has evolved since independence. There is a growing appreciation among scholars that these two approaches complement each other.

A Chronological Survey of Indonesian Economic History

The precolonial economy

There were several influential kingdoms in the Indonesian archipelago during the pre-colonial era (e.g. Srivijaya, Mataram, Majapahit) (see further Reid 1988, 1993; Ricklefs 1993). Much debate centers on whether this heyday of indigenous Asian trade was effectively disrupted by the arrival of Western traders in the late fifteenth century.

Sixteenth and seventeenth century

Present-day research on pre-colonial economic history focuses on the dynamics of early-modern trade and pays specific attention to the role of different ethnic groups such as the Arabs, the Chinese, and the various indigenous groups of traders and entrepreneurs. From the sixteenth to the nineteenth century the Western colonizers had only a weak grip on a limited number of locations in the Indonesian archipelago. As a consequence, much of the economic history of these islands escapes the attention of the economic historian. Most data on economic matters were handed down by Western observers with a limited view. A large part of the area remained engaged in its own economic activities, including subsistence agriculture (of which the results were not necessarily very meager) and local and regional trade.

An older research literature has extensively covered the role of the Dutch in the Indonesian archipelago, which began in 1596 when the first expedition of Dutch sailing ships arrived in Bantam. In the seventeenth and eighteenth centuries the Dutch overseas trade in the Far East, which focused on high-value goods, was in the hands of the powerful Dutch East India Company (in full: the United East Indies Trading Company, or Vereenigde Oost-Indische Compagnie [VOC], 1602-1795). However, the region was still fragmented and Dutch presence was only concentrated in a limited number of trading posts.

During the eighteenth century, coffee and sugar became the most important products and Java became the most important area. The VOC gradually took over power from the Javanese rulers and held a firm grip on the productive parts of Java. The VOC was also actively engaged in the intra-Asian trade: cotton from Bengal, for example, was sold in the pepper-growing areas. The VOC was a successful enterprise and made large dividend payments to its shareholders. Corruption, lack of investment capital, and increasing competition from England led to its demise, and in 1799 the VOC came to an end (Gaastra 2002, Jacobs 2000).

The nineteenth century

In the nineteenth century a process of more intensive colonization started, predominantly in Java, where the Cultivation System (1830-1870) was based (Elson 1994; Fasseur 1975).

During the Napoleonic era the VOC trading posts in the archipelago had been under British rule, but in 1814 they came under Dutch authority again. During the Java War (1825-1830), Dutch rule on Java was challenged by an uprising led by the Javanese prince Diponegoro. Repressing this revolt and establishing firm rule in Java increased colonial expenses, which in turn led to a stronger emphasis on the economic exploitation of the colony. The Cultivation System, initiated by Johannes van den Bosch, was a state-governed system for the production of agricultural products such as sugar and coffee. In return for a fixed compensation (the planting wage), the Javanese were forced to cultivate export crops. Supervisors, such as civil servants and Javanese district heads, were paid generous ‘cultivation percentages’ in order to stimulate production. The exports of the products were consigned to a Dutch state-owned trading firm (the Nederlandsche Handel-Maatschappij, NHM, established in 1824) and sold profitably abroad.

Although the profits (‘batig slot’) for the Dutch state of the period 1830-1870 were considerable, various reasons can be mentioned for the change to a liberal system: (a) the emergence of new liberal political ideology; (b) the gradual demise of the Cultivation System during the 1840s and 1850s because internal reforms were necessary; and (c) growth of private (European) entrepreneurship with know-how and interest in the exploitation of natural resources, which took away the need for government management (Van Zanden and Van Riel 2000: 226).

Table 2

Financial Results of Government Cultivation, 1840-1849 (‘Cultivation System’) (in thousands of guilders in current values)

                     1840-1844   1845-1849
Coffee                  40,278      24,549
Sugar                    8,218       4,136
Indigo                   7,836       7,726
Pepper, Tea                647       1,725
Total net profits       39,341      35,057

Source: Fasseur 1975: 20.

Table 3

Estimates of Total Profits (‘batig slot’) during the Cultivation System,

1831/40 – 1861/70 (in millions of guilders)

                                               1831/40   1841/50   1851/60   1861/70
Gross revenues of sale of colonial products      227.0     473.9     652.7     641.8
Costs of transport etc (NHM)                      88.0     165.4     138.7     114.7
Sum of expenses                                   59.2     175.1     275.3     276.6
Total net profits*                               150.6     215.6     289.4     276.7

Source: Van Zanden and Van Riel 2000: 223.

* Recalculated by Van Zanden and Van Riel to include subsidies for the NHM and other costs that in fact benefited the Dutch economy.

The heyday of the colonial export economy (1900-1942)

After 1870, private enterprise was promoted but the exports of raw materials gained decisive momentum after 1900. Sugar, coffee, pepper and tobacco, the old export products, were increasingly supplemented with highly profitable exports of petroleum, rubber, copra, palm oil and fibers. The Outer Islands supplied an increasing share in these foreign exports, which were accompanied by an intensifying internal trade within the archipelago and generated an increasing flow of foreign imports. Agricultural exports were cultivated both in large-scale European agricultural plantations (usually called agricultural estates) and by indigenous smallholders. When the exploitation of oil became profitable in the late nineteenth century, petroleum earned a respectable position in the total export package. In the early twentieth century, the production of oil was increasingly concentrated in the hands of the Koninklijke/Shell Group.


Figure 1

Foreign Exports from the Netherlands-Indies, 1870-1940

(in millions of guilders, current values)

Source: Trade statistics

The momentum of profitable exports led to a broad expansion of economic activity in the Indonesian archipelago. Integration with the world market also led to internal economic integration when the road system, railroad system (in Java and Sumatra) and port system were improved. In shipping, an important contribution was made by the KPM (Koninklijke Paketvaart-Maatschappij, Royal Packet Boat Company), which served economic integration as well as imperialist expansion. Subsidized shipping lines into remote corners of the vast archipelago carried off export goods (such as forest products), supplied import goods and transported civil servants and soldiers.

The Depression of the 1930s hit the export economy severely. The sugar industry in Java collapsed and never really recovered from the crisis. For some products, such as rubber and copra, production was stepped up to compensate for lower prices; for this reason indigenous rubber producers evaded the international restriction agreements. The Depression precipitated the introduction of protectionist measures, which ended the liberal period that had started in 1870. Various import restrictions were launched, making the economy more self-sufficient (for example in the production of rice) and stimulating domestic integration. Due to the strong Dutch guilder (the Netherlands adhered to the gold standard until 1936), economic recovery took relatively long. The outbreak of World War II disrupted international trade, and the Japanese occupation (1942-1945) seriously disturbed and dislocated the economic order.

Table 4

Annual Average Growth in Key Economic Aggregates, 1830-1990 (in percent per year)

                                  GDP per capita   Export volume   Export prices   Government expenditure
Cultivation System 1830-1840           n.a.             13.5             5.0                8.5
Cultivation System 1840-1848           n.a.              1.5            -4.5            [very low]
Cultivation System 1849-1873           n.a.              1.5             1.5                2.6
Liberal Period 1874-1900           [very low]            3.1            -1.9                2.3
Ethical Period 1901-1928                1.7              5.8            17.4                4.1
Great Depression 1929-1934             -3.4             -3.9           -19.7                0.4
Prewar Recovery 1934-1940               2.5              2.2             7.8                3.4
Old Order 1950-1965                     1.0              0.8            -2.1                1.8
New Order 1966-1990                     4.4              5.4            11.6               10.6

Source: Booth 1998: 18.

Note: These average annual growth percentages were calculated by Booth by fitting an exponential curve to the data for the years indicated. Up to 1873 data refer only to Java.
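As a rough, purely illustrative sketch of the method mentioned in the note above (not Booth’s actual data or code), an average annual growth rate of this kind can be estimated by regressing the natural log of a series on a time trend, that is, by fitting an exponential curve by least squares. The example below, in Python, uses a hypothetical series constructed to grow at about 1.7 percent per year.

    # A minimal sketch of fitting an exponential (log-linear) trend; the series is hypothetical.
    import numpy as np

    years = np.arange(1901, 1929)                     # an illustrative 'Ethical Period'-style span
    values = 100.0 * 1.017 ** (years - years[0])      # hypothetical series growing ~1.7% per year

    slope, _ = np.polyfit(years, np.log(values), 1)   # slope of the fitted log-linear trend
    print(f"Fitted average annual growth: {(np.exp(slope) - 1) * 100:.1f}%")  # about 1.7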

The post-1945 period

After independence, the Indonesian economy had to recover from the hardships of the Japanese occupation and the war for independence (1945-1949), on top of the slow recovery from the 1930s Depression. During the period 1949-1965 there was little economic growth, and what growth there was occurred predominantly in the years 1950 to 1957. In 1958-1965, growth rates dwindled, largely due to political instability and inappropriate economic policy measures. The hesitant start of democracy was characterized by a power struggle between the president, the army, the communist party and other political groups. Exchange rate problems and the absence of foreign capital were detrimental to economic development, after the government had eliminated all foreign economic control in the private sector in 1957/58. Sukarno aimed at self-sufficiency and import substitution and estranged the suppliers of western capital even more when he developed communist sympathies.

After 1966, the second president, General Soeharto, restored the inflow of western capital, brought back political stability with a strong role for the army, and led Indonesia into a period of economic expansion under his authoritarian New Order (Orde Baru) regime, which lasted until 1997 (see below for the three phases of the New Order). In this period industrial output quickly increased, not only steel, aluminum, and cement but also products such as food, textiles and cigarettes. From the 1970s onward the increased oil price on the world market provided Indonesia with a massive income from oil and gas exports. Wood exports shifted from logs to plywood, pulp, and paper, at the price of large stretches of environmentally valuable rainforest.

Soeharto managed to apply part of these revenues to the development of technologically advanced manufacturing industry. Referring to this period of stable economic growth, the World Bank Report of 1993 speaks of an ‘East Asian Miracle’ emphasizing the macroeconomic stability and the investments in human capital (World Bank 1993: vi).

The financial crisis of 1997 revealed a number of hidden weaknesses in the economy, such as a feeble financial system (with a lack of transparency), unprofitable investments in real estate, and shortcomings in the legal system. The burgeoning corruption at all levels of the government bureaucracy became widely known as KKN (korupsi, kolusi, nepotisme). These practices came to characterize the final years of the strongly centralized, autocratic Soeharto regime, by then 32 years old.

From 1998 until present

Today, the Indonesian economy still suffers from severe economic development problems following the financial crisis of 1997 and the subsequent political reforms after Soeharto stepped down in 1998. Secessionist movements and the low level of security in the provincial regions, as well as relatively unstable policies, are among its present-day problems. Additional problems include the lack of reliable legal recourse in contract disputes, corruption, weaknesses in the banking system, and strained relations with the International Monetary Fund. The confidence of investors remains low, and in order to achieve future growth, internal reform will be essential to build up the confidence of international donors and investors.

An important issue on the reform agenda is regional autonomy, bringing a larger share of export profits to the areas of production instead of to metropolitan Java. However, decentralization policies do not necessarily improve national coherence or increase efficiency in governance.

A strong comeback in the global economy may be at hand, but as of the summer of 2003, when this was written, it had not yet fully taken place.

Additional Themes in the Indonesian Historiography

Indonesia is such a large and multi-faceted country that many different aspects have been the focus of research (for example, ethnic groups, trade networks, shipping, colonialism and imperialism). One can focus on smaller regions (provinces, islands), as well as on larger regions (the western archipelago, the eastern archipelago, the Outer Islands as a whole, or Indonesia within Southeast Asia). Without attempting to be exhaustive, this section examines eleven themes that have been the subject of debate in Indonesian economic history (on other debates see also Houben 2002: 53-55; Lindblad 2002b: 145-152; Dick 2002: 191-193; Thee 2002: 242-243).

The indigenous economy and the dualist economy

Although western entrepreneurs had an advantage in technological know-how and the supply of investment capital during the late-colonial period, there has traditionally been a strong and dynamic class of entrepreneurs (traders and peasants) in many regions of Indonesia. Resilient in times of economic malaise, and cunning in symbiosis with traders of other Asian nationalities (particularly the Chinese), the Indonesian entrepreneur has been rehabilitated after the relatively disparaging manner in which he was often pictured in the pre-1945 literature. One of these early writers, J.H. Boeke, initiated a school of thought centering on the idea of ‘economic dualism’ (referring to a modern western and a stagnant eastern sector). As a consequence, the term ‘dualism’ was often used to indicate western superiority. From the 1960s onward such ideas have been replaced by a more objective analysis of the dualist economy, one less judgmental about the characteristics of economic development in the Asian sector. Some focused on technological dualism (such as B. Higgins), others on ethnic specialization in different branches of production (see also Lindblad 2002b: 148; Touwen 2001: 316-317).

The characteristics of Dutch imperialism

Another vigorous debate concerns the character of, and the motives for, Dutch colonial expansion. Dutch imperialism can be viewed as a rather complex mix of political, economic and military motives, which influenced decisions about colonial borders, the establishment of political control in order to exploit oil and other natural resources, and the prevention of local uprisings. Three imperialist phases can be distinguished (Lindblad 2002a: 95-99). The first phase of imperialist expansion lasted from 1825 to 1870; during this phase interference with economic matters outside Java increased slowly, but military intervention was only occasional. The second phase started with the outbreak of the Aceh War in 1873 and lasted until 1896; during this phase initiatives in trade and foreign investment taken by the colonial government and by private businessmen were accompanied by an extension of colonial (military) control in the regions concerned. The third and final phase, characterized by full-scale aggressive imperialism (often known as ‘pacification’), lasted from 1896 until 1907.

The impact of the cultivation system on the indigenous economy

The thesis of ‘agricultural involution’ was advocated by Clifford Geertz (1963) and states that a process of stagnation characterized the rural economy of Java in the nineteenth century. After extensive research, this view has generally been discarded. Colonial economic growth was stimulated first by the Cultivation System, later by the promotion of private enterprise. Non-farm employment and purchasing power increased in the indigenous economy, although there was much regional inequality (Lindblad 2002a: 80; 2002b:149-150).

Regional diversity in export-led economic expansion

The contrast between densely populated Java, which had long been dominant in economic and political terms, and the Outer Islands, a large and sparsely populated area, is obvious. Among the Outer Islands we can distinguish between areas which were propelled forward by export trade, whether of Indonesian or European origin (examples are Palembang, East Sumatra and Southeast Kalimantan), and areas which stayed behind and only slowly reaped the fruits of the modernization that took place elsewhere (for example Benkulu, Timor and Maluku) (Touwen 2001).

The development of the colonial state and the role of Ethical Policy

Well into the second half of the nineteenth century, official Dutch policy was to abstain from interference in local affairs, and the scarce resources of the Dutch colonial administrators were to be reserved for Java. When the Aceh War initiated a period of imperialist expansion and consolidation of colonial power, a call for more concern with indigenous affairs was heard in Dutch politics. The result was the official Ethical Policy, launched in 1901, which had the threefold aim of improving indigenous welfare, expanding the educational system, and allowing for some indigenous participation in government (resulting in the People’s Council (Volksraad), which was installed in 1918 but had only an advisory role). The results of the Ethical Policy, as measured for example by improvements in agricultural technology, education, or welfare services, are still subject to debate (Lindblad 2002b: 149).

Living conditions of coolies at the agricultural estates

The plantation economy, which developed in the sparsely populated Outer Islands (predominantly in Sumatra) between 1870 and 1942, was in dire need of labor. The labor shortage was solved by recruiting contract laborers (coolies) in China, and later in Java. The Coolie Ordinance was a government regulation that included the penal clause (which allowed for punishment of laborers by plantation owners). In response to reported abuses, the colonial government established the Labor Inspectorate (1908), which aimed at preventing abuse of coolies on the estates. The living circumstances and treatment of the coolies have been the subject of debate, particularly regarding the question of whether the government put enough effort into protecting the interests of the workers or allowed abuse to persist (Lindblad 2002b: 150).

Colonial drain

How large a proportion of economic profits was drained away from the colony to the mother country? The detrimental effects of this drain of capital, in return for which European entrepreneurial initiative was received, have been debated, as have the exact methods of measuring it. There was also a second drain, to the home countries of other immigrant ethnic groups, mainly China (Van der Eng 1998; Lindblad 2002b: 151).

The position of the Chinese in the Indonesian economy

In the colonial economy, the Chinese intermediary trader or middleman played a vital role in supplying credit and stimulating the cultivation of export crops such as rattan, rubber and copra. The colonial legal system made an explicit distinction between Europeans, Chinese and Indonesians. This formed the roots of later ethnic problems, since the Chinese minority in Indonesia has gained an important (and sometimes envied) position as capital owners and entrepreneurs. When threatened by political and social turmoil, Chinese business networks may at times have channeled capital funds to overseas deposits.

Economic chaos during the ‘Old Order’

The ‘Old Order’ period (1945-1965) was characterized by economic (and political) chaos, although some economic growth undeniably took place during these years. However, macroeconomic instability, lack of foreign investment and structural rigidity were economic problems closely connected with the political power struggle. Sukarno, the first president of the Indonesian republic, had an outspoken dislike of colonialism, and his efforts to eliminate foreign economic control were not always supportive of the struggling economy of the new sovereign state. The ‘Old Order’ has long been a ‘lost area’ in Indonesian economic history, but the establishment of the unitary state and the settlement of major political issues, including some degree of territorial consolidation (as well as the consolidation of the role of the army), were essential for the development of a national economy (Dick 2002: 190; Mackie 1967).

Development policy and economic planning during the ‘New Order’ period

The ‘New Order’ (Orde Baru) of Soeharto rejected political mobilization and socialist ideology, and established a tightly controlled regime that discouraged intellectual enquiry but put Indonesia’s economy back on the rails. New flows of foreign investment and foreign aid were attracted, unbridled population growth was reduced through family planning programs, and a transformation took place from a predominantly agricultural economy to an industrializing one. Thee Kian Wie distinguishes three phases within this period, each of which deserves further study:

(a) 1966-1973: stabilization, rehabilitation, partial liberalization and economic recovery;

(b) 1974-1982: oil booms, rapid economic growth, and increasing government intervention;

(c) 1983-1996: post-oil boom, deregulation, renewed liberalization (in reaction to falling oil-prices), and rapid export-led growth. During this last phase, commentators (including academic economists) were increasingly concerned about the thriving corruption at all levels of the government bureaucracy: KKN (korupsi, kolusi, nepotisme) practices, as they later became known (Thee 2002: 203-215).

Financial, economic and political crisis: KRISMON, KRISTAL

The financial crisis of 1997 started with a crisis of confidence following the depreciation of the Thai baht in July 1997. Core factors causing the ensuing economic crisis in Indonesia were the quasi-fixed exchange rate of the rupiah, quickly rising short-term foreign debt and the weak financial system. Its severity must also be attributed to political factors: the monetary crisis (KRISMON) became a total crisis (KRISTAL) because of the failing policy response of the Soeharto regime. Soeharto had been in power for 32 years, and his government, heavily centralized and corrupt, was not able to cope with the crisis in a credible manner. The origins, economic consequences, and socio-economic impact of the crisis are still under discussion (Thee 2003: 231-237; Arndt and Hill 1999).

(Note: I want to thank Dr. F. Colombijn and Dr. J.Th Lindblad at Leiden University for their useful comments on the draft version of this article.)

Selected Bibliography

In addition to the works cited in the text above, a small selection of recent books is mentioned here, which will allow the reader to quickly grasp the most recent insights and find useful further references.

General textbooks or periodicals on Indonesia’s (economic) history:

Booth, Anne. The Indonesian Economy in the Nineteenth and Twentieth Centuries: A History of Missed Opportunities. London: Macmillan, 1998.

Bulletin of Indonesian Economic Studies.

Dick, H.W., V.J.H. Houben, J.Th. Lindblad and Thee Kian Wie. The Emergence of a National Economy in Indonesia, 1800-2000. Sydney: Allen & Unwin, 2002.

Itinerario “Economic Growth and Institutional Change in Indonesia in the 19th and 20th centuries” [special issue] 26 no. 3-4 (2002).

Reid, Anthony. Southeast Asia in the Age of Commerce, 1450-1680, Vol. I: The Lands below the Winds. New Haven: Yale University Press, 1988.

Reid, Anthony. Southeast Asia in the Age of Commerce, 1450-1680, Vol. II: Expansion and Crisis. New Haven: Yale University Press, 1993.

Ricklefs, M.C. A History of Modern Indonesia since ca. 1300. Basingstoke/London: Macmillan, 1993.

On the VOC:

Gaastra, F.S. De Geschiedenis van de VOC. Zutphen: Walburg Pers, 1991 (1st edition), 2002 (4th edition).

Jacobs, Els M. Koopman in Azië: de Handel van de Verenigde Oost-Indische Compagnie tijdens de 18de Eeuw. Zutphen: Walburg Pers, 2000.

Nagtegaal, Lucas. Riding the Dutch Tiger: The Dutch East Indies Company and the Northeast Coast of Java 1680-1743. Leiden: KITLV Press, 1996.

On the Cultivation System:

Elson, R.E. Village Java under the Cultivation System, 1830-1870. Sydney: Allen and Unwin, 1994.

Fasseur, C. Kultuurstelsel en Koloniale Baten. De Nederlandse Exploitatie van Java, 1840-1860. Leiden: Universitaire Pers, 1975. (Translated as: The Politics of Colonial Exploitation: Java, the Dutch and the Cultivation System. Ithaca, NY: Southeast Asia Program, Cornell University Press, 1992.)

Geertz, Clifford. Agricultural Involution: The Processes of Ecological Change in Indonesia. Berkeley: University of California Press, 1963.

Houben, V.J.H. “Java in the Nineteenth Century: Consolidation of a Territorial State.” In The Emergence of a National Economy in Indonesia, 1800-2000, edited by H.W. Dick, V.J.H. Houben, J.Th. Lindblad and Thee Kian Wie, 56-81. Sydney: Allen & Unwin, 2002.

On the Late-Colonial Period:

Dick, H.W. “Formation of the Nation-state, 1930s-1966.” In The Emergence of a National Economy in Indonesia, 1800-2000, edited by H.W. Dick, V.J.H. Houben, J.Th. Lindblad and Thee Kian Wie, 153-193. Sydney: Allen & Unwin, 2002.

Lembaran Sejarah, “Crisis and Continuity: Indonesian Economy in the Twentieth Century” [special issue] 3 no. 1 (2000).

Lindblad, J.Th., editor. New Challenges in the Modern Economic History of Indonesia. Leiden: PRIS, 1993. Translated as: Sejarah Ekonomi Modern Indonesia. Berbagai Tantangan Baru. Jakarta: LP3ES, 2002.

Lindblad, J.Th., editor. The Historical Foundations of a National Economy in Indonesia, 1890s-1990s. Amsterdam: North-Holland, 1996.

Lindblad, J.Th. “The Outer Islands in the Nineteenth Century: Contest for the Periphery.” In The Emergence of a National Economy in Indonesia, 1800-2000, edited by H.W. Dick, V.J.H. Houben, J.Th. Lindblad and Thee Kian Wie, 82-110. Sydney: Allen & Unwin, 2002a.

Lindblad, J.Th. “The Late Colonial State and Economic Expansion, 1900-1930s.” In The Emergence of a National Economy in Indonesia, 1800-2000, edited by H.W. Dick, V.J.H. Houben, J.Th. Lindblad and Thee Kian Wie, 111-152. Sydney: Allen & Unwin, 2002b.

Touwen, L.J. Extremes in the Archipelago: Trade and Economic Development in the Outer Islands of Indonesia, 1900‑1942. Leiden: KITLV Press, 2001.

Van der Eng, Pierre. “Exploring Exploitation: The Netherlands and Colonial Indonesia, 1870-1940.” Revista de Historia Económica 16 (1998): 291-321.

Zanden, J.L. van, and A. van Riel. Nederland, 1780-1914: Staat, instituties en economische ontwikkeling. Amsterdam: Balans, 2000. (On the Netherlands in the nineteenth century.)

Independent Indonesia:

Arndt, H.W. and Hal Hill, editors. Southeast Asia’s Economic Crisis: Origins, Lessons and the Way forward. Singapore: Institute of Southeast Asian Studies, 1999.

Cribb, R. and C. Brown. Modern Indonesia: A History since 1945. London/New York: Longman, 1995.

Feith, H. The Decline of Constitutional Democracy in Indonesia. Ithaca, New York: Cornell University Press, 1962.

Hill, Hal. The Indonesian Economy. Cambridge: Cambridge University Press, 2000. (This is the extended second edition of Hill, H., The Indonesian Economy since 1966. Southeast Asia’s Emerging Giant. Cambridge: Cambridge University Press, 1996.)

Hill, Hal, editor. Unity and Diversity: Regional Economic Development in Indonesia since 1970. Singapore: Oxford University Press, 1989.

Mackie, J.A.C. “The Indonesian Economy, 1950-1960.” In The Economy of Indonesia: Selected Readings, edited by B. Glassburner, 16-69. Ithaca NY: Cornell University Press 1967.

Robison, Richard. Indonesia: The Rise of Capital. Sydney: Allen and Unwin, 1986.

Thee Kian Wie. “The Soeharto Era and After: Stability, Development and Crisis, 1966-2000.” In The Emergence of a National Economy in Indonesia, 1800-2000, edited by H.W. Dick, V.J.H. Houben, J.Th. Lindblad and Thee Kian Wie, 194-243. Sydney: Allen & Unwin, 2002.

World Bank. The East Asian Miracle: Economic Growth and Public Policy. Oxford: World Bank /Oxford University Press, 1993.

On economic growth:

Booth, Anne. The Indonesian Economy in the Nineteenth and Twentieth Centuries. A History of Missed Opportunities. London: Macmillan, 1998.

Van der Eng, Pierre. “The Real Domestic Product of Indonesia, 1880-1989.” Explorations in Economic History 39 (1992): 343-373.

Van der Eng, Pierre. “Indonesia’s Growth Performance in the Twentieth Century.” In The Asian Economies in the Twentieth Century, edited by Angus Maddison, D.S. Prasada Rao and W. Shepherd, 143-179. Cheltenham: Edward Elgar, 2002.

Van der Eng, Pierre. “Indonesia’s Economy and Standard of Living in the Twentieth Century.” In Indonesia Today: Challenges of History, edited by G. Lloyd and S. Smith, 181-199. Singapore: Institute of Southeast Asian Studies, 2001.

Citation: Touwen, Jeroen. “The Economic History of Indonesia”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/the-economic-history-of-indonesia/

An Overview of the Great Depression

Randall Parker, East Carolina University

This article provides an overview of selected events and economic explanations of the interwar era. What follows is not intended to be a detailed and exhaustive review of the literature on the Great Depression, or of any one theory in particular. Rather, it will attempt to describe the “big picture” events and topics of interest. For the reader who wishes more extensive analysis and detail, references to additional materials are also included.

The 1920s

The Great Depression, and the economic catastrophe that it was, is perhaps properly scaled in reference to the decade that preceded it, the 1920s. By conventional macroeconomic measures, this was a decade of brisk economic growth in the United States. Perhaps the moniker “the roaring twenties” summarizes this period most succinctly. The disruptions and shocking nature of World War I had been survived and it was felt the United States was entering a “new era.” In January 1920, the Federal Reserve seasonally adjusted index of industrial production, a standard measure of aggregate economic activity, stood at 81 (1935–39 = 100). When the index peaked in July 1929 it was at 114, for a growth rate of 40.6 percent over this period. Similar rates of growth over the 1920–29 period equal to 47.3 percent and 42.4 percent are computed using annual real gross national product data from Balke and Gordon (1986) and Romer (1988), respectively. Further computations using the Balke and Gordon (1986) data indicate an average annual growth rate of real GNP over the 1920–29 period equal to 4.6 percent. In addition, the relative international economic strength of this country was clearly displayed by the fact that nearly one-half of world industrial output in 1925–29 was produced in the United States (Bernanke, 1983).
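The growth figures quoted above follow from standard percentage-change and compound-growth arithmetic. The short Python sketch below reproduces them approximately; small differences from the published figures reflect rounding and the method applied to the underlying series.

    # Illustrative arithmetic only; the published figures depend on the underlying series and method.
    ip_jan_1920, ip_jul_1929 = 81.0, 114.0       # index values quoted in the text (1935-39 = 100)
    total_growth = (ip_jul_1929 / ip_jan_1920 - 1) * 100
    print(f"Industrial production growth, 1/1920 to 7/1929: {total_growth:.1f}%")  # about 41%

    balke_gordon_total = 0.473                   # real GNP growth over 1920-29 cited in the text
    avg_annual = ((1 + balke_gordon_total) ** (1 / 9) - 1) * 100
    print(f"Implied average annual real GNP growth: {avg_annual:.1f}%")  # about 4.4%, near the 4.6% cited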

Consumer Durables Market

The decade of the 1920s also saw major innovations in the consumption behavior of households. The development of installment credit over this period led to substantial growth in the consumer durables market (Bernanke, 1983). Purchases of automobiles, refrigerators, radios and other such durable goods all experienced explosive growth during the 1920s as small borrowers, particularly households and unincorporated businesses, utilized their access to available credit (Persons, 1930; Bernanke, 1983; Soule, 1947).

Economic Growth in the 1920s

Economic growth during this period was interrupted only somewhat by three recessions. According to the National Bureau of Economic Research (NBER) business cycle chronology, two of these recessions ran from May 1923 through July 1924 and from October 1926 through November 1927. Both were very mild and unremarkable. In contrast, the 1920s began with a recession lasting 18 months, from the peak in January 1920 until the trough of July 1921. Original estimates of real GNP from the Commerce Department showed that real GNP fell 8 percent between 1919 and 1920 and another 7 percent between 1920 and 1921 (Romer, 1988). The behavior of prices contributed to the naming of this recession “the Depression of 1921,” as the implicit price deflator for GNP fell 16 percent and the Bureau of Labor Statistics wholesale price index fell 46 percent between 1920 and 1921. Although this so-called “postwar depression” was long thought to be severe, Romer (1988) has argued that it was not as severe as once believed. While the deflation from war-time prices was substantial, revised estimates of real GNP show falls in output of only 1 percent between 1919 and 1920 and 2 percent between 1920 and 1921. Romer (1988) also argues that the behaviors of output and prices are inconsistent with the conventional explanation of the Depression of 1921 as primarily driven by a decline in aggregate demand. Rather, the deflation and the mild recession are better understood as resulting from a decline in aggregate demand together with a series of positive supply shocks, particularly in the production of agricultural goods, and significant decreases in the prices of imported primary commodities. Overall, the upshot is that the growth path of output was hardly impeded by the three minor downturns, so that the decade of the 1920s can properly be viewed economically as a very healthy period.

Fed Policies in the 1920s

Friedman and Schwartz (1963) label the 1920s “the high tide of the Reserve System.” As they explain, the Federal Reserve became increasingly confident in the tools of policy and in its knowledge of how to use them properly. The synchronous movements of economic activity and explicit policy actions by the Federal Reserve did not go unnoticed. Taking the next step and concluding there was cause and effect, the Federal Reserve in the 1920s began to use monetary policy as an implement to stabilize business cycle fluctuations. “In retrospect, we can see that this was a major step toward the assumption by government of explicit continuous responsibility for economic stability. As the decade wore on, the System took – and perhaps even more was given – credit for the generally stable conditions that prevailed, and high hopes were placed in the potency of monetary policy as then administered” (Friedman and Schwartz, 1963).

The giving/taking of credit to/by the Federal Reserve has particular value pertaining to the recession of 1920–21. Although suggesting the Federal Reserve probably tightened too much, too late, Friedman and Schwartz (1963) call this episode “the first real trial of the new system of monetary control introduced by the Federal Reserve Act.” It is clear from the history of the time that the Federal Reserve felt as though it had successfully passed this test. The data showed that the economy had quickly recovered and brisk growth followed the recession of 1920–21 for the remainder of the decade.

Questionable Lessons “Learned” by the Fed

Moreover, Eichengreen (1992) suggests that the episode of 1920–21 led the Federal Reserve System to believe that the economy could be successfully deflated or “liquidated” without paying a severe penalty in terms of reduced output. This conclusion, however, proved to be mistaken at the onset of the Depression. As argued by Eichengreen (1992), the Federal Reserve did not appreciate the extent to which the successful deflation could be attributed to the unique circumstances that prevailed during 1920–21. The European economies were still devastated after World War I, so the demand for United States’ exports remained strong many years after the War. Moreover, the gold standard was not in operation at the time. Therefore, European countries were not forced to match the deflation initiated in the United States by the Federal Reserve (explained below pertaining to the gold standard hypothesis).

The implication is that the Federal Reserve thought that deflation could be generated with little effect on real economic activity. Therefore, the Federal Reserve was not vigorous in fighting the Great Depression in its initial stages. It viewed the early years of the Depression as another opportunity to successfully liquidate the economy, especially after the perceived speculative excesses of the 1920s. However, the state of the economic world in 1929 was not a duplicate of 1920–21. By 1929, the European economies had recovered and the interwar gold standard was a vehicle for the international transmission of deflation. Deflation in 1929 would not operate as it did in 1920–21. The Federal Reserve failed to understand the economic implications of this change in the international standing of the United States’ economy. The result was that the Depression was permitted to spiral out of control and was made much worse than it otherwise would have been had the Federal Reserve not considered it to be a repeat of the 1920–21 recession.

The Beginnings of the Great Depression

In January 1928 the seeds of the Great Depression, whenever they were planted, began to germinate. For it is around this time that two of the most prominent explanations for the depth, length, and worldwide spread of the Depression first came to be manifest. Without any doubt, the economics profession would come to a firm consensus around the idea that the economic events of the Great Depression cannot be properly understood without a solid linkage to both the behavior of the supply of money together with Federal Reserve actions on the one hand and the flawed structure of the interwar gold standard on the other.

It is well documented that many public officials, such as President Herbert Hoover and members of the Federal Reserve System in the latter 1920s, were intent on ending what they perceived to be the speculative excesses that were driving the stock market boom. Moreover, as explained by Hamilton (1987), despite plentiful denials to the contrary, the Federal Reserve assumed the role of “arbiter of security prices.” Although there continues to be debate as to whether or not the stock market was overvalued at the time (White, 1990; DeLong and Schleifer, 1991), the main point is that the Federal Reserve believed there to be a speculative bubble in equity values. Hamilton (1987) describes how the Federal Reserve, intending to “pop” the bubble, embarked on a highly contractionary monetary policy in January 1928. Between December 1927 and July 1928 the Federal Reserve conducted $393 million of open market sales of securities so that only $80 million remained in the Open Market account. Buying rates on bankers’ acceptances[1] were raised from 3 percent in January 1928 to 4.5 percent by July, reducing Federal Reserve holdings of such bills by $193 million, leaving a total of only $185 million of these bills on balance. Further, the discount rate was increased from 3.5 percent to 5 percent, the highest level since the recession of 1920–21. “In short, in terms of the magnitudes consciously controlled by the Fed, it would be difficult to design a more contractionary policy than that initiated in January 1928” (Hamilton, 1987).

The pressure did not stop there, however. The death of Federal Reserve Bank President Benjamin Strong and the subsequent control of policy ascribed to Adolph Miller of the Federal Reserve Board insured that the fall in the stock market was going to be made a reality. Miller believed the speculative excesses of the stock market were hurting the economy, and the Federal Reserve continued attempting to put an end to this perceived harm (Cecchetti, 1998). The amount of Federal Reserve credit that was being extended to market participants in the form of broker loans became an issue in 1929. The Federal Reserve adamantly discouraged lending that was collateralized by equities. The intentions of the Board of Governors of the Federal Reserve were made clear in a letter dated February 2, 1929 sent to Federal Reserve banks. In part the letter read:

The board has no disposition to assume authority to interfere with the loan practices of member banks so long as they do not involve the Federal reserve banks. It has, however, a grave responsibility whenever there is evidence that member banks are maintaining speculative security loans with the aid of Federal reserve credit. When such is the case the Federal reserve bank becomes either a contributing or a sustaining factor in the current volume of speculative security credit. This is not in harmony with the intent of the Federal Reserve Act, nor is it conducive to the wholesome operation of the banking and credit system of the country. (Board of Governors of the Federal Reserve 1929: 93–94, quoted from Cecchetti, 1998)

The deflationary pressure on stock prices had been applied. It was now a question of when the market would break. Although the effects were not immediate, the wait was not long.

The Economy Stumbles

The NBER business cycle chronology dates the start of the Great Depression in August 1929. For this reason many have said that the Depression started on Main Street and not Wall Street. Be that as it may, the stock market plummeted in October of 1929. The bursting of the speculative bubble had been achieved and the economy was now headed in an ominous direction. The Federal Reserve’s seasonally adjusted index of industrial production stood at 114 (1935–39 = 100) in August 1929. By October it had fallen to 110 for a decline of 3.5 percent (annualized percentage decline = 14.7 percent). After the crash, the incipient recession intensified, with the industrial production index falling from 110 in October to 100 in December 1929, or 9 percent (annualized percentage decline = 41 percent). In 1930, the index fell further from 100 in January to 79 in December, or an additional 21 percent.

Links between the Crash and the Depression?

While popular history treats the crash and the Depression as one and the same event, economists know that they were not. But there is no doubt that the crash was one of the things that got the ball rolling. Several authors have offered explanations for the linkage between the crash and the recession of 1929–30. Mishkin (1978) argues that the crash and an increase in liabilities led to a deterioration in households’ balance sheets. The reduced liquidity[2] led consumers to defer consumption of durable goods and housing and thus contributed to a fall in consumption. Temin (1976) suggests that the fall in stock prices had a negative wealth effect on consumption, but attributes only a minor role to this given that stocks were not a large fraction of total wealth; the stock market in 1929, although falling dramatically, remained above the value it had achieved in early 1928, and the propensity to consume from wealth was small during this period. Romer (1990) provides evidence suggesting that if the stock market were thought to be a predictor of future economic activity, then the crash can rightly be viewed as a source of increased consumer uncertainty that depressed spending on consumer durables and accelerated the decline that had begun in August 1929. Flacco and Parker (1992) confirm Romer’s findings using different data and alternative estimation techniques.

Looking back on the behavior of the economy during the year of 1930, industrial production declined 21 percent, the consumer price index fell 2.6 percent, the supply of high-powered money (that is, the liabilities of the Federal Reserve that are usable as money, consisting of currency in circulation and bank reserves; also called the monetary base) fell 2.8 percent, the nominal supply of money as measured by M1 (the product of the monetary base[3] multiplied by the money multiplier[4]) dipped 3.5 percent and the ex post real interest rate turned out to be 11.3 percent, the highest it had been since the recession of 1920–21 (Hamilton, 1987). In spite of this, when put into historical context, there was no reason to view the downturn of 1929–30 as historically unprecedented. Its magnitude was comparable to that of many recessions that had previously occurred. Perhaps there was justifiable optimism in December 1930 that the economy might even shake off the negative movement and embark on the path to recovery, rather like what had occurred after the recession of 1920–21 (Bernanke, 1983). As we know, the bottom would not come for another 27 months.

The Economy Crumbles

Banking Failures

During 1931, there was a “change in the character of the contraction” (Friedman and Schwartz, 1963). Beginning in October 1930 and lasting until December 1930, the first of a series of banking panics now accompanied the downward spasms of the business cycle. Although bank failures had occurred throughout the 1920s, the magnitude of the failures that occurred in the early 1930s was of a different order altogether (Bernanke, 1983). The absence of any type of deposit insurance resulted in the contagion of the panics being spread to sound financial institutions and not just those on the margin.

Traditional Methods of Combating Bank Runs Not Used

Moreover, institutional arrangements that had existed in the private banking system to provide liquidity – to convert assets into cash – to fight bank runs before 1913 were not exercised after the creation of the Federal Reserve System. For example, during the panic of 1907, the effects of the financial upheaval had been contained through a combination of lending activities by private banks, called clearinghouses, and the suspension of deposit convertibility into currency. While these countermeasures enacted by private banks did not prevent bank runs and financial panic, they lessened the economic impact to a significant extent, and the economy quickly recovered in 1908. The aftermath of the panic of 1907 and the desire to have a central authority to combat the contagion of financial disruptions was one of the factors that led to the establishment of the Federal Reserve System. After the creation of the Federal Reserve, clearinghouse lending and suspension of deposit convertibility by private banks were not undertaken. Because the Federal Reserve was believed to be the “lender of last resort,” it was apparently thought that the responsibility to fight bank runs was the domain of the central bank (Friedman and Schwartz, 1963; Bernanke, 1983). Unfortunately, when the banking panics came in waves and the financial system was collapsing, being the “lender of last resort” was a responsibility that the Federal Reserve either could not or would not assume.

Money Supply Contracts

The economic effects of the banking panics were devastating. Aside from the obvious impact of the closing of failed banks and the subsequent loss of deposits by bank customers, the money supply accelerated its downward spiral. Although the economy had flattened out after the first wave of bank failures in October–December 1930, with the industrial production index steadying from 79 in December 1930 to 80 in April 1931, the remainder of 1931 brought a series of shocks from which the economy was not to recover for some time.

Second Wave of Banking Failure

In May, the failure of Austria’s largest bank, the Kreditanstalt, touched off financial panics in Europe. In September 1931, having had enough of the distress associated with the international transmission of economic depression, Britain abandoned its participation in the gold standard. Further, just as the United States’ economy appeared to be trying to begin recovery, the second wave of bank failures hit the financial system in June and did not abate until December. In addition, the Hoover administration in December 1931, adhering to its principles of limited government, embarked on a campaign to balance the federal budget. Tax increases resulted the following June, just as the economy was to hit the first low point of its so-called “double bottom” (Hoover, 1952).

The results of these events are now evident. Between January and December 1931 the industrial production index declined from 78 to 66, or 15.4 percent, the consumer price index fell 9.4 percent, the nominal supply of M1 dipped 5.7 percent, the ex post real interest rate[5] remained at 11.3 percent, and although the supply of high-powered money[6] actually increased 5.5 percent, the currency–deposit and reserve–deposit ratios began their upward ascent, and thus the money multiplier started its downward plunge (Hamilton, 1987). If the economy had flattened out in the spring of 1931, then by December output, the money supply, and the price level were all on negative growth paths that were dragging the economy deeper into depression.
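The mechanism at work here can be made explicit with the standard textbook money-multiplier relation, M1 = monetary base × (1 + c)/(c + r), where c is the currency–deposit ratio and r the reserve–deposit ratio. The sketch below uses purely hypothetical numbers, not the historical values, simply to show that when panics push both ratios up, the multiplier and hence M1 fall even if the monetary base does not.

    # Standard textbook money-multiplier relation; all numbers are hypothetical.
    def m1(base, c, r):
        # M1 implied by the monetary base and the currency- and reserve-deposit ratios.
        return base * (1 + c) / (c + r)

    base = 7.0                        # hypothetical monetary base
    print(m1(base, c=0.15, r=0.10))   # calm times: multiplier about 4.6, M1 about 32.2
    print(m1(base, c=0.25, r=0.15))   # panic raises both ratios: multiplier about 3.1, M1 about 21.9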

Third Wave of Banking Failure

The economic difficulties were far from over. The economy displayed some evidence of recovery in late summer/early fall of 1932. However, in December 1932 the third, and largest, wave of banking panics hit the financial markets and the collapse of the economy arrived with the business cycle hitting bottom in March 1933. Industrial production between January 1932 and March 1933 fell an additional 15.6 percent. For the combined years of 1932 and 1933, the consumer price index fell a cumulative 16.2 percent, the nominal supply of M1 dropped 21.6 percent, the nominal M2 money supply fell 34.7 percent, and although the supply of high-powered money increased 8.4 percent, the currency–deposit and reserve–deposit ratios accelerated their upward ascent. Thus the money multiplier continued on a downward plunge that was not arrested until March 1933. Similar behaviors for real GDP, prices, money supplies and other key macroeconomic variables occurred in many European economies as well (Snowdon and Vane, 1999; Temin, 1989).

An examination of the macroeconomic data in August 1929 compared to March 1933 provides a stark contrast. The unemployment rate of 3 percent in August 1929 was at 25 percent in March 1933. The industrial production index of 114 in August 1929 was at 54 in March 1933, or a 52.6 percent decrease. The money supply had fallen 35 percent, prices plummeted by about 33 percent, and more than one-third of banks in the United States were either closed or taken over by other banks. The “new era” ushered in by “the roaring twenties” was over. Roosevelt took office in March 1933, a nationwide bank holiday was declared from March 6 until March 13, and the United States abandoned the international gold standard in April 1933. Recovery commenced immediately and the economy began its long path back to the pre-1929 secular growth trend.

Table 1 summarizes the drop in industrial production in the major economies of Western Europe and North America. Table 2 gives gross national product estimates for the United States from 1928 to 1941. The constant price series adjusts for inflation and deflation.

Table 1
Indices of Total Industrial Production, 1927 to 1935 (1929 = 100)

1927 1928 1929 1930 1931 1932 1933 1934 1935
Britain 95 94 100 94 86 89 95 105 114
Canada 85 94 100 91 78 68 69 82 90
France 84 94 100 99 85 74 83 79 77
Germany 95 100 100 86 72 59 68 83 96
Italy 87 99 100 93 84 77 83 85 99
Netherlands 87 94 100 109 101 90 90 93 95
Sweden 85 88 100 102 97 89 93 111 125
U.S. 85 90 100 83 69 55 63 69 79

Source: Industrial Statistics, 1900-57 (Paris, OEEC, 1958), Table 2.

Table 2
U.S. GNP at Constant (1929) and Current Prices, 1928-1941

Year GNP at constant (1929) prices (billions of $) GNP at current prices (billions of $)
1928 98.5 98.7
1929 104.4 104.6
1930 95.1 91.2
1931 89.5 78.5
1932 76.4 58.6
1933 74.2 56.1
1934 80.8 65.5
1935 91.4 76.5
1936 100.9 83.1
1937 109.1 91.2
1938 103.2 85.4
1939 111.0 91.2
1940 121.0 100.5
1941 131.7 124.7

Contemporary Explanations

The economics profession during the 1930s was at a loss to explain the Depression. The most prominent conventional explanations were of two types. First, some observers at the time firmly grounded their explanations on the two pillars of classical macroeconomic thought, Say’s Law and the belief in the self-equilibrating powers of the market. Many argued that it was simply a question of time before wages and prices adjusted fully enough for the economy to return to full employment and achieve the realization of the putative axiom that “supply creates its own demand.” Second, the Austrian school of thought argued that the Depression was the inevitable result of overinvestment during the 1920s. The best remedy for the situation was to let the Depression run its course so that the economy could be purified from the negative effects of the false expansion. Government intervention was viewed by the Austrian school as a mechanism that would simply prolong the agony and make any subsequent depression worse than it would ordinarily be (Hayek, 1966; Hayek, 1967).

Liquidationist Theory

The Hoover administration and the Federal Reserve Board also contained several so-called “liquidationists.” These individuals basically believed that economic agents should be forced to re-arrange their spending proclivities and alter their alleged profligate use of resources. If it took mass bankruptcies to produce this result and wipe the slate clean so that everyone could have a fresh start, then so be it. The liquidationists viewed the events of the Depression as an economic penance for the speculative excesses of the 1920s. Thus, the Depression was the price that was being paid for the misdeeds of the previous decade. This is perhaps best exemplified in the well-known quotation of Treasury Secretary Andrew Mellon, who advised President Hoover to “Liquidate labor, liquidate stocks, liquidate the farmers, liquidate real estate.” Mellon continued, “It will purge the rottenness out of the system. High costs of living and high living will come down. People will work harder, live a more moral life. Values will be adjusted, and enterprising people will pick up the wrecks from less competent people” (Hoover, 1952). Hoover apparently followed this advice as the Depression wore on. He continued to reassure the public that if the principles of orthodox finance were faithfully followed, recovery would surely be the result.

The business press at the time was not immune from such liquidationist prescriptions either. The Commercial and Financial Chronicle, in an August 3, 1929 editorial entitled “Is Not Group Speculating Conspiracy, Fostering Sham Prosperity?” complained of the economy being replete with profligate spending including:

(a) The luxurious diversification of diet advantageous to dairy men … and fruit growers …; (b) luxurious dressing … more silk and rayon …; (c) free spending for automobiles and their accessories, gasoline, house furnishings and equipment, radios, travel, amusements and sports; (d) the displacement from the farms by tractors and autos of produce-consuming horses and mules to a number aggregating 3,700,000 for the period 1918–1928 … (e) the frills of education to thousands for whom places might better be reserved at bench or counter or on the farm. (Quoted from Nelson, 1991)

Persons, in a paper which appeared in the November 1930 Quarterly Journal of Economics, demonstrates that some academic economists also held similar liquidationist views.

Although certainly not universal, the descriptions above suggest that no small part of the conventional wisdom at the time believed the Depression to be a penitence for past sins. In addition, it was thought that the economy would be restored to full employment equilibrium once wages and prices adjusted sufficiently. Say’s Law will ensure the economy returns to health, and supply will create its own demand sufficient to return to prosperity, if we simply let the system work its way through. In his memoirs published in 1952, 20 years after his election defeat, Herbert Hoover continued to steadfastly maintain that if Roosevelt and the New Dealers had stuck to the policies his administration put in place, the economy would have made a full recovery within 18 months after the election of 1932. We have to intensify our resolve to “stay the course.” All will be well in time if we just “take our medicine.” In hindsight, it challenges the imagination to think up worse policy prescriptions for the events of 1929–33.

Modern Explanations

There remains considerable debate regarding the economic explanations for the behavior of the business cycle between August 1929 and March 1933. This section describes the main hypotheses that have been presented in the literature attempting to explain the causes for the depth, protracted length, and worldwide propagation of the Great Depression.

The United States’ experience, considering the preponderance of empirical results and historical simulations contained in the economic literature, can largely be accounted for by the monetary hypothesis of Friedman and Schwartz (1963) together with the nonmonetary/financial hypotheses of Bernanke (1983) and Fisher (1933). That is, most, but not all, of the characteristic phases of the business cycle and depth to which output fell from 1929 to 1933 can be accounted for by the monetary and nonmonetary/financial hypotheses. The international experience, well documented in Choudri and Kochin (1980), Hamilton (1988), Temin (1989), Bernanke and James (1991), and Eichengreen (1992), can be properly understood as resulting from a flawed interwar gold standard. Each of these hypotheses is explained in greater detail below.

Nonmonetary/Nonfinancial Theories

It should be noted that I do not include a section covering the nonmonetary/nonfinancial theories of the Great Depression. These theories, including Temin’s (1976) focus on autonomous consumption decline, the collapse of housing construction contained in Anderson and Butkiewicz (1980), the effects of the stock market crash, the uncertainty hypothesis of Romer (1990), and the Smoot–Hawley Tariff Act of 1930, are all worthy of mention and can rightly be apportioned some of the responsibility for initiating the Depression. However, any theory of the Depression must be able to account for the protracted problems associated with the punishing deflation imposed on the United States and the world during that era. While the nonmonetary/nonfinancial theories go a long way toward accounting for the impetus for, and the first year of, the Depression, my reading of the empirical results of the economic literature indicates that they do not have the explanatory power of the three other theories mentioned above to account for the depths to which the economy plunged.

Moreover, recent research by Olney (1999) argues convincingly that the decline in consumption was not autonomous at all. Rather, it resulted from high consumer indebtedness, which threatened future consumption spending because default was expensive. Olney shows that households were shouldering an unprecedented burden of installment debt – especially for automobiles. In addition, down payments were large and contracts were short. Missed installment payments triggered repossession, reducing consumer wealth in 1930 because households lost all acquired equity. Cutting consumption was the only viable strategy in 1930 for avoiding default.

The Monetary Hypothesis

In reviewing the economic history of the Depression above, it was mentioned that the supply of money fell by 35 percent, prices dropped by about 33 percent, and one-third of all banks vanished. Milton Friedman and Anna Schwartz, in their 1963 book A Monetary History of the United States, 1867–1960, call this massive drop in the supply of money “The Great Contraction.”

Friedman and Schwartz (1963) discuss and painstakingly document the synchronous movements of the real economy with the disruptions that occurred in the financial sector. They point out that the series of bank failures that occurred beginning in October 1930 worsened economic conditions in two ways. First, bank shareholder wealth was reduced as banks failed. Second, and most importantly, the bank failures were exogenous shocks and led to the drastic decline in the money supply. The persistent deflation of the 1930s follows directly from this “great contraction.”

Criticisms of Fed Policy

However, this raises an important question: Where was the Federal Reserve while the money supply and the financial system were collapsing? If the Federal Reserve was created in 1913 primarily to be the “lender of last resort” for troubled financial institutions, it was failing miserably. Friedman and Schwartz pin the blame squarely on the Federal Reserve and the failure of monetary policy to offset the contractions in the money supply. As the money multiplier continued on its downward path, the monetary base, rather than being aggressively increased, simply progressed slightly upwards on a gently positive sloping time path. As banks were failing in waves, was the Federal Reserve attempting to contain the panics by aggressively lending to banks scrambling for liquidity? The unfortunate answer is “no.” When the panics were occurring, was there discussion of suspending deposit convertibility or suspension of the gold standard, both of which had been successfully employed in the past? Again the unfortunate answer is “no.” Did the Federal Reserve consider the fact that it had an abundant supply of free gold, and therefore that monetary expansion was feasible? Once again the unfortunate answer is “no.” The argument can be summarized by the following quotation:

At all times throughout the 1929–33 contraction, alternative policies were available to the System by which it could have kept the stock of money from falling, and indeed could have increased it at almost any desired rate. Those policies did not involve radical innovations. They involved measures of a kind the System had taken in earlier years, of a kind explicitly contemplated by the founders of the System to meet precisely the kind of banking crisis that developed in late 1930 and persisted thereafter. They involved measures that were actually proposed and very likely would have been adopted under a slightly different bureaucratic structure or distribution of power, or even if the men in power had had somewhat different personalities. Until late 1931 – and we believe not even then – the alternative policies involved no conflict with the maintenance of the gold standard. Until September 1931, the problem that recurrently troubled the System was how to keep the gold inflows under control, not the reverse. (Friedman and Schwartz, 1963)

The inescapable conclusion is that it was a failure of the policies of the Federal Reserve System in responding to the crises of the time that made the Depression as bad as it was. If monetary policy had responded differently, the economic events of 1929–33 need not have been as they occurred. This assertion is supported by the results of Fackler and Parker (1994). Using counterfactual historical simulations, they show that if the Federal Reserve had kept the M1 money supply growing along its pre-October 1929 trend of 3.3 percent annually, most of the Depression would have been averted. McCallum (1990) also reaches similar conclusions employing a monetary base feedback policy in his counterfactual simulations.
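
To make the counterfactual concrete, the sketch below uses illustrative index numbers rather than Fackler and Parker’s actual series; the 35 percent cumulative contraction is the figure cited earlier in this article, and spreading that fall evenly over 1929–33 is an assumption made only for illustration.

```python
# Illustrative sketch only: index numbers, not Fackler and Parker's (1994) data.
# It compares a counterfactual M1 path growing at the pre-October 1929 trend of
# 3.3 percent per year with a stylized actual path that loses roughly 35 percent
# of its value between 1929 and 1933.

years = [1929, 1930, 1931, 1932, 1933]
m1_1929 = 100.0               # M1 indexed to 100 in 1929 (assumption)
trend_growth = 0.033          # pre-crash trend growth of M1
cumulative_fall = 0.35        # the roughly 35 percent contraction cited above

annual_actual_factor = (1 - cumulative_fall) ** (1 / 4)   # even fall over four years

for t, year in enumerate(years):
    counterfactual = m1_1929 * (1 + trend_growth) ** t
    stylized_actual = m1_1929 * annual_actual_factor ** t
    gap = 100 * (counterfactual - stylized_actual) / stylized_actual
    print(f"{year}: counterfactual {counterfactual:6.1f}   "
          f"stylized actual {stylized_actual:6.1f}   gap {gap:5.1f}%")
```

By 1933 the counterfactual index sits far above the stylized actual path, which conveys the flavor, though not the precise magnitudes, of the gap that the historical decompositions attribute to monetary policy.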

Lack of Leadership at the Fed

Friedman and Schwartz trace the seeds of these regrettable events to the death of Federal Reserve Bank of New York President Benjamin Strong in 1928. Strong’s death altered the locus of power in the Federal Reserve System and left it without effective leadership. Friedman and Schwartz maintain that Strong had the personality, confidence and reputation in the financial community to lead monetary policy and sway policy makers to his point of view. Friedman and Schwartz believe that Strong would not have permitted the financial panics and liquidity crises to persist and affect the real economy. After Governor Strong died, however, the conduct of open market operations passed from a five-man committee dominated by the New York Federal Reserve to a twelve-man committee of Federal Reserve Bank governors. Decisiveness in leadership was replaced by inaction and drift. Others (Temin, 1989; Wicker, 1965) reject this point, claiming the policies of the Federal Reserve in the 1930s were not inconsistent with the policies pursued in the decade of the 1920s.

The Fed’s Failure to Distinguish between Nominal and Real Interest Rates

Meltzer (1976) also points out errors made by the Federal Reserve. His argument is that the Federal Reserve failed to distinguish between nominal and real interest rates. That is, while nominal rates were falling, the Federal Reserve did virtually nothing, since it construed this to be a sign of an “easy” credit market. However, in the face of deflation, real rates were rising and there was in fact a “tight” credit market. Failure to make this distinction meant that monetary conditions became a contributing factor to the initial decline of 1929.
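
The distinction can be restated with the Fisher relation; the figures in the example are illustrative assumptions, not Meltzer’s estimates:

\[
r^{e} = i - \pi^{e}, \qquad \text{so with } i = 3\% \text{ and expected deflation } \pi^{e} = -10\%, \quad r^{e} = 3\% - (-10\%) = 13\% .
\]

A low and falling nominal rate could thus coexist with a high and rising ex ante real rate once deflation came to be expected, which is the sense in which the credit market was in fact “tight.”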

Deflation

Cecchetti (1992) and Nelson (1991) bolster the monetary hypothesis by demonstrating that the deflation during the Depression was anticipated at short horizons, once it was under way. The result, using the Fisher equation, is that high ex ante real interest rates were the transmission mechanism that led from falling prices to falling output. In addition, Cecchetti (1998) and Cecchetti and Karras (1994) argue that if the lower bound of the nominal interest rate is reached, then continued deflation renders the opportunity cost of holding money negative. In this instance the nature of money changes. Now the rate of deflation places a floor on the real return nonmoney assets must provide to make them attractive to hold. If they cannot exceed the rate on money holdings, then agents will move their assets into cash and the result will be negative net investment and a decapitalization of the economy.
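
A brief worked illustration of this floor argument, using an assumed deflation rate of 10 percent per year: at the lower bound the nominal return on money is roughly zero, so

\[
r_{\text{money}} = i - \pi \approx 0 - (-10\%) = 10\% ,
\]

and any nonmoney asset whose expected real return falls short of 10 percent is dominated by simply holding cash; assets are liquidated, net investment turns negative, and the decapitalization described above follows.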

Critics of the Monetary Hypothesis

The monetary hypothesis, however, is not without its detractors. Paul Samuelson observes that the monetary base did not fall during the Depression. Moreover, expecting the Federal Reserve to have aggressively increased the monetary base by whatever amount was necessary to stop the decline in the money supply is hindsight. A course of action for monetary policy such as this was beyond the scope of discussion prevailing at the time. In addition, others, like Moses Abramovitz, point out that the money supply had endogenous components that were beyond the Federal Reserve’s ability to control. Namely, the money supply may have been falling as a result of declining economic activity, or so-called “reverse causation.” Further, the gold standard, to which the United States continued to adhere until March 1933, also tied the hands of the Federal Reserve insofar as gold outflows required the Federal Reserve to contract the supply of money. These views are also contained in Temin (1989) and Eichengreen (1992), as discussed below.

Bernanke (1983) argues that the monetary hypothesis: (i) is not a complete explanation of the link between the financial sector and aggregate output in the 1930s; (ii) does not explain how it was that decreases in the money supply caused output to keep falling over many years, especially since it is widely believed that changes in the money supply only change prices and other nominal economic values in the long run, not real economic values like output; and (iii) is quantitatively insufficient to explain the depth of the decline in output. Bernanke (1983) not only resurrected and sharpened Fisher’s (1933) debt deflation hypothesis, but also made further contributions to what has come to be known as the nonmonetary/financial hypothesis.

The Nonmonetary/Financial Hypothesis

Bernanke (1983), building on the monetary hypothesis of Friedman and Schwartz (1963), presents an alternative interpretation of the way in which the financial crises may have affected output. The argument involves both the effects of debt deflation and the impact that bank panics had on the ability of financial markets to efficiently allocate funds from lenders to borrowers. These nonmonetary/financial theories hold that events in financial markets other than shocks to the money supply can help to account for the paths of output and prices during the Great Depression.

Fisher (1933) asserted that the dominant forces that account for “great” depressions are (nominal) over-indebtedness and deflation. Specifically, he argued that real debt burdens were substantially increased when there were dramatic declines in the price level and nominal incomes. The combination of deflation, falling nominal income and increasing real debt burdens led to debtor insolvency, lowered aggregate demand, and thereby contributed to a continuing decline in the price level and thus further increases in the real burden of debt.

The “Credit View”

Bernanke (1983), in what is now called the “credit view,” provided additional details to help explain Fisher’s debt deflation hypothesis. He argued that in normal circumstances, an initial decline in prices merely reallocates wealth from debtors to creditors, such as banks. Usually, such wealth redistributions are minor in magnitude and have no first-order impact on the economy. However, in the face of large shocks, deflation in the prices of assets forfeited to banks by debtor bankruptcies leads to a decline in the nominal value of assets on bank balance sheets. For a given value of bank liabilities, also denominated in nominal terms, this deterioration in bank assets threatens insolvency. As banks reallocate away from loans to safer government securities, some borrowers, particularly small ones, are unable to obtain funds, often at any price. Further, if this reallocation is long-lived, the shortage of credit for these borrowers helps to explain the persistence of the downturn. As the disappearance of bank financing forces lower expenditure plans, aggregate demand declines, which again contributes to the downward deflationary spiral. For debt deflation to be operative, it is necessary to demonstrate that there was a substantial build-up of debt prior to the onset of the Depression and that the deflation of the 1930s was at least partially unanticipated at medium- and long-term horizons at the time that the debt was being incurred. Both of these conditions appear to have been in place (Fackler and Parker, 2001; Hamilton, 1992; Evans and Wachtel, 1993).

The Breakdown in Credit Markets

In addition, the financial panics which occurred hindered the credit allocation mechanism. Bernanke (1983) explains that the process of credit intermediation requires substantial information gathering and non-trivial market-making activities. The financial disruptions of 1930–33 are correctly viewed as substantial impediments to the performance of these services, and thus as impairments to the efficient allocation of credit between lenders and borrowers. That is, financial panics and debtor and business bankruptcies resulted in an increase in the real cost of credit intermediation. As the cost of credit intermediation increased, sources of credit for many borrowers (especially households, farmers and small firms) became expensive or even unobtainable at any price. This tightening of credit put downward pressure on aggregate demand and helped turn the recession of 1929–30 into the Great Depression. The empirical support for the validity of the nonmonetary/financial hypothesis during the Depression is substantial (Bernanke, 1983; Fackler and Parker, 1994, 2001; Hamilton, 1987, 1992), although support for the “credit view” for the transmission mechanism of monetary policy in post-World War II economic activity is substantially weaker. In combination, considering the preponderance of empirical results and historical simulations contained in the economic literature, the monetary hypothesis and the nonmonetary/financial hypothesis go a substantial distance toward accounting for the economic experiences of the United States during the Great Depression.

The Role of Pessimistic Expectations

To this combination, the behavior of expectations should also be added. As explained by James Tobin, there was another reason for a “change in the character of the contraction” in 1931. Although Friedman and Schwartz attribute this “change” to the bank panics that occurred, Tobin points out that change also took place because of the emergence of pessimistic expectations. If it was thought that the early stages of the Depression were symptomatic of a recession that was not different in kind from similar episodes in our economic history, and that recovery was a real possibility, the public need not have had pessimistic expectations. Instead the public may have anticipated things would get better. However, after the British left the gold standard, expectations changed in a very pessimistic way. The public may very well have believed that the business cycle downturn was not going to be reversed, but rather was going to get worse than it was. When households and business investors begin to make plans based on the economy getting worse instead of making plans based on anticipations of recovery, the depressing economic effects on consumption and investment of this switch in expectations are common knowledge in the modern macroeconomic literature. For the literature on the Great Depression, the empirical research conducted on the expectations hypothesis focuses almost exclusively on uncertainty (which is not the same thing as pessimistic/optimistic expectations) and its contribution to the onset of the Depression (Romer, 1990; Flacco and Parker, 1992). Although Keynes (1936) writes extensively about the state of expectations and their economic influence, the literature is silent regarding the empirical validity of the expectations hypothesis in 1931–33. Yet, in spite of this, the continued shocks that the United States’ economy received demonstrated that the business cycle downturn of 1931–33 was of a different kind than had previously been known. Once the public believed this to be so and made their plans accordingly, the results had to have been economically devastating. There is no formal empirical confirmation and I have not segregated the expectations hypothesis as a separate hypothesis in the overview. However, the logic of the above argument compels me to be of the opinion that the expectations hypothesis provides an impressive addition to the monetary hypothesis and the nonmonetary/financial hypothesis in accounting for the economic experiences of the United States during the Great Depression.

The Gold Standard Hypothesis

Recent research on the operation of the interwar gold standard has deepened our understanding of the Depression and its international character. The way and manner in which the interwar gold standard was structured and operated provide a convincing explanation of the international transmission of deflation and depression that occurred in the 1930s.

The story has its beginning in the 1870–1914 period. During this time the gold standard functioned as a pegged exchange rate system where certain rules were observed. Namely, it was necessary for countries to permit their money supplies to be altered in response to gold flows in order for the price-specie flow mechanism to function properly. It operated successfully because countries that were gaining gold allowed their money supply to increase and raise the domestic price level to restore equilibrium and maintain the fixed exchange rate of their currency. Countries that were losing gold were obligated to permit their money supply to decrease and generate a decline in their domestic price level to restore equilibrium and maintain the fixed exchange rate of their currency. Eichengreen (1992) discusses and extensively documents that the gold standard of this period functioned as smoothly as it did because of the international commitment countries had to the gold standard and the level of international cooperation exhibited during this time. “What rendered the commitment to the gold standard credible, then, was that the commitment was international, not merely national. That commitment was activated through international cooperation” (Eichengreen, 1992).

The gold standard was suspended when the hostilities of World War I broke out. By the end of 1928, major countries such as the United States, the United Kingdom, France and Germany had re-established ties to a functioning fixed exchange rate gold standard. However, Eichengreen (1992) points out that the world in which the gold standard functioned before World War I was not the same world in which the gold standard was being re-established. A credible commitment to the gold standard, as Hamilton (1988) explains, required that a country maintain fiscal soundness and political objectives that ensured the monetary authority could pursue a monetary policy consistent with long-run price stability and continuous convertibility of the currency. Successful operation required these conditions to be in place before the gold standard was re-established. However, many governments during the interwar period returned to the gold standard under the opposite set of circumstances. They re-established ties to the gold standard because, owing to the political chaos generated after World War I, they were incapable of fiscal soundness and did not have political objectives conducive to reforming monetary policy such that it could ensure long-run price stability. “By this criterion, returning to the gold standard could not have come at a worse time or for poorer reasons” (Hamilton, 1988). Kindleberger (1973) stresses the fact that the pre-World War I gold standard functioned as well as it did because of the unquestioned leadership exercised by Great Britain. After World War I and the relative decline of Britain, the United States did not exhibit the same strength of leadership Britain had shown before. The upshot is that it was an unsuitable environment in which to re-establish the gold standard after World War I, and the interwar gold standard was destined to drift in a state of malperformance as no one took responsibility for its proper functioning. However, the problems did not end there.

Flaws in the Interwar International Gold Standard

Lack of Symmetry in the Response of Gold-Gaining and Gold-Losing Countries

The interwar gold standard operated with four structural/technical flaws that almost certainly doomed it to failure (Eichengreen, 1986; Temin, 1989; Bernanke and James, 1991). The first, and most damaging, was the lack of symmetry in the response of gold-gaining countries and gold-losing countries that resulted in a deflationary bias that was to drag the world deeper into deflation and depression. If a country was losing gold reserves, it was required to decrease its money supply to maintain its commitment to the gold standard. Given that a minimum gold reserve had to be maintained and that countries became concerned when the gold reserve fell within 10 percent of this minimum, little gold could be lost before the necessity of monetary contraction, and thus deflation, became a reality. Moreover, with a fractional gold reserve ratio of 40 percent, the result was a decline in the domestic money supply equal to 2.5 times the gold outflow. On the other hand, there was no such constraint on countries that experienced gold inflows. Gold reserves were accumulated without the binding requirement that the domestic money supply be expanded. Thus the price–specie flow mechanism ceased to function and the equilibrating forces of the pre-World War I gold standard were absent during the interwar period. If a country attracting gold reserves were to embark on a contractionary path, the result would be the further extraction of gold reserves from other countries on the gold standard and the imposition of deflation on their economies as well, as they were forced to contract their money supplies. “As it happened, both of the two major gold surplus countries – France and the United States, who at the time together held close to 60 percent of the world’s monetary gold – took deflationary paths in 1928–1929” (Bernanke and James, 1991).
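
The 2.5 multiple follows mechanically from the 40 percent reserve requirement; writing M for the domestic money supply and G for the gold reserve (notation introduced here only for the illustration),

\[
M = \frac{G}{0.40} = 2.5\,G \qquad \Longrightarrow \qquad \Delta M = \frac{\Delta G}{0.40} = 2.5\,\Delta G ,
\]

so a country defending its parity had to contract its money supply by two and a half times any gold it lost.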

Foreign Exchange Reserves

Second, countries that did not have reserve currencies could hold their minimum reserves in the form of both gold and convertible foreign exchange reserves. If the threat of devaluation of a reserve currency appeared likely, a country holding foreign exchange reserves could divest itself of the foreign exchange, as holding it became a more risky proposition. Further, the convertible reserves were usually only fractionally backed by gold. Thus, if countries were to prefer gold holdings as opposed to foreign exchange reserves for whatever reason, the result would be a contraction in the world money supply as reserves were destroyed in the movement to gold. This effect can be thought of as equivalent to the effect on the domestic money supply in a fractional reserve banking system of a shift in the public’s money holdings toward currency and away from bank deposits.

The Bank of France and Open Market Operations

Third, many European central banks had their powers restricted or certain operations prohibited outright. In particular, as discussed by Eichengreen (1986), the Bank of France was prohibited from engaging in open market operations, i.e. the purchase or sale of government securities. Given that France was one of the countries amassing gold reserves, this restriction largely prevented it from adhering to the rules of the gold standard. The proper response would have been to expand the supply of money and inflate so as not to continue to attract gold reserves and impose deflation on the rest of the world. This was not done. France continued to accumulate gold until 1932 and did not leave the gold standard until 1936.

Inconsistent Currency Valuations

Lastly, the gold standard was re-established at parities that were unilaterally determined by each individual country. When France returned to the gold standard in 1926, it returned at a parity rate that is believed to have undervalued the franc. When Britain returned to the gold standard in 1925, it returned at a parity rate that is believed to have overvalued the pound. In this situation, the only sustainable equilibrium required the French to inflate their economy in response to the gold inflows. However, given its legacy of inflation during the 1921–26 period, France steadfastly resisted inflation (Eichengreen, 1986). The maintenance of the gold standard and the resistance to inflation were now inconsistent policy objectives. The Bank of France’s inability to conduct open market operations only made matters worse. The accumulation of gold and the exporting of deflation to the world was the result.

The Timing of Recoveries

Taken together, the flaws described above made the interwar gold standard dysfunctional and in the end unsustainable. Looking back, we observe that the timing of departure from the gold standard, and of the subsequent recovery, differed across countries. For some countries recovery came sooner. For some it came later. It is in this timing of departure from the gold standard that recent research has produced a remarkable empirical finding. From the work of Choudri and Kochin (1980), Eichengreen and Sachs (1985), Temin (1989), and Bernanke and James (1991), we now know that the sooner a country abandoned the gold standard, the quicker recovery commenced. Spain, which never restored its participation in the gold standard, missed the ravages of the Depression altogether. Britain left the gold standard in September 1931, and started to recover. Sweden left the gold standard at the same time as Britain, and started to recover. The United States left in March 1933, and recovery commenced. France, Holland, and Poland, which continued to adhere to the gold standard until 1936, saw their economies continue to struggle after the United States’ recovery began. Only after they left did recovery start; departure from the gold standard freed a country from the ravages of deflation.

The Fed and the Gold Standard: The “Midas Touch”

Temin (1989) and Eichengreen (1992) argue that it was the unbending commitment to the gold standard that generated deflation and depression worldwide. They emphasize that the gold standard required fiscal and monetary authorities around the world to submit their economies to internal adjustment and economic instability in the face of international shocks. Given how the gold standard tied countries together, if the gold parity were to be defended and devaluation was not an option, unilateral monetary actions by any one country were pointless. The end result is that Temin (1989) and Eichengreen (1992) reject Friedman and Schwartz’s (1963) claim that the Depression was caused by a series of policy failures on the part of the Federal Reserve. Actions taken in the United States, according to Temin (1989) and Eichengreen (1992), cannot be properly understood in isolation with respect to the rest of the world. If the commitment to the gold standard was to be maintained, monetary and fiscal authorities worldwide had little choice in responding to the crises of the Depression. Why did the Federal Reserve continue a policy of inaction during the banking panics? Because the commitment to the gold standard, what Temin (1989) has labeled “The Midas Touch,” gave them no choice but to let the banks fail. Monetary expansion and the injection of liquidity would lower interest rates, lead to a gold outflow, and potentially be contrary to the rules of the gold standard. Continued deflation due to gold outflows would begin to call into question the monetary authority’s commitment to the gold standard. “Defending gold parity might require the authorities to sit idly by as the banking system crumbled, as the Federal Reserve did at the end of 1931 and again at the beginning of 1933” (Eichengreen, 1992). Thus, if the adherence to the gold standard were to be maintained, the money supply was endogenous with respect to the balance of payments and beyond the influence of the Federal Reserve.

Eichengreen (1992) concludes further that what made the pre-World War I gold standard so successful was absent during the interwar period: credible commitment to the gold standard activated through international cooperation in its implementation and management. Had these important ingredients of the pre-World War I gold standard been present during the interwar period, twentieth-century economic history may have been very different.

Recovery and the New Deal

March 1933 was the rock bottom of the Depression and the inauguration of Franklin D. Roosevelt represented a sharp break with the status quo. Upon taking office, Roosevelt declared a bank holiday; the United States left the interwar gold standard the following month, and the government commenced several measures designed to resurrect the financial system. These measures included: (i) the establishment of the Reconstruction Finance Corporation, which set about funneling large sums of liquidity to banks and other intermediaries; (ii) the Securities Exchange Act of 1934, which established margin requirements for bank loans used to purchase stocks and bonds and increased information requirements for potential investors; and (iii) the Glass–Steagall Act, which strictly separated commercial banking and investment banking. Although these measures delivered some immediate relief to financial markets, lenders continued to be reluctant to extend credit after the events of 1929–33, and the recovery of financial markets was slow and incomplete. Bernanke (1983) estimates that the United States’ financial system did not begin to shed the inefficiencies under which it was operating until the end of 1935.

The NIRA

Policies designed to promote different economic institutions were enacted as part of the New Deal. The National Industrial Recovery Act (NIRA) was passed on June 16, 1933 and was designed to raise prices and wages. In addition, the Act mandated the formation of planning boards in critical sectors of the economy. The boards were charged with setting output goals for their respective sector and the usual result was a restriction of production. In effect, the NIRA was a license for industries to form cartels and was struck down as unconstitutional in 1935. The Agricultural Adjustment Act of 1933 was similar legislation designed to reduce output and raise prices in the farming sector. It too was ruled unconstitutional in 1936.

Relief and Jobs Programs

Other policies intended to provide relief directly to people who were destitute and out of work were rapidly enacted. The Civilian Conservation Corps (CCC), the Tennessee Valley Authority (TVA), the Public Works Administration (PWA) and the Federal Emergency Relief Administration (FERA) were set up shortly after Roosevelt took office and provided jobs for the unemployed and grants to states for direct relief. The Civil Works Administration (CWA), which operated in 1933–34, and the Works Progress Administration (WPA), created in 1935, were also designed to provide work relief to the jobless. The Social Security Act was also passed in 1935. There surely are other programs with similar acronyms that have been left out, but the intent was the same. In the words of Roosevelt himself, addressing Congress in 1938:

Government has a final responsibility for the well-being of its citizenship. If private co-operative endeavor fails to provide work for the willing hands and relief for the unfortunate, those suffering hardship from no fault of their own have a right to call upon the Government for aid; and a government worthy of its name must make fitting response. (Quoted from Polenberg, 2000)

The Depression had shown the inaccuracy of classifying the 1920s as a “new era.” Rather, the “new era,” summed up in Roosevelt’s words above and marked by the government’s new involvement in the economy, began in March 1933.

The NBER business cycle chronology shows continuous growth from March 1933 until May 1937, at which time a 13-month recession hit the economy. The business cycle rebounded in June 1938 and continued on its upward march to and through the beginning of the United States’ involvement in World War II. The recovery that started in 1933 was impressive, with real GNP experiencing annual growth rates in the 10 percent range between 1933 and December 1941, excluding the recession of 1937–38 (Romer, 1993). However, as reported by Romer (1993), real GNP did not return to its pre-Depression level until 1937 and real GNP did not catch up to its pre-Depression secular trend until 1942. Indeed, the unemployment rate, peaking at 25 percent in March 1933, remained at or near double digits until 1940. It is in this sense that most economists attribute the ending of the Depression to the onset of World War II. The War brought complete recovery, with the unemployment rate plummeting after December 1941 to a wartime low of below 2 percent.

Explanations for the Pace of Recovery

The question remains, however, that if the War completed the recovery, what initiated it and sustained it through the end of 1941? Should we point to the relief programs of the New Deal and the leadership of Roosevelt? Certainly, they had psychological/expectational effects on consumers and investors and helped to heal the suffering experienced during that time. However, as shown by Brown (1956), Peppers (1973), and Raynold, McMillin and Beard (1991), fiscal policy contributed little to the recovery, and certainly could have done much more.

Once again we return to the financial system for answers. The abandonment of the gold standard, the impact this had on the money supply, and the deliverance from the economic effects of deflation would have to be singled out as the most important contributor to the recovery. Romer (1993) stresses that Eichengreen and Sachs (1985) have it right; recovery did not come before the decision to abandon the old gold parity was made operational. Once this became reality, devaluation of the currency permitted expansion in the money supply and inflation which, rather than promoting a policy of beggar-thy-neighbor, allowed countries to escape the deflationary vortex of economic decline. As discussed in connection with the gold standard hypothesis, the simultaneity of leaving the gold standard and recovery is a robust empirical result that reflects more than simple temporal coincidence.

Romer (1993) reports an increase in the monetary base in the United States of 52 percent between April 1933 and April 1937. The M1 money supply virtually matched this increase in the monetary base, with 49 percent growth over the same period. The sources of this increase were two-fold. First, aside from the immediate monetary expansion permitted by devaluation, as Romer (1993) explains, monetary expansion continued into 1934 and beyond as gold flowed to the United States from Europe due to the increasing political unrest and heightened probability of hostilities that began the progression to World War II. Second, the Treasury chose not to sterilize the gold inflows, and the increase in the money supply matched the increase in the monetary base. This is evidence that the monetary expansion resulted from policy decisions and not endogenous changes in the money multiplier. The new regime was freed from the constraints of the gold standard and the policy makers were intent on taking actions of a different nature than what had been done between 1929 and 1933.
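
The near-equality of the two growth rates is what identifies the expansion as a base-driven, policy phenomenon: if M1 and the base grow at almost the same rate, the money multiplier is essentially unchanged. The short arithmetic check below restates Romer’s two figures as index numbers; the index base of 100 is an assumption made only for illustration.

```python
# Arithmetic restatement of the growth rates reported by Romer (1993) for
# April 1933 to April 1937; indexing both series to 100 is an illustrative assumption.

base_1933, m1_1933 = 100.0, 100.0
base_1937 = base_1933 * 1.52      # monetary base up 52 percent
m1_1937 = m1_1933 * 1.49          # M1 up 49 percent

multiplier_change = m1_1937 / base_1937 - 1       # implied change in M1 per unit of base
annualized_base_growth = 1.52 ** (1 / 4) - 1      # compound annual growth of the base

print(f"Implied change in the money multiplier over the period: {multiplier_change:.1%}")
print(f"Compound annual growth of the monetary base: {annualized_base_growth:.1%}")
```

The implied multiplier changes by only about two percent over four years, so essentially all of the roughly 11 percent annual monetary growth came from the base, that is, from the gold inflows the Treasury chose not to sterilize.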

Incompleteness of the Recovery before WWII

The Depression had turned a corner and the economy was emerging from the abyss in 1933. However, it still had a long way to go to reach full recovery. Friedman and Schwartz (1963) comment that “the most notable feature of the revival after 1933 was not its rapidity but its incompleteness.” They claim that monetary policy and the Federal Reserve were passive after 1933. The monetary authorities did nothing to stop the fall from 1929 to 1933 and did little to promote the recovery. The Federal Reserve made no effort to increase the stock of high-powered money through the use of either open market operations or rediscounting; Federal Reserve credit outstanding remained “almost perfectly constant from 1934 to mid-1940” (Friedman and Schwartz, 1963). As we have seen above, it was the Treasury that was generating increases in the monetary base at the time by issuing gold certificates equal to the amount of gold reserve inflow and depositing them at the Federal Reserve. When the government spent the money, the Treasury swapped the gold certificates for Federal Reserve notes and this expanded the monetary base (Romer, 1993). Monetary policy was thought to be powerless to promote recovery, and instead fiscal policy became the implement of choice. Ironically, the research shows that fiscal policy, the vehicle now at the center of attention, could have done much more to aid the recovery than it did. There is an easy explanation for why this is so.

The Emergence of Keynes

The economics profession as a whole was at a loss to provide cogent explanations for the events of 1929–33. In the words of Robert Gordon (1998), “economics had lost its intellectual moorings, and it was time for a new diagnosis.” There were no convincing answers regarding why the earlier theories of macroeconomic behavior failed to explain the events that were occurring, and worse, there was no set of principles that established a guide for proper actions in the future. That changed in 1936 with the publication of Keynes’s book The General Theory of Employment, Interest and Money. Perhaps there has been no other person and no other book in economics about which so much has been written. Many consider the arrival of Keynesian thought to have been a “revolution,” although this too is hotly contested (see, for example, Laidler, 1999). The debates that The General Theory generated have been many and long-lasting. There is little that can be said here to add or subtract from the massive literature devoted to the ideas promoted by Keynes, whether they be viewed right or wrong. But the influence over academic thought and economic policy that was generated by The General Theory is not in doubt.

The time was right for a set of ideas that not only explained the Depression’s course of events, but also provided a prescription for remedies that would create better economic performance in the future. Keynes and The General Theory, at the time the events were unfolding, provided just such a package. When all is said and done, we can look back in hindsight and argue endlessly about what Keynes “really meant” or what the “true” contribution of Keynesianism has been to the world of economics. At the time the Depression happened, Keynes represented a new paradigm for young scholars to latch on to. The stage was set for the nurturing of macroeconomics for the remainder of the twentieth century.

This article is a modified version of the introduction to Randall Parker, editor, Reflections on the Great Depression, Edward Elgar Publishing, 2002.

Bibliography

Olney, Martha. “Avoiding Default: The Role of Credit in the Consumption Collapse of 1930.” Quarterly Journal of Economics 114, no. 1 (1999): 319-35.

Anderson, Barry L. and James L. Butkiewicz. “Money, Spending and the Great Depression.” Southern Economic Journal 47 (1980): 388-403.

Balke, Nathan S. and Robert J. Gordon. “Historical Data.” In The American Business Cycle: Continuity and Change, edited by Robert J. Gordon. Chicago: University of Chicago Press, 1986.

Bernanke, Ben S. “Nonmonetary Effects of the Financial Crisis in the Propagation of the Great Depression.” American Economic Review 73, no. 3 (1983): 257-76.

Bernanke, Ben S. and Harold James. “The Gold Standard, Deflation, and Financial Crisis in the Great Depression: An International Comparison.” In Financial Markets and Financial Crises, edited by R. Glenn Hubbard. Chicago: University of Chicago Press, 1991.

Brown, E. Cary. “Fiscal Policy in the Thirties: A Reappraisal.” American Economic Review 46, no. 5 (1956): 857-79.

Cecchetti, Stephen G. “Prices during the Great Depression: Was the Deflation of 1930-1932 Really Anticipated?” American Economic Review 82, no. 1 (1992): 141-56.

Cecchetti, Stephen G. “Understanding the Great Depression: Lessons for Current Policy.” In The Economics of the Great Depression, edited by Mark Wheeler. Kalamazoo, MI: W.E. Upjohn Institute for Employment Research, 1998.

Cecchetti, Stephen G. and Georgios Karras. “Sources of Output Fluctuations during the Interwar Period: Further Evidence on the Causes of the Great Depression.” Review of Economics and Statistics 76, no. 1 (1994): 80-102.

Choudri, Ehsan U. and Levis A. Kochin. “The Exchange Rate and the International Transmission of Business Cycle Disturbances: Some Evidence from the Great Depression.” Journal of Money, Credit, and Banking 12, no. 4 (1980): 565-74.

De Long, J. Bradford and Andrei Shleifer. “The Stock Market Bubble of 1929: Evidence from Closed-end Mutual Funds.” Journal of Economic History 51, no. 3 (1991): 675-700.

Eichengreen, Barry. “The Bank of France and the Sterilization of Gold, 1926–1932.” Explorations in Economic History 23, no. 1 (1986): 56-84.

Eichengreen, Barry. Golden Fetters: The Gold Standard and the Great Depression, 1919–1939. New York: Oxford University Press, 1992.

Eichengreen, Barry and Jeffrey Sachs. “Exchange Rates and Economic Recovery in the 1930s.” Journal of Economic History 45, no. 4 (1985): 925-46.

Evans, Martin and Paul Wachtel. “Were Price Changes during the Great Depression Anticipated? Evidence from Nominal Interest Rates.” Journal of Monetary Economics 32, no. 1 (1993): 3-34.

Fackler, James S. and Randall E. Parker. “Accounting for the Great Depression: A Historical Decomposition.” Journal of Macroeconomics 16 (1994): 193-220.

Fackler, James S. and Randall E. Parker. “Was Debt Deflation Operative during the Great Depression?” East Carolina University Working Paper, 2001.

Fisher, Irving. “The Debt–Deflation Theory of Great Depressions.” Econometrica 1, no. 4 (1933): 337-57.

Flacco, Paul R. and Randall E. Parker. “Income Uncertainty and the Onset of the Great Depression.” Economic Inquiry 30, no. 1 (1992): 154-71.

Friedman, Milton and Anna J. Schwartz. A Monetary History of the United States, 1867–1960. Princeton, NJ: Princeton University Press, 1963.

Gordon, Robert J. Macroeconomics, seventh edition. New York: Addison Wesley, 1998.

Hamilton, James D. “Monetary Factors in the Great Depression.” Journal of Monetary Economics 13 (1987): 1-25.

Hamilton, James D. “Role of the International Gold Standard in Propagating the Great Depression.” Contemporary Policy Issues 6, no. 2 (1988): 67-89.

Hamilton, James D. “Was the Deflation during the Great Depression Anticipated? Evidence from the Commodity Futures Market.” American Economic Review 82, no. 1 (1992): 157-78.

Hayek, Friedrich A. von. Monetary Theory and the Trade Cycle. New York: A. M. Kelley, 1967 (originally published in 1929).

Hayek, Friedrich A. von. Prices and Production. New York: A. M. Kelley, 1966 (originally published in 1931).

Hoover, Herbert. The Memoirs of Herbert Hoover: The Great Depression, 1929–1941. New York: Macmillan, 1952.

Keynes, John M. The General Theory of Employment, Interest, and Money. London: Macmillan, 1936.

Kindleberger, Charles P. The World in Depression, 1929–1939. Berkeley: University of California Press, 1973.

Laidler, David. Fabricating the Keynesian Revolution. Cambridge: Cambridge University Press, 1999.

McCallum, Bennett T. “Could a Monetary Base Rule Have Prevented the Great Depression?” Journal of Monetary Economics 26 (1990): 3-26.

Meltzer, Allan H. “Monetary and Other Explanations of the Start of the Great Depression.” Journal of Monetary Economics 2 (1976): 455-71.

Mishkin, Frederick S. “The Household Balance Sheet and the Great Depression.” Journal of Economic History 38, no. 4 (1978): 918-37.

Nelson, Daniel B. “Was the Deflation of 1929–1930 Anticipated? The Monetary Regime as Viewed by the Business Press.” Research in Economic History 13 (1991): 1-65.

Peppers, Larry. “Full Employment Surplus Analysis and Structural Change: The 1930s.” Explorations in Economic History 10 (1973): 197-210.

Persons, Charles E. “Credit Expansion, 1920 to 1929, and Its Lessons.” Quarterly Journal of Economics 45, no. 1 (1930): 94-130.

Polenberg, Richard. The Era of Franklin D. Roosevelt, 1933–1945: A Brief History with Documents. Boston: Bedford/St. Martin’s, 2000.

Raynold, Prosper, W. Douglas McMillin and Thomas R. Beard. “The Impact of Federal Government Expenditures in the 1930s.” Southern Economic Journal 58, no. 1 (1991): 15-28.

Romer, Christina D. “World War I and the Postwar Depression: A Reappraisal Based on Alternative Estimates of GNP.” Journal of Monetary Economics 22, no. 1 (1988): 91-115.

Romer, Christina D. “The Great Crash and the Onset of the Great Depression.” Quarterly Journal of Economics 105, no. 3 (1990): 597-624.

Romer, Christina D. “The Nation in Depression.” Journal of Economic Perspectives 7, no. 2 (1993): 19-39.

Snowdon, Brian and Howard R. Vane. Conversations with Leading Economists: Interpreting Modern Macroeconomics, Cheltenham, UK: Edward Elgar, 1999.

Soule, George H. Prosperity Decade, From War to Depression: 1917–1929. New York: Rinehart, 1947.

Temin, Peter. Did Monetary Forces Cause the Great Depression? New York: W.W. Norton, 1976.

Temin, Peter. Lessons from the Great Depression. Cambridge, MA: MIT Press, 1989.

White, Eugene N. “The Stock Market Boom and Crash of 1929 Revisited.” Journal of Economic Perspectives 4, no. 2 (1990): 67-83.

Wicker, Elmus. “Federal Reserve Monetary Policy, 1922–33: A Reinterpretation.” Journal of Political Economy 73, no. 4 (1965): 325-43.

1 Bankers’ acceptances are explained at http://www.rich.frb.org/pubs/instruments/ch10.html.

2 Liquidity is the ease of converting an asset into money.

3 The monetary base is measured as the sum of currency in the hands of the public plus reserves in the banking system. It is also called high-powered money since the monetary base is the quantity that gets multiplied into greater amounts of money supply as banks make loans and people spend and thereby create new bank deposits.

4 The money multiplier equals [D/R*(1 + D/C)]/(D/R + D/C + D/E), where D = deposits, R = reserves, C = currency, and E = excess reserves in the banking system.

5 The real interest rate adjusts the observed (nominal) interest rate for inflation or deflation. Ex post refers to the real interest rate after the actual change in prices has been observed; ex ante refers to the real interest rate that is expected at the time the lending occurs.

6 See note 3.

Citation: Parker, Randall. “An Overview of the Great Depression”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/an-overview-of-the-great-depression/

Fraternal Sickness Insurance

Herb Emery, University of Calgary

Introduction

During the nineteenth and early twentieth centuries, lost income due to illness was one of the greatest risks to the standard of living of a wage earner’s household (Horrell and Oxley 2000, Hoffman 2001). Prior to the introduction of state health insurance in England in 1911, workers in England and North America relied on “patchworks of protection” that included fraternal organizations, trade unions and workplace-based mutual benefit associations, commercial insurance contracts, and discretionary charity. Within the patchwork, the largest source of illness-related income protection was the friendly societies: voluntary organizations, such as fraternal orders and trade unions, that provided stipulated amounts of “relief” for members who were sick and unable to work. Conditions have changed since the 1920s. Health care for family members, not loss of the family head’s income, has become the chief cost of sickness. Government social programs and commercial group plans have become the principal sources of disability insurance and health insurance. Friendly societies have largely discontinued their sick benefits. Most of them, moreover, have had declining memberships in growing populations.

Overview

This article

  • Explains the types of fraternal orders that existed in the late nineteenth and early twentieth centuries and the types of insurance they offered.
  • Provides estimates of the share of the adult male population that participated in fraternal self-help organizations – over 40 percent in the UK and almost as high in the US – and describes the characteristics of these societies’ members.
  • Explains how friendly societies worked to provide sickness insurance at a reasonable price by overcoming the adverse selection and moral hazard problems, while facing problems of risk diversification.
  • Discusses the decline of fraternal sickness insurance after the turn of the twentieth century.
    • Concludes that fraternal lodges were financially sound despite claims that they were weakened by unsoundly pricing sickness insurance.
    • Examines the impact of competition from other insurers – including group insurance, government programs, labor unions, and company-sponsored sick-benefits societies.
    • Examines the impact of broader social and economic changes.
    • Concludes that fraternal sickness insurance was in greatest demand among young men and that its decline is tied mainly to the ageing of fraternal membership.
  • Closes by examining historians’ assessments of the importance and adequacy of fraternal sickness insurance.
  • Includes a lengthy bibliography of sources on fraternal sickness insurance.

Some Details and Definitions Pertaining to Fraternal Sickness Insurance

Fraternal orders were affiliated societies, or societies with branches. The branches were known by various names such as lodges, courts, tents, and hives. Fraternal orders emphasized benefits to their members rather than service to the community. They used secret passwords, rituals, and benefits to attract, bond, and hold members and distinguish themselves from members of rival orders.

Fraternal orders fell into three groups from an insurance perspective. The Masonic order and the Elks comprised the no-benefit group. Lodges in these orders often aided their members on a discretionary basis, that is, where members were determined to be in “need” of assistance. They did not provide stipulated (stated) insurance benefits (or relief).

A second group, the friendly societies, provided stipulated sick and funeral benefits to their members. The Independent Order of Odd Fellows, the Knights of Pythias, the Improved Order of Red Men, the Loyal Order of Moose, the Fraternal Order of Eagles, the Ancient Order of Foresters and the Foresters of America were the largest orders in this group.

A third group, the life-insurance orders, provided stipulated life-insurance, endowment, and annuity benefits to their members. The Maccabees, the Royal Arcanum, the Independent Order of Foresters, the Woodmen of the World, the Modern Woodmen of America, the Ancient Order of United Workmen, and the Catholic Order of Foresters were major orders in this group. In historical usage, the term “fraternal insurance” meant life insurance, but not sickness and funeral (burial) insurance.

The boundaries between the categories blur on close examination. Certain friendly societies, such as the Knights of Pythias and the Improved Order of Red Men, offered optional life-insurance at extra cost through their centrally-administered endowment branches. Certain insurance orders, such as the Independent Order of Foresters, offered optional sick and funeral benefits at extra cost through centrally-administered separate sickness and funeral funds. In other cases, the members of a society had privileged access to third-party insurance. The Canadian Odd Fellows Relief Association, for example, was entirely separate from the IOOF, but sold life policies exclusively to Odd Fellows.

Friendly Societies and Sickness Insurance

In the late eighteenth and early nineteenth centuries, friendly societies were often local lodges with no affiliations to other lodges. Over time, larger national and sometimes international orders that consisted of local lodges affiliated under jurisdictional grand lodges and national or international supreme bodies displaced the purely local lodge.1 The Ancient Order of Foresters was one of England’s larger affiliated Orders and it had subordinate Courts and jurisdictions in North America. The first Independent Order of Odd Fellows (IOOF) subordinate lodge in North America opened in Baltimore in 1819 under the jurisdiction of the British IOOF Manchester Unity. In the 1840s, the North American Odd Fellows seceded from the IOOF Manchester Unity and founded the IOOF Sovereign Grand Lodge (SGL), which had jurisdiction over state- and province-level Grand Lodge jurisdictions in North America.

Membership Estimates

For the United Kingdom near the peak of the self-help movement in the 1890s, estimates of participation in friendly societies and trade unions for insurance against the costs of sickness and/or burial range from 20 percent of the population (Horrell and Oxley 2000), to 41.2 percent of adult males (Johnson 1985), to one-half or more of adult males and as many as two-thirds of workingmen (Riley 1997). Estimates of participation in self-help organizations in North America are somewhat lower, but they suggest a similar importance of friendly societies for insuring households against the costs of sickness and burial. Beito (1999) argues that a conservative estimate of participation in fraternal self-help organizations in the United States would have one of three adult males as a member in 1920, “including a large segment of the working class.” Millis (1937) reports that 30 percent of Illinois wage-earners had market insurance for the disability risk in 1919, with fraternal organizations the principal source of that insurance.

Characteristics of Friendly Society Members

Studies of British friendly societies suggest that friendly society membership was the “badge of the skilled worker” and made no appeal whatever to the “grey, faceless, lower third” of the working class (Johnson 1985, Hopkins 1995, Riley 1997). The major friendly societies in North America found their market for insurance among white, Protestant males who came from upper-working-class and lower-middle-class backgrounds. Not surprisingly, the composition of local lodge memberships bore a resemblance to that of the local working population. Most Odd Fellows in Canada and the United States, however, were higher-paid workers, shop keepers, clerks, and farmers (Emery and Emery 1999). As Theodore Ross, the SGL’s grand secretary, noted in 1890, American Odd Fellows came from “the great middle, industrial classes almost exclusively.” Similarly, studies for Lynn, Massachusetts and Missouri found a heavy working-class representation among IOOF lodge memberships (Cumbler, 1979, p.46; Thelen, 1986, p. 165). In Missouri the social-class composition of Odd Fellows was similar to that of the Knights of Pythias and three life-insurance orders (the Ancient Order of United Workmen, the Maccabees, and the Modern Woodmen of the World). Beito’s (2000) work suggests that while the poor, non-whites and immigrants were not usually found among the larger fraternal orders’ memberships, they had their own mutual aid organizations.

Friendly Insurance: Modest Benefits at Low Cost

Friendly society sick benefits exemplified classic features of working-class insurance: a low cost and a small, fixed benefit equal to a fraction of the wage of an average worker. By contrast, commercial policies for middle-class clients offered insurance in variable amounts up to full-income replacement, at a cost beyond the reach of most workers. The affiliated orders established Constitutions which standardized rules and arrangements for sick benefit provision. For most of the friendly societies, local lodges or courts paid the sick claims of their members. Subject to requirements of higher bodies, the local lodge set the amounts of its weekly benefit, joining fees, and membership dues. The affiliation of lodges across locations also meant that members had portable sickness insurance. If a member moved from one location to another, he could transfer his membership from one lodge to another within the organization.

Claiming Benefits

To claim benefits in the IOOF, a member had to provide his lodge with notice of sickness or disability within a week of its commencement. On receiving notice of a brother’s illness, a member of the visiting committee was to visit the brother within twenty-four hours to render him aid and confirm his sickness. Subsequently, the lodge visitors reported weekly on the brother’s condition until he recovered.

Strengths of Friendly Society Insurance: Low Overhead, Effective Monitoring

The local lodge or court system of the affiliated friendly societies like the IOOF and the Ancient Order of Foresters had important strengths for the sickness-insurance market. First, it had low overhead costs. Lodge members, not paid agents, recruited clients. Nominally-paid or unpaid lodge officers did the administrative work. Second, the intrusive methods of monitoring within the lodge system helped friendly societies to respond effectively to two classic problems in sickness insurance: adverse selection and moral hazard.

Overcoming the Adverse Selection Problem

Adverse selection refers to the fact that when insurance is priced to reflect the average risk of a specified population, unhealthy persons (those with an above-average risk of sickness) have more incentive than healthy persons to purchase sickness insurance. Adverse selection in fraternal memberships was potentially a large problem because many orders had membership dues that were not scaled according to age, despite the reality that the risk of sickness increased with age. To keep claims and costs manageable, an insurer needs ways to screen out poor risks. To this end, many organizations scaled initiation fees by the age of an initiate to discourage applications from older males, who had above-average sickness risk. In other cases, fraternal lodges or courts scaled the membership dues by the age at which the member was initiated. In addition, lodge-approved physicians often examined the physical conditions and health histories of applicants for membership. Lodge committees investigated the “moral character” of applicants.

Overcoming the Moral Hazard Problem

Sickness insurers also faced the problem of moral hazard (malingering): an insured person has an incentive to claim to be disabled when he is not and an incentive not to take due care in avoiding injury or illness. The moral hazard problem was small for accident insurance as disability from accident is definite as to time and cause, and external symptoms are usually self-evident (Osborn, 1958). Disability from sickness, by contrast, is subjective and variable in definition. Friendly societies defined sickness, or disability, as the inability to work at one’s usual occupation. Relatively minor complaints disabled some individuals, while serious complaints failed to incapacitate others. The very possession of sickness insurance may have increased a worker’s willingness to consider himself disabled. The friendly society benefit contract dealt with this problem in several ways. First, with waiting periods of one to two weeks and much less than full earnings replacement, self-help benefits required the disabled member to co-insure the loss, which reduced the incentive to make a claim. In many fraternal orders, members receiving benefits could not drink or gamble and in some cases were not allowed to be away from their residence after dark. The activities of the lodge visiting committee helped to ward off false claims. In addition, fraternal ideology emphasized a member’s moral responsibility for not making a false claim and for reporting on brothers who were falsely claiming benefits.
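
A stylized numerical example shows how much of a loss the member co-insured; the wage, benefit, and waiting period used here are hypothetical:

\[
\text{benefit} = (3 - 1) \times \$6 = \$12, \qquad \text{lost wages} = 3 \times \$12 = \$36, \qquad \text{member's share of the loss} = 1 - \tfrac{12}{36} = \tfrac{2}{3} ,
\]

for a three-week illness under a one-week waiting period, a $12 weekly wage, and a $6 weekly benefit. With two-thirds of the loss uninsured, malingering paid poorly.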

Problem with Lack of Risk Diversification

On the negative side, the fraternal-lodge system made little provision for risk diversification. In the IOOF, the Knights of Pythias, and the Ancient Order of Foresters, each subordinate lodge (or court) was responsible for the sick claims of its members. Thus in principle, a high local rate of sick claims in a given year could shock a lodge’s financial condition. Certain commercial practices might have reduced the problem. For example, a grand lodge could have pooled the risks from all lodges in a central fund. Alternatively, it could have initiated a scheme of reinsurance, whereby each lodge assumed a portion of the claims in other lodges. Yet any centralization stood to weaken a friendly society’s management of adverse selection and moral hazard. The behaviour of lodge members was observed to be a function of the structure of the benefit system. In 1908, for example, when the IOOF, Manchester Unity, in New South Wales, Australia, established central funds for sick and funeral benefits, the effect was to turn the lodges into “mere collection agencies.” Participation in lodge affairs fell off, and members developed a more selfish attitude to claims. “When the lodges administered sick pay,” Green and Cromwell observed, “the members knew who was paying — it was the members themselves. But once ‘head office’ took over, the illusion that someone else was paying made its entry” (Green and Cromwell, 1984, pp. 59-60).

Commercial Insurers Couldn’t Match Friendly Societies in the Working-Class Sickness Insurance Market

On balance friendly societies provided an efficient delivery of working-class sickness insurance that commercial insurers could not match. Without the intrusive screening methods and low overhead of the decentralized lodge system, commercial insurers could not as easily solve the problems of moral hazard and adverse selection. “The assurance of a stipulated sum during sickness,” the president of the Prudential Insurance Company conceded in 1909, “can only safely be transacted … by fraternal organizations having a perfect knowledge of and complete supervision over the individual members.”2

The Decline of Fraternal Sickness Insurance

By the 1890s, friendly societies in North America were withdrawing from the sickness insurance field. The IOOF imposed limits on the length of time that full sick benefits had to be paid, and one- or two-week waiting periods before the payment of claims began. In 1894, the Knights of Pythias eliminated their constitutional requirement that all subordinate lodges pay stated sick benefits. By the 1920s, the IOOF followed the Knights of Pythias and eliminated its compulsory requirement for the payment of stipulated sick benefits. In England, friendly societies had opposed government pension and insurance schemes in the 1890s, but they did not stand in the way of the introduction of Old Age Pensions in 1908 and compulsory state health insurance in 1911. Thus, the decline of fraternal sickness insurance pre-dates the Depression of the 1930s and for many organizations dates from at least the 1890s.

Unsound Pricing Practices?

Why did sickness insurance provided by friendly societies decline? Perhaps friendly society sickness insurance was a casualty of unsound pricing practices in the presence of ageing memberships. To illustrate this argument, consider the IOOF benefit contract. On the one hand, the incidence and duration of sickness claims increased with a member’s age. On the other hand, most IOOF lodges set quarterly dues at a flat rate, rather than by the member’s age or the member’s age at joining. As the IOOF lodge benefit arrangement was essentially insurance provided on a pay-as-you-go basis (current revenues are used to meet current expenditures), this posed little problem during a lodge’s early years, when its members were young and had low sick-claim rates. Over time, however, the members aged and their claim rates showed a rising trend. When revenues from level dues became insufficient to cover claims, the argument goes, the lodge’s insurance provision collapsed. On this view, fraternal-insurance provision was essentially a failed, experimental phase in the development of sickness and health insurance.
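
The mechanics of this argument can be sketched in simple accounting terms; the symbols below are illustrative only and are not taken from IOOF records. Under pay-as-you-go finance, a lodge’s annual surplus is

\[ S_t = N_t\,d - N_t\,c(\bar{a}_t), \]

where \(N_t\) is the membership, \(d\) the flat annual dues per member, and \(c(\bar{a}_t)\) the average annual cost of sick claims per member, which rises with the membership’s average age \(\bar{a}_t\). While a lodge is young, \(d > c(\bar{a}_t)\) and it runs surpluses; as the membership ages, \(c(\bar{a}_t)\) eventually exceeds \(d\), and on this argument the benefit arrangement fails unless dues are raised or accumulated assets and investment income fill the gap.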

Lodges Were Financially Sound Despite Non-Actuarial Pricing

By contrast with the above scenario, evidence for British Columbia showed that the IOOF lodges were financially sound, despite their non-actuarial pricing practices (Emery 1996). Typically a lodge accumulated assets during its first years of operation, when its members were young and had below-average sickness risk. In later years, as its membership aged and the cost of claims exceeded income from members’ dues and fees, income from investments made up the difference. Consequently none of British Columbia’s twenty lodge closures before 1929 resulted from the bankruptcy of lodge assets. Similarly none of the British Columbia lodges had a significant probability of ruin from high claims in a particular year.

Non-payment of dues also helped lodge finances. A member became ineligible for benefits if he fell behind in his dues. If he fell far enough behind on his dues, his lodge could suspend him from membership or declare him “ceased” (dropped from membership). A member’s unpaid dues continued to accumulate after suspension. Thus a suspended member had to pay the full, accumulated amount (or a maximum sum, if his grand lodge set one), to get reinstated. Lodges did not pay sick claims to members who were in arrears.

Turnover of Membership Explains How Lodges Remained Financially Sound

When members did not pay the dues owing for reinstatement, their exit from membership relieved lodge financial pressures. Most men joined fraternal lodges when they were under age 35, and members who quit typically did so before age 40.3 Thus, a substantial proportion of initiates did not remain in the membership long enough for their rising risk of illness after age 40 to pose a problem for lodge finances. On average, they belonged when they were most likely net payers and quit before they became net recipients. This substantial turnover in fraternal memberships helps to explain how lodges remained going concerns even though official actuarial valuations of lodge finances and reserves inevitably showed actuarial deficits at the prevailing levels of dues. These valuations asked whether accumulated reserves, plus the dues revenues expected over the remaining lifetimes of the current membership, would be adequate to meet the benefits expected over those same lifetimes. The assumption that all current members would remain in the membership until death always produced valuations showing that the sick benefits were inadequately, if not hazardously, priced. The fact that many members were not lifetime members meant that the pricing was not so hazardous.
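
The valuation standard behind those deficit findings can be stated roughly as follows; the notation is a simplification of mine, not the orders’ own actuarial formulae. A valuation deemed a lodge adequately funded if

\[ R_0 + \sum_{t} \frac{E[D_t]}{(1+r)^t} \;\geq\; \sum_{t} \frac{E[B_t]}{(1+r)^t}, \]

where \(R_0\) is the accumulated reserve, \(E[D_t]\) and \(E[B_t]\) are the dues and benefits expected from the current membership in future year \(t\), and \(r\) is the assumed interest rate. Because the valuations projected \(B_t\) over members’ full remaining lifetimes, when claim rates are highest, while many members in fact quit before age 40, the realized benefit stream fell well short of the projected one, and lodges that appeared actuarially deficient on paper could remain going concerns in practice.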

Competition from Other Insurers

If poor finances cannot explain the decline of friendly society sick benefits, then perhaps increasing competition from government and commercial insurance arrangements can. Trends in competition, however, do not provide strong support for this explanation either. Competition for friendly societies came from commercial-group plans, government workmen’s compensation programs, trade unions and industrial unions, company-sponsored mutual benefit societies, and other fraternal orders that provided life insurance or non-stipulated (discretionary) relief.

Group Insurance

Group insurance used the employer’s mass-purchasing power to provide low-cost insurance without a medical examination (Ilse, 1953, chapter 1). Often the employer paid the premium. Otherwise employees paid part of the cost through payroll deductions, a practice that kept the insurer’s overhead costs low. The insurance company made the group-plan contract with the employer, who then issued certificates to individuals in the plan. Group plans compared favourably with IOOF benefits in terms of cost and the amount of the benefit. They also offered a viable commercial solution to the problems of adverse selection and moral hazard.

During the 1920s, however, group plans were available to few workers. In the United States, they missed men who were self-employed or employed in firms with fewer than fifty workers. The employee’s coverage ceased if he left the company. It also stopped if either the insurer or the employer did not renew the contract at the end of its standard one-year term. When coverage ceased, the employee might find himself too old or unhealthy to obtain insurance elsewhere. More importantly, the challenge of commercial-group insurance was just beginning during the 1920s. By 1929 the number of Americans and Canadians in group plans was still smaller than the membership of the Odd Fellows alone.

Government Programs

Government programs such as compulsory sickness insurance dated from 1883 in Germany and 1911 in Britain. Between 1914 and 1920, eight state commissions, two national conferences, and several state legislatures attended to the issue in the United States (see Armstrong, 1932, Beito 2000, Hoffman 2001). Despite these initiatives, no American or Canadian government — national, state, or provincial — adopted compulsory sickness insurance until the 1940s (Osborn, 1958, chapter 4; Ilse, 1953, chapter 8).

Workmen’s compensation was another matter. During the years 1911-25, forty-two of the forty-eight American states and six of Canada’s nine provinces passed workmen’s compensation laws (Weis, 1935; Leacy, 1983). Nevertheless, half of all state laws in 1917, and a fifth of them in 1932, applied only to persons in hazardous occupations. None of the various state laws covered employees of interstate railways. In twenty-four states, the law exempted small businesses; in five it exempted public employees. In some states the law was so hedged with restrictions that the scale of benefits was uncertain. Although comprehensive by American standards, Ontario’s law omitted persons in farming, wholesale and retail establishments, and domestic service (Guest, 1980).

Overall, government programs provided negligible competition for friendly society sick benefits during the 1920s. No state or province provided for compulsory sickness insurance. Workmen’s compensation laws were commonplace, but missed important parts of the workforce. More importantly, industrial accidents accounted for just ten percent of all disability (Armstrong, 1932, pp. 284ff; Osborn, 1958, chapter 1).

Labor Unions

Labor unions traditionally used benefits to attract members and hold the loyalty of existing members. During the 1890s miners’ unions in the American west and British Columbia reportedly devoted more time to mutual aid than to collective bargaining (Derickson, 1988, chapter 3). By 1907 nineteen unions, accounting for 25 per cent of organized labor in the United States, offered sick benefits (Rubinow, 1913, chapter 18). During the 1920s, however, the competition that unions posed for friendly societies followed a declining trend. After years of steady growth, for example, the membership of American trade unions dropped by 32 per cent between 1920 and 1929.4 Similarly, the membership of Canadian trade unions fell by 23 per cent between 1919 and 1926. In an unprecedented development in 1926, the street railway workers’ union in Newburgh, New York, obtained commercial group-sickness coverage through a collective bargaining agreement with the employer (Ilse, 1953, ch. 13). Although rare during the 1920s, this marked the start of collective bargaining for sick benefits rather than direct union provision.

Company-sponsored Sick-Benefit Societies

Company-sponsored sick-benefit societies, often known as Mutual Benefit Associations, originated in a tradition of corporate paternalism during the 1870s (Brandes, 1976; Brody, 1980; Zahavi, 1988; McCallum, 1990). The United States had more than 500 such societies by 1908. Typically these societies obtained most or all of their funds from employee dues, not company funds, ostensibly to encourage the workers to be self-reliant.

Participation was voluntary in 85 per cent of 461 American societies surveyed on the eve of the First World War. Eligibility for membership commonly required a waiting period (a minimum period of permanent employment). A major disadvantage, compared to fraternal order sickness benefits, was that coverage ceased when the employee left the firm. In the amount and cost of the benefit (benefits of $5 to $6 per week for up to thirteen weeks for annual dues of $2.50 to $6 per year) the societies were similar to fraternal lodges.

The institutions were part of a larger program of corporate welfarism that had developed during the First World War in conditions of labor scarcity, labor unrest, rising union membership, and government management of capital-labor relations. At the war’s end, however, the economy slumped, the supply of labor became abundant, unions became cooperative and were losing members, and wartime government-economic management ended. In the new circumstances, the pressure on businessmen to promote welfare programs abated, and the membership of company-sponsored sick-benefit societies entered a flat trend.5 By 1929 the societies were still a minority phenomenon. They existed in 30 percent of large firms (250 or more employees), but in just 4.5 percent of small firms, which accounted for half the industrial work force (Jacoby, 1985, ch.6).

Competition from Insurance Orders

Friendly societies (orders with sick and funeral benefits) also competed with the insurance orders (orders with life and/or annuity benefits in small amounts) that offered an optional sick benefit. The Maccabees, Woodmen of the World, Independent Order of Foresters, and the Royal Arcanum were some main rivals in the insurance-order group for the friendly societies.

The insurance-order sick benefit had several features of commercial insurance and compared poorly with the friendly-society benefit. In many cases, these orders paid sick claims from a centrally-administered “sick and funeral fund,” not local lodge funds. They financed sick claims by requiring monthly premiums, paid in advance, not quarterly dues. Their central authority could cancel the member’s sickness insurance by giving him notice; in the IOOF, by contrast, the member retained his coverage as long as his dues were paid up. A member could draw benefits for a maximum of twenty-six weeks in the Maccabees and a maximum of twelve weeks in the IOF. During the 1920s, competition from fraternal life insurance orders showed a flat or declining trend. In terms of membership size, the largest friendly society, the IOOF, gained ground on all competitors in the insurance-order group.

Broader Economic and Social Trends in the 1920s

Another popular explanation for the decline of friendly society sick benefits is one of “changing times,” in which friendly societies provided an outdated social arrangement. On this view, fraternal orders were multiple-function organizations that offered their members a variety of social and indirect economic benefits, as well as insurance. Thus in principle, the declining trend for IOOF sickness insurance could have been a by-product of social changes during the 1920s that were undermining the popularity of fraternal lodges (Dumenil, 1984; Brody, 1980; Carnes, 1989; Charles, 1993; Clawson, 1989; Rotundo, 1989; Burley, 1994; Tucker, 1990). For example, the fraternal-lodge meeting faced competition from new forms of entertainment (radio, cinema, automobile travel). The development of installment buying and consumerism undermined fraternal culture and working-class institutional life. Trends in sex relations sapped the appeal of all-male social activities and the fraternal ritual of lodge meetings. The rising popularity of luncheon-club organizations (Kiwanis, Lions, Kinsmen) expressed a popular shift to a community-service orientation, as opposed to the fraternal tradition of services to members. The luncheon clubs also exemplified a popular shift to class-specific organizations, at the expense of fraternal orders, which had a cross-class appeal. Finally, with the waning popularity of lodge meetings, lodge nights became less useful occasions for making business contacts.

Rising Health-Care Costs

The decade also gave rise to two important insurance-related developments. One, described above, was the diffusion of commercial-group plans for income-replacement insurance. The other was the emergence of health-care services as the principal cost of sickness (Starr, 1982). In 1914 lost wages had been between two and four times the medical costs of a worker’s sickness, or about equal if one included the worker’s family. During the 1920s, however, medical costs soared, by 20 per cent for families with less than $1,200 income and 85 per cent for families with incomes between $1,200 and $2,500. The medical costs were highly variable as well as rising. Effectively, a serious hospitalized illness could consume a third to a half of a family’s annual income.

External Changes and Competition Don’t Explain the Decline of Fraternal Sickness Insurance Well

Changes during the 1920s, however, provide a poor explanation for the declining trend for the friendly-society sick benefit in North America. First, the timing was wrong. On the one hand, the declining trend dated from the 1890s, not the 1920s. On the other hand, key developments during the decade were at an early stage. By 1929 commercial-group insurance was established, but not widespread. Similarly, health insurance scarcely existed, despite the rising trend in health-care costs. As Starr explains, health insurance presented an extreme problem of moral hazard that insurers did not solve until the 1930s.6 Second, we lack a theory to explain why the waning of interest in lodge meetings would have caused a declining trend for the sick benefit. Finally, the “changing times” explanation, on its own, incorrectly portrays the sick benefit as a static product that became less relevant in an exogenously changing society and economy.

Young Men Value Sickness Insurance

If external pressure did not cause the decline of the friendly society sick benefits, then why did friendly society sickness insurance decline? Emery and Emery (1999) argue that the sick benefit was primarily in demand amongst men who lacked alternatives to market insurance. For example, at the start of their working lives, male breadwinners had no older children to earn secondary incomes (family insurance). They also lacked savings to cover the disability risk (self-insurance). Thus men joined the Odd Fellows when they were “young”. They then quit after a few years as family and self-insurance alternatives to market insurance opened up to them. Further, as the friendly society sick benefit was a form of precautionary saving, demand for it would have declined as a household accumulated wealth.

Aging Membership and the Declining Demand for Sickness Insurance

Over time, fraternal memberships were ageing as rates of initiation slowed while suspensions from membership continued at steady rates. Because initiates and suspended members came disproportionately from the lower age groups, slower membership growth in the friendly societies meant ageing memberships. Given the life-cycle pattern of demand for the sick benefit, ageing fraternal memberships became less attached to it. As the memberships aged, their collective preferences changed: older members had priorities and objectives other than sickness insurance.

Friendly Societies and Compulsory State Insurance

Despite the similarity of the organizations and the high rates of participation in them in the late nineteenth and early twentieth centuries, the role of voluntary self-help organizations like the friendly societies diverged on either side of the Atlantic. In England, the “administrative machinery” of friendly societies was the vehicle for introducing and delivering compulsory government sickness/health insurance under the Approved Societies system that prevailed from 1911 to 1944, at which time the government centralized the provision of health insurance (Gosden 1973). In North America the friendly society sickness insurance arrangement declined from at least the 1890s despite growing memberships in the organizations up to the 1920s. While friendly society sickness insurance declined, government showed little activity in the health/sickness insurance field. Only through the 1930s did commercial and non-profit group health and hospital insurance plans and government social programs rise to primacy in the sickness and health insurance field.7

Critics of Friendly Societies’ Voluntary Self-Help

Critics of voluntary self-help arrangements for insuring the costs of sickness argue that voluntary self-help was a failed system and that its obvious shortcomings and financial difficulties were the impetus for government involvement in social insurance arrangements (Smiles 1876, Moffrey 1910, Peebles 1936, Gosden 1961, Gilbert 1965, Hopkins 1995, Horrell and Oxley 2000, Hoffman 2001). Horrell and Oxley (2000) argue that friendly society benefits were too paltry to offer true relief. Hopkins (1995) argues that for those workers who could afford it, self-help through friendly society membership worked well, but too much of the working population remained outside the safety net due to low incomes. At best, the critics applaud individuals for taking the initiative to protect themselves and credit friendly societies with pioneering the preparation of actuarial data on morbidity and sickness duration that aided commercial insurers in insuring the sickness risk in a financially sound way.

Positive Assessments of Friendly Societies’ Roles

In contrast, Beito (2000) presents a positive assessment of fraternal mutual aid in the United States, and hence of working-class self-help, for dealing with the economic consequences of poor health. Beito argues that fraternal societies in America extended social welfare services, such as insurance, to the poor (notably immigrants and blacks) and working-class Americans who otherwise would not have had access to such coverage. Far from being an inadequate form of safety net, fraternal mutual aid sustained needy Americans from cradle to grave and, over time, extended the range of benefits provided to include hospitals and homes for the aged as needs in society arose. Beito suggests that changing cultural attitudes and the expanding scale and scope of a paternalistic welfare state undermined an efficient and viable fraternal social insurance arrangement.

Government’s Role in “Crowding Out” Self-Help

Similarly, Green and Cromwell (1984) argue that state paternalism crowded out efficient fraternal methods of social insurance in Australia. Hopkins (1995) suggests that while friendly societies were effective for aiding a sizable portion of the working class, working class self-help “had been weighed in the balance and found wanting” since it failed to provide income protection for the working classes as a whole. Hopkins concludes that compulsory state aid inevitably had to replace voluntary self-help to “spread the net over the abyss” to protect the poorest of the working class. Similar to Beito’s view, Hopkins suggests that equity considerations were the reason for undermining otherwise efficient voluntary self-help arrangements. Beveridge (1948) expresses dismay over the crowding out of friendly societies as social insurers in England following the centralization of compulsory government health insurance arrangements in 1944.

References:

Applebaum, L. “The Development of Voluntary Health Insurance in the United States.” Journal of Insurance 28 (1961): 25-33.

Armstrong, Barbara N. Insuring the Essentials. New York: MacMillan, 1932.

Beito, David. From Mutual Aid to the Welfare State: Fraternal Societies and Social Services, 1890-1967. Chapel Hill: University of North Carolina Press, 2000.

Berkowitz, Edward. “How to Think About the Welfare State.” Labor History 32 (1991): 489-502.

Berkowitz, Edward and Monroe Berkowitz, “Challenges to Workers’ Compensation: An Historical Analysis.” In Workers’ Compensation Benefits: Adequacy, Equity, and Efficiency, edited by John D. Worrall and David Appel. Ithaca, NY: ILR Press, 1985.

Berkowitz, Edward and Kim McQuaid. “Businessman and Bureaucrat: the Evolution of the American Welfare System, 1900-1940.” Journal of Economic History 38 (1978): 120-41.

Berkowitz, Edward and Kim McQuaid. Creating the Welfare State: The Political Economy of Twentieth Century Reform. New York: Praeger, 1988.

Bernstein, Irving. The Lean Years: A History of the American Worker, 1920-1933. Boston: Houghton Mifflin, 1960.

Bradbury, Bettina. Working Families, Age, Gender, and Daily Survival in Industrializing Montreal. Toronto: McClelland and Stewart, 1993.

Brandes, Stuart D. American Welfare Capitalism 1880-1940. Chicago: University of Chicago Press, 1976.

Brody, David. Workers in Industrial America: Essays on the Twentieth Century Struggle. New York: Oxford University Press, 1980.

Brumberg, Joan Jacobs, and Faye E. Dudden. “Masculinity and Mumbo Jumbo: Nineteenth-Century Fraternalism Revisited.” Reviews in American History 18 (1990): 363-70 [review of Carnes].

Burley, David G. A Particular Condition in Life, Self-Employment and Social Mobility in Mid-Victorian Brantford, Ontario. McGill-Queen’s University Press, 1994.

Burrows, V.A. “On Friendly Societies since the Advent of National Health Insurance.” Journal of the Institute of Actuaries 63 (1932): 307-401

Carnes, Mark C. Secret Ritual and Manhood in Victorian America. New Haven: Yale University Press, 1989.

Charles, Jeffrey A. Service Clubs in American Society, Rotary, Kiwanis, and Lions. Urbana: University of Illinois Press, 1993.

Clawson, Mary Ann. Constructing Brotherhood: Class, Gender, and Fraternalism. Princeton: Princeton University Press, 1989.

Cordery, Simon. “Fraternal Orders in the United States: A Quest for Protection and Identity.” In Social Security Mutualism: The Comparative history of Mutual Benefit Societies, edited by Marcel Van der Linden, 83-110. Bern: Peter Lang, 1996.

Cordery, Simon. “Friendly Societies and the Discourse of Respectability in Britain, 1825-1875.” Journal of British Studies 34, no. 1 (1995): 35-58

Costa, Dora. “The Political Economy of State Provided Health Insurance in the Progressive Era: Evidence from California.” National Bureau of Economic Research Working Paper, no. 5328, 1995

Cumbler, John T. Working-Class Community in Industrial America: Work, Leisure, and Struggle in Two Industrial Cities, 1880-1930. Westport: Greenwood Press, 1979.

Davis, K. “National Health Insurance: A Proposal.” American Economic Review 79, no. 2 (1989): 349-352

Derickson, Alan. Workers’ Health, Workers’ Democracy: The Western Miners’ Struggle, 1891-1925. Ithaca: Cornell University Press, 1988.

Dumenil, Lynn. Freemasonry and American Culture 1880-1930. Princeton: Princeton University Press, 1984.

Ehrlich, Isaac and Gary S. Becker. “Market Insurance, Self-Insurance, and Self-Protection.” Journal of Political Economy 80, no. 4 (1972): 623-648.

Emery, J.C. Herbert. The Rise and Fall of Fraternal Methods of Social Insurance: A Case Study of the Independent Order of Oddfellows of British Columbia Sickness Insurance, 1874-1951. Ph.D. Dissertation: University of British Columbia, 1993.

Emery, J.C. Herbert. “Risky Business? Nonactuarial Pricing Practices and the Financial Viability of Fraternal Sickness Insurers.” Explorations in Economic History 33 (1996): 195-226.

Emery, George and J.C. Herbert Emery. A Young Man’s Benefit: The Independent Order of Odd Fellows and Sickness Insurance in the United States and Canada, 1860-1929. Montreal: McGill-Queen’s University Press, 1999.

Fischer, Stanley. “A Life Cycle Model of Life Insurance Purchases.” International Economic Review 14, no. 1 (1973): 132-152.

Follmann, J.F. “The Growth of Group Health Insurance.” Journal of Risk and Insurance 32 (1965): 105-112.

Galanter, Marc. Cults, Faith, Healing and Coercion. New York: Oxford University Press, 1989.

Gilbert, B.B. “The Decay of Nineteenth-Century Provident Institutions and the Coming of Old Age Pensions in Great Britain.” Economic History Review, 2nd Series 17 (1965): 551-563.

Gilbert, B.B. The Evolution of National Health Insurance in Great Britain: The Origins of the Welfare State. London: Michael Joseph, 1966.

Gist, Noel P. “Secret Societies: A Cultural Study of Fraternalism in the United States.” University of Missouri Studies XV, no. 4 (1940): 1-176.

Gosden, P. The Friendly Societies in England 1815 to 1875. Manchester: Manchester University Press, 1961.

Gosden, P. Self-Help, Voluntary Associations in the 19th Century. London: B.T. Batsford, 1973.

Gourinchas, Pierre-Olivier and Jonathan A. Parker. “The Empirical Importance of Precautionary Savings.” National Bureau of Economic Research Working Paper no. 8107, 2001.

Gratton, Brian. “The Poverty of Impoverishment Theory: The Economic Well-Being of the Elderly, 1890-1950.” Journal of Economic History 56, no. 1 (1996): 39-61.

Green, D.G. and L.G. Cromwell. Mutual Aid or Welfare State: Australia’s Friendly Societies. Boston: Allen & Unwin, 1984.

Greenberg, Brian. “Worker and Community: Fraternal Orders in Albany, New York, 1845-1885.” Maryland Historian 8 (1977): 38-53.

Guest, D. The Emergence of Social Security in Canada. Vancouver: University of British Columbia Press, 1980.

Haines, Michael R. “Industrial Work and the Family Life Cycle, 1889-1890.” Research in Economic History 4 (1979): 289-356.

Hirschman, Albert O. Exit, Voice, and Loyalty, Responses to Decline in Firms, Organizations, and States. Cambridge: Harvard University Press, 1970.

History of Odd-Fellowship in Canada under the Old Regime. Brantford: Grand Lodge of Ontario, 1879.

History of the Maccabees, Ancient and Modern, 1881 to 1896. Port Huron, 1896.

Hopkins, Eric. Working-Class Self-Help in Nineteenth-Century England: Responses to Industrialization. New York: St. Martin’s Press, 1995.

Hoffman, Beatrix. The Wages of Sickness: The Politics of Health Insurance in Progressive America. Chapel Hill: University of North Carolina Press, 2001.

Horrell, Sara and Deborah Oxley. “Work and Prudence: Household Responses to Income Variation in Nineteenth Century Britain.” European Review of Economic History 4, no. 1 (2000): 27-58.

Ilse, Louise Wolters. Group Insurance and Employee Retirement Plans. New York: Prentice-Hall, 1953.

Jacoby, Sanford M. Employing Bureaucracy, Managers, Unions, and the Transformation of Work in American Industry, 1900-1945. New York: Columbia University Press, 1985.

James, Marquis. The Metropolitan Life, A Study in Business Growth. New York: Viking Press, 1947.

Lubove, Roy. The Struggle for Social Security: 1900-1935. Cambridge: Harvard University Press, 1968.

Lynd, Robert S. and Helen Merrell. Middletown: A Study in Contemporary American Culture. New York: Harcourt, Brace & World, 1929.

MacDonald, Fergus. The Catholic Church and Secret Societies in the United States. New York: U.S. Catholic Historical Society, 1946.

Markey, Raymond. “The History of Mutual Benefit Societies in Australia, 1830-1991.” In Social Security Mutualism: The Comparative History of Mutual Benefit Societies, edited by Marcel Van der Linden, 147-76. Bern: Peter Lang, 1996.

McCallum, Margaret E. “Corporate Welfarism in Canada, 1919-39.” Canadian Historical Review LXXI, no. 1 (1990): 49-79.

Millis, Harry A. Sickness Insurance: A Study of the Sickness Problem and Health Insurance. Chicago: University of Chicago Press, 1937.

Moffrey, R.W. A Century of Odd Fellowship. Manchester: IOOFMU G.M. and Board of Directors, 1910.

Osborn, Grant M. Compulsory Temporary Disability Insurance in the United States. Homewood, IL: Richard D. Irwin, 1958.

Palmer, Bryan D. “Mutuality and the Masking/Making of Difference: The Making of Mutual Benefit Societies in Canada, 1850-1950.” In Social Security Mutualism: The Comparative History of Mutual Benefit Societies, edited by Marcel Van der Linden, 111-46. Bern: Peter Lang, 1996.

Peebles, A. “The State and Medicine.” Canadian Journal of Economics and Political Studies 2 (1936): 464-480.

Preuss, Arthur. Dictionary of Secret and Other Societies. St. Louis: B. Herder Co., 1924.

Quadagno, Jill. “Theories of the Welfare State.” Annual Reviews of Sociology 13 (1987): 109-28

Quadagno, Jill. The Transformation of Old Age Security: Class and Politics in the American Welfare State. Chicago: University of Chicago Press, 1988.

Riley, James C. “Ill Health during the English Mortality Decline: The Friendly Societies’ Experience.” Bulletin of the History of Medicine 61 (1987): 563-88.

Rosenzweig, Roy. “Boston Masons, 1900-1935: The Lower Middle Class in a Divided Society.” Journal of Voluntary Action Research 6 (1977): 119-26.

Ross, Theo. A. Odd Fellowship, Its History and Manual. New York: M.W. Hazen, 1890.

Rotundo, E. Anthony. “Romantic Friendship: Male Intimacy and Middle-Class Youth in the Northern United States, 1800-1900.” Journal of Social History 23 no. 1 (1989): 1-25.

Rubinow, Isaac Max. Social Insurance: With Special References to American Conditions. Henry Holt & Co., 1913.

Schmidt, A.J. Fraternal Organizations. Westport: Greenwood Press, 1980.

Senior, Hereward. Orangeism: The Canadian Phase. Toronto: McGraw-Hill Ryerson, 1972.

Stalson, J. Owen. Marketing Life Insurance: Its History in America. Cambridge: Harvard University Press, 1942; Homewood: R.D. Irwin, 1969.

Smiles, Samuel. Thrift. Toronto: Belford Brothers, 1876.

Starr, Paul. The Social Transformation of American Medicine: The Rise of a Sovereign Profession and the Making of a Vast Industry. New York: Basic Books, 1982.

Thelen, David. Paths of Resistance: Tradition and Dignity in Industrializing Missouri. New York: Oxford University Press, 1986.

Tishler, Hace Sorel. Self-Reliance and Social Security, 1870-1917. Port Washington, N.Y.: Kennikat, 1971.

Tucker, Eric. Administering Danger in the Workplace: The Law and Politics of Occupational Health and Safety Regulation in Ontario, 1850-1914. Toronto: University of Toronto Press, 1990.

Van der Linden, Marcel. “Introduction.” In Social Security Mutualism: The Comparative History of Mutual Benefit Societies, edited by Marcel Van der Linden, 11-38. Bern: Peter Lang, 1996.

Vondracek, Felix John. “The Rise of Fraternal Organizations in the United States, 1868-1900.” Social Science 47 (1972): 26-33.

Weis, Harry. “Employers’ Liability and Workmen’s Compensation.” In History of Labor in the United States, 1896-1932, Vol. III, edited by Don D. Lescohier and Elizabeth Brandeis. New York: Macmillan, 1935.

Footnotes

1 See Gosden (1961), Hopkins (1995) and Riley (1997) for excellent discussions of the evolution of friendly societies in England.

2 Cited in Starr (1982, p. 242). British industrial-life companies did not offer sickness insurance until 1911, when the government allowed them to qualify as approved societies under the National Insurance Act. In acting as approved societies, their motive was not to write sickness insurance, but rather to protect their interest in burial insurance. See Beveridge, 1948, p. 81; Gilbert, 1966, p. 323.

3 Emery and Emery (1999). Riley (1997) shows that British men in their twenties were the majority of initiates and members who exited did so within “a few years of joining”.

4 Data for unions are from Wolman, 1936, pp. 16, 239 and Leacy, 1983, series E175. By 1931 just 10 per cent of non-agricultural workers in the United States were unionized, down from 19 per cent in 1919 (Bernstein, 1960, chapter 2). Unions affiliated with the American Federation of Labor accounted for approximately 80 per cent of the total membership of American labor unions (Wolman, p.7). The reported AFL membership statistics are high. Unions paid per capita tax on more than their actual paid-up memberships for prestige and to maintain their voting strength at AFL meetings. In 1929, the United Mine Workers, an extreme case, reported 400,000 members, but probably had just 262,000 members, including 169,000 paid-up members and 93,000 “exonerated” members (kept on the books because they were unemployed or on strike).

5 Brandes (1976, chapter 10) places their membership at 749,000 in 1916 and 825,000 in 1931.

6 The probable costs of health-care claims were hard to predict (Starr, 1982, pp. 290-1). As with income-replacement insurance, sickness was not a well-defined condition. In addition, the treatment costs were within the insured’s control. They also were within the control of the physician and hospital, both of which could profit from additional services and raise prices as the patient’s ability to pay increased.

7 Employer-purchased/provided group plans came to be the most common source of the health insurance coverage in the United States (Applebaum, 1961; Follmann, 1965; Davis, 1989). In Canada, provincial government health insurance plans, with universal coverage, replaced the work-place based arrangements in the 1960s.

Citation: Emery, Herb. “Fraternal Sickness Insurance”. EH.Net Encyclopedia, edited by Robert Whaples. March 26, 2008. URL http://eh.net/encyclopedia/fraternal-sickness-insurance/

The U.S. Economy in the 1920s

Gene Smiley, Marquette University

Introduction

The interwar period in the United States, and in the rest of the world, is a most interesting era. The decade of the 1930s saw the most severe depression in our history and ushered in sweeping changes in the role of government. Economists and historians have rightly given much attention to that decade. However, with all of this concern about the growing and developing role of government in economic activity in the 1930s, the decade of the 1920s often tends to get overlooked. This is unfortunate, because the 1920s were a period of vigorous, vital economic growth. It was the first truly modern decade, and dramatic economic developments occurred in those years. The automobile was rapidly adopted, to the detriment of passenger rail travel. Though suburbs had been growing since the late nineteenth century, their growth had been tied to rail or trolley access, and this was limited to the largest cities. The flexibility of car access changed this, and the growth of suburbs began to accelerate. The demands of trucks and cars led to rapid growth in the construction of all-weather surfaced roads to facilitate their movement. The rapidly expanding electric utility networks led to new consumer appliances and new types of lighting and heating for homes and businesses. The introduction of the radio, radio stations, and commercial radio networks began to break up rural isolation, as did the expansion of local and long-distance telephone communications. Recreational activities such as traveling, going to movies, and professional sports became major businesses. The period saw major innovations in business organization and manufacturing technology. The Federal Reserve System first tested its powers, and the United States moved to a dominant position in international trade and global business. These things make the 1920s a period of considerable importance independent of what happened in the 1930s.

National Product and Income and Prices

We begin the survey of the 1920s with an examination of the overall production in the economy, GNP, the most comprehensive measure of aggregate economic activity. Real GNP growth during the 1920s was relatively rapid, 4.2 percent a year from 1920 to 1929 according to the most widely used estimates. (Historical Statistics of the United States, or HSUS, 1976) Real GNP per capita grew 2.7 percent per year between 1920 and 1929. By both nineteenth and twentieth century standards these were relatively rapid rates of real economic growth and they would be considered rapid even today.
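
The per capita figure is, to a close approximation, the difference between output growth and population growth:

\[ g_{\text{GNP per capita}} \;\approx\; g_{\text{GNP}} - g_{\text{population}} \;\approx\; 4.2\% - 1.5\% \;\approx\; 2.7\% \text{ per year}, \]

where the roughly 1.5 percent figure for population growth is an approximate decade average consistent with the annual rates (about 1.0 to 1.9 percent) reported in the population section below.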

There were several interruptions to this growth. In mid-1920 the American economy began to contract, and the 1920-1921 depression lasted about a year, but a rapid recovery reestablished full employment by 1923. As will be discussed below, the Federal Reserve System’s monetary policy was a major factor in initiating the 1920-1921 depression. From 1923 through 1929 growth was much smoother. There was a very mild recession in 1924 and another mild recession in 1927, both of which may be related to oil price shocks (McMillin and Parker, 1994). The 1927 recession was also associated with Henry Ford’s shut-down of all his factories for six months in order to change over from the Model T to the new Model A automobile. Though the Model T’s market share was declining after 1924, in 1926 Ford’s Model T still made up nearly 40 percent of all the new cars produced and sold in the United States. The Great Depression began in the summer of 1929, possibly as early as June. The initial downturn was relatively mild but the contraction accelerated after the crash of the stock market at the end of October. Real total GNP fell 10.2 percent from 1929 to 1930 while real GNP per capita fell 11.5 percent from 1929 to 1930.

Price changes during the 1920s are shown in Figure 2. The Consumer Price Index, CPI, is a better measure of changes in the prices of commodities and services that a typical consumer would purchase, while the Wholesale Price Index, WPI, is a better measure of changes in the cost of inputs for businesses. As the figure shows, the 1920-1921 depression was marked by extraordinarily large price decreases. Consumer prices fell 11.3 percent from 1920 to 1921 and fell another 6.6 percent from 1921 to 1922. After that consumer prices were relatively constant and actually fell slightly from 1926 to 1927 and from 1927 to 1928. Wholesale prices show greater variation. The 1920-1921 depression hit farmers very hard. Prices had been bid up with the increasing foreign demand during the First World War. As European production began to recover after the war, prices began to fall. Though the prices of agricultural products fell from 1919 to 1920, the depression brought on dramatic declines in the prices of raw agricultural produce as well as many other inputs that firms employed. In the scramble to beat price increases during 1919 firms had built up large inventories of raw materials and purchased inputs, and this temporary increase in demand led to even larger price increases. With the depression firms began to draw down those inventories. The result was that the prices of raw materials and manufactured inputs fell rapidly along with the prices of agricultural produce; the WPI dropped 45.9 percent between 1920 and 1921. The price changes probably tend to overstate the severity of the 1920-1921 depression. Romer’s recent work (1988) suggests that prices changed much more easily in that depression, reducing the drop in production and employment. Wholesale prices in the rest of the 1920s were relatively stable, though they were more likely to fall than to rise.

Economic Growth in the 1920s

Despite the 1920-1921 depression and the minor interruptions in 1924 and 1927, the American economy exhibited impressive economic growth during the 1920s. Though some commentators in later years thought that the existence of some slow growing or declining sectors in the twenties suggested weaknesses that might have helped bring on the Great Depression, few now argue this. Economic growth never occurs in all sectors at the same time and at the same rate. Growth reallocates resources from declining or slower growing sectors to the more rapidly expanding sectors in accordance with new technologies, new products and services, and changing consumer tastes.

Economic growth in the 1920s was impressive. Ownership of cars, new household appliances, and housing was spread widely through the population. New products and processes of producing those products drove this growth. The widening use of electricity in production and the growing adoption of the moving assembly line in manufacturing combined to bring on a continuing rise in the productivity of labor and capital. Though the average workweek in most manufacturing remained essentially constant throughout the 1920s, in a few industries, such as railroads and coal production, it declined. (Whaples 2001) New products and services created new markets, such as the markets for radios, electric iceboxes, electric irons, fans, electric lighting, vacuum cleaners, and other laborsaving household appliances. This electricity was distributed by the growing electric utilities. The stocks of those companies helped create the stock market boom of the late twenties. RCA, one of the glamour stocks of the era, paid no dividends but its value appreciated because of expectations for the new company. Like the Internet boom of the late 1990s, the electricity boom of the 1920s fed a rapid expansion in the stock market.

Fed by continuing productivity advances and new products and services and facilitated by an environment of stable prices that encouraged production and risk taking, the American economy embarked on a sustained expansion in the 1920s.

Population and Labor in the 1920s

At the same time that overall production was growing, population growth was declining. As can be seen in Figure 3, from an annual rate of increase of 1.85 and 1.93 percent in 1920 and 1921, respectively, population growth rates fell to 1.23 percent in 1928 and 1.04 percent in 1929.

These changes in the overall growth rate were linked to the birth and death rates of the resident population and a decrease in foreign immigration. Though the crude death rate changed little during the period, the crude birth rate fell sharply into the early 1930s. (Figure 4) There are several explanations for the decline in the birth rate during this period. First, there was an accelerated rural-to-urban migration. Urban families have tended to have fewer children than rural families because urban children do not augment family incomes through their work as unpaid workers as rural children do. Second, the period also saw continued improvement in women’s job opportunities and a rise in their labor force participation rates.

Immigration also fell sharply. In 1917 the federal government began to limit immigration and in 1921 an immigration act limited the number of prospective citizens of any nationality entering the United States each year to no more than 3 percent of that nationality’s resident population as of the 1910 census. A new act in 1924 lowered this to 2 percent of the resident population at the 1890 census and more firmly blocked entry for people from central, southern, and eastern European nations. The limits were relaxed slightly in 1929.

The American population also continued to move during the interwar period. Two regions experienced the largest losses in population shares, New England and the Plains. For New England this was a continuation of a long-term trend. The population share for the Plains region had been rising through the nineteenth century. In the interwar period its agricultural base, combined with the continuing shift from agriculture to industry, led to a sharp decline in its share. The regions gaining population were the Southwest and, particularly, the far West; California began its rapid growth at this time.

During the 1920s the labor force grew at a more rapid rate than population. This somewhat more rapid growth came from the declining share of the population less than 14 years old and therefore not in the labor force. In contrast, the labor force participation rates, or fraction of the population aged 14 and over that was in the labor force, declined during the twenties from 57.7 percent to 56.3 percent. This was entirely due to a fall in the male labor force participation rate from 89.6 percent to 86.8 percent as the female labor force participation rate rose from 24.3 percent to 25.1 percent. The primary source of the fall in male labor force participation rates was a rising retirement rate. Employment rates for males who were 65 or older fell from 60.1 percent in 1920 to 58.0 percent in 1930.

With the depression of 1920-1921 the unemployment rate rose rapidly from 5.2 to 8.7 percent. The recovery reduced unemployment to an average rate of 4.8 percent in 1923. The unemployment rate rose to 5.8 percent in the recession of 1924 and to 5.0 percent with the slowdown in 1927. Otherwise unemployment remained relatively low. The onset of the Great Depression from the summer of 1929 on brought the unemployment rate from 4.6 percent in 1929 to 8.9 percent in 1930. (Figure 5)

Earnings for laborers varied during the twenties. Table 1 presents average weekly earnings for 25 manufacturing industries. In these industries, male skilled and semi-skilled laborers generally commanded a premium of 35 percent over the earnings of unskilled male laborers in the twenties, and unskilled males received on average 35 percent more than females. Real average weekly earnings for these 25 manufacturing industries rose somewhat during the 1920s. For skilled and semi-skilled male workers real average weekly earnings rose 5.3 percent between 1923 and 1929, while real average weekly earnings for unskilled males rose 8.7 percent over the same years. Real average weekly earnings for females rose only 1.7 percent between 1923 and 1929. Real weekly earnings for bituminous and lignite coal miners fell as the coal industry encountered difficult times in the late twenties, and the real daily wage rate for farmworkers, reflecting the ongoing difficulties in agriculture, fell after the recovery from the 1920-1921 depression.
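
Real earnings figures of this kind are nominal earnings deflated by a price index. Assuming, for illustration, that the CPI shown in Figure 2 is the deflator (the exact index used for Table 1 is not stated here), real weekly earnings in year \(t\) relative to a base year are

\[ w_t^{\text{real}} = w_t^{\text{nominal}} \times \frac{\text{CPI}_{\text{base}}}{\text{CPI}_t}, \]

so with consumer prices roughly flat from 1923 to 1929, the reported real gains over those years largely reflect movements in nominal weekly earnings.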

The 1920s were not kind to labor unions even though the First World War had solidified the dominance of the American Federation of Labor among labor unions in the United States. The rapid growth in union membership fostered by federal government policies during the war ended in 1919. A committee of AFL craft unions undertook a successful membership drive in the steel industry in that year. When U.S. Steel refused to bargain, the committee called a strike, the failure of which was a sharp blow to the unionization drive. (Brody, 1965) In the same year, the United Mine Workers undertook a large strike and also lost. These two lost strikes and the 1920-21 depression took the impetus out of the union movement and led to severe membership losses that continued through the twenties. (Figure 6)

Under Samuel Gompers’s leadership, the AFL’s “business unionism” had attempted to promote the union and collective bargaining as the primary answer to the workers’ concerns with wages, hours, and working conditions. The AFL officially opposed any government actions that would have diminished worker attachment to unions by providing competing benefits, such as government-sponsored unemployment insurance, minimum wage proposals, maximum hours proposals, and social security programs. As Lloyd Ulman (1961) points out, the AFL, under Gompers’ direction, differentiated among such proposals on the basis of whether the statute would or would not aid collective bargaining. After Gompers’ death, William Green led the AFL in a policy change as the AFL promoted the idea of union-management cooperation to improve output and promote greater employer acceptance of unions. But Irving Bernstein (1965) concludes that, on the whole, union-management cooperation in the twenties was a failure.

To combat the appeal of unions in the twenties, firms used the “yellow-dog” contract requiring employees to swear they were not union members and would not join one; the “American Plan” promoting the open shop and contending that the closed shop was un-American; and welfare capitalism. The most common aspects of welfare capitalism included personnel management to handle employment issues and problems, the doctrine of “high wages,” company group life insurance, old-age pension plans, stock-purchase plans, and more. Some firms formed company unions to thwart independent unionization and the number of company-controlled unions grew from 145 to 432 between 1919 and 1926.

Until the late thirties the AFL was a voluntary association of independent national craft unions. Craft unions relied upon the particular skills the workers had acquired (their craft) to distinguish the workers and provide barriers to the entry of other workers. Most craft unions required a period of apprenticeship before a worker was fully accepted as a journeyman worker. The skills, and often lengthy apprenticeship, constituted the entry barrier that gave the union its bargaining power. There were only a few unions that were closer to today’s industrial unions where the required skills were much less (or nonexistent) making the entry of new workers much easier. The most important of these industrial unions was the United Mine Workers, UMW.

The AFL had been created on two principles: the autonomy of the national unions and the exclusive jurisdiction of the national union. Individual union members were not, in fact, members of the AFL; rather, they were members of the local and national union, and the national was a member of the AFL. Representation in the AFL gave dominance to the national unions, and, as a result, the AFL had little effective power over them. The craft lines, however, had never been distinct and increasingly became blurred. The AFL was constantly mediating jurisdictional disputes between member national unions. Because the AFL and its individual unions were not set up to appeal to and work for the relatively less skilled industrial workers, union organizing and growth lagged in the twenties.

Agriculture

The onset of the First World War in Europe brought unprecedented prosperity to American farmers. As agricultural production in Europe declined, the demand for American agricultural exports rose, leading to rising farm product prices and incomes. In response to this, American farmers expanded production by moving onto marginal farmland, such as Wisconsin cutover property on the edge of the woods and hilly terrain in the Ozark and Appalachian regions. They also increased output by purchasing more machinery, such as tractors, plows, mowers, and threshers. The price of farmland, particularly marginal farmland, rose in response to the increased demand, and the debt of American farmers increased substantially.

This expansion of American agriculture continued past the end of the First World War as farm exports to Europe and farm prices initially remained high. However, agricultural production in Europe recovered much faster than most observers had anticipated. Even before the onset of the short depression in 1920, farm exports and farm product prices had begun to fall. During the depression, farm prices virtually collapsed. From 1920 to 1921, the consumer price index fell 11.3 percent, the wholesale price index fell 45.9 percent, and the farm products price index fell 53.3 percent. (HSUS, Series E40, E42, and E135)

Real average net income per farm fell over 72.6 percent between 1920 and 1921 and, though rising in the twenties, never recovered the relative levels of 1918 and 1919. (Figure 7) Farm mortgage foreclosures rose and stayed at historically high levels for the entire decade of the 1920s. (Figure 8) The value of farmland and buildings fell throughout the twenties and, for the first time in American history, the number of cultivated acres actually declined as farmers pulled back from the marginal farmland brought into production during the war. Rather than indicators of a general depression in agriculture in the twenties, these were the results of the financial commitments made by overoptimistic American farmers during and directly after the war. The foreclosures were generally on second mortgages rather than on first mortgages as they were in the early 1930s. (Johnson, 1973; Alston, 1983)

A Declining Sector

A major difficulty in analyzing the interwar agricultural sector lies in separating the effects of the 1920-21 and 1929-33 depressions from those that arose because agriculture was declining relative to the other sectors. Relatively slow growth in the demand for basic agricultural products and significant increases in the productivity of labor, land, and machinery in agricultural production, combined with much more rapid extensive economic growth in the nonagricultural sectors of the economy, required a shift of resources, particularly labor, out of agriculture. (Figure 9) The market induces labor to move voluntarily from one sector to another through income differentials, suggesting that even in the absence of the effects of the depressions, farm incomes would have been lower than nonfarm incomes so as to bring about this migration.

The continuous substitution of tractor power for horse and mule power released hay and oats acreage to grow crops for human consumption. Though cotton and tobacco continued as the primary crops in the south, the relative production of cotton continued to shift to the west as production in Arkansas, Missouri, Oklahoma, Texas, New Mexico, Arizona, and California increased. As quotas reduced immigration and incomes rose, the demand for cereal grains grew slowly—more slowly than the supply—and the demand for fruits, vegetables, and dairy products grew. Refrigeration and faster freight shipments expanded the milk sheds further from metropolitan areas. Wisconsin and other North Central states began to ship cream and cheeses to the Atlantic Coast. Due to transportation improvements, specialized truck farms and the citrus industry became more important in California and Florida. (Parker, 1972; Soule, 1947)

The relative decline of the agricultural sector in this period was closely related to the low income elasticity of demand for many farm products, particularly cereal grains, pork, and cotton. As incomes grew, the demand for these staples grew much more slowly. At the same time, rising land and labor productivity were increasing the supplies of staples, causing real prices to fall.
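
One stylized way to state this relationship is in terms of the income elasticity of demand (the notation here is illustrative and not part of the original article):

\[ \varepsilon_Y \;=\; \frac{\%\Delta Q_d}{\%\Delta Y} \]

For staples such as cereal grains, pork, and cotton, this elasticity was well below one, so a given percentage rise in incomes raised the quantity demanded by a much smaller percentage; with productivity gains simultaneously expanding supply, real prices of these staples had to fall.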

Table 3 presents selected agricultural productivity statistics for these years. Those data indicate that there were greater gains in labor productivity than in land productivity (or per acre yields). Per acre yields in wheat and hay actually decreased between 1915-19 and 1935-39. These productivity increases, which released resources from the agricultural sector, were the result of technological improvements in agriculture.

Technological Improvements In Agricultural Production

In many ways the adoption of the tractor in the interwar period symbolizes the technological changes that occurred in the agricultural sector. This changeover in the power source that farmers used had far-reaching consequences and altered the organization of the farm and the farmers’ lifestyle. The adoption of the tractor was land saving (by releasing acreage previously used to produce crops for workstock) and labor saving. At the same time it increased the risks of farming because farmers were now much more exposed to the marketplace. They could not produce their own fuel for tractors as they had for the workstock. Rather, this had to be purchased from other suppliers. Repair and replacement parts also had to be purchased, and sometimes the repairs had to be undertaken by specialized mechanics. The purchase of a tractor also commonly required the purchase of new complementary machines; therefore, the decision to purchase a tractor was not an isolated one. (White, 2001; Ankli, 1980; Ankli and Olmstead, 1981; Musoke, 1981; Whatley, 1987). These changes resulted in more and more farmers purchasing and using tractors, but the rate of adoption varied sharply across the United States.

Technological innovations in plants and animals also raised productivity. Hybrid seed corn increased yields from an average of 40 bushels per acre to 100 to 120 bushels per acre. New varieties of wheat were developed from the hardy Russian and Turkish wheat varieties which had been imported. The U.S. Department of Agriculture’s Experiment Stations took the lead in developing wheat varieties for different regions. For example, in the Columbia River Basin new varieties raised yields from an average of 19.1 bushels per acre in 1913-22 to 23.1 bushels per acre in 1933-42. (Shepherd, 1980) New hog breeds produced more meat and new methods of swine sanitation sharply increased the survival rate of piglets. An effective serum for hog cholera was developed, and the federal government led the way in the testing and eradication of bovine tuberculosis and brucellosis. Prior to the Second World War, a number of pesticides to control animal disease were developed, including cattle dips and disinfectants. By the mid-1920s a vaccine for “blackleg,” an infectious, usually fatal disease that particularly struck young cattle, was completed. The cattle tick, which carried Texas Fever, was largely controlled through inspections. (Schlebecker, 1975; Bogue, 1983; Wood, 1980)

Federal Agricultural Programs in the 1920s

Though there was substantial agricultural discontent in the period from the Civil War to the late 1890s, the period from then to the onset of the First World War was relatively free from overt farmers’ complaints. In later years farmers dubbed the 1910-14 period agriculture’s “golden years” and used the prices of farm crops and farm inputs in that period as a standard by which to judge crop and input prices in later years. The problems that arose in the agricultural sector during the twenties once again led to insistent demands by farmers for government to alleviate their distress.

Though there were increasing calls for direct federal government intervention to limit production and raise farm prices, such intervention did not come until Roosevelt took office. Rather, there was a reliance upon tariffs, the traditional method of aiding injured groups, and upon the “sanctioning and promotion of cooperative marketing associations.” In 1921, with the Packers and Stockyards Act and the Grain Futures Act, Congress attempted to control the grain exchanges and to compel merchants and stockyards to charge “reasonable rates.” In 1922 Congress passed the Capper-Volstead Act to promote agricultural cooperatives and the Fordney-McCumber Tariff to impose high duties on most agricultural imports. The Cooperative Marketing Act of 1924 did not bolster failing cooperatives as it was supposed to do. (Hoffman and Libecap, 1991)

Twice between 1924 and 1928 Congress passed “McNary-Haugen” bills, but President Calvin Coolidge vetoed both. The McNary-Haugen bills proposed to establish “fair” exchange values (based on the 1910-14 period) for each product and to maintain them through tariffs and a private corporation that would be chartered by the government and could buy enough of each commodity to keep its price up to the computed fair level. The revenues were to come from taxes imposed on farmers. The Hoover administration passed an Agricultural Marketing Act in 1929 and the Hawley-Smoot Tariff in 1930. The Agricultural Marketing Act committed the federal government to a policy of stabilizing farm prices through several nongovernment institutions, but these failed during the depression. Federal intervention in the agricultural sector really came of age during the New Deal era of the 1930s.

Manufacturing

Agriculture was not the only sector experiencing difficulties in the twenties. Other industries, such as textiles, boots and shoes, and coal mining, also experienced trying times. However, at the same time that these industries were declining, other industries, such as electrical appliances, automobiles, and construction, were growing rapidly. The simultaneous existence of growing and declining industries has been common to all eras because economic growth and technological progress never affect all sectors in the same way. In general, there was a rapid rate of growth of productivity in manufacturing during the twenties. Rising real wages, a result of immigration restrictions and the slower growth of the resident population, spurred this, as did transportation improvements and communications advances. These developments brought about differential growth in the various manufacturing sectors in the United States in the 1920s.

Because of the historic pattern of economic development in the United States, the northeast was the first area to really develop a manufacturing base. By the mid-nineteenth century the East North Central region was creating a manufacturing base, and the other regions began to create manufacturing bases in the last half of the nineteenth century, resulting in a relative westward and southward shift of manufacturing activity. This trend continued in the 1920s as the New England and Middle Atlantic regions’ shares of manufacturing employment fell while all of the other regions—excluding the West North Central region—gained. There was considerable variation in the growth of the industries and shifts in their ranking during the decade. The largest broadly defined industries were, not surprisingly, food and kindred products; textile mill products; those producing and fabricating primary metals; machinery production; and chemicals. When industries are more narrowly defined, the automobile industry, which ranked third in manufacturing value added in 1919, ranked first by the mid-1920s.

Productivity Developments

Gavin Wright (1990) has argued that one of the underappreciated characteristics of American industrial history has been its reliance on mineral resources. Wright argues that the growing American strength in industrial exports and industrialization in general relied on an increasing intensity in nonreproducible natural resources. The large American market was knit together as one unified market without internal barriers through the development of widespread low-cost transportation. Many distinctively American developments, such as continuous-process, mass-production methods, were associated with the “high throughput” of fuel and raw materials relative to labor and capital inputs. As a result the United States became the dominant industrial force in the world in the 1920s and 1930s. According to Wright, after World War II “the process by which the United States became a unified ‘economy’ in the nineteenth century has been extended to the world as a whole. To a degree, natural resources have become commodities rather than part of the ‘factor endowment’ of individual countries.” (Wright, 1990)

In addition to this growing intensity in the use of nonreproducible natural resources as a source of productivity gains in American manufacturing, other technological changes during the twenties and thirties tended to raise the productivity of the existing capital through the replacement of critical types of capital equipment with superior equipment and through changes in management methods. (Soule, 1947; Lorant, 1967; Devine, 1983; Oshima, 1984) Some changes, such as the standardization of parts and processes and the reduction of the number of styles and designs, raised the productivity of both capital and labor. Modern management techniques, pioneered by Frederick W. Taylor, were adopted on a wider scale.

One of the important forces contributing to mass production and increased productivity was the shift to electric power. (Devine, 1983) By 1929 about 70 percent of manufacturing activity relied on electricity, compared to roughly 30 percent in 1914. Steam provided 80 percent of the mechanical drive capacity in manufacturing in 1900, but electricity provided over 50 percent by 1920 and 78 percent by 1929. An increasing number of factories were buying their power from electric utilities. In 1909, 64 percent of the electric motor capacity in manufacturing establishments used electricity generated on the factory site; by 1919, 57 percent of the electricity used in manufacturing was purchased from independent electric utilities.

The shift from coal to oil and natural gas and from raw unprocessed energy in the forms of coal and waterpower to processed energy in the form of internal combustion fuel and electricity increased thermal efficiency. After the First World War energy consumption relative to GNP fell, there was a sharp increase in the growth rate of output per labor-hour, and the output per unit of capital input once again began rising. These trends can be seen in the data in Table 3. Labor productivity grew much more rapidly during the 1920s than in the previous or following decade. Capital productivity had declined in the decade prior to the 1920s, but it increased sharply during the twenties and continued to rise in the following decade. Alexander Field (2003) has argued that the 1930s were the most technologically progressive decade of the twentieth century, basing his argument on the growth of multi-factor productivity as well as the impressive array of technological developments during the thirties. However, the twenties also saw impressive increases in labor and capital productivity as, particularly, developments in energy and transportation accelerated.

Table: Average Annual Rates of Labor Productivity and Capital Productivity Growth.

Warren Devine, Jr. (1983) reports that in the twenties the most important result of the adoption of electricity was that it would be an indirect “lever to increase production.” There were a number of ways in which this occurred. Electricity brought about an increased flow of production by allowing new flexibility in the design of buildings and the arrangement of machines. In this way it maximized throughput. Electric cranes were an “inestimable boon” to production because with adequate headroom they could operate anywhere in a plant, something that mechanical power transmission to overhead cranes did not allow. Electricity made possible the use of portable power tools that could be taken anywhere in the factory. Electricity brought about improved illumination, ventilation, and cleanliness in the plants, dramatically improving working conditions. It improved the control of machines since there was no longer belt slippage with overhead line shafts and belt transmission, and there were fewer limitations on the operating speeds of machines. Finally, it made plant expansion much easier than when overhead shafts and belts had been relied upon for operating power.

The mechanization of American manufacturing accelerated in the 1920s, and this led to a much more rapid growth of productivity in manufacturing compared to earlier decades and to other sectors at that time. There were several forces that promoted mechanization. One was the rapidly expanding aggregate demand during the prosperous twenties. Another was the technological developments in new machines and processes, of which electrification played an important part. Finally, Harry Jerome (1934) and, later, Harry Oshima (1984) both suggest that the price of unskilled labor began to rise as immigration sharply declined with new immigration laws and falling population growth. This accelerated the mechanization of the nation’s factories.

Technological changes during this period can be documented for a number of individual industries. In bituminous coal mining, labor productivity rose when mechanical loading devices reduced the labor required by 24 to 50 percent. The burst of paved road construction in the twenties led to the development of a finishing machine to smooth the surface of cement highways, and this reduced the labor requirement by 40 to 60 percent. Mechanical pavers that spread centrally mixed materials further increased productivity in road construction. These replaced the roadside dump and wheelbarrow methods of spreading the cement. Jerome (1934) reports that the glass in electric light bulbs was made by new machines that cut the number of labor-hours required for their manufacture by nearly half. New machines to produce cigarettes and cigars, for warp-tying in textile production, and for pressing clothes in clothing shops also cut labor-hours. The Banbury mixer reduced the labor input in the production of automobile tires by half, and output per worker of inner tubes increased about four times with a new production method. However, as Daniel Nelson (1987) points out, the continuing advances were the “cumulative process resulting from a vast number of successive small changes.” Because of these continuing advances in the quality of the tires and in the manufacturing of tires, between 1910 and 1930 “tire costs per thousand miles of driving fell from $9.39 to $0.65.”

John Lorant (1967) has documented other technological advances that occurred in American manufacturing during the twenties. For example, the organic chemical industry developed rapidly due to the introduction of the Weizmann fermentation process. In a similar fashion, nearly half of the productivity advances in the paper industry were due to the “increasingly sophisticated applications of electric power and paper manufacturing processes,” especially the fourdrinier paper-making machines. As Avi Cohen (1984) has shown, the continuing advances in these machines were the result of evolutionary changes to the basic machine. Mechanization in many types of mass-production industries raised the productivity of labor and capital. In the glass industry, automatic feeding and other types of fully automatic production raised the efficiency of the production of glass containers, window glass, and pressed glass. Giedion (1948) reported that the production of bread was “automatized” in all stages during the 1920s.

Though not directly bringing about productivity increases in manufacturing processes, developments in the management of manufacturing firms, particularly the largest ones, also significantly affected their structure and operation. Alfred D. Chandler, Jr. (1962) has argued that the structure of a firm must follow its strategy. Until the First World War most industrial firms were centralized, single-division firms even when becoming vertically integrated. When this began to change, the management of the large industrial firms had to change accordingly.

Changes in its size and structure during the First World War led E. I. du Pont de Nemours and Company to adopt a strategy of diversifying into the production of largely unrelated product lines. The firm found that the centralized, single-division structure that had served it so well was not suited to this strategy, and its poor business performance led its executives to develop, between 1919 and 1921, a decentralized, multidivisional structure that boosted it to the first rank among American industrial firms.

General Motors had a somewhat different problem. By 1920 it was already decentralized into separate divisions. In fact, there was so much decentralization that those divisions essentially remained separate companies and there was little coordination between the operating divisions. A financial crisis at the end of 1920 ousted W. C. Durant and brought in the du Ponts and Alfred Sloan. Sloan, who had seen the problems at GM but had been unable to convince Durant to make changes, began reorganizing the management of the company. Over the next several years Sloan and other GM executives developed the general office for a decentralized, multidivisional firm.

Though facing related problems at nearly the same time, GM and du Pont developed their decentralized, multidivisional organizations separately. As other manufacturing firms began to diversify, GM and du Pont became the models for reorganizing the management of the firms. In many industrial firms these reorganizations were not completed until well after the Second World War.

Competition, Monopoly, and the Government

The rise of big businesses, which accelerated in the postbellum period and particularly during the first great turn-of-the-century merger wave, continued in the interwar period. Between 1925 and 1939 the share of manufacturing assets held by the 100 largest corporations rose from 34.5 to 41.9 percent. (Niemi, 1980) As a matter of public policy, concern with monopolies diminished in the 1920s even though firms were growing larger. The growing size of businesses, however, later became one of the convenient scapegoats for the Great Depression.

However, the rise of large manufacturing firms in the interwar period is not so easily interpreted as an attempt to monopolize their industries. Some of the growth came about through vertical integration by the more successful manufacturing firms. Backward integration was generally an attempt to ensure a smooth supply of raw materials where that supply was not plentiful and was dispersed, and where firms “feared that raw materials might become controlled by competitors or independent suppliers.” (Livesay and Porter, 1969) Forward integration was an offensive tactic employed when manufacturers found that the existing distribution network proved inadequate. Livesay and Porter suggested a number of reasons why firms chose to integrate forward. In some cases they had to provide the mass distribution facilities to handle their much larger outputs, especially when the product was a new one. The complexity of some new products required technical expertise that the existing distribution system could not provide. In other cases “the high unit costs of products required consumer credit which exceeded financial capabilities of independent distributors.” Forward integration into wholesaling was more common than forward integration into retailing. The producers of automobiles, petroleum, typewriters, sewing machines, and harvesters were typical of those manufacturers that integrated all the way into retailing.

In some cases, increases in industry concentration arose as a natural process of industrial maturation. In the automobile industry, Henry Ford’s invention in 1913 of the moving assembly line—a technological innovation that changed most manufacturing—lent itself to larger factories and firms. Of the several thousand companies that had produced cars prior to 1920, 120 were still doing so then, but Ford and General Motors were the clear leaders, together producing nearly 70 percent of the cars. During the twenties, several other companies, such as Durant, Willys, and Studebaker, missed their opportunity to become more important producers, and Chrysler, formed in early 1925, became the third most important producer by 1930. Many went out of business and by 1929 only 44 companies were still producing cars. The Great Depression decimated the industry. Dozens of minor firms went out of business. Ford struggled through by relying on its huge stockpile of cash accumulated prior to the mid-1920s, while Chrysler actually grew. By 1940, only eight companies still produced cars—GM, Ford, and Chrysler had about 85 percent of the market, while Willys, Studebaker, Nash, Hudson, and Packard shared the remainder. The rising concentration in this industry was not due to attempts to monopolize. As the industry matured, growing economies of scale in factory production and vertical integration, as well as the advantages of a widespread dealer network, led to a dramatic decrease in the number of viable firms. (Chandler, 1962 and 1964; Rae, 1984; Bernstein, 1987)

It was a similar story in the tire industry. The increasing concentration and growth of firms was driven by scale economies in production and retailing and by the devastating effects of the depression in the thirties. Although there were 190 firms in 1919, 5 firms dominated the industry—Goodyear, B. F. Goodrich, Firestone, U.S. Rubber, and Fisk, followed by Miller Rubber, General Tire and Rubber, and Kelly-Springfield. During the twenties, 166 firms left the industry while 66 entered. The share of the 5 largest firms rose from 50 percent in 1921 to 75 percent in 1937. During the depressed thirties, there was fierce price competition, and many firms exited the industry. By 1937 there were 30 firms, but the average employment per factory was 4.41 times as large as in 1921, and the average factory produced 6.87 times as many tires as in 1921. (French, 1986 and 1991; Nelson, 1987; Fricke, 1982)

The steel industry was already highly concentrated by 1920 as U.S. Steel had around 50 percent of the market. But U. S. Steel’s market share declined through the twenties and thirties as several smaller firms competed and grew to become known as Little Steel, the next six largest integrated producers after U. S. Steel. Jonathan Baker (1989) has argued that the evidence is consistent with “the assumption that competition was a dominant strategy for steel manufacturers” until the depression. However, the initiation of the National Recovery Administration (NRA) codes in 1933 required the firms to cooperate rather than compete, and Baker argues that this constituted a training period leading firms to cooperate in price and output policies after 1935. (McCraw and Reinhardt, 1989; Weiss, 1980; Adams, 1977)

Mergers

A number of the larger firms grew by merger during this period, and the second great merger wave in American industry occurred during the last half of the 1920s. Figure 10 shows two series on mergers during the interwar period. The FTC series included many of the smaller mergers. The series constructed by Carl Eis (1969) only includes the larger mergers and ends in 1930.

This second great merger wave coincided with the stock market boom of the twenties and has been called “merger for oligopoly” rather than merger for monopoly. (Stigler, 1950) This merger wave created many larger firms that ranked below the industry leaders. Much of the merger activity occurred in the banking and public utilities industries. (Markham, 1955) In manufacturing and mining, the effects on industrial structure were less striking. Eis (1969) found that while mergers took place in almost all industries, they were concentrated in a smaller number of them, particularly petroleum, primary metals, and food products.

The federal government’s antitrust policies toward business varied sharply during the interwar period. In the 1920s there was relatively little activity by the Justice Department, but with the onset of the Great Depression the New Dealers sought to exempt business from the antitrust laws and to cartelize industries under government supervision.

With the passage of the FTC and Clayton Acts in 1914 to supplement the 1890 Sherman Act, the cornerstones of American antitrust law were complete. Though minor amendments were later enacted, the primary changes after that came in the enforcement of the laws and in swings in judicial decisions. The laws’ two primary areas of application were overt behavior, such as horizontal and vertical price-fixing, and market structure, such as mergers and dominant firms. Horizontal price-fixing involves firms that would normally be competitors getting together to agree on stable and higher prices for their products. As long as most of the important competitors agree on the new, higher prices, substitution between products is eliminated and the demand becomes much less elastic. Thus, increasing the price increases the revenues and the profits of the firms that are fixing prices. Vertical price-fixing involves firms setting the prices of intermediate products purchased at different stages of production. It also tends to eliminate substitutes and makes the demand less elastic.
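
A stylized bit of textbook algebra, not drawn from the original article, shows why making demand less elastic makes a coordinated price increase pay off. With total revenue R = pQ(p) and the absolute price elasticity of demand written as |ε| = |(p/Q)(dQ/dp)|,

\[ \frac{dR}{dp} \;=\; Q(p)\,\bigl(1 - |\varepsilon|\bigr), \]

so once a price-fixing agreement removes close substitutes and pushes |ε| below one, raising the price raises revenue; and since the higher price also reduces the quantity produced, and hence costs, profits rise as well.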

Price-fixing continued to be considered illegal throughout the period, but there was no major judicial activity regarding it in the 1920s other than the Trenton Potteries decision in 1927. In that decision 20 individuals and 23 corporations were found guilty of conspiring to fix the prices of bathroom bowls. The evidence in the case suggested that the firms were not very successful at doing so, but the court found that they were guilty nevertheless; their success, or lack thereof, was not held to be a factor in the decision. (Scherer and Ross, 1990) Though criticized by some, the decision was precedent setting in that it prohibited explicit pricing conspiracies per se.

The Justice Department had achieved success in dismantling Standard Oil and American Tobacco in 1911 through decisions that the firms had unreasonably restrained trade. These were essentially the same points used in court decisions against the Powder Trust in 1911, the thread trust in 1913, Eastman Kodak in 1915, the glucose and cornstarch trust in 1916, and the anthracite railroads in 1920. The criterion of an unreasonable restraint of trade was used in the 1916 and 1918 decisions that found the American Can Company and the United Shoe Machinery Company innocent of violating the Sherman Act; it was also clearly enunciated in the 1920 U. S. Steel decision. This became known as the rule of reason standard in antitrust policy.

Merger policy had been defined in the 1914 Clayton Act to prohibit only the acquisition of one corporation’s stock by another corporation. Firms then shifted to the outright purchase of a competitor’s assets. A series of court decisions in the twenties and thirties further reduced the possibilities of Justice Department actions against mergers. “Only fifteen mergers were ordered dissolved through antitrust actions between 1914 and 1950, and ten of the orders were accomplished under the Sherman Act rather than Clayton Act proceedings.”

Energy

The search for energy and new ways to translate it into heat, light, and motion has been one of the unending themes in history. From whale oil to coal oil to kerosene to electricity, the search for better and less costly ways to light our lives, heat our homes, and move our machines has consumed much time and effort. The energy industries responded to those demands and the consumption of energy materials (coal, oil, gas, and fuel wood) as a percent of GNP rose from about 2 percent in the latter part of the nineteenth century to about 3 percent in the twentieth.

Changes in the energy markets that had begun in the nineteenth century continued. Processed energy in the forms of petroleum derivatives and electricity continued to become more important than “raw” energy, such as that available from coal and water. The evolution of energy sources for lighting continued; at the end of the nineteenth century, natural gas and electricity, rather than liquid fuels, began to provide more lighting for streets, businesses, and homes.

In the twentieth century the continuing shift to electricity and internal combustion fuels increased the efficiency with which the American economy used energy. These processed forms of energy resulted in a more rapid increase in the productivity of labor and capital in American manufacturing. From 1899 to 1919, output per labor-hour increased at an average annual rate of 1.2 percent, whereas from 1919 to 1937 the increase was 3.5 percent per year. The productivity of capital had fallen at an average annual rate of 1.8 percent per year in the 20 years prior to 1919, but it rose 3.1 percent a year in the 18 years after 1919. As discussed above, the adoption of electricity in American manufacturing initiated a rapid evolution in the organization of plants and rapid increases in productivity in all types of manufacturing.
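
A simple compounding calculation (the arithmetic here is illustrative, not reported in the article) puts those average annual rates in perspective:

\[ (1.012)^{20} \approx 1.27, \qquad (1.035)^{18} \approx 1.86, \]

so output per labor-hour rose by roughly 27 percent over the twenty years before 1919 but by roughly 86 percent over the eighteen years that followed.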

The change in transportation was even more remarkable. Internal combustion engines running on gasoline or diesel fuel revolutionized transportation. Cars quickly grabbed the lion’s share of local and regional travel and began to eat into long distance passenger travel, just as the railroads had done to passenger traffic by water in the 1830s. Even before the First World War cities had begun passing laws to regulate and limit “jitney” services and to protect the investments in urban rail mass transit. Trucking began eating into the freight carried by the railroads.

These developments brought about changes in the energy industries. Coal mining became a declining industry. As Figure 11 shows, in 1925 the share of petroleum in the value of coal, gas, and petroleum output exceeded that of bituminous coal, and it continued to rise. Anthracite coal’s share was much smaller and it declined while natural gas and LP (or liquefied petroleum) gas were relatively unimportant. These changes, especially the declining coal industry, were the source of considerable worry in the twenties.

Coal

One of the industries considered to be “sick” in the twenties was coal, particularly bituminous, or soft, coal. Income in the industry declined, and bankruptcies were frequent. Strikes frequently interrupted production. The majority of the miners “lived in squalid and unsanitary houses, and the incidence of accidents and diseases was high.” (Soule, 1947) The number of operating bituminous coal mines declined sharply from 1923 through 1932. Anthracite (or hard) coal output was much smaller during the twenties. Real coal prices rose from 1919 to 1922, and bituminous coal prices fell sharply from then to 1925. (Figure 12) Coal mining employment plummeted during the twenties. Annual earnings, especially in bituminous coal mining, also fell because of dwindling hourly earnings and, from 1929 on, a shrinking workweek. (Figure 13)

The sources of these changes are to be found in the increasing supply due to productivity advances in coal production and in the decreasing demand for coal. The demand fell as industries began turning from coal to electricity and because of productivity advances in the use of coal to create energy in steel, railroads, and electric utilities. (Keller, 1973) In the generation of electricity, larger steam plants employing higher temperatures and steam pressures continued to reduce coal consumption per kilowatt hour. Similar reductions were found in the production of coke from coal for iron and steel production and in the use of coal by the steam railroad engines. (Rezneck, 1951) All of these factors reduced the demand for coal.

Productivity advances in coal mining tended to be labor saving. Mechanical cutting accounted for 60.7 percent of the coal mined in 1920 and 78.4 percent in 1929. By the middle of the twenties, the mechanical loading of coal began to be introduced. Between 1929 and 1939, output per labor-hour rose nearly one third in bituminous coal mining and nearly four fifths in anthracite as more mines adopted machine mining and mechanical loading and strip mining expanded.

The increasing supply and falling demand for coal led to the closure of mines that were too costly to operate. A mine could simply cease operations, let the equipment stand idle, and lay off employees. When bankruptcies occurred, the mines generally just reopened under new ownership with lower capital charges. When demand increased or strikes reduced the supply of coal, idle mines simply resumed production. As a result, the easily expanded supply largely eliminated economic profits.

The average daily employment in coal mining dropped by over 208,000 from its peak in 1923, but the sharply falling real wages suggest that the supply of labor did not fall as rapidly as the demand for labor. Soule (1947) notes that when employment fell in coal mining, it meant fewer days of work for the same number of men. Social and cultural characteristics tended to tie many to their home region. The local alternatives were few, and ignorance of alternatives outside the Appalachian rural areas, where most bituminous coal was mined, made it very costly to transfer out.

Petroleum

In contrast to the coal industry, the petroleum industry was growing throughout the interwar period. By the thirties, crude petroleum dominated the real value of the production of energy materials. As Figure 14 shows, the production of crude petroleum increased sharply between 1920 and 1930, while real petroleum prices, though highly variable, tended to decline.

The growing demand for petroleum was driven by the growth in demand for gasoline as America became a motorized society. The production of gasoline surpassed kerosene production in 1915. Kerosene’s market continued to contract as electric lighting replaced kerosene lighting. The development of oil burners in the twenties began a switch from coal toward fuel oil for home heating, and this further increased the growing demand for petroleum. The growth in the demand for fuel oil and diesel fuel for ship engines also increased petroleum demand. But it was the growth in the demand for gasoline that drove the petroleum market.

The decline in real prices in the latter part of the twenties shows that supply was growing even faster than demand. The discovery of new fields in the early twenties increased the supply of petroleum and led to falling prices as production capacity grew. The Santa Fe Springs, California strike in 1919 initiated a supply shock, as did the discovery of the Long Beach, California field in 1921. New discoveries in Powell, Texas, and Smackover, Arkansas, further increased the supply of petroleum in 1921. New supply increases occurred in 1926 to 1928 with petroleum strikes in Seminole, Oklahoma, and Hendricks, Texas. The supply of oil increased sharply in 1930 to 1931 with new discoveries in Oklahoma City and East Texas. Each new discovery pushed down the real prices of crude oil and petroleum derivatives, and the growing production capacity led to a general declining trend in petroleum prices. McMillin and Parker (1994) argue that supply shocks generated by these new discoveries were a factor in the business cycles during the 1920s.

The supply of gasoline increased more than the supply of crude petroleum. In 1913 a chemist at Standard Oil of Indiana introduced the cracking process to refine crude petroleum; until that time it had been refined by distillation or unpressurized heating. In the heating process, various refined products such as kerosene, gasoline, naphtha, and lubricating oils were produced at different temperatures. It was difficult to vary the amount of the different refined products produced from a barrel of crude. The cracking process used pressurized heating to break heavier components down into lighter crude derivatives; with cracking, it was possible to increase the amount of gasoline obtained from a barrel of crude from 15 to 45 percent. In the early twenties, chemists at Standard Oil of New Jersey improved the cracking process, and by 1927 it was possible to obtain twice as much gasoline from a barrel of crude petroleum as in 1917.

The petroleum companies also developed new ways to distribute gasoline to motorists that made it more convenient to purchase gasoline. Prior to the First World War, gasoline was commonly purchased in one- or five-gallon cans and the purchaser used a funnel to pour the gasoline from the can into the car. Then “filling stations” appeared, which specialized in filling cars’ tanks with gasoline. These spread rapidly, and by 1919 gasoline companies were beginning to introduce their own filling stations or contract with independent stations to exclusively distribute their gasoline. Increasing competition and falling profits led filling station operators to expand into other activities such as oil changes and other mechanical repairs. The general name attached to such stations gradually changed to “service stations” to reflect these new functions.

Though the petroleum firms tended to be large, they were highly competitive, trying to pump as much petroleum as possible to increase their share of the fields. This, combined with the development of new fields, led to an industry with highly volatile prices and output. Firms desperately wanted to stabilize and reduce the production of crude petroleum so as to stabilize and raise the prices of crude petroleum and refined products. Unable to obtain voluntary agreement on output limitations by the firms and producers, governments began stepping in. Led by Texas, which created the Texas Railroad Commission in 1891, oil-producing states began to intervene to regulate production. Such laws were usually termed prorationing laws and were quotas designed to limit each well’s output to some fraction of its potential. The purpose was as much to stabilize and reduce production and raise prices as anything else, although generally such laws were passed under the guise of conservation. Although the federal government supported such attempts, not until the New Deal were federal laws passed to assist this.

Electricity

By the mid-1890s the debate over the method by which electricity was to be transmitted had been won by those who advocated alternating current. The reduced power losses and greater distance over which electricity could be transmitted more than offset the necessity for transforming the current back to direct current for general use. Widespread adoption of machines and appliances by industry and consumers then rested on an increase in the array of products using electricity as the source of power, heat, or light and the development of an efficient, lower cost method of generating electricity.

General Electric, Westinghouse, and other firms began producing the electrical appliances for homes and an increasing number of machines based on electricity began to appear in industry. The problem of lower cost production was solved by the introduction of centralized generating facilities that distributed the electric power through lines to many consumers and business firms.

Though initially several firms competed in generating and selling electricity to consumers and firms in a city or area, by the First World War many states and communities were awarding exclusive franchises to one firm to generate and distribute electricity to the customers in the franchise area. (Bright, 1947; Passer, 1953) The electric utility industry became an important growth industry and, as Figure 15 shows, electricity production and use grew rapidly.

The electric utilities increasingly were regulated by state commissions that were charged with setting rates so that the utilities could receive a “fair return” on their investments. Disagreements over what constituted a “fair return” and the calculation of the rate base led to a steady stream of cases before the commissions and a continuing series of court appeals. Generally these court decisions favored the reproduction cost basis. Because of the difficulty and cost of making these calculations, rate setting tended to be left in the hands of the electric utilities, which, it has been suggested, did not lower rates adequately to reflect the rising productivity and lowered costs of production. The utilities argued that a more rapid lowering of rates would have jeopardized their profits. Whether or not this increased their monopoly power is still an open question, but it should be noted that electric utilities were hardly price-taking industries prior to regulation. (Mercer, 1973) In fact, as Figure 16 shows, the electric utilities began to systematically practice market segmentation, charging users with less elastic demands higher prices per kilowatt-hour.
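
The pattern in Figure 16 is what the standard textbook markup rule for a price-discriminating seller would predict; the formula below is offered only as an interpretive sketch, not as part of the original article. For each separable class of customers i facing price p_i, with marginal cost MC and price elasticity of demand ε_i,

\[ \frac{p_i - MC}{p_i} \;=\; \frac{1}{|\varepsilon_i|}, \]

so classes of users with less elastic demand carry a higher markup, and hence pay a higher price per kilowatt-hour, than classes whose demand is more elastic.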

Energy in the American Economy of the 1920s

The changes in the energy industries had far-reaching consequences. The coal industry faced a continuing decline in demand. Even in the growing petroleum industry, the periodic surges in the supply of petroleum caused great instability. In manufacturing, as described above, electrification contributed to a remarkable rise in productivity. The transportation revolution brought about by the rise of gasoline-powered trucks and cars changed the way businesses received their supplies and distributed their production as well as where they were located. The suburbanization of America and the beginnings of urban sprawl were largely brought about by the introduction of low-priced gasoline for cars.

Transportation

The American economy was forever altered by the dramatic changes in transportation after 1900. Following Henry Ford’s introduction of the moving assembly production line in 1914, automobile prices plummeted, and by the end of the 1920s about 60 percent of American families owned an automobile. The advent of low-cost personal transportation led to an accelerating movement of population out of the crowded cities to more spacious homes in the suburbs and the automobile set off a decline in intracity public passenger transportation that has yet to end. Massive road-building programs facilitated the intercity movement of people and goods. Trucks increasingly took over the movement of freight in competition with the railroads. New industries, such as gasoline service stations, motor hotels, and the rubber tire industry, arose to service the automobile and truck traffic. These developments were complicated by the turmoil caused by changes in the federal government’s policies toward transportation in the United States.

With the end of the First World War, a debate began as to whether the railroads, which had been taken over by the government, should be returned to private ownership or nationalized. The voices calling for a return to private ownership were much stronger, but doing so fomented great controversy. Many in Congress believed that careful planning and consolidation could restore the railroads and make them more efficient. There was continued concern about the near monopoly that the railroads had on the nation’s intercity freight and passenger transportation. The result of these deliberations was the Transportation Act of 1920, which was premised on the continued domination of the nation’s transportation by the railroads—an erroneous presumption.

The Transportation Act of 1920 represented a marked change in the Interstate Commerce Commission’s ability to control railroads. The ICC was allowed to prescribe exact rates that were to be set so as to allow the railroads to earn a fair return, defined as 5.5 percent, on the fair value of their property. The ICC was authorized to make an accounting of the fair value of each regulated railroad’s property; however, this was not completed until well into the 1930s, by which time the accounting and rate rules were out of date. To maintain fair competition between railroads in a region, all roads were to have the same rates for the same goods over the same distance. With the same rates, low-cost roads should have been able to earn higher rates of return than high-cost roads. To handle this, a recapture clause was inserted: any railroad earning a return of more than 6 percent on the fair value of its property was to turn the excess over to the ICC, which would place half of the money in a contingency fund for the railroad when it encountered financial problems and the other half in a contingency fund to provide loans to other railroads in need of assistance.
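
A hypothetical example (the figures are purely illustrative) makes the recapture clause concrete. Suppose a railroad’s property had a fair value of $100 million and the road earned a 7 percent return in a given year. Its excess earnings would be

\[ 0.07 \times \$100\text{ million} \;-\; 0.06 \times \$100\text{ million} \;=\; \$1\text{ million}, \]

to be turned over to the ICC, with $0.5 million placed in the contingency fund reserved for that railroad and $0.5 million placed in the fund used for loans to other railroads.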

In order to address the problem of weak and strong railroads and to bring better coordination to the movement of rail traffic in the United States, the act sought to encourage railroad consolidation, but little came of this in the 1920s. In order to facilitate its control of the railroads, the ICC was given two additional powers. The first was the control over the issuance or purchase of securities by railroads, and the second was the power to control changes in railroad service through the control of car supply and the extension and abandonment of track. The control of the supply of rail cars was turned over to the Association of American Railroads. Few extensions of track were proposed, but as time passed, abandonment requests grew. The ICC, however, trying to mediate between the conflicting demands of shippers, communities and railroads, generally refused to grant abandonments, and this became an extremely sensitive issue in the 1930s.

As indicated above, the premises of the Transportation Act of 1920 were wrong. Railroads experienced increasing competition during the 1920s, and both freight and passenger traffic were drawn off to competing transport forms. Passenger traffic exited from the railroads much more quickly. As the network of all-weather surfaced roads expanded, people quickly turned from the train to the car. Harmed even more by the move to automobile traffic were the electric interurban railways that had grown rapidly just prior to the First World War. (Hilton-Due, 1960) Not surprisingly, during the 1920s few railroads earned profits in excess of the fair rate of return.

The use of trucks to deliver freight began shortly after the turn of the century. Before the outbreak of war in Europe, White and Mack were producing trucks with as much as 7.5 tons of carrying capacity. Most of the truck freight was carried on a local basis, and it largely supplemented the longer distance freight transportation provided by the railroads. However, truck size was growing. In 1915 Trailmobile introduced the first four-wheel trailer designed to be pulled by a truck tractor unit. During the First World War, thousands of trucks were constructed for military purposes, and truck convoys showed that long distance truck travel was feasible and economical. The use of trucks to haul freight had been growing by over 18 percent per year since 1925, so that by 1929 intercity trucking accounted for more than one percent of the ton-miles of freight hauled.

The railroads argued that the trucks and buses provided “unfair” competition and believed that if they were also regulated, then the regulation could equalize the conditions under which they competed. As early as 1925, the National Association of Railroad and Utilities Commissioners issued a call for the regulation of motor carriers in general. In 1928 the ICC called for federal regulation of buses and in 1932 extended this call to federal regulation of trucks.

Most states had begun regulating buses at the beginning of the 1920s in an attempt to reduce the diversion of urban passenger traffic from the electric trolley and railway systems. However, most of the regulation did not aim to control intercity passenger traffic by buses. As the network of surfaced roads expanded during the twenties, so did the routes of the intercity buses. In 1929 a number of smaller bus companies were incorporated into the Greyhound Buslines, the carrier that has since dominated intercity bus transportation. (Walsh, 2000)

A complaint of the railroads was that interstate trucking competition was unfair because it was subsidized while railroads were not. All railroad property was privately owned and subject to property taxes, whereas truckers used the existing road system and therefore neither had to bear the costs of creating the road system nor pay taxes upon it. Beginning with the Federal-Aid Road Act of 1916, small amounts of money were provided as an incentive for states to construct rural post roads. (Dearing-Owen, 1949) However, through the First World War most of the funds for highway construction came from a combination of levies on the adjacent property owners and county and state taxes. The monies raised by the counties were commonly 60 percent of the total funds allocated, and these primarily came from property taxes. In 1919 Oregon pioneered the state gasoline tax, which then began to be adopted by more and more states. A highway system financed by property taxes and other levies can be construed as a subsidization of motor vehicles, and one study for the period up to 1920 found evidence of substantial subsidization of trucking. (Herbst-Wu, 1973) However, the use of gasoline taxes moved closer to the goal of users paying the costs of the highways. Nor did the trucks have to pay for all of the highway construction, because automobiles jointly used the highways. Highways had to be constructed in more costly ways in order to accommodate the larger and heavier trucks. Ideally the gasoline taxes collected from trucks should have covered the extra (or marginal) costs of highway construction incurred because of the truck traffic. Gasoline taxes tended to do this.

The American economy occupies a vast geographic region. Because economic activity occurs over most of the country, falling transportation costs have been crucial to knitting American firms and consumers into a unified market. Throughout the nineteenth century the railroads played this crucial role. Because of the size of the railroad companies and their importance in the economic life of Americans, the federal government began to regulate them. But by 1917 it appeared that the railroad system had achieved some stability, and it was generally assumed that the post-First World War era would be an extension of the era from 1900 to 1917. Nothing could have been further from the truth. Spurred by public investments in highways, cars and trucks voraciously ate into the railroads’ market, and, though the regulators failed to understand this at the time, the railroads’ monopoly on transportation quickly disappeared.

Communications

Communications had joined with transportation developments in the nineteenth century to tie the American economy together more completely. The telegraph had benefited by using the railroads’ right-of-ways, and the railroads used the telegraph to coordinate and organize their far-flung activities. As the cost of communications fell and information transfers sped, the development of firms with multiple plants at distant locations was facilitated. The interwar era saw a continuation of these developments as the telephone continued to supplant the telegraph and the new medium of radio arose to transmit news and provide a new entertainment source.

Telegraph domination of business and personal communications had given way to the telephone as new electronic amplifiers made long distance telephone calls between the east and west coasts possible in 1915. The number of telegraph messages handled grew 60.4 percent in the twenties. The number of local telephone conversations grew 46.8 percent between 1920 and 1930, while the number of long distance conversations grew 71.8 percent over the same period. There were 5 times as many long distance telephone calls as telegraph messages handled in 1920, and 5.7 times as many in 1930.

The twenties were a prosperous period for AT&T and its 18 major operating companies. (Brooks, 1975; Temin, 1987; Garnet, 1985; Lipartito, 1989) Telephone usage rose and, as Figure 19 shows, the share of all households with a telephone rose from 35 percent to nearly 42 percent. In cities across the nation, AT&T consolidated its system, gained control of many operating companies, and virtually eliminated its competitors. It was able to do this because in 1921 Congress passed the Graham Act exempting AT&T from the Sherman Act in consolidating competing telephone companies. By 1940, the non-Bell operating companies were all small relative to the Bell operating companies.

Surprisingly, there was a decline in telephone use on the farms during the twenties. (Hadwiger-Cochran, 1984; Fischer, 1987) Rising telephone rates explain part of the decline in rural use. The imposition of connection fees during the First World War made it more costly for new farmers to hook up. As AT&T gained control of more and more operating systems, telephone rates were increased. AT&T also began requiring, as a condition of interconnection, that independent companies upgrade their systems to meet AT&T standards. Most of the small mutual companies that had provided service to farmers had operated on a shoestring—wires were often strung along fenceposts, and phones were inexpensive “whoop and holler” magneto units. Upgrading to AT&T’s standards raised costs, forcing these companies to raise rates.

However, it also seems likely that during the 1920s there was a general decline in the rural demand for telephone services. One important factor in this was the dramatic decline in farm incomes in the early twenties. The second reason was a change in the farmers’ environment. Prior to the First World War, the telephone eased farm isolation and provided news and weather information that was otherwise hard to obtain. After 1920 automobiles, surfaced roads, movies, and the radio loosened the isolation and the telephone was no longer as crucial.

Ottmar Mergenthaler’s development of the linotype machine in the late nineteenth century had irrevocably altered printing and publishing. This machine, which quickly created a line of soft, lead-based metal type that could be printed, melted down and then recast as a new line of type, dramatically lowered the costs of printing. Previously, all type had to be painstakingly set by hand, with individual cast letter matrices picked out from compartments in drawers to construct words, lines, and paragraphs. After printing, each line of type on the page had to be broken down and each individual letter matrix placed back into its compartment in its drawer for use in the next printing job. Newspapers often were not published every day and did not contain many pages, resulting in many newspapers in most cities. In contrast to this laborious process, the linotype used a keyboard upon which the operator typed the words in one of the lines in a news column. Matrices for each letter dropped down from a magazine of matrices as the operator typed each letter and were assembled into a line of type with automatic spacers to justify the line (fill out the column width). When the line was completed, the machine mechanically cast the line of matrices into a line of lead type. The line of lead type was ejected into a tray and the letter matrices mechanically returned to the magazine while the operator continued typing the next line in the news story. The first Mergenthaler linotype machine was installed in the New York Tribune in 1886. The linotype machine dramatically lowered the costs of printing newspapers (as well as books and magazines). Prior to the linotype, a typical newspaper averaged no more than 11 pages and many were published only a few times a week. The linotype machine allowed newspapers to grow in size and they began to be published more regularly. A process of consolidation of daily and Sunday newspapers began that continues to this day. Many have termed the Mergenthaler linotype machine the most significant printing invention since the introduction of movable type 400 years earlier.

For city families as well as farm families, radio became the new source of news and entertainment. (Barnouw, 1966; Rosen, 1980 and 1987; Chester-Garrison, 1950) It soon took over as the prime advertising medium and in the process revolutionized advertising. By 1930 more homes had radio sets than had telephones. The radio networks sent news and entertainment broadcasts all over the country. The isolation of rural life, particularly in many areas of the plains, was forever broken by the intrusion of the “black box,” as radio receivers were often called. The radio began a process of breaking down regionalism and creating a common culture in the United States.

The potential demand for radio became clear with the first regular broadcast by Westinghouse’s KDKA in Pittsburgh in the fall of 1920. Because the Department of Commerce could not deny a license application, there was an explosion of stations all broadcasting at the same frequency, and signal jamming and interference became a serious problem. By 1923 the Department of Commerce had gained control of radio from the Post Office and the Navy and began to arbitrarily disperse stations on the radio dial and deny licenses, creating the first market in commercial broadcast licenses. In 1926 a U.S. District Court decided that under the Radio Law of 1912 Herbert Hoover, the secretary of commerce, did not have this power. New stations appeared, and the logjam and interference of signals worsened. A Radio Act was passed in January of 1927 creating the Federal Radio Commission (FRC) as a temporary licensing authority. Licenses were to be issued in the public interest, convenience, and necessity. A number of broadcasting licenses were revoked; stations were assigned frequencies, dial locations, and power levels. The FRC created 24 clear-channel stations with as much as 50,000 watts of broadcasting power, of which 21 ended up being affiliated with the new national radio networks. The Communications Act of 1934 essentially repeated the 1927 act except that it created a permanent, seven-person Federal Communications Commission (FCC).

Local stations initially created and broadcast the radio programs. The expenses were modest, and stores and companies operating radio stations wrote this off as indirect, goodwill advertising. Several forces changed all this. In 1922, AT&T opened up a radio station in New York City, WEAF (later to become WNBC). AT&T envisioned this station as the center of a radio toll system in which individuals could purchase time to broadcast a message transmitted to other stations in the toll network using AT&T’s long distance lines. An August 1922 broadcast by a Long Island realty company became the first conscious use of direct advertising.

Though advertising continued to be condemned, the fiscal pressures on radio stations to accept advertising began rising. In 1923 the American Society of Composers, Authors and Publishers (ASCAP) began demanding a performance fee anytime ASCAP-copyrighted music was performed on the radio, either live or on record. By 1924 the issue was settled, and most stations began paying performance fees to ASCAP. AT&T decided that all stations broadcasting with non-AT&T transmitters were violating its patent rights and began asking for annual fees from such stations based on the station’s power. By the end of 1924, most stations were paying the fees. All of this drained the coffers of the radio stations, and more and more of them began discreetly accepting advertising.

RCA became upset at AT&T’s creation of a chain of radio stations and set up its own toll network using the inferior lines of Western Union and Postal Telegraph, because AT&T, not surprisingly, did not allow any toll (or network) broadcasting on its lines except by its own stations. AT&T began to worry that its actions might threaten its federal monopoly in long distance telephone communications. In 1926 a new firm was created, the National Broadcasting Company (NBC), which took over all broadcasting activities from AT&T and RCA as AT&T left broadcasting. When NBC debuted in November of 1926, it had two networks: the Red, which was the old AT&T network, and the Blue, which was the old RCA network. Radio networks allowed advertisers to direct advertising at a national audience at a lower cost. Network programs allowed local stations to broadcast superior programs that captured a larger listening audience and in return to receive a share of the fees the national advertiser paid to the network. In 1927 a new network, the Columbia Broadcasting System (CBS), financed by the Paley family, began operation, and other new networks entered or tried to enter the industry in the 1930s.

Communications developments in the interwar era present something of a mixed picture. By 1920 long distance telephone service was in place, but rising rates slowed the pace of adoption during the period, and telephone use in rural areas declined sharply. Though direct dialing was first tried in the twenties, its general implementation would not come until the postwar era, when other changes, such as microwave transmission of signals and touch-tone dialing, would also appear. Though the number of newspapers declined, newspaper circulation generally held up. The number of competing newspapers in larger cities began declining, a trend that would also accelerate in the postwar American economy.

Banking and Securities Markets

In the twenties commercial banks became “department stores of finance.” Banks opened up installment (or personal) loan departments, expanded their mortgage lending, opened up trust departments, undertook securities underwriting activities, and offered safe deposit boxes. These changes were a response to growing competition from other financial intermediaries. Businesses, stung by bankers’ control and reduced lending during the 1920-21 depression, began relying more on retained earnings and stock and bond issues to raise investment and, sometimes, working capital. This reduced loan demand. The thrift institutions also experienced good growth in the twenties as they helped fuel the housing construction boom of the decade. The securities markets boomed in the twenties only to see a dramatic crash of the stock market in late 1929.

There were two broad classes of commercial banks: those that were nationally chartered and those that were chartered by the states. Only the national banks were required to be members of the Federal Reserve System. (Figure 21) Most banks were unit banks because national regulators and most state regulators prohibited branching. However, in the twenties a few states began to permit limited branching; California even allowed statewide branching. The Federal Reserve member banks held the bulk of the assets of all commercial banks, even though most banks were not members. A high bank failure rate in the 1920s has usually been explained by “overbanking,” or too many banks located in an area, but H. Thomas Johnson (1973-74) makes a strong argument against this. (Figure 22) If there were overbanking, on average each bank would have been underutilized, resulting in intense competition for deposits and higher costs and lower earnings. A common explanation for such overbanking was the free entry of banks as long as they met the minimum requirements then in force. However, the twenties saw changes that led to the demise of many smaller rural banks that would likely have been profitable if these changes had not occurred. Improved transportation led to a movement of business activities, including banking, into the larger towns and cities. Rural banks that relied on loans to farmers suffered just as farmers did during the twenties, especially in the first half of the twenties. The number of bank suspensions and the suspension rate fell after 1926. The sharp rise in bank suspensions in 1930 occurred because of the first banking crisis during the Great Depression.

Prior to the twenties, the main assets of commercial banks were short-term business loans, made by creating a demand deposit or increasing an existing one for a borrowing firm. As business lending declined in the 1920s, commercial banks vigorously moved into new types of financial activities. As banks purchased more securities for their earning asset portfolios and gained expertise in the securities markets, larger ones established investment departments and by the late twenties were an important force in the underwriting of new securities issued by nonfinancial corporations.

The securities markets exhibited perhaps the most dramatic growth among the nonbank financial intermediaries during the twenties, but others also grew rapidly. (Figure 23) The assets of life insurance companies increased by 10 percent a year from 1921 to 1929; by the late twenties they were a very important source of funds for construction investment. Mutual savings banks and savings and loan associations (thrifts) operated in essentially the same types of markets. The mutual savings banks were concentrated in the northeastern United States. As incomes rose, personal savings increased, and housing construction expanded in the twenties, there was an increasing demand for the thrifts’ interest-earning time deposits and mortgage lending.

But the dramatic expansion in the financial sector came in new corporate securities issues in the twenties, especially common and preferred stock, and in the trading of existing shares of those securities. (Figure 24) The late twenties boom in the American economy was rapid, highly visible, and dramatic. Skyscrapers were being erected in most major cities, the automobile manufacturers produced over four and a half million new cars in 1929, and the stock market, like a barometer of this prosperity, was on a dizzying ride to higher and higher prices. “Playing the market” seemed to become a national pastime.

The Dow-Jones index hit its peak of 381 on September 3, 1929 and then slid to 320 on October 21. In the following week the stock market “crashed,” with a record number of shares being traded on several days. At the end of Tuesday, October 29, the index stood at 230, 96 points less than one week before. On November 13, 1929, the Dow-Jones index reached its lowest point for the year at 198, 183 points less than the September 3 peak.

The path of the stock market boom of the twenties can be seen in Figure 25. Sharp price breaks occurred several times during the boom, and each of these gave rise to dark predictions of the end of the bull market and speculation. Until late October of 1929, these predictions turned out to be wrong. Between those price breaks and prior to the October crash, stock prices continued to surge upward. In March of 1928, 3,875,910 shares were traded in one day, establishing a record. By late 1928, five million shares being traded in a day was a common occurrence.

New securities, from rising merger activity and the formation of holding companies, were issued to take advantage of the rising stock prices. Stock pools, which were not made illegal until the Securities Exchange Act of 1934, took advantage of the boom to temporarily drive up the price of selected stocks and reap large gains for the members of the pool. In a stock pool, a group of speculators would pool large amounts of their funds and then begin purchasing large amounts of shares of a stock. This increased demand led to rising prices for that stock. Frequently pool insiders would “churn” the stock by repeatedly buying and selling the same shares among themselves, but at rising prices. Outsiders, seeing the price rising, would decide to purchase the stock. At a predetermined higher price the pool members would, within a short period, sell their shares and pull out of the market for that stock. Without the additional demand from the pool, the stock’s price usually fell quickly, bringing large losses for the unsuspecting outside investors while reaping large gains for the pool insiders.

Another factor commonly used to explain both the speculative boom and the October crash was the purchase of stocks on small margins. However, contrary to popular perception, margin requirements through most of the twenties were essentially the same as in previous decades: the usual requirement was 10 to 15 percent of the purchase price, and apparently more often around 10 percent. Brokers, recognizing the problems with margin lending in the rapidly changing market, began raising margin requirements in late 1928, and by the fall of 1929, margin requirements were the highest in the history of the New York Stock Exchange. These increases, which began well before the crash, came at the urging of a special New York Clearinghouse committee. One brokerage house required the following of its clients: securities with a selling price below $10 could only be purchased for cash; securities with a selling price of $10 to $20 had to have a 50 percent margin; securities of $20 to $30 required a 40 percent margin; and securities with a price above $30 required a margin of 30 percent of the purchase price. In the first half of 1929, margin requirements on customers’ accounts averaged 40 percent, and some houses raised their margins to 50 percent a few months before the crash. These were, historically, very high margin requirements. (Smiley and Keehn, 1988) Even so, during the crash, when additional margin calls were issued, investors who could not provide additional margin saw their brokers sell their stock at whatever the market price was at the time, and these forced sales helped drive prices even lower.
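To make the arithmetic of the tiered schedule just quoted concrete, the sketch below works through one hypothetical purchase; the share count and price are illustrative only, and the tiers simply restate the brokerage-house schedule above.

```python
# Illustrative sketch (hypothetical purchase) of how a tiered margin schedule
# like the one described above translated into cash a customer had to supply.

def required_margin(price_per_share: float) -> float:
    """Return the fraction of the purchase price the customer must supply."""
    if price_per_share < 10:
        return 1.00   # cash only
    elif price_per_share < 20:
        return 0.50
    elif price_per_share < 30:
        return 0.40
    else:
        return 0.30

# Example: buying 100 shares of a $40 stock under this schedule.
shares, price = 100, 40.0
cost = shares * price                 # $4,000 total purchase
margin = required_margin(price)       # 30 percent tier applies
cash_needed = margin * cost           # $1,200 supplied by the customer
broker_loan = cost - cash_needed      # $2,800 lent by the broker
print(f"Customer supplies ${cash_needed:,.0f}, broker lends ${broker_loan:,.0f}")
```

A margin call, in these terms, simply demanded that the customer restore the cash share of the position when falling prices ate into it; failure to do so triggered the forced sales described above.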

The crash began on Monday, October 21, as the index of stock prices fell 3 points on the third-largest volume in the history of the New York Stock Exchange. After a slight rally on Tuesday, prices began declining on Wednesday and fell 21 points by the end of the day, bringing on the third call for more margin in that week. On Black Thursday, October 24, prices initially fell sharply, but rallied somewhat in the afternoon so that the net loss was only 7 points, though the volume of thirteen million shares set a NYSE record. Friday brought a small gain that was wiped out on Saturday. On Monday, October 28, the Dow Jones index fell 38 points on a volume of nine million shares, three million of them in the final hour of trading. Black Tuesday, October 29, brought declines in virtually every stock price. Manufacturing firms, which had been lending large sums to brokers for margin loans, had been calling in these loans, and this accelerated on Monday and Tuesday. The big Wall Street banks increased their lending on call loans to offset some of this loss of loanable funds. The Dow Jones index fell 30 points on a record volume of nearly sixteen and a half million shares exchanged. Black Thursday and Black Tuesday wiped out entire fortunes.

Though the worst was over, prices continued to decline until November 13, 1929, as brokers cleaned up their accounts and sold off the stocks of clients who could not supply additional margin. After that, prices began to slowly rise, and by April of 1930 they had increased 96 points from the low of November 13, to a level “only” 87 points less than the peak of September 3, 1929. From that point, stock prices resumed their decline until the low point was reached in the summer of 1932.

 

There is a long tradition that insists that the Great Bull Market of the late twenties was an orgy of speculation that bid the prices of stocks far above any sustainable or economically justifiable level, creating a bubble in the stock market. John Kenneth Galbraith (1954) observed, “The collapse in the stock market in the autumn of 1929 was implicit in the speculation that went before.” But not everyone has agreed with this.

In 1930 Irving Fisher argued that the stock prices of 1928 and 1929 were based on fundamental expectations that future corporate earnings would be high. More recently, Murray Rothbard (1963), Gerald Gunderson (1976), and Jude Wanniski (1978) have argued that stock prices were not too high prior to the crash. Gunderson suggested that prior to 1929, stock prices were where they should have been and that when corporate profits in the summer and fall of 1929 failed to meet expectations, stock prices were written down. Wanniski argued that political events brought on the crash: the market broke each time news arrived of advances in congressional consideration of the Hawley-Smoot tariff. However, the virtually perfect foresight that Wanniski’s explanation requires is unrealistic. Charles Kindleberger (1973) and Peter Temin (1976) examined common stock yields and price-earnings ratios and found that their relative constancy did not suggest that stock prices were bid up unrealistically high in the late twenties. Gary Santoni and Gerald Dwyer (1990) also failed to find evidence of a bubble in stock prices in 1928 and 1929. Gerald Sirkin (1975) found that the implied growth rates of dividends required to justify stock prices in 1928 and 1929 were quite conservative and lower than post-Second World War dividend growth rates.

However, examination of after-the-fact common stock yields and price-earnings ratios can do no more than provide some ex post justification for suggesting that there was not excessive speculation during the Great Bull Market. Each individual investor was motivated by that person’s subjective expectations of each firm’s future earnings and dividends and the future prices of shares of each firm’s stock. Because of this element of subjectivity, not only can we never accurately know those values, but also we can never know how they varied among individuals. The market price we observe will be the end result of all of the actions of the market participants, and the observed price may be different from the price almost all of the participants expected.

In fact, there are some indications that there were differences in 1928 and 1929. Yields on common stocks were somewhat lower in 1928 and 1929. In October of 1928, brokers generally began raising margin requirements, and by the beginning of the fall of 1929, margin requirements were, on average, the highest in the history of the New York Stock Exchange. Though the discount and commercial paper rates had moved closely with the call and time rates on brokers’ loans through 1927, the rates on brokers’ loans increased much more sharply in 1928 and 1929. This pulled in funds from corporations, private investors, and foreign banks as New York City banks sharply reduced their lending. These facts suggest that brokers and New York City bankers may have come to believe that stock prices had been bid above a sustainable level by late 1928 and early 1929. White (1990) created a quarterly index of dividends for firms in the Dow-Jones index and related this to the DJI. Through 1927 the two tracked closely, but in 1928 and 1929 the index of stock prices grew much more rapidly than the index of dividends.

The qualitative evidence for a bubble in the stock market in 1928 and 1929 that White assembled was strengthened by the findings of J. Bradford De Long and Andrei Shleifer (1991). They examined closed-end mutual funds, a type of fund whose investors wishing to liquidate must sell their shares to other individual investors; because the market value of the securities such a fund holds is observable, the fund’s fundamental value can be measured exactly and compared with the price of its own shares. Using evidence from these funds, De Long and Shleifer estimated that in the summer of 1929 the Standard and Poor’s composite stock price index was overvalued about 30 percent due to excessive investor optimism. Rappoport and White (1993 and 1994) found other evidence that supported a bubble in the stock market in 1928 and 1929. There was a sharp divergence between the growth of stock prices and dividends; there were increasing premiums on call and time brokers’ loans in 1928 and 1929; margin requirements rose; and stock market volatility rose in the wake of the 1929 stock market crash.
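The sketch below illustrates, with hypothetical numbers, the premium calculation this closed-end fund argument rests on: the fund’s net asset value is the observable market value of its holdings, so any gap between the fund’s own share price and that value directly gauges over- or under-valuation by investors.

```python
# A minimal sketch (hypothetical numbers) of the closed-end fund logic:
# net asset value (NAV) per share is the observable market value of the
# fund's holdings, so the premium of the fund's share price over NAV
# measures how far investors have bid prices above fundamentals.

def premium(share_price: float, nav_per_share: float) -> float:
    """Premium (discount, if negative) as a fraction of fundamental value."""
    return (share_price - nav_per_share) / nav_per_share

# Hypothetical fund: portfolio worth $20 per share, shares trading at $26.
print(f"Premium over fundamentals: {premium(26.0, 20.0):.0%}")  # prints 30%
```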

There are several reasons for the creation of such a bubble. First, the fundamental values of earnings and dividends become difficult to assess when there are major industrial changes, such as the rapid changes in the automobile industry, the new electric utilities, and the new radio industry. Eugene White (1990) suggests that “While investors had every reason to expect earnings to grow, they lacked the means to evaluate easily the future path of dividends.” As a result investors bid up prices as they were swept up in the ongoing stock market boom. Second, participation in the stock market widened noticeably in the twenties. The new investors were relatively unsophisticated, and they were more likely to be caught up in the euphoria of the boom and bid prices upward. New, inexperienced commission sales personnel were hired to sell stocks and they promised glowing returns on stocks they knew little about.

These observations were strengthened by the experimental work of economist Vernon Smith. (Bishop, 1987) In a number of experiments over a three-year period using students and Tucson businessmen and businesswomen, bubbles developed as inexperienced investors valued stocks differently and engaged in price speculation. As these investors in the experiments began to realize that speculative profits were unsustainable and uncertain, their dividend expectations changed, the market crashed, and ultimately stocks began trading at their fundamental dividend values. These bubbles and crashes occurred repeatedly, leading Smith to conjecture that there are few regulatory steps that can be taken to prevent a crash.

Though the bubble of 1928 and 1929 made some downward adjustment in stock prices inevitable, as Barsky and De Long have shown, changes in fundamentals govern the overall movements. And the end of the long bull market was almost certainly governed by this. In late 1928 and early 1929 there was a striking rise in economic activity, but a decline began somewhere between May and July of 1929 and was clearly evident by August. By the middle of August, the rise in stock prices had slowed down as better information on the contraction was received. There were repeated statements by leading figures that stocks were “overpriced,” and the Federal Reserve System sharply increased the discount rate in August 1929 as well as continuing its call for banks to reduce their margin lending. As this information was assessed, the number of speculators selling stocks increased, and the number buying decreased. With the decreased demand, stock prices began to fall, and as more accurate information on the nature and extent of the decline was received, stock prices fell more. The late October crash made the decline occur much more rapidly, and the margin purchases and consequent forced selling of many of those stocks contributed to a more severe price fall. The recovery of stock prices from November 13 into April of 1930 suggests that stock prices may have been driven somewhat too low during the crash.

There is now widespread agreement that the 1929 stock market crash did not cause the Great Depression. Instead, the initial downturn in economic activity was a primary determinant of the ending of the 1928-29 stock market bubble. The stock market crash did make the downturn become more severe beginning in November 1929. It reduced discretionary consumption spending (Romer, 1990) and created greater income uncertainty, helping to bring on the contraction (Flacco and Parker, 1992). Though stock market prices reached a bottom and began to recover following November 13, 1929, the continuing decline in economic activity took its toll, and by May 1930 stock prices resumed their decline and continued to fall through the summer of 1932.

Domestic Trade

In the nineteenth century, a complex array of wholesalers, jobbers, and retailers had developed, but changes in the postbellum period reduced the role of the wholesalers and jobbers and strengthened the importance of the retailers in domestic trade. (Cochran, 1977; Chandler, 1977; Marburg, 1951; Clewett, 1951) The appearance of the department store in the major cities and the rise of mail order firms in the postbellum period changed the retailing market.

Department Stores

A department store is a combination of specialty stores organized as departments within one general store. A. T. Stewart’s huge 1846 dry goods store in New York City is often referred to as the first department store. (Resseguie, 1965; Sobel-Sicilia, 1986) R. H. Macy started his dry goods store in 1858 and Wanamaker’s in Philadelphia opened in 1876. By the end of the nineteenth century, every city of any size had at least one major department store. (Appel, 1930; Benson, 1986; Hendrickson, 1979; Hower, 1946; Sobel, 1974) Until the late twenties, the department store field was dominated by independent stores, though some department stores in the largest cities had opened a few suburban branches and stores in other cities. In the interwar period department stores accounted for about 8 percent of retail sales.

The department stores relied on a “one-price” policy, which Stewart is credited with beginning. In the antebellum period and into the postbellum period, it was common not to post a specific price on an item; rather, each purchaser haggled with a sales clerk over what the price would be. Stewart posted fixed prices on the various dry goods sold, and the customer could either decide to buy or not buy at the fixed price. The policy dramatically lowered transactions costs for both the retailer and the purchaser. Prices were reduced with a smaller markup over the wholesale price, and a large sales volume and a quicker turnover of the store’s inventory generated profits.

Mail Order Firms

What changed the department store field in the twenties was the entrance of Sears Roebuck and Montgomery Ward, the two dominant mail order firms in the United States. (Emmet-Jeuck, 1950; Chandler, 1962, 1977) Both firms had begun in the late nineteenth century and by 1914 the younger Sears Roebuck had surpassed Montgomery Ward. Both located in Chicago due to its central location in the nation’s rail network and both had benefited from the advent of Rural Free Delivery in 1896 and low cost Parcel Post Service in 1912.

In 1924 Sears hired Robert E. Wood, who was able to convince Sears Roebuck to open retail stores. Wood believed that the declining rural population and the growing urban population forecast the gradual demise of the mail order business; survival of the mail order firms required a move into retail sales. By 1925 Sears Roebuck had opened 8 retail stores, and by 1929 it had 324 stores. Montgomery Ward quickly followed suit. Rather than locating these in the central business district (CBD), Wood located many on major streets closer to the residential areas. These moves of Sears Roebuck and Montgomery Ward expanded department store retailing and provided a new type of chain store.

Chain Stores

Though chain stores grew rapidly in the first two decades of the twentieth century, they date back to the 1860s when George F. Gilman and George Huntington Hartford opened a string of New York City A&P (Atlantic and Pacific) stores exclusively to sell tea. (Beckman-Nolen, 1938; Lebhar, 1963; Bullock, 1933) Stores were opened in other regions and in 1912 their first “cash-and-carry” full-range grocery was opened. Soon they were opening 50 of these stores each week and by the 1920s A&P had 14,000 stores. They then phased out the small stores to reduce the chain to 4,000 full-range, supermarket-type stores. A&P’s success led to new grocery store chains such as Kroger, Jewel Tea, and Safeway.

Prior to A&P’s cash-and-carry policy, it was common for grocery stores, produce (or green) grocers, and meat markets to provide home delivery and credit, both of which were costly. As a result, retail prices were generally marked up well above the wholesale prices. In cash-and-carry stores, items were sold only for cash; no credit was extended, and no expensive home deliveries were provided. Markups on prices could be much lower because other costs were much lower. Consumers liked the lower prices and were willing to pay cash and carry their groceries, and the policy became common by the twenties.

Chains also developed in other retail product lines. In 1879 Frank W. Woolworth developed a “5 and 10 Cent Store,” or dime store, and there were over 1,000 F. W. Woolworth stores by the mid-1920s. (Winkler, 1940) Other firms such as Kresge, Kress, and McCrory successfully imitated Woolworth’s dime store chain. J.C. Penney’s dry goods chain store began in 1901 (Beasley, 1948), Walgreen’s drug store chain began in 1909, and shoes, jewelry, cigars, and other lines of merchandise also began to be sold through chain stores.

Self-Service Policies

In 1916 Clarence Saunders, a grocer in Memphis, Tennessee, built upon the one-price policy and began offering self-service at his Piggly Wiggly store. Previously, customers handed a clerk a list or asked for the items desired, which the clerk then collected and the customer paid for. With self-service, items for sale were placed on open shelves among which the customers could walk, carrying a shopping bag or pushing a shopping cart. Each customer could then browse as he or she pleased, picking out whatever was desired. Saunders and other retailers who adopted the self-service method of retail selling found that customers often purchased more because of exposure to the array of products on the shelves; as well, self-service lowered the labor required for retail sales and therefore lowered costs.

Shopping Centers

Shopping centers, another retailing innovation that began in the twenties, were not destined to become a major force in retail development until after the Second World War. The ultimate cause of this innovation was the widening ownership and use of the automobile. By the 1920s, as the ownership and use of the car began expanding, population began to move out of the crowded central cities toward the more open suburbs. When General Robert Wood set Sears off on its development of urban stores, he located these not in the central business district (CBD) but as free-standing stores on major arteries away from the CBD with sufficient space for parking.

At about the same time, a few entrepreneurs began to develop shopping centers. Yehoshua Cohen (1972) says, “The owner of such a center was responsible for maintenance of the center, its parking lot, as well as other services to consumers and retailers in the center.” Perhaps the earliest such shopping center was the Country Club Plaza built in 1922 by the J. C. Nichols Company in Kansas City, Missouri. Other early shopping centers appeared in Baltimore and Dallas. By the mid-1930s the concept of a planned shopping center was well known and was expected to be the means to capture the trade of the growing number of suburban consumers.

International Trade and Finance

In the twenties a gold exchange standard was developed to replace the gold standard of the prewar world. Under a gold standard, each country’s currency carried a fixed exchange rate with gold, and the currency had to be backed up by gold. As a result, all countries on the gold standard had fixed exchange rates with all other countries. Adjustments to balance international trade flows were made by gold flows. If a country had a deficit in its trade balance, gold would leave the country, forcing the money stock to decline and prices to fall. Falling prices made the deficit countries’ exports more attractive and imports more costly, reducing the deficit. Countries with a surplus imported gold, which increased the money stock and caused prices to rise. This made the surplus countries’ exports less attractive and imports more attractive, decreasing the surplus. Most economists who have studied the prewar gold standard contend that it did not work as the conventional textbook model says, because capital flows frequently reduced or eliminated the need for gold flows for long periods of time. However, there is no consensus on whether fortuitous circumstances, rather than the gold standard, saved the international economy from periodic convulsions or whether the gold standard as it did work was sufficient to promote stability and growth in international transactions.
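A stylized numerical sketch may help fix the adjustment logic just described. The figures below are hypothetical and the mechanics deliberately simplified: a deficit country loses gold, its money stock and price level fall with the gold stock, and the lower prices shrink the deficit over time.

```python
# A stylized, hypothetical simulation of price-specie-flow adjustment under a
# gold standard: gold follows the trade balance, prices follow the money stock
# (tied to gold), and the deficit narrows as prices fall.

gold = 100.0           # gold stock backing the money supply
price_level = 1.00
trade_balance = -10.0  # negative = trade deficit

for year in range(1, 6):
    gold += trade_balance                    # deficit drains gold abroad
    price_level *= 1 + trade_balance / gold  # money stock and prices fall with gold
    trade_balance *= 0.5                     # cheaper exports narrow the deficit
    print(f"year {year}: gold={gold:5.1f}  prices={price_level:.2f}  "
          f"trade balance={trade_balance:+.1f}")
```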

After the First World War it was argued that there was a “shortage” of fluid monetary gold to use for the gold standard, so some method of “economizing” on gold had to be found. To do this, two basic changes were made. First, most nations, other than the United States, stopped domestic circulation of gold. Second, the “gold exchange” system was created. Most countries held their international reserves in the form of U.S. dollars or British pounds, and international transactions used dollars or pounds, as long as the United States and Great Britain stood ready to exchange their currencies for gold at fixed exchange rates. However, the overvaluation of the pound and the undervaluation of the franc threatened these arrangements. The British trade deficit led to a capital outflow, higher interest rates, and a weak economy. In the late twenties, the French trade surplus led to an inflow of gold that France did not allow to expand its money supply.

Economizing on gold by no longer allowing its domestic circulation and by using key currencies as international monetary reserves was really an attempt to place the domestic economies under the control of the nations’ politicians and make them independent of international events. Unfortunately, in doing this politicians eliminated the equilibrating mechanism of the gold standard but had nothing with which to replace it. The new international monetary arrangements of the twenties were potentially destabilizing because they were not allowed to operate as a price mechanism promoting equilibrating adjustments.

There were other problems with international economic activity in the twenties. Because of the war, the United States was abruptly transformed from a debtor to a creditor on international accounts. Though the United States did not want reparations payments from Germany, it did insist that Allied governments repay American loans. The Allied governments then insisted on war reparations from Germany. These initial reparations assessments were quite large. The Allied Reparations Commission collected the charges by supervising Germany’s foreign trade and by internal controls on the German economy, and it was authorized to increase the reparations if it was felt that Germany could pay more. The treaty allowed France to occupy the Ruhr after Germany defaulted in 1923.

Ultimately, this tangled web of debts and reparations, which was a major factor in the course of international trade, depended upon two principal actions. First, the United States had to run an import surplus or, on net, export capital out of the United States to provide a pool of dollars overseas. Germany then had either to have an export surplus or else import American capital so as to build up dollar reserves—that is, the dollars the United States was exporting. In effect, these dollars were paid by Germany to Great Britain, France, and other countries that then shipped them back to the United States as payment on their U.S. debts. If these conditions did not occur, (and note that the “new” gold standard of the twenties had lost its flexibility because the price adjustment mechanism had been eliminated) disruption in international activity could easily occur and be transmitted to the domestic economies.

In the wake of the 1920-21 depression Congress passed the Emergency Tariff Act, which raised tariffs, particularly on manufactured goods. (Figures 26 and 27) The Fordney-McCumber Tariff of 1922 continued the Emergency Tariff of 1921, and its protection on many items was extremely high, ranging from 60 to 100 percent ad valorem (or as a percent of the price of the item). The increases in the Fordney-McCumber tariff were as large as, and sometimes larger than, those in the more famous (or “infamous”) Smoot-Hawley tariff of 1930. As farm product prices fell at the end of the decade, presidential candidate Herbert Hoover proposed, as part of his platform, tariff increases and other changes to aid the farmers. In January 1929, after Hoover’s election but before he took office, a tariff bill was introduced into Congress. Special interests succeeded in gaining additional (or new) protection for most domestically produced commodities, and the goal of greater protection for the farmers tended to get lost in the increased protection for multitudes of American manufactured products. In spite of widespread condemnation by economists, President Hoover signed the Smoot-Hawley Tariff in June 1930 and rates rose sharply.

Following the First World War, the U.S. government actively promoted American exports, and in each of the postwar years through 1929, the United States recorded a surplus in its balance of trade. (Figure 28) However, the surplus declined in the 1930s as both exports and imports fell sharply after 1929. From the mid-1920s on finished manufactures were the most important exports, while agricultural products dominated American imports.

The majority of the funds that allowed Germany to make its reparations payments to France and Great Britain and hence allowed those countries to pay their debts to the United States came from the net flow of capital out of the United States in the form of direct investment in real assets and investments in long- and short-term foreign financial assets. After the devastating German hyperinflation of 1922 and 1923, the Dawes Plan reformed the German economy and currency and accelerated the U.S. capital outflow. American investors began to actively and aggressively pursue foreign investments, particularly loans (Lewis, 1938) and in the late twenties there was a marked deterioration in the quality of foreign bonds sold in the United States. (Mintz, 1951)

The system, then, worked well as long as there was a net outflow of American capital, but this did not continue. In the middle of 1928, the flow of short-term capital began to decline. In 1928 the flow of “other long-term” capital out of the United States was 752 million dollars, but in 1929 it was only 34 million dollars. Though arguments now exist as to whether the booming stock market in the United States was to blame for this, it had far-reaching effects on the international economic system and the various domestic economies.

The Start of the Depression

The United States held the largest single share of the world’s monetary gold, about 40 percent, by 1920. In the latter part of the twenties, France also began accumulating gold as its share of the world’s monetary gold rose from 9 percent in 1927 to 17 percent in 1929 and 22 percent by 1931. In 1927 the Federal Reserve System had reduced discount rates (the interest rate at which it lent reserves to member commercial banks) and engaged in open market purchases (purchasing U.S. government securities on the open market to increase the reserves of the banking system) to push down interest rates and assist Great Britain in staying on the gold standard. By early 1928 the Federal Reserve System was worried about its loss of gold due to this policy as well as the ongoing boom in the stock market. It began to raise the discount rate to stop these outflows. Gold was also entering the United States so that foreigners could obtain dollars to invest in stocks and bonds. As the United States and France accumulated more and more of the world’s monetary gold, other countries’ central banks took contractionary steps to stem the loss of gold. In country after country these deflationary strategies began contracting economic activity and by 1928 some countries in Europe, Asia, and South America had entered into a depression. More countries’ economies began to decline in 1929, including the United States, and by 1930 a depression was in force for almost all of the world’s market economies. (Temin, 1989; Eichengreen, 1992)

Monetary and Fiscal Policies in the 1920s

Fiscal Policies

As a tool to promote stability in aggregate economic activity, fiscal policy is largely a post-Second World War phenomenon. Prior to 1930 the federal government’s spending and taxing decisions were largely, but not completely, based on the perceived “need” for government-provided public goods and services.

Though the fiscal policy concept had not been developed, this does not mean that during the twenties no concept of the government’s role in stimulating economic activity existed. Herbert Stein (1990) points out that in the twenties Herbert Hoover and some of his contemporaries shared two ideas about the proper role of the federal government. The first was that federal spending on public works could be an important force in reducing unemployment during contractions; the second was that the timing of public works and private investment could be planned to counter the business cycle. (Smiley and Keehn, 1995) Both concepts fit the ideas held by Hoover and others of his persuasion that the U.S. economy of the twenties was not the result of laissez-faire workings but of “deliberate social engineering.”

The federal personal income tax was enacted in 1913. Though mildly progressive, its rates were low and topped out at 7 percent on taxable income in excess of $750,000. (Table 4) As the United States prepared for war in 1916, rates were increased and reached a maximum marginal rate of 12 percent. With the onset of the First World War, the rates were dramatically increased. To obtain additional revenue in 1918, marginal rates were again increased. The share of federal revenue generated by income taxes rose from 11 percent in 1914 to 69 percent in 1920. The tax rates had been extended downward so that more than 30 percent of the nation’s income recipients were subject to income taxes by 1918. However, through the purchase of tax exempt state and local securities and through steps taken by corporations to avoid the cash distribution of profits, the number of high income taxpayers and their share of total taxes paid declined as Congress kept increasing the tax rates. The normal (or base) tax rate was reduced slightly for 1919 but the surtax rates, which made the income tax highly progressive, were retained. (Smiley and Keehn, 1995)
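As a rough illustration of how marginal rates of this kind translate into tax bills, the sketch below applies a hypothetical bracket schedule (not the actual 1918 schedule) to a single income; only the slice of income falling within each bracket is taxed at that bracket’s rate.

```python
# A minimal sketch (hypothetical brackets) of a progressive marginal-rate
# schedule of the kind described above: each slice of income is taxed at its
# own bracket rate, so only income above a threshold faces the top rate.

BRACKETS = [              # (upper bound of slice, marginal rate) -- illustrative only
    (4_000, 0.06),
    (20_000, 0.12),
    (100_000, 0.30),
    (float("inf"), 0.73),
]

def income_tax(taxable_income: float) -> float:
    tax, lower = 0.0, 0.0
    for upper, rate in BRACKETS:
        slice_amount = max(0.0, min(taxable_income, upper) - lower)
        tax += slice_amount * rate
        lower = upper
    return tax

income = 150_000
print(f"Tax on ${income:,}: ${income_tax(income):,.0f} "
      f"(average rate {income_tax(income) / income:.1%})")
```

Under such a schedule the average rate is always well below the top marginal rate, which is why shifts into tax-exempt securities by high-income taxpayers, as described above, could erode revenue even as statutory rates rose.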

President Harding’s new Secretary of the Treasury, Andrew Mellon, proposed cutting the tax rates, arguing that the rates in the higher brackets had “passed the point of productivity” and rates in excess of 70 percent simply could not be collected. Though most agreed that the rates were too high, there was sharp disagreement on how the rates should be cut. Democrats and Progressive Republicans argued for rate cuts targeted at lower-income taxpayers while maintaining most of the steep progressivity of the tax rates. (Smiley and Keehn, 1995) They believed that remedies could be found to change the tax laws to stop the legal avoidance of federal income taxes. Republicans argued for sharper cuts that reduced the progressivity of the rates. Mellon proposed a maximum rate of 25 percent.

Though the federal income tax rates were reduced and made less progressive, it took three tax rate cuts in 1921, 1924, and 1925 before Mellon’s goal was finally achieved. The highest marginal tax rate was reduced from 73 percent to 58 percent to 46 percent and finally to 25 percent for the 1925 tax year. All of the other rates were also reduced and exemptions increased. By 1926, only about the top 10 percent of income recipients were subject to federal income taxes. As tax rates were reduced, the number of high income tax returns increased and the share of total federal personal income taxes paid rose. (Tables 5 and 6) Even with the dramatic income tax rate cuts and reductions in the number of low income taxpayers, federal personal income tax revenue continued to rise during the 1920s. Though early estimates of the distribution of personal income showed sharp increases in income inequality during the 1920s (Kuznets, 1953; Holt, 1977), more recent estimates have found that the increases in inequality were considerably less and these appear largely to be related to the sharp rise in capital gains due to the booming stock market in the late twenties. (Smiley, 1998 and 2000)

Each year in the twenties the federal government generated a surplus, in some years as much as 1 percent of GNP. The surpluses were used to retire federal debt, which declined by 25 percent between 1920 and 1930. Contrary to simple macroeconomic models that argue a federal government budget surplus must be contractionary and tend to stop an economy from reaching full employment, the American economy operated at full employment or close to it throughout the twenties and saw significant economic growth. In this case, the surpluses were not contractionary because the dollars were circulated back into the economy through the purchase of outstanding federal debt rather than pulled out as currency and held in a vault somewhere.

Monetary Policies

In 1913 fear of the “money trust” and its monopoly power led Congress to create 12 district central banks when it created the Federal Reserve System. The new central banks were to control money and credit and act as lenders of last resort to end banking panics. The role of the Federal Reserve Board, located in Washington, D.C., was to coordinate the policies of the 12 district banks; it was composed of five presidential appointees plus the current secretary of the treasury and comptroller of the currency. All national banks had to become members of the Federal Reserve System, the Fed, and any state bank meeting the qualifications could elect to do so.

The act specified fixed reserve requirements on demand and time deposits, all of which had to be on deposit in the district bank. Member commercial banks were allowed to rediscount commercial paper at the district banks and receive Federal Reserve currency in return. Initially, each district bank set its own rediscount rate. To provide additional income when there was little rediscounting, the district banks were allowed to engage in open market operations that involved the purchasing and selling of federal government securities, short-term securities of state and local governments issued in anticipation of taxes, foreign exchange, and domestic bills of exchange. The district banks were also designated to act as fiscal agents for the federal government. Finally, the Federal Reserve System provided a central check clearinghouse for the entire banking system.

When the Federal Reserve System was originally set up, it was believed that its primary role was to be a lender of last resort to prevent banking panics and become a check-clearing mechanism for the nation’s banks. Both the Federal Reserve Board and the Governors of the District Banks were bodies established to jointly exercise these activities. The division of functions was not clear, and a struggle for power ensued, mainly between the New York Federal Reserve Bank, which was led by J. P. Morgan’s protege, Benjamin Strong, through 1928, and the Federal Reserve Board. By the thirties the Federal Reserve Board had achieved dominance.

There were really two conflicting criteria upon which monetary actions were ostensibly based: the Gold Standard and the Real Bills Doctrine. The Gold Standard was supposed to be quasi-automatic, with an effective limit to the quantity of money. However, the Real Bills Doctrine (which required that all loans be made on short-term, self-liquidating commercial paper) had no effective limit on the quantity of money. The rediscounting of eligible commercial paper was supposed to lead to the required “elasticity” of the stock of money to “accommodate” the needs of industry and business. Actually the rediscounting of commercial paper, open market purchases, and gold inflows all had the same effects on the money stock.

The 1920-21 Depression

During the First World War, the Fed kept discount rates low and granted discounts on banks’ customer loans used to purchase V-bonds in order to help finance the war. The final Victory Loan had not been floated when the Armistice was signed in November of 1918: in fact, it took until October of 1919 for the government to fully sell this last loan issue. The Treasury, with the secretary of the treasury sitting on the Federal Reserve Board, persuaded the Federal Reserve System to maintain low interest rates and discount the Victory bonds necessary to keep bond prices high until this last issue had been floated. As a result, during this period the money supply grew rapidly and prices rose sharply.

A shift from a federal deficit to a surplus and supply disruptions due to steel and coal strikes in 1919 and a railroad strike in early 1920 contributed to the end of the boom. But the most common view is that the Fed’s monetary policy was the main determinant of the end of the expansion and inflation and the beginning of the subsequent contraction and severe deflation. When the Fed was released from its informal agreement with the Treasury in November of 1919, it raised the discount rate from 4 to 4.75 percent. Benjamin Strong (the governor of the New York bank) was beginning to believe that the time for strong action was past and that the Federal Reserve System’s actions should be moderate. However, with Strong out of the country, the Federal Reserve Board increased the discount rate from 4.75 to 6 percent in late January of 1920 and to 7 percent on June 1, 1920. By the middle of 1920, economic activity and employment were rapidly falling, and prices had begun their downward spiral in one of the sharpest price declines in American history. The Federal Reserve System kept the discount rate at 7 percent until May 5, 1921, when it was lowered to 6.5 percent. By June of 1922, the rate had been lowered yet again to 4 percent. (Friedman and Schwartz, 1963)

The Federal Reserve System authorities received considerable criticism then and later for their actions. Milton Friedman and Anna Schwartz (1963) contend that the discount rate was raised too much too late and then kept too high for too long, causing the decline to be more severe and the price deflation to be greater. In their opinion the Fed acted in this manner due to the necessity of meeting the legal reserve requirement with a safe margin of gold reserves. Elmus Wicker (1966), however, argues that the gold reserve ratio was not the main factor determining the Federal Reserve policy in the episode. Rather, the Fed knowingly pursued a deflationary policy because it felt that the money supply was simply too large and prices too high. To return to the prewar parity for gold required lowering the price level, and there was an excessive stock of money because the additional money had been used to finance the war, not to produce consumer goods. Finally, the outstanding indebtedness was too large due to the creation of Fed credit.

Whether statutory gold reserve requirements to maintain the gold standard or domestic credit conditions were the most important determinant of Fed policy is still an open question, though both certainly had some influence. Regardless of the answer to that question, the Federal Reserve System’s first major undertaking in the years immediately following the First World War demonstrated poor policy formulation.

Federal Reserve Policies from 1922 to 1930

By 1921 the district banks began to recognize that their open market purchases had effects on interest rates, the money stock, and economic activity. For the next several years, economists in the Federal Reserve System discussed how this worked and how it could be related to discounting by member banks. A committee was created to coordinate the open market purchases of the district banks.

The recovery from the 1920-1921 depression had proceeded smoothly with moderate price increases. In early 1923 the Fed sold some securities and increased the discount rate from 4 percent because it believed the recovery was too rapid. However, by the fall of 1923 there were some signs of a business slump. McMillin and Parker (1994) argue that this contraction, as well as the 1927 contraction, was related to oil price shocks. By October of 1923 Benjamin Strong was advocating securities purchases to counter this. Between then and September 1924 the Federal Reserve System increased its securities holdings by over $500 million. Between April and August of 1924 the Fed reduced the discount rate to 3 percent in a series of three separate steps. In addition to moderating the mild business slump, the expansionary policy was also intended to reduce American interest rates relative to British interest rates. This reversed the gold flow back toward Great Britain, allowing Britain to return to the gold standard in 1925. At the time it appeared that the Fed’s monetary policy had successfully accomplished its goals.

By the summer of 1924 the business slump was over and the economy again began to grow rapidly. By the mid-1920s real estate speculation had arisen in many urban areas in the United States and especially in Southeastern Florida. Land prices were rising sharply. Stock market prices had also begun rising more rapidly. The Fed expressed some worry about these developments and in 1926 sold some securities to gently slow the real estate and stock market boom. Amid hurricanes and supply bottlenecks the Florida real estate boom collapsed but the stock market boom continued.

The American economy entered into another mild business recession in the fall of 1926 that lasted until the fall of 1927. One of the factors in this was Henry Ford’s shutdown of all of his factories to change over from the Model T to the Model A. His employees were left without a job and without income for over six months. International concerns also reappeared. France, which was preparing to return to the gold standard, had begun accumulating gold, and gold continued to flow into the United States. Some of this gold came from Great Britain, making it difficult for the British to remain on the gold standard. This occasioned a new experiment in central bank cooperation. In July 1927 Benjamin Strong arranged a conference with Governor Montagu Norman of the Bank of England, Governor Hjalmar Schacht of the Reichsbank, and Deputy Governor Charles Rist of the Bank of France in an attempt to promote cooperation among the world’s central bankers. By the time the conference began the Fed had already taken steps to counteract the business slump and reduce the gold inflow. In early 1927 the Fed reduced discount rates and made large securities purchases. One result of this was that the gold stock fell from $4.3 billion in mid-1927 to $3.8 billion in mid-1928. Some of the gold exports went to France, and France returned to the gold standard with its undervalued currency. The loss of gold from Britain eased, allowing it to maintain the gold standard.

By early 1928 the Fed was again becoming worried. Stock market prices were rising even faster and the apparent speculative bubble in the stock market was of some concern to Fed authorities. The Fed was also concerned about the loss of gold and wanted to bring that to an end. To do this they sold securities and, in three steps, raised the discount rate to 5 percent by July 1928. To this point the Federal Reserve Board had largely agreed with district Bank policy changes. However, problems began to develop.

During the stock market boom of the late 1920s the Federal Reserve Board preferred to use “moral suasion” rather than increases in discount rates to lessen member bank borrowing. The New York City bank insisted that moral suasion would not work unless backed up by literal credit rationing on a bank-by-bank basis, which it and the other district banks were unwilling to do. They insisted that discount rates had to be increased. The Federal Reserve Board countered that this general policy change would slow down economic activity in general rather than be specifically targeted at stock market speculation. The result was that little was done for a year: rates were not raised, but no open market purchases were undertaken either. Rates were finally raised to 6 percent in August of 1929. By that time the contraction had already begun. In late October the stock market crashed, and America slid into the Great Depression.

In November, following the stock market crash, the Fed reduced discount rates to 4.5 percent. In January it again reduced discount rates and began a series of discount rate decreases until the rate reached 2.5 percent at the end of 1930. No further open market operations were undertaken for the next six months. As banks reduced their discounting in 1930, the stock of money declined. There was a banking crisis in the southeast in November and December of 1930, and in its wake the public’s holding of currency relative to deposits and banks’ reserve ratios began to rise and continued to do so through the end of the Great Depression.

Conclusion

Though some disagree, there is growing evidence that the behavior of the American economy in the 1920s did not cause the Great Depression. The depressed 1930s were not “retribution” for the exuberant growth of the 1920s. The weakness of a few economic sectors in the 1920s did not foreshadow the contraction of 1929 to 1933. Rather, it was the depression of the 1930s and the Second World War that interrupted the economic growth that had begun in the 1920s and that resumed only after the war. Just as the construction of skyscrapers begun in the 1920s resumed in the 1950s, so did real economic growth and progress. In retrospect we can see that the introduction and expansion of new technologies and industries in the 1920s, such as autos, household electric appliances, radio, and electric utilities, were echoed in the 1990s in the expanding use and development of the personal computer and the rise of the internet. The 1920s have much to teach us about the growth and development of the American economy.

References

Adams, Walter, ed. The Structure of American Industry, 5th ed. New York: Macmillan Publishing Co., 1977.

Aldcroft, Derek H. From Versailles to Wall Street, 1919-1929. Berkeley: The University of California Press, 1977.

Allen, Frederick Lewis. Only Yesterday. New York: Harper and Sons, 1931.

Alston, Lee J. “Farm Foreclosures in the United States During the Interwar Period.” The Journal of Economic History 43, no. 4 (1983): 885-904.

Alston, Lee J., Wayne A. Grove, and David C. Wheelock. “Why Do Banks Fail? Evidence from the 1920s.” Explorations in Economic History 31 (1994): 409-431.

Ankli, Robert. “Horses vs. Tractors on the Corn Belt.” Agricultural History 54 (1980): 134-148.

Ankli, Robert and Alan L. Olmstead. “The Adoption of the Gasoline Tractor in California.” Agricultural History 55 (1981): 213-230.

Appel, Joseph H. The Business Biography of John Wanamaker, Founder and Builder. New York: The Macmillan Co., 1930.

Baker, Jonathan B. “Identifying Cartel Pricing Under Uncertainty: The U.S. Steel Industry, 1933-1939.” The Journal of Law and Economics 32 (1989): S47-S76.

Barger, E. L, et al. Tractors and Their Power Units. New York: John Wiley and Sons, 1952.

Barnouw, Erik. A Tower in Babel: A History of Broadcasting in the United States: Vol. I—to 1933. New York: Oxford University Press, 1966.

Barnouw, Erik. The Golden Web: A History of Broadcasting in the United States: Vol. II—1933 to 1953. New York: Oxford University Press, 1968.

Beasley, Norman. Main Street Merchant: The Story of the J. C. Penney Company. New York: Whittlesey House, 1948.

Beckman, Theodore N. and Herman C. Nolen. The Chain Store Problem: A Critical Analysis. New York: McGraw-Hill Book Co., 1938.

Benson, Susan Porter. Counter Cultures: Saleswomen, Managers, and Customers in American Department Stores, 1890-1940. Urbana, IL: University of Illinois Press, 1986.

Bernstein, Irving. The Lean Years: A History of the American Worker, 1920-1933. Boston: Houghton Mifflin Co., 1960.

Bernstein, Michael A. The Great Depression: Delayed Recovery and Economic Change in America, 1929-1939. New York: Cambridge University Press, 1987.

Bishop, Jerry E. “Stock Market Experiment Suggests Inevitability of Booms and Busts.” The Wall Street Journal, 17 November, 1987.

Board of Governors of the Federal Reserve System. Banking and Monetary Statistics. Washington: USGPO, 1943.

Bogue, Allan G. “Changes in Mechanical and Plant Technology: The Corn Belt, 1910-1940.” The Journal of Economic History 43 (1983): 1-26.

Breit, William and Kenneth Elzinga. The Antitrust Casebook: Milestones in Economic Regulation, 2d ed. Chicago: The Dryden Press, 1989.

Bright, Arthur A., Jr. The Electric Lamp Industry: Technological Change and Economic Development from 1800 to 1947. New York: Macmillan, 1947.

Brody, David. Labor in Crisis: The Steel Strike. Philadelphia: J. B. Lippincott Co., 1965.

Brooks, John. Telephone: The First Hundred Years. New York: Harper and Row, 1975.

Brown, D. Clayton. Electricity for Rural America: The Fight for the REA. Westport, CT: The Greenwood Press, 1980.

Brown, William A., Jr. The International Gold Standard Reinterpreted, 1914-1934, 2 vols. New York: National Bureau of Economic Research, 1940.

Brunner, Karl and Allan Meltzer. “What Did We Learn from the Monetary Experience of the United States in the Great Depression?” Canadian Journal of Economics 1 (1968): 334-48.

Bryant, Keith L., Jr., and Henry C. Dethloff. A History of American Business. Englewood Cliffs, NJ: Prentice-Hall, Inc., 1983.

Bucklin, Louis P. Competition and Evolution in the Distributive Trades. Englewood Cliffs, NJ: Prentice-Hall, 1972.

Bullock, Roy J. “The Early History of the Great Atlantic & Pacific Tea Company,” Harvard Business Review 11 (1933): 289-93.

Bullock, Roy J. “A History of the Great Atlantic & Pacific Tea Company Since 1878.” Harvard Business Review 12 (1933): 59-69.

Cecchetti, Stephen G. “Understanding the Great Depression: Lessons for Current Policy.” In The Economics of the Great Depression, Edited by Mark Wheeler. Kalamazoo, MI: W. E. Upjohn Institute for Employment Research, 1998.

Chandler, Alfred D., Jr. Strategy and Structure: Chapters in the History of the American Industrial Enterprise. Cambridge, MA: The M.I.T. Press, 1962.

Chandler, Alfred D., Jr. The Visible Hand: The Managerial Revolution in American Business. Cambridge, MA: the Belknap Press Harvard University Press, 1977.

Chandler, Alfred D., Jr. Giant Enterprise: Ford, General Motors, and the American Automobile Industry. New York: Harcourt, Brace, and World, 1964.

Chester, Giraud, and Garnet R. Garrison. Radio and Television: An Introduction. New York: Appleton-Century Crofts, 1950.

Clewett, Richard C. “Mass Marketing of Consumers’ Goods.” In The Growth of the American Economy, 2d ed. Edited by Harold F. Williamson. Englewood Cliffs, NJ: Prentice-Hall, 1951.

Cochran, Thomas C. 200 Years of American Business. New York: Delta Books, 1977.

Cohen, Yehoshua S. Diffusion of an Innovation in an Urban System: The Spread of Planned Regional Shopping Centers in the United States, 1949-1968. Chicago: The University of Chicago, Department of Geography, Research Paper No. 140, 1972.

Daley, Robert. An American Saga: Juan Trippe and His Pan American Empire. New York: Random House, 1980.

Clarke, Sally. “New Deal Regulation and the Revolution in American Farm Productivity: A Case Study of the Diffusion of the Tractor in the Corn Belt, 1920-1940.” The Journal of Economic History 51, no. 1 (1991): 105-115.

Cohen, Avi. “Technological Change as Historical Process: The Case of the U.S. Pulp and Paper Industry, 1915-1940.” The Journal of Economic History 44 (1984): 775-79.

Davies, R. E. G. A History of the World’s Airlines. London: Oxford University Press, 1964.

Dearing, Charles L., and Wilfred Owen. National Transportation Policy. Washington: The Brookings Institution, 1949.

Degen, Robert A. The American Monetary System: A Concise Survey of Its Evolution Since 1896. Lexington, MA: Lexington Books, 1987.

De Long, J. Bradford and Andrei Shleifer. “The Stock Market Bubble of 1929: Evidence from Closed-end Mutual Funds.” The Journal of Economic History 51 (September 1991): 675-700.

Devine, Warren D., Jr. “From Shafts to Wires: Historical Perspectives on Electrification.” The Journal of Economic History 43 (1983): 347-372.

Eckert, Ross D., and George W. Hilton. “The Jitneys.” The Journal of Law and Economics 15 (October 1972): 293-326.

Eichengreen, Barry, ed. The Gold Standard in Theory and History. New York: Methuen, 1985.

Eichengreen, Barry. “The Political Economy of the Smoot-Hawley Tariff.” Research in Economic History 12 (1989): 1-43.

Eichengreen, Barry. Golden Fetters: The Gold Standard and the Great Depression, 1919-1939. New York: Oxford University Press, 1992.

Eis, Carl. “The 1919-1930 Merger Movement in American Industry.” The Journal of Law and Economics XII (1969): 267-96.

Emmet, Boris, and John E. Jeuck. Catalogues and Counters: A History of Sears Roebuck and Company. Chicago: University of Chicago Press, 1950.

Fearon, Peter. War, Prosperity, & Depression: The U.S. Economy, 1917-1945. Lawrence, KS: University of Kansas Press, 1987.

Field, Alexander J. “The Most Technologically Progressive Decade of the Century.” The American Economic Review 93 (2003): 1399-1413.

Fischer, Claude. “The Revolution in Rural Telephony, 1900-1920.” Journal of Social History 21 (1987): 221-38.

Fischer, Claude. “Technology’s Retreat: The Decline of Rural Telephony in the United States, 1920-1940.” Social Science History, Vol. 11 (Fall 1987), pp. 295-327.

Fisher, Irving. The Stock Market Crash—and After. New York: Macmillan, 1930.

French, Michael J. “Structural Change and Competition in the United States Tire Industry, 1920-1937.” Business History Review 60 (1986): 28-54.

French, Michael J. The U.S. Tire Industry. Boston: Twayne Publishers, 1991.

Fricke, Ernest B. “The New Deal and the Modernization of Small Business: The McCreary Tire and Rubber Company, 1930-1940.” Business History Review 56 (1982): 559-76.

Friedman, Milton, and Anna J. Schwartz. A Monetary History of the United States, 1867-1960. Princeton: Princeton University Press, 1963.

Galbraith, John Kenneth. The Great Crash. Boston: Houghton Mifflin, 1954.

Garnet, Robert W. The Telephone Enterprise: The Evolution of the Bell System’s Horizontal Structure, 1876-1900. Baltimore: The Johns Hopkins University Press, 1985.

Gideonse, Max. “Foreign Trade, Investments, and Commercial Policy.” In The Growth of the American Economy, 2d ed. Edited by Harold F. Williamson. Englewood Cliffs, NJ: Prentice-Hall, 1951.

Giedion, Sigfried. Mechanization Takes Command. New York: Oxford University Press, 1948.

Gordon, Robert Aaron. Economic Instability and Growth: The American Record. New York: Harper and Row, 1974.

Gray, Roy Burton. Development of the Agricultural Tractor in the United States, 2 vols. Washington, D. C.: USGPO, 1954.

Gunderson, Gerald. An Economic History of America. New York: McGraw-Hill, 1976.

Hadwiger, Don F., and Clay Cochran. “Rural Telephones in the United States.” Agricultural History 58 (July 1984): 221-38.

Hamilton, James D. “Monetary Factors in the Great Depression.” Journal of Monetary Economics 19 (1987): 145-169.

Hamilton, James D. “The Role of the International Gold Standard in Propagating the Great Depression.” Contemporary Policy Issues 6 (1988): 67-89.

Hayek, Friedrich A. Prices and Production. New York: Augustus M. Kelley, reprint of 1931 edition.

Hayford, Marc and Carl A. Pasurka, Jr. “The Political Economy of the Fordney-McCumber and Smoot-Hawley Tariff Acts.” Explorations in Economic History 29 (1992): 30-50.

Hendrickson, Robert. The Grand Emporiums: The Illustrated History of America’s Great Department Stores. Briarcliff Manor, NY: Stein and Day, 1979.

Herbst, Anthony F., and Joseph S. K. Wu. “Some Evidence of Subsidization of the U.S. Trucking Industry, 1900-1920.” The Journal of Economic History 33 (June 1973): 417-33.

Higgs, Robert. Crisis and Leviathan: Critical Episodes in the Growth of American Government. New York: Oxford University Press, 1987.

Hilton, George W., and John Due. The Electric Interurban Railways in America. Stanford: Stanford University Press, 1960.

Hoffman, Elizabeth and Gary D. Libecap. “Institutional Choice and the Development of U.S. Agricultural Policies in the 1920s.” The Journal of Economic History 51 (1991): 397-412.

Holt, Charles F. “Who Benefited from the Prosperity of the Twenties?” Explorations in Economic History 14 (1977): 277-289.

Hower, Ralph W. History of Macy’s of New York, 1858-1919. Cambridge, MA: Harvard University Press, 1946.

Hubbard, R. Glenn, Ed. Financial Markets and Financial Crises. Chicago: University of Chicago Press, 1991.

Hunter, Louis C. “Industry in the Twentieth Century.” In The Growth of the American Economy, 2d ed., edited by Harold F. Williamson. Englewood Cliffs, NJ: Prentice-Hall, 1951.

Jerome, Harry. Mechanization in Industry. New York: National Bureau of Economic Research, 1934.

Johnson, H. Thomas. “Postwar Optimism and the Rural Financial Crisis.” Explorations in Economic History 11, no. 2 (1973-1974): 173-192.

Jones, Fred R. and William H. Aldred. Farm Power and Tractors, 5th ed. New York: McGraw-Hill, 1979.

Keller, Robert. “Factor Income Distribution in the United States During the 20’s: A Reexamination of Fact and Theory.” The Journal of Economic History 33 (1973): 252-95.

Kelly, Charles J., Jr. The Sky’s the Limit: The History of the Airlines. New York: Coward-McCann, 1963.

Kindleberger, Charles. The World in Depression, 1929-1939. Berkeley: The University of California Press, 1973.

Klebaner, Benjamin J. Commercial Banking in the United States: A History. New York: W. W. Norton and Co., 1974.

Kuznets, Simon. Shares of Upper Income Groups in Income and Savings. New York: NBER, 1953.

Lebhar, Godfrey M. Chain Stores in America, 1859-1962. New York: Chain Store Publishing Corp., 1963.

Lewis, Cleona. America’s Stake in International Investments. Washington: The Brookings Institution, 1938.

Livesay, Harold C. and Patrick G. Porter. “Vertical Integration in American Manufacturing, 1899-1948.” The Journal of Economic History 29 (1969): 494-500.

Lipartito, Kenneth. The Bell System and Regional Business: The Telephone in the South, 1877-1920. Baltimore: The Johns Hopkins University Press, 1989.

Liu, Tung, Gary J. Santoni, and Courteney C. Stone. “In Search of Stock Market Bubbles: A Comment on Rappoport and White.” The Journal of Economic History 55 (1995): 647-654.

Lorant, John. “Technological Change in American Manufacturing During the 1920s.” The Journal of Economic History 33 (1967): 243-47.

McDonald, Forrest. Insull. Chicago: University of Chicago Press, 1962.

Marburg, Theodore. “Domestic Trade and Marketing.” In The Growth of the American Economy, 2d ed. Edited by Harold F. Williamson. Englewood Cliffs, NJ: Prentice-Hall, 1951.

Markham, Jesse. “Survey of the Evidence and Findings on Mergers.” In Business Concentration and Price Policy, National Bureau of Economic Research. Princeton: Princeton University Press, 1955.

Markin, Rom J. The Supermarket: An Analysis of Growth, Development, and Change. Rev. ed. Pullman, WA: Washington State University Press, 1968.

McCraw, Thomas K. TVA and the Power Fight, 1933-1937. Philadelphia: J. B. Lippincott, 1971.

McCraw, Thomas K. and Forest Reinhardt. “Losing to Win: U.S. Steel’s Pricing, Investment Decisions, and Market Share, 1901-1938.” The Journal of Economic History 49 (1989): 592-620.

McMillin, W. Douglas and Randall E. Parker. “An Empirical Analysis of Oil Price Shocks in the Interwar Period.” Economic Inquiry 32 (1994): 486-497.

McNair, Malcolm P., and Eleanor G. May. The Evolution of Retail Institutions in the United States. Cambridge, MA: The Marketing Science Institute, 1976.

Mercer, Lloyd J. “Comment on Papers by Scheiber, Keller, and Raup.” The Journal of Economic History 33 (1973): 291-95.

Mintz, Ilse. Deterioration in the Quality of Foreign Bonds Issued in the United States, 1920-1930. New York: National Bureau of Economic Research, 1951.

Mishkin, Frederic S. “Asymmetric Information and Financial Crises: A Historical Perspective.” In Financial Markets and Financial Crises Edited by R. Glenn Hubbard. Chicago: University of Chicago Press, 1991.

Morris, Lloyd. Not So Long Ago. New York: Random House, 1949.

Mosco, Vincent. Broadcasting in the United States: Innovative Challenge and Organizational Control. Norwood, NJ: Ablex Publishing Corp., 1979.

Moulton, Harold G. et al. The American Transportation Problem. Washington: The Brookings Institution, 1933.

Mueller, John. “Lessons of the Tax-Cuts of Yesteryear.” The Wall Street Journal, March 5, 1981.

Musoke, Moses S. “Mechanizing Cotton Production in the American South: The Tractor, 1915-1960.” Explorations in Economic History 18 (1981): 347-75.

Nelson, Daniel. “Mass Production and the U.S. Tire Industry.” The Journal of Economic History 48 (1987): 329-40.

Nelson, Ralph L. Merger Movements in American Industry, 1895-1956. Princeton: Princeton University Press, 1959.

Niemi, Albert W., Jr., U.S. Economic History, 2nd ed. Chicago: Rand McNally Publishing Co., 1980.

Norton, Hugh S. Modern Transportation Economics. Columbus, OH: Charles E. Merrill Books, Inc., 1963.

Nystrom, Paul H. Economics of Retailing, vol. 1, 3rd ed. New York: The Ronald Press Co., 1930.

Oshima, Harry T. “The Growth of U.S. Factor Productivity: The Significance of New Technologies in the Early Decades of the Twentieth Century.” The Journal of Economic History 44 (1984): 161-70.

Parker, Randall and Paul Flacco. “Income Uncertainty and the Onset of the Great Depression.” Economic Inquiry 30 (1992): 154-171.

Parker, William N. “Agriculture.” In American Economic Growth: An Economist’s History of the United States, edited by Lance E. Davis, Richard A. Easterlin, William N. Parker, et. al. New York: Harper and Row, 1972.

Passer, Harold C. The Electrical Manufacturers, 1875-1900. Cambridge: Harvard University Press, 1953.

Peak, Hugh S., and Ellen F. Peak. Supermarket Merchandising and Management. Englewood Cliffs, NJ: Prentice-Hall, 1977.

Pilgrim, John. “The Upper Turning Point of 1920: A Reappraisal.” Explorations in Economic History 11 (1974): 271-98.

Rae, John B. Climb to Greatness: The American Aircraft Industry, 1920-1960. Cambridge: The M.I.T. Press, 1968.

Rae, John B. The American Automobile Industry. Boston: Twayne Publishers, 1984.

Rappoport, Peter and Eugene N. White. “Was the Crash of 1929 Expected?” American Economic Review 84 (1994): 271-281.

Rappoport, Peter and Eugene N. White. “Was There a Bubble in the 1929 Stock Market?” The Journal of Economic History 53 (1993): 549-574.

Resseguie, Harry E. “Alexander Turney Stewart and the Development of the Department Store, 1823-1876,” Business History Review 39 (1965): 301-22.

Rezneck, Samuel. “Mass Production and the Use of Energy.” In The Growth of the American Economy, 2d ed., edited by Harold F. Williamson. Englewood Cliffs, NJ: Prentice-Hall, 1951.

Rockwell, Llewellyn H., Jr., ed. The Gold Standard: An Austrian Perspective. Lexington, MA: Lexington Books, 1985.

Romer, Christina. “Spurious Volatility in Historical Unemployment Data.” The Journal of Political Economy 94 (1986): 1-37.

Romer, Christina. “New Estimates of Prewar Gross National Product and Unemployment.” Journal of Economic History 46 (1986): 341-352.

Romer, Christina. “World War I and the Postwar Depression: A Reinterpretation Based on Alternative Estimates of GNP.” Journal of Monetary Economics 22 (1988): 91-115.

Romer, Christina and Jeffrey A. Miron. “A New Monthly Index of Industrial Production, 1884-1940.” Journal of Economic History 50 (1990): 321-337.

Romer, Christina. “The Great Crash and the Onset of the Great Depression.” Quarterly Journal of Economics 105 (1990): 597-625.

Romer, Christina. “Remeasuring Business Cycles.” The Journal of Economic History 54 (1994): 573-609.

Roose, Kenneth D. “The Production Ceiling and the Turning Point of 1920.” American Economic Review 48 (1958): 348-56.

Rosen, Philip T. The Modern Stentors: Radio Broadcasters and the Federal Government, 1920-1934. Westport, CT: The Greenwood Press, 1980.

Rosen, Philip T. “Government, Business, and Technology in the 1920s: The Emergence of American Broadcasting.” In American Business History: Case Studies. Edited by Henry C. Dethloff and C. Joseph Pusateri. Arlington Heights, IL: Harlan Davidson, 1987.

Rothbard, Murray N. America’s Great Depression. Kansas City: Sheed and Ward, 1963.

Sampson, Roy J., and Martin T. Ferris. Domestic Transportation: Practice, Theory, and Policy, 4th ed. Boston: Houghton Mifflin Co., 1979.

Samuelson, Paul and Everett E. Hagen. After the War—1918-1920. Washington: National Resources Planning Board, 1943.

Santoni, Gary and Gerald P. Dwyer, Jr. “The Great Bull Markets, 1924-1929 and 1982-1987: Speculative Bubbles or Economic Fundamentals?” Federal Reserve Bank of St. Louis Review 69 (1987): 16-29.

Santoni, Gary, and Gerald P. Dwyer, Jr. “Bubbles vs. Fundamentals: New Evidence from the Great Bull Markets.” In Crises and Panics: The Lessons of History. Edited by Eugene N. White. Homewood, IL: Dow Jones/Irwin, 1990.

Scherer, Frederick M. and David Ross. Industrial Market Structure and Economic Performance, 3d ed. Boston: Houghton Mifflin, 1990.

Schlebecker, John T. Whereby We Thrive: A History of American Farming, 1607-1972. Ames, IA: The Iowa State University Press, 1975.

Shepherd, James. “The Development of New Wheat Varieties in the Pacific Northwest.” Agricultural History 54 (1980): 52-63.

Sirkin, Gerald. “The Stock Market of 1929 Revisited: A Note.” Business History Review 49 (Fall 1975): 233-41.

Smiley, Gene. The American Economy in the Twentieth Century. Cincinnati: South-Western Publishing Co., 1994.

Smiley, Gene. “New Estimates of Income Shares During the 1920s.” In Calvin Coolidge and the Coolidge Era: Essays on the History of the 1920s, edited by John Earl Haynes, 215-232. Washington, D.C.: Library of Congress, 1998.

Smiley, Gene. “A Note on New Estimates of the Distribution of Income in the 1920s.” The Journal of Economic History 60, no. 4 (2000): 1120-1128.

Smiley, Gene. Rethinking the Great Depression: A New View of Its Causes and Consequences. Chicago: Ivan R. Dee, 2002.

Smiley, Gene, and Richard H. Keehn. “Margin Purchases, Brokers’ Loans and the Bull Market of the Twenties.” Business and Economic History. 2d series. 17 (1988): 129-42.

Smiley, Gene and Richard H. Keehn. “Federal Personal Income Tax Policy in the 1920s.” The Journal of Economic History 55, no. 2 (1995): 285-303.

Sobel, Robert. The Entrepreneurs: Explorations Within the American Business Tradition. New York: Weybright and Talley, 1974.

Soule, George. Prosperity Decade: From War to Depression: 1917-1929. New York: Holt, Rinehart, and Winston, 1947.

Stein, Herbert. The Fiscal Revolution in America, revised ed. Washington, D.C.: AEI Press, 1990.

Stigler, George J. “Monopoly and Oligopoly by Merger.” American Economic Review, 40 (May 1950): 23-34.

Sumner, Scott. “The Role of the International Gold Standard in Commodity Price Deflation: Evidence from the 1929 Stock Market Crash.” Explorations in Economic History 29 (1992): 290-317.

Swanson, Joseph and Samuel Williamson. “Estimates of National Product and Income for the United States Economy, 1919-1941.” Explorations in Economic History 10, no. 1 (1972): 53-73.

Temin, Peter. “The Beginning of the Depression in Germany.” Economic History Review 24 (May 1971): 240-48.

Temin, Peter. Did Monetary Forces Cause the Great Depression? New York: W. W. Norton, 1976.

Temin, Peter. The Fall of the Bell System. New York: Cambridge University Press, 1987.

Temin, Peter. Lessons from the Great Depression. Cambridge, MA: The M.I.T. Press, 1989.

Thomas, Gordon, and Max Morgan-Witts. The Day the Bubble Burst. Garden City, NY: Doubleday, 1979.

Ulman, Lloyd. “The Development of Trades and Labor Unions,” In American Economic History, edited by Seymour E. Harris, chapter 14. New York: McGraw-Hill Book Co., 1961.

Ulman, Lloyd. The Rise of the National Trade Union. Cambridge, MA: Harvard University Press, 1955.

U.S. Department of Commerce, Bureau of the Census. Historical Statistics of the United States: Colonial Times to 1970, 2 volumes. Washington, D.C.: USGPO, 1976.

Walsh, Margaret. Making Connections: The Long Distance Bus Industry in the U.S.A. Burlington, VT: Ashgate, 2000.

Wanniski, Jude. The Way the World Works. New York: Simon and Schuster, 1978.

Weiss, Leonard W. Case Studies in American Industry, 3d ed. New York: John Wiley & Sons, 1980.

Whaples, Robert. “Hours of Work in U.S. History.” EH.Net Encyclopedia, edited by Robert Whaples, August 15, 2001. URL http://www.eh.net/encyclopedia/contents/whaples.work.hours.us.php

Whatley, Warren. “Southern Agrarian Labor Contracts as Impediments to Cotton Mechanization.” The Journal of Economic History 47 (1987): 45-70.

Wheelock, David C. and Subal C. Kumbhakar. “The Slack Banker Dances: Deposit Insurance and Risk-Taking in the Banking Collapse of the 1920s.” Explorations in Economic History 31 (1994): 357-375.

White, Eugene N. “The Stock Market Boom and Crash of 1929 Revisited.” The Journal of Economic Perspectives 4 (Spring 1990): 67-83.

White, Eugene N., Ed. Crises and Panics: The Lessons of History. Homewood, IL: Dow Jones/Irwin, 1990.

White, Eugene N. “When the Ticker Ran Late: The Stock Market Boom and Crash of 1929.” In Crises and Panics: The Lessons of History Edited by Eugene N. White. Homewood, IL: Dow Jones/Irwin, 1990.

White, Eugene N. “Stock Market Bubbles? A Reply.” The Journal of Economic History 55 (1995): 655-665.

White, William J. “Economic History of Tractors in the United States.” EH.Net Encyclopedia, edited by Robert Whaples, August 15, 2001. URL http://www.eh.net/encyclopedia/contents/white.tractors.history.us.php

Wicker, Elmus. “Federal Reserve Monetary Policy, 1922-1933: A Reinterpretation.” Journal of Political Economy 73 (1965): 325-43.

Wicker, Elmus. “A Reconsideration of Federal Reserve Policy During the 1920-1921 Depression.” The Journal of Economic History 26 (1966): 223-38.

Wicker, Elmus. Federal Reserve Monetary Policy, 1917-1933. New York: Random House, 1966.

Wigmore, Barrie A. The Crash and Its Aftermath. Westport, CT: Greenwood Press, 1985.

Williams, Raburn McFetridge. The Politics of Boom and Bust in Twentieth-Century America. Minneapolis/St. Paul: West Publishing Co., 1994.

Williamson, Harold F., et al. The American Petroleum Industry: The Age of Energy, 1899-1959. Evanston, IL: Northwestern University Press, 1963.

Wilson, Thomas. Fluctuations in Income and Employment, 3d ed. New York: Pitman Publishing, 1948.

Wilson, Jack W., Richard E. Sylla, and Charles P. Jones. “Financial Market Panics and Volatility in the Long Run, 1830-1988.” In Crises and Panics: The Lessons of History Edited by Eugene N. White. Homewood, IL: Dow Jones/Irwin, 1990.

Winkler, John Kennedy. Five and Ten: The Fabulous Life of F. W. Woolworth. New York: R. M. McBride and Co., 1940.

Wood, Charles. “Science and Politics in the War on Cattle Diseases: The Kansas Experience, 1900-1940.” Agricultural History 54 (1980): 82-92.

Wright, Gavin. Old South, New South: Revolutions in the Southern Economy Since the Civil War. New York: Basic Books, 1986.

Wright, Gavin. “The Origins of American Industrial Success, 1879-1940.” The American Economic Review 80 (1990): 651-668.

Citation: Smiley, Gene. “US Economy in the 1920s”. EH.Net Encyclopedia, edited by Robert Whaples. June 29, 2004. URL http://eh.net/encyclopedia/the-u-s-economy-in-the-1920s/

The Dutch Economy in the Golden Age (16th – 17th Centuries)

Donald J. Harreld, Brigham Young University

In just over one hundred years, the provinces of the Northern Netherlands went from relative obscurity as the poor cousins of the industrious and heavily urbanized Southern Netherlands provinces of Flanders and Brabant to the pinnacle of European commercial success. Taking advantage of a favorable agricultural base, the Dutch achieved success in the fishing industry and the Baltic and North Sea carrying trade during the fifteenth and sixteenth centuries before establishing a far-flung maritime empire in the seventeenth century.

The Economy of the Netherlands up to the Sixteenth Century

In many respects the seventeenth-century Dutch Republic inherited the economic successes of the Burgundian and Habsburg Netherlands. For centuries, Flanders and to a lesser extent Brabant had been at the forefront of the medieval European economy. An indigenous cloth industry was present throughout all areas of Europe in the early medieval period, but Flanders was the first to develop the industry with great intensity. A tradition of cloth manufacture in the Low Countries existed from antiquity when the Celts and then the Franks continued an active textile industry learned from the Romans.

As demand grew, early textile production moved from its rural origins to the cities and had become, by the twelfth century, an essentially urban industry. Native wool could not keep up with demand, and the Flemings imported English wool in great quantities. The resulting high-quality product was much in demand all over Europe, from Novgorod to the Mediterranean. Brabant also rose to an important position in the textile industry, but only about a century after Flanders. By the thirteenth century the number of people engaged in some aspect of the textile industry in the Southern Netherlands exceeded the total engaged in all other crafts. It is possible that this emphasis on cloth manufacture was the reason that the Flemish towns ignored the emerging maritime shipping industry, which was eventually dominated by others: first the German Hanseatic League, and later Holland and Zeeland.

By the end of the fifteenth century Antwerp in Brabant had become the commercial capital of the Low Countries as foreign merchants went to the city in great numbers in search of the high-value products offered at the city’s fairs. But the traditional cloths manufactured in Flanders had lost their allure for most European markets, particularly as the English began exporting high quality cloths rather than the raw materials the Flemish textile industry depended on. Many textile producers turned to the lighter weight and cheaper “new draperies.” Despite protectionist measures instituted in the mid-fifteenth century, English cloth found an outlet in Antwerp’s burgeoning markets. By the early years of the sixteenth century the Portuguese began using Antwerp as an outlet for their Asian pepper and spice imports, and the Germans continued to bring their metal products (copper and silver) there. For almost a hundred years Antwerp remained the commercial capital of northern Europe, until the religious and political events of the 1560s and 1570s intervened and the Dutch Revolt against Spanish rule toppled the commercial dominance of Antwerp and the southern provinces. Within just a few years of the Fall of Antwerp (1585), scores of merchants and mostly Calvinist craftsmen fled the south for the relative security of the Northern Netherlands.

The exodus from the south certainly added to the already growing population of the north. However, much like Flanders and Brabant, the northern provinces of Holland and Zeeland were already populous and heavily urbanized. The population of these maritime provinces had been steadily growing throughout the sixteenth century, perhaps tripling between the first years of the sixteenth century and about 1650. The inland provinces grew much more slowly during the same period. Not until the eighteenth century, when the Netherlands as a whole faced declining fortunes, would the inland provinces begin to match the growth of the coastal core of the country.

Dutch Agriculture

During the fifteenth century, and most of the sixteenth century, the Northern Netherlands provinces were predominantly rural compared to the urbanized southern provinces. Agriculture and fishing formed the basis for the Dutch economy in the fifteenth and sixteenth centuries. One of the characteristics of Dutch agriculture during this period was its emphasis on intensive animal husbandry. Dutch cattle were exceptionally well cared for, and dairy produce formed a significant segment of the agricultural sector. During the seventeenth century, as the Dutch urban population saw dramatic growth, many farmers also turned to market gardening to supply the cities with vegetables.

Some of the impetus for animal production came from the trade in slaughter cattle from Denmark and Northern Germany. Holland was an ideal area for cattle feeding and fattening before eventual slaughter and export to the cities of the Southern provinces. The trade in slaughter cattle expanded from about 1500 to 1660, but protectionist measures on the part of Dutch authorities who wanted to encourage the fattening of home-bred cattle ensured a contraction of the international cattle trade between 1660 and 1750.

Although agriculture made up the largest segment of the Dutch economy, cereal production in the Netherlands could not keep up with demand, particularly by the seventeenth century as migration from the southern provinces contributed to population increases. The provinces of the Low Countries traditionally had depended on imported grain from the south (France and the Walloon provinces), and when crop failures interrupted the flow of grain from the south, the Dutch began to import grain from the Baltic. Baltic grain imports experienced sustained growth from about the middle of the sixteenth century to roughly 1650, when depression and stagnation characterized the grain trade into the eighteenth century.

Indeed, the Baltic grain trade (see below), a major source of employment for the Dutch not only in maritime transport but in handling and storage as well, was characterized as the “mother trade.” In her recent book on the Baltic grain trade, Milja van Tielhof defined “mother trade” as the oldest and most substantial trade with respect to ships, sailors and commodities for the Northern provinces. Over the long term, the Baltic grain trade gave rise to shipping and trade on other routes as well as to manufacturing industries.

Dutch Fishing

Along with agriculture, the Dutch fishing industry formed part of the economic base of the northern Netherlands. Like the Baltic grain trade, it also contributed to the rise of the Dutch shipping industry.

The backbone of the fishing industry was the North Sea herring fishery, which was quite advanced and included a form of “factory” ship called the herring bus. The herring bus was developed in the fifteenth century in order to allow the herring catch to be processed with salt at sea. This permitted the herring ship to remain at sea longer and increased the range of the herring fishery. Herring was an important export product for the Netherlands particularly to inland areas, but also to the Baltic offsetting Baltic grain imports.

The herring fishery reached its zenith in the first half of the seventeenth century. Estimates put the size of the herring fleet at roughly 500 busses and the catch at about 20,000 to 25,000 lasts (roughly 33,000 metric tons) on average each year in the first decades of the seventeenth century. The herring catch as well as the number of busses began to decline in the second half of the seventeenth century, collapsing by about the mid-eighteenth century when the catch amounted to only about 6000 lasts. This decline was likely due to competition resulting from a reinvigoration of the Baltic fishing industry that succeeded in driving prices down, as well as competition within the North Sea by the Scottish fishing industry.

The Dutch Textile Industry

The heartland of textile manufacturing had been Flanders and Brabant until the onset of the Dutch Revolt around 1568. Years of warfare continued to devastate the already beaten-down Flemish cloth industry. Even the cloth-producing towns of the Northern Netherlands that had been focusing on producing the “new draperies” saw their output decline as a result of wartime interruptions. But textiles remained the most important industry for the Dutch economy.

Despite the blow it suffered during the Dutch revolt, Leiden’s textile industry, for instance, rebounded in the early seventeenth century – thanks to the influx of textile workers from the Southern Netherlands who emigrated there in the face of religious persecution. But by the 1630s Leiden had abandoned the heavy traditional wool cloths in favor of a lighter traditional woolen (laken) as well as a variety of other textiles such as says, fustians, and camlets. Total textile production increased from 50,000 or 60,000 pieces per year in the first few years of the seventeenth century to as much as 130,000 pieces per year during the 1660s. Leiden’s wool cloth industry probably reached peak production by 1670. The city’s textile industry was successful because it found export markets for its inexpensive cloths in the Mediterranean, much to the detriment of Italian cloth producers.

Next to Lyons, Leiden may have been Europe’s largest industrial city at the end of the seventeenth century. Production was carried out through the “putting out” system, whereby weavers with their own looms, often with other dependent weavers working for them, obtained imported raw materials from merchants who paid the weavers by the piece for their work (the merchant retained ownership of the raw materials throughout the process). By the end of the seventeenth century foreign competition threatened the Dutch textile industry. Production in many of the new draperies (says, for example) decreased considerably throughout the eighteenth century; profits suffered as prices declined in all but the most expensive textiles. This left the production of traditional woolens to drive what was left of Leiden’s textile industry in the eighteenth century.

Although Leiden certainly led the Netherlands in the production of wool cloth, it was not the only textile-producing city in the United Provinces. Amsterdam, Utrecht, Delft and Haarlem, among others, had vibrant textile industries. Haarlem, for example, was home to an important linen industry during the first half of the seventeenth century. Like Leiden’s cloth industry, Haarlem’s linen industry benefited from experienced linen weavers who migrated from the Southern Netherlands during the Dutch Revolt. Haarlem’s hold on linen production, however, was due more to its success in linen bleaching and finishing. Not only was locally produced linen finished in Haarlem, but linen merchants from other areas of Europe sent their products to Haarlem for bleaching and finishing. When linen production moved to more rural areas in the second half of the seventeenth century, as producers sought to decrease costs, Haarlem’s industry went into decline.

Other Dutch Industries

Industries also developed as a result of overseas colonial trade, in particular Amsterdam’s sugar refining industry. During the sixteenth century, Antwerp had been Europe’s most important sugar refining city, a title it inherited from Venice once the Atlantic sugar islands began to surpass Mediterranean sugar production. Once Antwerp fell to Spanish troops during the Revolt, however, Amsterdam replaced it as Europe’s dominant sugar refiner. The number of sugar refineries in Amsterdam increased from about 3 around 1605 to about 50 by 1662, thanks in no small part to Portuguese investment. Dutch merchants purchased huge amounts of sugar from both the French and the English islands in the West Indies, along with a great deal of tobacco. Tobacco processing became an important Amsterdam industry in the seventeenth century employing large numbers of workers and leading to attempts to develop domestic tobacco cultivation.

With the exception of some of the “colonial” industries (sugar, for instance), Dutch industry experienced a period of stagnation after the 1660s and eventual decline beginning around the turn of the eighteenth century. It would seem that, as far as industrial production is concerned, the Dutch Golden Age lasted from the 1580s until about 1670. This period was followed by roughly one hundred years of declining industrial production. De Vries and van der Woude concluded that Dutch industry experienced explosive growth after the 1580s because of the migration of skilled labor and merchant capital from the southern Netherlands at roughly the time Antwerp fell to the Spanish, and because of the relative advantage that continued warfare in the south gave to the Northern Provinces. After the 1660s most Dutch industries experienced either steady or steep decline as many Dutch industries moved from the cities into the countryside, while some (particularly the colonial industries) remained successful well into the eighteenth century.

Dutch Shipping and Overseas Commerce

Dutch shipping began to emerge as a significant sector during the fifteenth century. Probably because merchants from the Southern Netherlands declined to participate in seaborne transport, the towns of Zeeland and Holland began to serve the shipping needs of the commercial towns of Flanders and Brabant (particularly Antwerp). The Dutch, who were already active in the North Sea as a result of the herring fishery, began to compete with the German Hanseatic League for Baltic markets by exporting their herring catches, salt, wine, and cloth in exchange for Baltic grain.

The Grain Trade

Baltic grain played an essential role for the rapidly expanding markets in western and southern Europe. By the beginning of the sixteenth century the urban populations of the Low Countries had increased, fueling the market for imported grain. Grain and other Baltic products such as tar, hemp, flax, and wood were destined not only for the Low Countries, but also for England and for Spain and Portugal via Amsterdam, the port that had succeeded in surpassing Lübeck and other Hanseatic towns as the primary transshipment point for Baltic goods. The grain trade sparked the development of a variety of industries. In addition to the shipbuilding industry, which was an obvious outgrowth of overseas trade relationships, the Dutch manufactured floor tiles, roof tiles, and bricks for export to the Baltic; the grain ships carried them as ballast on return voyages to the Baltic.

The importance of the Baltic markets to Amsterdam, and to Dutch commerce in general, can be illustrated by recalling that when the Danish closed the Sound to Dutch ships in 1542, the Dutch faced financial ruin. But by the mid-sixteenth century, the Dutch had developed such a strong presence in the Baltic that they were able to exact transit rights from Denmark (Peace of Speyer, 1544) allowing them freer access to the Baltic via Danish waters. Despite the upheaval caused by the Dutch Revolt and the commercial crisis that hit Antwerp in the last quarter of the sixteenth century, the Baltic grain trade remained robust until the last years of the seventeenth century. That the Dutch referred to the Baltic trade as their “mother trade” is not surprising given the importance Baltic markets continued to hold for Dutch commerce throughout the Golden Age. Unfortunately for Dutch commerce, Europe’s population began to decline somewhat at the close of the seventeenth century and remained depressed for several decades. Increased grain production in Western Europe and the availability of non-Baltic substitutes (American and Italian rice, for example) further decreased demand for Baltic grain, resulting in a downturn in Amsterdam’s grain market.

Expansion into African, American and Asian Markets – “World Primacy”

Building on the early successes of their Baltic trade, Dutch shippers expanded their sphere of influence east into Russia and south into the Mediterranean and the Levantine markets. By the turn of the seventeenth century, Dutch merchants had their eyes on the American and Asian markets that were dominated by Iberian merchants. The ability of Dutch shippers to effectively compete with entrenched merchants, like the Hanseatic League in the Baltic, or the Portuguese in Asia stemmed from their cost cutting strategies (what de Vries and van der Woude call “cost advantages and institutional efficiencies,” p. 374). Not encumbered by the costs and protective restrictions of most merchant groups of the sixteenth century, the Dutch trimmed their costs enough to undercut the competition, and eventually establish what Jonathan Israel has called “world primacy.”

Before Dutch shippers could even attempt to break into the Asian markets, they first needed to expand their presence in the Atlantic. This was left mostly to the émigré merchants from Antwerp, who had relocated to Zeeland following the Revolt. These merchants set up the so-called Guinea trade with West Africa and initiated Dutch involvement in the Western Hemisphere. Dutch merchants involved in the Guinea trade ignored the slave trade, which was firmly in the hands of the Portuguese, in favor of the rich trade in gold, ivory, and sugar from São Tomé. Trade with West Africa grew slowly, but competition was stiff. By 1599, the various Guinea companies had agreed to the formation of a cartel to regulate trade. Continued competition from a slew of new companies, however, ensured that the cartel would be only partially effective until the organization of the Dutch West India Company in 1621, which also held monopoly rights in the West African trade.

The Dutch at first focused their trade with the Americas on the Caribbean. By the mid-1590s only a few Dutch ships each year were making the voyage across the Atlantic. When the Spanish instituted an embargo against the Dutch in 1598, shortages of products traditionally obtained in Iberia (like salt) became common. Dutch shippers seized the chance to find new sources for products that had been supplied by the Spanish, and soon fleets of Dutch ships sailed to the Americas. The Spanish and Portuguese had a much larger presence in the Americas than the Dutch could mount, despite the large number of vessels they sent to the area. Dutch strategy was to avoid Iberian strongholds while penetrating markets where the products they desired could be found. For the most part, this strategy meant focusing on Venezuela, Guyana, and Brazil. Indeed, by the turn of the seventeenth century, the Dutch had established forts on the coasts of Guyana and Brazil.

While competition between rival companies from the towns of Zeeland marked Dutch trade with the Americas in the first years of the seventeenth century, by the time the West India Company finally received its charter in 1621, troubles with Spain once again threatened to disrupt trade. Funding for the new joint-stock company came slowly, and oddly enough came mostly from inland towns like Leiden rather than coastal towns. The West India Company was hit with setbacks in the Americas from the very start. The Portuguese began to drive the Dutch out of Brazil in 1624, and by 1625 the Dutch were losing their position in the Caribbean as well. Dutch shippers in the Americas soon found raiding (directed at the Spanish and Portuguese) to be their most profitable activity until the Company was able to establish forts in Brazil again in the 1630s and begin sugar cultivation. Sugar remained the most lucrative activity for the Dutch in Brazil, and once the revolt of Portuguese Catholic planters against the Dutch plantation owners broke out in the late 1640s, the fortunes of the Dutch declined steadily.

The Dutch faced the prospect of stiff Portuguese competition in Asia as well. But, breaking into the lucrative Asian markets was not just a simple matter of undercutting less efficient Portuguese shippers. The Portuguese closely guarded the route around Africa. Not until roughly one hundred years after the first Portuguese voyage to Asia were the Dutch in a position to mount their own expedition. Thanks to the travelogue of Jan Huyghen van Linschoten, which was published in 1596, the Dutch gained the information they needed to make the voyage. Linschoten had been in the service of the Bishop of Goa, and kept excellent records of the voyage and his observations in Asia.

The United East India Company (VOC)

The first few Dutch voyages to Asia were not particularly successful. These early enterprises managed to make only enough to cover the costs of the voyage, but by 1600 dozens of Dutch merchant ships made the trip. This intense competition among various Dutch merchants had a destabilizing effect on prices, driving the government to insist on consolidation in order to avoid commercial ruin. The United East India Company (usually referred to by its Dutch initials, VOC) received a charter from the States General in 1602 conferring upon it monopoly trading rights in Asia. This joint-stock company attracted roughly 6.5 million florins in initial capitalization from over 1,800 investors, most of whom were merchants. Management of the company was vested in 17 directors (Heren XVII) chosen from among the largest shareholders.

In practice, the VOC became virtually a “country” unto itself outside of Europe, particularly after about 1620 when the company’s governor-general in Asia, Jan Pieterszoon Coen, founded Batavia (the company factory) on Java. While Coen and later governors-general set about expanding the territorial and political reach of the VOC in Asia, the Heren XVII were most concerned about profits, which they repeatedly reinvested in the company much to the chagrin of investors. In Asia, the strategy of the VOC was to insert itself into the intra-Asian trade (much like the Portuguese had done in the sixteenth century) in order to amass enough capital to pay for the spices shipped back to the Netherlands. This often meant displacing the Portuguese by waging war in Asia, while trying to maintain peaceful relations within Europe.

Over the long term, the VOC was very profitable during the seventeenth century despite the company’s reluctance to pay cash dividends in the first few decades (the company paid dividends in kind until about 1644). As the English and French began to institute mercantilist strategies (for instance, the Navigation Acts of 1651 and 1660 in England, and import restrictions and high tariffs in the case of France), Dutch dominance in foreign trade came under attack. Rather than experience a decline like domestic industry did at the end of the seventeenth century, the Dutch Asia trade continued to ship goods at steady volumes well into the eighteenth century. Dutch dominance, however, was met with stiff competition by rival India companies as the Asia trade grew. As the eighteenth century wore on, the VOC’s share of the Asia trade declined significantly compared to its rivals, the most important of which was the English East India Company.

Dutch Finance

The last sector that we need to highlight is finance, perhaps the most important sector for the development of the early modern Dutch economy. The most visible manifestation of Dutch capitalism was the exchange bank founded in Amsterdam in 1609, only two years after the city council approved the construction of a bourse (additional exchange banks were founded in other Dutch commercial cities). The activities of the bank were limited to exchange and deposit banking. A lending bank, founded in Amsterdam in 1614, rounded out the financial services in the commercial capital of the Netherlands.

The ability to manage the wealth generated by trade and industry (accumulated capital) in new ways was one of the hallmarks of the economy during the Golden Age. As early as the fourteenth century, Italian merchants had been experimenting with ways to decrease the use of cash in long-distance trade. The resulting instrument was the bill of exchange, developed as a way for a seller to extend credit to a buyer. The bill of exchange required the debtor to pay the debt at a specified place and time. But the creditor rarely held on to the bill of exchange until maturity, preferring to sell it or otherwise use it to pay off debts. These bills of exchange were not routinely used in commerce in the Low Countries until the sixteenth century, when Antwerp was still the dominant commercial city in the region. In Antwerp the bill of exchange could be assigned to another, and eventually became a negotiable instrument with the practice of discounting the bill.

The idea of the flexibility of bills of exchange moved to the Northern Netherlands with the large numbers of Antwerp merchants who brought with them their commercial practices. In an effort to standardize the practices surrounding bills of exchange, the Amsterdam government restricted payment of bills of exchange to the new exchange bank. The bank was wildly popular with merchants: deposits increased from just under one million guilders in 1611 to over sixteen million by 1700. Amsterdam’s exchange bank flourished because of its ability to handle deposits and transfers, and to settle international debts.

By the second half of the seventeenth century many wealthy merchant families had turned away from foreign trade and began engaging in speculative activities on a much larger scale. They traded in commodity values (futures), shares in joint-stock companies, and dabbled in insurance and currency exchanges to name only a few of the most important ventures.

Conclusion

Building on its fifteenth- and sixteenth-century successes in agricultural productivity, and in North Sea and Baltic shipping, the Northern Netherlands inherited the economic legacy of the southern provinces as the Revolt tore the Low Countries apart. The Dutch Golden Age lasted from roughly 1580, when the Dutch proved themselves successful in their fight with the Spanish, to about 1670, when the Republic’s economy experienced a downturn. Economic growth was very fast until about 1620, when it slowed, though the economy continued to grow steadily until the end of the Golden Age. The last decades of the seventeenth century were marked by declining production and loss of market dominance overseas.

Bibliography

Attman, Artur. The Struggle for Baltic Markets: Powers in Conflict, 1558-1618. Göteborg: Vetenskaps- och Vitterhets-Samhället, 1979.

Barbour, Violet. Capitalism in Amsterdam in the Seventeenth Century. Ann Arbor: University of Michigan Press, 1963.

Bulut, M. “Rethinking the Dutch Economy and Trade in the Early Modern Period, 1570-1680.” Journal of European Economic History 32 (2003): 391-424.

Christensen, Aksel. Dutch Trade to the Baltic about 1600. Copenhagen: Einar Munksgaard, 1941.

De Vries, Jan, and Ad van der Woude. The First Modern Economy: Success, Failure, and Perseverance of the Dutch Economy, 1500-1815. Cambridge: Cambridge University Press, 1997.

De Vries, Jan. The Economy of Europe in an Age of Crisis, 1600-1750. Cambridge: Cambridge University Press, 1976.

Gelderblom, Oscar. Zuid-Nederlandse kooplieden en de opkomst van de Amsterdamse stapelmarkt (1578-1630). Hilversum: Uitgeverij Verloren, 2000.

Gijsbers, W. Kapitale Ossen: De internationale handel in slachtvee in Noordwest-Europa (1300-1750). Hilversum: Uitgeverij Verloren, 1999.

Haley, K.H.D. The Dutch in the Seventeenth Century. New York: Harcourt, Brace and Jovanovich, 1972.

Harreld, Donald J. “Atlantic Sugar and Antwerp’s Trade with Germany in the Sixteenth Century.” Journal of Early Modern History 7 (2003): 148-163.

Heers, W. G., et al., editors. From Dunkirk to Danzig: Shipping and Trade in the North Sea and the Baltic, 1350-1850. Hilversum: Verloren, 1988.

Israel, Jonathan I. “Spanish Wool Exports and the European Economy, 1610-1640.” Economic History Review 33 (1980): 193-211.

Israel, Jonathan I. Dutch Primacy in World Trade, 1585-1740. Oxford: Clarendon Press, 1989.

O’Brien, Patrick, et al, editors. Urban Achievement in Early Modern Europe: Golden Ages in Antwerp, Amsterdam and London. Cambridge: Cambridge University Press, 2001.

Pirenne, Henri. “The Place of the Netherlands in the Economic History of Medieval Europe.” Economic History Review 2 (1929): 20-40.

Price, J.L. Dutch Society, 1588-1713. London: Longman, 2000.

Tracy, James D. “Herring Wars: The Habsburg Netherlands and the Struggle for Control of the North Sea, ca. 1520-1560.” Sixteenth Century Journal 24 no. 2 (1993): 249-272.

Unger, Richard W. “Dutch Herring, Technology, and International Trade in the Seventeenth Century.” Journal of Economic History 40 (1980): 253-280.

Van Tielhof, Milja. The ‘Mother of all Trades’: The Baltic Grain Trade in Amsterdam from the Late Sixteenth to the Early Nineteenth Century. Leiden: Brill, 2002.

Wilson, Charles. “Cloth Production and International Competition in the Seventeenth Century.” Economic History Review 13 (1960): 209-221.

Citation: Harreld, Donald. “Dutch Economy in the “Golden Age” (16th-17th Centuries)”. EH.Net Encyclopedia, edited by Robert Whaples. August 12, 2004. URL http://eh.net/encyclopedia/the-dutch-economy-in-the-golden-age-16th-17th-centuries/

The United States Public Debt, 1861 to 1975

Franklin Noll, Ph.D.

Introduction

On January 1, 1790, the United States’ public debt stood at $52,788,722.03 (Bayley 31). It consisted of the debt of the Continental Congress and $191,608.81 borrowed by Secretary of the Treasury Alexander Hamilton in the spring of 1789 from New York banks to meet the new government’s first payroll (Bayley 108). Since then the public debt has passed a number of historical milestones: the assumption of Revolutionary War debt in August 1790, the redemption of the debt in 1835, the financing innovations arising from the Civil War in 1861, the introduction of war loan drives in 1917, the rise of deficit spending after 1932, the lasting expansion of the debt from World War II, and the passage of the Budget Control Act in 1975. (The late 1990s may mark another point of significance in the history of the public debt, but it is still too soon to tell.) This short study examines the public debt between the Civil War and the Budget Control Act, the period in which the foundations of our present public debt of over $7 trillion were laid. (See Figure 1.) We start our investigation by asking, “What exactly is the public debt?”

Source: Nominal figures from “Principal of the Public Debt, Fiscal Years 1790-1975,” Annual Report of the Secretary of the Treasury on the State of the Finances, Statistical Appendix. (Washington, DC: Government Printing Office, 1975), 62-63 and Robert Sahr, Oregon State University. URL: http://oregonstate.edu/Dept/pol_sci/fac/sahr/sahrhome.htm. Real figures adjust for inflation. These figures and conversion factors provided by Robert Sahr.

Definitions

Throughout its history, the Treasury has recognized various categories of government debt. The oldest category and the largest in size is the public debt. The public debt, simply put, is all debt for which the government of the United States is wholly liable. In turn, the general public is ultimately responsible for such debt through taxation. Some authors use the terms federal debt and national debt interchangeably with public debt. From the view of the United States Treasury, this is incorrect.

Federal debt, as defined by the Treasury, is the public debt plus debt issued by government-sponsored agencies for their own use. The term first appears in 1973 when it is officially defined as including “the obligations issued by Federal Government agencies which are part of the unified budget totals and in which there is an element of Federal ownership, along with the marketable and nonmarketable obligations of the Department of the Treasury” (Annual Report of the Secretary of the Treasury, 1973: 13). Put more succinctly, federal debt is made up of the public debt plus contingent debt. The government is partially or, more precisely, contingently liable for the debt of government-sponsored enterprises for which it has pledged its guarantee. On the contingency that a government-sponsored enterprise such as the Government National Mortgage Association ever defaults on its debt, the United States government becomes liable for the debt.

National debt, though a popular term and used by Alexander Hamilton, has never been technically defined by the Treasury. The term suggests that one is referring to all debt for which the government could be liable–wholly or in part. During the period 1861 to 1975, the debt for which the government could be partially or contingently liable has included that of government-sponsored enterprises, railroads, insular possessions (Puerto Rico and the Philippines), and the District of Columbia. Taken together, these categories of debt could be considered the true national debt which, to my knowledge, has never been calculated.

Structure

But it is the public debt–only that debt for which the government is wholly liable–which has been totaled and mathematically examined in a myriad of ways by scholars and pundits. Yet, very few have broken down the public debt into its component parts of marketable and nonmarketable debt instruments: those securities, such as bills, bonds, and notes that make up the basis of the debt. In a simplified form, the structure of the public debt is as follows:

  • Interest-bearing debt
    • Marketable debt
      • Treasuries
    • Nonmarketable debt
      • Depositary Series
      • Foreign Government Series
      • Government Account Series
      • Investment Series
      • REA Series
      • SLG Series
      • US Savings Securities
  • Matured debt
  • Debt bearing no interest

Though the elements of the debt varied over time, this basic structure remained constant from 1861 to 1975 and into the present. As we investigate further the elements making up the structure of the public debt, we will focus on information from 1975, the last year of our study. By doing so, we can see the debt at its largest and most complex for the period 1861 to 1975 and in a structure most like that currently held by the public debt. It was also in 1975 that the Bureau of the Public Debt’s accounting and reporting of the public debt took on its present form.
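Because this classification is a simple hierarchy, it can help to picture it as a nested data structure. The sketch below is purely illustrative (Python): the category names are taken from the list above, and the helper function is hypothetical.

```python
# Illustrative sketch only: the classification listed above, represented as a
# nested dictionary. Leaf lists name the series or categories at the bottom
# of the hierarchy; the helper function is hypothetical.
PUBLIC_DEBT_1975 = {
    "Interest-bearing debt": {
        "Marketable debt": ["Treasuries"],
        "Nonmarketable debt": [
            "Depositary Series",
            "Foreign Government Series",
            "Government Account Series",
            "Investment Series",
            "REA Series",
            "SLG Series",
            "US Savings Securities",
        ],
    },
    "Matured debt": [],
    "Debt bearing no interest": [],
}

def leaves(node):
    """Yield every leaf category beneath a node of the classification tree."""
    if isinstance(node, dict):
        for child in node.values():
            yield from leaves(child)
    else:
        yield from node

# List all categories of interest-bearing debt.
print(sorted(leaves(PUBLIC_DEBT_1975["Interest-bearing debt"])))
```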

Some Financial Terms

Bearer Security
A bearer security is one in which ownership is determined solely by possession: whoever bears the security owns it.
Callable
The term callable refers to whether and under what conditions the government has the right to redeem a debt issue prior to its maturity date. The date at which a security can be called by the government for redemption is known as its call date.
Coupon
A coupon is a detachable part of a security that bears the interest payment date and the amount due. The bearer of the security detaches the appropriate coupon and presents it to the Treasury for payment. Coupon is synonymous with interest in financial parlance: the coupon rate refers to the interest rate.
Coupon Security
A coupon security is any security that has attached coupons, and usually refers to a bearer security.
Discount
The term discount refers to the sale of a debt instrument at a price below its face or par value.
Liquidity
A security is liquid if it can be easily bought and sold in the secondary market or easily converted to cash.
Maturity
The maturity of a security is the date at which it becomes payable in full.
Negotiable
A negotiable security is one that can be freely sold or transferred to another holder.
Par
Par is the nominal dollar amount assigned to a security by the government. It is the security’s face value.
Premium
The term premium refers to the sale of a debt instrument at a price above its face or par value.
Registered Security
A registered security is one in which the owner of the security is recorded by the Bureau of the Public Debt. Usually both the principal and interest are registered, making them non-negotiable or non-transferable.
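A short worked example may make the pricing vocabulary above concrete. The figures below are hypothetical and are used only to show how par, discount, premium, and the coupon rate relate to one another; they do not describe any particular Treasury issue.

```python
# Hypothetical illustration of par, discount, premium, and the coupon rate.
par = 1000.00            # face value assigned to the security
annual_coupon = 42.50    # total coupon payments per year (hypothetical)

coupon_rate = annual_coupon / par   # "coupon" is synonymous with interest

def classify(sale_price, par_value):
    """Label a sale as made at par, at a discount, or at a premium."""
    if sale_price < par_value:
        return "discount"
    if sale_price > par_value:
        return "premium"
    return "par"

print(f"coupon rate: {coupon_rate:.2%}")   # 4.25%
print(classify(985.00, par))    # below face value -> "discount"
print(classify(1010.00, par))   # above face value -> "premium"
print(classify(1000.00, par))   # at face value    -> "par"
```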

Interest-Bearing Debt, Matured Debt, and Debt Bearing No Interest

This major division in the structure of the public debt is fairly self-explanatory. Interest-bearing debt contains all securities that carry an obligation on the part of the government to pay interest to the security’s owner on a regular basis. These debt instruments have not reached maturity. Almost all of the public debt falls into the interest-bearing debt category. (See Figure 2.) Securities that are past maturity (and therefore no longer paying interest) but have not yet been redeemed by their holders fall within the category of matured debt. This is an extremely small part of the total public debt. In the category of debt bearing no interest are securities that are non-negotiable and non-interest-bearing, such as Special Notes of the United States issued to the International Monetary Fund. Securities in this category are often issued for one-time or extraordinary purposes. Also in the category are obsolete forms of currency such as fractional currency, legal tender notes, and silver certificates. In total, old currency made up only 0.114% of the public debt in 1975. Federal Reserve Notes, which have been issued since 1914 and which we deal with on a daily basis, are obligations of the Federal Reserve and thus not part of the public debt.

Source: “Principal of the Public Debt, Fiscal Years 1790-1975,” Annual Report of the Secretary of the Treasury on the State of the Finances, Statistical Appendix (Washington, DC: Government Printing Office, 1975), 62-63.

During the period under study, the value of outstanding matured debt generally grew with the overall size of the debt, except for a spike in the amount of unredeemed securities in the mid and late 1950s. (See Figure 3.) This was caused by the maturation of United States Savings Bonds bought during World War II. Many of these war bonds lay forgotten in people’s safe-deposit boxes for years. Wartime purchases of Defense Savings Stamps and War Savings Stamps account for much of the sudden increase in debt bearing no interest from 1943 to 1947. (See Figure 4.) The year 1947 saw the United States issuing non-interest paying notes to fund the establishment of the International Monetary Fund and the International Bank for Reconstruction and Development (part of the World Bank). As interest-bearing debt makes up over 99% of the public debt, it is basically equivalent to it. (See Figure 5.) And, the history of the overall public debt will be examined later.

Source: “Principal of the Public Debt, Fiscal Years 1790-1975,” Annual Report of the Secretary of the Treasury on the State of the Finances, Statistical Appendix (Washington, DC: Government Printing Office, 1975), 62-63.

Marketable Debt and Nonmarketable Debt

Interest-bearing debt is divided between marketable debt and nonmarketable debt. Marketable debt consists of securities that can be easily bought and sold in the secondary market. The Treasury has used the term since World War II to describe issues that are available to the general public in registered or bearer form without any condition of sale. Nonmarketable debt refers to securities that cannot be bought and sold in the secondary market though there are rare exceptions. Generally, nonmarketable government securities may only be bought from or sold to the Treasury. They are issued in registered form only and/or can be bought only by government agencies, specific business enterprises, or individuals under strict conditions.

The growth of the marketable debt largely mirrors that of total interest-bearing debt; and until 1918, there was no such thing as nonmarketable debt. (See Figure 6.) Nonmarketable debt arose in fiscal year 1918, when securities were sold to the Federal Reserve in an emergency move to raise money as the United States entered World War I. This was the first sale of “special issue” securities as nonmarketable debt securities were classified prior to World War II. Special or nonmarketable issues continued through the interwar period and grew with the establishment of government programs. Such securities were sometimes issued by the Treasury in the name of a government fund or program and were then bought by the Treasury. In effect, the Treasury extended a loan to the government entity. More often the Treasury would sell a special security to the government fund or program for cash, creating a loan to the Treasury and an investment vehicle for the government entity. And, as the number of government programs grew and the size of government funds (like those associated with Social Security) expanded, so did the number and value of nonmarketable securities–greatly contributing to the rapid growth of nonmarketable debt. By 1975, these intragovernment securities combined with United States Savings Bonds helped make nonmarketable debt 40% of the total public debt. (See Figure 7.)

Source: The following were used to calculate outstanding marketable debt: Data for 1861 to 1880 derived from Rafael A. Bayley, The National Loans of the United States from July 4, 1776, to June 30, 1880, second edition, facs rpt (New York: Burt Franklin, 1970 [1881]), 180-84 and Annual Report of the Secretary of the Treasury on the State of the Finances, (Washington, DC: Government Printing Office, 1861), 44. Post-1880 numbers derived from “Analysis of the Principal of the Interest-Bearing Public Debt of the United States from July 1, 1856 to July 1, 1912,” idem (1912), 102-03; “Comparative Statement of the Public Debt Outstanding June 30, 1933 to 1939,” idem (1939), 452-53; “Composition of the Public Debt at the End of the Fiscal Years 1916 to 1938,” idem, 454-55; “Public Debt by Security Classes, June 30, 1939-49,” idem (1949), 400-01; “Public Debt Outstanding by Security Classes, June 30, 1945-55,” idem (1955); “Public Debt Outstanding by Classification,” Annual Report of the Secretary of the Treasury on the State of the Finances, Statistical Appendix (Washington, DC: Government Printing Office, 1975), 67-71. The marketable debt figures were then subtracted from total outstanding interest bearing debt to obtain nonmarketable figures.

Source: “Public Debt Outstanding by Classification,” Annual Report of the Secretary of the Treasury on the State of the Finances, Statistical Appendix (Washington, DC: Government Printing Office, 1975), 67-71.

Marketable Debt Securities: Treasuries

The general public is most familiar with those marketable debt instruments falling within the category of Treasury securities, more popularly known as simply Treasuries. These securities can be bought by anyone and have active secondary markets. The most commonly issued Treasuries between 1861 and 1975 are the following, listed in order of length of time to maturity, shortest to longest:

Treasury certificate of indebtedness
A couponed, short-term, interest-bearing security. It can have a maturity of as little as one day or as long as five years. Maturity is usually between 3 and 12 months. These securities were largely replaced by Treasury bills.
Treasury bill
A short-term security issued on a discount basis rather than at par. The price is determined by competitive bidding at auction. They have a maturity of a year or less and are usually sold on a weekly basis with maturities of 13 weeks and 26 weeks. They were first issued in December 1929.
Treasury note
A couponed, interest-bearing security that generally matures in 2 to 5 years. In 1968, the Treasury began to issue 7-year notes, and in 1976, the maximum maturity of Treasury notes was raised to 10 years.
Treasury bond
A couponed interest-bearing security that normally matures after 10 or more years.

The story of these securities between 1861 and 1975 is one of a general movement by the Treasury to issue ever more securities in the shorter maturities–certificates of indebtedness, bills, and notes. Until World War I, the security of preference was the bond with a call date before maturity. (See Figure 8.) Such an instrument provided the minimum attainable interest rate for the Treasury and was in demand as a long-term investment vehicle by investors. The pre-maturity call date allowed the Treasury the flexibility to redeem the bonds during a period of surplus revenue. Between 1861 and 1917, certificates of indebtedness were issued on occasion to manage cash flow through the Treasury and notes were issued only during the financial crisis years of the Civil War.

Source: Franklin Noll, A Guide to Government Obligations, 1861-1976, unpublished ms., 2004.

In terms of both numbers and values, the change to shorter maturity Treasury securities began with World War I. Unprepared for the financial demands of World War I, the Treasury was perennially short of cash and issued a great number of certificates of indebtedness and short-term notes. A market developed for these securities, and they were issued throughout the interwar period to meet cash demands and refund the remaining World War I debt. While the number of bonds issued rose in the World War I and World War II years, by 1975 bond issues had become rare; and by the late 1960s, the value of bonds issued was in steep decline. (See Figure 9.) In part, this was the effect of interest rates moving beyond statutory limits set on the interest rate the Treasury could pay on long-term securities. The primary reason for the decline of the bond, however, was post-World War II economic growth and inflation that drove up interest rates and established expectations of rising inflation. In such conditions, shorter term securities were more in favor with investors who sought to ride the rising tide of interest rates and keep their financial assets as liquid as possible. Correspondingly, the number and value of notes and bills rose throughout the postwar years. Certificates of indebtedness declined as they were replaced by bills. Treasury bills won out because they were easier and therefore less expensive for the Treasury to issue than certificates of indebtedness. Bills required no predetermination of interest rates or servicing of coupon payments.
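The contrast between couponed securities and bills sold on a discount basis can be illustrated with a small calculation. The sketch below applies the conventional bank-discount method, under which the purchase price is the face (par) value reduced in proportion to the discount rate and the days to maturity on a 360-day year; the face value and rate are hypothetical, and the sketch is not a reconstruction of any actual auction.

```python
# Hypothetical illustration of discount pricing for a Treasury bill.
# The bill pays no coupons; the investor's return is par at maturity minus
# the discounted purchase price, so no coupon payments need servicing.
def bill_price(face_value, discount_rate, days_to_maturity):
    # Conventional bank-discount method on a 360-day year.
    return face_value * (1 - discount_rate * days_to_maturity / 360)

face = 10_000.00   # hypothetical face (par) value
rate = 0.05        # hypothetical 5% annual discount rate

print(f"13-week bill (91 days):  ${bill_price(face, rate, 91):,.2f}")
print(f"26-week bill (182 days): ${bill_price(face, rate, 182):,.2f}")
```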

Source: Data for 1861 to 1880 derived from Rafael A. Bayley, The National Loans of the United States from July 4, 1776, to June 30, 1880, second edition, facs rpt (New York: Burt Franklin, 1970 [1881]), 180-84 and Annual Report of the Secretary of the Treasury on the State of the Finances, (Washington, DC: Government Printing Office, 1861), 44. Post-1880 numbers derived from “Analysis of the Principal of the Interest-Bearing Public Debt of the United States from July 1, 1856 to July 1, 1912,” idem (1912), 102-03; “Comparative Statement of the Public Debt Outstanding June 30, 1933 to 1939,” idem (1939), 452-53; “Composition of the Public Debt at the End of the Fiscal Years 1916 to 1938,” idem, 454-55; “Public Debt by Security Classes, June 30, 1939-49,” idem (1949), 400-01; “Public Debt Outstanding by Security Classes, June 30, 1945-55,” idem (1955); “Public Debt Outstanding by Classification,” Annual Report of the Secretary of the Treasury on the State of the Finances, Statistical Appendix (Washington, DC: Government Printing Office, 1975), 67-71.

Nonmarketable Debt Securities

Securities sold as nonmarketable debt come in the forms above–certificate of indebtedness, bill, note, and bond. Most, but not all, nonmarketable securities fall into these series or categories:

Depositary Series
Made up of depositary bonds held by depositary banks. These are banks that provide banking facilities for the Treasury. Depositary bonds act as collateral for the Treasury funds deposited at the bank. The interest on these collateral securities provides the banks with income for the services rendered.
Foreign Government Series
The group of Treasury securities sold to foreign governments or used in foreign exchange stabilization operations.
Government Account Series
Refers to all types of securities issued to or by government accounts and trust funds.
Investment Series
Contains Treasury Bond, Investment Series securities sold to institutional investors.
REA Series
Rural Electrification Administration Series securities are sold to recipients of Rural Electrification Administration loans who have unplanned excess loan money. Holding the excess funds in the form of bonds gives the borrower the capacity to cash in the bonds and retrieve the unused loan funds without the need to negotiate a new loan.
SLG Series
State and Local Government Series securities were first issued in 1972 to help state and municipal governments meet federal arbitrage restrictions.
US Savings Securities
United States Savings Securities refers to a group of securities consisting of savings stamps and bonds (most notably United States Savings Bonds) aimed at small, non-institutional investors.

A number of nonmarketable securities fall outside these series. The special issue securities sold to the Federal Reserve in 1917 (the first securities recognized as nonmarketable) and mentioned above do not fit into any of these categories, nor do securities providing tax advantages like Mortgage Guaranty Insurance Company Tax and Loss Bonds or Special Notes of the United States issued on behalf of the International Monetary Fund. Treasury reports are, in fact, frustratingly full of anomalies and contradictions. One major anomaly is Postal Savings Bonds. First issued in 1911, Postal Savings Bonds were United States Savings Securities that were bought by depositors in the now defunct Postal Savings System. These bonds, unlike United States Savings Bonds, were fully marketable and could be bought and sold on the open market. As savings securities, they are included in the nonmarketable United States Savings Securities series even though they are marketable. (It is to include these anomalous securities that we begin the graphs below in 1910.)

The United States Savings Security Series and the Government Account Series were the most significant in the growth of the nonmarketable debt component of the public debt. (See Figure 10.) The real rise in savings securities began with the introduction of the nonmarketable United States Savings Bonds in 1935. The bond drives of World War II established these savings bonds in the American psyche and small investor portfolios. Securities issued for the benefit of government funds or programs began in 1925 and, as in the case of savings securities, really took off with the stimulus of World War II. The growth of government and government programs continued to stimulate the growth of the Government Account Series, making it the largest part of nonmarketable debt by 1975. (See Figure 13.)

Source: Various tables and exhibits, Annual Report of the Secretary of the Treasury on the State of the Finances, (Washington, DC: Government Printing Office, 1910-1932); “Comparative Statement of the Public Debt Outstanding June 30, 1933 to 1939,” idem (1939), 452-53; “Composition of the Public Debt at the End of the Fiscal Years 1916 to 1938,” idem, 454-55; “Public Debt by Security Classes, June 30, 1939-49,” idem (1949), 400-01; “Public Debt Outstanding by Security Classes, June 30, 1945-55,” idem (1955); “Public Debt Outstanding by Classification,” Annual Report of the Secretary of the Treasury on the State of the Finances, Statistical Appendix (Washington, DC: Government Printing Office, 1975), 67-71.

The Depositary, REA, and SLG series were of minor importance throughout the period, with depositary bonds declining because their fixed interest rate of 2% became increasingly uncompetitive with the rise in inflation. (See Figure 11.) As the Investment Series was tied to a single security, it declined with the gradual redemptions of Treasury Bond, Investment Series securities. (See Figure 12.) The Foreign Government Series grew with escalating efforts to stabilize the value of the dollar in foreign exchange markets. (See Figure 12.)

Source: “Description of Public Debt Issues Outstanding, June 30, 1975,” Annual Report of the Secretary of the Treasury on the State of the Finances, Statistical Appendix (Washington, DC: Government Printing Office, 1975), 88-112.

History of the Public Debt

While we have examined the development of the various components of the public debt, we have yet to consider the public debt as a whole. Quite a few writers in the recent past have commented on the ever-growing size of the public debt. Many were concerned that the public debt figures were becoming astronomical in size and that there was no end in sight to continued growth as perennial budget deficits forced the government to keep borrowing money. Such fears are not entirely new to our country. In the Civil War, World War I, and World War II, people were astounded at the unprecedented heights reached by the public debt during wartime. What changed during World War II (and maybe a bit before) was the assumption that the public debt would decrease once the present crisis was over. The pattern in America’s past was that after each war every effort would be made to pay off the accumulated debt as quickly as possible. Thus we find after the Civil War, World War I, and World War II declines in the total public debt. (See Figures 14 and 15.) Until the United States’ entry into World War I, the public debt never exceeded $3 billion (See Figure 14); and probably the debt would have returned near to this level after World War I if the Great Depression and World War II had not intervened. Yet, the last contraction of the public debt between 1861 and 1975 occurred in 1957. (See Figures 15 and 18.) Since then the debt grew at an ever-increasing rate. Why?

The period 1861 to 1975 roughly divides into two eras and two corresponding philosophies on the public debt. From 1861 to 1932, government officials basically followed traditional precepts of public debt management, pursuing balanced budgets and paying down any debt as quickly as possible (Withers, 35-42). We will label these officials traditionalists. To oversimplify, for traditionalists the economy was not to be meddled with by the government as no good would come from it. The ups and downs of business cycles were natural phenomena that had to be endured and, when possible, provided for through the accumulation of budget surpluses. These views of national finance and the public debt held sway before the Great Depression and lingered on into the 1950s (Conklin, 234). But it was during the Great Depression and the first term of President Franklin Roosevelt that we see an acceptance of what was then called “new economics” and would later be called Keynesianism. Basically, “new” economists believed that the business cycle could be counteracted through government intervention into the economy (Withers, 32). During economic downturns, the government could dampen the down cycle by stimulating the economy through lower taxes, increased government spending, and an expanded money supply. As the economy recovered, these stimulants would be reversed to dampen the up cycle of the economy. These beliefs gained ever greater currency over time, and we will designate the period 1932 to 1975 the New Era.

The Traditional Era, 1861-1932

(This discussion focuses on figures 14 and 16. Also see Figures 18, 19, and 20.) In 1861, the public debt stood at roughly $65 million. At the end of the Civil War the debt was some 42 times greater at $2,756 million and the country was off the gold standard. The Civil War was paid for by a new personal income tax, massive bond issues, and the printing of currency, popularly known as Greenbacks. Once the war was over, there was a drive to return to the status quo antebellum with a return to the gold standard, a pay down of the public debt, and the retirement of Greenbacks. The period 1866 to 1893 saw 28 continuous years of budget surpluses with revenues pouring in from tariffs and land sales in the west. During that time, successive Secretaries of the Treasury redeemed public debt securities to the greatest extent possible, often buying securities at a premium in the open market. The debt declined almost continuously until 1893, to a low of $961 million, with a brief exception in the late 1870s as the country dealt with the recessionary aftereffects of the Panic of 1873 and the controversy regarding resumption of the gold standard in 1879. The Panic of 1893 and a decline in tariff revenues brought a period of budget deficits and slightly raised the public debt from its 1893 low to a steady average of around $1,150 million in the years leading up to World War I. The first war loan drives occurred during World War I. With the aid of the recently established Federal Reserve, the Treasury held four Liberty Loan drives and one Victory Loan drive. The Treasury also introduced low cost savings certificates and stamps to attract the smallest investor. For 25 cents, one could aid the war effort by buying a Thrift Stamp. As at the end of previous wars, once World War I ended there was a concerted drive to pay down the debt. By 1931, the debt was reduced to $16,801 million from a wartime high of $25,485 million. The first budget deficit since the end of the war also appeared in 1931, marking the deepening of the Great Depression and a move away from the fiscal orthodoxy of the past.

Source: “Principal of the Public Debt, Fiscal Years 1790-1975,” Annual Report of the Secretary of the Treasury on the State of the Finances, Statistical Appendix. (Washington, DC: Government Printing Office, 1975), 62-63.

Source: Robert Sahr, Oregon State University. URL: http://oregonstate.edu/Dept/pol_sci/fac/sahr/sahrhome.htm.

The New Era, 1932-1975

(This discussion focuses on figures 15 and 17. Also See Figures 18, 19, and 20.) It was Roosevelt who first experimented with deficit spending to pull the economy out of depression and to stimulate jobs through the creation of public works programs and other elements of his New Deal. Though taxes were raised on the wealthy, the depressed state of the economy meant government revenues were far too low to finance the New Deal. As a result, Roosevelt in his first year created a budget deficit almost six times greater than that of Hoover’s last year in office. Between 1931 and 1941, the public debt tripled in size, standing at $48,961 million upon the United States’ entry into World War II. To help fund the debt and get hoarded money back into circulation, the Treasury introduced the United States Savings Bond. Nonmarketable with a guaranteed redemption value at any point in the life of the security and a denomination as low as $25, the savings bond was aimed at small investors fearful of continued bank collapses. With the advent of war, these bonds became War Savings Bonds and were the focus of the eight war drives of World War II, which also included Treasury bonds and certificates of indebtedness. The public debt reached a height of $269,422 million because of the war.

The experience of the New Deal, combined with the low unemployment and victory of wartime, seemed to confirm Keynesian theories and reduce the fear of budget deficits. In 1946, Congress passed the Employment Act, committing the government to the pursuit of low unemployment through government intervention in the economy, which could include deficit spending. Though Truman and Eisenhower promoted some government intervention in the economy, they were still economic traditionalists at heart and sought to pay down the public debt as much as possible. And, despite massive foreign aid, a sharp recession in the late 1950s, and large-scale foreign military deployments, including the Korean War, these two presidents were able to present budget surpluses more than 50% of the time and limit the growth of the public debt to an average of $1,000 million per year. From 1960 to 1975, there would only be one year of budget surplus and the public debt would grow at an average rate of $17,040 million per year. It was with the 1960 election and the arrival of the Kennedy administration that the “new economics,” or Keynesianism, came into full flower within the government. In the 1960s and 1970s, tax cuts and increased domestic spending were pursued not only to improve society but also to move the economy toward full employment. However, these economic stimulants were not just applied on down cycles of the economy but also on up cycles, resulting in ever-growing deficits. Added to this domestic spending were the continued outlays on military deployments overseas, including Vietnam, and borrowings in foreign markets to prop up the value of the dollar. During boom years, government revenues did increase but never enough to outpace spending. The exception was 1969, when a high rate of inflation boosted nominal revenues, which were offset by the increased nominal cost of servicing the debt. By 1975, the United States was suffering from the high inflation and high unemployment of stagflation, and the budgetary deficits seemed to take on a life of their own. Each downturn in the economy brought smaller revenues, aggravated by tax cuts, while spending soared because of increased welfare and unemployment benefits and other government spending aimed at spurring job creation. The net result was an ever-increasing charge on the public debt and the huge numbers that have concerned so many in the past (and present).
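A rough calculation makes the scale of the shift between the two average rates just quoted easier to see. The sketch below uses only the per-year figures given in the text and treats each span as roughly fifteen years; it is back-of-the-envelope arithmetic, not a tabulation of actual year-end debt levels.

```python
# Back-of-the-envelope comparison of the two eras, using only the average
# annual increases quoted above; each span is treated as roughly 15 years.
avg_growth_1946_1960 = 1_000    # millions of dollars per year (from the text)
avg_growth_1960_1975 = 17_040   # millions of dollars per year (from the text)
years = 15                      # approximate length of each span

print(f"Added ca. 1946-1960: about ${avg_growth_1946_1960 * years:,} million")
print(f"Added 1960-1975:     about ${avg_growth_1960_1975 * years:,} million")
print(f"Ratio of the two rates: about {avg_growth_1960_1975 / avg_growth_1946_1960:.0f}x")
```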

Source: Nominal figures from “Principal of the Public Debt, Fiscal Years 1790-1975,” Annual Report of the Secretary of the Treasury on the State of the Finances, Statistical Appendix. (Washington, DC: Government Printing Office, 1975), 62-63; real figures adjust for inflation and are provided by Robert Sahr, Oregon State University. URL: http://oregonstate.edu/Dept/pol_sci/fac/sahr/sahrhome.htm.

Source: Derived from figures provided by Robert Sahr, Oregon State University. URL: http://oregonstate.edu/Dept/pol_sci/fac/sahr/sahrhome.htm.

Source: Robert Sahr, Oregon State University. URL: http://oregonstate.edu/Dept/pol_sci/fac/sahr/sahrhome.htm.

We end this study in 1975 and the passage of the Budget Control Act. Formally entitled the Congressional Budget and Impoundment Control Act of 1974, it was passed on July 12, 1974, shortly after the start of fiscal year 1975. Some of the most notable provisions of the act were the establishment of House and Senate Budget Committees, creation of the Congressional Budget Office, and removal of impoundment authority from the President. Impoundment was the President’s ability to refrain from spending funds authorized in the budget. For example, if a government program ended up not spending all the money allotted it, the President (or more specifically the Treasury under the President’s authority) did not have to pay out the unneeded money. Or, if the President did not want to fund a project passed by Congress in the budget, he could in effect veto it by instructing the Treasury not to release the money. In sum, the Budget Control Act shifted the balance of budgetary power to the Congress from the executive branch. The effect was to weaken restraints on Congressional spending and contribute to the increased deficits and sharp, upward growth in the public debt over the next couple of decades. (See Figures 1, 19, and 20.)

But the Budget Control Act was a watershed for the public debt not only in its rate of growth but also in the way it was recorded and reported. The act changed the fiscal year (the twelve-month period used to determine income and expenses for accounting purposes) from a year running July 1 through June 30 to one running October 1 through September 30. The Budget Control Act also initiated the reporting system currently used by the Bureau of the Public Debt to report on the public debt. Fiscal year 1975 saw the first publication of the Monthly Statement of the Public Debt of the United States. For the first time, it reported the public debt in the structure we examined above, a structure still used by the Treasury today.

Conclusion

The public debt from 1861 to 1975 was the product of many factors. First, it was the result of accountancy on the part of the United States Treasury. Only certain obligations of the United States fall into the definition of the public debt. Second, the debt was the effect of Treasury debt management decisions as to what debt instruments or securities were to be used to finance the debt. Third, the public debt was fundamentally a product of budget deficits. Massive government spending in itself did not create deficits and add to the debt. It was only when revenues were not sufficient to offset the spending that deficits and government borrowing were necessary. At times, as during wartime or severe recessions, deficits were largely unavoidable. The change that occurred between 1861 and 1975 was the attitude among the government and the public toward budget deficits. Until the Great Depression, deficits were seen as injurious to the public good, and the public debt was viewed with unease as something the country could really do without. After the Great Depression, deficits were still not welcomed but were now viewed as a necessary tool needed to aid in economic recovery and the creation of jobs. Post-World War II rising expectations of continuous economic growth and high employment at home and the extension of United States’ power abroad spurred the use of deficit spending. And, the belief among some influential Keynesians that more tinkering with the economy was all that was needed to fix a stagflating economy created an almost self-perpetuating growth of the public debt. In the end, the history of the public debt is not so much about accountancy or Treasury securities as about national ambitions, politics, and economic theories.

Annotated Bibliography

Though much has been written about the public debt, very little of it is of any real use in economic analysis or learning the history of the public debt. Most books deal with an ever-pending public debt crisis and give policy recommendations on how to solve the problem. However, there are a few recommendations:

Annual Report of the Secretary of the Treasury on the State of the Finances. Washington, DC: Government Printing Office, annual editions through 1980.

This is the basic source for all information on the public debt until 1980.

Bayley, Rafael A. The National Loans of the United States from July 4, 1776, to June 30, 1880. Second edition. Facsimile reprint. New York: Burt Franklin, 1970 [1881].

This is the standard work on early United States financing written by a Treasury bureaucrat.

Bureau of the Public Debt. “The Public Debt Online.” URL: http://www.publicdebt.treas.gov/opd/opd.htm.

Provides limited data on the public debt, but provides all past issues of the Monthly Statement of the Public Debt.

Conklin, George T., Jr. “Treasury Financial Policy from the Institutional Point of View.” Journal of Finance 8, no. 2 (May 1953): 226-34.

This is a contemporary’s disapproving view of the growing acceptance of the “new economics” that appeared in the 1930s.

Gordon, John Steele. Hamilton’s Blessing: the Extraordinary Life and Times of Our National Debt. New York: Penguin, 1998.

This is a very readable, brief overview of the history of the public debt.

Love, Robert A. Federal Financing: A Study of the Methods Employed by the Treasury in Its Borrowing Operations. Reprint of 1931 edition. New York: AMS Press, 1968.

This is the most complete and thorough account of the structure of the public debt. Unfortunately, it only goes up to 1925.

Noll, Franklin. A Guide to Government Obligations, 1861-1976. Unpublished ms. 2004.

This is a descriptive inventory and chronological listing of the roughly 12,000 securities issued by the Treasury between 1861 and 1976.

Office of Management and Budget. “Historical Tables.” Budget of the United States Government, Fiscal Year 2005. URL: http://www.whitehouse.gov/omb/budget/fy2005/pdf/hist.pdf.

Provides data on the public debt, budgets, and federal spending, but reports focus on the latter twentieth century.

Sahr, Robert. “National Government Budget.” URL: http://oregonstate.edu/Dept/pol_sci/fac/sahr/sahr.htm.

This is a valuable web site containing a useful collection of detailed graphs on government spending and the public debt.

Withers, William. The Public Debt. New York: John Day Company, 1945.

Like Conklin’s article, this is a contemporary’s view of the change in perspectives on the public debt occurring in the 1930s. Withers tends to favor the “new economics.”

Citation: Noll, Franklin. “The United States Public Debt, 1861 to 1975”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/the-united-states-public-debt-1861-to-1975/

Mechanical Cotton Picker

Donald Holley, University of Arkansas at Monticello

Until World War II, the Cotton South remained poor, backward, and unmechanized. With minor exceptions, most tasks — plowing, cultivating, and finally harvesting cotton — were done by hand. Sharecropping stifled the region’s attempts to mechanize, and too many farmers, both tenants and owners, were trying to survive on small, uneconomical farms, trapping themselves in poverty. From 1910 to 1970 the Great Migration, which included whites as well as blacks, reduced the region’s oversupply of small farmers and embodied a tremendous success story for both migrants and the region itself. The mechanical cotton picker played an indispensable role in the transition from the prewar South of overpopulation, sharecropping, and hand labor to the capital-intensive agriculture of the postwar South.

Inventions and Inventors

In 1850 Samuel S. Rembert and Jedediah Prescott of Memphis, Tennessee, received the first patent for a cotton harvester from the U.S. Patent Office, but it was almost a century later that a mechanical picker was commercially produced. The late nineteenth century was an age of inventions, and many inventors sought to perfect a mechanical cotton harvester. Their lack of success reinforced the belief that cotton would always be picked by hand. For almost a hundred years, it seemed, a successful cotton picker had been just around the corner.

Inventors experimented with a variety of devices that were designed to pick cotton.

  • Pneumatic harvesters removed cotton fiber from the bolls with suction or a blast of air.
  • Electrical cotton harvesters used a statically charged belt or finger to attract the lint and remove it from the boll.
  • The thresher type cut down the plant near the surface of the ground and took the entire plant into the machine, where the cotton fiber was separated from the vegetable material.
  • The stripper type harvester combed the plant with teeth or drew it between stationary slots or teeth.
  • The picker or spindle type machine was designed to pick the open cotton from the bolls using spindles, fingers, or prongs, without injuring the plant’s foliage and unopened bolls.

The picker or spindle idea drew the most attention. In the 1880s Angus Campbell, an agricultural engineer from Chicago, Illinois, observed the tedious process of picking cotton. For twenty years he made annual trips to Texas to test the latest model of his spindle picker, but his efforts met with ridicule; the consensus of opinion remained that cotton would always be picked by hand. Campbell joined with Theodore H. Price and formed the Price-Campbell Cotton Picker Corporation in 1912. The Price-Campbell machine performed poorly, but the partners believed they were on the right track.

Hiram M. Berry of Greenville, Mississippi, designed a picker with barbed spindles, though it was never perfected. Peter Paul Haring of Goliad, Texas, worked for thirty years to build a mechanical cotton picker using curved prongs or corkscrews.

John Rust

John Rust, the man who was ultimately credited with the invention of the mechanical cotton picker, personified the popular image of the lone inventor working in his garage. As a boy, he had picked cotton himself, and he dreamed that he could invent a machine that would relieve people of one of the most onerous forms of stoop labor.

John Daniel Rust was born in Texas in 1892. He was usually associated with his younger brother Mack Donald Rust, who had a degree in mechanical engineering. Mack did the mechanical work, while John was the dreamer who worried about the social consequences of their invention.

John was intrigued with the challenge of constructing a mechanical cotton picker. Other inventors had used spindles with barbs, which twisted the fibers around the spindle and pulled the lint from the boll. But the problem was how to remove the lint from the barbs. The spindle soon became clogged with lint, leaves, and other debris. He finally hit on the answer: use a smooth, moist spindle. As he later recalled:

The thought came to me one night after I had gone to bed. I remembered how cotton used to stick to my fingers when I was a boy picking in the early morning dew. I jumped out of bed, found some absorbent cotton and a nail for testing. I licked the nail and twirled it in the cotton and found that it would work.

By the mid-1930s the widespread use of mechanical cotton harvesters seemed imminent and inevitable. When in 1935 the Rust brothers moved to Memphis, the self-styled headquarters of the Cotton South, John Rust announced flatly, “The sharecropper system of the Old South will have to be abandoned.” The Rust picker could do the work of between 50 and 100 hand pickers, reducing labor needs by 75 percent. Rust expected to put the machine on the market within a year. A widely read article in the American Mercury entitled “The Revolution in Cotton” predicted the end of the entire plantation system. Most people compared the Rust picker with Eli Whitney’s cotton gin.

Rust’s 1936 Public Demonstration

In 1936, the Rust machine received a public trial at the Delta Experiment Station near Leland, Mississippi. Though the Rust picker was not perfected, it did pick cotton and it picked it well. The machine produced a sensation, sending a shudder throughout the region. The Rust brothers’ machine provoked the fear that a mechanical picker would destroy the South’s sharecropping system and, during the Great Depression, throw millions of people out of work. An enormous human tragedy would follow, releasing a flood of rural migrants, mostly black, on northern cities. The Jackson (Miss.) Daily News editorialized that the Rust machine “should be driven right out of the cotton fields and sunk into the Mississippi River.”

Soon a less strident and more balanced view emerged. William E. Ayres, head of the Delta Experiment Station, encouraged Rust:

We sincerely hope you can arrange to build and market your machine shortly. Lincoln emancipated the Southern Negro. It remains for cotton harvesting machinery to emancipate the Southern cotton planter. The sooner this [is] done, the better for the entire South.

Professional agricultural men saw the mechanization of cotton as a gradual process. The cheap price of farm labor in the depression had slowed the progress of mechanization. Still, the prospects for the future were grim. One agricultural economist predicted that mechanical cotton picking would become reality over the next ten or fifteen years.

Cotton Harvester Sweepstakes

International Harvester

Major farm implement companies, which had far more resources than did the Rust brothers, entered what may be called the cotton harvester sweepstakes. Usually avoiding publicity, implement companies were happy to let the Rust brothers bear the brunt of popular criticism. International Harvester (IH) of Chicago, Illinois, had invented the popular Farmall tractor in 1924 and then experimented with pneumatic pickers. After three years of work, Harvester realized that a skilled hand picker could easily pick faster than its pneumatic machine.

IH then bought up the Price-Campbell patents and turned to spindle pickers. By the late 1930s Harvester was sending a caravan southward every fall to test its latest prototype, picking early cotton in Texas and late-maturing cotton in Arkansas and Mississippi. In 1940 chief engineer C. R. Hagen abandoned the idea of a tractor that pulled the picking unit. Instead, the tractor was driven backward, enabling the mounted picking unit to encounter the cotton plants first; the transmission was reversed so that the machine still used forward gears.

After the 1942 caravan, Fowler McCormick, chairman of the board of International Harvester, formally announced that his company had a commercial cotton picker ready for production. The IH picker was a one-row, spindle-type picker, but unlike the Rust machine it used a barbed spindle, which improved its ability to snag cotton fibers. This machine employed a doffer to clean the spindles before the next rotation. Unfortunately, the War Production Board allocated IH only enough steel to continue production of experimental models; IH was unable to start full-scale production until after World War II was over.

In late 1944, as World War II entered its final months, attention turned to a dramatic announcement. The Hopson Planting Company near Clarksdale, Mississippi, produced the first cotton crop totally without the use of hand labor. Machines planted the cotton, chopped it, and harvested the crop. It was a stunning achievement that foretold the future.

IH’s Memphis Factory, 1949

After the war, International Harvester constructed Memphis Works, a huge cotton picker factory located on the north side of the city, and manufactured the first pickers in 1949. Though the company had assembled experimental models for testing purposes, this event marked the first commercial production of mechanical cotton pickers. The plant’s location clearly showed that the company aimed its pickers for use in the cotton areas of the Mississippi River Valley.

Deere

Deere and Company of Moline, Illinois, had experimented with stripper-type harvesters and variations of the spindle idea, but discontinued these experiments in 1931. In 1944 the company resumed work after buying the Berry patents, though Deere’s machine incorporated its own innovative designs. Deere quickly regained the ground it had lost during the depression. In 1950, Deere’s Des Moines Works at Ankeny, Iowa, began production of a two-row picker that could do almost twice the harvesting job of one-row machines.

Allis Chalmers

Despite his success, John Rust realized that his picker was substandard, and during World War II he went back to his drafting board and redesigned his entire machine. His lack of financial resources was overcome when he received an offer from Allis Chalmers of Indianapolis, Indiana, to produce machines using his patents. He signed a non-exclusive agreement.

Pearson

In late 1948 cotton farmers near Pine Bluff, Arkansas, suffered from a labor shortage. Since cotton still stood unpicked in the fields at the end of the year, they invited Rust to demonstrate his picker. The demonstration was a success. Rust entered into an agreement with Ben Pearson, a Pine Bluff company known for archery equipment, to produce 100 machines for $1,000 each, paid in advance. All the machines were sold, and Ben Pearson hired Rust as a consultant and manufactured Rust cotton pickers.

Ancillary Developments

The mechanization of cotton did indeed proceed slowly. The production of cotton involved three distinct “labor peaks”: land breaking, planting, and cultivating; thinning and weeding; and harvesting. Until the 1960s cotton growers did not have a full set of technological tools to mechanize all labor peaks.

Weed Control

Weed control was the last labor peak to be conquered. Desperate to solve the problem, farmers cross-cultivated their cotton, plowing across rows as well as up and down rows. Taking advantage of the toughness of cotton stalks, flame weeders used a flammable gas to kill weeds. The most peculiar sight in northeast Arkansas was flocks of weed-hungry geese that sauntered through cotton fields. The weed problem was solved not by machines but by chemicals. In 1964, the preemergence herbicide Treflan became a household word because of a television commercial. Ultimately, the need to chop and thin cotton was a problem of plant genetics.

Western cotton growers embraced mechanization earlier than did southern farmers. As early as 1951, more than half of California’s cotton crop was mechanically harvested, with hand picking virtually eliminated by the 1960s. Environmental conditions in the West produced smaller cotton plants, not the “rank” cotton found in the Delta, and small plants favored machine picking. Western farmers also did not have to overcome the burden of an antiquated labor system. (See Figure 1.)

Figure 1. Machine Harvested Cotton as a Percentage of the Total Cotton Crop, Arkansas, California, South Carolina, and U.S. Average, 1949-1972

Source: United States Department of Agriculture, Economic Research Service. Statistics on Cotton and Related Data, 1920-1973, Statistical Bulletin No. 535 (Washington: Government Printing Office, 1974), 218.

Mechanization and Migration

The most controversial issue raised by the introduction of the mechanical cotton harvester has been its role in the Great Migration. Popular opinion has accepted the view that machines eliminated jobs and forced poor families to leave their homes and farms in a forlorn search for urban jobs. On the other hand agricultural experts argued that mechanization was not the cause, but the result of economic change in the Cotton South. Wartime and postwar labor shortages were the major factors in stimulating the use of machines in cotton fields. Most of the out-migration from the South stemmed from a desire to obtain high paying jobs in northern industries, not from an “enclosure” movement motivated by landowners who mechanized as rapidly as possible. Indeed, the South’s cotton farmers were often reluctant to make the transition from hand labor, which was familiar and workable, to machines, which were expensive and untried.

Holley (2000) used an empirical analysis to compare the impact of mechanization and manufacturing wages on the labor available for picking cotton. The result showed that mechanization accounted for less than 40 percent of the decrease in handpicking, while the remainder, more than 60 percent, was attributed to the decrease in the supply of labor caused by higher wages in manufacturing industries. Hand labor was pulled out of the Cotton South by higher industrial wages rather than displaced by job-destroying machines.

Timing of Migration

The evidence is overwhelming that migration greatly accelerated mechanization. The first commercial mechanical cotton pickers were manufactured in 1949, and these machines did not exist in large numbers until the early 1950s. Since the Great Migration began during World War I, mechanical pickers cannot have played any causal role in the first four decades of the migration. By 1950, soon after the first mechanical cotton pickers were commercially available, over six million migrants had already left the South. (See Table 1.) A decade later, most of the nation’s cotton was still hand picked. Only by the late 1960s, when the migration was losing momentum, did machines harvest virtually the entire cotton crop.

Table 1
Net Migration from the South, by Race, 1870-1970 (thousands)

Decade Native White Black Total
1870-1880 91 -68 23
1880-1890 -271 88 -183
1890-1900 -30 -185 -215
1900-1910 -69 -194 -218
1910-1920 -663 -555 -1,218
1920-1930 -704 -903 -1,607
1930-1940 -558 -480 -1,038
1940-1950 -866 -1,581 -2,447
1950-1960* -1,003* -1,575* -2,578
1960-1970* -508* -1,430* -1,938
Totals for 1940-1970 -2,377 -4,586 -6,963

Source: Hope T. Eldridge and Dorothy S. Thomas, Population Redistribution and Economic Growth, vol. 3 (Philadelphia: American Philosophical Society, 1964), 90. *United States Bureau of the Census, Historical Statistics of the United States: Colonial Times to 1970 (Washington: Government Printing Office, 1975), Series C 55-62, pp. 93-95.
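The arithmetic behind the statement that over six million migrants had already left the South by 1950 can be checked directly against the decade totals in Table 1. The short sketch below simply sums the figures from the table (in thousands); it adds no data beyond what the table reports.

```python
# Decade totals of net out-migration from the South, in thousands,
# taken directly from Table 1.
decade_totals = {
    "1910-1920": -1_218,
    "1920-1930": -1_607,
    "1930-1940": -1_038,
    "1940-1950": -2_447,
    "1950-1960": -2_578,
    "1960-1970": -1_938,
}

# Net migration from 1910 through 1950, i.e., before mechanical pickers
# existed in large numbers: more than six million people.
through_1950 = sum(v for k, v in decade_totals.items() if k <= "1940-1950")
print(f"Net migration, 1910-1950: {through_1950:,} thousand")

# Totals for 1940-1970, matching the last row of the table (-6,963).
totals_1940_1970 = sum(v for k, v in decade_totals.items() if k >= "1940-1950")
print(f"Net migration, 1940-1970: {totals_1940_1970:,} thousand")
```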

Migration figures also provide a comparison of statewide migration estimates in Arkansas, Louisiana, and Mississippi with estimates for counties that actually used mechanical pickers (79 of 221 counties or parishes). During the 1950s these counties accounted for less than half of the total white migration from the three-state region and just over half of the black migration. The same was true in the 1960s except that the white population showed a net gain, not a loss. (See Table 2.) Though push factors played some role in the migration, pull factors were more important. People deserted the cotton areas because they hoped to obtain better jobs and more money elsewhere.

Table 2
Estimated Statewide Migration, Arkansas, Louisiana, and Mississippi
Compared to Migration Estimates for Cotton Counties, 1950-1970

 

1950-1960
              State as a Whole    Counties Using Mechanical Pickers    Percentage
White
Arkansas          -283,000            -106,388                           37.6
Louisiana           43,000             -15,769                           36.7
Mississippi       -110,000             -50,997                           46.4
Totals            -350,000            -173,154                           49.6
Black
Arkansas          -150,000             -74,297                           49.5
Louisiana          -93,000             -42,151                           45.3
Mississippi       -323,000            -175,577                           54.4
Totals            -566,000            -292,025                           51.6

1960-1970
              State as a Whole    Counties Using Mechanical Pickers    Percentage
White
Arkansas            38,000             -26,026                           68.5
Louisiana           26,000             -28,949*                         111.3
Mississippi         10,000                -771                            7.7
Totals              74,000             -55,746                           75.3
Black
Arkansas          -112,000             -64,445                           57.5
Louisiana         -163,000             -62,290                           38.2
Mississippi       -279,000            -152,357                           54.6
Totals            -554,000            -279,092                           50.4

Source: Donald Holley. The Second Great Emancipation: The Mechanical Cotton Picker, Black Migration, and How They Shaped the Modern South (Fayetteville: University of Arkansas Press, 2000), 178.

*The selected counties lost population, but Louisiana statewide recorded a population gain for the decade.
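The Percentage column in Table 2 is simply the county estimate expressed as a share of the statewide estimate (in absolute value, since a few statewide figures are net gains). A small check, using figures taken directly from the table:

```python
# The Percentage column of Table 2: the county estimate as a share of the
# statewide estimate, in absolute value (a few statewide figures are gains).
def county_share(statewide, counties):
    return abs(counties) / abs(statewide) * 100

# Examples taken from Table 2, White migration, 1950-1960.
print(f"Arkansas:    {county_share(-283_000, -106_388):.1f}%")   # 37.6
print(f"Louisiana:   {county_share(43_000, -15_769):.1f}%")      # 36.7
print(f"Mississippi: {county_share(-110_000, -50_997):.1f}%")    # 46.4
```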

Most of the Arkansas migrants, for example, were young people from farm families who saw little future in agriculture. They were people with skills and thus possessed high employment potential. They also had better than average educations. In other words, they were not a collection of pathetic sharecroppers who had been driven off the land.

Conclusion

During and after World War II, the Cotton South was caught up in a complex interplay of economic forces. The region suffered shortages of agricultural labor during the war, which led to the collapse of the old plantation system. The number of tenant farmers and sharecroppers declined precipitously, and the U.S. Department of Agriculture stopped counting them after its 1959 census. The structure of southern agriculture changed as the number of farms declined steadily, while the size of farms increased. The age of Agri-Business had arrived.

The migration solved the long-standing problem of rural overpopulation, and did so without producing social upheaval. The migrants found jobs and improved their living standards, and simultaneously rural areas were relieved of their overpopulation. The migration also enabled black people to gain political clout in northern and western cities, and since Jim Crow was in part a system of labor control, the declining need for black labor in the South loosened the ties of segregation.

After World War II southern farmers faced a world that had changed. While the Civil War had freed the slaves, the mechanical cotton picker emancipated workers from backbreaking labor and emancipated the region itself from its dependence on cotton and sharecropping. Indeed, mechanization made possible the continuation of cotton farming in the post-plantation era. Yet cotton acreages declined as farmers moved into rice and soybeans, crops that were already mechanized, creating a more diversified agricultural economy. The end of sharecropping also signaled the end of the need for cheap, docile labor — always a prerequisite of plantation agriculture. The labor control that the South had always exercised over poor whites and blacks proved unattainable after the war. Thus the mechanization of cotton was an essential condition for the civil rights movement in the 1950s, which freed the region from Jim Crow. The relocation of political power from farms to cities was a related by-product of agricultural mechanization. In the second half of the twentieth century, the South underwent a second great emancipation, as revolutionary changes that earlier had been unattainable and even unimaginable swept the region.

Selected Bibliography

Carlson, Oliver. “Revolution in Cotton.” American Mercury 34 (February 1935): 129-36. Reprinted in Readers’ Digest 26 (March 1935): 13-16.

Cobb, James C. The Most Southern Place on Earth: The Mississippi Delta and the Roots of Regional Identity. New York: Oxford University Press, 1992.

Day, Richard H. “The Economics of Technological Change and the Demise of the Sharecropper.” American Economic Review 57 (June 1967): 427-49.

Drucker, Peter. “Exit King Cotton.” Harper’s 192 (May 1946): 473-80.

Fite, Gilbert C. Cotton Fields No More: Southern Agriculture, 1865-1980. Lexington: University Press of Kentucky, 1984.

Hagen, C. R. “Twenty-Five Years of Cotton Picker Development.” Agricultural Engineering 32 (November 1951): 593-96, 599.

Hamilton, C. Horace. “The Social Effects of Recent Trends in the Mechanization of Agriculture.” Rural Sociology 4 (March 1939): 3-19.

Heinicke, Craig. “African-American Migration and Mechanized Cotton Harvesting, 1950-1960.” Explorations in Economic History 31 (October 1994): 501-20.

Holley, Donald. The Second Great Emancipation: The Mechanical Cotton Picker, Black Migration, and How They Shaped the Modern South. Fayetteville: University of Arkansas Press, 2000.

Johnston, Oscar. “Will the Machine Ruin the South?” Saturday Evening Post 219 (May 31, 1947): 36-37, 94-95, 388.

Maier, Frank H. “An Economic Analysis of Adoption of the Mechanical Cotton Picker.” Ph.D. dissertation, University of Chicago, 1969.

Peterson, Willis, and Yoav Kislev. “The Cotton Harvester in Retrospect: Labor Displacement or Replacement.” Journal of Economic History 46 (March 1986): 199-216.

Rasmussen, Wayne D. “The Mechanization of Agriculture.” Scientific American 247 (September 1982): 77-89.

Rust, John. “The Origin and Development of the Cotton Picker.” West Tennessee Historical Society Papers 7 (1953): 38-56.

Street, James H. The New Revolution in the Cotton Economy: Mechanization and Its Consequences. Chapel Hill: University of North Carolina Press, 1957.

Whatley, Warren C. “New Estimates of the Cost of Harvesting Cotton: 1949-1964.” Research in Economic History 13 (1991): 199-225.

Whatley, Warren C. “A History of Mechanization in the Cotton South: The Institutional Hypothesis.” Quarterly Journal of Economics 100 (November 1985): 1191-1215.

Wright, Gavin. Old South, New South: Revolutions in the Southern Economy since the Civil War. New York: Basic Books, 1986.

Citation: Holley, Donald. “Mechanical Cotton Picker”. EH.Net Encyclopedia, edited by Robert Whaples. June 16, 2003. URL http://eh.net/encyclopedia/mechanical-cotton-picker/