
Monetary Unions

Benjamin J. Cohen, University of California at Santa Barbara

Monetary tradition has long assumed that, in principle, each sovereign state issues and manages its own exclusive currency. In practice, however, there have always been exceptions — countries that elected to join together in a monetary union of some kind. Not all monetary unions have stood the test of time; in fact, many past initiatives have long since passed into history. Yet interest in monetary union persists, stimulated in particular by the example of the European Union’s Economic and Monetary Union (EMU), which has replaced a diversity of national monies with one joint currency called the euro. Today, the possibility of monetary union is actively discussed in many parts of the world.

A monetary union may be defined as a group of two or more states sharing a common currency or equivalent. Although some sources extend the definition to include the monetary regimes of national federations such as the United States or of imperial agglomerations such as the old Austro-Hungarian Empire, the conventional practice is to limit the term to agreements among units that are recognized as fully sovereign states under international law. The antithesis of a monetary union, of course, is a national currency with an independent central bank and a floating exchange rate.

In the strictest sense of the term, monetary union means complete abandonment of separate national currencies and full centralization of monetary authority in a single joint institution. In reality, considerable leeway exists for variations of design along two key dimensions. These dimensions are institutional provisions for (1) the issuing of currency and (2) the management of decisions. Currencies may continue to be issued by individual governments, tied together in an exchange-rate union. Alternatively, currencies may be replaced not by a joint currency but rather by the money of a larger partner — an arrangement generically labeled dollarization after the United States dollar, the money that is most widely used for this purpose. Similarly, monetary authority may continue to be exercised in some degree by individual governments or, alternatively, may be delegated not to a joint institution but rather to a single partner such as the United States.

In political terms, monetary unions divide into two categories, depending on whether national monetary sovereignty is shared or surrendered. Unions based on a joint currency or an exchange-rate union in effect pool monetary authority to some degree. They are a form of partnership or alliance of nominal equals. Unions created by dollarization are more hierarchical, a subordinate follower-leader type of regime.

The greatest attraction of a monetary union is that it reduces transactions costs as compared with a collection of separate national currencies. With a single money or equivalent, there is no need to incur the expense of currency conversion or hedging against exchange risk in transactions among the partners. But there are also two major economic disadvantages for governments to consider.

First, individual partners lose control of both the money supply and exchange rate as policy instruments to cope with domestic or external disturbances. Against a monetary union’s efficiency gains at the microeconomic level, governments must compare the cost of sacrificing autonomy of monetary policy at the macroeconomic level.

Second, individual partners lose the capacity derived from an exclusive national currency to augment public spending at will via money creation — a privilege known as seigniorage. Technically defined as the excess of the nominal value of a currency over its cost of production, seigniorage can be understood as an alternative source of revenue for the state beyond what can be raised by taxes or by borrowing from financial markets. Sacrifice of the seigniorage privilege must also be compared against a monetary union’s efficiency gains.
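
A purely illustrative arithmetic example of this definition (the figures are hypothetical, not drawn from any actual currency): if a note with a nominal value of 100 costs 3 to produce, the issuing authority's seigniorage on that note is

seigniorage = nominal value - cost of production = 100 - 3 = 97.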

The seriousness of these two losses will depend on the type of monetary union adopted. In an alliance-type union, where authority is not surrendered but pooled, monetary control is delegated to the union’s joint institution, to be shared and in some manner collectively managed by all the countries involved. Hence each partner’s loss is simultaneously every other partner’s gain. Though individual states may no longer have much latitude to act unilaterally, each government retains a voice in decision-making for the group as a whole. Losses will be greater with dollarization, which by definition transfers all monetary authority to the dominant power. Some measure of seigniorage may be retained by subordinate partners, but only with the consent of the leader.

The idea of monetary union among sovereign states was widely promoted in the nineteenth century, mainly in Europe, despite the fact that most national currencies were already tied together closely by the fixed exchange rates of the classical gold standard. Further efficiency gains could be realized, proponents argued, while little would be lost at a time when activist monetary policy was still unknown.

“Universal Currency” Fails, 1867

Most ambitious was a projected “universal currency” to be based on equivalent gold coins issued by the three biggest financial powers of the day: Britain, France, and the United States. As it happened, the gold content of French coins at the time was such that a 25-franc piece — not then in existence but easily mintable — would have contained 112.008 grains of gold, very close to both the English sovereign (113.001 grains) and the American half-eagle, worth five dollars (116.1 grains). Why not, then, seek some sort of standardization of coinage among the three countries to achieve the equivalent of a single money? That was the proposal of a major monetary conference sponsored by the French Government to coincide with an international exposition in Paris in 1867. Delegates from some twenty countries, with the critical exception of Britain’s representatives, enthusiastically supported creation of a universal currency based on a 25-franc piece and called for appropriate reductions in the gold content of the sovereign and the half-eagle. In the end, however, no action was taken by either London or Washington, and for lack of sustained political support the idea ultimately faded away.

Latin Monetary Union

Two years before the 1867 conference, however, the French Government did succeed in gaining agreement for a more limited initiative — the Latin Monetary Union (LMU). Joining Belgium, Italy, and Switzerland together with France, the LMU was intended to standardize the existing gold and silver coinages of all four countries. Greece adhered to the terms of the LMU in 1868, though it did not become a formal member until 1876. In practical terms, a monetary partnership among these countries had already begun to coalesce even earlier as a result of independent decisions by Belgium, Greece, Italy, and Switzerland to model their currency systems on that of France. Each state chose to adopt a basic unit equal in value to the French franc — actually called a franc in Belgium and Switzerland — with equivalent subsidiary units defined according to the French-inspired decimal system. Starting in the 1850s, however, serious Gresham’s Law problems developed as a result of differences in the weight and fineness of silver coins circulating in each country. The LMU established uniform standards for national coinages and, by making each member’s money legal tender throughout the Union, effectively created a wider area for the circulation of a harmonized supply of specie coins. In substance, the LMU was a formal exchange-rate union, with authority for the management of participating currencies remaining with each separate government.

As a group, members were distinguished from other countries by the reciprocal obligation of their central banks to accept one another’s coins at par and without limit. Soon after its founding, however, beginning in the late 1860s, the LMU was subjected to considerable strain owing to a global glut of silver production. The resulting depreciation of silver eventually led to a suspension of silver coinage by all the partners, effectively transforming the LMU from a bimetallic standard into what came to be called a “limping gold standard.” Even so, the arrangement managed to hold together until the generalized breakdown of global monetary relations during World War I. The LMU was not formally dissolved until 1927, following Switzerland’s decision to withdraw during the previous year.

Scandinavian Monetary Union

A similar arrangement also emerged in northern Europe — the Scandinavian Monetary Union (SMU), formed in 1873 by Sweden and Denmark and joined two years later by Norway. The SMU, too, was an exchange-rate union designed to standardize existing coinages, although unlike the LMU it was based from the start on a monometallic gold standard. The Union established the krone (crown) as a uniform unit of account, with national currencies permitted full circulation as legal tender in all three countries. As in the LMU, members of the SMU were distinguished from outsiders by the reciprocal obligation to accept one another’s currencies at par and without limit; also as in the LMU, mutual acceptability was initially limited to gold and silver coins. In 1885, however, the three members went further, agreeing to accept one another’s bank notes and drafts as well, thus facilitating free intercirculation of all paper currency and eventually eliminating exchange-rate quotations among the three moneys altogether. By the turn of the century the SMU had come to function, in effect, as a single unit for all payments purposes, until relations were disrupted by the suspension of convertibility and the floating of individual currencies at the start of World War I. Despite subsequent efforts during and after the war to restore at least some elements of the Union, particularly following the members’ return to the gold standard in the mid-1920s, the agreement was finally abandoned after the global financial crisis of 1931.

German Monetary Union

Repeated efforts to standardize coinages were made as well by various German states prior to Germany’s political union, but with rather less success. Early accords, following the start of the Zollverein (the German region’s customs union) in 1834, ostensibly established a German Monetary Union — technically, like the LMU and SMU, also an exchange-rate union — but in fact divided the area into two quite distinct currency alliances: one encompassing most northern states, using the thaler as its basic monetary unit; and a second including states in the south, based on the florin (also known as the guilder or gulden). Free intercirculation of coins was guaranteed in both groups but not at par: the exchange rate between the two units of account was fixed at one thaler for 1.75 florins (formally, 14:24.5) rather than one-for-one. Moreover, states remained free to mint non-standardized coins in addition to their basic units, and many important German states (e.g., Bremen, Hamburg, and Schleswig-Holstein) chose to stay outside the agreement altogether. Nor were matters helped much by the short-lived Vienna Coinage Treaty signed with Austria in 1857, which added yet a third currency, Austria’s own florin, to the mix with a value slightly higher than that of the south German unit. The Austro-German Monetary Union was dissolved less than a decade later, following Austria’s defeat in the 1866 Austro-Prussian War. A full merger of all the currencies of the German states did not finally arrive until after consolidation of modern Germany, under Prussian leadership, in 1871.

The only truly successful monetary union in Europe prior to EMU came in 1922 with the birth of the Belgium-Luxembourg Economic Union (BLEU), which remained in force for more than seven decades until it was blended into EMU in 1999. Following the severance of its traditional ties with the German Zollverein after World War I, Luxembourg elected to link itself commercially and financially with Belgium, agreeing to a comprehensive economic union including a merger of their separate money systems. Reflecting the partners’ considerable disparity of size (Belgium’s population is roughly thirty times Luxembourg’s), Belgian francs under BLEU formed the largest part of the money stock of Luxembourg as well as Belgium, and alone enjoyed full status as legal tender in both countries. Only Belgium, moreover, had a full-scale central bank. The Luxembourg franc was issued by a more modest institution, the Luxembourg Monetary Institute, was limited in supply, and served as legal tender only within Luxembourg itself. Despite the existence of formal joint decision-making bodies, Luxembourg in effect functioned largely as an appendage of the Belgian monetary system until both nations joined their EU partners in creating the euro.

Monetary Disintegration

Europe in the twentieth century also saw the disintegration of several monetary unions, usually as a by-product of political dissent or dissolution. A celebrated instance occurred after World War I, when the Austro-Hungarian Empire was dismembered by the postwar peace settlements. Almost immediately, in an abrupt and quite chaotic manner, new currencies were introduced by each successor state — including Czechoslovakia, Hungary, Yugoslavia, and ultimately even shrunken Austria itself — to replace the old imperial Austrian crown. Comparable examples have been provided more recently, after the end of the Cold War, following the fragmentation along ethnic lines of both the Czechoslovak and Yugoslav federations. Most spectacular was the collapse of the former ruble zone following the break-up of the seven-decade-old Soviet Union in late 1991. Out of the rubble of the ruble no fewer than a dozen new currencies emerged to take their place on the world stage.

Outside Europe, the idea of monetary union was promoted mainly in the context of colonial or other dependency relationships, including both alliance-type and dollarization arrangements. Though most imperial regimes were quickly abandoned in favor of newly created national currencies once decolonization began after World War II, a few have survived in modified form to the present day.

British Colonies

Alliance-type arrangements emerged in the colonial domains of both Britain and France, the two biggest imperial powers of the nineteenth century. First to act were the British, who after some experimentation succeeded in creating a series of common currency zones, each closely tied to the pound sterling through the mechanism of a currency board. Under a currency board, exchange rates were firmly pegged to the pound and full sterling backing was required for any new issue of the colonial money. Joint currencies were created first in West Africa (1912) and East Africa (1919) and later for British possessions in Southeast Asia (1938) and the Caribbean (1950). In southern Africa, an equivalent zone was established during the 1920s based on the South African pound (later the rand), which became the sole legal tender in three of Britain’s nearby possessions — Bechuanaland (later Botswana), Basutoland (later Lesotho), and Swaziland — as well as in South West Africa (later Namibia), a former German colony administered by South Africa under a League of Nations mandate. Of Britain’s various arrangements, only two still exist in some form.

East Caribbean

One is in the Caribbean, where Britain’s monetary legacy has proved remarkably durable. The British Caribbean Currency Board evolved first into the Eastern Caribbean Currency Authority in 1965 and then the Eastern Caribbean Central Bank in 1983, issuing one currency, the Eastern Caribbean dollar, to serve as legal tender for all participants. Included in the Eastern Caribbean Currency Union (ECCU) are the six independent microstates of Antigua and Barbuda, Dominica, Grenada, St. Kitts and Nevis, St. Lucia, and St. Vincent and the Grenadines, plus two islands that are still British dependencies, Anguilla and Montserrat. Embedded in a broadening network of other related agreements among the same governments (the Eastern Caribbean Common Market, the Organization of Eastern Caribbean States), the ECCU has functioned without serious difficulty since its formal establishment in 1965.

Southern Africa

The other is in southern Africa, where previous links have been progressively formalized, first in 1974 as the Rand Monetary Area, later in 1986 under the label Common Monetary Area (CMA), though, significantly, without the participation of diamond-rich Botswana, which has preferred to rely on its own national money. The CMA started as a monetary union tightly based on the rand, a local form of dollarization reflecting South Africa’s economic dominance of the region. But with the passage of time the degree of hierarchy has diminished considerably, as the three remaining junior partners have asserted their growing sense of national identity. Especially since the 1970s, the arrangement has been transformed into a looser exchange-rate union as each of South Africa’s partners introduced its own distinct national currency. One of them, Swaziland, has even gone so far as to withdraw the rand’s legal-tender status within its own borders. Moreover, though all three continue to peg their moneys to the rand at par, they are no longer bound by currency board-like provisions on money creation and may now in principle vary their exchange rates at will.

Africa’s CFA Franc Zone

In the French Empire monetary union did not arrive until 1945, when the newly restored government in Paris decided to consolidate the diverse currencies of its many African dependencies into one money, le franc des Colonies Françaises d’Afrique (CFA francs). Subsequently, in the early 1960s, as independence came to France’s African domains, the old colonial franc was replaced by two new regional currencies, each cleverly named to preserve the CFA franc appellation: for the eight present members of the West African Monetary Union, le franc de la Communauté Financière de l’Afrique, issued by the Central Bank of West African States; and for the six members of the Central African Monetary Area, le franc de la Coopération Financière Africaine, issued by the Bank of Central African States. Together the two groups comprise the Communauté Financière Africaine (African Financial Community). Though each of the two currencies is legal tender only within its own region, the two are equivalently defined and have always been jointly managed under the aegis of the French Ministry of Finance as integral parts of a single monetary union, popularly known as the CFA Franc Zone.

Elsewhere imperial powers preferred some version of a dollarization-type regime, promoting use of their own currencies in colonial possessions to reinforce dependency relationships — though few of these hierarchical arrangements survived the arrival of decolonization. The only major exceptions are to be found among smaller countries with special ties to the United States. Most prominently, these include Panama and Liberia, two states that owe their very existence to U.S. initiatives. Immediately after gaining its independence in 1903 with help from Washington, Panama adopted America’s greenback as its national currency in lieu of a money of its own. In similar fashion during World War II, Liberia — a nation founded by former American slaves — made the dollar its sole legal tender, replacing the British West African colonial coinage that had previously dominated the local money supply. Other long-time dollarizers include the Marshall Islands, Micronesia, and Palau, Pacific Ocean microstates that were all once administered by the United States under United Nations trusteeships. Most recently, the dollar replaced the local currencies of Ecuador in 2000 and El Salvador in 2001 and was adopted by East Timor when that state gained its independence in 2002.

Europe’s Monetary Union

The most dramatic episode in the history of monetary unions is of course EMU, in many ways a unique undertaking — a group of fully independent states, all partners in the European Union, that have voluntarily agreed to replace existing national currencies with one newly created money, the euro. The euro was first introduced in 1999 in electronic form (a “virtual” currency), with notes and coins following in 2002. Moreover, even while retaining political sovereignty, member governments have formally delegated all monetary sovereignty to a single joint authority, the European Central Bank. These are not former overseas dependencies like the members of ECCU or the CFA Franc Zone, inheriting arrangements that had originated in colonial times; nor are they small fragile economies like Ecuador or El Salvador, surrendering monetary sovereignty to an already proven and popular currency like the dollar. Rather, these are established states of long standing and include some of the biggest national economies in the world, engaged in a gigantic experiment of unprecedented proportions. Not surprisingly, therefore, EMU has stimulated growing interest in monetary union in many parts of the world. Despite the failure of many past initiatives, the future could see yet more joint currency ventures among sovereign states.


Citation: Cohen, Benjamin. “Monetary Unions”. EH.Net Encyclopedia, edited by Robert Whaples. February 10, 2008. URL http://eh.net/encyclopedia/monetary-unions/

Military Spending Patterns in History

Jari Eloranta, Appalachian State University

Introduction

Determining adequate levels of military spending and sustaining the burden of conflicts have been among the key fiscal problems in history. Ancient societies were usually less complicated in terms of the administrative, fiscal, technological, and material demands of warfare. The most pressing problem was frequently the adequate maintenance of supply routes for the armed forces. At the same time, these were by and large subsistence societies, so they could not extract massive resources for such ventures, at least until the arrival of the Roman and Byzantine Empires. The emerging nation states of the early modern period were much better equipped to fight wars. On the one hand, frequent wars, new gunpowder technologies, and the commercialization of warfare forced them to consolidate resources for the needs of warfare. On the other hand, the rulers had to – slowly but surely – give up some of their sovereignty to be able to secure the required credit both domestically and abroad. The Dutch and the British were masters at this, with the latter amassing an empire that spanned the globe on the eve of the First World War.

The early modern expansion of Western European states, made possible by their military and naval supremacy and later by their industrial prowess, began to challenge other regimes all over the world. The age of total war in the nineteenth and twentieth centuries finally pushed these states to adopt ever more efficient fiscal systems and enabled some of them to dedicate more than half of their GDP to the war effort during the world wars. By comparison, even though military spending was regularly the biggest item in the budget for most states before the twentieth century, it still represented only a modest share of their GDP. The Cold War period again saw high relative spending levels, due to the enduring rivalry between the West and the Communist Bloc. Finally, the collapse of the Soviet Union alleviated some of these tensions and lowered aggregate military spending in the world. Newer security challenges such as terrorism and various interstate rivalries have since pushed the world towards growing overall military spending again.

This article will first elaborate on some of the research trends in studying military spending and the multitude of theories attempting to explain the importance of warfare and military finance in history. This survey will be followed by a chronological sweep, starting with the military spending of the ancient empires and ending with a discussion of the current behavior of states in the post-Cold War international system. By necessity, this chronological review will be selective at best, given the enormity of the time period in question and the complexity of the topic at hand.

Theoretical Approaches

Military spending is a key phenomenon for understanding various aspects of economic history: the cost, funding, and burden of conflicts; the creation of nation states; and, more generally, the increased role of government in everyday life, especially since the nineteenth century. Nonetheless, certain characteristics distinguish the efforts to study this complex topic across the different disciplines (mainly history, economics, and political science). Historians, especially diplomatic and military historians, have been keen on studying the origins of the two World Wars and certain other massive conflicts. Yet many historical studies of war and society have analyzed developments at an elusive macro-level, often without much elaboration of the quantitative evidence behind their assumptions about the effects of military spending. For example, Paul Kennedy argued in his famous The Rise and Fall of the Great Powers: Economic Change and Military Conflict from 1500 to 2000 (1989) that military spending by hegemonic states eventually becomes excessive and a burden on their economies, finally leading to economic ruin. This argument has been criticized by many economists and historians, since it lacks adequate quantitative evidence for the posited interaction between military spending and economic growth.[2] Quite frequently, as in the classic studies by A.J.P. Taylor and many more recent works, historians have been more interested in the impact of foreign policy decision-making and alliances, and in resolving the issue of “blame,” on the road towards major conflicts[3] than in whether reliable quantitative evidence can be mustered to support or disprove the key arguments. Economic historians, in turn, have not been particularly interested in the long-term economic impacts of military spending. Usually their interest has centered on the economics of global conflicts — a good example of recent work combining the theoretical aspects of economics with historical case studies is The Economics of World War II, a compilation edited by Mark Harrison — as well as on the immediate short-term economic impacts of wartime mobilization.[4]

The study of defense economics and military spending patterns as such is closely tied to the immense expansion of military budgets and military establishments in the Cold War era. It involves the application of the methods and tools of economics to the issues arising from that expansion. At least three aspects set defense economics apart from other fields of economics: 1) the actors (both private and public, for example in contracting); 2) the theoretical challenges introduced by the interaction of different institutional and organizational arrangements, both in budgeting and in allocation procedures; and 3) the nature of military spending as a tool for destruction as well as for providing security.[5] One shortcoming in the study of defense economics has been, at least so far, the lack of interest in periods before the Second World War.[6] For example, how much has the overall military burden (military expenditures as a percentage of GDP) of nation states changed over the last couple of centuries? Or, how big a financial burden did the Thirty Years War (1618-1648) impose on the participating Great Powers?

A “typical” defense economist (see especially Sandler and Hartley (1995)) would attempt, drawing on public good theories, to model and explain the military spending behavior of states (essentially the demand for military spending) with the following base equation:

ME_it = f(PRICE_it, INCOME_it, SPILLINS_it, THREATS_it, STRATEGY_it)     (1)

In Equation 1, ME represents military expenditures by state i in year t, PRICE the price of military goods (affected by technological changes as well), INCOME most commonly the real GDP of the state in question, SPILLINS the impact of friendly states’ military spending (for example in an alliance), THREATS the impact of hostile states’ or alliances’ military expenditures, and STRATEGY the constraints imposed by changes in the overall strategic parameters of a nation. Most commonly, a higher price for military goods lowers military spending; higher income tends to increase ME (like during the industrial revolutions); alliances often lower ME due to the free riding tendencies of most states; threats usually increase military spending (and sometimes spur on arms races); and changes in the overall defensive strategy of a nation can affect ME in either direction, depending on the strategic framework implemented. While this model may be suitable for the study of, for example, the Cold War period, it fails to capture many other important explanatory factors, such as the influence of various organizations and interest groups in the budgetary processes as well as the impact of elections and policy-makers in general. For example, interest groups can get policy-makers to ignore price increases (on, for instance, domestic military goods), and election years usually alter (or focus) the behavior of elected officials.
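
As a purely illustrative sketch of how such a demand equation is typically taken to data, the following Python fragment estimates a log-linear version of Equation 1 by ordinary least squares on invented panel observations. The numbers, variable names, and the choice of statsmodels are assumptions made for the sketch, not material from the literature summarized above.

```python
# Illustrative estimation of a military-expenditure demand equation of the form
# ME = f(PRICE, INCOME, SPILLINS, THREATS, STRATEGY). All data are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

data = pd.DataFrame({
    "ME":       [2.1, 2.3, 2.0, 2.4, 2.6, 5.4, 5.9, 6.1, 6.6, 7.0],   # military spending
    "PRICE":    [1.00, 1.04, 1.10, 1.12, 1.15, 1.00, 1.02, 1.05, 1.08, 1.10],
    "INCOME":   [120, 125, 131, 138, 144, 340, 352, 360, 371, 385],   # real GDP
    "SPILLINS": [8.0, 8.2, 8.5, 8.9, 9.1, 1.0, 1.1, 1.2, 1.3, 1.3],   # allies' spending
    "THREATS":  [4.0, 4.5, 5.0, 5.2, 5.6, 6.0, 6.5, 7.2, 7.8, 8.1],   # rivals' spending
    "STRATEGY": [0, 0, 1, 1, 1, 0, 0, 0, 1, 1],                       # doctrine-shift dummy
})

# Log-linear specification; STRATEGY enters as a shift dummy.
X = sm.add_constant(pd.DataFrame({
    "log_price":    np.log(data["PRICE"]),
    "log_income":   np.log(data["INCOME"]),
    "log_spillins": np.log(data["SPILLINS"]),
    "log_threats":  np.log(data["THREATS"]),
    "strategy":     data["STRATEGY"],
}))
y = np.log(data["ME"])

print(sm.OLS(y, X).fit().summary())
```

In practice the signs of the estimated coefficients would then be compared against the expectations listed above (negative for price, positive for income and threats, and so on).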

Within peace science, in turn, a broader school of thought that overlaps with defense economics, research has focused on finding the causal factors behind the most destructive conflicts. One of the most significant of such interdisciplinary efforts has been the Correlates of War (COW) project, which started in the spring of 1963. This project and the researchers loosely associated with it, not least through its importance in producing comparative statistics, have had a big impact on the study of conflicts.[7] As Daniel S. Geller and J. David Singer have noted, the number of territorial states in the global system has ranged from fewer than 30 after the Napoleonic Wars to nearly 200 at the end of the twentieth century, and it is essential to test the various indicators collected by peace scientists against the historical record until theoretical premises can be confirmed or rejected.[8] In fact, most studies of this type focus on finding the sets of variables that might predict major wars and other conflicts, in a way similar to the historians’ origins-of-wars approach, whereas studies investigating the military spending behavior of monads (single states), dyads (pairs of states), or systems in particular are quite rare. Moreover, even though some cycle theorists and conflict scientists have been interested in the formation of modern nation states and the respective system of states since 1648, they have not expressed much interest in pre-modern societies and warfare.[9]

Nevertheless, these contributions have much to offer to the study of the long-run dynamics of military spending, state formation, and warfare. According to Charles Tilly, there are four broad approaches to the study of the relationships between war and power: 1) the statist; 2) the geopolitical; 3) the world system; and 4) the mode of production approach. The statist approach presents war, international relations, and state formation chiefly as a consequence of events within particular states. The geopolitical analysis is centered on the argument that state formation responds strongly to the current system of relations among states. The world system approach, à la Wallerstein, is mainly rooted in the idea that the different paths of state formation are influenced by the division of resources in the world system. In the mode of production framework, the way that production is organized determines the outcome of state formation. None of the approaches, as Tilly has pointed out, is adequate in its purest form to explain state formation, international power relations, and economic growth as a whole.[10] Tilly himself maintains that coercion (a monopoly of violence by rulers and the ability to wield coercion externally as well) and capital (the means of financing warfare) were the key elements in the European ascendancy to world domination in the early modern era. Warfare, state formation, and technological supremacy were interrelated fundamentals of the same process.[11]

How can these theories of state behavior at the system level be linked to the analysis of military spending? According to George Modelski and William R. Thompson, proponents of Kondratieff waves and long cycles as explanatory forces in the development of world leadership patterns, the key aspect in a state’s ascendancy to prominence in such models is naval power; i.e., a state’s ability to vie for world political leadership, colonization, and domination in trade.[12] One of the less explored aspects in most studies of hegemonic patterns is the military expenditure component of the competition between states for military and economic leadership in the system. It is often argued, for example, that uneven economic growth causes nations to compete for economic and military prowess. The leader nation thus has to dedicate increasing resources to armaments in order to maintain its position, while the other states, the so-called followers, can benefit from greater investments in other areas of economic activity. The follower states therefore act as free-riders in the international system stabilized by the hegemon. A built-in assumption in this hypothesized development pattern is that military spending eventually becomes harmful for economic development, a notion that has often been challenged on the basis of empirical studies.[13]

Overall, the assertion arising from such a framework is that economic development and military spending are closely interdependent, with military spending being the driving force behind economic cycles. Moreover, based on this development pattern, it has been suggested that a country’s poor economic performance is linked to the “wasted” economic resources represented by military expenditures. However, as recent studies have shown, economic development is often more significant in explaining military spending than vice versa. The development of the U.S. economy since the Second World War certainly does not display the type of hegemonic decline predicted by Kennedy.[14] The development pattern outlined above can be paraphrased as the so-called war chest hypothesis. As some of the hegemonic theorists reviewed above suggest, economic prosperity might be a necessary prerequisite for war and expansion. Thus, as Brian M. Pollins and Randall L. Schweller have indicated, economic growth would induce rising government expenditures, which in turn would enable higher military spending — therefore military expenditures would be “caused” by economic growth at a certain time lag.[15] For military spending to hinder economic performance, it would have to surpass all other areas of the economy, as is often the case during wartime.
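
The lag structure implied by the war chest hypothesis can be probed, purely for illustration, with a Granger-causality test of whether past GDP growth helps predict current military-spending growth. The synthetic series and the use of statsmodels’ grangercausalitytests below are assumptions made for this sketch, not results from the studies cited above.

```python
# Sketch: does economic growth "Granger-cause" military-spending growth?
# All series are synthetic and built so that spending follows growth with a lag.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 60
gdp_growth = rng.normal(0.02, 0.01, n)          # hypothetical annual GDP growth
me_growth = np.empty(n)
me_growth[0] = 0.02
me_growth[1:] = 0.5 * gdp_growth[:-1] + rng.normal(0.0, 0.005, n - 1)

# Column order matters: the test asks whether the 2nd column predicts the 1st.
series = np.column_stack([me_growth, gdp_growth])
results = grangercausalitytests(series, maxlag=2)
for lag, res in results.items():
    f_stat, p_value = res[0]["ssr_ftest"][:2]
    print(f"lag {lag}: F = {f_stat:.2f}, p = {p_value:.4f}")
```

A low p-value at the one-year lag would be consistent with the growth-to-spending direction suggested by Pollins and Schweller, though on real data the result naturally depends on the period and countries studied.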

There have been relatively few credible attempts to model the military (or budgetary) spending behavior of states based on their long-run regime characteristics. Here I am going to focus on three in particular: 1) the Webber-Wildavsky model of budgeting; 2) the Richard Bonney model of fiscal systems; and 3) the Niall Ferguson model of interaction between public debts and forms of government. Carolyn Webber and Aaron Wildavsky maintain essentially that each political culture generates its characteristic budgetary objectives; namely, productivity in market regimes, redistribution in sects (specific groups dissenting from an established authority), and more complex procedures in hierarchical regimes.[16] Thus, according to them, the budgetary consequences arising from the chosen regime can be divided into four categories: despotism, state capitalism, American individualism, and social democracy. Each of these in turn has implications for the respective regime’s revenue and spending needs.

This model, however, is essentially static. It does not provide clues as to why nations’ behavior may change over time. Richard Bonney has addressed this problem in his writings, mainly on the early modern states.[17] He has emphasized that states’ revenue and tax collection systems, the backbone of any militarily successful nation state, have evolved over time. For example, in most European states the government became the arbiter of disputes and the defender of certain basic rights in society by the early modern period. During the Middle Ages, European fiscal systems were relatively backward and autarchic, with mostly predatory rulers (or roving bandits, to use Mancur Olson’s term).[18] In Bonney’s model this is the stage of the so-called tribute state. Next in the evolution came, respectively, the domain state (with stationary bandits providing some public goods), the tax state (with more reliance on credit and revenue collection), and finally the fiscal state (embodying more complex fiscal and political structures). A superpower like Great Britain in the nineteenth century, in fact, had to be a fiscal state to be able to dominate the world, given all the burdens that went with an empire.[19]

While both of the models mentioned above provide important clues as to how and why nations have prepared fiscally for wars, the most complete account of this process (along with Charles Tilly’s framework covered earlier) has been provided by Niall Ferguson.[20] He has maintained that wars have shaped all the most relevant institutions of modern economic life: tax-collecting bureaucracies, central banks, bond markets, and stock exchanges. Moreover, he argues that the invention of public debt instruments has gone hand-in-hand with more democratic forms of government and military supremacy – hence the so-called Dutch or British model. These types of regimes have also been the most efficient economically, which has in turn reinforced the success of this fiscal regime model. In fact, military expenditures may have been the principal cause of fiscal innovation for most of history. Ferguson’s model highlights the importance, for a state’s survival among its challengers, of adopting the right kinds of institutions and technology, along with a sufficient measure of external ambition. All in all, I would summarize the required model, combining elements from the various frameworks, as evolutionary: regimes at different stages have different priorities and different burdens imposed by military spending, depending also on their position in the international system. A successful ascendancy to a leadership position required higher expenditures, a substantial navy, fiscal and political structures conducive to increasing the availability of credit, and recurring participation in international conflicts.

Military Spending and the Early Empires

For most societies since the ancient river valley civilizations, military exertions and the means by which to finance them have been crucial problems of governance. A centralized ability to plan and control spending was lacking in most governments until the nineteenth century. In fact, among the ancient civilizations, financial administration and the government were inseparable. Governments were organized on a hierarchical basis, with the rulers having supreme control over military decisions. Taxes were often paid in kind to support the rulers, making it more difficult to monitor and deploy revenues for military campaigns over great distances. For these agricultural economies, victory in war usually yielded lavish tribute to supplement royal wealth and helped to maintain the army and control the population. Thus, the support of large military forces and expeditions, dependent on food and supplies, was the ancient government’s principal expense and problem. Dependence on distant, often external suppliers of food limited the expansion of these empires. Fiscal management in turn was usually cumbersome and costly, and all of the ancient governments were internally unstable and vulnerable to external incursions.[21]

Soldiers, however, often supplemented their supplies by looting enemy territory. The optimal size of an ancient empire was determined by the efficiency of tax collection and allocation, resource extraction, and its transportation system. Moreover, the supply of metal and weaponry, though important, was seldom the only critical variable for the military success of an ancient empire. There were, however, important turning points in this respect, for example the introduction of bronze weaponry, starting in Mesopotamia about 3500 B.C. The introduction of iron weaponry in the eastern parts of Asia Minor about 1200 B.C. (although the spread of this technology was fairly slow, gathering momentum only from about 1000 B.C. onwards) and the use of chariot warfare opened a new phase in warfare, owing to the superior efficiency and cheapness of iron armaments as well as to the hierarchical structures needed to wage war in the chariot era.[22]

The river valley civilizations, nonetheless, paled in comparison with the military might and economy of one of the most efficient military behemoths of all time: the Roman Empire. Military spending was the largest item of public spending throughout Roman history. All Roman governments, like Athens in the time of Pericles, had problems in gathering enough revenue. For example, in the third century A.D. Roman citizenship was extended to all residents of the empire in order to raise revenue, as only citizens paid taxes. There were also other constraints on spending, such as technological, geographic, and other productivity concerns. Direct taxation was, however, regarded as a dishonor, to be resorted to only in times of crisis. Thus, taxation during most of the empire remained moderate, supplemented by extraordinary taxes (comparable to the so-called liturgies of ancient Athens) during such episodes. During the first two centuries of empire, the Roman army had about 150,000 to 160,000 legionnaires, in addition to some 150,000 other troops, and soldiers’ wages began to increase rapidly to ensure the army’s loyalty. In republican and imperial Rome, military wages accounted for more than half of state revenue. The demands of the empire became more and more extensive during the third and fourth centuries A.D., as the internal decline of the empire became more evident and Rome’s external challengers grew stronger. The limited use of direct taxes and the prevalence of tax evasion, for example, could not meet the fiscal demands of the crumbling empire. Armed forces were in turn used to maintain internal order. Societal unrest, inflation, and external incursions finally brought the Roman Empire, at least in the West, to an end.[23]

Warfare and the Rise of European Supremacy

During the Middle Ages, following the decentralized era of barbarian invasions, a varied system of European feudalism emerged, in which feudal lords often provided protection to communities in exchange for service or payment. From the Merovingian era onwards, soldiers became more specialized professionals, with expensive horses and equipment. By the Carolingian era, military service had become largely the prerogative of an aristocratic elite. Prior to 1000 A.D., the command system was preeminent in mobilizing human and material resources for large-scale military enterprises, mostly on a contingency basis.[24] The isolated European societies, with the exception of the Byzantine Empire, paled in comparison with the splendor and accomplishments of the empires in China and the Muslim world. In terms of science and inventions, too, the Europeans were no match for these empires until the early modern period. Moreover, it was not until the twelfth century and the Crusades that the feudal kings needed to supplement their ordinary revenues to finance large armies. Internal discontent in the Middle Ages often led to an expansionary drive, as the spoils of war helped calm the elite — for example, the French kings had to establish firm taxing power in the fourteenth century out of military necessity. The political ambitions of medieval kings, however, still relied on revenue strategies geared to covering short-term deficits, which made long-term credit and prolonged military campaigns difficult.[25]

Innovations in the waging of war and in technology invented by the Chinese and Islamic societies permeated Europe with a delay, such as the use of pikes in the fourteenth century and the gunpowder revolution of the fifteenth century, which in turn permitted armies to attack and defend larger territories. This also made possible a commercialization of warfare in Europe in the fourteenth and fifteenth centuries, as feudal armies had to give way to professional mercenary forces. Accordingly, medieval states had to increase their taxation levels and improve tax collection to support the growing costs of warfare and the maintenance of larger standing armies. Equally, the age of commercialized warfare was accompanied by the rising importance of sea power as European states began to build their overseas empires (in contrast, for example, to the isolationist turn of Ming China in the fifteenth century). States such as Portugal, the Netherlands, and England, respectively, became the “systemic leaders” thanks to their extensive fleets and commercial expansion in the period before the Napoleonic Wars. These were also states that were economically cohesive, owing to internal waterways and small geographic size. The early winners in the fight for world leadership, such as England, were greatly aided by the availability of inexpensive credit, which enabled them to mobilize limited resources effectively to meet military expenses. Their rise was of course preceded by the naval exploration and empire-building of many successful European states, especially Spain, both in Europe and around the globe.[26]

This pattern from command to commercialized warfare, from a short-term to a more permanent system of military management, can be seen in the English case. In the period 1535-1547, the English defense share (military expenditures as a percentage of central government expenditures) averaged 29.4 percent, with large fluctuations from year to year. In the period 1685-1813, however, the mean English defense share was 74.6 percent, never dropping below 55 percent. The newly emerging nation states began to develop more centralized and productive revenue-expenditure systems, the goal of which was to enhance the state’s power, especially in the absolutist era. This also reflected the growing cost and scale of warfare: between 100,000 and 200,000 men fought under arms during the Thirty Years’ War, whereas 450,000 to 500,000 men fought on both sides in the War of the Spanish Succession. Numbers aside, the Thirty Years’ War was a conflict directly comparable to the world wars in terms of destruction. Charles Tilly, for example, has estimated that battle deaths exceeded two million. Henry Kamen, in turn, has emphasized the mass-scale destruction and economic dislocation the war caused in the German lands, especially for the civilian population.[27]

With the increasing scale of armed conflicts in the seventeenth century, the participants became more and more dependent on access to long-term credit, because whichever government ran out of money had to surrender first. Even though the causes of Spain’s supposed decline in the seventeenth century are still disputed, it can be said that the lack of royal credit and the poor management of government finances resulted in heavy deficit spending as military exertions followed one after another. The Spanish Crown therefore defaulted repeatedly during the sixteenth and seventeenth centuries, and these defaults on several occasions forced Spain to seek an end to its military activities. Spain nonetheless remained one of the most important Great Powers of the period and was able to keep its massive empire mostly intact until the nineteenth century.[28]

What about other country cases – can they shed further light on the importance of military spending and warfare in early modern economic and political development? A key question for France, for example, was the financing of its military exertions. According to Richard Bonney, the cost of France’s armed forces in its era of “national greatness” was stupendous: expenditure on the army in the period 1708-1714 averaged 218 million livres, whereas during the Dutch War of 1672-1678 it had averaged only 99 million in nominal terms. This was due both to growth in the size of the army and the navy and to the decline in the purchasing power of the French livre. The overall burden of war, however, remained roughly similar in this period: war expenditures accounted for roughly 57 percent of total expenditure in 1683 and about 52 percent in 1714. Moreover, as for all the main European monarchies, it was expenditure on war that brought fiscal change in France, especially after the Napoleonic Wars. Between 1815 and 1913, French public expenditure increased by 444 percent and the emerging fiscal state was consolidated. This also entailed a change in the structure of the French credit market.[29]

A success story, in a way a predecessor to the British model, was the Dutch state in this period. As Marjolein ‘t Hart has noted, domestic investors were instrumental in supporting the new-born state, which was able to borrow the money it needed on the credit markets, providing stability in public finances even during crises. This financial regime lasted until the end of the eighteenth century. Here again we can observe the intermarriage of military spending and the availability of credit, essentially the basic logic of the Ferguson model. One of the key features of the Dutch success in the seventeenth century was the ability to pay soldiers relatively promptly. The Dutch case also underlines the primacy of military spending in state budgets and the burden it imposed on early modern states. As Figure 1 shows, the defense share of the Dutch region of Groningen remained consistently around 80 to 90 percent until the mid-seventeenth century and then declined, at least temporarily, during periods of peace.[30]

Figure 1

Groningen’s Defense Share (Military Spending as a Percentage of Central Government Expenditures), 1596-1795

Source: L. van der Ent, et al. European State Finance Database. ESFD, 1999 [cited 1.2.2001]. Available from: http://www.le.ac.uk/hi/bon/ESFDB/frameset.html.

In the eighteenth century, with rapid population growth in Europe, armies also grew in size, especially the Russian army. In Western Europe, the mounting intensity of warfare, from the Seven Years War (1756-1763) onwards, finally culminated in the French Revolution and Napoleon’s conquests and defeat (1792-1815). The new style of warfare brought on by the Revolutionary Wars, with conscription and war of attrition as new elements, can be seen in the growth of army sizes. For example, the French army grew more than 3.5 times in size from 1789 to 1793, to 650,000 men. Similarly, the British army grew from 57,000 men in 1783 to 255,000 men in 1816. The Russian army reached the massive size of 800,000 men in 1816, and Russia kept its armed forces at similar levels through the nineteenth century. Great Power wars nevertheless declined in number (see Table 1), as did their average duration. Yet some of the conflicts of the industrial era became massive and deadly events, drawing most parts of the world into what were essentially European quarrels.

Table 1

Wars Involving the Great Powers

Century | Number of wars | Average duration of wars (years) | Proportion of years war was underway (%)
16th | 34 | 1.6 | 95
17th | 29 | 1.7 | 94
18th | 17 | 1.0 | 78
19th | 20 | 0.4 | 40
20th | 15 | 0.4 | 53

Source: Charles Tilly. Coercion, Capital, and European States, AD 990-1990. Cambridge, Mass: Basil Blackwell, 1990.

The Age of Total War and Industrial Revolutions

With the new kind of mobilization, which became more or less a permanent state of affairs in the nineteenth century, centralized governments required new methods of finance. The nineteenth century brought reforms such as centralized public administration, specific and balanced budgets, innovations in public banking and public debt management, and reliance on direct taxation for revenue. For the first time in history, these reforms were also supported by the spread of industrialization and rising productivity. The nineteenth century was also the century of the industrialization of war, starting at mid-century and quickly gathering breakneck speed. By the 1880s, military engineering began to forge ahead of even civil engineering. A revolution in transportation with steamships and railroads also made massive, long-distance mobilizations possible, as shown by the Prussian example against the French in 1870-1871.[31]

The demands posed by these changes on state finances and economies differed. In the French case, the defense share stayed roughly the same, a little over 30 percent, throughout the nineteenth and early twentieth centuries, whereas the military burden increased by about one percentage point, to 4.2 percent. In the UK case, the mean defense share declined by two percentage points, to 36.7 percent, in 1870-1913 compared with the early nineteenth century. The strength of the British economy, however, meant that the military burden actually declined slightly, to 2.6 percent, a figure similar to that incurred by Germany in the same period. For most countries the period leading up to the First World War meant higher military burdens than that, such as Japan’s 6.1 percent. The United States, by contrast, the new economic leader by the closing decades of the century, spent on average a meager 0.7 percent of its GDP for military purposes, a pattern that continued throughout the interwar period as well (a military burden of 1.2 percent). As seen in Figure 2, the military burdens incurred by the Great Powers also varied in their timing, suggesting different reactions to external and internal pressures. Nonetheless, aggregate, systemic real military spending showed a clear upward trend for the entire period. Moreover, the impact of the Russo-Japanese War was immense for the total (real) spending of the sixteen states represented in the figure below, since both countries were Great Powers and Russian military expenditures alone were massive. The unexpected defeat of the Russians, along with the arrival of the dreadnoughts, unleashed an intensive arms race.[32]

Figure 2

Military Burdens of Four Great Powers and Aggregate Real Military Expenditure (ME) of Sixteen Countries, 1870-1913

Sources: Jari Eloranta, “Struggle for Leadership? Military Spending Behavior of the Great Powers, 1870-1913,” Appalachian State University, Department of History, unpublished manuscript, 2005b, which also describes the constructed system of states and the methods used to convert the expenditures into a common currency (using exchange rates and purchasing power parities), always a controversial exercise.
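The conversion problem mentioned in the note above can be sketched with a simple hypothetical example. The Python lines below assume an invented country with military spending of 500 local currency units, a market exchange rate of 5, and a purchasing-power-parity rate of 3 local units per dollar; none of these numbers comes from the data behind Figure 2, and the sketch only shows why the choice of conversion method matters.

    # Hypothetical numbers; not the actual data or procedure behind Figure 2.
    spending_local = 500.0   # military spending in local currency units
    market_rate = 5.0        # local currency units per U.S. dollar at the market exchange rate
    ppp_rate = 3.0           # local currency units per U.S. dollar at purchasing power parity

    spending_usd_market = spending_local / market_rate   # 100.0 dollars at the market rate
    spending_usd_ppp = spending_local / ppp_rate         # about 166.7 dollars at PPP

    # The two conversions can differ widely, which is why cross-country comparisons
    # of military spending in a common currency remain controversial.
    print(spending_usd_market, spending_usd_ppp)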

With the beginning of the First World War in 1914, this military potential was unleashed in Europe with horrible consequences, as most of the nations anticipated a quick victory but ended up fighting a war of attrition in the trenches. Mankind had now officially entered the age of total war.[33] It has been estimated that about nine million combatants and twelve million civilians died during the so-called Great War, with property damage especially heavy in France, Belgium, and Poland. According to Rondo Cameron and Larry Neal, the direct financial losses arising from the Great War were about 180-230 billion 1914 U.S. dollars, whereas the indirect losses of property and capital rose to over 150 billion dollars.[34] According to the most recent estimates, the economic losses arising from the war could be as high as 692 billion 1938 U.S. dollars.[35] But how much of their resources did the belligerents have to mobilize, and what were the human costs of the war?

As Table 2 shows, the French military burden was fairly high, as were the size of its military forces and its battle deaths relative to population. France thus mobilized the most resources in the war and, subsequently, suffered the greatest losses. The German mobilization was also quite efficient, because almost the entire state budget was used to support the war effort. The United States, on the other hand, participated in the war only briefly, and its personnel losses were relatively small, as were its economic burdens. In comparison, the massive population reserves of Russia allowed it to absorb fairly high personnel losses, an experience quite similar to the Soviet one in the Second World War.

Table 2

Resource Mobilization by the Great Powers in the First World War

Country (years in the war)   Average military burden   Average defense share of    Military personnel    Battle deaths
                             (percent of GDP)          government spending (%)     (% of population)     (% of population)
France (1914-1918)           43                        77                          11                    3.5
Germany (1914-1918)          ..                        91                          7.3                   2.7
Russia (1914-1917)           ..                        ..                          4.3                   1.4
UK (1914-1918)               22                        49                          7.3                   2.0
US (1917-1918)               7                         47                          1.7                   0.1

Sources: Historical Statistics of the United States, Colonial Times to 1970, Washington, DC: U.S. Bureau of Census, 1975; Louis Fontvieille. Evolution et croissance de l’Etat Français: 1815-1969, Economies et sociétés, Paris: Institut de Sciences Mathematiques et Economiques Appliquees, 1976; B. R. Mitchell. International Historical Statistics: Europe, 1750-1993, fourth edition, Basingstoke: Macmillan Academic and Professional, 1998a; E. V. Morgan, Studies in British Financial Policy, 1914-1925, London: Macmillan, 1952; J. David Singer and Melvin Small. National Material Capabilities Data, 1816-1985. Ann Arbor, MI: Inter-university Consortium for Political and Social Research, 1993. See also Jari Eloranta, “Sotien taakka: Makrotalouden ongelmat ja julkisen talouden kipupisteet maailmansotien jälkeen (The Burden of Wars: The Problems of Macro Economy and Public Sector after the World Wars),” in Kun sota on ohi, edited by Petri Karonen and Kerttu Tarjamo (forthcoming), 2005a.

In the interwar period, the pre-existing tendencies to continue social programs and support new bureaucracies made it difficult for the participants to cut their public expenditure, leading to a displacement of government spending to a slightly higher level for many countries. Public spending in the 1920s was in turn very static, plagued by budgetary immobility and standoffs, especially in Europe. This meant that although defense shares dropped noticeably in many countries (the authoritarian regimes excepted), their military burdens stayed at similar levels or even increased; the French military burden, for example, rose to a mean level of 7.2 percent in this period. In Great Britain, too, the mean defense share dropped to 18.0 percent, although the mean military burden actually increased compared with the pre-war period, despite the military expenditure cuts and the “Ten-Year Rule” in the 1920s. For these countries, the mid-1930s marked the beginning of intense rearmament, whereas some of the authoritarian regimes had begun earlier in the decade. Germany under Hitler increased its military burden from 1.6 percent in 1933 to 18.9 percent in 1938, a rearmament program that combined creative financing with the promise of both guns and butter for the Germans. Mussolini was not quite as successful in his efforts to realize a new Roman Empire, with a military burden fluctuating between four and five percent in the 1930s (5.0 percent in 1938). The Japanese rearmament drive was perhaps the most impressive, with a military burden as high as 22.7 percent and a defense share of over 50 percent in 1938. For many countries, such as France and Russia, the rapid pace of technological change in the 1930s rendered many earlier armaments obsolete within only two or three years.[36]

Figure 3
Military Burdens of Denmark, Finland, France, and the UK, 1920-1938

Source: Jari Eloranta, “External Security by Domestic Choices: Military Spending as an Impure Public Good among Eleven European States, 1920-1938,” Dissertation, European University Institute, 2002.

There were differences among the democracies as well, as seen in Figure 3. Finland’s behavior was similar to that of the UK and France, i.e., it belonged to the so-called high-spending group among European democracies, as did most East European states. Denmark was in the low-spending group, perhaps because of the futility of trying to defend its borders in a probable conflict between the giants to its south, France and Germany. Overall, the democracies maintained fairly steady military burdens throughout the period. Their rearmament was, however, much slower than the effort amassed by most autocracies, as Figure 4 amply displays.

Figure 4
Military Burdens of Germany, Italy, Japan, and Russia/USSR, 1920-1938

Sources: Eloranta (2002), see especially appendices for the data sources. There are severe limitations and debates related to, for example, the German (see e.g. Werner Abelshauser, “Germany: Guns, Butter, and Economic Miracles,” in The Economics of World War II: Six Great Powers in International Comparison, edited by Mark Harrison, 122-176, Cambridge: Cambridge University Press, 2000) and the Soviet data (see especially R. W. Davies, “Soviet Military Expenditure and the Armaments Industry, 1929-33: A Reconsideration,” Europe-Asia Studies 45, no. 4 (1993): 577-608, as well as R. W. Davies and Mark Harrison. “The Soviet Military-Economic Effort under the Second Five-Year Plan, 1933-1937,” Europe-Asia Studies 49, no. 3 (1997): 369-406).

In the ensuing conflict, the Second World War, the initial phase from 1939 to early 1942 favored the Axis as far as strategic and economic potential was concerned. After that, the war of attrition, with the United States and the USSR joining the Allies, turned the tide in the Allies’ favor. For example, in 1943 the Allied total GDP was 2,223 billion international dollars (in 1990 prices), whereas the Axis accounted for only 895 billion. The impact of the Second World War on the participants’ economies was also much more profound than that of the first. For example, Great Britain at the height of the First World War incurred a military burden of about 27 percent, whereas throughout the Second World War its military burden consistently exceeded 50 percent.[37]

Table 3

Resource Mobilization by the Great Powers in the Second World War

Country (years in the war)   Average military burden   Average defense share of    Military personnel    Battle deaths
                             (percent of GDP)          government spending (%)     (% of population)     (% of population)
France (1939-1945)           ..                        ..                          4.2                   0.5
Germany (1939-1945)          50                        ..                          6.4                   4.4
Soviet Union (1939-1945)     44                        48                          3.3                   4.4
UK (1939-1945)               45                        69                          6.2                   0.9
USA (1941-1945)              32                        71                          5.5                   0.3

Sources: Singer and Small (1993); Stephen Broadberry and Peter Howlett, “The United Kingdom: ‘Victory at All Costs’,” in The Economics of World War II: Six Great Powers in International Comparisons, edited by Mark Harrison (Cambridge University Press, 1998); Mark Harrison. “The Economics of World War II: An Overview,” in The Economics of World War II: Six Great Powers in International Comparisons, edited by Mark Harrison (Cambridge: Cambridge University Press, 1998a); Mark Harrison, “The Soviet Union: The Defeated Victor,” in The Economics of World War II: Six Great Powers in International Comparison, edited by Mark Harrison, 268-301 (Cambridge: Cambridge University Press, 2000); Mitchell (1998a); B.R. Mitchell. International Historical Statistics: The Americas, 1750-1993, fourth edition, London: Macmillan, 1998b. The Soviet defense share only applies to years 1940-1945, whereas the military burden applies to 1940-1944. These two measures are not directly comparable, since the former is measured in current prices and the latter in constant prices.

As Table 3 shows, the greatest military burden was most likely incurred by Germany, even though the other Great Powers experienced similar levels. Only the massive economic resources of the United States made its lower military burden possible. The UK and the United States also mobilized their central/federal government expenditures efficiently for the military effort. In this sense the Soviet Union fared the worst, and its share of military personnel in the population was also relatively small compared to the other Great Powers. On the other hand, the economic and demographic resources that the Soviet Union possessed ultimately ensured its survival during the German onslaught. In the aggregate, the largest personnel losses were incurred by Germany and the Soviet Union, in fact many times those of the other Great Powers.[38] Compared with the First World War, the second was even more destructive and lethal, and the aggregate economic losses from the war exceeded 4,000 billion 1938 U.S. dollars. After the war, European industrial and agricultural production amounted to only half of the 1938 total.[39]

The Atomic Age and Beyond

The Second World War also brought a new role for the United States in world politics, a military-political leadership role warranted by its dominant economic status, established over fifty years earlier. With the establishment of NATO in 1949, a formidable defense alliance was formed for the capitalist countries. The USSR, risen to new prominence through the war, established the Warsaw Pact in 1955 to counter these efforts. The war also meant a change in the public spending and taxation levels of most Western nations. The introduction of welfare states raised the OECD average of government expenditure from just under 30 percent of GDP in the 1950s to over 40 percent in the 1970s. Military spending levels followed suit and peaked during the early Cold War. The American military burden rose above 10 percent in 1952-1954, and the United States retained a high postwar mean of 6.7 percent. Great Britain and France followed the American example after the Korean War.[40]

The Cold War embodied a relentless armaments race between the two superpowers, with nuclear weapons now the main investment item (see Figure 5). The USSR, according to some figures, spent about 60 to 70 percent of the American level in the 1950s, and actually spent more than the United States in the 1970s. Nonetheless, the United States maintained a massive advantage over the Soviets in terms of nuclear warheads. However, figures collected by SIPRI (Stockholm International Peace Research Institute) suggest an enduring yet dwindling lead for the US even in the 1970s. On the other hand, the same figures point to a 2-to-1 lead in favor of the NATO countries over the Warsaw Pact members in the 1970s and early 1980s. Part of this armaments race was due to technological advances that raised the cost per soldier; it has been estimated that such advances produced a mean annual increase in real costs of around 5.5 percent in the post-war period. Nonetheless, spending on personnel and their maintenance has remained the biggest spending item for most countries.

Figure 5

Military Burdens (=MILBUR) of the United States and the United Kingdom, and the Soviet Military Spending as a Percentage of the US Military Spending (ME), 1816-1993

Sources: References to the economic data can be found in Jari Eloranta, “National Defense,” in The Oxford Encyclopedia of Economic History, edited by Joel Mokyr, 30-33 (Oxford: Oxford University Press, 2003b). ME (Military Expenditure) data from Singer and Small (1993), supplemented with the SIPRI (available from: http://www.sipri.org/) data for 1985-1993. Details are available from the author upon request. Exchange rates from Global Financial Data (Online databank), 2003. Available from http://www.globalfindata.com/. The same caveats apply to the underlying currency conversion methods as in Figure 2.

One often-cited outcome of the Cold War arms race is the so-called military-industrial complex (MIC), a term usually referring to the influence that the military and industry exert on each other’s policies. Its more nefarious connotation refers to the unduly large influence that military producers might gain, in such a collusive relationship, over public-sector acquisitions and foreign policy in particular. In fact, the origins of this type of interaction can be found further back in history. As Paul Koistinen has emphasized, the First World War was a watershed in business-government relationships, since businessmen were often brought into government to make supply decisions during this total conflict. Most governments, as a matter of fact, needed the expertise of the core business elites during the world wars. In the United States some form of an MIC came into existence before 1940. Similar developments can be seen in other countries before the Second World War, for example in the Soviet Union. The Cold War simply reinforced these tendencies.[41] Findings by Robert Higgs, for example, establish that the financial performance of the leading defense contracting companies was, on average, much better than that of comparable large corporations during the period 1948-1989. Nonetheless, his findings do not support the normative conclusion that the profits of defense contractors were “too high.”[42]

World spending levels began a slow decline from the 1970s onwards, with the Reagan years being an exception for the US. In 1986, the US military burden was 6.5 percent, whereas in 1999 it was down to 3.0 percent. In France, the military burden declined from its post-war peaks in the 1950s to a mean of 3.6 percent in the period 1977-1999. This decline has been mostly the outcome of the reduction in tensions between the rival blocs and the downfall of the USSR and the communist regimes in Eastern Europe. The USSR was spending almost as much on its armed forces as the United States up until the mid-1980s, and the Soviet military burden was still 12.3 percent in 1990. Under the Russian Federation, with a declining GDP, this level dropped rapidly, to 3.2 percent in 1998. Other nations have similarly scaled down their military spending since the late 1980s and the 1990s. For example, German military spending in constant US dollars was over 52 billion in 1991, whereas by 1999 it had declined to less than 40 billion. In the French case, the decline was from a little over 52 billion in 1991 to below 47 billion in 1999, with the military burden decreasing from 3.6 percent to 2.8 percent.[43]

Overall, according to the SIPRI figures, world military spending fell by about one-third in real terms in 1989-1996, with some fluctuation and even a small increase since then. In the global scheme, world military expenditure is still highly concentrated in a few countries, with the 15 major spenders accounting for 80 percent of the world total in 1999. The newest estimates (see e.g. http://www.sipri.org/) put world military expenditure on a growth trend once again, due to new threats such as international terrorism and the conflicts related to it. In absolute terms, the United States still dominates world military spending, with a 47 percent share of the world total in 2003. The U.S. spending total becomes less impressive when purchasing power parities are utilized. Nonetheless, the United States has entered the third millennium as the world’s only real superpower – a role that it sometimes embraces awkwardly. Whereas the United States was an absent hegemon in the late nineteenth century and the first half of the twentieth, it now has to maintain its presence in many parts of the world, sometimes despite objections from the other players in the international system.[44]

Conclusions

Warfare has played a crucial role in the evolution of human societies. Ancient societies were usually less complicated in terms of the administrative, fiscal, technological, and material demands of warfare. The most pressing problem was commonly maintaining adequate supply for the armed forces during prolonged campaigns. This also constrained the size and expansion of the early empires, at least until the introduction of iron weaponry. The Roman Empire, for example, was able to sustain a large, geographically diverse empire for a long period. The disjointed Middle Ages splintered European societies into smaller communities, in which so-called roving bandits ruled, at least until the arrival of more organized military forces from the tenth century onwards. At the same time, the empires in China and the Muslim world developed into cradles of civilization in terms of scientific discoveries and military technologies.

The geographic and economic expansion of early modern European states started to challenge other regimes all over the world, made possible in part by their military and naval supremacy as well as, later on, their industrial prowess. The age of total war and revolutions in the nineteenth and twentieth centuries finally pushed these states to adopt ever more efficient fiscal systems and enabled some of them to dedicate more than half of their GDP to the war effort during the world wars. Even though military spending was regularly the biggest item in the budget for most states before the twentieth century, it still represented only a modest share of their respective GDPs. The Cold War period again saw high relative spending levels, due to the enduring rivalry between the West and the Communist bloc. Finally, the collapse of the Soviet Union alleviated some of these tensions and lowered aggregate world military spending, if only temporarily. Newer security challenges such as terrorism and various interstate rivalries have again put overall world military spending on a growth path.

The cost of warfare has increased especially since the early modern period. The adoption of new technologies and massive standing armies, in addition to the increase in the “bang-for-buck” (namely, the destructive effect of military investments), have kept military expenditures in a central role in modern fiscal regimes. Although the growth of welfare states in the twentieth century has forced some tradeoffs between “guns and butter,” the spending choices have usually been complementary rather than competing. Thus, the size and spending of governments have increased. Even though the growth in welfare spending has abated somewhat since the 1980s, Peter Lindert argues that it will most likely still experience at least modest expansion in the future. Nor is it likely that military spending will be displaced as a major spending item in national budgets. Various international threats and the lack of international cooperation will ensure that military spending remains the main contender to social expenditures.[45]


[1] I thank several colleagues for their helpful comments, especially Mark Harrison, Scott Jessee, Mary Valante, Ed Behrend, David Reid, as well as an anonymous referee and EH.Net editor Robert Whaples. The remaining errors and interpretations are solely my responsibility.

[2] See Paul Kennedy, The Rise and Fall of the Great Powers: Economic Change and Military Conflict from 1500 to 2000 (London: Fontana, 1989). Kennedy calls this type of approach, following David Landes, “large history.” On criticism of Kennedy’s “theory,” see especially Todd Sandler and Keith Hartley, The Economics of Defense, ed. Mark Perlman, Cambridge Surveys of Economic Literature (Cambridge: Cambridge University Press, 1995) and the studies listed in it. Other examples of long-run explanations can be found in, e.g., Maurice Pearton, The Knowledgeable State: Diplomacy, War, and Technology since 1830 (London: Burnett Books: Distributed by Hutchinson, 1982) and William H. McNeill, The Pursuit of Power: Technology, Armed Force, and Society since A.D. 1000 (Chicago: University of Chicago Press, 1982).

[3] Jari Eloranta, “Kriisien ja konfliktien tutkiminen kvantitatiivisena ilmiönä: Poikkitieteellisyyden haaste suomalaiselle sotahistorian tutkimukselle (The Study of Crises and Conflicts as Quantitative Phenomenon: The Challenge of Interdisciplinary Approaches to Finnish Study of Military History),” in Toivon historia – Toivo Nygårdille omistettu juhlakirja, ed. Kalevi Ahonen, et al. (Jyväskylä: Gummerus Kirjapaino Oy, 2003a).

[4] See Mark Harrison, ed., The Economics of World War II: Six Great Powers in International Comparisons (Cambridge, UK: Cambridge University Press, 1998b). Classic studies of this type are Alan Milward’s works on the European war economies; see e.g. Alan S. Milward, The German Economy at War (London: Athlon Press, 1965) and Alan S. Milward, War, Economy and Society 1939-1945 (London: Allen Lane, 1977).

[5] Sandler and Hartley, The Economics of Defense, xi; Jari Eloranta, “Different Needs, Different Solutions: The Importance of Economic Development and Domestic Power Structures in Explaining Military Spending in Eight Western Democracies during the Interwar Period” (Licentiate Thesis, University of Jyväskylä, 1998).

[6] See Jari Eloranta, “External Security by Domestic Choices: Military Spending as an Impure Public Good among Eleven European States, 1920-1938” (Dissertation, European University Institute, 2002) for details.

[7] Ibid.

[8] Daniel S. Geller and J. David Singer, Nations at War. A Scientific Study of International Conflict, vol. 58, Cambridge Studies in International Relations (Cambridge: Cambridge University Press, 1998), e.g. 1-7.

[9] See e.g. Jack S. Levy, “Theories of General War,” World Politics 37, no. 3 (1985). For an overview, see especially Geller and Singer, Nations at War: A Scientific Study of International Conflict. A classic study of war from the holistic perspective is Quincy Wright, A Study of War (Chicago: University of Chicago Press, 1942). See also Geoffrey Blainey, The Causes of War (New York: Free Press, 1973). On rational explanations of conflicts, see James D. Fearon, “Rationalist Explanations for War,” International Organization 49, no. 3 (1995).

[10] Charles Tilly, Coercion, Capital, and European States, AD 990-1990 (Cambridge, MA: Basil Blackwell, 1990), 6-14.

[11] For more, see especially ibid., Chapters 1 and 2.

[12] George Modelski and William R. Thompson, Leading Sectors and World Powers: The Coevolution of Global Politics and Economics, Studies in International Relations (Columbia, SC: University of South Carolina Press, 1996), 14-40. George Modelski and William R. Thompson, Seapower in Global Politics, 1494-1993 (Houndmills, UK: Macmillan Press, 1988).

[13] Kennedy, The Rise and Fall of the Great Powers: Economic Change and Military Conflict from 1500 to 2000, xiii. On specific criticism, see e.g. Jari Eloranta, “Military Competition between Friends? Hegemonic Development and Military Spending among Eight Western Democracies, 1920-1938,” Essays in Economic and Business History XIX (2001).

[14] Eloranta, “External Security by Domestic Choices: Military Spending as an Impure Public Good among Eleven European States, 1920-1938,” Sandler and Hartley, The Economics of Defense.

[15] Brian M. Pollins and Randall L. Schweller, “Linking the Levels: The Long Wave and Shifts in U.S. Foreign Policy, 1790-1993,” American Journal of Political Science 43, no. 2 (1999), e.g. 445-446. E.g. Alex Mintz and Chi Huang, “Guns versus Butter: The Indirect Link,” American Journal of Political Science 35, no. 1 (1991) suggest an indirect (negative) growth effect via investment at a lag of at least five years.

[16] Carolyn Webber and Aaron Wildavsky, A History of Taxation and Expenditure in the Western World (New York: Simon and Schuster, 1986).

[17] He outlines most of the following in Richard Bonney, “Introduction,” in The Rise of the Fiscal State in Europe c. 1200-1815, ed. Richard Bonney (Oxford: Oxford University Press, 1999b).

[18] Mancur Olson, “Dictatorship, Democracy, and Development,” American Political Science Review 87, no. 3 (1993).

[19] On the British Empire, see especially Niall Ferguson, Empire: The Rise and Demise of the British World Order and the Lessons for Global Power (New York: Basic Books, 2003). Ferguson has also tackled the issue of a possible American empire in a more polemical Niall Ferguson, Colossus: The Price of America’s Empire (New York: Penguin Press, 2004).

[20] Ferguson outlines his analytical framework most concisely in Niall Ferguson, The Cash Nexus: Money and Power in the Modern World, 1700-2000 (New York: Basic Books, 2001), especially Chapter 1.

[21] Webber and Wildavsky, A History of Taxation and Expenditure in the Western World, 39-67. See also McNeill, The Pursuit of Power: Technology, Armed Force, and Society since A.D. 1000.

[22] McNeill, The Pursuit of Power: Technology, Armed Force, and Society since A.D. 1000, 9-12.

[23] Webber and Wildavsky, A History of Taxation and Expenditure in the Western World.

[24] This interpretation of early medieval warfare and societies, including the concept of feudalism, has been challenged in more recent military history literature. See especially John France, “Recent Writing on Medieval Warfare: From the Fall of Rome to c. 1300,” Journal of Military History 65, no. 2 (2001).

[25] Webber and Wildavsky, A History of Taxation and Expenditure in the Western World, McNeill, The Pursuit of Power. Technology, Armed Force, and Society since A.D. 1000. See also Richard Bonney, ed., The Rise of the Fiscal State in Europe c. 1200-1815 (Oxford: Oxford University Press, 1999c).

[26] Ferguson, The Cash Nexus: Money and Power in the Modern World, 1700-2000, Tilly, Coercion, Capital, and European States, AD 990-1990, Jari Eloranta, “National Defense,” in The Oxford Encyclopedia of Economic History, ed. Joel Mokyr (Oxford: Oxford University Press, 2003b). See also Modelski and Thompson, Seapower in Global Politics, 1494-1993.

[27] Tilly, Coercion, Capital, and European States, AD 990-1990, 165, Henry Kamen, “The Economic and Social Consequences of the Thirty Years’ War,” Past and Present April (1968).

[28] Eloranta, “National Defense,” Henry Kamen, Empire: How Spain Became a World Power, 1492-1763, 1st American ed. (New York: HarperCollins, 2003), Douglass C. North, Institutions, Institutional Change, and Economic Performance (New York.: Cambridge University Press, 1990).

[29] Richard Bonney, “France, 1494-1815,” in The Rise of the Fiscal State in Europe c. 1200-1815, ed. Richard Bonney (Oxford: Oxford University Press, 1999a). War expenditure percentages (for the seventeenth and eighteenth centuries) were calculated using the so-called Forbonnais (and Bonney) database(s), available from European State Finance Database: http://www.le.ac.uk/hi/bon/ESFDB/RJB/FORBON/forbon.html and should be considered only illustrative.

[30] Marjolein ’t Hart, “The United Provinces, 1579-1806,” in The Rise of the Fiscal State in Europe c. 1200-1815, ed. Richard Bonney (Oxford: Oxford University Press, 1999). See also Ferguson, The Cash Nexus.

[31] See especially McNeill, The Pursuit of Power.

[32] Eloranta, “External Security by Domestic Choices: Military Spending as an Impure Public Good Among Eleven European States, 1920-1938,” Eloranta, “National Defense.” See also Ferguson, The Cash Nexus. On the military spending patterns of Great Powers in particular, see J. M. Hobson, “The Military-Extraction Gap and the Wary Titan: The Fiscal Sociology of British Defence Policy 1870-1914,” Journal of European Economic History 22, no. 3 (1993).

[33] The practice of total war, of course, is as old as civilizations themselves, ranging from the Punic Wars to the more modern conflicts. Here total war refers to the twentieth-century connotation of the term, embodying the use of all the economic, political, and military might of a nation to destroy another in war. Therefore, even though the destruction of Carthage certainly qualifies as an act of total war, it is only in the nineteenth and twentieth centuries that this type of warfare and strategic thinking came to full fruition. For example, the famous ancient military genius Sun Tzu advocated caution and planning in warfare, rather than using all means possible to win a war: “Thus, those skilled in war subdue the enemy’s army without battle. They capture his cities without assaulting them and overthrow his state without protracted operations.” Sun Tzu, The Art of War (Oxford: Oxford University Press, 1963), 79. With the ideas put forth by Clausewitz (see Carl von Clausewitz, On War (London: Penguin Books, 1982), e.g. Book Five, Chapter II) in the nineteenth century, the French Revolution, and Napoleon, the nature of warfare began to change. Clausewitz’s absolute war did not go as far as prescribing indiscriminate slaughter or other ruthless means to subdue civilian populations, but it did contribute to the new understanding of the means of warfare and military strategy in the industrial age. The generals and despots of the twentieth century drew their own conclusions, and thus total war came to include not only subjugating the domestic economy to the needs of the war effort but also propaganda, the destruction of civilian (economic) targets, and genocide.

[34] Rondo Cameron and Larry Neal, A Concise Economic History of the World: From Paleolithic Times to the Present, 4th ed. (Oxford: The Oxford University Press, 2003), 339. Thus, the estimate in e.g. Eloranta, “National Defense” is a hypothetical minimum estimate originally expressed in Gerard J. de Groot, The First World War (New York: Palgrave, 2001).

[35] See Table 13 in Stephen Broadberry and Mark Harrison, “The Economics of World War I: An Overview,” in The Economics of World War I, ed. Stephen Broadberry and Mark Harrison ((forthcoming), Cambridge University Press, 2005). The figures are, as the authors point out, only tentative.

[36] Eloranta, “External Security by Domestic Choices: Military Spending as an Impure Public Good Among Eleven European States, 1920-1938”, Eloranta, “National Defense”, Webber and Wildavsky, A History of Taxation and Expenditure in the Western World.

[37] Eloranta, “National Defense”.

[38] Mark Harrison, “The Economics of World War II: An overview,” in The Economics of World War II: Six Great Powers in International Comparisons, ed. Mark Harrison (Cambridge, UK: Cambridge University Press, 1998a), Eloranta, “National Defense.”

[39] Cameron and Neal, A Concise Economic History of the World, Harrison, “The Economics of World War II: An Overview,” Broadberry and Harrison, “The Economics of World War I: An Overview.” Again, the same caveats apply to the Harrison-Broadberry figures as disclaimed earlier.

[40] Eloranta, “National Defense”.

[41] Mark Harrison, “Soviet Industry and the Red Army under Stalin: A Military-Industrial Complex?” Les Cahiers du Monde russe 44, no. 2-3 (2003), Paul A.C. Koistinen, The Military-Industrial Complex: A Historical Perspective (New York: Praeger Publishers, 1980).

[42] Robert Higgs, “The Cold War Economy: Opportunity Costs, Ideology, and the Politics of Crisis,” Explorations in Economic History 31, no. 3 (1994); Ruben Trevino and Robert Higgs. 1992. “Profits of U.S. Defense Contractors,” Defense Economics Vol. 3, no. 3: 211-18.

[43] Eloranta, “National Defense”.

[44] See more Eloranta, “Military Competition between Friends? Hegemonic Development and Military Spending among Eight Western Democracies, 1920-1938.”

[45] For more, see especially Ferguson, The Cash Nexus, Peter H. Lindert, Growing Public. Social Spending and Economic Growth since the Eighteenth Century, 2 Vols., Vol. 1 (Cambridge: Cambridge University Press, 2004). On tradeoffs, see e.g. David R. Davis and Steve Chan, “The Security-Welfare Relationship: Longitudinal Evidence from Taiwan,” Journal of Peace Research 27, no. 1 (1990), Herschel I. Grossman and Juan Mendoza, “Butter and Guns: Complementarity between Economic and Military Competition,” Economics of Governance, no. 2 (2001), Alex Mintz, “Guns Versus Butter: A Disaggregated Analysis,” The American Political Science Review 83, no. 4 (1989), Mintz and Huang, “Guns versus Butter: The Indirect Link,” Kevin Narizny, “Both Guns and Butter, or Neither: Class Interests in the Political Economy of Rearmament,” American Political Science Review 97, no. 2 (2003).

Citation: Eloranta, Jari. “Military Spending Patterns in History”. EH.Net Encyclopedia, edited by Robert Whaples. September 16, 2005. URL http://eh.net/encyclopedia/military-spending-patterns-in-history/

Urban Mass Transit In The United States

Zachary M. Schrag, Columbia University

The term “urban mass transit” generally refers to scheduled intra-city service on a fixed route in shared vehicles. Even this definition embraces horse-drawn omnibuses and streetcars, cable cars, electric streetcars and trolley coaches, gasoline and diesel buses, underground and above-ground rail rapid transit, ferries, and some commuter rail service. In the United States mass transit has, for the most part, meant some kind of local bus or rail service, and it is on these modes that this article focuses.

Nationwide in 1990, mass transit carried only 5.3 percent of commuting trips, down from 6.4 percent in 1980, and an even smaller percentage of total trips. But while mass transit may seem insignificant on this national scale, since the early nineteenth century it has shaped American cities and continues to do so. And in an age of concern about greenhouse gases and petroleum dependence, mass transit provides an important alternative to the automobile to millions of Americans.

The Era of Private Entrepreneurs

Omnibuses and horsecars

The history of mass transit on land in the United States begins in the 1830s with the introduction of horse-drawn omnibuses and streetcars in Eastern cities. Omnibuses — stagecoaches modified for local service — originated in France, and the idea spread to New York City in 1829, Philadelphia in 1831, Boston in 1835, and Baltimore in 1844. Omnibuses spared their passengers some fatigue, but they subjected them to a bumpy ride that was scarcely faster than walking. In contrast, horsecars running on iron rails provided smoother and faster travel. First introduced in New York City in 1832, horsecars spread in the 1850s, thanks to a method of laying rail flush with the pavement so it would not interfere with other traffic. By 1853, horsecars in New York alone carried about seven million riders. Whether running omnibuses or horsecars, private operators were granted government franchises to operate their vehicles on specific routes. After the Civil War, these companies began to merge, reducing competition.

Steam railroads

Even as some workers learned to depend on omnibuses and horsecars for their daily commute, others began riding intercity trains between home and work. Wealthy merchants and professionals could afford the fares or annual passes between leafy village and bustling downtown. Yonkers, New York; Newton, Massachusetts; Evanston, Illinois; and Germantown, Pennsylvania, all grew as bedroom communities, connected by steam locomotive to New York City, Boston, Chicago, and Philadelphia. Following the Civil War, some New York entrepreneurs hoped to bring the speed of these steam railroads to city streets by building elevated tracks on iron girders. After a few false starts, by 1876 New York had its first “el,” or elevated railroad. This was the nation’s first rapid transit: local transit running on an exclusive right-of-way between fixed stations.

Horse-drawn vehicles were noisy and smelly, and their motive power vulnerable to disease and injury. Steam locomotives on elevated tracks were even noisier, and their smoke and ash was no more welcome than the horse’s manure. Looking for cleaner alternatives, inventors turned to underground cables, first deployed in 1873. Steam engines in central powerhouses turned these cables in endless loops, allowing operators of cable cars to grip the cable through a slot in the street and be towed along the route. This proved a fairly inefficient means of transmitting power, and though twenty-three cities had cable operations in 1890, most soon scrapped them in favor of electric traction. San Francisco, whose hills challenged electric streetcars, remains a visible exception.

Electric streetcars

In most cities, however, electric streetcars seemed the ideal urban vehicle. They were relatively clean and quick, and more efficient than cable cars. Inaugurated in Richmond, Virginia, in 1888, streetcars — also known as trolleys — rapidly displaced horsecars, so that by 1902, 94 percent of street railway mileage in the United States was electrically powered, and only one percent horse-powered, with cables and other power sources making up the difference.

“Traction magnates” and monopolies

Unlike horsecars, both cable-car and electric-streetcar systems required substantial capital for the power plants, maintenance shops, tracks, electrical conduits, and rolling stock. Seeking economies of scale, entrepreneurs formed syndicates to buy up horsecar companies and their franchises, and, when necessary, bribed local governments. “Traction magnates,” such as Peter Widener in Philadelphia and New York, the brothers Henry and William Whitney in Boston and New York, and Charles Yerkes in Chicago, transformed the industry from one based on monopolies on individual routes to one based on near or complete monopolies in whole cities. But in taking over small companies, the barons also took on enormous corporate debts and watered stocks, leaving the new companies with shaky capital structures. And many behaved as true monopolists, callously packing their cars with riders who had no other choice of transportation. In many cities, the transit companies earned terrible reputations, depriving them of public support in later decades.

At the same time, companies anticipating monopoly profits made several decisions that would prove disastrous when they faced competition from the automobile. To secure franchises and to mollify unions, many companies pledged to employ two men on every vehicle, to remove snow from the streets on which they held franchises, and to pave the space between their tracks. One especially important commitment made by most transit companies was a pledge to forever provide service for a nickel, regardless of the length of the ride, a departure from the European practice of charging by the zone.

The “golden age” of street railways

Throughout the late nineteenth and early twentieth centuries, the growth of street railways was closely tied to real estate development and speculation. Each line extension brought new land within commuting distance of the employment core, sharply raising real estate values. By the 1890s, some entrepreneurs, such as F. M. Smith in Oakland, Henry Huntington in Los Angeles, and Francis Newlands in Washington, D.C., and its suburbs, were building unprofitable streetcar lines in order to profit from the sale of land they had previously purchased along the routes. But they still wanted farebox revenue, and several companies built amusement parks at the ends of their lines in order to get some ridership on weekends. For the most part, riders were drawn from the ranks of white-collar workers who could afford to spend ten cents a day on carfare.

By the late 1890s, mass transit had become indispensable to the life of large American cities. Had the streetcars disappeared, millions of Americans would have been stranded in residential neighborhoods distant from their jobs. But it was structured as a private enterprise, designed to maximize return for its stockholders even as it was required by franchise agreements to serve public needs. Moreover, the industry was premised on the assumption that riders would have no alternative to the streetcar, making revenue growth certain. In this world, transit executives felt little need to worry about watered stock or unprofitable extensions. In the twentieth century, that would change.

From Private to Public

The first subways

The first limit to private enterprise as the basis for mass transit was the capacity of city streets themselves. As streetcars jammed main thoroughfares, city governments looked for ways around the congestion. The London Underground, opened in 1863, showed the promise of an urban subway, but no private company would invest the enormous sums necessary to tunnel below city streets. Likewise, urban transit was so firmly in place as a private enterprise that few Americans imagined it as a function of city government. In the 1890s, the Boston Transit Commission, a public agency, proposed a compromise. It would issue bonds to build a tunnel for streetcars under Tremont Street, then recoup its investment with rents charged to the privately owned street railway whose cars would use the tunnel. Opened in 1897, this short tunnel was the first subway on the continent.

New York’s subway

Meanwhile, in 1894 New York voters approved a similar plan to build transit tunnels using public bonds, then lease the tunnels to a private operator. Though it shared the same financial model as Boston’s, the New York plan was vastly more ambitious. Electric trains, rather than the individual streetcars seen in Boston, would run at high speed the entire length of Manhattan and into the Bronx. The first segment opened in 1904 and proved popular enough to inspire calls for immediate expansion beyond the 21 route miles initially planned. After much debate, in 1913 the city signed the “dual contracts” with two private operators, calling for the construction of another 123 route miles of rapid transit, using both public and private capital.

Impact of World War I

In retrospect, the 1913 dual contracts may have been the high-water mark for privately financed urban mass transit, for within a few years, the industry would be in dire trouble. During and immediately after World War I, inflation robbed the nickel of most of its value, even as wages doubled. Companies begged legislatures for permission to raise their fares, usually in vain. By 1919, street railways in New York, Providence, Buffalo, New Orleans, Denver, St. Louis, Birmingham, Montgomery, Pittsburgh, and several smaller cities were in receivership. In response, President Wilson appointed a Federal Electric Railways Commission, which reported that while electric railways were still necessary and viable private enterprises, it would take a profound restructuring of regulation, labor relations, and capitalization to return them to profitability.

Arrival of automobiles

In the long run, the greatest threat to the transit companies was not inflation but competition from affordable mass-produced automobiles, such as the Model T Ford, fueled by cheap gasoline. In 1915, there was one automobile for every 61 persons in Chicago. Ten years later, the figure was one for every eleven. Nationwide, automobile registrations increased seven and a half times. Not only did each driver represent a lost fare, but many went into business as jitneys, offering rides to commuters who would otherwise take the streetcar. Moreover, automobiles clogged the same city streets used by streetcars, drastically reducing the latter’s average speed. By the mid-1920s, the transit industry was spiraling downward, losing revenue and the ability to offer reliable, swift service. Patronage dropped from a local peak of 17.2 billion in 1926 to a nadir of 11.3 billion in 1933. In several major cities, plans for subways died on the drawing boards.

Beginnings of municipally-owned mass transit

Some reformers believed that the solution was to redefine transit as a public service to be provided by publicly owned agencies or authorities. In 1912, San Francisco launched the effort with its Municipal Railway, to be followed by public systems in Seattle, Detroit, and Toronto. In 1925, New York Mayor John Hylan broke ground on the IND (“Independent”) subway, a city-owned system designed to compete with the private transit operators, whom Hylan considered corrupt.

Switch to buses

For their part, private operators looked for technological fixes. Some companies tried to regain profitability by switching from streetcars to gasoline and diesel buses, a process known as “motorization.” Because buses could use the same streets provided free of charge to private automobiles, they bore lower fixed costs than did streetcars, making them especially attractive for suburban routes with less frequent service. Moreover, because many laws and taxes applied specifically to streetcars, a transit company could shed some of its more expensive obligations by changing its vehicles. But buses could not match the capacity of streetcars, nor could they slip into subway tunnels without concerns about exhaust. Another option was the trolley coach, a rubber-tired bus that, like a streetcar, drew electric power from overhead lines. First deployed in large numbers in the early 1930s, the trolley coach avoided the capital costs of laying steel rails, but trolley coaches never accounted for more than a sixth of total bus ridership.

Meanwhile, in an effort to save surface rail transit, several operators joined to design a new generation of streetcar. Introduced in 1937, the Presidents’ Conference Committee, or PCC, car was streamlined, roomy, and adaptable to various uses, even rapid transit. But it could not reverse the industry-wide decline, especially after 1938, when the Public Utility Holding Company Act took effect. Aimed at reforming the electric industry, this New Deal legislation had the unintended effect of forcing many electric utilities to sell off their street railway subsidiaries, depriving the latter of needed capital.

Impact of World War II

World War II provided a last hurrah for privately operated transit in the United States. In 1942, American automobile manufacturers suspended the production of private automobiles in favor of war materiel, while the federal government imposed gasoline rationing to limit Americans’ use of the cars they already owned. Left without an alternative, Americans turned to mass transit in record numbers. The industry reached its peak in 1946, carrying 23.4 billion riders.

The Age of Subsidy

Collapse of ridership after WWII

Following the war, transit ridership quickly collapsed. Not only were cars again available and affordable, but so were suburban houses, built so far from central employment areas and scattered so sparsely that mass transit was simply impractical. Moreover, the construction of new roads, including federally financed expressways, encouraged automobile commuting, whether by driving alone or in a carpool. As a result, transit ridership dropped from 17.2 billion passengers in 1950 to 11.5 billion in 1955. By 1960, only 8.2 percent of American workers took a bus or streetcar to work, with another 3.9 percent commuting by rapid transit. Moreover, about a quarter of all transit riders were in New York City, whose island geography made automobile ownership less desirable. For American transit companies, there was even worse news: off-peak ridership declined even more steeply than transit commuting. Companies purchased expensive labor and equipment to muster enough capacity to serve the morning and evening commutes, but most of that capacity lay idle during the midday and evening hours.

Abandonment of streetcar lines

The decline in ridership left privately-owned transit companies financially weak and vulnerable to takeover. In an attempt to cope with the resulting decline in revenue, most American transit companies (including dozens acquired by National City Lines, a holding company with ties to bus manufacturer General Motors) chose to abandon their streetcars and their high capital costs. By 1963, streetcars carried only 300,000 riders, down from 12 or 13 billion per year in the 1920s. Some argue that the replacement of roomy, smooth railcars with smaller, polluting diesel buses in fact drove even more passengers away. Nor had transit companies escaped the problems that drove them into bankruptcy in the 1910s; they still faced high labor costs, strikes, inflation, high taxes, traffic congestion and difficulty in raising fares.

Municipal takeovers

In this environment, transit was no longer viable as a profit-making enterprise, and it also proved a drag on the budgets of those cities that had already taken over transit operation. Not wanting to lose mass transit altogether, city governments established publicly-owned transit authorities. The New York Transit Authority, for example, began operating the subway system, elevated lines, and municipally owned bus lines in 1953. Once a private industry that paid taxes, transit now became a public service that absorbed tax dollars.

Increasing federal role

Even municipal takeovers could not stop the bleeding. Desperate, cities turned to the federal government for subsidy. Since 1916, the federal government had financed road building, including, since 1956, ninety percent of the cost of the Interstate Highway System, but there were no comparable funds for mass transit. Beginning in 1961, the federal government financed small-scale experimental projects in various cities. The federal role increased with the passage of the Urban Mass Transportation Act of 1964, which authorized $375 million in aid to the capital costs of transit projects, with each two federal dollars to be matched with one local dollar. Another breakthrough came with the Highway Act of 1973, which gradually allowed states to abandon planned freeways and use their Trust Fund allocations for the capital costs of mass transit projects, though these would be matched at a less generous rate. Later legislation provided some federal aid for transit operating costs as well. Thanks to such measures, by the mid-1970s, transit patronage had reversed its long decline. Having dropped from 17.2 billion rides in 1950 to 6.6 billion in 1972, patronage was up to 8.0 billion in 1984.

Post-1970 rebirth of rail mass transit

Part of the recovery was due to the rebirth of rail transit since the early 1970s. The process began in Toronto, whose transit commission used cash from its heavy wartime ridership to open a new subway in 1954. In 1955, Cleveland opened a short rapid transit line along an old railroad right-of-way, and in 1957, California created the multi-county San Francisco Bay Area Rapid Transit District to allow planning for a rapid transit system there. After years of planning and engineering, the system opened for operation in 1972. It was soon followed by the first segments of rapid transit systems in Washington, D.C. and Atlanta, with additional systems opening later in Miami and Baltimore. These new rail systems were fantastically expensive, absorbing billions of federal aid dollars. But they are technically impressive, and they can attract riders. In Washington, for example, the percentage of people entering the city core during the morning rush hour who use transit rose from 27 percent in 1976, the year the Metro system opened, to 38 percent in 1996, an impressive gain when compared to the massive losses of previous decades. More recently, several cities have invested in new light-rail systems, similar to the streetcars of a century earlier but generally running on exclusive right-of-way, thus avoiding the traffic congestion that doomed the streetcar.

Recent legislation

Another bit of good news for the industry came in 1991, as Congress passed the Intermodal Surface Transportation Efficiency Act (ISTEA). (The law was renewed in 1998 as the Transportation Equity Act for the 21st Century, or TEA-21.) Both pieces of legislation increased the flexibility with which state governments could use their federal transportation grants, encouraging relatively more investment in transit, bicycle, and pedestrian projects and relatively less new road building.

At the start of the twenty-first century, mass transit remains an industry defined by public ownership, high costs, and low revenues. But few would argue that it is unnecessary. Indeed, several trends — increased congestion, concerns about energy shortages, citizen resistance to highway-building, and an aging population — suggest that mass transit will continue as a vital component of metropolitan America.

Continuing debates about mass transit

In large part because of these many policy implications, the history of urban transit in the United States has been fiercely debated. At one extreme are those who believe that mass transit as a thriving industry died of foul play, the victim of a criminal conspiracy of automobile, rubber, and oil producers who hoped to force Americans to depend on their cars. At the other extreme are those who see the decline of transit as the product of market forces, as a free and wealthy people chose the automobile in preference to streetcars and buses. In between, most scholars emphasize the importance of policy choices, ranging from road building to taxation to traffic management, which encouraged driving and hampered the transit industry’s ability to compete. But even within this interpretation, the degree to which these policies were the product of an open and democratic political system or were imposed by a small elite remains the subject of a vital historiographical debate.

References

Barrett, Paul. The Automobile and Urban Transit: The Formation of Public Policy in Chicago, 1900-1930. Philadelphia: Temple University Press, 1983.

Bottles, Scott L. Los Angeles and the Automobile: The Making of the Modern City. Berkeley: University of California Press, 1987.

Cheape, Charles W. Moving the Masses: Urban Public Transit in New York, Boston, and Philadelphia, 1880-1912. Cambridge, MA.: Harvard University Press, 1980.

Cudahy, Brian J. Cash, Tokens, and Transfers: A History of Urban Mass Transit in North America. New York: Fordham University Press, 1990.

Foster, Mark S. From Streetcar to Superhighway: American City Planners and Urban Transportation, 1900-1940. Philadelphia: Temple University Press, 1981.

Hood, Clifton. 722 Miles: The Building of the Subways and How They Transformed New York. New York: Simon & Schuster, 1993.

Jackson, Kenneth T. Crabgrass Frontier: The Suburbanization of the United States. New York: Oxford University Press, 1985.

Miller, John A. Fares, Please!: A Popular History of Trolleys, Horse-Cars, Street-Cars, Buses, Elevateds, and Subways. New York: Dover Publications, 1960.

Owen, Wilfred. The Metropolitan Transportation Problem. Washington: Brookings Institution, 1966.

Smerk, George M. The Federal Role In Urban Mass Transportation. Bloomington: Indiana University Press, 1991.

St. Clair, David James, The Motorization of American Cities. New York: Praeger, 1986.

Citation: Schrag, Zachary. “Urban Mass Transit In The United States”. EH.Net Encyclopedia, edited by Robert Whaples. May 7, 2002. URL http://eh.net/encyclopedia/urban-mass-transit-in-the-united-states/

William Marshall

David R. Stead, University of York

William Marshall (1745-1818) was one of the two leading writers on eighteenth century English agriculture, the other and far better known being his great rival Arthur Young. The younger son of William and Alice, yeoman farmers in Sinnington, in the North Riding of Yorkshire, Marshall spent the first fourteen years of his working life employed in commerce in London and the West Indies. After what he considered was a miraculous recovery from illness, Marshall decided to devote himself to the study of agriculture, which he had already been pursuing in his spare time. His method of research differed from the contemporary procedure, exemplified by Young, which was to investigate farming practices by briefly touring a county and interviewing the inhabitants. Marshall thought that the appropriate unit of analysis was the natural agricultural district rather than the regions somewhat artificially demarcated by county boundaries. He also believed that at least twelve months’ personal observation and experience of farming in an area was required before a proper assessment could be made.

Accordingly, in 1774 Marshall took a farm near Croydon, Surrey, and four years later published an account of his experiences there. In 1780 he applied for a grant from the Society of Arts to conduct his residential research elsewhere, but the committee – which included Young – rejected his request. Marshall instead funded himself by finding employment as an estate manager in Norfolk and then Staffordshire. In later years he resided and worked in a number of places throughout the country, and in 1798 finally completed his ambitious twelve-volume study of England’s Rural Economy. He was also intermittently employed as a landscape gardener, writing three books on the practice.

Marshall was a proponent of the establishment of a state-sponsored body to promote improved farming standards, but when the Board of Agriculture was created in 1793 the post of Secretary went to Young. Marshall disliked the Board’s decision to rapidly compile surveys of counties, but nevertheless contributed the report covering the central Highlands of Scotland. By the time he married Elizabeth Hodgson in 1807, Marshall was pursuing his second ambitious project, a Review and Abstract of the Board’s county surveys. His Review, which ran to five volumes published over ten years, was sharply critical of the quality of the reports, likening Young to ‘superficial charlatans.’ Marshall certainly lacked Young’s vigorous writing style and contemporary status as an internationally renowned agricultural expert, but historians continue to debate who was the more accurate and pioneering investigator. At the time of his death, Marshall was acting on his long-standing proposal for an agricultural college by building one at his home in Pickering, in his native county of Yorkshire.

Bibliography

Fussell, G. E. “My Impressions of William Marshall.” Agricultural History 23 (1949): 57-61.

Horn, Pamela. William Marshall (1745-1818) and the Georgian Countryside. Abingdon: Beacon Publications, 1982.

Kerridge, Eric. “Arthur Young and William Marshall.” History Studies 1 (1968): 43-53.

Citation: Stead, David. “William Marshall”. EH.Net Encyclopedia, edited by Robert Whaples. November 18, 2003. URL http://eh.net/encyclopedia/william-marshall/

The Marshall Plan, 1948-1951

Albrecht Ritschl, Humboldt Universitaet – Berlin

Between 1948 and 1951, the United States poured financial aid totaling $13 billion (about $100 billion at 2003 prices) into the economies of Western Europe. Officially termed the European Recovery Program (ERP), the Marshall Plan was approved by Congress in the Economic Cooperation Act of April 1948. After a transitional 90-day recovery program, the Marshall Plan spanned three ERP years from July 1948 to June 1951. Congress appropriated payments to European countries in annual installments. Most U.S. assistance under the ERP took the form of grants; the loan component had deliberately been kept low to avoid transfer problems. Distribution of the ERP funds among the recipient countries and their allocation to key sectors were placed in the hands of a U.S. agency operating in Europe, the Economic Cooperation Administration (ECA). Countries would present requests for deliveries of goods to the ECA, which evaluated them and decided according to a set scheme of priorities. Dollar payments by the ECA for any deliveries were complemented by a system of national matching funds in the recipient countries, called counterpart funds. Countries would pay for ERP deliveries not in U.S. dollars but in their own national currencies, and these payments were credited to their respective counterpart funds. Mindful of the German transfer problem of the interwar period, no attempt was made to convert these payments into U.S. dollars. Instead, the ECA employed the counterpart funds to channel investment into bottleneck sectors of the respective national economies. Repayment to the U.S. of the ERP’s loan component was effected in the mid-1950s.

The Marshall Plan was by no means the first U.S. aid program for post-war Europe. During 1945-1947 the U.S. had already paid out substantial financial assistance to Europe under a variety of schemes; in total annual amount, these payments were actually larger than the Marshall Plan itself. One key element of the Marshall Plan was to bundle existing, rival programs into a single package and to identify and iron out inconsistencies. The origin of the Marshall Plan lay precisely in a crisis of these earlier aid schemes. Extreme weather conditions in Europe in 1946/47 had disrupted an already shaky system of food rationing, exacerbated a coal and power shortage, and threatened to slow down the pace of recovery in Western Europe. Faced with increasing doubts in Congress about the efficiency of the existing programs, the Truman administration felt the need to come up with a unifying concept. The Marshall Plan differed from its predecessors mainly in the centralized administration of aid allotments and the strengthened link with America’s political agenda. Researchers now broadly agree that any effects of the Marshall Plan must have operated through its political conditionality far more than through its size.

The Marshall Plan also did not bring about the immediate integration of Europe into international markets. Large external debts presented a serious obstacle to liberalization of Europe’s foreign exchange markets. A British attempt in 1947 to lift capital controls triggered a run on Britain’s foreign exchange reserves, and was abandoned after six weeks. As a result, markets would not easily provide the large capital imports needed for European reconstruction. The prospect of having to finance Europe’s so-called dollar gap out of U.S. aid indefinitely was instrumental in shaping the Marshall Plan. During the three years of the Plan’s operation, U.S. policy temporarily turned away from the goal of implementing the Bretton Woods system. Instead, it focused on the more modest goal of liberalizing trade and payments within Europe. To this end, the European Payments Union (EPU) was established in 1950. It lifted most capital controls within Europe, and combined a European fixed exchange rate system with a first round of trade liberalization among its members (Kaplan and Schleiminger (1989)). Although itself independent of the Marshall Plan, the EPU’s system of overdrafts and drawing rights was backed by ECA funds. The EPU was designed to smooth Europe’s transition to full convertibility with the Bretton Woods system, and had largely achieved this goal when it was dissolved formally in 1958 (Eichengreen (1993)).

Competing Interpretations of the Effects of the Marshall Plan

The Marshall Plan is still renowned as a showcase of successful U.S. intervention abroad. It was hailed by contemporaries as the decisive kick that pushed Western Europe beyond the threshold of sustained recovery (e.g., Ellis (1950), Wallich (1982 [1955])). Later observers sympathetic to the Marshall Plan pointed to its high political payoff and its allegedly strong multiplier effects (e.g., Arkes (1972), van der Wee (1986)). Still today, economic folklore credits the Marshall Plan with everything that improved in Europe after the war: the restoration of decent food supplies, the opening of supply bottlenecks in industry, and, most importantly, the reconstruction of capital equipment and housing stocks in the devastated economies of Western Europe.

Later analyses of the Marshall Plan have disagreed fundamentally with this favorable interpretation, and have offered more skeptical views. An older literature interpreted the Marshall Plan largely as an American export program, inspired by Keynesian fears about stagnation in the U.S. post-war economy. At times enriched with a good dose of political anti-Americanism, this interpretation was quick to assume that Marshall Aid primarily served the interests of U.S. big business.

A revision to this doctrine highlighted the small relative magnitude of the Marshall Plan. U.S. assistance hardly exceeded 2.5% of GNP of the recipient countries, and accounted for less than 20% of capital formation in that period. The allocation of aid often seemed to follow political, not economic needs: nearly half the resources never arrived in the disaster areas on the former European battlefields but served to buy political support in England and France, and to fend off communist threats in various countries. Also, the overall political outcome hardly seemed to fit with U.S. plans. Post-war Europe emerged from the Marshall Plan as a largely protectionist bloc of countries under French leadership. Rather than integrating smoothly into the Bretton Woods system as envisaged by the U.S., Europe seemed to work towards its own economic and financial integration. Epitomized by the work of Milward (1984), this line of research sees France as the main winner over the U.S. in a contest over political dominance in post-war Europe. In this perspective, Marshall Aid appears as a frustrated, economically less-than-significant attempt to influence the course of events in Europe.

This interpretation has seen its own revision. In spite of its small contribution to aggregate output growth, the Marshall Plan may have played a critical role in opening strategic bottlenecks in key industries. Borchardt and Buchheim (1991) argued that raw material imports under the Marshall Plan accelerated the recovery of West German manufacturing. De Long and Eichengreen (1993) argued for Marshall Plan conditionality as a key element in breaking up structural rigidities and bringing about readjustment in the recipient economies. This perspective is a classical story about backward and forward linkages: according to it, the Marshall Plan relaxed binding constraints in a complex input-output framework. Consequently, a purely macroeconomic perspective would be misleading. However, as Eichengreen and Uzan (1992) pointed out, most of these effects were probably temporary, and even their magnitude is questionable. Conditionality and the investment of counterpart funds into strategic sectors may have accelerated the speed of Europe’s convergence back to its steady state. However, to affect the conditional steady state itself, the Marshall Plan would have had to accomplish more than that, and solve a cooperation problem that free markets could not easily handle.

One such cooperation problem was a hold-up problem in labor markets, a theme recurrent also in Eichengreen (1996). Agents in Europe’s highly cartelized labor markets had the choice between reverting to an uncooperative equilibrium with high wage demands and low investment, or a new equilibrium with temporary wage restraint and high investment rates. To the extent that the ECA successfully linked Marshall Plan deliveries to wage restraint in collective bargaining, it implemented a low-wage, high-investment equilibrium. Again, however, from a neoclassical perspective this may have affected the speed of convergence more than the steady state itself.

There was also a bigger, international cooperation problem in whose solution the Marshall Plan was instrumental. Germany’s financial war machinery had left behind large amounts of debts owed to the formerly occupied countries. To this were added reparation demands that potentially dwarfed those of World War I. Any scheme for economic recovery and cooperation in Western Europe would have to deal with these unsettled financial consequences of World War II. At the same time, it had to address the security concerns of America’s allies, which perceived any reconstruction of Germany beyond the necessary minimum as a future threat. All of this implied defining a role for postwar Germany, a delicate task that had initially been left open.

The Monnet Plan for French postwar reconstruction envisioned shifting the center of European heavy industry from Germany’s Ruhr valley to France. U.S. postwar policies were initially built on similar principles: under the Morgenthau Plan, Germany’s heavy industry would be cut back and the German economy would be restructured to be based on light industry and agriculture. The price of these policies consisted of continued U.S. assistance to Europe. Coal and steel as well as machinery were shipped to Europe across the Atlantic, while German heavy industry, a traditional exporter of such items, was operating far below capacity. Among other things, the Marshall Plan was also a reaction to this problem of deficient German deliveries to Europe.

Diplomatic historians have long argued that German reconstruction under U.S. political aegis was the core of the Marshall Plan (see particularly Gimbel (1976) and Hogan (1987)). Given the continued U.S. military presence in Europe, self-sustained recovery and economic cooperation could be implemented, with German exports substituting for U.S. deliveries to Western Europe. Berger and Ritschl (1995) document the diplomatic arm-twisting, especially of France, by the U.S., and interpret the Marshall Plan as a set of institutions designed to serve as a commitment device for economic cooperation within Europe. To implement a cooperative equilibrium, U.S. policies linked Marshall Aid to free trade within Europe, to an agreement over the economic reconstruction of West Germany, and to a standstill regarding reparations and war debts as long as Germany was divided. Viewed from this perspective, Marshall Aid and its conditionality were merely the outer shell of a program whose core was a far wider political agenda for economic cooperation in Western Europe.

References

Arkes, Hadley. Bureaucracy, the Marshall Plan, and the National Interest. Princeton: Princeton University Press, 1972.

Berger, Helge and Albrecht Ritschl. “Germany and the Political Economy of the Marshall Plan, 1947-1952: A Re-Revisionist View.” In Europe’s Postwar Recovery, edited by Barry Eichengreen, 199-245. Cambridge: Cambridge University Press, 1995.

Borchardt, Knut and Christoph Buchheim. “The Marshall Plan and Key Economic Sectors: A Microeconomic Perspective.” In The Marshall Plan and Germany, edited by Charles S. Maier and Gunter Bischof, 410-451. Oxford: Berg, 1991.

De Long, J. Bradford and Barry Eichengreen. “The Marshall Plan: History’s Most Successful Structural Adjustment Program.” In Postwar Economic Reconstruction and Lessons for the East Today, edited by Rudiger Dornbusch et al., 189-230. Cambridge: MIT Press, 1993.

Eichengreen, Barry. Reconstructing Europe’s Trade and Payments: The European Payments Union. Manchester: Manchester University Press, 1993.

Eichengreen, Barry. “Institutions and Economic Growth: Europe after World War II.” In Economic Growth in Europe since 1945, edited by Nicholas Crafts and Gianni Toniolo, 38-70. Cambridge: Cambridge University Press, 1996.

Eichengreen, Barry and Marc Uzan. “The Marshall Plan: Economic Effects and Implications for Eastern Europe and the USSR.” Economic Policy 14 (1992): 14-75.

Ellis, Howard. The Economics of Freedom: The Progress and Future of Aid to Europe. New York: Harper & Row, 1950.

Gimbel, John. The Origins of the Marshall Plan. Stanford: Stanford University Press, 1976.

Hogan, Michael J. The Marshall Plan, Britain, and the Reconstruction of Western Europe, 1947-1952. Cambridge: Cambridge University Press, 1987.

Kaplan, Jacob and Gunter Schleiminger. The European Payments Union: Financial Diplomacy in the 1950s. Oxford: Oxford University Press, 1989.

Milward, Alan S. The Reconstruction of Western Europe, 1945-1951. London: Methuen, 1984.

van der Wee, Herman. Prosperity and Upheaval: The World Economy, 1945-1980. Berkeley: University of California Press, 1986.

Wallich, Henry. Mainsprings of the German Revival. New Haven: Yale University Press, 1982 (1955).

Citation: Ritschl, Albrecht. “The Marshall Plan, 1948-1951”. EH.Net Encyclopedia, edited by Robert Whaples. February 10, 2008. URL http://eh.net/encyclopedia/the-marshall-plan-1948-1951/

Thomas Robert Malthus

David R. Stead, University of York

The Reverend Thomas Robert Malthus (1766-1834) is famous for his pessimistic prediction that humankind would struggle to feed itself. Born in Wotton, Surrey, Robert Malthus (he preferred his second name) was the sixth child of Daniel and Henrietta, members of the English country gentry. After graduating from Jesus College, Cambridge University, Malthus entered the Church of England as curate of Okewood, Surrey. In 1798 he published his seminal An Essay on the Principle of Population. It contended that population has the potential to expand in a geometric progression (e.g. 1, 2, 4, 8, 16, 32…) but that food supplies can only increase in an arithmetic progression (e.g. 1, 2, 3, 4, 5, 6…), probably because of diminishing returns to producing food on the limited available amount of farmland. Since the supply of food cannot keep pace with the burgeoning numbers of people, the population will be reduced by the “positive checks” of war, disease and starvation. Malthus argued that the best means of escaping what has subsequently been called “the Malthusian trap” was for people to adopt the “preventive check” of limiting their fertility by marrying later in life. Malthus himself married Harriet Eckersall at the age of 38 (late for the period) in 1804, a year after he became rector of Walesby, Lincolnshire. The couple had three children.
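Malthus’s two progressions can be given a minimal algebraic sketch (a modern formalization, not his own notation): if population doubles each period while the food supply grows only by a constant increment per period, the amount of food available per person must eventually fall, whatever the starting values.

$$
P_t = P_0 \cdot 2^{t}, \qquad F_t = F_0 \cdot (1 + t), \qquad
\frac{F_t}{P_t} = \frac{F_0}{P_0} \cdot \frac{1 + t}{2^{t}} \;\longrightarrow\; 0
\quad \text{as } t \to \infty .
$$

Here $t$ indexes doubling periods, which Malthus put at roughly twenty-five years; in his argument it is the positive and preventive checks described above that prevent this ratio from actually collapsing.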

First published anonymously, An Essay on Population scandalized many but quickly established Malthus as one of the leading economists in England. Appointed professor of political economy at the East India College, Hertfordshire, in 1805, Malthus wrote about a variety of economic issues, including the theory of rent and the Corn Laws. Ironically, at about the time Malthus published his pessimistic view, want of food no longer appears to have provided a serious check to English population growth. Malthus’ predictions proved inaccurate chiefly because he failed to foresee the enormous impact that science and technology was to have in squeezing increasing amounts of food out of each hectare of land. In the two centuries since the publication of An Essay on Population, other writers have similarly forecast mass famines – including Paul Ehrlich’s The Population Bomb of 1968 – but human ingenuity, together with falling birth rates in many parts of the world, has meant that food production has more than kept pace with population growth. The malnutrition present today is largely a result of an inadequate distribution of food, not insufficient production.

Bibliography

Ehrlich, Paul R. The Population Bomb. New York: Ballantine Books, 1968.

Hollander, Samuel. The Economics of Thomas Robert Malthus. Toronto: University of Toronto Press, 1997.

James, Patricia. Population Malthus: His Life and Times. London: Routledge & Kegan Paul, 1979.

Kelley, Allen C. “Economic Consequences of Population Change in the Third World.” Journal of Economic Literature 26 (1988): 1685-1728.

Winch, Donald. Malthus. Oxford: Oxford University Press, 1987.

Wrigley, E. A. and David Souden, editors. The Works of Thomas Robert Malthus (8 volumes). London: Pickering, 1986.

Citation: Stead, David R. “Thomas Robert Malthus”. EH.Net Encyclopedia, edited by Robert Whaples. December 19, 2003. URL http://eh.net/encyclopedia/thomas-robert-malthus/

Economic History of Malaysia

John H. Drabble, University of Sydney, Australia

General Background

The Federation of Malaysia (see map), formed in 1963, originally consisted of Malaya, Singapore, Sarawak and Sabah. Due to internal political tensions Singapore was obliged to leave in 1965. Malaya is now known as Peninsular Malaysia, and the two other territories on the island of Borneo as East Malaysia. Prior to 1963 these territories were under British rule for varying periods from the late eighteenth century. Malaya gained independence in 1957, Sarawak and Sabah (the latter known previously as British North Borneo) in 1963, and Singapore full independence in 1965. These territories lie between 2 and 6 degrees north of the equator. The terrain consists of extensive coastal plains backed by mountainous interiors. The soils are not naturally fertile but the humid tropical climate subject to monsoonal weather patterns creates good conditions for plant growth. Historically much of the region was covered in dense rainforest (jungle), though much of this has been removed for commercial purposes over the last century leading to extensive soil erosion and silting of the rivers which run from the interiors to the coast.


The present government is a parliamentary system at the federal level (located in Kuala Lumpur, Peninsular Malaysia) and at the state level, based on periodic general elections. Each Peninsular state (except Penang and Melaka) has a traditional Malay ruler, the Sultan, one of whom is elected as paramount ruler of Malaysia (Yang dipertuan Agung) for a five-year term.

The population at the end of the twentieth century approximated 22 million and is ethnically diverse, consisting of 57 percent Malays and other indigenous peoples (collectively known as bumiputera), 24 percent Chinese, 7 percent Indians and the balance “others” (including a high proportion of non-citizen Asians, e.g., Indonesians, Bangladeshis, Filipinos) (Andaya and Andaya, 2001, 3-4).

Significance as a Case Study in Economic Development

Malaysia is generally regarded as one of the most successful non-western countries to have achieved a relatively smooth transition to modern economic growth over the last century or so. Since the late nineteenth century it has been a major supplier of primary products to the industrialized countries: tin, rubber, palm oil, timber, oil, liquefied natural gas, etc.

However, since about 1970 the leading sector in development has been a range of export-oriented manufacturing industries such as textiles, electrical and electronic goods, rubber products and so on. Government policy has generally accorded a central role to foreign capital, while at the same time working towards more substantial participation for domestic, especially bumiputera, capital and enterprise. By 1990 the country had largely met the criteria for Newly-Industrialized Country (NIC) status (30 percent of exports consisting of manufactured goods). While the Asian economic crisis of 1997-98 slowed growth temporarily, the current plan, titled Vision 2020, aims to achieve “a fully developed industrialized economy by that date. This will require an annual growth rate in real GDP of 7 percent” (Far Eastern Economic Review, Nov. 6, 2003). Malaysia is perhaps the best example of a country in which the economic roles and interests of various racial groups have been pragmatically managed over the long term without significant loss of growth momentum, despite the ongoing presence of inter-ethnic tensions which have occasionally erupted in violence, notably in 1969 (see below).

The Premodern Economy

Malaysia has a long history of internationally valued exports, being known from the early centuries A.D. as a source of gold, tin and exotics such as birds’ feathers, edible birds’ nests, aromatic woods, tree resins etc. The commercial importance of the area was enhanced by its strategic position athwart the seaborne trade routes from the Indian Ocean to East Asia. Merchants from both these regions, Arabs, Indians and Chinese regularly visited. Some became domiciled in ports such as Melaka [formerly Malacca], the location of one of the earliest local sultanates (c.1402 A.D.) and a focal point for both local and international trade.

From the early sixteenth century the area was increasingly penetrated by European trading interests, first the Portuguese (from 1511), then the Dutch East India Company [VOC] (1602) in competition with the English East India Company [EIC] (1600) for the trade in pepper and various spices. By the late eighteenth century the VOC was dominant in the Indonesian region while the EIC acquired bases in Malaysia, beginning with Penang (1786), Singapore (1819) and Melaka (1824). These were major staging posts in the growing trade with China and also served as footholds from which to expand British control into the Malay Peninsula (from 1870), and northwest Borneo (Sarawak from 1841 and North Borneo from 1882). Over these centuries there was an increasing inflow of migrants from China attracted by the opportunities in trade and as a wage labor force for the burgeoning production of export commodities such as gold and tin. The indigenous people also engaged in commercial production (rice, tin), but remained basically within a subsistence economy and were reluctant to offer themselves as permanent wage labor. Overall, production in the premodern economy was relatively small in volume and technologically undeveloped. The capitalist sector, already foreign-dominated, was still in its infancy (Drabble, 2000).

The Transition to Capitalist Production

The nineteenth century witnessed an enormous expansion in world trade which, between 1815 and 1914, grew on average at 4-5 percent a year compared to 1 percent in the preceding hundred years. The driving force came from the Industrial Revolution in the West, which saw the innovation of large-scale factory production of manufactured goods made possible by technological advances, accompanied by more efficient communications (e.g., railways, cars, trucks, steamships, international canals [Suez 1869, Panama 1914], telegraphs) which speeded up and greatly lowered the cost of long-distance trade. Industrializing countries required ever-larger supplies of raw materials as well as foodstuffs for their growing populations. Regions such as Malaysia with ample supplies of virgin land and relative proximity to trade routes were well placed to respond to this demand. What was lacking was an adequate supply of capital and wage labor. In both aspects, the deficiency was supplied largely from foreign sources.

As expanding British power brought stability to the region, Chinese migrants started to arrive in large numbers with Singapore quickly becoming the major point of entry. Most arrived with few funds but those able to amass profits from trade (including opium) used these to finance ventures in agriculture and mining, especially in the neighboring Malay Peninsula. Crops such as pepper, gambier, tapioca, sugar and coffee were produced for export to markets in Asia (e.g. China), and later to the West after 1850 when Britain moved toward a policy of free trade. These crops were labor, not capital, intensive and in some cases quickly exhausted soil fertility and required periodic movement to virgin land (Jackson, 1968).

Tin

Besides ample land, the Malay Peninsula also contained substantial deposits of tin. International demand for tin rose progressively in the nineteenth century due to the discovery of a more efficient method for producing tinplate (for canned food). At the same time deposits in major suppliers such as Cornwall (England) had been largely worked out, thus opening an opportunity for new producers. Traditionally tin had been mined by Malays from ore deposits close to the surface. Difficulties with flooding limited the depth of mining; furthermore their activity was seasonal. From the 1840s the discovery of large deposits in the Peninsula states of Perak and Selangor attracted large numbers of Chinese migrants who dominated the industry in the nineteenth century bringing new technology which improved ore recovery and water control, facilitating mining to greater depths. By the end of the century Malayan tin exports (at approximately 52,000 metric tons) supplied just over half the world output. Singapore was a major center for smelting (refining) the ore into ingots. Tin mining also attracted attention from European, mainly British, investors who again introduced new technology – such as high-pressure hoses to wash out the ore, the steam pump and, from 1912, the bucket dredge floating in its own pond, which could operate to even deeper levels. These innovations required substantial capital for which the chosen vehicle was the public joint stock company, usually registered in Britain. Since no major new ore deposits were found, the emphasis was on increased efficiency in production. European operators, again employing mostly Chinese wage labor, enjoyed a technical advantage here and by 1929 accounted for 61 percent of Malayan output (Wong Lin Ken, 1965; Yip Yat Hoong, 1969).

Rubber

While tin mining brought considerable prosperity, it was a non-renewable resource. In the early twentieth century it was the agricultural sector which came to the forefront. The crops mentioned previously had boomed briefly but were hard pressed to survive severe price swings and the pests and diseases that were endemic in tropical agriculture. The cultivation of rubber-yielding trees became commercially attractive as a raw material for new industries in the West, notably tires for the booming automobile industry, especially in the U.S. Previously rubber had come from scattered trees growing wild in the jungles of South America, with production only expandable at rising marginal costs. Cultivation on estates generated economies of scale. In the 1870s the British government organized the transport of specimens of the tree Hevea brasiliensis from Brazil to colonies in the East, notably Ceylon and Singapore. There the trees flourished and, after initial hesitancy over the five years needed for the trees to reach productive age, planters both Chinese and European rushed to invest. The boom reached vast proportions as the rubber price reached record heights in 1910 (see Fig. 1). Average values fell thereafter but investors were heavily committed and planting continued (also in the neighboring Netherlands Indies [Indonesia]). By 1921 the rubber acreage in Malaysia (mostly in the Peninsula) had reached 935,000 hectares (about 2.3 million acres), or some 55 percent of the total in South and Southeast Asia, while output stood at 50 percent of world production.

Fig.1. Average London Rubber Prices, 1905-41 (current values)

As a result of this boom, rubber quickly surpassed tin as Malaysia’s main export product, a position that it was to hold until 1980. A distinctive feature of the industry was that the technology of extracting the rubber latex from the trees (called tapping) by an incision with a special knife, and its manufacture into various grades of sheet known as raw or plantation rubber, was easily adopted by a wide range of producers. The larger estates, mainly British-owned, were financed (as in the case of tin mining) through British-registered public joint stock companies. For example, between 1903 and 1912 some 260 companies were registered to operate in Malaya. Chinese planters for the most part preferred to form private partnerships to operate estates which were on average smaller. Finally, there were the smallholdings (under 40 hectares or 100 acres) of which those at the lower end of the range (2 hectares/5 acres or less) were predominantly owned by indigenous Malays who found growing and selling rubber more profitable than subsistence (rice) farming. These smallholders did not need much capital since their equipment was rudimentary and labor came either from within their family or in the form of share-tappers who received a proportion (say 50 percent) of the output. In Malaya in 1921 roughly 60 percent of the planted area was estates (75 percent European-owned) and 40 percent smallholdings (Drabble, 1991, 1).

The workforce for the estates consisted of migrants. British estates depended mainly on migrants from India, brought in under government auspices with fares paid and accommodation provided. Chinese businesses looked to the “coolie trade” from South China, with passage expenses advanced that migrants subsequently had to pay off. The flow of immigration was directly related to economic conditions in Malaysia. For example, arrivals of Indians averaged 61,000 a year between 1900 and 1920. Substantial numbers also came from the Netherlands Indies.

Thus far, most capitalist enterprise was located in Malaya. Sarawak and British North Borneo had a similar range of mining and agricultural industries in the nineteenth century, but their geographical location slightly away from the main trade route (see map) and their rugged internal terrain, costly for transport, made them less attractive to foreign investment. However, the discovery of oil by a subsidiary of Royal Dutch-Shell, with production starting in 1907, put Sarawak more prominently in the business of exports. As in Malaya, the labor force came largely from immigrants from China and, to a lesser extent, Java.

The growth in production for export in Malaysia was facilitated by the development of an infrastructure of roads, railways, ports (e.g., Penang, Singapore) and telecommunications under the auspices of the colonial governments, though again this was considerably more advanced in Malaya (Amarjit Kaur, 1985, 1998).

The Creation of a Plural Society

By the 1920s the large inflows of migrants had created a multi-ethnic population of the type which the British scholar, J.S. Furnivall (1948) described as a plural society in which the different racial groups live side by side under a single political administration but, apart from economic transactions, do not interact with each other either socially or culturally. Though the original intention of many migrants was to come for only a limited period (say 3-5 years), save money and then return home, a growing number were staying longer, having children and becoming permanently domiciled in Malaysia. The economic developments described in the previous section were unevenly located, for example, in Malaya the bulk of the tin mines and rubber estates were located along the west coast of the Peninsula. In the boom-times, such was the size of the immigrant inflows that in certain areas they far outnumbered the indigenous Malays. In social and cultural terms Indians and Chinese recreated the institutions, hierarchies and linguistic usage of their countries of origin. This was particularly so in the case of the Chinese. Not only did they predominate in major commercial centers such as Penang, Singapore, and Kuching, but they controlled local trade in the smaller towns and villages through a network of small shops (kedai) and dealerships that served as a pipeline along which export goods like rubber went out and in return imported manufactured goods were brought in for sale. In addition Chinese owned considerable mining and agricultural land. This created a distribution of wealth and division of labor in which economic power and function were directly related to race. In this situation lay the seeds of growing discontent among bumiputera that they were losing their ancestral inheritance (land) and becoming economically marginalized. As long as British colonial rule continued the various ethnic groups looked primarily to government to protect their interests and maintain peaceable relations. An example of colonial paternalism was the designation from 1913 of certain lands in Malaya as Malay Reservations in which only indigenous people could own and deal in property (Lim Teck Ghee, 1977).

Benefits and Drawbacks of an Export Economy

Prior to World War II the international economy was divided very broadly into the northern and southern hemispheres. The former contained most of the industrialized manufacturing countries and the latter the principal sources of foodstuffs and raw materials. The commodity exchange between the spheres was known as the Old International Division of Labor (OIDL). Malaysia’s place in this system was as a leading exporter of raw materials (tin, rubber, timber, oil, etc.) and an importer of manufactures. Since relatively little processing was done on the former prior to export, most of the value-added component in the final product accrued to foreign manufacturers, e.g. rubber tire manufacturers in the U.S.

It is clear from this situation that Malaysia depended heavily on earnings from exports of primary commodities to maintain the standard of living. Rice had to be imported (mainly from Burma and Thailand) because domestic production supplied on average only 40 percent of total needs. As long as export prices were high (for example during the rubber boom previously mentioned), the volume of imports remained ample. Profits to capital and good smallholder incomes supported an expanding economy. There are no official data for Malaysian national income prior to World War II, but some comparative estimates are given in Table 1 which indicate that Malayan Gross Domestic Product (GDP) per person was easily the leader in the Southeast and East Asian region by the late 1920s.

Table 1
GDP per Capita: Selected Asian Countries, 1900-1990
(in 1985 international dollars)

                      1900     1929     1950      1973     1990
Malaya/Malaysia (a)    600 (b)  1910     1828      3088     5775
Singapore                –        –      2276 (c)  5372    14441
Burma                  523      651      304       446      562
Thailand               594      623      652      1559     3694
Indonesia              617     1009      727      1253     2118
Philippines            735     1106      943      1629     1934
South Korea            568      945      565      1782     6012
Japan                  724     1192     1208      7133    13197

Notes: (a) Malaya to 1973; (b) guesstimate; (c) figure refers to 1960.

Source: van der Eng (1994).

However, the international economy was subject to strong fluctuations. The levels of activity in the industrialized countries, especially the U.S., were the determining factors here. Almost immediately following World War I there was a depression from 1919-22. Strong growth in the mid and late-1920s was followed by the Great Depression (1929-32). As industrial output slumped, primary product prices fell even more heavily. For example, in 1932 rubber sold on the London market for about one one-hundredth of the peak price in 1910 (Fig.1). The effects on export earnings were very severe; in Malaysia’s case between 1929 and 1932 these dropped by 73 percent (Malaya), 60 percent (Sarawak) and 50 percent (North Borneo). The aggregate value of imports fell on average by 60 percent. Estates dismissed labor and since there was no social security, many workers had to return to their country of origin. Smallholder incomes dropped heavily and many who had taken out high-interest secured loans in more prosperous times were unable to service these and faced the loss of their land.

The colonial government attempted to counteract this vulnerability to economic swings by instituting schemes to restore commodity prices to profitable levels. For the rubber industry this involved two periods of mandatory restriction of exports to reduce world stocks and thus exert upward pressure on market prices. The first of these (named the Stevenson scheme after its originator) lasted from 1 October 1922 to 1 November 1928, and the second (the International Rubber Regulation Agreement) from 1 June 1934 to 1941. Tin exports were similarly restricted from 1931 to 1941. While these measures did succeed in raising world prices, the inequitable treatment of Asian as against European producers in both industries has been debated. The protective policy has also been blamed for “freezing” the structure of the Malaysian economy and hindering further development, for instance into manufacturing industry (Lim Teck Ghee, 1977; Drabble, 1991).

Why No Industrialization?

Malaysia had very few secondary industries before World War II. The little that did appear was connected mainly with the processing of the primary exports, rubber and tin, together with limited production of manufactured goods for the domestic market (e.g., bread, biscuits, beverages, cigarettes and various building materials). Much of this activity was Chinese-owned and located in Singapore (Huff, 1994). Among the reasons advanced are: the small size of the domestic market, the relatively high wage levels in Singapore which made products uncompetitive as exports, and a culture dominated by British trading firms which favored commerce over industry. Overshadowing all these was the dominance of primary production. When commodity prices were high, there was little incentive for investors, European or Asian, to move into other sectors. Conversely, when these prices fell capital and credit dried up, while incomes contracted, thus lessening effective demand for manufactures. W.G. Huff (2002) has argued that, prior to World War II, “there was, in fact, never a good time to embark on industrialization in Malaya.”

War Time 1942-45: The Japanese Occupation

During the Japanese occupation years of World War II, the export of primary products was limited to the relatively small amounts required for the Japanese economy. This led to the abandonment of large areas of rubber and the closure of many mines, the latter progressively affected by a shortage of spare parts for machinery. Businesses, especially those Chinese-owned, were taken over and reassigned to Japanese interests. Rice imports fell heavily and thus the population devoted a large part of their efforts to producing enough food to stay alive. Large numbers of laborers (many of whom died) were conscripted to work on military projects such as construction of the Thai-Burma railroad. Overall the war period saw the dislocation of the export economy, widespread destruction of the infrastructure (roads, bridges etc.) and a decline in standards of public health. It also saw a rise in inter-ethnic tensions due to the harsh treatment meted out by the Japanese to some groups, notably the Chinese, compared to a more favorable attitude towards the indigenous peoples among whom (Malays particularly) there was a growing sense of ethnic nationalism (Drabble, 2000).

Postwar Reconstruction and Independence

The returning British colonial rulers had two priorities after 1945: to rebuild the export economy as it had been under the OIDL (see above), and to rationalize the fragmented administrative structure (see General Background). The first was accomplished by the late 1940s, with estates and mines refurbished, production restarted once the labor force had been brought back, and adequate rice imports regained. The second was a complex and delicate political process which resulted in the formation of the Federation of Malaya (1948) from which Singapore, with its predominantly Chinese population (about 75%), was kept separate. In Borneo in 1946 the state of Sarawak, which had been a private kingdom of the English Brooke family (the so-called “White Rajas”) since 1841, and North Borneo, administered by the British North Borneo Company from 1881, were both transferred to direct rule from Britain. However, independence was clearly on the horizon and in Malaya tensions continued with the guerrilla campaign (called the “Emergency”) waged by the Malayan Communist Party (membership largely Chinese) from 1948-60 to force out the British and set up a Malayan Peoples’ Republic. This failed, and in 1957 the Malayan Federation gained independence (Merdeka) under a “bargain” by which the Malays would hold political paramountcy while others, notably the Chinese and Indians, were given citizenship and the freedom to pursue their economic interests. The bargain was institutionalized as the Alliance, later renamed the National Front (Barisan Nasional), which remains the dominant political grouping. In 1963 the Federation of Malaysia was formed, in which the bumiputera population was sufficient in total to offset the high proportion of Chinese arising from the short-lived inclusion of Singapore (Andaya and Andaya, 2001).

Towards the Formation of a National Economy

After the war, two long-term problems came to the forefront. These were (a) the political fragmentation (see above), which had long prevented a centralized approach to economic development, coupled with control from Britain which gave primacy to imperial as opposed to local interests, and (b) excessive dependence on a small range of primary products (notably rubber and tin) which prewar experience had shown to be an unstable basis for the economy.

The first of these was addressed partly through the political rearrangements outlined in the previous section, with the economic aspects buttressed by a report from a mission to Malaya from the International Bank for Reconstruction and Development (IBRD) in 1954. The report argued that Malaya “is now a distinct national economy.” A further mission in 1963 urged “closer economic cooperation between the prospective Malaysia[n] territories” (cited in Drabble, 2000, 161, 176). The rationale for the Federation was that Singapore would serve as the initial center of industrialization, with Malaya, Sabah and Sarawak following at a pace determined by local conditions.

The second problem centered on economic diversification. The IBRD reports just noted advocated building up a range of secondary industries to meet a larger portion of the domestic demand for manufactures, i.e. import-substitution industrialization (ISI). In the interim dependence on primary products would perforce continue.

The Adoption of Planning

In the postwar world the development plan (usually a Five-Year Plan) was widely adopted by Less-Developed Countries (LDCs) to set directions, targets and estimated costs. Each of the Malaysian territories had plans during the 1950s. Malaya was the first to get industrialization of the ISI type under way. The Pioneer Industries Ordinance (1958) offered inducements such as five-year tax holidays, guarantees (to foreign investors) of freedom to repatriate profits and capital, etc. A modest degree of tariff protection was granted. The main types of goods produced were consumer items such as batteries, paints, tires, and pharmaceuticals. Just over half the capital invested came from abroad, with neighboring Singapore in the lead. When Singapore exited the federation in 1965, Malaysia’s fledgling industrialization plans assumed greater significance, although foreign investors complained of stifling bureaucracy retarding their projects.

Primary production, however, was still the major economic activity and here the problem was rejuvenation of the leading industries, rubber in particular. New capital investment in rubber had slowed since the 1920s, and the bulk of the existing trees were nearing the end of their economic life. The best prospect for rejuvenation lay in cutting down the old trees and replanting the land with new varieties capable of raising output per acre/hectare by a factor of three or four. However, the new trees required seven years to mature. Corporately owned estates could replant progressively, but smallholders could not face such a prolonged loss of income without support. To encourage replanting, the government offered grants to owners, financed by a special duty on rubber exports. The process was a lengthy one and it was the 1980s before replanting was substantially complete. Moreover, many estates elected to switch over to a new crop, oil palms (a product used primarily in foodstuffs), which offered quicker returns. Progress was swift and by the 1960s Malaysia was supplying 20 percent of world demand for this commodity.

Another priority at this time consisted of programs to improve the standard of living of the indigenous peoples, most of whom lived in the rural areas. The main instrument was land development, with schemes to open up large areas (say 100,000 acres or 40,000 hectares) which were then subdivided into 10-acre/4-hectare blocks for distribution to small farmers from overcrowded regions who were either short of land or had none at all. Financial assistance (repayable) was provided to cover housing and living costs until the holdings became productive. Rubber and oil palms were the main commercial crops planted. Steps were also taken to increase the domestic production of rice to lessen the historical dependence on imports.

In the primary sector Malaysia’s range of products was increased from the 1960s by a rapid increase in the export of hardwood timber, mostly in the form of (unprocessed) saw-logs. The markets were mainly in East Asia and Australasia. Here the largely untapped resources of Sabah and Sarawak came to the fore, but the rapid rate of exploitation led by the late twentieth century to damaging effects on both the environment (extensive deforestation, soil-loss, silting, changed weather patterns), and the traditional hunter-gatherer way of life of forest-dwellers (decrease in wild-life, fish, etc.). Other development projects such as the building of dams for hydroelectric power also had adverse consequences in all these respects (Amarjit Kaur, 1998; Drabble, 2000; Hong, 1987).

A further major addition to primary exports came from the discovery of large deposits of oil and natural gas in East Malaysia, and off the east coast of the Peninsula, from the 1970s. Gas was exported in liquefied form (LNG), and was also used domestically as a substitute for oil. At peak values in 1982, petroleum and LNG provided around 29 percent of Malaysian export earnings, but this had declined to 18 percent by 1988.

Industrialization and the New Economic Policy 1970-90

The program of industrialization aimed primarily at the domestic market (ISI) lost impetus in the late 1960s as foreign investors, particularly from Britain, switched attention elsewhere. An important factor here was the outbreak of civil disturbances in May 1969, following a federal election in which political parties in the Peninsula (largely non-bumiputera in membership) opposed to the Alliance did unexpectedly well. This brought to a head tensions which had been rising during the 1960s over issues such as the use of the national language, Malay (Bahasa Malaysia), as the main instructional medium in education. There was also discontent among Peninsular Malays that the economic fruits since independence had gone mostly to non-Malays, notably the Chinese. The outcome was severe inter-ethnic rioting centered in the federal capital, Kuala Lumpur, which led to the suspension of parliamentary government for two years and the implementation of the New Economic Policy (NEP).

The main aim of the NEP was a restructuring of the Malaysian economy over two decades, 1970-90 with the following aims:

  1. to redistribute corporate equity so that the bumiputera share would rise from around 2 percent to 30 percent. The share of other Malaysians would increase marginally from 35 to 40 percent, while that of foreigners would fall from 63 percent to 30 percent.
  2. to eliminate the close link between race and economic function (a legacy of the colonial era) and restructure employment so that the bumiputera share in each sector would reflect more accurately their proportion of the total population (roughly 55 percent). In 1970 this group had about two-thirds of jobs in the primary sector, where incomes were generally lowest, but only 30 percent in the secondary sector. In high-income middle-class occupations (e.g. professions, management) the share was only 13 percent.
  3. To eradicate poverty irrespective of race. In 1970 just under half of all households in Peninsular Malaysia had incomes below the official poverty line. Malays accounted for about 75 percent of these.

The principle underlying these aims was that the redistribution would not result in any one group losing in absolute terms. Rather, it would be achieved through the process of economic growth, i.e. the economy would get bigger (more investment, more jobs, etc.). While the primary sector would continue to receive developmental aid under the successive Five Year Plans, the main emphasis was a switch to export-oriented industrialization (EOI), with Malaysia seeking a share in global markets for manufactured goods. Free Trade Zones (FTZs) were set up in places such as Penang, where production was carried on with the undertaking that the output would be exported. Firms locating there received incentives such as duty-free imports of raw materials and capital goods, and tax concessions, aimed primarily at foreign investors who were also attracted by Malaysia’s good facilities, relatively low wages and docile trade unions. A range of industries grew up: textiles, rubber and food products, chemicals, telecommunications equipment, electrical and electronic machinery/appliances, car assembly and some heavy industries, iron and steel. As with ISI, much of the capital and technology was foreign; for example, the Japanese firm Mitsubishi was a partner in a venture to set up a plant to assemble a Malaysian national car, the Proton, from mostly imported components (Drabble, 2000).

Results of the NEP

Table 2 below shows the outcome of the NEP in the categories outlined above.

Table 2
Restructuring under the NEP, 1970-90

                                                   1970            1990

(a) Wealth ownership (%)
    Bumiputera                                      2.0            20.3
    Other Malaysians                               34.6            54.6
    Foreigners                                     63.4            25.1

(b) Employment (% of total workers in each sector)
    Primary sector (agriculture, mineral extraction,
    forest products and fishing)
        Bumiputera                                 67.6 [61.0]*    71.2 [36.7]*
        Others                                     32.4            28.8
    Secondary sector (manufacturing and construction)
        Bumiputera                                 30.8 [14.6]*    48.0 [26.3]*
        Others                                     69.2            52.0
    Tertiary sector (services)
        Bumiputera                                 37.9 [24.4]*    51.0 [36.9]*
        Others                                     62.1            49.0

Note: Figures in [ ]* give the proportion of the ethnic group thus employed. The “others” category has not been disaggregated by race to avoid undue complexity.
Source: Drabble, 2000, Table 10.9.

Section (a) shows that, overall, foreign ownership fell substantially more than planned, while that of “Other Malaysians” rose well above the target. Bumiputera ownership appears to have stopped well short of the 30 percent mark. However, other evidence suggests that in certain sectors, such as agriculture/mining (35.7%) and banking/insurance (49.7%), bumiputera ownership of shares in publicly listed companies had already attained a level well beyond the target. Section (b) indicates that while the bumiputera employment share in primary production increased slightly (due mainly to the land schemes), as a proportion of that ethnic group it declined sharply, while rising markedly in both the secondary and tertiary sectors. In middle-class employment the share rose to 27 percent.

As regards the proportion of households below the poverty line, in broad terms the incidence in Malaysia fell from approximately 49 percent in 1970 to 17 percent in 1990, but with large regional variations between the Peninsula (15%), Sarawak (21%) and Sabah (34%) (Drabble, 2000, Table 13.5). All ethnic groups registered big falls, but on average the non-bumiputera still enjoyed the lowest incidence of poverty. By 2002 the overall level had fallen to only 4 percent.

The restructuring of the Malaysian economy under the NEP is very clear when we look at the changes in composition of the Gross Domestic Product (GDP) in Table 3 below.

Table 3
Structural Change in GDP 1970-90 (% shares)

Year    Primary    Secondary    Tertiary
1970      44.3        18.3         37.4
1990      28.1        30.2         41.7

Source: Malaysian Government, 1991, Table 3-2.

Over these three decades Malaysia accomplished a transition from a primary product-dependent economy to one in which manufacturing industry had emerged as the leading growth sector. Rubber and tin, which accounted for 54.3 percent of Malaysian export value in 1970, declined sharply in relative terms to a mere 4.9 percent in 1990 (Crouch, 1996, 222).

Factors in the structural shift

The post-independence state played a leading role in the transformation. The transition from British rule was smooth. Apart from the disturbances in 1969, the government maintained firm control over the administrative machinery. Malaysia’s Five Year Development plans were a model for the developing world. Foreign capital was accorded a central role, though subject to the requirements of the NEP. At the same time these requirements discouraged domestic investors, the Chinese especially, to some extent (Jesudason, 1989).

Development was helped by major improvements in education and health. Enrolments at the primary school level reached approximately 90 percent by the 1970s, and at the secondary level 59 percent of potential by 1987. Increased female enrolments, up from 39 percent to 58 percent of potential from 1975 to 1991, were a notable feature, as was the participation of women in the workforce, which rose to just over 45 percent of total employment by 1986/7. In the tertiary sector the number of universities increased from one to seven between 1969 and 1990, and numerous technical and vocational colleges opened. Bumiputera enrolments soared as a result of the NEP policy of redistribution (which included ethnic quotas and government scholarships). However, tertiary enrolments totaled only 7 percent of the age group by 1987. There was an “educational-occupation mismatch,” with graduates (bumiputera especially) preferring jobs in government, and consequent shortfalls against strong demand for engineers, research scientists, technicians and the like. Better living conditions (more homes with piped water and more rural clinics, for example) led to substantial falls in infant mortality, improved public health and longer life expectancy, especially in Peninsular Malaysia (Drabble, 2000, 248, 284-6).

The quality of national leadership was a crucial factor. This was particularly so during the NEP. The leading figure here was Dr Mahathir Mohamad, Malaysian Prime Minister from 1981-2003. While supporting the NEP aim through positive discrimination to give bumiputera an economic stake in the country commensurate with their indigenous status and share in the population, he nevertheless emphasized that this should ultimately lead them to a more modern outlook and ability to compete with the other races in the country, the Chinese especially (see Khoo Boo Teik, 1995). There were, however, some paradoxes here. Mahathir was a meritocrat in principle, but in practice this period saw the spread of “money politics” (another expression for patronage) in Malaysia. In common with many other countries Malaysia embarked on a policy of privatization of public assets, notably in transportation (e.g. Malaysian Airlines), utilities (e.g. electricity supply) and communications (e.g. television). This was done not through an open process of competitive tendering but rather by a “nebulous ‘first come, first served’ principle” (Jomo, 1995, 8) which saw ownership pass directly to politically well-connected businessmen, mainly bumiputera, at relatively low valuations.

The New Development Policy

Positive action to promote bumiputera interests did not end with the NEP in 1990; it was followed in 1991 by the New Development Policy (NDP), which emphasized assistance only to “Bumiputera with potential, commitment and good track records” (Malaysian Government, 1991, 17) rather than the previous blanket measures to redistribute wealth and employment. In turn the NDP was part of a longer-term program known as Vision 2020. The aim here is to turn Malaysia into a fully industrialized country and to quadruple per capita income by the year 2020. This will require the country to continue ascending the technological “ladder” from low- to high-tech types of industrial production, with a corresponding increase in the intensity of capital investment and greater retention of value-added (i.e. the value added to raw materials in the production process) by Malaysian producers.

The Malaysian economy continued to boom at historically unprecedented rates of 8-9 percent a year for much of the 1990s (see next section). There was heavy expenditure on infrastructure, for example extensive building in Kuala Lumpur such as the Petronas Twin Towers (at the time the tallest buildings in the world). The volume of manufactured exports, notably electronic goods and electronic components, increased rapidly.

Asian Financial Crisis, 1997-98

The Asian financial crisis originated in heavy international currency speculation leading to major slumps in exchange rates, beginning with the Thai baht in May 1997 and spreading rapidly throughout East and Southeast Asia, severely affecting the banking and finance sectors. The Malaysian ringgit exchange rate fell from RM 2.42 to RM 4.88 to the U.S. dollar by January 1998. There was a heavy outflow of foreign capital. To counter the crisis the International Monetary Fund (IMF) recommended austerity changes to fiscal and monetary policies. Some countries (Thailand, South Korea, and Indonesia) reluctantly adopted these. The Malaysian government refused and implemented independent measures; the ringgit became non-convertible externally and was pegged at RM 3.80 to the U.S. dollar, while foreign capital repatriated before it had remained in the country for at least twelve months was subject to substantial levies. Despite international criticism these actions stabilized the domestic situation quite effectively, restoring net growth (see next section), especially compared to neighboring Indonesia.

Rates of Economic Growth

Malaysia’s economic growth in comparative perspective from 1960-90 is set out in Table 4 below.

Table 4
Asia-Pacific Region: Growth of Real GDP (annual average percent)

1960-69 1971-80 1981-89
Japan 10.9 5.0 4.0
Asian “Tigers”
Hong Kong 10.0 9.5 7.2
South Korea 8.5 8.7 9.3
Singapore 8.9 9.0 6.9
Taiwan 11.6 9.7 8.1
ASEAN-4
Indonesia 3.5 7.9 5.2
Malaysia 6.5 8.0 5.4
Philippines 4.9 6.2 1.7
Thailand 8.3 9.9 7.1

Source: Drabble, 2000, Table 10.2; figures for Japan are for 1960-70, 1971-80, and 1981-90.

The data show that Japan, the dominant Asian economy for much of this period, progressively slowed by the 1990s (see below). The four leading Newly Industrialized Countries (the Asian “Tigers,” as they were called) followed EOI strategies and achieved very high rates of growth. Among the four ASEAN (Association of Southeast Asian Nations, formed 1967) members, again all adopting EOI policies, Thailand stood out, followed closely by Malaysia. Reference to Table 1 above shows that by 1990 Malaysia, while still among the leaders in GDP per head, had slipped relative to the “Tigers.”

These economies, joined by China, continued growth into the 1990s at such high rates (Malaysia averaged around 8 percent a year) that the term “Asian miracle” became a common method of description. The exception was Japan, which encountered major problems with structural change and an over-extended banking system. Post-crisis, the countries of the region have started recovery but at differing rates. The Malaysian economy contracted by nearly 7 percent in 1998, recovered to 8 percent growth in 2000, slipped again to under 1 percent in 2001 and has since stabilized at between 4 and 5 percent growth in 2002-04.

The new Malaysian Prime Minister (since October 2003), Abdullah Ahmad Badawi, plans to shift the emphasis in development to smaller, less-costly infrastructure projects and to break the previous dominance of “money politics.” Foreign direct investment will still be sought but priority will be given to nurturing the domestic manufacturing sector.

Further improvements in education will remain a key factor (Far Eastern Economic Review, Nov. 6, 2003).

Overview

Malaysia owes its successful historical economic record to a number of factors. Geographically it lies close to major world trade routes, bringing early exposure to the international economy. The sparse indigenous population and labor force have been supplemented by immigrants, mainly from neighboring Asian countries, many of whom became permanently domiciled. The economy has always been exceptionally open to external influences such as globalization. Foreign capital has played a major role throughout. Governments, colonial and national, have aimed at managing the structure of the economy while maintaining inter-ethnic stability. Since about 1960 the economy has benefited from extensive restructuring with sustained growth of exports from both the primary and secondary sectors, thus gaining a double impetus.

However, on a less positive assessment, the country has so far exchanged dependence on a limited range of primary products (e.g. tin and rubber) for dependence on an equally limited range of manufactured goods, notably electronics and electronic components (59 percent of exports in 2002). These industries are facing increasing competition from lower-wage countries, especially India and China. Within Malaysia the distribution of secondary industry is unbalanced, currently heavily favoring the Peninsula. Sabah and Sarawak are still heavily dependent on primary products (timber, oil, LNG). There is an urgent need to continue the search for new industries in which Malaysia can enjoy a comparative advantage in world markets, not least because inter-ethnic harmony depends heavily on the continuance of economic prosperity.

Select Bibliography

General Studies

Amarjit Kaur. Economic Change in East Malaysia: Sabah and Sarawak since 1850. London: Macmillan, 1998.

Andaya, L.Y. and Andaya, B.W. A History of Malaysia, second edition. Basingstoke: Palgrave, 2001.

Crouch, Harold. Government and Society in Malaysia. Sydney: Allen and Unwin, 1996.

Drabble, J.H. An Economic History of Malaysia, c.1800-1990: The Transition to Modern Economic Growth. Basingstoke: Macmillan and New York: St. Martin’s Press, 2000.

Furnivall, J.S. Colonial Policy and Practice: A Comparative Study of Burma and Netherlands India. Cambridge, UK: Cambridge University Press, 1948.

Huff, W.G. The Economic Growth of Singapore: Trade and Development in the Twentieth Century. Cambridge: Cambridge University Press, 1994.

Jomo, K.S. Growth and Structural Change in the Malaysian Economy. London: Macmillan, 1990.

Industries/Transport

Alavi, Rokiah. Industrialization in Malaysia: Import Substitution and Infant Industry Performance. London: Routledge, 1996.

Amarjit Kaur. Bridge and Barrier: Transport and Communications in Colonial Malaya 1870-1957. Kuala Lumpur: Oxford University Press, 1985.

Drabble, J.H. Rubber in Malaya 1876-1922: The Genesis of the Industry. Kuala Lumpur: Oxford University Press, 1973.

Drabble, J.H. Malayan Rubber: The Interwar Years. London: Macmillan, 1991.

Huff, W.G. “Boom or Bust Commodities and Industrialization in Pre-World War II Malaya.” Journal of Economic History 62, no. 4 (2002): 1074-1115.

Jackson, J.C. Planters and Speculators: European and Chinese Agricultural Enterprise in Malaya 1786-1921. Kuala Lumpur: University of Malaya Press, 1968.

Lim Teck Ghee. Peasants and Their Agricultural Economy in Colonial Malaya, 1874-1941. Kuala Lumpur: Oxford University Press, 1977.

Wong Lin Ken. The Malayan Tin Industry to 1914. Tucson: University of Arizona Press, 1965.

Yip Yat Hoong. The Development of the Tin Mining Industry of Malaya. Kuala Lumpur: University of Malaya Press, 1969.

New Economic Policy

Jesudason, J.V. Ethnicity and the Economy: The State, Chinese Business and Multinationals in Malaysia. Kuala Lumpur: Oxford University Press, 1989.

Jomo, K.S., editor. Privatizing Malaysia: Rents, Rhetoric, Realities. Boulder, CO: Westview Press, 1995.

Khoo Boo Teik. Paradoxes of Mahathirism: An Intellectual Biography of Mahathir Mohamad. Kuala Lumpur: Oxford University Press, 1995.

Vincent, J.R., R.M. Ali and Associates. Environment and Development in a Resource-Rich Economy: Malaysia under the New Economic Policy. Cambridge, MA: Harvard University Press, 1997.

Ethnic Communities

Chew, Daniel. Chinese Pioneers on the Sarawak Frontier, 1841-1941. Kuala Lumpur: Oxford University Press, 1990.

Gullick, J.M. Malay Society in the Late Nineteenth Century. Kuala Lumpur: Oxford University Press, 1989.

Hong, Evelyne. Natives of Sarawak: Survival in Borneo’s Vanishing Forests. Penang: Institut Masyarakat Malaysia, 1987.

Shamsul, A.B. From British to Bumiputera Rule. Singapore: Institute of Southeast Asian Studies, 1986.

Economic Growth

Far Eastern Economic Review. Hong Kong. An excellent weekly overview of current regional affairs.

Malaysian Government. The Second Outline Perspective Plan, 1991-2000. Kuala Lumpur: Government Printer, 1991.

Van der Eng, Pierre. “Assessing Economic Growth and the Standard of Living in Asia 1870-1990.” Milan, Eleventh International Economic History Congress, 1994.

Citation: Drabble, John. “The Economic History of Malaysia”. EH.Net Encyclopedia, edited by Robert Whaples. July 31, 2004. URL http://eh.net/encyclopedia/economic-history-of-malaysia/

The Law of One Price

Karl Gunnar Persson, University of Copenhagen

Definitions and Explanation of the Law of One Price

The concept “Law of One Price” relates to the impact of market arbitrage and trade on the prices of identical commodities that are exchanged in two or more markets. In an efficient market there must be, in effect, only one price of such commodities regardless of where they are traded. The “law” can also be applied to factor markets, as is briefly noted in the concluding section.

The intellectual history of the concept can be traced back to economists active in France in the 1760s and 1770s, who applied the “law” to markets involved in international trade. Most of the modern literature also tends to discuss the “law” in that context.

However, since transport and transaction costs are positive, the law of one price must be reformulated when applied to spatial trade. Let us first look at a case with two markets trading, say, wheat, but with wheat going in one direction only, from Chicago to Liverpool, as has been the case since the 1850s.

In this case the price difference between Liverpool and Chicago markets of wheat of a particular quality, say, Red Winter no. 2, should be equal to the transport and transaction cost of shipping grain from Chicago to Liverpool. This is to say that the ratio of the Liverpool price to the price in Chicago plus transport and transaction costs should be equal to one. Tariffs are not explicitly discussed in the next paragraphs but can easily be introduced as a specific transaction cost at par with commissions and other trading costs.

If the price differential exceeds the transport and transaction costs, meaning that the price ratio is greater than one, then self-interested and well-informed traders take the opportunity to make a profit by shipping wheat from Chicago to Liverpool. Such arbitrage closes the price gap because it increases supply, and hence decreases price, in Liverpool, while it increases demand, and hence price, in Chicago. To be sure, the operation of the law of one price is based not only on trade flows but on inventory adjustments as well. In the example above, traders in Liverpool might choose to release wheat from warehouses in Liverpool immediately since they anticipate shipments to Liverpool. This inventory release works to depress prices immediately. So the expectation of future shipments will have an immediate impact on price because of inventory adjustments.

If the price differential falls short of the transport and transaction costs, meaning that the price ratio is less than one, then self-interested and well-informed traders take the opportunity to restrict the release of wheat from the warehouses in Liverpool and to reduce the demand for shipments of wheat from Chicago. These reactions trigger an immediate price increase in Liverpool, since supply falls in Liverpool, and a price decrease in Chicago, because demand falls.

Formal Presentation of the Law of One Price

Let $P_L$ and $P_C$ denote the prices in Liverpool and Chicago respectively, and let $P_{Tc}$ denote the transport and transaction costs linked to shipping the commodity from Chicago to Liverpool. All prices are measured in the same currency and units, say, shillings per imperial quarter. What has been explained verbally above can be expressed formally. The law of one price adjusted for transport and transaction costs implies the following equilibrium condition, which henceforward will be referred to as the Fundamental Law of One Price Identity, or FLOPI:

$$\mathrm{FLOPI} \equiv \frac{P_L}{P_C + P_{Tc}} = 1$$

In the case where the two markets both produce the commodity and can trade it in either direction, the law of one price states that the price difference should be smaller than or equal to transport and transaction costs; FLOPI is then smaller than or equal to one. If the price difference is larger than transport and transaction costs, trade will close the gap as suggested above. Occasionally domestic demand and supply conditions in two producing economies can be such that price differences are smaller than transport and transaction costs and there will not be any need for trade. In this particular case the two economies are both self-sufficient in wheat.
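The FLOPI condition and the trading behavior it implies can be illustrated with a short sketch. The Python fragment below is purely illustrative: the function names and the numerical values are invented for this example and are not part of the original exposition.

```python
# Illustrative sketch of the FLOPI condition for one-way Chicago-to-Liverpool trade.
# All prices and costs are in the same currency and units; the numbers are made up.

def flopi(p_liverpool: float, p_chicago: float, cost: float) -> float:
    """Ratio of the Liverpool price to the Chicago price plus transport
    and transaction costs (the Fundamental Law of One Price Identity)."""
    return p_liverpool / (p_chicago + cost)

def arbitrage_signal(p_liverpool: float, p_chicago: float, cost: float) -> str:
    """What well-informed traders would do, following the verbal argument above."""
    ratio = flopi(p_liverpool, p_chicago, cost)
    if ratio > 1:
        return "ship wheat from Chicago and release Liverpool inventories"
    if ratio < 1:
        return "hold Liverpool inventories and cut demand for shipments from Chicago"
    return "equilibrium: no profitable arbitrage"

# Chicago price of 100 with transport and transaction costs of 7:
print(arbitrage_signal(110, 100, 7))  # ratio > 1, so arbitrage closes the gap
print(arbitrage_signal(107, 100, 7))  # ratio = 1, FLOPI holds
print(arbitrage_signal(104, 100, 7))  # ratio < 1, inventory adjustment instead
```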

A case with many markets will necessitate a third elaboration of the concept of the law of one price. Let us look at it in a world of three markets, say Chicago, Liverpool and Copenhagen. Assume furthermore that both Chicago and Copenhagen supply Liverpool with the same commodity, say wheat. If so, the Liverpool-Copenhagen price differential must be equal to the transport and transaction costs between Copenhagen and Liverpool, and the Chicago-Liverpool price differential will be equal to the transport and transaction costs between Chicago and Liverpool. But what about the price difference between Chicago and Copenhagen? It turns out that it will be determined by the difference between transport and transaction costs from Chicago to Liverpool and from Copenhagen to Liverpool. If it costs 7 cents to ship a bushel of grain from Chicago to Liverpool and 5 cents from Copenhagen to Liverpool, the equilibrium price difference between Copenhagen and Chicago will be 2 cents, that is, 7 - 5 = 2. If the price is 100 cents per bushel in Chicago, it will be 107 in Liverpool and 102 in Copenhagen. So although the distance and transport cost between Chicago and Copenhagen are larger than between Chicago and Liverpool, the equilibrium price differential is smaller! This argument can be extended to many markets in the following sense: the price difference between two markets which do not trade with each other will be determined by the minimum difference in transport and transaction costs between these two markets to a market with which they both trade.

The argument in the preceding paragraph has important implications for the relationship between distance and price differences. It is often argued that the difference between prices of a commodity in two markets increases monotonically with distance. But this is true only if the two markets actually trade directly with each other. However, the likelihood that markets cease to trade directly with each other increases as the distance increases, and long-distance markets will therefore typically be only indirectly linked through a third common market. Hence the paradox illustrated above: the equilibrium price difference between Chicago and Copenhagen is smaller, despite the larger geographical distance, than that between Copenhagen and Liverpool or between Chicago and Liverpool. In fact, it is quite easy to imagine two markets two units of distance apart, both exporting to a third market located between them at a distance of one unit from each, and hence enjoying the same price despite the large distance between them.
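The three-market arithmetic above can be sketched in a few lines of code. In the snippet below the function and variable names are hypothetical, while the numbers reproduce the worked example in the text: once one exporter's price and each exporter's shipping cost to the common hub are known, the law of one price pins down every other price, including the 2-cent Chicago-Copenhagen gap.

```python
# Sketch of the three-market case: Chicago and Copenhagen both export to
# Liverpool, so each exporter's price plus its shipping cost equals the
# Liverpool price, and the gap between the two exporters equals the
# difference in their shipping costs. Names and numbers are illustrative.

def equilibrium_prices(anchor: str, anchor_price: float, costs_to_hub: dict, hub: str) -> dict:
    """Law-of-one-price equilibrium prices implied by one exporter's price and
    every exporter's transport and transaction cost to the common hub."""
    hub_price = anchor_price + costs_to_hub[anchor]
    prices = {exporter: hub_price - cost for exporter, cost in costs_to_hub.items()}
    prices[hub] = hub_price
    return prices

costs = {"Chicago": 7, "Copenhagen": 5}  # cents per bushel to Liverpool
prices = equilibrium_prices("Chicago", 100, costs, "Liverpool")
print(prices)                                    # {'Chicago': 100, 'Copenhagen': 102, 'Liverpool': 107}
print(prices["Copenhagen"] - prices["Chicago"])  # 2 = 7 - 5, despite the greater distance
```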

Efficient Markets and the Law of One Price

In what follows we typically discuss the “law” in a context with trade of a particular commodity going in one direction only, that is FLOPI = 1.

In a market with arbitrage and trade, violations of the law of one price must be transitory. However, price differentials often differ from the law of one price equilibrium, that is FLOPI is larger or smaller than 1, so it is convenient to understand the law of one price as an “attractor equilibrium” rather than a permanent state in which prices and the ratio of prices rest. The concept “attractor equilibrium” can be understood with reference to the forces described in the preceding section. That is, there are forces which act to restore FLOPI when it has been subject to a shock.

A perfectly efficient set of markets will allow only very short violations of the law of one price. But this is too strong a condition to be of practical significance. There are always local shocks which take time to be diffused to other markets, and distortions of information will make global shocks affect local markets differently. How long violations can persist depends on the state of information technology, on whether markets operate with inventories and on how competitive markets are. Commodity markets with telegraphic or electronic information transmission, inventories and no barriers to entry for traders can be expected to tolerate only short and transitory violations of the law of one price. News about a price change in one major market will have immediate effects on prices elsewhere due to inventory adjustments.

A convenient econometric way of analyzing the nature of the law of one price as an “attractor equilibrium” is a so-called error correction model. In such a model an equilibrium law of one price is estimated. If markets are not well integrated, one cannot establish or estimate FLOPI. Given the existence of a long-run or equilibrium price relationship between markets, a violation is a so-called “innovation” or shock, which will be corrected for so that the equilibrium price difference is restored. Here is the intuition of the model described below. Assume first that Liverpool and Chicago prices are in a law of one price equilibrium. Then, for example, the price in Chicago is subject to a local shock or “innovation” so that the price in Chicago plus transport and transaction costs now exceeds the price in Liverpool. That happens in period t-1; in the next period, t, the price in Liverpool will increase while the price in Chicago will fall. The price falls in Chicago because demand for shipments falls, and it rises in Liverpool because of a fall in supply when traders in Liverpool stop releasing grain from the warehouses in expectation of higher prices in the future. Eventually the FLOPI = 1 condition will be restored, but at higher prices in both Liverpool and Chicago.

To summarize, the logic behind the error correction model is that prices in Liverpool and Chicago will react if there is a disequilibrium, that is, when the price differential is larger or smaller than transport and transaction costs. In this case the prices will adjust so that the deviation from equilibrium shrinks. The error correction model is usually expressed in differences of log prices. Let $\Delta \ln P_{L,t}$ and $\Delta \ln P_{C,t}$ denote the changes from period t-1 to period t in the logs of the Liverpool and Chicago prices. The error correction model in this version is given by:

$$\Delta \ln P_{L,t} = \alpha_L \ln \left(\frac{P_{L,t-1}}{P_{C,t-1} + P_{Tc}}\right) + \varepsilon_{L,t}, \qquad \Delta \ln P_{C,t} = \alpha_C \ln \left(\frac{P_{L,t-1}}{P_{C,t-1} + P_{Tc}}\right) + \varepsilon_{C,t}$$

where $\varepsilon_{L,t}$ and $\varepsilon_{C,t}$ are statistical error terms, assumed to be normally distributed with mean zero and constant variances. Note that these errors are not the “error” that figures in the term “error correction model.” A better name for the latter would be “shock correction model” or “innovation correction model,” to avoid misunderstanding.

$\alpha_L$ and $\alpha_C$ are so-called adjustment parameters, which indicate the power of FLOPI as an “attractor equilibrium.” The expected sign of $\alpha_L$ is negative, while that of $\alpha_C$ is positive. To see this, imagine a case where the expression in the parenthesis above is larger than one. Then the price in Liverpool should fall and the price in Chicago should increase.

The parameters $\alpha_L$ and $\alpha_C$ indicate the speed at which “innovations” are corrected: the larger the parameters are, in absolute value, for a given magnitude of the “innovation,” the more transitory are the violations of the law of one price, in other words, the faster the equilibrium is restored. The magnitudes of the parameters are an indicator of the efficiency of the markets. The higher they are, the faster the equilibrium law of one price (FLOPI) is restored and the more efficient the markets are. (The absolute values of the sum of the parameters should not exceed one.) The magnitude of “innovations” also tends to fall as markets become more efficient in this sense.

It is convenient to express the parameters in terms of the half life of shocks. The half life of a shock measures the time it takes for an original deviation from the equilibrium law of one price (FLOPI) to be reduced to half its initial size. The half life of shocks has been reduced dramatically in the long-distance trade of bulky commodities like grain, that is, over distances above 1,500 km. From the seventeenth to the late nineteenth centuries, the half life was reduced from up to two years to only two weeks in international wheat markets, as revealed by the increase in the adjustment parameters. The major reason for this dramatic change is the improvement in information transmission.
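The link between adjustment parameters and the half life of a shock can be made concrete with a small simulation. The sketch below uses the error correction equations as reconstructed above (an interpretation of the garbled original, not necessarily the author's exact specification), with invented parameter values and an illustrative 10 percent shock to the Liverpool price; larger adjustment parameters deliver a shorter half life, mirroring the fall from roughly two years to two weeks described in the text.

```python
import math

# Simulate the noise-free error correction dynamics sketched above and count
# how many periods it takes for a deviation from FLOPI = 1 to halve.
# Parameter values, the shock size, and the price level are illustrative only.

def half_life(alpha_L: float, alpha_C: float,
              p_chicago: float = 100.0, cost: float = 7.0,
              shock: float = 0.10, max_periods: int = 10_000) -> int:
    """Periods until the log deviation from FLOPI = 1 falls to half its
    initial size, after a one-off shock to the Liverpool price."""
    p_L = (p_chicago + cost) * (1 + shock)      # Liverpool jumps above equilibrium
    p_C = p_chicago
    initial_dev = math.log(p_L / (p_C + cost))  # log of FLOPI, zero in equilibrium
    dev = initial_dev
    for t in range(1, max_periods + 1):
        p_L *= math.exp(alpha_L * dev)          # Liverpool falls (alpha_L < 0)
        p_C *= math.exp(alpha_C * dev)          # Chicago rises (alpha_C > 0)
        dev = math.log(p_L / (p_C + cost))
        if abs(dev) <= abs(initial_dev) / 2:
            return t
    return max_periods

print(half_life(alpha_L=-0.05, alpha_C=0.05))  # weak adjustment: long half life
print(half_life(alpha_L=-0.30, alpha_C=0.30))  # strong adjustment: short half life
```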

The adjustment parameters can also be illustrated graphically and Figure 1 displays the stylized characteristics of adjustment speed in long-distance wheat trade and indicates a spectacular increase in grain market efficiency, specifically in the nineteenth century.

Read Figure 1 in the following way. At time 0 the two markets are in a law of one price equilibrium (FLOPI), that is, prices in the two markets are exactly equal (set here arbitrarily at 100), and the ratio of prices is one. In this particular graphical example we abstract from transport and transaction costs. Now imagine a shock to the price in one market by 10 percent, to 110. That will be followed by a process of mutual adjustment to the law of one price equilibrium (FLOPI), but at higher prices in both markets compared to the situation before the shock. The new price level will not necessarily be halfway between the initial level and the level attained in the economy which was subject to the shock. Adjustments can be strong in some markets and weak in others. As can be seen in Figure 1, the adjustment is very slow in the case of the Pisa (Italy) to Ruremonde (Netherlands) trade. In fact, a new law of one price equilibrium is not attained within the time period, 24 months, allowed by the figure. This indicates very low, but still significant, adjustment parameters. It is also worth noting the difference in adjustment speed between pre-telegraph Chicago-Liverpool trade in the 1850s and post-telegraph trade in the 1880s.

Figure 1

Adjustment Speed in Markets after a Local Shock in Long-distance Wheat Markets
Cases from 1700-1900.

[Figure 1 - Adjustment Speed in Markets after a Local Shock in Long-distance Wheat Markets]

Note: The data underlying the construction are from Persson (1998) and Ejrnæs and Persson (2006).

It is worth noting that the fast speed of adjustment back to the law of one price recorded for single goods in the nineteenth century contrasts strongly with the sluggish adjustment in price indices (prices for bundles of goods) across economies (Giovanini 1988). However, some of these surprising results may depend on misspecifications of the tests (Taylor 2001).

Law of One Price and Convergence

The relationship between the convergence of prices on identical goods and the law of one price is not as straightforward as often believed. As was highlighted above, the law of one price can exist as an “equilibrium attractor,” despite large price differentials between markets, as long as the price differential reflects transport and transaction costs and these are not prohibitively high. So in principle the adjustment parameters can be high despite large price differentials. For example, the Chicago to Liverpool trade in the nineteenth century was based on highly efficient markets, but transport and transaction costs remained at about 20-25 percent of the Chicago price of wheat. However, historically the convergence in price levels in the nineteenth century was associated with an improvement in market efficiency as revealed by higher adjustment parameters. Convergence seems to be a nineteenth-century phenomenon. Figure 2 below indicates that there is not a long-run convergence in wheat markets. Convergence is here expressed as the UK price relative to the U.S. price. Falling transport costs, falling tariffs and increased market efficiency, which reduced risk premiums for traders, compressed price levels in the nineteenth century. Falling transport costs were particularly important for landlocked producers when they penetrated foreign long-distance markets, as displayed by the dramatic convergence of Chicago to UK price levels. When the U.S. Midwest started to export grain to the UK, the UK price level was 2.5 times the Chicago price. However, the figure exaggerates the true convergence significantly because the prices used do not refer to goods of identical quality. As much as a third of the convergence shown in the graph has to do with the improved quality of Chicago wheat relative to UK wheat, a factor often neglected in the convergence literature.

However, after the convergence forces had been exploited, trade policy was reversed. European farmers had little land relative to farmers in the New World economies, such as Argentina, Canada and the U.S., and the former faced strong competition from imported grain. A protectionist backlash emerged in continental Europe in the 1880s and continued during the Great Depression and after 1960, which contributed to price divergence. The trends discussed above are applicable to agricultural commodities but not necessarily to other commodities, because protectionism is commodity specific. However, it is important to note that long-distance ocean shipping costs have not been subject to a long-run declining trend, despite the widespread belief that this has been the case, and therefore the convergence/divergence outcome is mostly a matter of trade policy.

Figure 2
Price Convergence, United States to United Kingdom, 1800-2000

(UK price relative to Chicago or New York price of wheat)

[Figure 2 - Price Convergence, United States to United Kingdom, 1800-2000]

Source: Federico and Persson (2006).

Note: Kernel regression is a convenient way of smoothing a time series.

The Law of One Price, Trade Restrictions and Barriers to Factor Mobility

Tariffs affect the equilibrium price differential very much like transport and transaction costs, but will tariffs also affect adjustment speed and market efficiency as defined above? The answer to that question depends on the level of tariffs. If tariffs are prohibitively high, then the domestic market will be cut off from the world market and the law of one price as an “equilibrium attractor” will cease to operate.

The law of one price can also, of course, be applied to factor markets, that is, markets for capital and labor. For capital markets the law of one price implies that interest rate or return differentials on identical assets traded in different locations or nations converge to zero or close to zero, that is, the ratio of interest rates should converge to one. If there are significant differences in interest rates between economies, capital will flow into the economy with high yields and contribute to leveling the differentials. It is clear that international capital market restrictions affect interest rate spreads. Periods of open capital markets, such as the Gold Standard period from 1870 to 1914, were periods of small and falling interest rate differentials. But the disintegration of international capital markets and the introduction of capital market controls in the aftermath of the Great Depression in the 1930s saw an increase in interest rate spreads, which remained substantial under the Bretton Woods System (c. 1945 to 1971-73), in which capital mobility was restricted. It was not until the capital market liberalization of the 1980s and 1990s that interest rate differences again reached levels as low as a century earlier. Periods of war, when capital markets cease to function, are also periods when interest rate spreads increase.

The labor market is, however, the market that displays the most persistent violations of the law of one price. We need to be careful, however, in spotting violations, in that we need to compare wages of identically skilled laborers and take differences in costs of living into consideration. Even so, huge real wage differences persist. A major reason is that labor markets in high-income nations are shielded from international migration by a multitude of barriers.

The law of one price does not thrive under restrictions to trade or factor mobility.

References:

Ejrnæs, Mette, and Karl Gunnar Persson. “The Gains from Improved Market Efficiency: Trade before and after the Transatlantic Telegraph,” Working paper, Department of Economics, University of Copenhagen, 2006.

Federico, Giovanni, and Karl Gunnar Persson. “Market Integration and Convergence in the World Wheat Market, 1800-2000.” In New Comparative Economic History: Essays in Honor of Jeffrey G. Williamson, edited by Timothy Hatton, Kevin O’Rourke and Alan Taylor. Cambridge, MA: MIT Press, 2006.

Giovanini, Alberto. “Exchange Rates and Traded Goods Prices.” Journal of International Economics 24 (1988): 45-68.

Persson, Karl Gunnar. Grain Markets in Europe, 1500-1900: Integration and Deregulation. Cambridge: Cambridge University Press, 1998.

Taylor, Alan M. “Potential Pitfalls for the Purchasing Power Parity Puzzle? Sampling and Specification Biases in Mean-Reversion Tests of the Law of One Price,” Econometrica 69, no. 2 (2001): 473-98.

Citation: Persson, Karl. “Law of One Price”. EH.Net Encyclopedia, edited by Robert Whaples. February 10, 2008. URL http://eh.net/encyclopedia/the-law-of-one-price/

Labor Unions in the United States

Gerald Friedman, University of Massachusetts at Amherst

Unions and Collective Action

In capitalist labor markets, which developed in the nineteenth century in the United States and Western Europe, workers exchange their time and effort for wages. But even while laboring under the supervision of others, wage earners have never been slaves, because they have recourse against abuse. They can quit to seek better employment. Or they are free to join with others to take collective action, forming political movements or labor unions. By the end of the nineteenth century, labor unions and labor-oriented political parties had become major forces influencing wages and working conditions. This article explores the nature and development of labor unions in the United States. It reviews the growth and recent decline of the American labor movement and makes comparisons with the experience of foreign labor unions to clarify particular aspects of the history of labor unions in the United States.

Unions and the Free-Rider Problem

Quitting, exit, is straightforward, a simple act for individuals unhappy with their employment. By contrast, collective action, such as forming a labor union, is always difficult because it requires that individuals commit themselves to produce “public goods” enjoyed by all, including those who “free ride” rather than contribute to the group effort. If the union succeeds, free riders receive the same benefits as do activists; but if it fails, the activists suffer while those who remained outside lose nothing. Because individualist logic leads workers to “free ride,” unions cannot grow by appealing to individual self-interest (Hirschman, 1970; 1982; Olson, 1966; Gamson, 1975).

Union Growth Comes in Spurts

Free riding is a problem for all collective movements, including Rotary Clubs, the Red Cross, and the Audubon Society. But unionization is especially difficult because unions must attract members against the opposition of often-hostile employers. Workers who support unions sacrifice money and risk their jobs, even their lives. Success comes only when large numbers simultaneously follow a different rationality. Unions must persuade whole groups to abandon individualism to throw themselves into the collective project. Rarely have unions grown incrementally, gradually adding members. Instead, workers have joined unions en masse in periods of great excitement, attracted by what the French sociologist Emile Durkheim labeled “collective effervescence” or the joy of participating in a common project without regard for individual interest. Growth has come in spurts, short periods of social upheaval punctuated by major demonstrations and strikes when large numbers see their fellow workers publicly demonstrating a shared commitment to the collective project. Union growth, therefore, is concentrated in short periods of dramatic social upheaval; in the thirteen countries listed in Tables 1 and 2, 67 percent of growth comes in only five years, and over 90 percent in only ten years. As Table 3 shows, in these thirteen countries, unions grew by over 10 percent a year in years with the greatest strike activity but by less than 1 percent a year in the years with the fewest strikers (Friedman, 1999; Shorter and Tilly, 1974; Zolberg, 1972).

Table 1
Union Members per 100 Nonagricultural Workers, 1880-1985: Selected Countries

Year Canada US Austria Denmark France Italy Germany Netherlands Norway Sweden UK Australia Japan
1880 n.a. 1.8 n.a. n.a. n.a. n.a. n.a. n.a. n.a. n.a. n.a. n.a. n.a.
1900 4.6 7.5 n.a. 20.8 5.0 n.a. 7.0 n.a. 3.4 4.8 12.7 n.a. n.a.
1914 8.6 10.5 n.a. 25.1 8.1 n.a. 16.9 17.0 13.6 9.9 23.0 32.8 n.a.
1928 11.6 9.9 41.7 39.7 8.0 n.a. 32.5 26.0 17.4 32.0 25.6 46.2 n.a.
1939 10.9 20.7 n.a. 51.8 22.4 n.a. n.a. 32.5 57.0 53.6 31.6 39.2 n.a.
1947 24.6 31.4 64.6 55.9 40.0 n.a. 29.1 40.4 55.1 64.6 44.5 52.9 45.3
1950 26.3 28.4 62.3 58.1 30.2 49.0 33.1 43.0 58.4 67.7 44.1 56.0 46.2
1960 28.3 30.4 63.4 64.4 20.0 29.6 37.1 41.8 61.5 73.0 44.2 54.5 32.2
1975 35.6 26.4 58.5 66.6 21.4 50.1 38.2 39.1 60.5 87.2 51.0 54.7 34.4
1985 33.7 18.9 57.8 82.2 14.5 51.0 39.3 28.6 65.3 103.0 44.2 51.5 28.9

Note: This table shows the unionization rate, the share of nonagricultural workers belonging to unions, in different countries in different years, 1880-1985. Because union membership often includes unemployed and retired union members it may exceed the number of employed workers, giving a unionization rate of greater than 100 percent.

Table 2
Union Growth in Peak and Other Years

Country  Years  Membership Growth (Top 5 Years / Top 10 Years / All Years)  Share of Growth % (5 Years / 10 Years)  Excess Growth % (5 Years / 10 Years)
Australia  83  720,000  1,230,000  3,125,000  23.0  39.4  17.0  27.3
Austria  52  5,411,000  6,545,000  3,074,000  176.0  212.9  166.8  194.4
Canada  108  855,000  1,532,000  4,028,000  21.2  38.0  16.6  28.8
Denmark  85  521,000  795,000  1,883,000  27.7  42.2  21.8  30.5
France  92  6,605,000  7,557,000  2,872,000  230.0  263.1  224.5  252.3
Germany  82  10,849,000  13,543,000  9,120,000  119.0  148.5  112.9  136.3
Italy  38  3,028,000  4,671,000  3,713,000  81.6  125.8  68.4  99.5
Japan  43  4,757,000  6,692,000  8,983,000  53.0  74.5  41.3  51.2
Netherlands  71  671,000  1,009,000  1,158,000  57.9  87.1  50.9  73.0
Norway  85  304,000  525,000  1,177,000  25.8  44.6  19.9  32.8
Sweden  99  633,000  1,036,000  3,859,000  16.4  26.8  11.4  16.7
UK  96  4,929,000  8,011,000  8,662,000  56.9  92.5  51.7  82.1
US  109  10,247,000  14,796,000  22,293,000  46.0  66.4  41.4  57.2
Total  1043  49,530,000  67,942,000  73,947,000  67.0  91.9  60.7  79.4

Note: This table shows that most union growth comes in a few years. Union membership growth (net of membership losses) has been calculated for each country for each year. Years were then sorted for each country according to membership growth. This table reports growth for each country for the five and the ten years with the fastest growth and compares this with total growth over all years for which data are available. Excess growth has been calculated as the difference between the share of growth in the top five or ten years and the share that would have come in these periods if growth had been distributed evenly across all years.

Note that years of rapid growth are not necessarily contiguous. There can be more growth in years of rapid growth than over the entire period. This is because some growth is temporary, when years of rapid growth are followed by years of decline.

Sources: Bain and Price (1980), 39; Visser (1989).

Table 3
Impact of Strike Activity on Union Growth
Average Union Membership Growth in Years Sorted by Proportion of Workers Striking

Country  Striker Rate Quartile: Lowest / Third / Second / Highest  Change (Highest minus Lowest)
Australia 5.1 2.5 4.5 2.7 -2.4
Austria 0.5 -1.9 0.4 2.4 1.9
Canada 1.3 1.9 2.3 15.8 14.5
Denmark 0.3 1.1 3.0 11.3 11.0
France 0.0 2.1 5.6 17.0 17.0
Germany -0.2 0.4 1.3 20.3 20.5
Italy -2.2 -0.3 2.3 5.8 8.0
Japan -0.2 5.1 3.0 4.3 4.5
Netherlands -0.9 1.2 3.5 6.3 7.2
Norway 1.9 4.3 8.6 10.3 8.4
Sweden 2.5 3.2 5.9 16.9 14.4
UK 1.7 1.7 1.9 3.4 1.7
US -0.5 0.6 2.1 19.9 20.4
Total: Average 0.72 1.68 3.42 10.49 9.78

Note: This table shows that, except in Australia, unions grew fastest in years with large numbers of strikers. The proportion of workers striking was calculated for each country for each year as the number of strikers divided by the nonagricultural labor force. Years were then sorted into quartiles, each including one-fourth of the years, according to this striker rate statistic. The average annual union membership growth rate was then calculated for each quartile as the mean of the growth rate in each year in the quartile.

Rapid Union Growth Provokes a Hostile Reaction

These periods of rapid union growth end because social upheaval provokes a hostile reaction. Union growth leads employers to organize, to discover their own collective interests. Emulating their workers, they join together to discharge union activists, to support each other in strikes, and to demand government action against unions. This rising opposition ends periods of rapid union growth, beginning a new phase of decline followed by longer periods of stagnant membership. The weakest unions formed during the union surge succumb to the post-boom reaction; but if enough unions survive they leave a movement larger and broader than before.

Early Labor Unions, Democrats and Socialists

Guilds

Before modern labor unions, guilds united artisans and their employees. Craftsmen did the work of early industry, “masters” working beside “journeymen” and apprentices in small workplaces. Throughout the cities and towns of medieval Europe, guilds regulated production by setting minimum prices and quality, and capping wages, employment, and output. Controlled by independent craftsmen, “masters” who employed journeymen and trained apprentices, guilds regulated industry to protect the comfort and status of the masters. Apprentices and journeymen benefited from guild restrictions only when they advanced to master status.

Guild power was gradually undermined in the early-modern period. Employing workers outside the guild system, including rural workers and semiskilled workers in large urban workplaces, merchants transformed medieval industry. By the early 1800s, few could anticipate moving up to becoming a master artisan or owning their own establishment. Instead, facing the prospect of a lifetime of wage labor punctuated by periods of unemployment, some wage earners began to seek a collective regulation of their individual employment (Thompson, 1966; Scott, 1974; Dawley, 1976; Sewell, 1980; Wilentz, 1984; Blewett, 1988).

The labor movement within the broader movement for democracy

This new wage-labor regime led to the modern labor movement. Organizing propertyless workers who were laboring for capitalists, organized labor formed one wing of a broader democratic movement struggling for equality and for the rights of commoners (Friedman, 1998). Within the broader democratic movement for legal and political equality, labor fought the rise of a new aristocracy that controlled the machinery of modern industry just as the old aristocracy had monopolized land. Seen in this light, the fundamental idea of the labor movement, that employees should have a voice in the management of industry, is comparable to the demand that citizens should have a voice in the management of public affairs. Democratic values do not, by any means, guarantee that unions will be fair and evenhanded to all workers. In the United States, by reserving good jobs for their members, unions of white men sometimes contributed to the exploitation of women and nonwhites. Democracy only means that exploitation will be carried out at the behest of a political majority rather than at the say-so of an individual capitalist (Roediger, 1991; Arnesen, 2001; Foner, 1974; 1979; Milkman, 1985).

Craft unions’ strategy

Workers formed unions to voice their interests against their employers, and also against other workers. Rejecting broad alliances along class lines, alliances uniting workers on the basis of their lack of property and their common relationship with capitalists, craft unions followed a narrow strategy, uniting workers with the same skill against both the capitalists and workers in different trades. By using their monopoly of knowledge of the work process to restrict access to the trade, craft unions could build a strong bargaining position, one enhanced by alliances with other craftsmen to finance long strikes. A narrow craft strategy was followed by the first successful unions throughout Europe and America, especially in small urban shops using technologies that still depended on traditional specialized skills, including printers, furniture makers, carpenters, gold beaters and jewelry makers, iron molders, engineers, machinists, and plumbers. Craft unions’ characteristic action was the small, local strike, the concerted withdrawal of labor by a few workers critical to production. Typically, craft unions would present a set of demands to local employers on a “take-it-or-leave-it” basis; either the employer accepted the demands or fought a contest of strength to determine whether the employer could do without the skilled workers for longer than the workers could manage without their jobs.

The craft strategy offered little to the great masses of workers. Because it depended on restricting access to trades, it could not be applied by common laborers, who were untrained, nor by semi-skilled employees in modern mass-production establishments whose employers trained them on the job. Shunned by craft unions, most women and African-Americans in the United States were crowded into nonunion occupations. Some sought employment as strikebreakers in occupations otherwise monopolized by craft unions controlled by white, native-born males (Washington, 1913; Whatley, 1993).

Unions among unskilled workers

To form unions, the unskilled needed a strategy of the weak that would utilize their numbers rather than specialized knowledge and accumulated savings. Inclusive unions have succeeded but only when they attract allies among politicians, state officials, and the affluent public. Sponsoring unions and protecting them from employer repression, allies can allow organization among workers without specialized skills. When successful, inclusive unions can grow quickly in mass mobilization of common laborers. This happened, for example, in Germany at the beginning of the Weimar Republic, during the French Popular Front of 1936-37, and in the United States during the New Deal of the 1930s. These were times when state support rewarded inclusive unions for organizing the unskilled. The bill for mass mobilization usually came later. Each boom was followed by a reaction against the extensive promises of the inclusive labor movement when employers and conservative politicians worked to put labor’s genie back in the bottle.

Solidarity and the Trade Unions

Unionized occupations of the late 1800s

By the late nineteenth century, trade unions had gained a powerful position in several skilled occupations in the United States and elsewhere. Outside of mining, craft unions were formed among well-paid skilled craft workers, workers whom historian Eric Hobsbawm labeled the “labor aristocracy” (Hobsbawm, 1964; Geary, 1981). In 1892, for example, nearly two-thirds of British coal miners were union members, as were a third of machinists, millwrights and metal workers, cobblers and shoe makers, glass workers, printers, mule spinners, and construction workers (Bain and Price, 1980). French miners had formed relatively strong unions, as had skilled workers in the railroad operating crafts, printers, jewelry makers, cigar makers, and furniture workers (Friedman, 1998). Cigar makers, printers, furniture workers, and some construction and metal craftsmen took the lead in early German unions (Kocka, 1986). In the United States, there were about 160,000 union members in 1880, including 120,000 belonging to craft unions of carpenters, engineers, furniture makers, stone-cutters, iron puddlers and rollers, printers, and several railroad crafts. Another 40,000 belonged to “industrial” unions organized without regard for trade. About half of these were coal miners; most of the rest belonged to the Knights of Labor (KOL) (Friedman, 1999).

The Knights of Labor

In Europe, these craft organizations were to be the basis of larger, mass unions uniting workers without regard for trade or, in some cases, industry (Ansell, 2001). This process began in the United States in the 1880s when craft workers in the Knights of Labor reached out to organize more broadly. Formed by skilled male, native-born garment cutters in 1869, the Knights of Labor would seem an odd candidate to mobilize the mass of unskilled workers. But from a few Philadelphia craft workers, the Knights grew to become a national and even international movement. Membership reached 20,000 in 1881 and grew to 100,000 in 1885. Then, in 1886, when successful strikes on some western railroads attracted a mass of previously unorganized unskilled workers, the KOL grew to a peak membership of a million workers. For a brief time, the Knights of Labor was a general movement of the American working class (Ware, 1929; Voss, 1993).

The KOL became a mass movement with an ideology and program that united workers without regard for occupation, industry, race or gender (Hattam, 1993). Never espousing Marxist or socialist doctrines, the Knights advanced an indigenous form of popular American radicalism, a “republicanism” that would overcome social problems by extending democracy to the workplace. Valuing citizens according to their work, their productive labor, the Knights were true heirs of earlier bourgeois radicals. Open to all producers, including farmers and other employers, they excluded only those seen to be parasitic on the labor of producers — liquor dealers, gamblers, bankers, stock manipulators and lawyers. Welcoming all others without regard for race, gender, or skill, the KOL was the first American labor union to attract significant numbers of women, African-Americans, and the unskilled (Foner, 1974; 1979; Rachleff, 1984).

The KOL’s strategy

In practice, most KOL local assemblies acted like craft unions. They bargained with employers, conducted boycotts, and called members out on strike to demand higher wages and better working conditions. But unlike craft unions that depended on the bargaining leverage of a few strategically positioned workers, the KOL’s tactics reflected its inclusive and democratic vision. Without a craft union’s resources or control over labor supply, the Knights sought to win labor disputes by widening them to involve political authorities and the outside public able to pressure employers to make concessions. Activists hoped that politicizing strikes would favor the KOL because its large membership would tempt ambitious politicians while its members’ poverty drew public sympathy.

In Europe, a strategy like that of the KOL succeeded in promoting the organization of inclusive unions. But it failed in the United States. Comparing the strike strategies of trade unions and the Knights provides insight into the survival and eventual success of the trade unions and their confederation, the American Federation of Labor (AFL), in late-nineteenth-century America. Seeking to transform industrial relations, local assemblies of the KOL struck frequently with large but short strikes involving skilled and unskilled workers. The Knights’ industrial leverage depended on political and social influence. It could succeed where trade unions would not go because the KOL strategy utilized numbers, the one advantage held by common laborers. But this strategy could succeed only where political authorities and the outside public might sympathize with labor. Later industrial and regional unions tried the same strategy, conducting short but large strikes. By demonstrating sufficient numbers and commitment, French and Italian unions, for example, would win from state officials concessions they could not force from recalcitrant employers (Shorter and Tilly, 1974; Friedman, 1998). But compared with the small strikes conducted by craft unions, “solidarity” strikes must walk a fine line, aggressive enough to draw attention but not so threatening as to provoke a hostile reaction from threatened authorities. Such a reaction doomed the KOL.

The Knights’ collapse in 1886

In 1886, the Knights became embroiled in a national general strike demanding an eight-hour workday, the world’s first May Day. This led directly to the collapse of the KOL. The May Day strike wave in 1886 and the bombing at Haymarket Square in Chicago provoked a “red scare” of historic proportions driving membership down to half a million in September 1887. Police in Chicago, for example, broke up union meetings, seized union records, and even banned the color red from advertisements. The KOL responded politically, sponsoring a wave of independent labor parties in the elections of 1886 and supporting the Populist Party in 1890 (Fink, 1983). But even relatively strong showings by these independent political movements could not halt the KOL’s decline. By 1890, its membership had fallen by half again, and it fell to under 50,000 members by 1897.

Unions and radical political movements in Europe in the late 1800s

The KOL spread outside the United States, attracting an energetic following in Canada, the United Kingdom, France, and other European countries. Industrial and regional unionism fared better in these countries than in the United States. Most German unionists belonged to industrial unions allied with the Social Democratic Party. Under Marxist leadership, unions and political party formed a centralized labor movement to maximize labor’s political leverage. English union membership was divided between members of a stable core of craft unions and a growing membership in industrial and regional unions based in mining, cotton textiles, and transportation. Allied with political radicals, these industrial and regional unions formed the backbone of the Labor Party, which held the balance of power in British politics after 1906.

The most radical unions were found in France. By the early 1890s, revolutionary syndicalists controlled the national union center, the Confédération générale du travail (or CGT), which they tried to use as a base for a revolutionary general strike where the workers would seize economic and political power. Consolidating craft unions into industrial and regional unions, the Bourses du travail, syndicalists conducted large strikes designed to demonstrate labor’s solidarity. Paradoxically, the syndicalists’ large strikes were effective because they provoked friendly government mediation. In the United States, state intervention was fatal for labor because government and employers usually united to crush labor radicalism. But in France, officials were more concerned to maintain a center-left coalition with organized labor against reactionary employers opposed to the Third Republic. State intervention helped French unionists to win concessions beyond any they could win with economic leverage. A radical strategy of inclusive industrial and regional unionism could succeed in France because the political leadership of the early Third Republic needed labor’s support against powerful economic and social groups who would replace the Republic with an authoritarian regime. Reminded daily of the importance of republican values and the coalition that sustained the Republic, French state officials promoted collective bargaining and labor unions. Ironically, it was the support of liberal state officials that allowed French union radicalism to succeed, and allowed French unions to grow faster than American unions and to organize the semi-skilled workers in the large establishments of France’s modern industries (Friedman, 1997; 1998).

The AFL and American Exceptionalism

By 1914, unions outside the United States had found that broad organization reduced the availability of strikebreakers, advanced labor’s political goals, and could lead to state intervention on behalf of the unions. The United States was becoming exceptional, the only advanced capitalist country without a strong, united labor movement. The collapse of the Knights of Labor cleared the way for the AFL. Formed in 1881 as the Federation of Organized Trades and Labor Unions, the AFL was organized to uphold the narrow interests of craft workers against the general interests of common laborers in the KOL. In practice, AFL craft unions were little labor monopolies, able to win concessions because of their control over uncommon skills and because their narrow strategy did not frighten state officials. Many early AFL leaders, notably the AFL’s founding president Samuel Gompers and P. J. McGuire of the Carpenters, had been active in radical political movements. But after 1886, they learned to reject political involvements for fear that radicalism might antagonize state officials or employers and provoke repression.

AFL successes in the early twentieth century

Entering the twentieth century, the AFL appeared to have a winning strategy. Union membership rose sharply in the late 1890s, doubling between 1896 and 1900 and again between 1900 and 1904. Fewer than 5 percent of nonagricultural wage earners belonged to labor unions in 1895, but this share rose to 7 percent in 1900 and 13 percent in 1904, including over 21 percent of industrial wage earners (workers outside of commerce, government, and the professions). Half of coal miners in 1904 belonged to an industrial union (the United Mine Workers of America), but otherwise most union members belonged to craft organizations, including nearly half the printers and a third of cigar makers, construction workers, and transportation workers. As shown in Table 4, other pockets of union strength included skilled workers in the metal trades, leather, and apparel. These craft unions had demonstrated their economic power, raising wages by around 15 percent and reducing hours worked (Friedman, 1991; Mullin, 1993).

Table 4
Unionization rates by industry in the United States, 1880-2000

Industry                                    1880   1910   1930   1953   1974   1983   2000
Agriculture, Forestry, Fishing               0.0    0.1    0.4    0.6    4.0    4.8    2.1
Mining                                      11.2   37.7   19.8   64.7   34.7   21.1   10.9
Construction                                 2.8   25.2   29.8   83.8   38.0   28.0   18.3
Manufacturing                                3.4   10.3    7.3   42.4   37.2   27.9   14.8
Transportation, Communication, Utilities     3.7   20.0   18.3   82.5   49.8   46.4   24.0
Private Services                             0.1    3.3    1.8    9.5    8.6    8.7    4.8
Public Employment                            0.3    4.0    9.6   11.3   38.0   31.1   37.5
All Private                                  1.7    8.7    7.0   31.9   22.4   18.4   10.9
All                                          1.7    8.5    7.1   29.6   24.8   20.4   14.1

Note: This table shows the unionization rate, the share of workers belonging to unions, in different industries in the United States, 1880-2000.

Sources: 1880 and 1910: Friedman (1999): 83; 1930: union membership from Wolman (1936) and employment from United States, Bureau of the Census (1932); 1953: Troy (1957); 1974, 1983, and 2000: United States, Current Population Survey.

Limits to the craft strategy

Even at this peak, the craft strategy had clear limits. Craft unions succeeded only in a declining part of American industry, among workers still performing traditional tasks where training was through apprenticeship programs controlled by the workers themselves. By contrast, there were few unions in the rapidly growing industries employing semi-skilled workers. Nor was the AFL able to overcome racial divisions and state opposition to organize in the South (Friedman, 2000; Letwin, 1998). Compared with the KOL in the early 1880s, or with France’s revolutionary syndicalist unions, American unions were weak in steel, textiles, chemicals, paper, and metal fabrication, industries whose technologies required no traditional craft skills. AFL strongholds, including construction, printing, cigar rolling, apparel cutting and pressing, and custom metal engineering, employed craft workers in relatively small establishments little changed from 25 years earlier (see Table 4).

Dependent on skilled craftsmen’s economic leverage, the AFL was poorly organized to battle large, technologically dynamic corporations. For a brief time, the revolutionary Industrial Workers of the World (IWW), formed in 1905, organized semi-skilled workers in some mass production industries. But by 1914, it too had failed. It was state support that forced powerful French employers to accept unions. Without such assistance, no union strategy could force large American employers to accept unions.

Unions in the World War I Era

The AFL and World War I

For all its limits, it must be acknowledged that the AFL and its craft affiliates survived while their rivals flared and died. The AFL formed a solid union movement among skilled craftsmen that, under favorable circumstances, could have formed the core of a broader union movement like those that developed in Europe after 1900. During World War I, the Wilson administration endorsed unionization and collective bargaining in exchange for union support for the war effort. AFL affiliates used state support to organize mass-production workers in shipbuilding, metal fabrication, meatpacking, and steel, doubling union membership between 1915 and 1919. But when Federal support was withdrawn after the war, employers mobilized to crush the nascent unions. The post-war union collapse has been attributed to the AFL’s failings. The larger truth is that American unions needed state support to overcome the entrenched power of capital. The AFL did not fail because of a deficient economic strategy; it failed because it had an ineffective political strategy (Friedman, 1998; Frank, 1994; Montgomery, 1987).

International effects of World War I

War gave labor extraordinary opportunities. Combatant governments rewarded pro-war labor leaders with positions in the expanded state bureaucracy and support for collective bargaining and unions. Union growth also reflected economic conditions when wartime labor shortages strengthened the bargaining position of workers and unions. Unions grew rapidly during and immediately after the war. British unions, for example, doubled their membership between 1914 and 1920, to enroll eight million workers, almost half the nonagricultural labor force (Bain and Price, 1980; Visser, 1989). Union membership tripled in Germany and Sweden, doubled in Canada, Denmark, the Netherlands, and Norway, and almost doubled in the United States (see Table 5 and Table 1). For twelve countries, membership grew by 121 percent between 1913 and 1920, including 119 percent growth in seven combatant countries and 160 percent growth in five neutral states.

Table 5
Impact of World War I on Union Membership Growth
Membership Growth in Wartime and After

                        12 Countries   7 Combatants   5 Neutrals
War-time       1913       12,498,000     11,742,000      756,000
               1920       27,649,000     25,687,000    1,962,000
Growth 1913-20                  121%           119%         160%
Post-war       1920       27,649,000
               1929       18,149,000
Growth 1920-29                  -34%
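
The growth rates in Table 5 follow directly from the membership totals shown above; the short Python sketch below, using only the table’s own figures, reproduces the wartime calculation.

```python
# Recompute the 1913-20 growth rates in Table 5 from its membership totals.
members_1913 = {"12 Countries": 12_498_000, "7 Combatants": 11_742_000, "5 Neutrals": 756_000}
members_1920 = {"12 Countries": 27_649_000, "7 Combatants": 25_687_000, "5 Neutrals": 1_962_000}

for group in members_1913:
    growth = 100 * (members_1920[group] - members_1913[group]) / members_1913[group]
    print(f"{group}: {growth:.0f}% growth, 1913-20")
# Prints roughly 121%, 119%, and 160%, matching the table.
```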

Shift toward the revolutionary left

Even before the war, frustration with the slow pace of social reform had led to a shift towards the revolutionary socialist and syndicalist left in Germany, the United Kingdom, and the United States (Nolan, 1981; Montgomery, 1987). In Europe, frustrations with rising prices, declining real wages and working conditions, and anger at catastrophic war losses fanned the flames of discontent into a raging conflagration. Compared with pre-war levels, the number of strikers rose ten or even twenty times after the war: France counted 2.5 million strikers in 1919 and 1920, compared with 200,000 in 1913; Germany 13 million, up from 300,000 in 1913; and the United States 5 million, up from under 1 million in 1913. British Prime Minister Lloyd George warned in March 1919 that “The whole of Europe is filled with the spirit of revolution. There is a deep sense not only of discontent, but of anger and revolt among the workmen . . . The whole existing order in its political, social and economic aspects is questioned by the masses of the population from one end of Europe to the other” (quoted in Cronin, 1983: 22).

Impact of Communists

Inspired by the success of the Bolshevik revolution in Russia, revolutionary Communist Parties were organized throughout the world to promote revolution by organizing labor unions, strikes, and political protest. Communism was a mixed blessing for labor. The Communists included some of labor’s most dedicated activists and organizers who contributed greatly to union organization. But Communist help came at a high price. Secretive, domineering, intolerant of opposition, the Communists divided unions between their dwindling allies and a growing collection of outraged opponents. Moreover, they galvanized opposition, depriving labor of needed allies among state officials and the liberal bourgeoisie.

The “Lean Years”: Welfare Capitalism and the Open Shop

Aftermath of World War I

As with most great surges in union membership, the postwar boom was self-limiting. Helped by a sharp post-war economic contraction, employers and state officials ruthlessly drove back the radical threat, purging their workforces of known union activists and easily absorbing futile strikes during a period of rising unemployment. Such campaigns drove membership down by a third, from a 1920 peak of 26 million members in eleven countries to fewer than 18 million in 1924. In Austria, France, Germany, and the United States, labor unrest contributed to the election of conservative governments; in Hungary, Italy, and Poland it led to the installation of anti-democratic dictatorships that ruthlessly crushed labor unions. Economic stagnation, state repression, and anti-union campaigns by employers prevented any union resurgence through the rest of the 1920s. By 1929, unions in these eleven countries had added only 30,000 members, one-fifth of one percent.

Injunctions and welfare capitalism

The 1920s was an especially dark period for organized labor in the United States, where weaknesses visible before World War I became critical failures. Labor’s opponents used fear of Communism to foment a post-war red scare that targeted union activists for police and vigilante violence. Hundreds of foreign-born activists were deported, and mobs led by the American Legion and the Ku Klux Klan broke up union meetings and destroyed union offices (see, for example, Frank, 1994: 104-5). Judges added law to the campaign against unions. Ignoring the intent of the Clayton Anti-Trust Act (1914), they used anti-trust law and injunctions against unions, forbidding activists from picketing or publicizing disputes, holding signs, or even enrolling new union members. Employers competed for their workers’ allegiance, offering paternalist welfare programs and systems of employee representation as substitutes for independent unions. They sought to build a nonunion industrial relations system around welfare capitalism (Cohen, 1990).

Stagnation and decline

After the promises of the war years, the defeat of postwar union drives in mass production industries like steel and meatpacking inaugurated a decade of union stagnation and decline. Membership fell by a third between 1920 and 1924. Unions survived only in the older trades where employment was usually declining. By 1924, they were almost completely eliminated from the dynamic industries of the second industrial revolution, including steel, automobiles, consumer electronics, chemicals, and rubber manufacture.

New Deals for Labor

Great Depression

The nonunion industrial relations system of the 1920s might have endured and produced a docile working class organized in company unions (Brody, 1985). But the welfare capitalism of the 1920s collapsed when the Great Depression of the 1930s exposed its weaknesses and undermined political support for the nonunion open shop. Between 1929 and 1933, real national income in the United States fell by one third, nonagricultural employment fell by a quarter, and unemployment rose from under 2 million in 1929 to 13 million in 1933, a quarter of the civilian labor force. Economic decline was nearly as great elsewhere, raising unemployment to over 15 percent in Austria, Canada, Germany, and the United Kingdom (Maddison, 1991: 260-61). Only the Soviet Union, with its authoritarian political economy, was largely spared the scourge of unemployment and economic collapse, a point emphasized by Communists throughout the 1930s and later. Depression discredited the nonunion industrial relations system by forcing welfare capitalists to renege on promises to stabilize employment and to maintain wages. Then, by ignoring protests from members of employee representation plans, welfare capitalists further exposed the fundamental weakness of their system. Lacking any independent support, paternalist promises had no standing but depended entirely on the variable good will of employers. And sometimes that was not enough (Cohen, 1990).

Depression-era political shifts

Voters, too, lost confidence in employers. The Great Depression discredited the old political economy. Even before Franklin Roosevelt’s election as President of the United States in 1932, American states enacted legislation restricting the rights of creditors and landlords, restraining the use of the injunction in labor disputes, and providing expanded relief for the unemployed (Ely, 1998; Friedman, 2001). European voters abandoned centrist parties, embracing extremists of both left and right, Communists and Fascists. In Germany, the Nazis won, but Popular Front governments uniting Communists and socialists with bourgeois liberals assumed power in other countries, including Sweden, France and Spain. (The Spanish Popular Front was overthrown by a Fascist rebellion that installed a dictatorship led by Francisco Franco.) Throughout there was an impulse to take public control over the economy because free market capitalism and orthodox finance had led to disaster (Temin, 1990).

Economic depression lowers union membership as unemployed workers drop their membership and employers use their stronger bargaining position to defeat union drives (Bain and Elsheikh, 1976). Indeed, union membership fell with the onset of the Great Depression but, contradicting the usual pattern, membership rebounded sharply after 1932 despite high unemployment, rising by over 76 percent in ten countries by 1938 (see Table 6 and Table 1). The fastest growth came in countries with openly pro-union governments. In France, where the Socialist Léon Blum led a Popular Front government, and in the United States, during Franklin Roosevelt’s New Deal, membership rose by 160 percent between 1933 and 1938. But membership grew by 33 percent in eight other countries even without openly pro-labor governments.

Table 6
Impact of the Great Depression and World War II on Union Membership Growth

                              11 Countries (no Germany)   10 Countries (no Austria)
Depression              1929                  12,401,000                  11,508,000
                         1933                  11,455,000                  10,802,000
Growth 1929-33                                      -7.6%                       -6.1%
Popular Front Period     1933                                              10,802,000
                         1938                                              19,007,000
Growth 1933-38                                                                  76.0%
Second World War         1938                                              19,007,000
                         1947                                              35,485,000
Growth 1938-47                                                                  86.7%

French unions and the Matignon agreements

French union membership rose from under 900,000 in 1935 to over 4,500,000 in 1937. The Popular Front’s victory in the elections of May 1936 precipitated a massive strike wave and the occupation of factories and workplaces throughout France. Remembered in movie, song and legend, the factory occupations were a nearly spontaneous uprising of French workers that brought France’s economy to a halt. Contemporaries were struck by the extraordinarily cheerful feelings that prevailed, the “holiday feeling” and sense that the strikes were a new sort of non-violent revolution that would overturn hierarchy and replace capitalist authoritarianism with true social democracy (Bernard and Dubief, 1993: 307-8). After Blum assumed office, he brokered the Matignon agreements, named after the premier’s official residence in Paris. Union leaders and heads of France’s leading employer associations agreed to end the strikes and occupations in exchange for wage increases of around 15 percent, a 40-hour workweek, annual vacations, and union recognition. Codified in statute by the Popular Front government, these agreements gave French unions new rights and protections from employer repression. Only then did workers flock into unions. In a few weeks, French unions gained four million members, with the fastest growth in the new industries of the second industrial revolution. Unions in metal fabrication and chemicals grew by 1,450 percent and 4,000 percent respectively (Magraw, 1992: 2, 287-88).

French union leader Léon Jouhaux hailed the Matignon agreements as “the greatest victory of the workers’ movement.” The agreements brought lasting gains, including annual vacations and shorter workweeks. But Simone Weil described the strikers of May 1936 as “soldiers on leave,” and they soon returned to work. Regrouping, employers discharged union activists and attacked the precarious unity of the Popular Front government. Fighting an uphill battle against renewed employer resistance, the Popular Front government fell before it could build a new system of cooperative industrial relations. Contained, French unions were unable to maintain their momentum towards industrial democracy. Membership fell by a third in 1937-39.

The National Industrial Recovery Act

A different union paradigm developed in the United States. Rather than treating unions as vehicles for a democratic revolution, the New Deal sought to integrate organized labor into a reformed capitalism that recognized capitalist hierarchy in the workplace, using unions only to promote macroeconomic stabilization by raising wages and consumer spending (Brinkley, 1995). Included as part of a program for economic recovery was section 7(a) of the National Industrial Recovery Act (NIRA), giving “employees . . . the right to organize and bargain collectively through representatives of their own choosing . . . free from the interference, restraint, or coercion of employers.” AFL leader William Green pronounced this a “charter of industrial freedom,” and workers rushed into unions in a wave unmatched since the Knights of Labor in 1886. As with the KOL, the greatest increase came among the unskilled. Coal miners, southern textile workers, northern apparel workers, Ohio tire makers, Detroit automobile workers, and aluminum, lumber, and sawmill workers all rushed into unions. For the first time in fifty years, American unions gained a foothold in mass production industries.

AFL’s lack of enthusiasm

Promises of state support brought common laborers into unions. But once there, the new unionists received little help from aging AFL leaders. Fearing that the new unionists’ impetuous zeal and militant radicalism would provoke repression, AFL leaders tried to scatter the new members among contending craft unions with archaic craft jurisdictions. The new unionists were swept up in the excitement of unity and collective action but a half-century of experience had taught the AFL’s leadership to fear such enthusiasms.

The AFL dampened the union boom of 1933-34, but, again, the larger problem was not with the AFL’s flawed tactics but with its lack of political leverage. Doing little to enforce the promises of Section 7(a), the Federal government left employers free to ignore the law. Some flatly prohibited union organization; others formally honored the law but established anemic employee representation plans while refusing to deal with independent unions (Irons, 2000). By 1935 almost as many industrial establishments had employer-dominated employee-representation plans (27 percent) as had unions (30 percent). The greatest number had no labor organization at all (43 percent).

Birth of the CIO

Implacable management resistance and divided leadership killed the early New Deal union surge. It died even before the NIRA was ruled unconstitutional in 1935. Failure provoked rebellion within the AFL. Led by John L. Lewis of the United Mine Workers, eight national unions launched a campaign for industrial organization as the Committee for Industrial Organization. After Lewis punched Carpenters’ Union leader William L. Hutcheson on the floor of the AFL convention in 1935, the Committee became an independent Congress of Industrial Organizations (CIO). Including many Communist activists, CIO committees fanned out to organize workers in steel, automobiles, retail trade, journalism and other industries. Building effectively on local rank-and-file militancy, including sitdown strikes in automobiles, rubber, and other industries, the CIO quickly won contracts from some of the strongest bastions of the open shop, including United States Steel and General Motors (Zieger, 1995).

The Wagner Act

Creative strategy and energetic organizing helped. But the CIO owed its lasting success to state support. After the failure of the NIRA, New Dealers sought another way to strengthen labor as a force for economic stimulus. This led to the enactment in 1935 of the National Labor Relations Act, also known as the “Wagner Act.” The Wagner Act established a National Labor Relations Board charged to enforce employees’ “right to self-organization, to form, join, or assist labor organizations, to bargain collectively through representatives of their own choosing, and to engage in concerted activities for the purpose of collective bargaining or other mutual aid or protection.” It provided for elections to choose union representation and required employers to negotiate “in good faith” with their workers’ chosen representatives. Shifting labor conflict from strikes to elections and protecting activists from dismissal for their union work, the Act lowered the cost to individual workers of supporting collective action. It also put the Federal government’s imprimatur on union organization.

Crucial role of rank-and-file militants and state government support

Appointed by President Roosevelt, the first NLRB was openly pro-union, viewing the Act’s preamble as a mandate to promote organization. By 1945 the Board had supervised 24,000 union elections involving some 6,000,000 workers, leading to the unionization of nearly 5,000,000 workers. Still, the NLRB was not responsible for the period’s union boom. The Wagner Act had no direct role in the early CIO years because it was ignored for two years, until its constitutionality was established by the Supreme Court in National Labor Relations Board v. Jones & Laughlin Steel Corporation (1937). Furthermore, the election procedure’s gross contribution of 5,000,000 members was less than half of the period’s net union growth of 11,000,000 members. More important than the Wagner Act were crucial union victories over prominent open shop employers in cities like Akron, Ohio, and Flint, Michigan, and among Philadelphia-area metal workers. Dedicated rank-and-file militants and effective union leadership were crucial in these victories. As important was the support of pro-New Deal local and state governments. The Roosevelt landslides of 1934 and 1936 brought to office liberal Democratic governors and mayors who gave crucial support to the early CIO. Placing a right to collective bargaining above private property rights, liberal governors and other elected officials in Michigan, Ohio, Pennsylvania and elsewhere refused to send police to evict sit-down strikers who had seized control of factories. This state support allowed the minority of workers who actively supported unionization to use force to overcome the passivity of the majority of workers and the opposition of employers. The Open Shop of the 1920s was not abandoned; it was overwhelmed by an aggressive, government-backed labor movement (Gall, 1999; Harris, 2000).

World War II

Federal support for union organization was also crucial during World War II. Again, war helped unions both by eliminating unemployment and because state officials supported unions to gain labor’s backing for the war effort. Established to minimize labor disputes that might disrupt war production, the National War Labor Board instituted a labor truce in which unions exchanged a no-strike pledge for employer recognition. During World War II, employers conceded union security and “maintenance of membership” rules requiring workers to pay their union dues. Acquiescing to government demands, employers accepted the institutionalization of the American labor movement, guaranteeing unions a steady flow of dues to fund an expanded bureaucracy, new benefit programs, and even political action. After growing from 3.5 to 10.2 million members between 1935 and 1941, unions added another 4 million members during the war. “Maintenance of membership” rules prevented free riders even more effectively than had the factory takeovers and violence of the late 1930s. With millions of members and money in the bank, labor leaders like Sidney Hillman and Philip Murray had the ear of business leaders and official Washington. Large, established, and respected, American labor had made it, part of a reformed capitalism committed to both property and prosperity.

Even more than the First World War, World War Two promoted unions and social change. A European civil war, the war divided the continent not only between warring countries but within countries between those, usually on the political right, who favored fascism over liberal parliamentary government and those who defended democracy. Before the war, left and right contended over the appeasement of Nazi Germany and fascist Italy; during the war, many businesses and conservative politicians collaborated with the German occupation against a resistance movement dominated by the left. Throughout Europe, victory over Germany was a triumph for labor that led directly to the entry into government of socialists and Communists.

Successes and Failures after World War II

Union membership exploded during and after the war, nearly doubling between 1938 and 1946. By 1947, unions had enrolled a majority of nonagricultural workers in Scandinavia, Australia, and Italy, and over 40 percent in most other European countries (see Table 1). Accumulated depression and wartime grievances sparked a post-war strike wave that included over 6 million strikers in France in 1948, 4 million in Italy in 1949 and 1950, and 5 million in the United States in 1946. In Europe, popular unrest led to a dramatic political shift to the left. The Labor Party government elected in the United Kingdom in 1945 established a new National Health Service, and nationalized mining, the railroads, and the Bank of England. A center-left post-war coalition government in France expanded the national pension system and nationalized the Bank of France, Renault, and other companies associated with the wartime Vichy regime. Throughout Europe, the share of national income devoted to social services jumped dramatically, as did the share of income going to the working classes.

European unions and the state after World War II

Unions and the political left were stronger everywhere throughout post-war Europe, but in some countries labor’s position deteriorated quickly. With the onset of the Cold War, the popular front uniting Communists, socialists, and bourgeois liberals dissolved in France, Italy, and Japan, and labor’s management opponents recovered state support. In these countries, union membership dropped after 1947 and unions remained on the defensive for over a decade in a largely adversarial industrial relations system. Elsewhere, notably in countries with weak Communist movements, such as Scandinavia but also Austria, Germany, and the Netherlands, labor was able to compel management and state officials to accept strong and centralized labor movements as social partners. In these countries, stable industrial relations allowed cooperation between management and labor to raise productivity and to open new markets for national companies. High union density and centralization allowed Scandinavian and German labor leaders to negotiate incomes policies with governments and employers, restraining wage inflation in exchange for stable employment, investment, and wages linked to productivity growth. Such policies could not be instituted in countries with weaker and less centralized labor movements, including France, Italy, Japan, the United Kingdom and the United States, because their unions had not been accepted as bargaining partners by management and they lacked the centralized authority to enforce incomes policies and productivity bargains (Alvarez, Garrett, and Lange, 1992).

Europe since the 1960s

Even where European labor was weakest, in France or Italy in the 1950s, unions were stronger than before World War II. Working with entrenched socialist and labor political parties, European unions were able to maintain high wages, restrictions on managerial autonomy, and social security. The wave of popular unrest in the late 1960s and early 1970s carried most European unions to new heights, briefly bringing membership to over 50 percent of the labor force in the United Kingdom and in Italy, and bringing socialists into the government in France, Germany, Italy, and the United Kingdom. Since 1980, union membership has declined somewhat and there has been some retrenchment of the welfare state. But the essentials of European welfare states and labor relations have remained (Western, 1997; Golden and Pontusson, 1992).

Unions begin to decline in the US

It was after World War II that American Exceptionalism became most valid, when the United States emerged as the advanced capitalist democracy with the weakest labor movement. The United States was the only advanced capitalist democracy where unions went into prolonged decline right after World War II. At 35 percent, the unionization rate in 1945 was the highest in American history, but even then it was lower than in most other advanced capitalist economies. It has been falling since. The post-war strike wave, including three million strikers in 1945 and five million in 1946, was the largest in American history, but it did little to enhance labor’s political position or bargaining leverage. Instead, it provoked a powerful reaction among employers and others suspicious of growing union power. A concerted drive by the CIO to organize the South, “Operation Dixie,” failed dismally in 1946. Unable to overcome private repression, racial divisions, and the pro-employer stance of southern local and state governments, the CIO was defeated, and its defeat left the South as a nonunion, low-wage domestic enclave and a bastion of anti-union politics (Griffith, 1988). Then, in 1946, a conservative Republican majority was elected to Congress, dashing hopes for a renewed, post-war New Deal.

The Taft-Hartley Act and the CIO’s Expulsion of Communists

Quickly, labor’s wartime dreams turned to post-war nightmares. The Republican Congress amended the Wagner Act, enacting the Taft-Hartley Act in 1947 to give employers and state officials new powers against strikers and unions. The law also required union leaders to sign a non-Communist affidavit as a condition for union participation in NLRB-sponsored elections. This loyalty oath divided labor at a time of weakness. With its roots in radical politics and an alliance of convenience between Lewis and the Communists, the CIO was torn by the new Red Scare. Hoping to appease the political right, the CIO majority in 1949 expelled ten Communist-led unions with nearly a third of the organization’s members. This marked the end of the CIO’s expansive period. Shorn of its left, the CIO lost its most dynamic and energetic organizers and leaders. Worse, the expulsions plunged the CIO into a civil war; non-Communist affiliates raided locals belonging to the “communist-led” unions, fatally distracting both sides from the CIO’s original mission to organize the unorganized and empower the dispossessed. By breaking with the Communists, the CIO’s leadership signaled that it had accepted its place within a system of capitalist hierarchy. Little reason remained for the CIO to remain independent. In 1955 it merged with the AFL to form the AFL-CIO.

The Golden Age of American Unions

Without the revolutionary aspirations now associated with the discredited Communists, America’s unions settled down to bargain over wages and working conditions without challenging such managerial prerogatives as decisions about prices, production, and investment. Some labor leaders, notably James Hoffa of the Teamsters but also local leaders in construction and service trades, abandoned all higher aspirations, using their unions for purely personal financial gain. Allying themselves with organized crime, they used violence to maintain their power over employers and their own rank-and-file membership. Others, including former CIO leaders like Walter Reuther of the United Auto Workers, continued to push the envelope of legitimate bargaining topics, building challenges to capitalist authority at the workplace. But even the UAW was unable to force major managerial prerogatives onto the bargaining table.

The quarter century after 1950 formed a ‘golden age’ for American unions. Established unions found a secure place at the bargaining table with America’s leading firms in industries such as autos, steel, trucking, and chemicals. Contracts were periodically negotiated providing for the exchange of good wages for cooperative workplace relations. Negotiated rules provided a system of civil authority at work, with regulations for promotion and layoffs and procedures giving workers opportunities to voice grievances before neutral arbitrators. Wages rose steadily, by over 2 percent per year, and union workers earned a comfortable 20 percent more than nonunion workers of similar age, experience, and education. Wages grew faster in Europe, but American wages were higher and growth was rapid enough to narrow the gap between rich and poor, and between management salaries and worker wages. Unions also won a growing list of benefit programs: medical and dental insurance, paid holidays and vacations, supplemental unemployment insurance, and pensions. Competition for workers forced many nonunion employers to match the benefit packages won by unions, but unionized employers provided benefits worth over 60 percent more than those given nonunion workers (Freeman and Medoff, 1984; Hirsch and Addison, 1986).

Impact of decentralized bargaining in the US

In most of Europe, strong labor movements limited the wage and benefit advantages of union membership by forcing governments to extend union gains to all workers in an industry regardless of union status. By compelling nonunion employers to match union gains, this limited the competitive penalty borne by unionized firms. By contrast, decentralized bargaining and weak unions in the United States created large union wage differentials that put unionized firms at a competitive disadvantage, encouraging them to seek out nonunion labor and localities. A stable and vocal workforce with more experience and training did raise unionized firms’ labor productivity by 15 percent or more above the level of nonunion firms, and some scholars have argued that unionized workers thereby earned much of their wage gain. Others, however, find little productivity gain for unionized workers once account is taken of the greater use of machinery and other nonlabor inputs by unionized firms (compare Freeman and Medoff, 1984 and Hirsch and Addison, 1986). But even unionized firms with higher labor productivity were usually more conscious of the wages and benefits paid to union workers than they were of unionization’s productivity benefits.

Unions and the Civil Rights Movement

Post-war unions remained politically active. European unions were closely associated with political parties: Communists in France and Italy, socialists or labor parties elsewhere. In practice, notwithstanding revolutionary pronouncements, even the Communists’ political agenda came to resemble that of unions in the United States: liberal reform, including a commitment to full employment and the redistribution of income towards workers and the poor (Boyle, 1998). Golden-age unions were also at the forefront of campaigns to extend individual rights. The major domestic political issue of the post-war United States, civil rights, was troubling for many unions because of racist practices within their own ranks. Nonetheless, in the 1950s and 1960s, the AFL-CIO strongly supported the civil rights movement, funded civil rights organizations, and lobbied in support of civil rights legislation. The AFL-CIO pushed unions to open their ranks to African-American workers, even at the expense of losing affiliates in states like Mississippi. Seizing the opportunity created by the civil rights movement, some unions gained members among nonwhites. The feminist movement of the 1970s created new challenges for the masculine and sometimes misogynist labor movement. But, here too, the search for members and a desire to remove sources of division eventually brought organized labor to the forefront. The AFL-CIO supported the Equal Rights Amendment and began to promote women to leadership positions.

Shift of unions to the public sector

In no other country have women and members of racial minorities assumed such prominent positions in the labor movement as they have in the United States. The movement of African-Americans and women into leadership positions in the late-twentieth-century labor movement was accelerated by a shift in the membership structure of the United States union movement. Maintaining their strength in traditional, masculine occupations in manufacturing, construction, mining, and transportation, European unions remained predominantly male. In the United States, union decline in these industries, combined with growth in heavily female public sector occupations, led to the feminization of the American labor movement. Union membership began to decline in the private sector in the United States immediately after World War II. Between 1953 and 1983, for example, the unionization rate fell from 42 percent to 28 percent in manufacturing, by nearly half in transportation, and by over half in construction and mining (see Table 4). By contrast, after 1960, public sector workers won new opportunities to form unions. Because women and racial minorities form a disproportionate share of these public sector workers, increasing union membership there has changed the American labor movement’s racial and gender composition. Women comprised only 19 percent of American union members in the mid-1950s, but their share rose to 40 percent by the late 1990s. By then, the most unionized workers were no longer the white male skilled craftsmen of old. Instead, they were nurses, parole officers, government clerks, and most of all, school teachers.

Union Collapse and Union Avoidance in the US

Outside the United States, unions grew through the 1970s and, despite some decline since the 1980s, European and Canadian unions remain large and powerful. The United States is different. Union decline since World War II has brought the United States private-sector labor movement down to early twentieth-century levels. As a share of the nonagricultural labor force, union membership fell from its 1945 peak of 35 percent to under 30 percent in the early 1970s. From there, decline became a general rout. In the 1970s, rising unemployment, increasing international competition, and the movement of industry to the nonunion South and to rural areas undermined the bargaining position of many American unions, leaving them vulnerable to a renewed management offensive. Returning to pre-New Deal practices, some employers established new welfare and employee representation programs, hoping to lure workers away from unions (Heckscher, 1987; Jacoby, 1997). Others returned to pre-New Deal repression. By the early 1980s, union avoidance had become an industry. Anti-union consultants and lawyers openly counseled employers on how to use labor law to evade unions. Findings of employers’ unfair labor practices in violation of the Wagner Act tripled in the 1970s; by the 1980s, the NLRB was reinstating over 10,000 workers a year who had been illegally discharged for union activity, nearly one for every twenty who voted for a union in an NLRB election (Weiler, 1983). By the 1990s, the unionization rate in the United States had fallen to under 14 percent, including only 9 percent of private-sector workers and 37 percent of those in the public sector. Unions now have minimal impact on wages or working conditions for most American workers.

Nowhere else have unions collapsed as they have in the United States. With a unionization rate dramatically below that of other countries, including Canada, the United States has achieved exceptional status (see Table 7). There remains great interest in unions among American workers; where employers do not resist, unions thrive. In the public sector, and with those private employers where workers have a free choice to join a union, workers are as likely to do so as they ever were, and as likely as workers anywhere. In the past, as after 1886 and in the 1920s, when American employers broke unions, the unions revived once a government committed to workplace democracy sheltered them from employer repression. If we see another such government, we may yet see another union revival.

Table 7
Union Membership Rates for the United States and Six Other Leading Industrial Economies, 1970 to 1990

                                                        1970    1980    1990
U.S.: Unionization rate: All industries                 30.0    24.7    17.6
U.S.: Unionization rate: Manufacturing                  41.0    35.0    22.0
U.S.: Unionization rate: Financial services              5.0     4.0     2.0
Six countries: Unionization rate: All industries        37.1    39.7    35.3
Six countries: Unionization rate: Manufacturing         38.8    44.0    35.2
Five countries: Unionization rate: Financial services   23.9    23.8    24.0
Ratio: U.S./Six countries: All industries               0.808   0.622   0.499
Ratio: U.S./Six countries: Manufacturing                1.058   0.795   0.626
Ratio: U.S./Five countries: Financial services          0.209   0.168   0.083

Note: The unionization rate reported is the number of union members out of 100 workers in the specified industry. The ratio shown is the unionization rate for the United States divided by the unionization rate for the other countries. The six countries are Canada, France, Germany, Italy, Japan, and the United Kingdom. Data on union membership in financial services in France are not available.

Source: Visser (1991): 110.

References

Alvarez, R. Michael, Geoffrey Garrett and Peter Lange. “Government Partisanship, Labor Organization, and Macroeconomic Performance,” American Political Science Review 85 (1992): 539-556.

Ansell, Christopher K. Schism and Solidarity in Social Movements: The Politics of Labor in the French Third Republic. Cambridge: Cambridge University Press, 2001.

Arnesen, Eric, Brotherhoods of Color: Black Railroad Workers and the Struggle for Equality. Cambridge, MA: Harvard University Press, 2001.

Bain, George S., and Farouk Elsheikh. Union Growth and the Business Cycle: An Econometric Analysis. Oxford: Basil Blackwell, 1976.

Bain, George S. and Robert Price. Profiles of Union Growth: A Comparative Statistical Portrait of Eight Countries. Oxford: Basil Blackwell, 1980.

Bernard, Philippe and Henri Dubief. The Decline of the Third Republic, 1914-1938. Cambridge: Cambridge University Press, 1993.

Blewett, Mary H. Men, Women, and Work: Class, Gender and Protest in the New England Shoe Industry, 1780-1910. Urbana, IL: University of Illinois Press, 1988.

Boyle, Kevin, editor. Organized Labor and American Politics, 1894-1994: The Labor-Liberal Alliance. Albany, NY: State University of New York Press, 1998.

Brinkley, Alan. The End of Reform: New Deal Liberalism in Recession and War. New York: Alfred A. Knopf, 1995.

Brody, David. Workers in Industrial America: Essays on the Twentieth-Century Struggle. New York: Oxford University Press, 1985.

Cazals, Rémy. Avec les ouvriers de Mazamet dans la grève et l’action quotidienne, 1909-1914. Paris: Maspero, 1978.

Cohen, Lizabeth. Making A New Deal: Industrial Workers in Chicago, 1919-1939. Cambridge: Cambridge University Press, 1990.

Cronin, James E. Industrial Conflict in Modern Britain. London: Croom Helm, 1979.

Cronin, James E. “Labor Insurgency and Class Formation.” In Work, Community, and Power: The Experience of Labor in Europe and America, 1900-1925, edited by James E. Cronin and Carmen Sirianni. Philadelphia: Temple University Press, 1983.

Cronin, James E. and Carmen Sirianni, editors. Work, Community, and Power: The Experience of Labor in Europe and America, 1900-1925. Philadelphia: Temple University Press, 1983.

Dawley, Alan. Class and Community: The Industrial Revolution in Lynn. Cambridge, MA: Harvard University Press, 1976.

Ely, James W., Jr. The Guardian of Every Other Right: A Constitutional History of Property Rights. New York: Oxford, 1998.

Fink, Leon. Workingmen’s Democracy: The Knights of Labor and American Politics. Urbana, IL: University of Illinois Press, 1983.

Fink, Leon. “The New Labor History and the Powers of Historical Pessimism: Consensus, Hegemony, and the Case of the Knights of Labor.” Journal of American History 75 (1988): 115-136.

Foner, Philip S. Organized Labor and the Black Worker, 1619-1973. New York: International Publishers, 1974.

Foner, Philip S. Women and the American Labor Movement: From Colonial Times to the Eve of World War I. New York: Free Press, 1979.

Frank, Dana. Purchasing Power: Consumer Organizing, Gender, and the Seattle Labor Movement, 1919-1929. Cambridge: Cambridge University Press, 1994.

Freeman, Richard and James Medoff. What Do Unions Do? New York: Basic Books, 1984.

Friedman, Gerald. “Dividing Labor: Urban Politics and Big-City Construction in Late-Nineteenth Century America.” In Strategic Factors in Nineteenth-Century American Economic History, edited by Claudia Goldin and Hugh Rockoff, 447-64. Chicago: University of Chicago Press, 1991.

Friedman, Gerald. “Revolutionary Syndicalism and French Labor: The Rebels Behind the Cause.” French Historical Studies 20 (Spring 1997).

Friedman, Gerald. State-Making and Labor Movements: France and the United States 1876-1914. Ithaca, NY: Cornell University Press, 1998.

Friedman, Gerald. “New Estimates of United States Union Membership, 1880-1914.” Historical Methods 32 (Spring 1999): 75-86.

Friedman, Gerald. “The Political Economy of Early Southern Unionism: Race, Politics, and Labor in the South, 1880-1914.” Journal of Economic History 60, no. 2 (2000): 384-413.

Friedman, Gerald. “The Sanctity of Property in American Economic History” (manuscript, University of Massachusetts, July 2001).

Gall, Gilbert. Pursuing Justice: Lee Pressman, the New Deal, and the CIO. Albany, NY: State University of New York Press, 1999.

Gamson, William A. The Strategy of Social Protest. Homewood, IL: Dorsey Press, 1975.

Geary, Richard. European Labour Protest, 1848-1939. New York: St. Martin’s Press, 1981.

Golden, Miriam and Jonas Pontusson, editors. Bargaining for Change: Union Politics in North America and Europe. Ithaca, NY: Cornell University Press, 1992.

Griffith, Barbara S. The Crisis of American Labor: Operation Dixie and the Defeat of the CIO. Philadelphia: Temple University Press, 1988.

Harris, Howell John. Bloodless Victories: The Rise and Fall of the Open Shop in the Philadelphia Metal Trades, 1890-1940. Cambridge: Cambridge University Press, 2000.

Hattam, Victoria C. Labor Visions and State Power: The Origins of Business Unionism in the United States. Princeton: Princeton University Press, 1993.

Heckscher, Charles C. The New Unionism: Employee Involvement in the Changing Corporation. New York: Basic Books, 1987.

Hirsch, Barry T. and John T. Addison. The Economic Analysis of Unions: New Approaches and Evidence. Boston: Allen and Unwin, 1986.

Hirschman, Albert O. Exit, Voice and Loyalty: Responses to Decline in Firms, Organizations, and States. Cambridge, MA, Harvard University Press, 1970.

Hirschman, Albert O. Shifting Involvements: Private Interest and Public Action. Princeton: Princeton University Press, 1982.

Hobsbawm, Eric J. Labouring Men: Studies in the History of Labour. London: Weidenfeld and Nicolson, 1964.

Irons, Janet. Testing the New Deal: The General Textile Strike of 1934 in the American South. Urbana, IL: University of Illinois Press, 2000.

Jacoby, Sanford. Modern Manors: Welfare Capitalism Since the New Deal. Princeton: Princeton University Press, 1997.

Katznelson, Ira and Aristide R. Zolberg, editors. Working-Class Formation: Nineteenth-Century Patterns in Western Europe and the United States. Princeton: Princeton University Press, 1986.

Kocka, Jurgen. “Problems of Working-Class Formation in Germany: The Early Years, 1800-1875.” In Working-Class Formation: Nineteenth-Century Patterns in Western Europe and the United States, edited by Ira Katznelson and Aristide R. Zolberg, 279-351. Princeton: Princeton University Press, 1986.

Letwin, Daniel. The Challenge of Interracial Unionism: Alabama Coal Miners, 1878-1921. Chapel Hill: University of North Carolina Press, 1998.

Maddison, Angus. Dynamic Forces in Capitalist Development: A Long-Run Comparative View. Oxford: Oxford University Press, 1991.

Magraw, Roger. A History of the French Working Class, two volumes. London: Blackwell, 1992.

Milkman, Ruth. Women, Work, and Protest: A Century of United States Women’s Labor. Boston: Routledge and Kegan Paul, 1985.

Montgomery, David. The Fall of the House of Labor: The Workplace, the State, and American Labor Activism, 1865-1920. Cambridge: Cambridge University Press, 1987.

Mullin, Debbie Dudley. “The Porous Umbrella of the AFL: Evidence From Late Nineteenth-Century State Labor Bureau Reports on the Establishment of American Unions.” Ph.D. diss., University of Virginia, 1993.

Nolan, Mary. Social Democracy and Society: Working-Class Radicalism in Dusseldorf, 1890-1920. Cambridge: Cambridge University Press, 1981.

Olson, Mancur. The Logic of Collective Action: Public Goods and the Theory of Groups. Cambridge, MA: Harvard University Press, 1971.

Perlman, Selig. A Theory of the Labor Movement. New York: MacMillan, 1928.

Rachleff, Peter J. Black Labor in the South, 1865-1890. Philadelphia: Temple University Press, 1984.

Roediger, David. The Wages of Whiteness: Race and the Making of the American Working Class. London: Verso, 1991.

Scott, Joan. The Glassworkers of Carmaux: French Craftsmen in Political Action in a Nineteenth-Century City. Cambridge, MA: Harvard University Press, 1974.

Sewell, William H. Jr. Work and Revolution in France: The Language of Labor from the Old Regime to 1848. Cambridge: Cambridge University Press, 1980.

Shorter, Edward and Charles Tilly. Strikes in France, 1830-1968. Cambridge: Cambridge University Press, 1974.

Temin, Peter. Lessons from the Great Depression. Cambridge, MA: MIT Press, 1990.

Thompson, Edward P. The Making of the English Working Class. New York: Vintage, 1966.

Troy, Leo. Distribution of Union Membership among the States, 1939 and 1953. New York: National Bureau of Economic Research, 1957.

United States, Bureau of the Census. Census of Occupations, 1930. Washington, DC: Government Printing Office, 1932.

Visser, Jelle. European Trade Unions in Figures. Boston: Kluwer, 1989.

Voss, Kim. The Making of American Exceptionalism: The Knights of Labor and Class Formation in the Nineteenth Century. Ithaca, NY: Cornell University Press, 1993.

Ware, Norman. The Labor Movement in the United States, 1860-1895: A Study in Democracy. New York: Vintage, 1929.

Washington, Booker T. “The Negro and the Labor Unions.” Atlantic Monthly (June 1913).

Weiler, Paul. “Promises to Keep: Securing Workers Rights to Self-Organization Under the NLRA.” Harvard Law Review 96 (1983).

Western, Bruce. Between Class and Market: Postwar Unionization in the Capitalist Democracies. Princeton: Princeton University Press, 1997.

Whatley, Warren. “African-American Strikebreaking from the Civil War to the New Deal.” Social Science History 17 (1993), 525-58.

Wilentz, Robert Sean. Chants Democratic: New York City and the Rise of the American Working Class, 1788-1850. Oxford: Oxford University Press, 1984.

Wolman, Leo. Ebb and Flow in Trade Unionism. New York: National Bureau of Economic Research, 1936.

Zieger, Robert. The CIO, 1935-1955. Chapel Hill: University of North Carolina Press, 1995.

Zolberg, Aristide. “Moments of Madness.” Politics and Society 2 (Winter 1972): 183-207.

Citation: Friedman, Gerald. “Labor Unions in the United States”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/labor-unions-in-the-united-states/

History of Labor Turnover in the U.S.

Laura Owen, DePaul University

Labor turnover measures the movement of workers in and out of employment with a particular firm. Consequently, concern with the issue and interest in measuring such movement only arose when working for an employer (rather than self-employment in craft or agricultural production) became the norm. The rise of large scale firms in the late nineteenth century and the decreasing importance (in percentage terms) of agricultural employment meant that a growing number of workers were employed by firms. It was only in this context that interest in measuring labor turnover and understanding its causes began.

Trends in Labor Turnover

Labor turnover is typically measured in terms of the separation rate (quits, layoffs, and discharges per 100 employees on the payroll). The aggregate data on turnover among U.S. workers is available from a series of studies focusing almost entirely on the manufacturing sector. These data show high rates of labor turnover (annual rates exceeding 100%) in the early decades of the twentieth century, substantial declines in the 1920s, significant fluctuations during the economic crisis of the 1930s and the boom of the World War II years, and a return to the low rates of the 1920s in the post-war era. (See Figure 1 and its notes.) Firm and state level data (from the late nineteenth and early twentieth centuries) also indicate that labor turnover rates exceeding 100 were common to many industries.
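
As a concrete illustration of how the separation rate is constructed, the sketch below (in Python, with hypothetical monthly figures rather than actual survey data) expresses quits, layoffs, and discharges per 100 employees on the payroll and sums them into a separation rate.

```python
# Hypothetical one-month figures for a single plant (illustrative only, not actual data).
avg_employment = 2_000                     # average number of workers on the payroll
quits, layoffs, discharges = 150, 40, 10   # separations during the month

def per_100(count, employment):
    """Express a count of separations per 100 employees on the payroll."""
    return 100 * count / employment

quit_rate = per_100(quits, avg_employment)            # 7.5
layoff_rate = per_100(layoffs, avg_employment)        # 2.0
discharge_rate = per_100(discharges, avg_employment)  # 0.5
separation_rate = quit_rate + layoff_rate + discharge_rate

print(f"Monthly separation rate: {separation_rate:.1f} per 100 employees")
# Sustained for a year, this pace implies an annual rate above 100,
# the level common in manufacturing before the 1920s.
print(f"Annualized (x12): {12 * separation_rate:.0f} per 100 employees")
```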

Contemporaries expressed concern over the high rates of labor turnover in the early part of the century and conducted numerous studies to understand its causes and consequences. (See for example, Douglas 1918, Lescohier 1923, and Slichter 1921.) Some of these studies focused on the irregularity in labor demand which resulted in seasonal and cyclical layoffs. Others interpreted the high rates of labor turnover as an indication of worker dissatisfaction and labor relations problems. Many observers began to recognize that labor turnover was costly for the firm (in terms of increased hiring and training expenditures) and for the worker (in terms of irregularity of income flows).

Both the high rates of labor turnover in the early years of the twentieth century and the dramatic declines in the 1920s are closely linked with changes in the worker-initiated component of turnover rates. During the 1910s and 1920s, quits accounted (on average) for over seventy percent of all separations, and the decline in annual separation rates from 123.4 in 1920 to 37.1 in 1928 was primarily driven by a decline in quit rates, from 100.9 to 25.8 per 100 employees.

Explanations of the Decline in Turnover in the 1920s

The aggregate decline in labor turnover in the 1920s appears to be the beginning of a long run trend. Numerous studies, seeking to identify why workers began quitting their jobs less frequently, have pointed to the role of altered employment relationships. (See, for example, Owen 1995b, Ozanne 1967, and Ross 1958.) The new practices of employers, categorized initially as welfare work and later as the development of internal labor markets, included a variety of policies aimed at strengthening the attachment between workers and firms. The most important of these policies were the establishment of personnel or employment departments, the offering of seniority-based compensation, and the provision of on-the-job training and internal promotion ladders. In the U.S., these changes in employment practices began at a few firms around the turn of the twentieth century, intensified during WWI and became more widespread in the 1920s. However, others have suggested that the changes in quit behavior in the 1920s were the result of immigration declines (due to newly implemented quotas) and slack labor markets (Goldin 2000, Jacoby 1985).

Even firms’ motivation for implementing the new practices is subject to debate. One argument focuses on how the shift from craft to mass production increased the importance of firm-specific skills and on-the-job training. Firms’ greater investment in training meant that it was more costly to have workers leave and provided the incentive for firms to lower turnover. However, others have provided evidence that job ladders and internal promotion were not always implemented to reward the increased worker productivity resulting from on-the-job training. Rather, these employment practices were sometimes attempts to appease workers and to prevent unionization. Labor economists have also noted that providing various forms of deferred compensation (pensions, wages that increase with seniority, etc.) can increase worker effort and reduce the costs of monitoring workers. Whether promotion ladders established within firms reflect an attempt to forestall unionization, a means of protecting firm investments in training by lowering turnover, or a method of ensuring worker effort is still open to debate, though the explanations are not necessarily mutually exclusive (Jacoby 1983, Lazear 1981, Owen 1995b, Sundstrom 1988, Stone 1974).

Subsequent Patterns of Labor Turnover

In the 1930s and 1940s the volatility of labor turnover increased and the relationships between the components of total separations shifted (Figure 1). The depressed labor markets of the 1930s meant that procyclical quit rates declined, but increased layoffs kept total separation rates relatively high (on average, 57 per 100 employees between 1930 and 1939). During the tight labor markets of the World War II years, turnover again reached rates exceeding 100 per 100 employees, with increases in quits acting as the primary driver. Quits and total separations declined after the war, producing much lower and less volatile turnover rates between 1950 and 1970 (Figure 1).

Though the decline in labor turnover in the early part of the twentieth century was seen by many as a sign of improved labor-management relations, the low turnover rates of the post-WWII era led macroeconomists to question the benefits of strong attachments between workers and firms. Specifically, there was concern that long-term employment contracts (either implicit or explicit) might generate wage rigidities which could result in increased unemployment and other labor market adjustment problems (Ross 1958). More recently, labor economists have asked whether the movement toward long-term attachments between workers and firms is reversing itself. “Changes in Job Stability and Job Security,” a special issue of the Journal of Labor Economics (October 1999), includes numerous analyses suggesting that job instability increased among some groups of workers (particularly those with longer tenure) amid the restructuring activities of the 1990s.

Turnover Data and Methods of Analysis

Historical analyses of labor turnover have relied upon two types of data. The first type consists of firm-level data on turnover within a particular workplace, or government collections (gathered through firms) of data on the level of turnover within particular industries or geographic locales. If these turnover data are broken down into their components (quits, layoffs, and discharges), a quit rate model such as the one developed by Parsons (1973) can be employed to analyze the worker-initiated component of turnover as it relates to job search behavior. These analyses (see, for example, Owen 1995a) estimate quit rates as a function of variables reflecting labor demand conditions (e.g., unemployment and relative wages) and of labor supply variables reflecting the composition of the labor force (e.g., age/gender distributions and immigrant flows).
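
As a rough illustration, a quit-rate regression of this kind might be set up as in the Python sketch below. The column names and the tiny synthetic data set are hypothetical placeholders, not the actual series used by Parsons (1973) or Owen (1995a), which rely on industry- or firm-level panels.

    # Illustrative quit-rate regression in the spirit of Parsons (1973).
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.DataFrame({
        "quit_rate":     [95.0, 80.0, 60.0, 45.0, 30.0, 26.0],  # quits per 100 employees (hypothetical)
        "unemployment":  [4.0, 5.5, 7.0, 8.5, 10.0, 11.0],      # percent unemployed
        "relative_wage": [1.00, 1.02, 1.05, 1.08, 1.10, 1.12],  # industry wage / economy-wide wage
        "pct_immigrant": [20.0, 18.0, 15.0, 12.0, 10.0, 9.0],   # immigrant share of the labor force
    })

    # Quit rates as a function of labor demand conditions (unemployment,
    # relative wages) and labor supply composition (immigrant share).
    model = smf.ols("quit_rate ~ unemployment + relative_wage + pct_immigrant", data=df).fit()
    print(model.summary())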

The second type of turnover data is generated from employment records or governmental surveys that provide information on individual workers. Job histories can be created with these data and used to analyze the impact of individual characteristics, such as age, education, and occupation, on labor turnover, firm tenure, and occupational experience. Analysis of this type of data typically employs a “hazard” model that estimates the probability of a worker’s leaving a job as a function of individual worker characteristics. (See, for example, Carter and Savoca 1992, Maloney 1998, and Whatley and Sedo 1998.)
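
A minimal sketch of such a hazard estimation, assuming the Python lifelines library and a hypothetical job-history table (tenure_years, separated, age_at_hire, skilled), might look like the following; it is not a reconstruction of the specifications used in the cited studies.

    # Sketch of a proportional-hazards model of job separation.
    import pandas as pd
    from lifelines import CoxPHFitter

    jobs = pd.DataFrame({
        "tenure_years": [0.5, 1.0, 2.0, 3.5, 5.0, 7.0, 10.0, 12.0],  # time observed with the firm
        "separated":    [1,   1,   1,   0,   1,   0,   1,    0],     # 1 = left the job, 0 = censored
        "age_at_hire":  [18,  22,  25,  30,  28,  35,  40,   45],
        "skilled":      [0,   0,   0,   1,   0,   1,   1,    1],     # 1 = skilled occupation
    })

    # Estimate how worker characteristics shift the hazard of leaving a job.
    # The small penalizer only stabilizes estimation on this tiny example.
    cph = CoxPHFitter(penalizer=0.1)
    cph.fit(jobs, duration_col="tenure_years", event_col="separated")
    cph.print_summary()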

Labor Turnover and Long Term Employment

Another measure of worker/firm attachment is tenure, the number of years a worker stays with a particular job or firm. While significant declines in labor turnover (such as those observed in the 1920s) will likely be reflected in rising average tenure with the firm, high rates of labor turnover do not rule out long tenure for part of the workforce. If high turnover is concentrated among one subset of workers (the young or the unskilled), it can coexist with lifetime jobs for another subset (the skilled). For example, the high rates of labor turnover that were common until the mid-1920s co-existed with long-term jobs for some workers. The evidence indicates that while long-term employment became more common in the twentieth century, it was not completely absent from nineteenth-century labor markets (Carter 1988, Carter and Savoca 1990, Hall 1982).

Notes on Turnover Data in Figure 1

The turnover data used to generate Figure 1 come from three separate sources: Brissenden and Frankel (1920) for the 1910-1918 data; Berridge (1929) for the 1919-1929 data; and U.S. Bureau of the Census (1975) for the 1930-1970 data. Several adjustments were necessary to present them in a single format. The Brissenden and Frankel study calculated the separate components of turnover (quits and layoffs) from only a subsample of their data. The subsample data were used to calculate the percentage of total separations accounted for by quits and layoffs, and these percentages were applied to the total separations data from the full sample to estimate the quit and layoff components. The 1930-1970 data reported in Historical Statistics of the United States were collected by the U.S. Bureau of Labor Statistics and originally reported in Employment and Earnings, U.S., 1909-1971. Unlike the earlier series, these data were originally reported as average monthly rates and have been converted into annualized figures by multiplying by 12.
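
For concreteness, the mechanics of these two adjustments can be sketched as follows; every number in the snippet is hypothetical and serves only to show the calculations.

    # (1) Split full-sample total separations using the subsample shares.
    total_separations = 110.0                                  # per 100 employees, full sample
    sub_quits, sub_layoffs, sub_discharges = 80.0, 20.0, 5.0   # subsample components
    sub_total = sub_quits + sub_layoffs + sub_discharges

    est_quits = total_separations * sub_quits / sub_total      # estimated quit component
    est_layoffs = total_separations * sub_layoffs / sub_total  # estimated layoff component

    # (2) Annualize the 1930-1970 BLS series, reported as average monthly rates.
    avg_monthly_rate = 4.2                                     # separations per 100 employees per month
    annual_rate = avg_monthly_rate * 12

    print(round(est_quits, 1), round(est_layoffs, 1), round(annual_rate, 1))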

In addition to the adjustments described above, four issues relating to the comparability of these data should be noted. First, the turnover data for the 1919 to 1929 period are median rates, whereas the data from before and after that period were compiled as weighted averages of the rates of all firms surveyed. If larger firms have lower turnover rates (as Arthur Ross 1958 notes), medians will be higher than weighted averages. The data for the one year covered by both studies (1919) confirm this difference: the median turnover rates from Berridge (1920s data) exceed the weighted average turnover rates from Brissenden and Frankel (1910s data). Brissenden and Frankel suggested that the actual turnover of labor in manufacturing may have been much higher than their sample statistics suggest:

The establishments from which the Bureau of Labor Statistics has secured labor mobility figures have necessarily been the concerns which had the figures to give, that is to say, concerns which had given rather more attention than most firms to their force-maintenance problems. These firms reporting are chiefly concerns which had more or less centralized employment systems and were relatively more successful in the maintenance of a stable work force (1920, p. 40).

A similar underestimation bias continued with the BLS collection of data because the average firm size in the sample was larger than the average firm size in the whole population of manufacturing firms (U.S. Bureau of the Census, p. 160), and larger firms tend to have lower turnover rates.

Second, the data for 1910-1918 (Brissenden and Frankel) include workers in public utilities and mercantile establishments in addition to workers in manufacturing industries and are therefore not directly comparable to the later series on the turnover of manufacturing workers. However, these non-manufacturing workers had lower turnover rates than the manufacturing workers in both 1913/14 and 1917/18 (the two years for which Brissenden and Frankel present industry-level data). Thus, the decline in turnover of manufacturing workers from the 1910s to the 1920s may actually be underestimated.

Third, the turnover rates for 1910 to 1918 (Brissenden and Frankel) were originally calculated per labor hour. The number of employees was estimated at one worker per 3,000 labor hours, the number of hours in a typical work year. This conversion generates the number of full-year workers and does not allow for any procyclicality of labor hours. If labor hours are procyclical, this calculation overstates (understates) the number of workers during an upswing (downswing), thus dampening the response of turnover rates to economic cycles. A worked example of this conversion appears after the fourth point below.

Fourth, total separations are broken down into quits, layoffs, discharges and other (including military enlistment, death and retirement). Prior to 1940, the “other” separations were included in quits.
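
As noted under the third point, the per-labor-hour conversion can be illustrated with a short calculation; the figures below are hypothetical.

    # Worked example of the per-labor-hour conversion (third point above).
    total_labor_hours = 6_000_000      # hours worked in the surveyed plants over the year
    separations = 2_500                # separations recorded over the year

    est_workers = total_labor_hours / 3000             # assumed 3,000 hours per full work year
    separation_rate = 100 * separations / est_workers  # separations per 100 full-year workers
    print(round(separation_rate, 1))                   # 125.0

    # If hours per worker rise in an upswing, total_labor_hours / 3000 overstates
    # the true number of workers, so the computed rate understates turnover
    # (and vice versa in a downswing), dampening the measured cyclical swings.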

References

Berridge, William A. “Labor Turnover in American Factories.” Monthly Labor Review 29 (July 1929): 62-65.
Brissenden, Paul F. and Emil Frankel. “Mobility of Labor in American Industry.” Monthly Labor Review 10 (June 1920): 1342-62.
Carter, Susan B. “The Changing Importance of Lifetime Jobs, 1892-1978.” Industrial Relations 27, no. 3 (1988): 287-300.
Carter, Susan B. and Elizabeth Savoca. “The ‘Teaching Procession’? Another Look at Teacher Tenure, 1845-1925.” Explorations in Economic History 29, no. 4 (1992): 401-16.
Carter, Susan B. and Elizabeth Savoca. “Labor Mobility and Lengthy Jobs in Nineteenth-Century America.” Journal of Economic History 50, no. 1 (1990): 1-16.
Douglas, Paul H. “The Problem of Labor Turnover.” American Economic Review 8, no. 2 (1918): 306-16.
Goldin, Claudia. “Labor Markets in the Twentieth Century.” In The Cambridge Economic History of the United States, III, edited by Stanley L. Engerman and Robert E. Gallman, 549-623. Cambridge: Cambridge University Press, 2000.
Hall, Robert E. “The Importance of Lifetime Jobs in the U.S. Economy.” American Economic Review 72, no. 4 (1982): 716-24.
Jacoby, Sanford M. “Industrial Labor Mobility in Historical Perspective.” Industrial Relations 22, no. 2 (1983): 261-82.
Jacoby, Sanford M. Employing Bureaucracy: Managers, Unions, and the Transformation of Work in American Industry, 1900-1945. New York: Columbia University Press, 1985.
Lazear, Edward. P. “Agency, Earnings Profiles, Productivity, and Hours Reduction.” American Economic Review 71, no. 4 (1981): 606-19.
Lescohier, Don D. The Labor Market. New York: Macmillan, 1923.
Maloney, Thomas N. “Racial Segregation, Working Conditions and Workers’ Health: Evidence from the A.M. Byers Company, 1916-1930.” Explorations in Economic History 35, no. 3 (1998): 272-95.
Owen, Laura J. “Worker Turnover in the 1920s: What Labor Supply Arguments Don’t Tell Us.” Journal of Economic History 55, no. 4 (1995a): 822-41.
Owen, Laura J. “Worker Turnover in the 1920s: The Role of Changing Employment Policies.” Industrial and Corporate Change 4 (1995b): 499-530.
Ozanne, Robert. A Century of Labor-Management Relations at McCormick and International Harvester. Madison: University of Wisconsin Press, 1967.
Parsons, Donald O. “Quit Rates Over Time: A Search and Information Approach.” American Economic Review 63, no.3 (1973): 390-401.
Ross, Arthur M. “Do We Have a New Industrial Feudalism?” American Economic Review 48 (1958): 903-20.
Slichter, Sumner. The Turnover of Factory Labor. New York: Appleton, 1921.
Stone, Katherine. “The Origins of Job Structures in the Steel Industry.” Review of Radical Political Economics 6, no. 2 (1974): 113-73.
Sundstrom, William A. “Internal Labor Markets before World War I: On-the-Job Training and Employee Promotion.” Explorations in Economic History 25 (October 1988): 424-45.
U.S. Bureau of the Census. Historical Statistics of the United States, Colonial Times to 1970. Washington, D.C., 1975.
Whatley, Warren C. and Stan Sedo. “Quit Behavior as a Measure of Worker Opportunity: Black Workers in the Interwar Industrial North.” American Economic Review 88, no. 2 (1998): 363-67.

Citation: Owen, Laura. “History of Labor Turnover in the U.S.”. EH.Net Encyclopedia, edited by Robert Whaples. April 29, 2004. URL http://eh.net/encyclopedia/history-of-labor-turnover-in-the-u-s/