From GATT to WTO: The Evolution of an Obscure Agency to One Perceived as Obstructing Democracy

Susan Ariel Aaronson, National Policy Association

Historical Roots of GATT and the Failure of the ITO

While the United States has always participated in international trade, it did not take a leadership role in global trade policy making until the Great Depression. One reason for this is that under the US Constitution, Congress has responsibility for promoting and regulating commerce, while the executive branch has responsibility for foreign policy. Trade policy thus became a tug of war between the two branches, which did not always agree on the mix of trade promotion and protection. However, in 1934 the United States began an experiment: the Reciprocal Trade Agreements Act. In the hope of expanding employment, Congress agreed to permit the executive branch to negotiate bilateral trade agreements. (Bilateral agreements are those between two parties — for example, the US and another country.)

During the 1930s, the amount of bilateral negotiation under this act was fairly limited, and in truth it did little to expand global or domestic trade. The Second World War, however, led policy makers to experiment on a broader level. In the 1940s, working with the British government, the United States developed two innovations to expand and govern trade among nations: the General Agreement on Tariffs and Trade (GATT) and the International Trade Organization (ITO). GATT was simply a temporary multilateral agreement designed to provide a framework of rules and a forum to negotiate trade barrier reductions among nations. It was built on the Reciprocal Trade Agreements Act, which allowed the executive branch to negotiate trade agreements under temporary authority from Congress.

The ITO

The ITO, in contrast, set up both a code of world trade principles and a formal international institution. The ITO's architects were greatly influenced by John Maynard Keynes, the British economist, and the organization represented an internationalization of the view that governments could play a positive role in encouraging international economic growth. It was remarkably comprehensive, including chapters on commercial policy, investment, employment and even business practices (what we call antitrust or competition policies today). The ITO also included a secretariat with the power to arbitrate trade disputes. But the ITO was not popular, and it took a long time to negotiate. Its final charter was signed by 54 nations at the UN Conference on Trade and Employment in Havana in March 1948, but this was too late. The ITO missed the flurry of support for internationalism that accompanied the end of World War II and that led to the establishment of agencies such as the UN, the IMF and the World Bank. The US Congress never brought membership in the ITO to a vote, and when the president announced that he would not seek ratification of the Havana Charter, the ITO effectively died. Consequently, the provisional GATT (which was not a formal international organization) governed world trade until 1994 (Aaronson, 1996, 3-5).

GATT

GATT was a club, albeit an increasingly popular one. But GATT was not a treaty. The United States (and other nations) joined GATT under its Protocol of Provisional Application, which meant that the provisions of GATT were binding only insofar as they were not inconsistent with a nation's existing legislation. With this clause, the United States could spur trade liberalization or contravene the rules of GATT when politically or economically necessary (US Tariff Commission, 1950, 19-21, 20 note 4).

From 1948 until 1993, GATT's purview and membership grew dramatically. During this period, GATT sponsored eight trade rounds in which member nations, called contracting parties, agreed to mutually reduce trade barriers. But trade liberalization under the GATT came with costs to some Americans. Important industries in the United States such as textiles, television, steel and footwear suffered from foreign competition, and some workers lost jobs. Most Americans, however, benefited from this growth in world trade: as consumers, they got a cheaper and more diverse supply of goods; as producers, most found new markets and growing employment. From 1948 to about 1980 this economic growth came at little cost to the American economy as a whole or to American democracy (Aaronson, 1996, 133-134).

The Establishment of the WTO

By the late 1980s, a growing number of nations decided that GATT could better serve global trade expansion if it became a formal international organization. In 1988, the US Congress, in the Omnibus Trade and Competitiveness Act, explicitly called for more effective dispute settlement mechanisms and pressed for negotiations to formalize GATT and to make it a more powerful and comprehensive organization. The result was the World Trade Organization (WTO), which was established during the Uruguay Round (1986-1993) of GATT negotiations and which subsumed GATT. The WTO provides a permanent arena for member governments to address international trade issues, and it oversees the implementation of the trade agreements negotiated in the Uruguay Round of trade talks.

The WTO’s Powers

The WTO is not simply GATT transformed into a formal international organization. It covers a much broader purview, including subsidies, intellectual property, food safety and other policies that were once solely the subject of national governments. The WTO also has strong dispute settlement mechanisms. As under GATT, panels weigh trade disputes, but these panels must adhere to a strict time schedule. Moreover, in contrast with GATT procedure, no country can veto or delay panel decisions. If US laws protecting the environment (such as laws requiring gas mileage standards) were found to be de facto trade impediments, the US would have to take action: it could change its law, do nothing and face retaliation, or compensate the other party for lost trade if it kept such a law (Jackson, 1994).

The WTO’s Mixed Record

Despite its broader scope and powers, the WTO has had a mixed record. Nations have clamored to join this new organization and receive the benefits of expanded trade and formalized multinational rules. Today the WTO has grown to 142 members. Nations such as China, Russia, Saudi Arabia and Ukraine hope to join the WTO soon. But since the WTO was created, its members have not been able to agree on the scope of a new round of trade talks. Many developing countries believe that their industrialized trading partners have not fully granted them the benefits promised under the Uruguay Round of GATT. Some countries regret including intellectual property protections under the aegis of the WTO.

Protests

A wide range of citizens has become concerned about the effect of trade rules upon the achievement of other important policy goals. In India, Latin America, Europe, Canada and the United States, alarmed citizens have taken to the streets to protest globalization and in particular what they perceive as the undemocratic nature of the WTO. During the fiftieth anniversary of GATT in Geneva in 1998, some 30,000 people rioted. During the Seattle Ministerial Meetings in November/December 1999, again about 30,000 people protested, some violently. When the WTO attempts to kick off a new round in Doha, Qatar, later this year, protestors are again planning to disrupt the proceedings (Aaronson, 2001).

Explaining Recent Protests about the WTO

During the first thirty years of GATT's history, the relationship of trade policy to human rights, labor rights, consumer protection, and the environment was essentially "off-stage." This is because GATT's role was limited to governing how nations used traditional tools of economic protection — border measures such as tariffs and quotas.

GATT’s Scope Was Initially Limited

Why did policy makers limit the scope of GATT? The US could participate in GATT negotiations only because Congress granted extensions of the Reciprocal Trade Agreements Act of 1934, and that act allowed the president to negotiate only commercial policy. As a result, GATT said almost nothing about the effects of trade (whether trade degrades the environment or injures workers) or the conditions of trade (whether disparate systems of regulation, such as consumer, environmental, or labor standards, allow for fair competition). From the 1940s to the 1970s, few policy makers would admit that their systems of regulation sometimes distorted trade. Such regulations were the turf of domestic policy makers, not foreign policy makers, and GATT said little about domestic norms or regulations. In 1971, GATT established a working party on environmental measures and international trade, but it did not meet until 1991, after much pressure from some European nations (Charnovitz, 1992, 341, 348).

GATT’s Scope Widened to Include Domestic Policies

Policy makers and economists have long recognized that trade and social regulations can intersect. Although the United States did not ban trade in slaves until 1807, the US was among the first nations to ban goods manufactured by forced labor (prison labor), in the Tariff Act of 1890 (section 51) (Aaronson, 2001, 44). This provision influenced many trade agreements that followed, including GATT, which includes a similar provision. But in the 1970s, public officials began to admit that domestic regulations, such as health and safety regulations, could, with or without intent, also distort trade (Keck and Sikkink, 1998, 41-47). They worked to include rules governing such regulations in the purview of GATT and other trade agreements. This process began in the Tokyo Round (1973-79) of GATT negotiations but came to fruition during the Uruguay Round, when policy makers expanded the turf of trade agreements to include rules governing once-domestic policies such as intellectual property, food safety, and subsidies (GATT Secretariat, 1993, Annex IV, 91).

Rising Importance of International Trade and Trade Policy

In 1970, the import and export of American goods and services added up to only about 11.5% of gross domestic product. This climbed swiftly to 20.5% in 1980 and at the end of the century averaged about 24%. (In addition, by the mid-1980s a persistent trade deficit emerged, with imports exceeding exports by significant amounts year after year — imports exceeded exports by 3% of GDP in 1987, for example.)

Public Opinion Has Become More Concerned about Trade Policy

Partly because of the rising importance of international trade, the relationship of trade policy to the achievement of other public policy goals has, since at least 1980, become an important and contentious issue. A growing number of citizens began to question whether trade agreements should address social or environmental issues. Others argued that trade agreements had the effect of undermining domestic regulations such as environmental, food safety or consumer regulations. Still others argued that trade agreements did not sufficiently regulate the behavior of global corporations. Although relatively few Americans have taken to the streets to protest trade laws, polling data reveal that Americans agree with some of the principal concerns of the protesters. They want trade agreements to raise environmental and labor standards in the nations with which Americans trade.

Most Agree That Trade Fuels Economic Growth

On the other hand, most people agree with analysts who argue that trade helps fuel American growth (PIPA, 1999). (For example, 93% of economists surveyed agreed that tariffs and import quotas usually reduce general economic welfare (Alston, Kearl and Vaughan, 1992).) Economists argue that the US must trade if it is to maintain its high standard of living. Autarky is not a practical option even for America's mighty and diversified economy. Although the US is blessed with navigable rivers, fertile soil, abundant resources, a hard-working populace, and a huge internal market, Americans must trade because they cannot efficiently or sufficiently produce all the goods and services that citizens desire. Moreover, there are some goods that Americans cannot produce at all. That is why America, from the beginning of its history, has signed trade agreements with other nations.

Building a National Consensus on Trade Policy Is a Difficult Balancing Act

For the last decade, Americans have not been able to find common ground on trade policy or on how to ensure that trade agreements such as those enforced by the WTO do not thwart the achievement of other important policy goals. After 1993, American business did not push for a new round of trade talks, as the global and the domestic economy prospered. But in recent months (early 2001), business has been much more active, as has George W. Bush's Administration, in trying to develop a new round of trade talks under the WTO. Business has become more eager as economic growth has slowed. Moreover, American business leaders seem to have learned the lessons of the 1999 Seattle protests. The members of the Business Roundtable, an organization of chief executive officers from America's largest, most prestigious companies, have noted, "we must first build a national consensus on trade policy… Building this consensus will…require the careful consideration of international labor and environmental issues…that cannot be ignored." The Roundtable concluded by noting that the problem is not whether these issues are trade policy issues; trade proponents and critics, it stressed, must find a strategy — a trade policy approach that allows negotiators to address these issues constructively (Business Roundtable, 2001). The Roundtable was essentially saying that we must find common ground and must acknowledge the relationship of trade policy to the achievement of other policy goals. The Roundtable was not alone. Other formal and informal business groups such as the National Association of Manufacturers, as well as environmental and labor groups, have tried to develop an inventory of ideas on how to pursue trade agreements while also promoting other important policy goals such as environmental protection or labor rights. Republican members of Congress responded publicly to these efforts with a warning that such efforts could compromise the President's strategy for trade liberalization. As of this writing, however, the US Trade Representative has not announced how it will resolve the relationship between trade and social/environmental policy goals within specific trade agreements, such as the WTO. Resolving these issues will undoubtedly be very difficult, so the WTO will probably remain a source of contention.

References

Aaronson, Susan. Trade and the American Dream: A Social History of Postwar Trade Policy. Lexington, KY: University Press of Kentucky, 1996.

Aaronson, Susan. Taking Trade to the Streets: The Lost History of Efforts to Shape Globalization. Ann Arbor: University of Michigan Press, 2001.

Alston, Richard M., J.R. Kearl, and Michael B. Vaughan. “Is There a Consensus among Economists in the 1990’s?” American Economic Review: Papers and Proceedings 82 (1992): 203-209.

Business Roundtable. “The Case for US Trade Leadership: The United States is Falling Behind.” Statement 2/9/2001. www.brt.org.

Charnovitz, Steve. “Environmental and Labour Standards in Trade.” World Economy 15 (1992).

GATT Secretariat. “Final Act Embodying the Results of the Uruguay Round of Multilateral Trade Negotiations.” December 15, 1993.

Jackson, John H. “The World Trade Organization, Dispute Settlement and Codes of Conduct.” In The New GATT: Implications for the United States, edited by Susan M. Collins and Barry P. Bosworth, 63-75. Washington: Brookings, 1994.

Keck, Margaret E. and Kathryn Sikkink. Activists beyond Borders: Advocacy Networks in International Politics. Ithaca: Cornell University Press, 1998.

Program on International Policy Attitudes. “Americans on Globalization.” Poll conducted October 21-October 29, 1999 with 18,126 adults. See www.pipa.org/OnlineReports/Globalization/executive_summary.html

US Tariff Commission. Operation of the Trade Agreements Program, Second Report, 1950.

Arthur Young

David R. Stead, University of York

Arthur Young (1741-1820) was widely regarded by his contemporaries as the leading agricultural writer of the time. Born in London, he was the youngest child of the Suffolk gentry landowners Anne and the Reverend Arthur Young. Young was educated at Lavenham Grammar School, and after abortive attempts to become a merchant and then an army officer, in 1763 he took a farm on his mother's estate at Bradfield, although he had little knowledge of farming. Nevertheless he conducted a variety of agricultural experiments and continued his early interest in writing by publishing his first major agricultural work, The Farmer's Letters, in 1767. Young's subsequent output was prolific. Most famous are his Tours of England, Ireland and France, which mixed travel diaries with facts, figures and critical commentary on farming practices. In 1784 he founded the periodical Annals of Agriculture, editing all forty-six published volumes and contributing a large proportion of their content. Young was somewhat controversially appointed Secretary of the Board of Agriculture (a state-sponsored body promoting improved farming standards) in 1793, a position he held until his death. He also wrote six of the Board's surveys of English counties.

Young was a vigorous advocate of agrarian improvements, especially enclosures and long leases, and his statistics and lively prose must have helped publicize and diffuse the innovations in farming practices that were taking place. He was consulted by agriculturists and politicians at home and abroad, including George Washington, and received numerous honors. His marriage to Martha Allen from 1765 was unhappy, though, with faults seemingly on both sides. The youngest of the couple's four children died in 1797, triggering the melancholia and religious fervor that characterized Young in his later years. His prodigious work rate slowed after about 1805 on account of deteriorating vision and, ultimately, blindness.

Some contemporary rivals, notably William Marshall, were fiercely critical of Young's abilities as a farmer and accurate observer; the judgment of historians remains divided. Young certainly never made a financial success of farming, but this was partly because he expended large sums on agricultural experiments and was frequently absent from his farm, writing or travelling. Allegations that Young's enquiries were based on alehouse gossip, or conducted too hastily, are perhaps not without some truth, but his sample survey investigative procedure undoubtedly represented a pioneering scientific approach to agricultural research. Ironically, historians' analysis of Young's facts and figures has produced results that do not always support his original conclusions. For example, enclosures turn out not to have been as important in increasing farm output as Young maintained.


Citation: Stead, David. “Arthur Young”. EH.Net Encyclopedia, edited by Robert Whaples. November 18, 2003. URL http://eh.net/encyclopedia/arthur-young/

The American Economy during World War II

Christopher J. Tassava

For the United States, World War II and the Great Depression constituted the most important economic events of the twentieth century. The war's effects were varied and far-reaching. The war decisively ended the depression itself. The federal government emerged from the war as a potent economic actor, able to regulate economic activity and to partially control the economy through spending and consumption. American industry was revitalized by the war, and many sectors were by 1945 either sharply oriented to defense production (for example, aerospace and electronics) or completely dependent on it (atomic energy). The organized labor movement, strengthened by the war beyond even its depression-era height, became a major counterbalance to both the government and private industry. The war's rapid scientific and technological changes continued and intensified trends begun during the Great Depression and created a permanent expectation of continued innovation on the part of many scientists, engineers, government officials and citizens. Similarly, the substantial increases in personal income and frequently, if not always, in quality of life during the war led many Americans to foresee permanent improvements to their material circumstances, even as others feared a postwar return of the depression. Finally, the war's global scale severely damaged every major economy in the world except for the United States, which thus enjoyed unprecedented economic and political power after 1945.

The Great Depression

The global conflict which was labeled World War II emerged from the Great Depression, an upheaval which destabilized governments, economies, and entire nations around the world. In Germany, for instance, the rise of Adolf Hitler and the Nazi party occurred at least partly because Hitler claimed to be able to transform a weakened Germany into a self-sufficient military and economic power which could control its own destiny in European and world affairs, even as liberal powers like the United States and Great Britain were buffeted by the depression.

In the United States, President Franklin Roosevelt promised, less dramatically, to enact a “New Deal” which would essentially reconstruct American capitalism and governance on a new basis. As it waxed and waned between 1933 and 1940, Roosevelt’s New Deal mitigated some effects of the Great Depression, but did not end the economic crisis. In 1939, when World War II erupted in Europe with Germany’s invasion of Poland, numerous economic indicators suggested that the United States was still deeply mired in the depression. For instance, after 1929 the American gross domestic product declined for four straight years, then slowly and haltingly climbed back to its 1929 level, which was finally exceeded again in 1936. (Watkins, 2002; Johnston and Williamson, 2004)

Unemployment was another measure of the depression’s impact. Between 1929 and 1939, the American unemployment rate averaged 13.3 percent (calculated from “Corrected BLS” figures in Darby, 1976, 8). In the summer of 1940, about 5.3 million Americans were still unemployed — far fewer than the 11.5 million who had been unemployed in 1932 (about thirty percent of the American workforce) but still a significant pool of unused labor and, often, suffering citizens. (Darby, 1976, 7. For somewhat different figures, see Table 3 below.)

In spite of these dismal statistics, the United States was, in other ways, reasonably well prepared for war. The wide array of New Deal programs and agencies which existed in 1939 meant that the federal government was markedly larger and more actively engaged in social and economic activities than it had been in 1929. Moreover, the New Deal had accustomed Americans to a national government which played a prominent role in national affairs and which, at least under Roosevelt’s leadership, often chose to lead, not follow, private enterprise and to use new capacities to plan and administer large-scale endeavors.

Preparedness and Conversion

As war spread throughout Europe and Asia between 1939 and 1941, nowhere was the federal government's leadership more important than in the realm of "preparedness" — the national project to ready the country for war by enlarging the military, strengthening certain allies such as Great Britain, and above all converting America's industrial base to produce armaments and other war materiel rather than civilian goods. "Conversion" was the key issue in American economic life in 1940-1942. In many industries, company executives resisted converting to military production because they did not want to lose consumer market share to competitors who did not convert. Conversion thus became a goal pursued by public officials and labor leaders. In 1940, Walter Reuther, a high-ranking officer in the United Auto Workers labor union, provided impetus for conversion by advocating that the major automakers convert to aircraft production. Though initially rejected by car-company executives and many federal officials, the Reuther Plan effectively called the public's attention to America's lagging preparedness for war. Still, the auto companies only fully converted to war production in 1942 and only began substantially contributing to aircraft production in 1943.

Even to contemporary observers, though, not all industries seemed to be lagging as badly as autos. Merchant shipbuilding mobilized early and effectively. The industry was overseen by the U.S. Maritime Commission (USMC), a New Deal agency established in 1936 to revive the moribund shipbuilding industry, which had been in a depression since 1921, and to ensure that American shipyards would be capable of meeting wartime demands. With the USMC supporting and funding the establishment and expansion of shipyards around the country, especially on the Gulf and Pacific coasts, merchant shipbuilding took off. The entire industry had produced only 71 ships between 1930 and 1936, but from 1938 to 1940, commission-sponsored shipyards turned out 106 ships, and then almost that many in 1941 alone (Fischer, 41). The industry's position in the vanguard of American preparedness grew from its strategic importance — ever more ships were needed to transport American goods to Great Britain and France, among other American allies — and from the Maritime Commission's ability to administer the industry through means as varied as construction contracts, shipyard inspectors, and raw goading of contractors by commission officials.

Many of the ships built in Maritime Commission shipyards carried American goods to the European allies as part of the “Lend-Lease” program, which was instituted in 1941 and provided another early indication that the United States could and would shoulder a heavy economic burden. By all accounts, Lend-Lease was crucial to enabling Great Britain and the Soviet Union to fight the Axis, not least before the United States formally entered the war in December 1941. (Though scholars are still assessing the impact of Lend-Lease on these two major allies, it is likely that both countries could have continued to wage war against Germany without American aid, which seems to have served largely to augment the British and Soviet armed forces and to have shortened the time necessary to retake the military offensive against Germany.) Between 1941 and 1945, the U.S. exported about $32.5 billion worth of goods through Lend-Lease, of which $13.8 billion went to Great Britain and $9.5 billion went to the Soviet Union (Milward, 71). The war dictated that aircraft, ships (and ship-repair services), military vehicles, and munitions would always rank among the quantitatively most important Lend-Lease goods, but food was also a major export to Britain (Milward, 72).

Pearl Harbor was an enormous spur to conversion. The formal declarations of war by the United States on Japan and Germany made plain, once and for all, that the American economy would now need to be transformed into what President Roosevelt had called “the Arsenal of Democracy” a full year before, in December 1940. From the perspective of federal officials in Washington, the first step toward wartime mobilization was the establishment of an effective administrative bureaucracy.

War Administration

From the beginning of preparedness in 1939 through the peak of war production in 1944, American leaders recognized that the stakes were too high to permit the war economy to grow in an unfettered, laissez-faire manner. American manufacturers, for instance, could not be trusted to stop producing consumer goods and to start producing materiel for the war effort. To organize the growing economy and to ensure that it produced the goods needed for war, the federal government spawned an array of mobilization agencies which not only often purchased goods (or arranged their purchase by the Army and Navy), but which in practice closely directed those goods’ manufacture and heavily influenced the operation of private companies and whole industries.

Though both the New Deal and mobilization for World War I served as models, the World War II mobilization bureaucracy assumed its own distinctive shape as the war economy expanded. Most importantly, American mobilization was markedly less centralized than mobilization in other belligerent nations. The war economies of Britain and Germany, for instance, were overseen by war councils which comprised military and civilian officials. In the United States, the Army and Navy were not incorporated into the civilian administrative apparatus, nor was a supreme body created to subsume military and civilian organizations and to direct the vast war economy.

Instead, the military services enjoyed almost-unchecked control over their enormous appetites for equipment and personnel. With respect to the economy, the services were largely able to curtail production destined for civilians (e.g., automobiles or many non-essential foods) and even for war-related but non-military purposes (e.g., textiles and clothing). In parallel to but never commensurate with the Army and Navy, a succession of top-level civilian mobilization agencies sought to influence Army and Navy procurement of manufactured goods like tanks, planes, and ships, raw materials like steel and aluminum, and even personnel. One way of gauging the scale of the increase in federal spending and the concomitant increase in military spending is through comparison with GDP, which itself rose sharply during the war. Table 1 shows the dramatic increases in GDP, federal spending, and military spending.

Table 1: Federal Spending and Military Spending during World War II

(dollar values in billions of constant 1940 dollars)

Year | Nominal GDP: total $ | % increase | Federal spending: total $ | % increase | % of GDP | Defense spending: total $ | % increase | % of GDP | % of federal spending
1940 | 101.4 | n/a | 9.47 | n/a | 9.34% | 1.66 | n/a | 1.64% | 17.53%
1941 | 120.67 | 19.00% | 13.00 | 37.28% | 10.77% | 6.13 | 269.28% | 5.08% | 47.15%
1942 | 139.06 | 15.24% | 30.18 | 132.15% | 21.70% | 22.05 | 259.71% | 15.86% | 73.06%
1943 | 136.44 | -1.88% | 63.57 | 110.64% | 46.59% | 43.98 | 99.46% | 32.23% | 69.18%
1944 | 174.84 | 28.14% | 72.62 | 14.24% | 41.54% | 62.95 | 43.13% | 36.00% | 86.68%
1945 | 173.52 | -0.75% | 72.11 | -0.70% | 41.56% | 64.53 | 2.51% | 37.19% | 89.49%

Sources: 1940 GDP figure from Louis Johnston and Samuel H. Williamson, "The Annual Real and Nominal GDP for the United States, 1789-Present," Economic History Services, March 2004, available at http://www.eh.net/hmit/gdp/ (accessed 27 July 2005). 1941-1945 GDP figures calculated using Bureau of Labor Statistics, "CPI Inflation Calculator," available at http://data.bls.gov/cgi-bin/cpicalc.pl. Federal and defense spending figures from Government Printing Office, "Budget of the United States Government: Historical Tables, Fiscal Year 2005," Table 6.1 (Composition of Outlays: 1940-2009) and Table 3.1 (Outlays by Superfunction and Function: 1940-2009).
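
The derived columns in Table 1 follow directly from the dollar totals. A minimal sketch of that arithmetic, using only the 1941 and 1942 totals given above, might look like the following; small differences from the published percentages reflect rounding.

```python
# Recomputing Table 1's derived columns from the dollar totals above
# (billions of constant 1940 dollars, as given in the table).

rows = {
    1941: {"gdp": 120.67, "federal": 13.00, "defense": 6.13},
    1942: {"gdp": 139.06, "federal": 30.18, "defense": 22.05},
}

def pct(part: float, whole: float) -> float:
    """Express part as a percentage of whole."""
    return 100.0 * part / whole

prev, cur = rows[1941], rows[1942]

print(f"GDP % increase, 1942:       {pct(cur['gdp'] - prev['gdp'], prev['gdp']):.2f}%")  # ~15.24%
print(f"Federal % of GDP, 1942:     {pct(cur['federal'], cur['gdp']):.2f}%")             # ~21.70%
print(f"Defense % of GDP, 1942:     {pct(cur['defense'], cur['gdp']):.2f}%")             # ~15.86%
print(f"Defense % of federal, 1942: {pct(cur['defense'], cur['federal']):.2f}%")         # ~73.06%
```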

Preparedness Agencies

To oversee this growth, President Roosevelt created a number of preparedness agencies beginning in 1939, including the Office for Emergency Management and its key sub-organization, the National Defense Advisory Commission; the Office of Production Management; and the Supply Priorities Allocation Board. None of these organizations was particularly successful at generating or controlling mobilization because all included two competing parties. On one hand, private-sector executives and managers had joined the federal mobilization bureaucracy but continued to emphasize corporate priorities such as profits and positioning in the marketplace. On the other hand, reform-minded civil servants, who were often holdovers from the New Deal, emphasized the state’s prerogatives with respect to mobilization and war making. As a result of this basic division in the mobilization bureaucracy, “the military largely remained free of mobilization agency control” (Koistinen, 502).

War Production Board

In January 1942, as part of another effort to mesh civilian and military needs, President Roosevelt established a new mobilization agency, the War Production Board, and placed it under the direction of Donald Nelson, a former Sears Roebuck executive. Nelson understood immediately that the staggeringly complex problem of administering the war economy could be reduced to one key issue: balancing the needs of civilians — especially the workers whose efforts sustained the economy — against the needs of the military — especially those of servicemen and women but also their military and civilian leaders.

Though neither Nelson nor other high-ranking civilians ever fully resolved this issue, Nelson did realize several key economic goals. First, in late 1942, Nelson successfully resolved the so-called "feasibility dispute," a conflict between civilian administrators and their military counterparts over the extent to which the American economy should be devoted to military needs during 1943 (and, by implication, in subsequent war years). Arguing that "all-out" production for war would harm America's long-term ability to continue to produce for war after 1943, Nelson convinced the military to scale back its Olympian demands. He thereby also established a precedent for planning war production so as to meet most military and some civilian needs. Second (and partially as a result of the feasibility dispute), the WPB in late 1942 created the "Controlled Materials Plan," which effectively allocated steel, aluminum, and copper to industrial users. The CMP remained in effect throughout the war, and it helped curtail conflict among the military services and between them and civilian agencies over the growing but still scarce supplies of those three key metals.

Office of War Mobilization

By late 1942 it was clear that Nelson and the WPB were unable to fully control the growing war economy and especially to wrangle with the Army and Navy over the necessity of continued civilian production. Accordingly, in May 1943 President Roosevelt created the Office of War Mobilization and in July put James Byrnes — a trusted advisor, a former U.S. Supreme Court justice, and the so-called "assistant president" — in charge. Though the WPB was not abolished, the OWM soon became the dominant mobilization body in Washington. Unlike Nelson, Byrnes was able to establish an accommodation with the military services over war production by "acting as an arbiter among contending forces in the WPB, settling disputes between the board and the armed services, and dealing with the multiple problems" of the War Manpower Commission, the agency charged with controlling civilian labor markets and with assuring a continuous supply of draftees to the military (Koistinen, 510).

Beneath the highest-level agencies like the WPB and the OWM, a vast array of other federal organizations administered everything from labor (the War Manpower Commission) to merchant shipbuilding (the Maritime Commission) and from prices (the Office of Price Administration) to food (the War Food Administration). Given the scale and scope of these agencies’ efforts, they did sometimes fail, and especially so when they carried with them the baggage of the New Deal. By the midpoint of America’s involvement in the war, for example, the Civilian Conservation Corps, the Works Progress Administration, and the Rural Electrification Administration — all prominent New Deal organizations which tried and failed to find a purpose in the mobilization bureaucracy — had been actually or virtually abolished.

Taxation

However, these agencies were often quite successful in achieving their respective, narrower aims. The Department of the Treasury, for instance, was remarkably successful at generating money to pay for the war, including the first broad-based income tax in American history and the famous "war bonds" sold to the public. Beginning in 1940, the government extended the income tax to virtually all Americans and began collecting the tax via the now-familiar method of continuous withholdings from paychecks (rather than lump-sum payments after the fact). The number of Americans required to pay federal taxes rose from 4 million in 1939 to 43 million in 1945. With such a large pool of taxpayers, the American government took in $45 billion in 1945, an enormous increase over the $8.7 billion collected in 1941 but still far short of the $83 billion spent on the war in 1945. Over that same period, federal tax revenue grew from about 8 percent of GDP to more than 20 percent. Americans who earned as little as $500 per year paid income tax at a 23 percent rate, while those who earned more than $1 million per year paid a 94 percent rate. The average income tax rate peaked in 1944 at 20.9 percent ("Fact Sheet: Taxes").

War Bonds

All told, taxes provided about $136.8 billion of the war's total cost of $304 billion (Kennedy, 625). To cover the other $167.2 billion, the Treasury Department also expanded its bond program, creating the famous "war bonds" hawked by celebrities and purchased in vast numbers and enormous values by Americans. The first war bond was purchased by President Roosevelt on May 1, 1941 ("Introduction to Savings Bonds"). Though the bonds returned only 2.9 percent annual interest after a 10-year maturity, they nonetheless served as a valuable source of revenue for the federal government and an extremely important investment for many Americans. Bonds served as a way for citizens to make an economic contribution to the war effort, but because interest on them accumulated more slowly than consumer prices rose, they could not fully preserve the value of income which could not be readily spent during the war. By the time war-bond sales ended in 1946, 85 million Americans had purchased more than $185 billion worth of the securities, often through automatic deductions from their paychecks ("Brief History of World War Two Advertising Campaigns: War Loans and Bonds"). Commercial institutions like banks also bought billions of dollars of bonds and other treasury paper, holding more than $24 billion at the war's end (Kennedy, 626).
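
The 2.9 percent figure is the implied compound annual yield. As a rough check, assuming the standard Series E terms of an $18.75 purchase price redeemable for $25 at the ten-year maturity (terms not spelled out in the text above), the yield works out to

\[
\left(\frac{\$25}{\$18.75}\right)^{1/10} - 1 \approx 0.029 = 2.9\% \text{ per year.}
\]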

Price Controls and the Standard of Living

Fiscal and financial matters were also addressed by other federal agencies. For instance, the Office of Price Administration used its "General Maximum Price Regulation" (also known as "General Max") to attempt to curtail inflation by maintaining prices at their March 1942 levels. In July, the National War Labor Board (NWLB; a successor to a New Deal-era body) limited wartime wage increases to about 15 percent, the factor by which the cost of living rose from January 1941 to May 1942. Neither "General Max" nor the wage-increase limit was entirely successful, though federal efforts did curtail inflation. Between April 1942 and June 1946, the period of the most stringent federal controls on inflation, the annual rate of inflation was just 3.5 percent; the annual rate had been 10.3 percent in the six months before April 1942 and it soared to 28.0 percent in the six months after June 1946 (Rockoff, "Price and Wage Controls in Four Wartime Periods," 382). With wages rising about 65 percent over the course of the war, this limited success in cutting the rate of inflation meant that many American civilians enjoyed a stable or even improving quality of life during the war (Kennedy, 641). Improvement in the standard of living was not ubiquitous, however. In some regions, such as rural areas in the Deep South, living standards stagnated or even declined, and according to some economists, the national living standard barely stayed level or even declined (Higgs, 1992).

Labor Unions

Labor unions and their members benefited especially. The NWLB’s “maintenance-of-membership” rule allowed unions to count all new employees as union members and to draw union dues from those new employees’ paychecks, so long as the unions themselves had already been recognized by the employer. Given that most new employment occurred in unionized workplaces, including plants funded by the federal government through defense spending, “the maintenance-of-membership ruling was a fabulous boon for organized labor,” for it required employers to accept unions and allowed unions to grow dramatically: organized labor expanded from 10.5 million members in 1941 to 14.75 million in 1945 (Blum, 140). By 1945, approximately 35.5 percent of the non-agricultural workforce was unionized, a record high.

The War Economy at High Water

Despite the almost-continual crises of the civilian war agencies, the American economy expanded at an unprecedented (and unduplicated) rate between 1941 and 1945. The gross national product of the U.S., as measured in constant dollars, grew from $88.6 billion in 1939 — while the country was still suffering from the depression — to $135 billion in 1944. War-related production skyrocketed from just two percent of GNP to 40 percent in 1943 (Milward, 63).

As Table 2 shows, output in many American manufacturing sectors increased spectacularly from 1939 to 1944, the height of war production in many industries.

Table 2: Indices of American Manufacturing Output (1939 = 100)

1940 1941 1942 1943 1944
Aircraft 245 630 1706 2842 2805
Munitions 140 423 2167 3803 2033
Shipbuilding 159 375 1091 1815 1710
Aluminum 126 189 318 561 474
Rubber 109 144 152 202 206
Steel 131 171 190 202 197

Source: Milward, 69.

Expansion of Employment

The wartime economic boom spurred and benefited from several important social trends. Foremost among these trends was the expansion of employment, which paralleled the expansion of industrial production. In 1944, unemployment dipped to 1.2 percent of the civilian labor force, a record low in American economic history and as near to “full employment” as is likely possible (Samuelson). Table 3 shows the overall employment and unemployment figures during the war period.

Table 3: Civilian Employment and Unemployment during World War II

(Numbers in thousands)

1940 1941 1942 1943 1944 1945
All Non-institutional Civilians 99,840 99,900 98,640 94,640 93,220 94,090
Civilian Labor Force Total 55,640 55,910 56,410 55,540 54,630 53,860
% of Population 55.7% 56% 57.2% 58.7% 58.6% 57.2%
Employed Total 47,520 50,350 53,750 54,470 53,960 52,820
% of Population 47.6% 50.4% 54.5% 57.6% 57.9% 56.1%
% of Labor Force 85.4% 90.1% 95.3% 98.1% 98.8% 98.1%
Unemployed Total 8,120 5,560 2,660 1,070 670 1,040
% of Population 8.1% 5.6% 2.7% 1.1% 0.7% 1.1%
% of Labor Force 14.6% 9.9% 4.7% 1.9% 1.2% 1.9%

Source: Bureau of Labor Statistics, “Employment status of the civilian noninstitutional population, 1940 to date.” Available at http://www.bls.gov/cps/cpsaat1.pdf.

It was not only those who had been unemployed during the depression who found jobs. So, too, did about 10.5 million Americans who either could not then have had jobs (the 3.25 million youths who came of age after Pearl Harbor) or who would not then have sought employment (3.5 million women, for instance). By 1945, the percentage of blacks who held war jobs — eight percent — approximated blacks' percentage in the American population — about ten percent (Kennedy, 775). Almost 19 million American women (including millions of black women) were working outside the home by 1945. Though most continued to hold traditional female occupations such as clerical and service jobs, two million women did labor in war industries (half in aerospace alone) (Kennedy, 778). Employment did not just increase on the industrial front. Civilian employment by the executive branch of the federal government — which included the war administration agencies — rose from about 830,000 in 1938 (already a historical peak) to 2.9 million in June 1945 (Nash, 220).

Population Shifts

Migration was another major socioeconomic trend. The 15 million Americans who joined the military — who, that is, became employees of the military — all moved to and between military bases; 11.25 million ended up overseas. Continuing the movements of the depression era, about 15 million civilian Americans made a major move (defined as changing their county of residence). African-Americans moved with particular alacrity and permanence: 700,000 left the South and 120,000 arrived in Los Angeles during 1943 alone. Migration was especially strong along rural-urban axes, especially to war-production centers around the country, and along an east-west axis (Kennedy, 747-748, 768). For instance, as Table 4 shows, the population of the three Pacific Coast states grew by a third between 1940 and 1945, permanently altering their demographics and economies.

Table 4: Population Growth in Washington, Oregon, and California, 1940-1945

(populations in millions)

1940 1941 1942 1943 1944 1945 % growth 1940-1945
Washington 1.7 1.8 1.9 2.1 2.1 2.3 35.3%
Oregon 1.1 1.1 1.1 1.2 1.3 1.3 18.2%
California 7.0 7.4 8.0 8.5 9.0 9.5 35.7%
Total 9.8 10.3 11.0 11.8 12.4 13.1 33.7%

Source: Nash, 222.

A third wartime socioeconomic trend was somewhat ironic, given the reduction in the supply of civilian goods: rapid increases in many Americans’ personal incomes. Driven by the federal government’s abilities to prevent price inflation and to subsidize high wages through war contracting and by the increase in the size and power of organized labor, incomes rose for virtually all Americans — whites and blacks, men and women, skilled and unskilled. Workers at the lower end of the spectrum gained the most: manufacturing workers enjoyed about a quarter more real income in 1945 than in 1940 (Kennedy, 641). These rising incomes were part of a wartime “great compression” of wages which equalized the distribution of incomes across the American population (Goldin and Margo, 1992). Again focusing on three war-boom states in the West, Table 5 shows that personal-income growth continued after the war, as well.

Table 5: Personal Income per Capita in Washington, Oregon, and California, 1940 and 1948

1940 1948 % growth
Washington $655 $929 42%
Oregon $648 $941 45%
California $835 $1,017 22%

Source: Nash, 221. Adjusted for inflation using Bureau of Labor Statistics, “CPI Inflation Calculator,” available at http://data.bls.gov/cgi-bin/cpicalc.pl

Despite the focus on military-related production in general and the impact of rationing in particular, spending in many civilian sectors of the economy rose even as the war consumed billions of dollars of output. Hollywood boomed as workers bought movie tickets rather than scarce clothes or unavailable cars. Americans placed more legal wagers in 1943 and 1944, and racetracks made more money than at any time before. In 1942, Americans spent $95 million on legal pharmaceuticals, $20 million more than in 1941. Department-store sales in November 1944 were greater than in any previous month in any year (Blum, 95-98). Black markets for rationed or luxury goods — from meat and chocolate to tires and gasoline — also boomed during the war.

Scientific and Technological Innovation

As observers during the war and ever since have recognized, scientific and technological innovations were a key aspect in the American war effort and an important economic factor in the Allies’ victory. While all of the major belligerents were able to tap their scientific and technological resources to develop weapons and other tools of war, the American experience was impressive in that scientific and technological change positively affected virtually every facet of the war economy.

The Manhattan Project

American techno-scientific innovations mattered most dramatically in "high-tech" sectors which were often hidden from public view by wartime secrecy. For instance, the Manhattan Project to create an atomic weapon was a direct and massive result of a stunning scientific breakthrough: the creation of a controlled nuclear chain reaction by a team of scientists at the University of Chicago in December 1942. Under the direction of the U.S. Army and several private contractors, scientists, engineers, and workers built a nationwide complex of laboratories and plants to manufacture atomic fuel and to fabricate atomic weapons. This network included laboratories at the University of Chicago and the University of California-Berkeley, the uranium-enrichment complex at Oak Ridge, Tennessee, the plutonium-production complex at Hanford, Washington, and the weapon-design lab at Los Alamos, New Mexico. The Manhattan Project climaxed in August 1945, when the United States dropped two atomic weapons on Hiroshima and Nagasaki, Japan; these attacks likely accelerated Japanese leaders' decision to seek peace with the United States. By that time, the Manhattan Project had become a colossal economic endeavor, costing approximately $2 billion and employing more than 100,000 people.

Though important and gigantic, the Manhattan Project was an anomaly in the broader war economy. Technological and scientific innovation also transformed less-sophisticated but still complex sectors such as aerospace or shipbuilding. The United States, as David Kennedy writes, “ultimately proved capable of some epochal scientific and technical breakthroughs, [but] innovated most characteristically and most tellingly in plant layout, production organization, economies of scale, and process engineering” (Kennedy, 648).

Aerospace

Aerospace provides one crucial example. American heavy bombers, like the B-29 Superfortress, were highly sophisticated weapons which could not have existed, much less contributed to the air war on Germany and Japan, without innovations such as bombsights, radar, and high-performance engines or advances in aeronautical engineering, metallurgy, and even factory organization. Encompassing hundreds of thousands of workers, four major factories, and $3 billion in government spending, the B-29 project required almost unprecedented organizational capabilities by the U.S. Army Air Forces, several major private contractors, and labor unions (Vander Meulen, 7). Overall, American aircraft production was the single largest sector of the war economy, costing $45 billion (almost a quarter of the $183 billion spent on war production), employing a staggering two million workers, and, most importantly, producing over 125,000 aircraft, which Table 6 describes in more detail.

Table 6: Production of Selected U.S. Military Aircraft (1941-1945)

Bombers 49,123
Fighters 63,933
Cargo 14,710
Total 127,766

Source: Air Force History Support Office

Shipbuilding

Shipbuilding offers a third example of innovation’s importance to the war economy. Allied strategy in World War II utterly depended on the movement of war materiel produced in the United States to the fighting fronts in Africa, Europe, and Asia. Between 1939 and 1945, the hundred merchant shipyards overseen by the U.S. Maritime Commission (USMC) produced 5,777 ships at a cost of about $13 billion (navy shipbuilding cost about $18 billion) (Lane, 8). Four key innovations facilitated this enormous wartime output. First, the commission itself allowed the federal government to direct the merchant shipbuilding industry. Second, the commission funded entrepreneurs, the industrialist Henry J. Kaiser chief among them, who had never before built ships and who were eager to use mass-production methods in the shipyards. These methods, including the substitution of welding for riveting and the addition of hundreds of thousands of women and minorities to the formerly all-white and all-male shipyard workforces, were a third crucial innovation. Last, the commission facilitated mass production by choosing to build many standardized vessels like the ugly, slow, and ubiquitous “Liberty” ship. By adapting well-known manufacturing techniques and emphasizing easily-made ships, merchant shipbuilding became a low-tech counterexample to the atomic-bomb project and the aerospace industry, yet also a sector which was spectacularly successful.

Reconversion and the War’s Long-term Effects

Reconversion from military to civilian production had been an issue as early as 1944, when WPB Chairman Nelson began pushing to scale back war production in favor of renewed civilian production. The military's opposition to Nelson had contributed to the accession of James Byrnes and the OWM to the paramount spot in the war-production bureaucracy. Meaningful planning for reconversion was postponed until 1944, and the actual process of reconversion only began in earnest in early 1945, accelerating through V-E Day in May and V-J Day in September.

The most obvious effect of reconversion was the shift away from military production and back to civilian production. As Table 7 shows, this shift — as measured by declines in overall federal spending and in military spending — was dramatic, but did not cause the postwar depression which many Americans dreaded. Rather, American GDP continued to grow after the war (albeit not as rapidly as it had during the war; compare Table 1). Defense spending, though far below its wartime peak, remained well above prewar levels, and this continued high level of defense spending contributed to the creation of the "military-industrial complex," the network of private companies, non-governmental organizations, universities, and federal agencies which collectively shaped American national defense policy and activity during the Cold War.

Table 7: Federal Spending and Military Spending after World War II

(dollar values in billions of constant 1945 dollars)

Year | Nominal GDP: total | % increase | Federal spending: total | % increase | % of GDP | Defense spending: total | % increase | % of GDP | % of federal spending
1945 | 223.10 | n/a | 92.71 | 1.50% | 41.90% | 82.97 | 4.80% | 37.50% | 89.50%
1946 | 222.30 | -0.36% | 55.23 | -40.40% | 24.80% | 42.68 | -48.60% | 19.20% | 77.30%
1947 | 244.20 | 8.97% | 34.5 | -37.50% | 14.80% | 12.81 | -70.00% | 5.50% | 37.10%
1948 | 269.20 | 9.29% | 29.76 | -13.70% | 11.60% | 9.11 | -28.90% | 3.50% | 30.60%
1949 | 267.30 | -0.71% | 38.84 | 30.50% | 14.30% | 13.15 | 44.40% | 4.80% | 33.90%
1950 | 293.80 | 9.02% | 42.56 | 9.60% | 15.60% | 13.72 | 4.40% | 5.00% | 32.20%

Sources: 1945 GDP figure from Louis Johnston and Samuel H. Williamson, "The Annual Real and Nominal GDP for the United States, 1789-Present," Economic History Services, March 2004, available at http://www.eh.net/hmit/gdp/ (accessed 27 July 2005). 1946-1950 GDP figures calculated using Bureau of Labor Statistics, "CPI Inflation Calculator," available at http://data.bls.gov/cgi-bin/cpicalc.pl. Federal and defense spending figures from Government Printing Office, "Budget of the United States Government: Historical Tables, Fiscal Year 2005," Table 6.1 (Composition of Outlays: 1940-2009) and Table 3.1 (Outlays by Superfunction and Function: 1940-2009).

Reconversion spurred the second major restructuring of the American workplace in five years, as returning servicemen flooded back into the workforce and many war workers left, either voluntarily or involuntarily. Many women, for instance, left the labor force beginning in 1944. In 1947, about a quarter of all American women worked outside the home, roughly the same share as in 1940 and far below the wartime peak of 36 percent in 1944 (Kennedy, 779).

G.I. Bill

Servicemen obtained numerous other economic benefits beyond their jobs, including educational assistance from the federal government and guaranteed mortgages and small-business loans via the Servicemen's Readjustment Act of 1944, better known as the "G.I. Bill." Former servicemen thus became a vast and advantaged class of citizens which demanded, among other goods, inexpensive, often suburban housing; vocational training and college educations; and private cars which had been unobtainable during the war (Kennedy, 786-787).

The U.S.’s Position at the End of the War

At a macroeconomic scale, the war not only decisively ended the Great Depression, but created the conditions for productive postwar collaboration between the federal government, private enterprise, and organized labor, the parties whose tripartite collaboration helped engender continued economic growth after the war. The U.S. emerged from the war not only physically unscathed but economically strengthened by wartime industrial expansion, which placed the United States at absolute and relative advantage over both its allies and its enemies.

Possessed of an economy which was larger and richer than any other in the world, American leaders determined to make the United States the center of the postwar world economy. American aid to Europe ($13 billion via the European Recovery Program (ERP), or “Marshall Plan,” 1947-1951) and Japan ($1.8 billion, 1946-1952) furthered this goal by tying the economic reconstruction of West Germany, France, Great Britain, and Japan to American import and export needs, among other factors. Even before the war ended, the Bretton Woods Conference in 1944 determined key aspects of international economic affairs by establishing standards for currency convertibility and creating institutions such as the International Monetary Fund and the precursor of the World Bank.

In brief, as economic historian Alan Milward writes, “the United States emerged in 1945 in an incomparably stronger position economically than in 1941 … By 1945 the foundations of the United States’ economic domination over the next quarter of a century had been secured … [This] may have been the most influential consequence of the Second World War for the post-war world” (Milward, 63).

Selected References

Adams, Michael C.C. The Best War Ever: America and World War II. Baltimore: Johns Hopkins University Press, 1994.

Anderson, Karen. Wartime Women: Sex Roles, Family Relations, and the Status of Women during World War II. Westport, CT: Greenwood Press, 1981.

Air Force History Support Office. “Army Air Forces Aircraft: A Definitive Moment.” U.S. Air Force, 1993. Available at http://www.airforcehistory.hq.af.mil/PopTopics/AAFaircraft.htm.

Blum, John Morton. V Was for Victory: Politics and American Culture during World War II. New York: Harcourt Brace, 1976.

Bordo, Michael. “The Gold Standard, Bretton Woods, and Other Monetary Regimes: An Historical Appraisal.” NBER Working Paper No. 4310. April 1993.

“Brief History of World War Two Advertising Campaigns.” Duke University Rare Book, Manuscript, and Special Collections, 1999. Available at http://scriptorium.lib.duke.edu/adaccess/wwad-history.html

Brody, David. “The New Deal and World War II.” In The New Deal, vol. 1, The National Level, edited by John Braeman, Robert Bremmer, and David Brody, 267-309. Columbus: Ohio State University Press, 1975.

Connery, Robert. The Navy and Industrial Mobilization in World War II. Princeton: Princeton University Press, 1951.

Darby, Michael R. “Three-and-a-Half Million U.S. Employees Have Been Mislaid: Or, an Explanation of Unemployment, 1934-1941.” Journal of Political Economy 84, no. 1 (February 1976): 1-16.

Field, Alexander J. “The Most Technologically Progressive Decade of the Century.” American Economic Review 93, no. 4 (September 2003): 1399-1414.

Field, Alexander J. “U.S. Productivity Growth in the Interwar Period and the 1990s.” (Paper presented at “Understanding the 1990s: the Long Run Perspective” conference, Duke University and the University of North Carolina, March 26-27, 2004) Available at www.unc.edu/depts/econ/seminars/Field.pdf.

Fischer, Gerald J. A Statistical Summary of Shipbuilding under the U.S. Maritime Commission during World War II. Washington, DC: Historical Reports of War Administration; United States Maritime Commission, no. 2, 1949.

Friedberg, Aaron. In the Shadow of the Garrison State. Princeton: Princeton University Press, 2000.

Gluck, Sherna Berger. Rosie the Riveter Revisited: Women, the War, and Social Change. Boston: Twayne Publishers, 1987.

Goldin, Claudia. “The Role of World War II in the Rise of Women’s Employment.” American Economic Review 81, no. 4 (September 1991): 741-56.

Goldin, Claudia and Robert A. Margo. “The Great Compression: Wage Structure in the United States at Mid-Century.” Quarterly Journal of Economics 107, no. 2 (February 1992): 1-34.

Harrison, Mark, editor. The Economics of World War II: Six Great Powers in International Comparison. Cambridge: Cambridge University Press, 1998.

Higgs, Robert. “Wartime Prosperity? A Reassessment of the U.S. Economy in the 1940s.” Journal of Economic History 52, no. 1 (March 1992): 41-60.

Holley, I.B. Buying Aircraft: Materiel Procurement for the Army Air Forces. Washington, DC: U.S. Government Printing Office, 1964.

Hooks, Gregory. Forging the Military-Industrial Complex: World War II’s Battle of the Potomac. Urbana: University of Illinois Press, 1991.

Janeway, Eliot. The Struggle for Survival: A Chronicle of Economic Mobilization in World War II. New Haven: Yale University Press, 1951.

Jeffries, John W. Wartime America: The World War II Home Front. Chicago: Ivan R. Dee, 1996.

Johnston, Louis and Samuel H. Williamson. “The Annual Real and Nominal GDP for the United States, 1789 – Present.” Available at Economic History Services, March 2004, URL: http://www.eh.net/hmit/gdp/; accessed 3 June 2005.

Kennedy, David M. Freedom from Fear: The American People in Depression and War, 1929-1945. New York: Oxford University Press, 1999.

Kryder, Daniel. Divided Arsenal: Race and the American State during World War II. New York: Cambridge University Press, 2000.

Lane, Frederic, with Blanche D. Coll, Gerald J. Fischer, and David B. Tyler. Ships for Victory: A History of Shipbuilding under the U.S. Maritime Commission in World War II. Baltimore: Johns Hopkins University Press, 1951; republished, 2001.

Koistinen, Paul A.C. Arsenal of World War II: The Political Economy of American Warfare, 1940-1945. Lawrence, KS: University Press of Kansas, 2004.

Lichtenstein, Nelson. Labor’s War at Home: The CIO in World War II. New York: Cambridge University Press, 1982.

Lingeman, Richard P. Don’t You Know There’s a War On? The American Home Front, 1941-1945. New York: G.P. Putnam’s Sons, 1970.

Milkman, Ruth. Gender at Work: The Dynamics of Job Segregation by Sex during World War II. Urbana: University of Illinois Press, 1987.

Milward, Alan S. War, Economy, and Society, 1939-1945. Berkeley: University of California Press, 1979.

Nash, Gerald D. The American West Transformed: The Impact of the Second World War. Lincoln: University of Nebraska Press, 1985.

Nelson, Donald M. Arsenal of Democracy: The Story of American War Production. New York: Harcourt Brace, 1946.

O’Neill, William L. A Democracy at War: America’s Fight at Home and Abroad in World War II. New York: Free Press, 1993.

Overy, Richard. How the Allies Won. New York: W.W. Norton, 1995.

Rockoff, Hugh. “The Response of the Giant Corporations to Wage and Price Control in World War II.” Journal of Economic History 41, no. 1 (March 1981): 123-28.

Rockoff, Hugh. “Price and Wage Controls in Four Wartime Periods.” Journal of Economic History 41, no. 2 (June 1981): 381-401.

Samuelson, Robert J., “Great Depression.” The Concise Encyclopedia of Economics. Indianapolis: Liberty Fund, Inc., ed. David R. Henderson, 2002. Available at http://www.econlib.org/library/Enc/GreatDepression.html

U.S. Department of the Treasury, “Fact Sheet: Taxes,” n. d. Available at http://www.treas.gov/education/fact-sheets/taxes/ustax.shtml

U.S. Department of the Treasury, “Introduction to Savings Bonds,” n.d. Available at http://www.treas.gov/offices/treasurer/savings-bonds.shtml

Vander Meulen, Jacob. Building the B-29. Washington, DC: Smithsonian Institution Press, 1995.

Watkins, Thayer. “The Recovery from the Depression of the 1930s.” 2002. Available at http://www2.sjsu.edu/faculty/watkins/recovery.htm

Citation: Tassava, Christopher. “The American Economy during World War II”. EH.Net Encyclopedia, edited by Robert Whaples. February 10, 2008. URL http://eh.net/encyclopedia/the-american-economy-during-world-war-ii/

U.S. Economy in World War I

Hugh Rockoff, Rutgers University

Although the United States was actively involved in World War I for only nineteen months, from April 1917 to November 1918, the mobilization of the economy was extraordinary. (See the chronology at the end for key dates.) Over four million Americans served in the armed forces, and the U.S. economy turned out a vast supply of raw materials and munitions. The war in Europe, of course, began long before the United States entered. On June 28, 1914, in Sarajevo, Gavrilo Princip, a young Serbian revolutionary, shot and killed the Austrian Archduke Franz Ferdinand and his wife Sophie. A few months later the great powers of Europe were at war.

Many Europeans entered the war thinking that victory would come easily. Few had the understanding shown by a 26-year-old conservative Member of Parliament, Winston Churchill, in 1901. “I have frequently been astonished to hear with what composure and how glibly Members, and even Ministers, talk of a European War.” He went on to point out that in the past European wars had been fought by small professional armies, but that in the future huge populations would be involved, and he predicted that a European war would end “in the ruin of the vanquished and the scarcely less fatal commercial dislocation and exhaustion of the conquerors.”[1]

Reasons for U.S. Entry into the War

Once the war began, however, it became clear that Churchill was right. By the time the United States entered the war Americans knew that the price of victory would be high. What, then, impelled the United States to enter? What role did economic forces play? One factor was simply that Americans generally – some ethnic minorities were exceptions – felt stronger ties to Britain and France than to Germany and Austria. By 1917 it was clear that Britain and France were nearing exhaustion, and there was considerable sentiment in the United States for saving our traditional allies.

The insistence of the United States on her trading rights was also important. Soon after the war began Britain, France, and their allies set up a naval blockade of Germany and Austria. Even food was contraband. The Wilson Administration complained bitterly that the blockade violated international law. U.S. firms took to using European neutrals, such as Sweden, as intermediaries. Surely, the Americans argued, international law protected the right of one neutral to trade with another. Britain and France responded by extending the blockade to include the Baltic neutrals. The situation was similar to the difficulties the United States experienced during the Napoleonic wars, which drove the United States into a quasi-war against France, and to war against Britain.

Ultimately, however, it was not the conventional surface vessels used by Britain and France to enforce their blockade that enraged American opinion, but rather the submarines used by Germany. When the British (who provided most of the blockading ships) intercepted an American ship, the ship was escorted into a British port, the crew was well treated, and there was a chance of damage payments if it turned out that the interception was a mistake. The situation was very different when the Germans turned to submarine warfare. German submarines attacked without warning, and passengers had little chance to save themselves. To many Americans this was a brutal violation of the laws of war. The Germans felt they had to use submarines because their surface fleet was too small to defeat the British navy, let alone establish an effective counter-blockade.

The first submarine attack to inflame American opinion was the sinking of the Lusitania in May 1915. The Lusitania had left New York carrying passengers and freight, including war goods. When the ship was sunk, over 1,150 passengers were lost, including 115 Americans. In the months that followed, further sinkings brought more angry warnings from President Wilson. For a time the Germans gave way and agreed to warn American ships before sinking them and to save their passengers. In February 1917, however, the Germans renewed unrestricted submarine warfare in an attempt to starve Britain into submission. The loss of several U.S. ships was a key factor in President Wilson’s decision to break diplomatic relations with Germany and to seek a declaration of war.

U.S. Entry into the War and the Costs of Lost Trade

From a crude dollars-and-cents point of view it is hard to justify the war based on the trade lost to the United States. U.S. exports to Europe rose from $1.479 billion in 1913 to $4.062 billion in 1917. Suppose that the United States had stayed out of the war, and that as a result all trade with Europe had been cut off. Suppose further that the resources that would have been used to produce exports for Europe were able to produce only half as much value when reallocated to other purposes, such as producing goods for the domestic market or exports for non-European countries. Then the loss of output in 1917 would have been $2.031 billion per year. This was about 3.7 percent of GNP in 1917, and only about 6.3 percent of the total U.S. cost of the war.[2]
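The arithmetic behind this back-of-the-envelope estimate can be reproduced directly; a minimal sketch using the export figure above, the 1917 GNP figure from Table 1 below, and the roughly $32 billion total cost of the war discussed later in the article:

```python
# Back-of-the-envelope cost of losing all U.S. trade with Europe in 1917,
# following the assumptions in the text: resources diverted from European
# exports produce only half their former value elsewhere.
exports_to_europe_1917 = 4.062   # billions of dollars (from the text)
gnp_1917 = 55.1                  # billions of dollars (Table 1, row 7)
total_war_cost = 32.0            # billions of dollars (Clark's estimate, discussed below)

lost_output = 0.5 * exports_to_europe_1917
print(f"Lost output:             ${lost_output:.3f} billion")           # $2.031 billion
print(f"Share of 1917 GNP:       {lost_output / gnp_1917:.1%}")         # ~3.7%
print(f"Share of total war cost: {lost_output / total_war_cost:.1%}")   # ~6.3%
```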

On March 21, 1918, the Germans launched a massive offensive on the Somme battlefield and successfully broke through the Allied lines. In May and early June, after U.S. entry into the war, the Germans followed up with fresh attacks that brought them within fifty miles of Paris. Although a small number of Americans participated, it was mainly the old war: the Germans against the British and the French. The arrival of large numbers of Americans, however, rapidly changed the course of the war. The turning point was the Second Battle of the Marne, fought between July 18 and August 6. The Allies, bolstered by significant numbers of Americans, halted the German offensive.

The initiative now passed to the Allies. They drove the Germans back in a series of attacks in which American troops played an increasingly important role. The first distinctively American offensive was the battle of the St. Mihiel Salient, fought from September 12 to September 16, 1918; over half a million U.S. troops participated. The last major offensive of the war, the Meuse-Argonne offensive, was launched on September 26, with British, French, and American forces attacking the Germans on a broad front. The Germans now realized that their military situation was deteriorating rapidly and that they would have to agree to an end to the fighting. The Armistice occurred on November 11, 1918 – at the eleventh hour of the eleventh day of the eleventh month.

Mobilizing the Economy

The first and most important mobilization decision was the size of the army. When the United States entered the war, the army stood at 200,000, hardly enough to have a decisive impact in Europe. However, on May 18, 1917 a draft was imposed and the numbers were increased rapidly. Initially, the expectation was that the United States would mobilize an army of one million. The number, however, would go much higher. Overall some 4,791,172 Americans would serve in World War I. Some 2,084,000 would reach France, and 1,390,000 would see active combat.

Once the size of the Army had been determined, the demands on the economy became obvious, although the means to satisfy them did not: food and clothing, guns and ammunition, places to train, and the means of transport. The Navy also had to be expanded to protect American shipping and the troop transports. Contracts immediately began flowing from the Army and Navy to the private sector. The result, of course, was a rapid increase in federal spending from $477 million in 1916 to a peak of $8,450 million in 1918. (See Table 1 below for this and other data on the war effort.) The latter figure amounted to over 12 percent of GNP, and that amount excludes spending by other wartime agencies and spending by allies, much of which was financed by U.S. loans.

Table 1
Selected Economic Variables, 1916-1920
1916 1917 1918 1919 1920
1. Industrial production (1916 =100) 100 132 139 137 108
2. Revenues of the federal government (millions of dollars) $930 2,373 4,388 5,889 6,110
3. Expenditures of the federal government (millions of dollars) $1,333 7,316 15,585 12,425 5,710
4. Army and Navy spending (millions of dollars) $477 3,383 8,580 6,685 2,063
5. Stock of money, M2 (billions of dollars) $20.7 24.3 26.2 30.7 35.1
6. GNP deflator (1916 =100) 100 120 141 160 185
7. Gross National Product (GNP) (billions of dollars) $46.0 55.1 69.7 77.2 87.2
8. Real GNP (billions of 1916 dollars) $46.0 46.0 49.6 48.1 47.1
9. Average annual earnings per full-time manufacturing employee (1916 dollars) $751 748 802 813 828
10. Total labor force (millions) 40.1 41.5 44.0 42.3 41.5
11. Military personnel (millions) .174 .835 2.968 1.266 .353
Sources by row:

1. Miron and Romer (1990, table 2).

2-3. U.S. Bureau of the Census (1975), series Y352 and Y457.

4. U.S. Bureau of the Census (1975), series Y458 and Y459. The estimates are the average for fiscal year t and fiscal year t+1.

5. Friedman and Schwartz (1970, table 1, June dates).

6-8. Balke and Gordon (1989, table 10, pp. 84-85).The original series were in 1982 dollars.

9. U.S. Bureau of the Census (1975), series D740.

10-11. Kendrick (1961, table A-VI, p. 306; table A-X, p. 312).

Although the Army would number in the millions, raising these numbers did not prove to be an unmanageable burden for the U.S. economy. The total labor force rose from about 40 million in 1916 to 44 million in 1918. This increase allowed the United States to field a large military while still increasing the labor force in the nonfarm private sector from 27.8 million in 1916 to 28.6 million in 1918. Real wages rose in the industrial sector during the war, perhaps by six or seven percent, and this increase, combined with the ease of finding work, was sufficient to draw many additional workers into the labor force.[3] Many of the men drafted into the armed forces were leaving school and would have been entering the labor force for the first time in any case. The farm labor force did drop slightly from 10.5 million in 1916 to 10.3 million workers in 1918, but farming included many low-productivity workers and farm output on the whole was sustained. Indeed, the all-important category of food grains showed strong increases in 1918 and 1919.

Figure 1 shows production of steel ingots and “total industrial production” – an index of steel, copper, rubber, petroleum, and so on – monthly from January 1914 through 1920.[4] It is evident that the United States built up its capacity to turn out these basic raw materials during the years of U.S. neutrality, when Britain and France were buying its supplies and the United States was beginning its own tentative buildup. The United States then simply maintained the output of these materials during the years of active U.S. involvement and concentrated on turning these materials into munitions.[5]

Figure 1

Steel Ingots and Total Industrial Production, 1914-1920

Prices on the New York Stock Exchange, shown in Figure 2, provide some insight into what investors thought about the strength of the economy during the war era. The upper line shows the Standard and Poor’s/Cowles Commission Index. The lower line shows the “real” price of stocks – the nominal index divided by the consumer price index. When the war broke out the New York Stock Exchange was closed to prevent panic selling, so there are no exchange prices for this period, although a lively “curb market” did develop. After the market reopened it rose as investors realized that the United States would profit as a neutral. The market then began a long slide that started when tensions between the United States and Germany rose at the end of 1916 and continued after the United States entered the war. A second, less pronounced rise began in the spring of 1918 when an Allied victory began to seem possible. The increase continued and gathered momentum after the Armistice. In real terms, however, as shown by the lower line in the figure, the rise in the stock market was not sufficient to offset the rise in consumer prices. At times one hears that war is good for the stock market, but the figures for World War I, like the figures for other wars, tell a more complex story.[6]

Figure 2

The Stock Market, 1913-1920

Table 2 shows the amounts of some of the key munitions produced during the war. During and after the war critics complained that the mobilization was too slow. American troops, for example, often went into battle with French artillery – clear evidence, the critics implied, of incompetence somewhere in the supply chain. It does take time, however, to convert existing factories or build new ones and to work out the details of the production and distribution process. The last column of Table 2 shows peak monthly production, usually October 1918, at an annual rate. It is obvious that by the end of the war the United States was beginning to achieve the “production miracle” that occurred in World War II. When Franklin Roosevelt called for 50,000 planes in World War II, his demand was seen as an astounding exercise in bravado. Yet when we look at the last column of the table we see that the United States was hitting this level of production for Liberty engines by the end of World War I. There were efforts during the war to coordinate Allied production. To some extent this was tried – the United States produced much of the smokeless powder used by the Allies – but it was always clear that the United States wanted its own army equipped with its own munitions.

Table 2
Production of Selected Munitions in World War I
Munition Total Production Peak monthly production at an annual rate
Rifles 3,550,000 3,252,000
Machine guns 226,557 420,000
Artillery units 3,077 4,920
Smokeless powder (pounds) 632,504,000 n.a.
Toxic Gas (tons) 10,817 32,712
De Haviland-4 bombers 3,227 13,200
Liberty airplane engines 13,574 46,200
Source: Ayres (1919, passim)

Financing the War

Where did the money come from to buy all these munitions? Then as now, there were, the experts agreed, three basic ways to raise the money: (1) raising taxes, (2) borrowing from the public, and (3) printing money. In the Civil War the government had simply printed the famous greenbacks. In World War I it was possible to “print money” in a more roundabout way. The government could sell a bond to the newly created Federal Reserve. The Federal Reserve would pay for it by creating a deposit account for the government, which the government could then draw upon to pay its expenses. If the government first sold the bond to the general public and the Federal Reserve then bought it in the open market, the process of money creation would be even more roundabout. In the end the result would be much the same as if the government had simply printed greenbacks: the government would be paying for the war with newly created money. The experts gave little consideration to printing money. The reason may have been that the gold standard was sacrosanct. A financial policy that would cause inflation and drive the United States off the gold standard was not to be taken seriously. Some economists may have known the history of the greenbacks of the Civil War and the inflation they had caused.

The real choice appeared to be between raising taxes and borrowing from the public. Most economists of the World War I era believed that raising taxes was best. Here they were following a tradition that stretched back to Adam Smith who argued that it was necessary to raise taxes in order to communicate the true cost of war to the public. During the war Oliver Morton Sprague, one of the leading economists of the day, offered another reason for avoiding borrowing. It was unfair, Sprague argued, to draft men into the armed forces and then expect them to come home and pay higher taxes to fund the interest and principal on war bonds. Most men of affairs, however, thought that some balance would have to be struck between taxes and borrowing. Treasury Secretary William Gibbs McAdoo thought that financing about 50 percent from taxes and 50 percent from bonds would be about right. Financing more from taxes, especially progressive taxes, would frighten the wealthier classes and undermine their support for the war.

In October 1917 Congress responded to the call for higher taxes with the War Revenue Act. This act increased the personal and corporate income tax rates and established new excise, excess-profits, and luxury taxes. The tax rate for an income of $10,000 with four exemptions (about $140,000 in 2003 dollars) went from 1.2 percent in 1916 to 7.8 percent. For incomes of $1,000,000 the rate went from 10.3 percent in 1916 to 70.3 percent in 1918. These increases in taxes, together with the increase in nominal income, raised revenues from $930 million in 1916 to $4,388 million in 1918. Federal expenditures, however, increased from $1,333 million in 1916 to $15,585 million in 1918. A huge gap had opened up that would have to be closed by borrowing.
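A minimal sketch of the gap implied by these figures, using only the 1918 numbers quoted above:

```python
# The revenue-expenditure gap opened up by the war, in millions of dollars (from the text).
revenues_1918 = 4_388
expenditures_1918 = 15_585

gap_1918 = expenditures_1918 - revenues_1918
print(f"1918 gap to be closed by borrowing: ${gap_1918:,} million")   # $11,197 million
print(f"Share of 1918 spending covered by revenues: "
      f"{revenues_1918 / expenditures_1918:.0%}")                     # ~28%
```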

Short-term borrowing was undertaken as a stopgap. To reduce the pressure on the Treasury and the danger of a surge in short-term rates, however, it was necessary to issue long-term bonds, so the Treasury created the famous Liberty Bonds. The first issue was a thirty-year bond bearing a 3.5 percent coupon, callable after fifteen years. There were three subsequent issues of Liberty Bonds, and one of shorter-term Victory Bonds after the Armistice. In all, the sale of these bonds raised over $20 billion for the war effort.

In order to strengthen the market for Liberty Bonds, Secretary McAdoo launched a series of nationwide campaigns. Huge rallies were held at which famous actors, such as Charlie Chaplin, urged the crowds to buy Liberty Bonds. The government also enlisted famous artists to draw posters urging people to purchase the bonds. One of these posters, now widely sought by collectors, is shown below.

But Mother Had Done Nothing Wrong, Had She, Daddy?

Louis Raemaekers. After a Zeppelin Raid in London: “But Mother Had Done Nothing Wrong, Had She, Daddy?” Prevent this in New York: Invest in Liberty Bonds. 19″ x 12.” From the Rutgers University Library Collection of Liberty Bond Posters.

Although the campaigns may have improved the morale of both the armed forces and the people at home, how much the campaigns contributed to expanding the market for the bonds is an open question. The bonds were tax-exempt – the exact degree of exemption varied from issue to issue – and this undoubtedly made them attractive to investors in high tax brackets. Indeed, the Treasury was criticized for imposing high marginal taxes with one hand, and then creating a loophole with the other. The Federal Reserve also bought many of the bonds, creating new money. Some of this new “high-powered money” augmented the reserves of the commercial banks, which allowed them to buy bonds or to finance their purchase by private citizens. Thus, directly or indirectly, a good deal of the support for the bond market was the result of money creation rather than savings by the general public.

Table 3 provides a rough breakdown of the means used to finance the war. Of the total cost of the war, about 22 percent was financed by taxes and from 20 to 25 percent by printing money, which meant that from 53 to 58 percent was financed through the bond issues.

Table 3
Financing World War I, March 1917-May 1919
Source of finance                 Billions of Dollars   Percent (M2)   Percent (M4)
Taxation and nontax receipts              7.3                22             22
Borrowing from the public                24.0                58             53
Direct money creation                     1.6                 5              5
Indirect money creation (M2)              4.8                15              –
Indirect money creation (M4)              6.6                 –             20
Total cost of the war                    32.9               100            100
Note: Direct money creation is the increase in the stock of high-powered money net of the increase in monetary gold. Indirect money creation is the increase in monetary liabilities not matched by the increase in high-powered money.

Source: Friedman and Schwartz (1963, 221)
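The percentage columns of Table 3 can be reproduced from the dollar figures. The sketch below assumes, as the note and the surrounding discussion suggest, that the indirectly money-financed portion of the bond sales is netted out of “borrowing from the public” in the percentage columns:

```python
# Reproducing the percentage columns of Table 3 (figures in billions of dollars).
# Assumption: the indirect money creation rows are the money-financed part of the
# bond sales, so they are netted out of "borrowing from the public" in the
# percentage columns. The shares then sum to 100 under either money measure.
total_cost = 32.9
taxes = 7.3
borrowing_gross = 24.0
direct_money = 1.6
indirect_money = {"M2": 4.8, "M4": 6.6}

for measure, indirect in indirect_money.items():
    shares = {
        "Taxation and nontax receipts": taxes,
        "Borrowing from the public (net of indirect money creation)": borrowing_gross - indirect,
        "Direct money creation": direct_money,
        f"Indirect money creation ({measure})": indirect,
    }
    print(measure)
    for source, amount in shares.items():
        print(f"  {source}: {amount / total_cost:.0%}")
    # M2 columns: 22%, 58%, 5%, 15%; M4 columns: 22%, 53%, 5%, 20%
```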

Heavy reliance on the Federal Reserve meant, of course, that the stock of money increased rapidly. As shown in Table 1, the stock of money rose from $20.7 billion in 1916 to $35.1 billion in 1920, an increase of about 70 percent. The price level (GNP deflator) increased 85 percent over the same period.

The Government’s Role in Mobilization

Once the contracts for munitions were issued and the money began flowing, the government might have relied on the price system to allocate resources. This was the policy followed during the Civil War. For a number of reasons, however, the government attempted to manage the allocation of resources from Washington. For one thing, the Wilson administration, reflecting the Progressive wing of the Democratic Party, was suspicious of the market, doubting its ability to work quickly and efficiently and to protect the average person against profiteering. Another factor was simply that the European belligerents had adopted wide-ranging economic controls and it made sense for the United States, a latecomer, to follow suit.

A wide variety of agencies were created to control the economy during the mobilization. A look at four of the most important – (1) the Food Administration, (2) the Fuel Administration, (3) the Railroad Administration, and (4) the War Industries Board – will suggest the extent to which the United States turned away from its traditional reliance on the market. Unfortunately, space precludes a review of many of the other agencies such as the War Shipping Board, which built noncombatant ships, the War Labor Board, which attempted to settle labor disputes, and the New Issues Committee, which vetted private issues of stocks and bonds.

Food Administration

The Food Administration was created by the Lever Food and Fuel Control Act in August 1917. Herbert Hoover, who had already won international fame as a relief administrator in China and Europe, was appointed to head it. The mission of the Food Administration was to stimulate the production of food and to assure a fair distribution, at a fair price, among American civilians, the armed forces, and the Allies. The Food Administration did not attempt to set maximum prices at retail or (with the exception of sugar) to ration food. The Act itself set what was then a high minimum price for wheat – the key grain in international markets – at the farm gate, although the price would eventually go higher. The markups of processors and distributors were controlled by licensing them and threatening to take their licenses away if they did not cooperate. The Food Administration then attempted to control prices and quantities at retail through calls for voluntary cooperation. Millers were encouraged to tie the sale of wheat flour to the sale of less desirable flours – corn meal, potato flour, and so on – thus making a virtue out of a practice that would have been regarded as a disreputable evasion of formal price ceilings. Bakers were encouraged to bake “Victory bread,” which included a wheat-flour substitute. Finally, Hoover urged Americans to curtail their consumption of the most valuable foodstuffs: there were, for example, Meatless Mondays and Wheatless Wednesdays.

Fuel Administration

The Fuel Administration was created under the same Act as the Food Administration. Harry Garfield, the son of President James Garfield and the president of Williams College, was appointed to head it. Its main problem was controlling the price and distribution of bituminous coal. In the winter of 1918 a variety of factors combined to cause a severe coal shortage that forced school and factory closures. The Fuel Administration set the price of coal at the mines and the margins of dealers, mediated disputes in the coalfields, and worked with the Railroad Administration (described below) to reduce long hauls of coal.

Railroad Administration

The Wilson Administration nationalized the railroads and put them under the control of the Railroad Administration in December 1917, in response to severe congestion in the railway network that was holding up the movement of war goods and coal. Wilson’s energetic Secretary of the Treasury (and son-in-law), William Gibbs McAdoo, was appointed to head it. The railroads would remain under government control for the next 26 months. There has been considerable controversy over how well the system worked under federal control. Defenders of the takeover point out that the congestion was relieved and that policies that increased standardization and eliminated unnecessary competition were put in place. Critics of the takeover point to the large deficit that was incurred, nearly $1.7 billion, and to the deterioration of the capital stock of the industry. William J. Cunningham’s (1921) two papers in the Quarterly Journal of Economics, although written shortly after the event, still provide one of the most detailed and fair-minded treatments of the Railroad Administration.

War Industries Board

The most important federal agency, at least in terms of the scope of its mission, was the War Industries Board. The Board was established in July 1917. Its purpose was nothing less than to assure the full mobilization of the nation’s resources for winning the war. Initially the Board relied on persuasion to make its orders effective, but rising criticism of the pace of mobilization, and the problems with coal and transport in the winter of 1918, led to a strengthening of its role. In March 1918 the Board was reorganized, and Wilson placed Bernard Baruch, a Wall Street investor, in charge. Baruch installed a “priorities system” to determine the order in which contracts could be filled by manufacturers. Contracts rated AA by the War Industries Board had to be filled before contracts rated A, and so on. Although much hailed at the time, this system proved inadequate when tried in World War II. The War Industries Board also set prices of industrial products such as iron and steel, coke, rubber, and so on. This was handled by the Board’s independent Price Fixing Committee.

It is tempting to look at these experiments for clues on how the economy would perform under various forms of economic control. It is important, however, to keep in mind that these were very brief experiments. When the war ended in November 1918 most of the agencies immediately wound up their activities. Only the Railroad Administration and the War Shipping Board continued to operate. The War Industries Board, for example, was in operation only for a total of sixteen months; Bernard Baruch’s tenure was only eight months. Obviously only limited conclusions can be drawn from these experiments.

Costs of the War

The human and economic costs of the war were substantial. The death rate was high: 48,909 members of the armed forces died in battle, and 63,523 died from disease. Many of those who died from disease, perhaps 40,000, died from pneumonia during the influenza-pneumonia epidemic that hit at the end of the war. Some 230,074 members of the armed forces suffered nonfatal wounds.

John Maurice Clark provided what is still the most detailed and thoughtful estimate of the cost of the war: a total of about $32 billion. Clark tried to estimate what an economist would call the resource cost of the war. For that reason he included actual federal government spending on the Army and Navy, the amount of foreign obligations, and the difference between what government employees could earn in the private sector and what they actually earned. He excluded interest on the national debt and part of the subsidies paid to the Railroad Administration because he thought they were transfers. His estimate of $32 billion amounted to about 46 percent of GNP in 1918.

Long-run Economic Consequences

The war left a number of economic legacies. Here we will briefly describe three of the most important.

The finances of the federal government were permanently altered by the war. It is true that the tax increases put in place during the war were scaled back during the 1920s by successive Republican administrations. Tax rates, however, had to remain higher than before the war to pay for higher expenditures due mainly to interest on the national debt and veterans benefits.

The international economic position of the United States was permanently altered by the war. The United States had long been a debtor country. The United States emerged from the war, however, as a net creditor. The turnaround was dramatic. In 1914 U.S. investments abroad amounted to $5.0 billion, while total foreign investments in the United States amounted to $7.2 billion. Americans were net debtors to the tune of $2.2 billion. By 1919 U.S. investments abroad had risen to $9.7 billion, while total foreign investments in the United States had fallen to $3.3 billion: Americans were net creditors to the tune of $6.4 billion.[7] Before the war the center of the world capital market was London, and the Bank of England was the world’s most important financial institution; after the war leadership shifted to New York, and the role of the Federal Reserve was enhanced.

The management of the war economy by a phalanx of federal agencies persuaded many Americans that the government could play an important positive role in the economy. This lesson remained dormant during the 1920s, but came to life when the United States faced the Great Depression. Both the general idea of fighting the Depression by creating federal agencies and many of the specific agencies and programs reflected precedents set in World War I. The Civilian Conservation Corps, a Depression era agency that hired young men to work on conservation projects, for example, attempted to achieve the benefits of military training in a civilian setting. The National Industrial Recovery Act reflected ideas Bernard Baruch developed at the War Industries Board, and the Agricultural Adjustment Administration hearkened back to the Food Administration. Ideas about the appropriate role of the federal government in the economy, in other words, may have been the most important economic legacy of American involvement in World War I.

Chronology of World War I
1914
June Archduke Franz Ferdinand is shot.
August Beginning of the war.
1915
May Sinking of the Lusitania. War talk begins in the United States.
1916
June National Defense Act expands the Army
1917
February Germany renews unrestricted submarine warfare.
U.S.S. Housatonic sunk.
U.S. breaks diplomatic relations with Germany
April U.S. declares war.
May Selective Service Act
June First Liberty Loan
July War Industries Board
August Lever Food and Fuel Control Act
October War Revenue Act
November Second Liberty Loan
December Railroads are nationalized.
1918
January Maximum prices for steel
March Bernard Baruch heads the War Industries Board
Germans begin massive offensive on the western front
May Third Liberty Loan
First independent action by the American Expeditionary Force
June Battle of Belleau Wood – the first sizable U.S. action
July Second Battle of the Marne – German offensive stopped
September 900,000 Americans in the Battle of Meuse-Argonne
October Fourth Liberty Loan
November Armistice

References and Suggestions for Further Reading

Ayres, Leonard P. The War with Germany: A Statistical Summary. Washington DC: Government Printing Office. 1919.

Balke, Nathan S. and Robert J. Gordon. “The Estimation of Prewar Gross National Product: Methodology and New Evidence.” Journal of Political Economy 97, no. 1 (1989): 38-92.

Clark, John Maurice. “The Basis of War-Time Collectivism.” American Economic Review 7 (1917): 772-790.

Clark, John Maurice. The Cost of the World War to the American People. New Haven: Yale University Press for the Carnegie Endowment for International Peace, 1931.

Cuff, Robert D. The War Industries Board: Business-Government Relations during World War I. Baltimore: Johns Hopkins University Press, 1973.

Cunningham, William J. “The Railroads under Government Operation. I: The Period to the Close of 1918.” Quarterly Journal of Economics 35, no. 2 (1921): 288-340. “II: From January 1, 1919, to March 1, 1920.” Quarterly Journal of Economics 36, no. 1. (1921): 30-71.

Friedman, Milton, and Anna J. Schwartz. A Monetary History of the United States, 1867-1960. Princeton: Princeton University Press, 1963.

Friedman, Milton, and Anna J. Schwartz. Monetary Statistics of the United States: Estimates, Sources, and Methods. New York: Columbia University Press, 1970.

Gilbert, Martin. The First World War: A Complete History. New York: Henry Holt, 1994.

Kendrick, John W. Productivity Trends in the United States. Princeton: Princeton University Press, 1961.

Koistinen, Paul A. C. Mobilizing for Modern War: The Political Economy of American Warfare, 1865-1919. Lawrence, KS: University Press of Kansas, 1997.

Miron, Jeffrey A. and Christina D. Romer. “A New Monthly Index of Industrial Production, 1884-1940.” Journal of Economic History 50, no. 2 (1990): 321-37.

Rockoff, Hugh. Drastic Measures: A History of Wage and Price Controls in the United States. New York: Cambridge University Press, 1984.

Rockoff, Hugh. “Until It’s Over, Over There: The U.S. Economy in World War I.” National Bureau of Economic Research, Working Paper w10580, 2004.

U.S. Bureau of the Census. Historical Statistics of the United States, Colonial Times to 1970, Bicentennial Edition. Washington, DC: Government Printing Office, 1975.


Endnotes

[1] Quoted in Gilbert (1994, 3).

[2] U.S. exports to Europe are from U.S. Bureau of the Census (1975), series U324.

[3] Real wages in manufacturing were computed by dividing “Hourly Earnings in Manufacturing Industries” by the Consumer Price Index (U.S. Bureau of the Census 1975, series D766 and E135).

[4] Steel ingots are from the National Bureau of Economic Research, macrohistory database, series m01135a, www.nber.org. Total Industrial Production is from Miron and Romer (1990), Table 2.

[5] The sharp and temporary drop in the winter of 1918 was due to a shortage of coal.

[6] The chart shows end-of-month values of the S&P/Cowles Composite Stock Index, from Global Financial Data: http://www.globalfinancialdata.com/. To get real prices I divided this index by monthly values of the United States Consumer Price Index for all items. This is available as series 04128 in the National Bureau of Economic Research Macro-Data Base available at http://www.nber.org/.

[7] U.S. investments abroad (U.S. Bureau of the Census 1975, series U26); Foreign investments in the U.S. (U.S.

Citation: Rockoff, Hugh. “US Economy in World War I”. EH.Net Encyclopedia, edited by Robert Whaples. February 10, 2008. URL http://eh.net/encyclopedia/u-s-economy-in-world-war-i/

The Works Progress Administration

Jim Couch, University of North Alabama

Introduction: The Great Depression and the New Deal

The Great Depression stands as an event unique in American history due to both its length and severity. With the unprecedented economic collapse, the nation faced “an emergency more serious than war” (Higgs 1987, p. 159). The Depression was a time of tremendous suffering and, at its worst, left a quarter of the workforce unemployed. In only eleven years of the twentieth century did the annual unemployment rate average double digits, and ten of those years came during the Great Depression.

A confused and hungry nation turned to the government for assistance. With the inauguration of Franklin Delano Roosevelt on March 4, 1933, the federal government’s response to the economic emergency was swift and massive. The explosion of legislation — which came to be collectively called the New Deal — was designed, at least in theory, to bring a halt to the human suffering and put the country on the road to recovery. The president promised relief, recovery and reform.

Although the Civil Works Administration (CWA), the Civilian Conservation Corps (CCC), and the National Recovery Administration (NRA) were all begun two years before it, the Works Progress Administration (WPA) became the best known of the administration’s alphabet agencies. Indeed, for many the works program is synonymous with the entire New Deal. Roosevelt devoted more energy and more money to the WPA than to any other agency (Charles 1963, p. 220). The WPA would provide public employment for people who were out of work. The administration felt that the creation of make-work jobs for the jobless would restore the human spirit, but dignity came with a price tag — an appropriation of almost $5 billion was requested. From 1936 to 1939, expenditures totaled nearly $7 billion. Annual figures are given in Table 1.

Table 1
WPA Expenditures

Year Expenditure
1936 $1,295,459,010
1937 $1,879,493,595
1938 $1,463,694,664
1939 $2,125,009,386

Source: Office of Government Reports, Statistical Section, Federal Loans and Expenditures, Vol. II, Washington, D.C., 1940.
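As a quick check, the annual figures in Table 1 do sum to just under the “nearly $7 billion” cited above; a minimal sketch:

```python
# Summing the annual WPA expenditures in Table 1 (dollar figures from the table).
expenditures = {
    1936: 1_295_459_010,
    1937: 1_879_493_595,
    1938: 1_463_694_664,
    1939: 2_125_009_386,
}
total = sum(expenditures.values())
print(f"Total, 1936-1939: ${total:,}")                  # $6,763,656,655
print(f"In billions:      ${total / 1e9:.2f} billion")  # about $6.76 billion
```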

WPA Projects and Procedures

The legislation that created the WPA, the Emergency Relief Appropriation Act of 1935, sailed through the House, passing by a margin of 329 to 78, but bogged down in the Senate, where a vocal minority argued against the measure. Despite the opposition, the legislation passed in April of 1935.

Harry Hopkins headed the new organization. Hopkins became, “after Roosevelt, the most powerful man in the administration” (Reading 1972, pp. 16-17). All WPA administrators, whether assigned to Washington or to the agency’s state and local district offices, were employees of the federal government and all WPA workers’ wages were distributed directly from the U.S. Treasury (Kurzman 1974, p. 107). The WPA required the states to provide some of their own resources to finance projects but a specific match was never stipulated — a fact that would later become a source of contentious debate.

The agency prepared a “Guide to Eligibility of WPA Projects” which was made available to the states. Nineteen types of potentially fundable activities were described ranging from malaria control to recreational programs to street building (MacMahon, Millet and Ogden 1941, p. 308).

Hopkins and Roosevelt proposed that WPA compensation be based on a “security wage,” an hourly amount greater than the typical relief payment but less than that offered by private employers. The administration contended that it was misleading to evaluate the program’s effects solely on the basis of wages paid — more important were earnings through continuous employment. Thus, wages were reported in monthly amounts.

Wages differed widely from region to region and from state to state. Senator Richard Russell of Georgia explained, “In the State of Tennessee the man who is working with a pick and shovel at 18 cents an hour is limited to $26 a month, and he must work 144 hours to earn $26. Whereas the man who is working in Pennsylvania has to work only 30 hours to earn $94, out of funds which are being paid out of the common Treasury of the United States” (U.S. House of Representatives 1938, p. 913). Recurring complaints of this nature led to adjustments in the wage rate that narrowed regional differentials to more closely reflect the cost of living in each state.
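Taken at face value, the figures in Senator Russell’s statement imply a large gap in hourly rates; a minimal sketch of the arithmetic, with the dollar and hour figures exactly as quoted above:

```python
# Implied hourly rates from Senator Russell's figures, taken exactly as quoted.
tennessee_monthly, tennessee_hours = 26, 144
pennsylvania_monthly, pennsylvania_hours = 94, 30

print(f"Tennessee:    ${tennessee_monthly / tennessee_hours:.2f} per hour")        # ~$0.18
print(f"Pennsylvania: ${pennsylvania_monthly / pennsylvania_hours:.2f} per hour")  # ~$3.13
```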

Robert Margo argues that federal relief programs like the WPA may have exacerbated the nation’s unemployment problem. He presents evidence indicating that the long-term unemployed on work relief were “not very responsive to improved economic conditions” while the long-term unemployed not on work relief “were responsive to improved economic conditions” (Margo 1991:339). Many workers were afraid of the instability associated with a private-sector job and were reluctant to leave the WPA. As Margo explains, “By providing an alternative to the employment search (which many WPA workers perceived, correctly or not, to be fruitless), work relief may have lessened downward pressure on nominal wages” (p. 340). This lack of adjustment of the wage rate may have slowed the economy’s return to full employment.

The number of persons employed by the WPA is given in Figure 1. Gavin Wright points out that “WPA employment reached peaks in the fall of election years” (Wright 1974, p. 35).

Figure 1 – Number of Persons Employed by WPA
1936-1941
(in thousands)

Source: Wright (1974), p. 35.

The work the WPA completed stands as a tribute to the agency. Almost every community in America has a park, bridge, or school constructed by the agency. As of 1940, the WPA had erected 4,383 new school buildings and made repairs and additions to over 30,000 others. More than 130 hospitals were built and improvements made to another 1,670 (MacMahon, Millet and Ogden 1941, pp. 4-5). Nearly 9,000 miles of new storm drains and sanitary sewer lines were laid. The agency also engaged in conservation work, planting 24 million trees (Office of Government Reports 1939, p. 80). The WPA built or refurbished over 2,500 sports stadiums around the country with a combined seating capacity of 6,000,000 (MacMahon, Millet and Ogden 1941, pp. 6-7).

Addressing the nation’s transportation needs accounted for much of the WPA’s work. By the summer of 1938, 280,000 miles of roads and streets had been paved or repaired and 29,000 bridges had been constructed. Over 150 new airfields and 280 miles of runway were built (Office of Government Reports 1939, p. 79).

Because Harry Hopkins believed that the work provided by the WPA should match the skills of the unemployed, artists were employed to paint murals in public buildings, sculptors created park and battlefield monuments, and actors and musicians were paid to perform. These white-collar programs did not escape criticism and the term “boondoggling” was added to the English language to describe government projects of dubious merit.

Work relief for the needy was the putative purpose of the WPA. Testifying before the Senate Special Committee to Investigate Unemployment and Relief in 1938, Corrington Gill — Assistant to WPA administrator Harry Hopkins — asserted, “Our regional representatives . . . are intimately in touch with the States and the conditions in the States” (U.S. Senate 1938, p. 51).

The Roosevelt administration, of course, asserted that dollars were allocated to where need was the greatest. Some observers at the time, however, were suspicious of what truly motivated the New Dealers.

The Distribution of WPA Funds

In 1939, Georgia Senator Richard Russell in a speech before the Senate compared the appropriation his state received with those received by Wisconsin, a state with similar land area and population but with far more resources. He was interrupted by Senator Ellison Smith of South Carolina:

Mr. Smith: I have been interested in the analysis the Senator has made of the wealth and population which showed that Wisconsin and Georgia were so nearly equal in those features. I wondered if the Senator had any way of ascertaining the political aspect in those two States.
Mr. Russell: Mr. President, I had not intended to touch upon any political aspects of this question.
Mr. Smith: Why not? The Senator knows that is all there is to it (U.S. House of Representatives 1939, p. 926).

Scholars have begun to examine the New Deal in this light, producing evidence supporting Senator Smith’s assertion that political considerations helped to shape the WPA.

An empirical analysis of New Deal spending priorities was made possible by Leonard Arrington’s discovery in 1969 of documents prepared by an obscure federal government agency. “Prepared in late 1939 by the Office of Government Reports for the use of Franklin Roosevelt during the presidential campaign of 1940, the 50-page reports — one for each state — give precise information on the activities and achievements of the various New Deal economic agencies” (Arrington 1969, p. 311).

Using this data source to investigate the relationship between WPA appropriations and state economic conditions makes the administration’s claim that dollars were allocated to where need was greatest difficult to support. Instead, the evidence points to a political motivation behind the pattern of expenditures. While the legislation that funded the WPA sailed through the House, a vocal minority in the Senate argued against the measure — a fact the Roosevelt administration did not forget. “Hopkins devoted considerable attention to his relations with Congress, particularly from 1935 on. While he continually ignored several Congressmen because of their obnoxious ways of opposing the New Deal . . . he gave special attention to Senators . . . who supported the work relief program” (Charles 1963, p. 162).

Empirical results confirm Charles’ assertion: WPA dollars flowed to states whose Senators voted in favor of the 1935 legislation. Conversely, if a state’s Senators opposed the measure, significantly fewer work relief dollars were distributed to that state.

The matching funds required to ‘buy’ WPA appropriations were not uniform from state to state. The Roosevelt administration argued that discretion to determine the size of the match would enable it to get projects to the states with fewer resources. Senator Richard Russell of Georgia complained in a Senate speech, “the poorer states . . . are required to contribute more from their poverty toward sponsored projects than the wealthier states are” (Congressional Record 1939, p. 921). Senator Russell entered sponsor contributions from each state into the Congressional Record. The data support the Senator’s assertion: citizens in relatively poor Tennessee were forced to contribute 33.2 percent toward WPA projects while citizens in relatively rich Pennsylvania were required to contribute only 10.1 percent toward their projects. Empirical evidence supports the notion that by lowering the size of the match, Roosevelt was able to put more projects into states that were important to him politically (Couch and Smith, 2000).

The WPA represented the largest program of its kind in American history. It put much-needed dollars into the hands of jobless millions and in the process contributed to the nation’s infrastructure. Despite this record of achievement, serious questions remain concerning whether the program’s money, projects, and jobs were distributed to those who were truly in need or instead to further the political aspirations of the Roosevelt administration.

References

Arrington, Leonard J. “The New Deal in the West: A Preliminary Statistical Inquiry.” Pacific Historical Review 38 (1969): 311-16.

Charles, Searle F. Minister of Relief: Harry Hopkins and the Depression. Syracuse: Syracuse University Press, 1963.

Congressional Record (1934 and 1939) Washington: Government Printing Office.

Couch, Jim F. and Lewis Smith. “New Deal Programs and State Matching Funds: Reconstruction or Re-election?” Unpublished manuscript, University of North Alabama, 2000.

Higgs, Robert. Crisis and Leviathan: Critical Episodes in the Growth of American Government, New York: Oxford University Press, 1987.

Kurzman, Paul A. Harry Hopkins and the New Deal. Fairlawn, NJ: R.E. Burdick, 1974.

MacMahon, Arthur, John Millett and Gladys Ogden. The Administration of Federal Work Relief. Chicago: Public Administration Service, 1941.

Margo, Robert A. “The Microeconomics of Depression Unemployment.” Journal of Economic History 51, no. 2 (1991): 333-41.

Office of Government Reports. Activities of Selected Federal Agencies, Report No. 7. Washington, DC: Office of Government Reports, 1939.

Reading, Don C. “A Statistical Analysis of New Deal Economic Programs in the Forty-eight States, 1933-1939.” Ph.D. dissertation, Utah State University, 1972.

US House of Representatives. Congressional Directory, Washington, DC: US Government Printing Office, 1938 and 1939.

US Senate, Special Committee to Investigate Unemployment and Relief (‘Byrnes Committee’). Unemployment and Relief: Hearings before a Special Committee to Investigate Unemployment and Relief, Washington, DC: US Government Printing Office, 1938.

Wright, Gavin. “The Political Economy of New Deal Spending: An Econometric Analysis.” Review of Economics and Statistics 56, no. 1 (1974): 30-38.

Suggestions for further reading:

Heckelman, Jac C., John C. Moorhouse, and Robert M. Whaples, editors. Public Choice Interpretations of American Economic History. Boston: Kluwer Academic Publishers, 2000.

Couch, Jim F. and William F. Shughart. The Political Economy of the New Deal. Edward Elgar, 1998.

Citation: Couch, Jim. “Works Progress Administration”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/the-works-progress-administration/

Workers’ Compensation

Price V. Fishback, University of Arizona

Workers’ compensation was one of the first social insurance programs adopted broadly throughout the United States. Under workers’ compensation, employers are required to make provisions such that workers who are injured in accidents arising “out of or in the course of employment” receive medical treatment and payments of up to roughly two-thirds of their wages to replace lost income. Workers’ compensation laws were originally adopted by most states between 1911 and 1920, and the programs continue to be administered by state governments today.

The Origins of Workers’ Compensation

The System of Negligence Liability

Prior to the introduction of workers’ compensation, workers injured on the job were compensated under a system of negligence liability. If the worker could show that the accident was caused by the employer’s negligence, the worker was entitled to full compensation for the damage he experienced. The employer was considered negligent if he failed to exercise due care. Even if the worker could show the employer had been negligent, he still might not receive any compensation if the employer could rely on one of three defenses: assumption of risk, the fellow-servant defense, or contributory negligence. The employer was no longer liable, respectively, if the worker knew of the danger and assumed the risk of the danger when accepting the job, if a fellow worker caused the accident, or if the worker’s own negligence contributed to the accident.

Compensation to Accident Victims before Workers’ Compensation

These common law rules were the ultimate guide for judges who adjudicated disputes between employers and workers. As in many civil situations, the vast majority of accident cases were settled long before they ever went to trial. The employers or their insurers typically offered settlements to injured workers. Various studies done by state employer liability commissions suggest that a substantial number of workers received no compensation for their accidents, which might have been expected if the employer’s negligence was not a cause of the accident. In samples of fatal accidents, about half the families of fatal accident victims received some payments for the loss of their loved ones. For those who received payments, the average payment was around one year’s income. There were a few cases where the accident victims and their families received substantial payments, but there were far more cases where no payment was made.

To some extent workers received compensation for accepting accident risk in the form of higher wages for more dangerous jobs. Workers had relatively limited opportunities to use these higher wages to buy accident or life insurance or to pay premiums into benefit societies. As a result, many workers and families tried to rely on savings to sustain them in the event of an accident. The problem they faced was that it took quite a few years to save enough to cover the losses of an accident, and those unlucky enough to have an accident early on quickly exhausted these savings. The system of negligence liability, although without the three defenses, continues to determine the nature of accident compensation in the railroad industry.

Adoption of Workers’ Compensation Laws in the 1910s

In the late nineteenth century a number of European countries began to introduce workers’ compensation in a variety of forms. Among industrial countries the U.S. was relatively slow to adopt the changes. The federal government generally considered social insurance and welfare to be the purview of the states, so workers’ compensation was adopted at the state and not the federal level. The federal government did lead the way in covering its own workforce under workers’ compensation with legislation passed in 1908. As shown in Table 1, the vast majority of states adopted workers’ compensation laws between 1911 and 1920. The last state to adopt was Mississippi in 1948.

Table 1
Characteristics of Workers’ Compensation Laws in the United States, 1910-1930

State | Year State Legislature First Enacted a General Law (a) | Method of Insurance (b)
New York 1910 (1913) (a) Competitive State (c)
California 1911 Competitive State (c)
Illinois 1911 Private
Kansas 1911 Private
Massachusetts 1911 Private
New Hampshire 1911 Private
New Jersey 1911 Private
Ohio 1911 State
Washington 1911 State
Wisconsin 1911 Private
Maryland (f) 1912 Competitive State
Michigan 1912 Competitive State
Rhode Island 1912 Private
Arizona 1913 Competitive State
Connecticut 1913 Private
Iowa 1913 Private
Minnesota 1913 Private
Nebraska 1913 Private
Nevada 1913 State
New York (f) 1913 Competitive State
Oregon 1913 State
Texas 1913 Private
West Virginia 1913 State
Louisiana 1914 Private
Kentucky 1914 (1916) (a) Private
Colorado 1915 Competitive State
Indiana 1915 Private
Maine 1915 Private
Montana (f) 1915 Competitive State
Oklahoma 1915 Private
Pennsylvania 1915 Competitive State
Vermont 1915 Private
Wyoming 1915 State
Delaware 1917 Private
Idaho 1917 Competitive State
New Mexico 1917 Private
South Dakota 1917 Private
Utah 1917 Competitive State
Virginia 1918 Private
Alabama 1919 Private
North Dakota 1919 State
Tennessee 1919 Private
Missouri 1919 (1926) (a) Private
Georgia 1920 Private
North Carolina 1929 Private
Florida 1935 Private
South Carolina 1935 Private
Arkansas 1939 Private
Mississippi 1948 Private

Source: Fishback and Kantor, 2000, pp. 103-4.

(a) Some general laws were enacted by legislatures but were declared unconstitutional. The years in which the law was permanently established are in parentheses. New York passed a compulsory law and an elective law in 1910, but the compulsory law was declared unconstitutional and the elective law saw little use. New York passed a compulsory law in 1913 after passing a constitutional amendment. The Kentucky law of 1914 was declared unconstitutional and was replaced by a law in 1916. The Missouri General Assembly passed a workers’ compensation law in 1919, but it failed to receive enough votes in a referendum in 1920. Another law passed in 1921 was defeated in a referendum in 1922, and an initiative on the ballot was again defeated in 1924. Missouri voters finally approved a workers’ compensation law in a 1926 referendum on a 1925 legislative act. Maryland (1902) and Montana (1909) passed earlier laws specific to miners that were declared unconstitutional.

(b) Competitive state insurance allowed employers to purchase their workers’ compensation insurance from either private insurance companies or the state. A monopoly state fund required employers to purchase their policies through the state’s fund. Most states also allowed firms to self-insure if they could meet certain financial solvency tests.

(c) California and New York established their competitive state funds in 1913.

(d) The initial laws in Ohio, Illinois, and California were elective. Ohio and California established compulsory laws in 1913; Illinois did so later.

(e) Illinois’ initial law was administered by the courts; the state switched to a commission in 1913.

(f) Employees had the option to collect compensation or sue for damages after an injury.

(g) Compulsory for the motor bus industry only.

(h) Compulsory for coal mining only.

Provisions of Workers’ Compensation Laws

The adoption of workers’ compensation led to substantial changes in the nature of workplace accident compensation. Compensation was no longer based on the worker showing that the employer was at fault, nor could compensation be denied because the worker’s own negligence contributed to the injury. An injured worker typically had to sustain an injury that lasted several days before he became eligible for wage replacement. Once eligible, he could expect to receive weekly payments of up to two-thirds of his wage while injured. These payments were often capped at a fixed amount per week, so high-wage workers sometimes received payments that replaced a smaller percentage of their lost earnings. The families of workers killed in fatal accidents typically received burial expenses and a weekly payment of up to two-thirds of the wage, often subject to caps on the weekly payments and limits on the total amounts paid out.
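
The interaction of the two-thirds replacement rate and the weekly cap can be made concrete with a short calculation. The sketch below is illustrative only; the wage and cap figures are hypothetical, since actual caps varied by state and year.

```python
# Illustrative only: hypothetical weekly wages and a hypothetical weekly cap.
# Weekly benefit = min(2/3 of the weekly wage, the statutory weekly cap),
# so the effective replacement rate falls for high-wage workers.

def weekly_benefit(weekly_wage, replacement_rate=2/3, weekly_cap=10.0):
    """Return the weekly indemnity payment under a capped replacement rule."""
    return min(replacement_rate * weekly_wage, weekly_cap)

for wage in (9.0, 15.0, 30.0):   # hypothetical weekly wages in dollars
    benefit = weekly_benefit(wage)
    print(f"wage ${wage:5.2f} -> benefit ${benefit:5.2f} "
          f"({benefit / wage:.0%} of the lost wage)")
```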

Gains to Workers from Workers’ Compensation Laws

Most workers appeared to benefit from the introduction of workers’ compensation. Comparisons of typical payments under negligence liability and under workers’ compensation suggest that a worker injured on the job was likely to receive more compensation under the new system. Partly this rise was due to the fact that all injured workers, not only those who could prove negligence, were eligible for compensation; partly it was due to higher average payments compared with the typical settlement under negligence liability. Studies of wages before and after the change show, however, that non-union workers’ wages were reduced by the introduction of workers’ compensation. In essence, non-union workers “bought” these improvements in their benefit levels. Even though workers may have paid for their benefits, they still seem to have been better off as a result. Many workers had faced problems in purchasing accident insurance at the turn of the century. Workers’ compensation left them better insured and allowed many of them to spend some of the savings they had set aside in case of an accident.

Employers and Insurers Also Favor Workers’ Compensation

Employers were also active in pressing for workers’ compensation legislation, for a variety of reasons. Some were troubled by the uncertainties associated with courts and juries applying negligence liability to accidents, and some large jury awards fueled these fears. Others were worried about state legislatures adopting legislation that would limit their defenses in liability suits. The negligence liability system had become an increasing source of friction between workers and employers. Moreover, employers were able to pass many of the costs of the new workers’ compensation system back to the workers in the form of lower wages. Finally, insurance companies favored the introduction of workers’ compensation as long as the states did not try to establish their own insurance funds. Under the negligence liability system, insurers had not been selling much accident insurance to workers because of information problems in identifying who would be good and bad risks. The switch to workers’ compensation put more of the impetus for insurance on employers, and insurers found that they could more effectively solve these information problems when selling insurance to employers. As a result, insurance companies saw a rise in their business of insuring workplace accidents.

In the final analysis, the adoption of workers’ compensation was popular legislation. It was supported by the major interest groups (employers, workers, and insurers), each of which anticipated gains from the legislation. Progressives and social reformers played some role in its adoption, but their efforts were not as important to its passage as often surmised, because so many interest groups supported the legislation.

Interest Groups Battle over Specific Provisions

On the other hand, the various interest groups fought, sometimes bitterly, over the specific details of the legislation, including the generosity of benefit levels and whether the state would sell workers’ compensation insurance to employers. These battles over the details at times slowed the passage of the legislation. Benefit levels tended to be higher in states with more workers in unionized industries but lower in states where dangerous industries predominated. Reformers played a larger role in shaping the details, pressing for higher benefits. In several states the insurance companies lost the battle over state insurance, most often where the insurance industry had a limited presence and reformers had a strong one. As seen in Table 1, several states established monopoly state insurance funds that prevented private companies from underwriting workers’ compensation insurance. Some other states established state insurance funds that competed with private insurers.

Trends in Workers’ Compensation over the Past Century

Changes in Occupational Coverage

Since its introduction, workers’ compensation has gone through several changes. More classes of workers have been covered over time. When workers’ compensation was first introduced, several types of employment were exempted, including agricultural workers, domestic servants, many railroad workers in interstate commerce, and, in some states, workers in nonhazardous employments. Further, workers hired by employers with fewer than 3 to 5 workers (the threshold varying by state) have typically been exempt from the law. As seen in Table 2, by 1940 employees accounting for 75 percent of wage and salary disbursements were covered by workers’ compensation laws. By the time Mississippi adopted its law in 1948, the share had risen to about 78 percent. Since then, declines in domestic service, railroading, and agricultural employment, along with expansions of workers’ compensation coverage, have raised payroll coverage to about 92 percent.

Growth in Expenditures on Workers’ Compensation

Since 1939, real expenditures on workers’ compensation programs (in 1996 dollars) have grown at an average annual rate of 4.8 percent. The growth has been caused in part by the expansions in the types of workers covered, as described above. Another source of growth has been expansion in the types of injuries and occupational diseases covered. Although workers’ compensation was originally established to insure workers against workplace accidents, the programs in most states were expanded to cover occupation-related diseases. Starting with California in 1915, states began expanding the coverage of workers’ compensation laws to include payments to workers disabled by occupational diseases. By 1939, 23 states covered at least some occupational diseases.1 As of July 1953 every state but Mississippi and Wyoming had at least some coverage for occupational diseases, and by the 1980s all states had some form of coverage. More recently, some states have begun to expand coverage to include compensation to persons suffering from work-related disabilities associated with psychological stress.
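
To put the 4.8 percent figure in cumulative terms, the short calculation below is a sketch of the compound-growth arithmetic only; it uses just the growth rate quoted above, and the exact dollar series in Table 2 depends on the endpoints and deflator used.

```python
import math

# Translate an average annual real growth rate into cumulative terms.
growth_rate = 0.048          # 4.8 percent per year, as cited in the text
years = 1995 - 1939          # span roughly covered by Table 2

doubling_time = math.log(2) / math.log(1 + growth_rate)
cumulative_factor = (1 + growth_rate) ** years

print(f"Doubling time: about {doubling_time:.1f} years")
print(f"Cumulative growth over {years} years: roughly {cumulative_factor:.0f}-fold")
```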

Increased Benefit Levels

Another contributor to the growth in workers’ compensation spending has been an increase in benefit levels. Benefit payments have risen even though workplace accident rates have declined since the beginning of the century. Workers’ compensation costs as a percentage of covered payroll (see Table 2) generally stayed around 1 percent until the late 1960s and early 1970s. Since then, these costs have risen along a strong upward trend, to nearly 2.5 percent in 1990. The rise in compensation costs in Table 2 was driven in part by increased payments for benefits and medical coverage, as well as the introduction of the Black Lung program for coal miners in 1969. The rise in benefits can be explained in part by a series of amendments to state laws in the 1970s that sharply increased the weekly maximums that could be paid.

Table 2
Long-Term Trends in Workers’ Compensation Coverage and Costs

Year | Share of wage and salary payments to workers covered by WC (percent) | WC benefits paid in 1996 dollars ($ millions) | Cost of WC programs as percent of covered payroll (a) | WC benefits as percent of covered payroll (a) | Medical and hospital payments as percent of wages and salaries covered by WC | Disability payments as percent of wages and salaries covered by WC | Survivor payments as percent of wages and salaries covered by WC
1940 73.6 2686 1.2 0.7 0.27 0.36 0.09
1941 na 2839 na na na na na
1942 na 2859 na na na na na
1943 na 2862 na na na na na
1944 na 3047 na na na na na
1945 63.0 3148 na na 0.17 0.33 0.06
1946 71.4 2997 0.9 0.5 0.18 0.31 0.06
1947 74.3 3000 na na 0.17 0.31 0.05
1948 77.5 3090 1.0 0.5 0.17 0.29 0.05
1949 76.4 3296 1.0 0.6 0.18 0.32 0.05
1950 77.2 3532 0.9 0.5 0.18 0.32 0.05
1951 76.8 3815 0.9 0.5 0.18 0.32 0.05
1952 76.3 4132 0.9 0.6 0.18 0.33 0.05
1953 77.3 4387 1.0 0.6 0.18 0.32 0.05
1954 77.7 4503 1.0 0.6 0.20 0.33 0.05
1955 79.4 4641 0.9 0.6 0.19 0.31 0.04
1956 79.5 4909 0.9 0.6 0.19 0.32 0.04
1957 79.4 5017 0.9 0.6 0.19 0.32 0.04
1958 79.8 5121 0.9 0.6 0.20 0.34 0.05
1959 80.7 5485 0.9 0.6 0.20 0.33 0.05
1960 80.9 5789 0.9 0.6 0.20 0.34 0.05
1961 81.0 6074 1.0 0.6 0.20 0.35 0.05
1962 80.9 6494 1.0 0.6 0.21 0.36 0.05
1963 81.0 6822 1.0 0.6 0.21 0.37 0.05
1964 80.9 7251 1.0 0.6 0.21 0.37 0.05
1965 80.7 7565 1.0 0.6 0.21 0.37 0.05
1966 80.6 8107 1.0 0.6 0.21 0.36 0.05
1967 80.1 8608 1.1 0.6 0.22 0.38 0.05
1968 80.0 8956 1.1 0.6 0.22 0.37 0.04
1969 80.3 9471 1.1 0.6 0.22 0.37 0.04
1970 80.4 10348 1.1 0.7 0.24 0.40 0.05
1971 80.7 11557 1.1 0.7 0.24 0.44 0.08
1972 80.6 12620 1.1 0.7 0.24 0.46 0.09
1973 82.3 15000 1.2 0.7 0.26 0.51 0.12
1974 83.2 15641 1.2 0.8 0.28 0.53 0.11
1975 84.1 16344 1.3 0.8 0.30 0.57 0.11
1976 84.3 17724 1.5 0.9 0.32 0.59 0.11
1977 84.1 18930 1.7 0.9 0.32 0.61 0.11
1978 83.4 20094 1.9 0.9 0.32 0.63 0.10
1979 84.1 22822 2.0 1.0 0.34 0.69 0.12
1980 82.8 23733 2.0 1.1 0.35 0.74 0.12
1981 82.6 24010 1.9 1.1 0.36 0.74 0.11
1982 82.0 24668 1.8 1.2 0.39 0.76 0.11
1983 82.4 25383 1.7 1.2 0.41 0.75 0.11
1984 82.4 27416 1.7 1.2 0.42 0.77 0.11
1985 81.9 30003 1.8 1.3 0.46 0.81 0.10
1986 82.3 32531 2.0 1.4 0.50 0.83 0.10
1987 82.0 35094 2.1 1.4 0.54 0.86 0.09
1988 81.8 38159 2.2 1.5 0.58 0.88 0.08
1989 81.8 41067 2.3 1.6 0.63 0.91 0.08
1990 89.0 44037 2.4 1.7 0.62 0.87 0.08
1991 90.3 46981 2.4 1.8 0.66 0.92 0.08
1992 90.4 49802 2.4 1.9 0.68 0.90 0.07
1993 90.7 48141 2.4 1.8 0.63 0.84 0.07
1994 91.0 46376 2.3 1.7 0.58 0.86 0.07
1995 91.0 44173 2.1 1.6 0.54 0.79 0.06

Sources: 1939-1967, Alfred M. Skolnik and Daniel N. Price, “Another Look at Workmen’s Compensation,” in U.S. Social Security Administration, Social Security Bulletin 33 (October 1970), pp. 3-25; 1968-1986, U.S. Social Security Administration, Social Security Bulletin, Annual Statistical Supplement, 1994, Table 9.B1, p. 333; 1992-1993, Jack Schmulowitz, “Workers’ Compensation: Coverage, Benefits, and Costs, 1992-93,” Social Security Bulletin 58 (Summer 1995), pp. 51-57. For 1987 through 1998, National Academy of Social Insurance, “Workers’ Compensation: Benefits, Coverage and Costs, 1997-1998 New Estimates.” The publication is available at the National Academy of Social Insurance website: http://www.nasi.org/.

(a) The workers’ compensation series on costs as a percentage of the covered payroll (pvf.b.18.10) contains some employer contributions to the Black Lung program, while the benefits series (pvf.b.18.11) does not include benefits associated with the Black Lung program.

Expenditures on Medical Care, Disability and Survivors

Over time, and particularly during the 1980s and early 1990s, rising medical expenditures have been a prime contributor to rising costs. Expenditures on medical and hospital benefits rose from less than 0.2 percent of the payroll to over 0.6 percent in the early 1990s. At that point employers and insurers began managing their health care costs more closely and slowed the growth of workers’ compensation medical costs during the 1990s. Similarly, the disability benefits paid to replace lost earnings have also risen sharply over time as reforms of workers’ compensation expanded the range of workplace injuries and diseases covered. Payments of replacement wages to disabled workers increased from 0.3 percent of wages and salaries covered by workers’ compensation to as high as 0.9 percent around 1990 (see Table 2). In contrast, the percentage of payroll spent on benefits to the survivors of fatal accident victims stayed relatively constant at below 0.1 percent from the 1940s through 1970 and again from the 1980s to the present (see Table 2). The upward surge in the percentage of payroll paid out to survivors between 1970 and 1973 was driven by the introduction of the federal Black Lung program. The impact of Black Lung was so dramatic because several years’ worth of survivors were all added to the system in the span of three years. Once the Black Lung program had stabilized, survivors’ benefits reached a steady state of about 0.1 percent of the payroll and have declined in the 1990s.

Declining Injury and Illness Rates

The general rise in workers’ compensation benefits as a share of payroll should not necessarily be considered a sign that workplaces have become more dangerous. Workers’ compensation has increasingly provided benefits for a wide range of injuries and diseases for which compensation would not have been awarded earlier in the century. Data on occupational injury and illness rates for all occupations show that the number of cases of injury and illness per 100 workers in the private sector has fallen by 32 percent since 1972, while the number of lost-workday cases has stayed roughly constant.

Trends in the Shares of Payments Made by Types of Insurers

Although the states establish the basic rules for compensation, employers can obtain insurance to cover their compensation responsibilities from several sources: private insurance carriers in the majority of states, government-sponsored insurance funds in roughly half of the states, or self-insurance, as long as the employer demonstrates sufficient resources to handle its benefit obligations. Between the end of World War II and 1970, the distribution of benefits paid by these various insurers stayed relatively constant (see Table 3). Private insurers paid roughly 62 percent of benefits, state and federal funds roughly 25 percent, and self-insurers about 12 to 15 percent. The introduction of the Black Lung benefit program in 1970 led to a sharp rise in the share paid by state and federal insurance funds, as a large number of workers not previously covered received federal coverage for black lung disease. Since 1973 the trend has been to return more of the insurance activity to private insurers, and many employers have increasingly self-insured.

Table 3
Shares of Workers’ Compensation Payments Made by Types of Insurer

Year Private Insurer Government Fund Self-Insurance
percent percent percent
1940 52.7 28.5 18.8
1941 55.0 26.5 18.6
1942 57.9 24.7 17.4
1943 60.3 22.9 16.7
1944 61.4 22.3 16.3
1945 61.9 22.2 15.9
1946 62.2 22.1 15.7
1947 62.1 22.6 15.2
1948 62.7 22.7 14.6
1949 62.4 23.3 14.3
1950 62.0 24.2 13.8
1951 62.7 24.0 13.3
1952 62.5 24.6 12.9
1953 62.3 25.0 12.7
1954 61.7 25.7 12.6
1955 61.5 26.0 12.6
1956 61.7 25.8 12.5
1957 62.2 25.5 12.2
1958 62.5 25.7 11.9
1959 62.2 26.1 11.7
1960 62.5 25.1 12.4
1961 61.9 25.3 12.8
1962 62.1 24.9 13.0
1963 62.4 24.5 13.1
1964 62.6 24.1 13.2
1965 62.0 24.5 13.5
1966 62.0 24.3 13.8
1967 62.2 23.9 13.8
1968 62.4 23.4 14.2
1969 62.3 23.0 14.7
1970 60.8 24.9 14.3
1971 56.3 30.8 12.9
1972 53.6 33.9 12.4
1973 49.3 39.1 11.6
1974 51.4 36.1 12.5
1975 51.9 35.2 12.9
1976 52.4 33.9 13.7
1977 53.6 31.9 14.5
1978 53.7 31.1 15.3
1979 51.2 33.4 15.4
1980 51.6 31.8 16.6
1981 52.3 30.5 17.2
1982 52.7 29.1 18.2
1983 52.7 28.8 18.5
1984 53.9 27.5 18.6
1985 55.5 25.9 18.6
1986 56.2 25.4 18.4
1987 56.6 24.8 18.6
1988 57.0 24.3 18.7
1989 58.0 23.2 18.7
1990 58.1 22.9 19.0
1991 58.1 23.0 18.8
1992 55.4 23.4 21.3
1993 53.2 23.3 23.4
1994 50.0 24.1 25.9
1995 48.8 25.4 25.9
1996 48.8 25.4 25.8
1997 50.8 24.9 24.3
1998 53.3 24.8 21.9

Sources: See Table 2.

The Moral Hazard Problem and Accident Compensation

The provision of accident compensation is potentially subject to problems of moral hazard, a situation in which people reduce their prevention activities because compensation reduces their net losses from injury. Over the course of the century, two trends have contributed to the potential for greater moral hazard problems. First, the character of the most common injuries has changed. In the early 1900s the common workplace injuries were readily identifiable, as accidents leading to broken bones, lost body parts, and fatalities were far more common. The most common workers’ compensation injuries today are soft tissue injuries to the back and carpal tunnel syndrome in the wrists. These injuries are not so easy to diagnose effectively, which could lead to excess reporting of this type of injury.

The second trend has been a rise in benefit levels as a share of after-tax income. Workers’ compensation payments are not taxed. When the workers’ compensation programs were first introduced, the federal income tax was just being put into place; through 1940, less than 7 percent of households were subject to the income tax. Since World War II, however, income tax rates have been substantially higher, so workers’ compensation benefits have been replacing a higher share of the after-tax wage. The absence of much taxation in the early 1900s meant that workers’ compensation benefits often replaced less than two-thirds of the after-tax wage, and sometimes weekly maximums on payments led to replacement of a substantially lower percentage. In the modern era, with greater taxation of wages, workers’ compensation benefits replace up to 90 percent of the after-tax wage in some states. Both the trend toward more soft-tissue injuries and the higher after-tax replacement rates have led to improvements in the compensation of injured workers, although there is evidence that workers pay for these improvements through lower wages (Moore and Viscusi 1990). On the other hand, the trends increase the risk of moral hazard, which in turn leads to higher insurance costs for employers and insurers. Employers and insurers have sought to limit these problems through closer monitoring of accident claims and the recovery process. The tension between improved accident compensation and moral hazard has been a constant source of conflict in debates over the proper level of compensation for workers.
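
The effect of untaxed benefits on effective replacement rates can be illustrated with a simple calculation. The sketch below uses hypothetical tax rates; it only shows the mechanics of why a nominal two-thirds benefit replaces a larger share of take-home pay as income taxes rise.

```python
# Illustrative only: hypothetical tax rates, nominal replacement rate of 2/3.
# Workers' compensation benefits are untaxed, so the effective replacement
# rate is measured against the after-tax wage.

def after_tax_replacement(nominal_replacement=2/3, tax_rate=0.0):
    """Share of the after-tax wage replaced by an untaxed benefit."""
    return nominal_replacement / (1 - tax_rate)

for tax_rate in (0.0, 0.15, 0.25):   # hypothetical combined tax rates
    share = after_tax_replacement(tax_rate=tax_rate)
    print(f"tax rate {tax_rate:.0%}: benefit replaces {share:.0%} of take-home pay")
```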

Conclusion

Workers’ compensation is now one of the cornerstones of our network of social insurance programs. Although many of the modern social insurance programs were proposed at the state level during the 1910s, workers’ compensation was the only program to be widely adopted at the time. Unemployment insurance and old-age pension programs later joined the network through federal legislation in the 1930s. All of these programs have faced new challenges, as they have become a central feature of our economic terrain.

References

Aldrich, Mark. Safety First: Technology, Labor, and Business in the Building of American Work Safety, 1870-1939. Baltimore: Johns Hopkins University Press, 1997.

Fishback, Price V. and Shawn Everett Kantor. A Prelude to the Welfare State: The Origins of Workers’ Compensation. Chicago: University of Chicago Press, 2000.

Moore, Michael J., and W. Kip Viscusi. Compensation Mechanisms for Job Risks: Wages, Workers’ Compensation, and Product Liability. Princeton, NJ: Princeton University Press, 1990.

Data and descriptions of trends for workers’ compensation are available from the National Academy of Social Insurance website: http://www.nasi.org/. The NASI continues to publish annual updates. In addition, detailed descriptions of the benefit rules in each state are published annually by the U.S. Chamber of Commerce in Analysis of Workers’ Compensation Laws.

1 The states include California 1915, North Dakota 1925, Minnesota 1927, Connecticut 1930, Kentucky 1930, New York 1930, Illinois 1931, Missouri 1931, New Jersey 1931, Ohio 1931, Massachusetts 1932, Nebraska 1935, North Carolina 1935, Wisconsin 1935, West Virginia 1935, Rhode Island 1936, Delaware 1937, Indiana 1937, Michigan 1937, Pennsylvania 1937, Washington 1937, Idaho 1939 and Maryland 1939. Balkan 1998, p. 64.

Citation: Fishback, Price. “Workers’ Compensation”. EH.Net Encyclopedia, edited by Robert Whaples. March 26, 2008. URL http://eh.net/encyclopedia/workers-compensation/

An Economic History of Weather Forecasting

Erik D. Craft, University of Richmond

Introduction

The United States Congress established a national weather organization in 1870 when it instructed the Secretary of War to organize the collection of meteorological observations and forecasting of storms on the Great Lakes and Atlantic Seaboard. Large shipping losses on the Great Lakes during the 1868 and 1869 seasons, growing acknowledgement that storms generally traveled from the West to the East, a telegraphic network that extended west of the Great Lakes and the Atlantic Seaboard, and an eager Army officer promising military discipline are credited with convincing Congress that a storm-warning system was feasible. The United States Army Signal Service weather organization immediately dwarfed its European counterparts in budget and geographical size and shortly thereafter created storm warnings that on the Great Lakes alone led to savings in shipping losses that exceeded the entire network’s expenses.

Uses of Weather Information

Altering Immediate Behavior

The most obvious use of weather information is to change behavior in response to expected weather outcomes. The motivating force behind establishing weather organizations in England, France, Germany, and the United States was to provide warnings to ships of forthcoming storms, so that the ships might remain in harbor. But it soon became obvious that agricultural and commercial interests would benefit from weather forecasts as well. Farmers could protect fruit sensitive to freezes, and shippers could limit spoilage of produce while en route. Beyond preparation for severe weather, weather forecasts are now created for ever more specialized activities: implementing military operations, scheduling operation of power generation facilities, routing aircraft safely and efficiently, planning professional sports teams’ strategies, estimating demand for commodities sensitive to weather outcomes, planning construction projects, and optimizing the use of irrigation and reservoir systems’ resources.

Applying Climatological Knowledge

Climatological data can be used to match crop varieties, construction practices, and other activities appropriately to different regions. For example, in 1947 the British Government planned to grow groundnuts on 3.2 million acres in East and Central Africa. The groundnut was chosen because it was suited to the average growing conditions of the chosen regions. But due to a lack of understanding of the variance in the amount and timing of rainfall, the project was abandoned after five years, initial capital outlays of 24 million British pounds, and annual operating costs of 7 million pounds. The preparation of ocean wind and weather charts in the 1850s by Matthew Fontaine Maury, Superintendent of the U.S. Navy’s Depot of Charts and Instruments, identified better routes for vessels sailing between America and Europe and from the United States East Coast to the West Coast. The reduced sailing durations are alleged to have saved millions of dollars annually. Climatological data can also be used in modern environmental forecasts of air quality and of how pollution is dispersed in the air. There are even forensic meteorologists who specialize in identifying weather conditions at a given point in time after accidents and during subsequent litigation. Basic climatological information is also one reason why the United States cinema industry became established in Southern California: it was known that a high percentage of all days were sunny, so outdoor filming would not be delayed.

Smoothing Consumption of Weather-Sensitive Commodities

An indirect use of weather forecasts and subsequent weather occurrences is their influence on the prices of commodities that are affected by weather outcomes. Knowledge that growing conditions will be, or have been, poor leads to expectations of a smaller crop harvest. This causes expected prices of the crop to rise, thereby slowing consumption. This is socially efficient, since the present inventory and the now smaller future harvest must be consumed more slowly over the period until the next season’s crop can be planted, cultivated, and harvested. Without an appropriate rise in price after bad weather outcomes, an excessive depletion of the crop’s inventory could result, leading to more variability in the consumption path of the commodity. People generally prefer consuming their income and individual products in relatively smooth streams, rather than in large amounts in some periods and small amounts in others. Both improved weather forecasts and United States Department of Agriculture crop forecasts help buyers more effectively consume a given quantity of a crop.
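
A stylized two-period example can make the smoothing argument concrete. All numbers below are hypothetical; the point is simply that drawing down a reduced stock evenly avoids a sharp drop in consumption later.

```python
# Hypothetical numbers: a crop shortfall is known at the start of period 1,
# and the remaining stock must last two periods until the next harvest.
stock_after_bad_harvest = 160   # bushels available for periods 1 and 2

# Without a price signal, consumers keep consuming at the old rate in period 1.
old_rate = 100
first_period = min(old_rate, stock_after_bad_harvest)
no_smoothing = [first_period, stock_after_bad_harvest - first_period]

# With higher expected prices, consumption slows immediately and the stock
# is spread evenly over both periods.
smoothed = [stock_after_bad_harvest / 2] * 2

print("Consumption path without price adjustment:", no_smoothing)  # [100, 60]
print("Consumption path with price adjustment:   ", smoothed)      # [80.0, 80.0]
```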

The History of Weather Forecasts in the United States

An important economic history question is whether or not it was necessary for the United States Federal Government to found a weather forecasting organization. There are two challenges in answering that question: establishing that the weather information was socially valuable and determining whether private organizations were incapable of providing the appropriate level of services. Restating the latter issue, did weather forecasts and the gathering of climatological information possess enough attributes of a public good that private organizations would create an insufficiently large amount of socially beneficial information? There are two parts to this public good problem: nonexcludability and nonrivalry. Could private producers of weather information create a system whereby they earned enough money from users of weather information to cover the costs of creating it? Would such a weather system be of the socially optimal size?

Potential Organizational Sources of Weather Forecasts

There were many organizations during the 1860s that an observer might imagine would benefit from the creation of weather forecasts. After the consolidation of most telegraphic service in the United States into Western Union in 1866, an organization with employees throughout the country existed. The Associated Press had a weather-reporting network, but there is no evidence that it considered supplementing its data with forecasts. One Ebenezer E. Merriam began supplying New York newspapers with predictions in 1856. Many years later, astronomer turned Army Signal Service forecaster Cleveland Abbe concluded that Merriam made his predictions using newspaper weather reports. The Chicago Board of Trade declined an invitation in 1869 to support a weather forecasting service based in Cincinnati. Neither ship-owners nor marine insurers appear to have expressed any interest in creating or buying weather information. Great Lakes marine insurers had already overcome organizational problems by forming the Board of Lake Underwriters in 1855; the group, for example, incurred expenses of over $11,000 in 1861 inspecting vessels and providing ratings on behalf of its members in the annual Lake Vessel Register. The Board of Lake Underwriters even had nine inspectors distributed on the Great Lakes to inspect wrecks on behalf of its members. Although there was evidence that storms generally traveled from west to east, none of these groups apparently expected its own benefits to exceed the costs of establishing the network necessary to provide useful weather information.

Cleveland Abbe at the Cincinnati Observatory began the most serious attempt to establish a quasi-private meteorological organization in 1868 when he sought financial support from the Associated Press, Western Union, local newspapers, and the Cincinnati Chamber of Commerce. His initial plan included a system of one hundred reporting stations with the Associated Press covering the $100 instrument costs at half of the stations and the dispatch costs. In the following year, he widened his scope to include the Chicago Board of Trade and individual subscribers and proposed a more limited network of between sixteen and twenty-two stations. The Cincinnati Chamber of Commerce, whose president published the Cincinnati Commercial, funded the experiment from September through November of 1869. Abbe likely never had more than ten observers report on any given day and could not maintain more than about thirty local subscribers for his service, which provided at most only occasional forecasts. Abbe continued to receive assistance from Western Union in the collection and telegraphing of observations after the three-month trial, but he fell short in raising funds to allow the expansion of his network to support weather forecasts. His ongoing “Weather Bulletin of the Cincinnati Observatory” was not even published in the Cincinnati Commercial.

Founding of the Army Signal Service Weather Organization

Just as the three-month trial of Abbe’s weather bulletin concluded, Increase A. Lapham, a Milwaukee natural scientist, distributed his second list of Great Lakes shipping losses, entitled “Disaster on the Lakes.” The list included 1,164 vessel casualties, 321 deaths, and $3.1 million in property damage in 1868, and 1,914 vessel casualties, 209 lives lost, and $4.1 million in financial losses in 1869. The number of ships totally destroyed was 105 and 126 in each year, respectively. According to a separate account, the storm of November 16-19, 1869 alone destroyed vessels whose value exceeded $420,000. Lapham’s list of losses included a petition to establish a weather forecasting service. In 1850, he had prepared a similar proposal alongside a list of shipping losses, and twice during the 1850s he had tracked barometric lows across Wisconsin to provide evidence that storms could be forecast.

Recipients of Lapham’s petitions included the Wisconsin Academy of Sciences, the Chicago Academy of Sciences, the National Board of Trade meeting in Richmond, a new Chicago monthly business periodical entitled The Bureau, and Congressman Halbert E. Paine of Milwaukee. Paine had studied meteorological theories under Professor Elias Loomis at Western Reserve College and would introduce storm-warning service bills and eventually the final joint resolution in the House that gave the Army Signal Service storm-warning responsibilities. In his book Treatise on Meteorology (1868), Loomis claimed that the approach of storms to New York could be predicted reliably given telegraphic reports from several locations in the Mississippi Valley. From December 1869 through February 1870, Lapham’s efforts received wider attention. The Bureau featured nine pieces on meteorology from December until March, including at least two by Lapham.

Following the Civil War, the future of a signaling organization in the Army was uncertain. Having had budget requests for telegraph and signal equipment for years 1870 and 1871 cut in half to $5,000, Colonel Albert J. Myer, Chief Signal Officer, led a small organization seeking a permanent existence. He visited Congressman Paine’s office in December of 1869 with maps showing proposed observation stations throughout the United States. Myer’s eagerness for the weather responsibilities, as well as the discipline of the Army organization and a network of military posts in the West, many linked via telegraph, would appear to have made the Army Signal Service a natural choice. The marginal costs of an Army weather organization using Signal Service personnel included only instruments and commercial telegraphy expenses. On February 4, 1870, Congress approved the joint resolution which “authorizes and requires the Secretary of War to provide for taking of meteorological observations . . . and for giving notice on the northern lakes and on the sea-coast of the approach and force of storms.” Five days later, President Grant signed the bill.

Expansion of the Army Signal Service’s Weather Bureau

Observer-sergeants in the Signal Service recorded their first synchronous observations on November 1, 1870, at 7:35 a.m. Washington time at twenty-four stations. The storm-warning system began formal operation on October 23, 1871, with potential flag displays at eight ports on the Great Lakes and sixteen ports on the Atlantic seaboard. At that time, only fifty general observation stations existed. Already by June 1872, Congress had expanded the Army Signal Service’s explicit forecast responsibilities via an appropriations act to most of the United States “for such stations, reports, and signal as may be found necessary for the benefit of agriculture and commercial interests.” In 1872, the Signal Service also began publication of the Weekly Weather Chronicle during the growing seasons; it disappeared in 1877, reemerging in 1887 as the Weather Crop Bulletin. As the fall of 1872 began, confidence in the utility of weather information was so high that 89 agricultural societies and 38 boards of trade and chambers of commerce had appointed meteorological committees to communicate with the Army Signal Service. In addition to dispensing general weather forecasts for regions of the country three times a day, the Signal Service soon sent special warnings to areas in danger of cold waves and frosts.

The original method of warning ships of dangerous winds was hoisting a single red flag with a black square located in the middle. This was known as a cautionary signal, and Army personnel at Signal Service observation stations or civilians at display stations would raise the flag on a pole “whenever the winds are expected to be as strong as twenty-five miles per hour, and to continue so for several hours, within a radius of one hundred miles from the station.” In the first year of operation ending 1 September 1872, 354 cautionary signals were flown on both the Great Lakes and the Atlantic Seaboard, approximately 70% of which were verified as having met the above definition. Such a measure of accuracy is incomplete, however, as it can always be raised artificially by not forecasting storms under marginal conditions, even though such a strategy might diminish the value of the service.
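
That caveat about verification rates can be illustrated with hypothetical counts. In the sketch below, a forecaster who simply declines to hoist the flag in marginal cases raises the share of warnings verified while warning of fewer of the storms that actually occur; all numbers are invented purely for illustration.

```python
# Hypothetical season with two warning strategies.
# "Verified share" = warnings followed by a qualifying storm / warnings issued,
# which is the accuracy measure described in the text.

def summarize(name, warnings_issued, warnings_verified, storms_occurred):
    verified_share = warnings_verified / warnings_issued
    storms_warned = warnings_verified / storms_occurred
    print(f"{name}: {verified_share:.0%} of warnings verified, "
          f"{storms_warned:.0%} of storms preceded by a warning")

storms = 100
summarize("Warn in marginal cases", warnings_issued=140, warnings_verified=90,
          storms_occurred=storms)
summarize("Warn only when certain", warnings_issued=60, warnings_verified=55,
          storms_occurred=storms)
```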

The United States and Canada shared current meteorological information beginning in 1871. By 1880, seventeen Canadian stations reported meteorological data to the United States at least twice daily by telegraph. The number of Army Signal Service stations providing telegraphic reports three times a day stabilized at 138 stations in 1880, dipped to 121 stations in 1883, and grew to approximately 149 stations by 1888. (See Table 1 for a summary of the growth of the Army Signal Service meteorological network from 1870 to 1890.) Additional display stations only provided storm warnings at sea and lake ports. River stations monitored water levels in order to forecast floods. Special cotton-region stations, beginning in 1883, comprised a dense network of daily reporters of rainfall and maximum and minimum temperatures. Total Army Signal Service expenditures grew from a $15,000 supplemental appropriation for weather operations in fiscal year 1870 to about one million dollars for all Signal Service costs around 1880 and stabilized at that level. Figure 1 shows the geographical extent of the Army Signal Service telegraphic observation network in 1881.

Figure 1: Army Signal Service Observation Network in 1881
Source: Map between pages 250-51, Annual Report of the Chief Signal Officer, October 1, 1881, Congressional Serial Set Volume 2015. See the detailed map between pages 304-05 for the location of each of the different types of stations listed in Table 1.

Table 1: Growth of the United States Army Signal Service Meteorological Network

Year | Budget (Real 1880 Dollars) | Stations of the Second Order | Stations of the Third Order | Repair Stations | Display Stations | Special River Stations | Special Cotton-Region Stations
1870 32,487 25
1871 112,456 54
1872 220,269 65
1873 549,634 80 9
1874 649,431 92 20
1875 749,228 98 20
1876 849,025 106 38 23
1877 849,025 116 29 10 9 23
1878 978,085 136 36 12 11 23
1879 1,043,604 158 30 17 46 30
1880 1,109,123 173 39 49 50 29
1881 1,080,254 171 47 44 61 29 87
1882 937,077 169 45 3 74 30 127
1883 950,737 143 42 27 7 30 124
1884 1,014,898 138 68 7 63 40 138
1885 1,085,479 152 58 8 64 66 137
1886 1,150,673 146 33 11 66 69 135
1887 1,080,291 145 31 13 63 70 133
1888 1,063,639 149 30 24 68 78 116
1889 1,022,031 148 32 23 66 72 114
1890 994,629 144 34 15 73 72 114

Sources: Report of the Chief Signal Officer: 1888, p. 171; 1889, p. 136; 1890, p. 203 and “Provision of Value of Weather Information Services,” Craft (1995), p. 34.

Notes: The actual total budgets for years 1870 through 1881 are estimated. Stations of the second order recorded meteorological conditions three times per day. Most immediately telegraphed the data. Stations of the third order recorded observations at sunset. Repair stations maintained Army telegraph lines. Display stations displayed storm warnings on the Great Lakes and Atlantic seaboard. Special river stations monitored water levels in order to forecast floods. Special cotton-region stations collected high temperature, low temperature, and precipitation data from a denser network of observation locations.

Early Value of Weather Information

Budget reductions in the Army Signal Service’s weather activities in 1883 led to a drop in the number of fall storm-warning broadcast locations on the Great Lakes from 80 in 1882 to 43 in 1883. This one-year drop in the availability of storm warnings creates a special opportunity to measure the value of warnings of extremely high winds on the Great Lakes (see Figure 2). Many other factors can be expected to affect the value of shipping losses on the Great Lakes: the level of commerce in a given season, the amount of shipping tonnage available to haul a season’s commerce, the relative composition of the tonnage (steam versus sail), the severity of the weather, and long-term trends in technological change or safety. Using a statistical technique known as multiple regression, in which the effects of these many factors on shipping losses are analyzed concurrently, Craft (1998) argued that each extra storm-warning location on the Great Lakes lowered losses by about one percent. This implies that the storm-warning system reduced losses on the Great Lakes by approximately one million dollars annually in the mid-1870s and between $1 million and $4.5 million per year by the early 1880s.

Source: The data are found in the following: Chicago Daily Inter Ocean (December 5, 1874 p. 2; December 18, 1875; December 27, 1876 p. 6; December 17, 1878; December 29, 1879 p. 6; February 3, 1881 p. 12; December 28, 1883 p. 3; December 5, 1885 p. 4); Marine Record (December 27, 1883 p. 5; December 25, 1884 pp. 4-5; December 24, 1885 pp. 4-5; December 30, 1886 p. 6; December 15, 1887 pp 4-5); Chief Signal Officer, Annual Report of the Chief Signal Officer, 1871- 1890.

Note: Series E 52 of the Historical Statistics of the United States (U.S. Bureau of the Census, 1975) was used to adjust all values to real 1880 dollars.
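
Under a log-linear reading of the one-percent-per-station estimate above, expected losses scale roughly as exp(-0.01 × number of warning locations), so the 1883 cut from 80 to 43 locations implies a proportional rise in expected losses. The sketch below works through that arithmetic with a hypothetical baseline loss level; it only illustrates the reported semi-elasticity and is not a re-estimate of Craft’s regression.

```python
import math

# Hypothetical baseline: expected annual Great Lakes losses with no warning
# stations, in 1880 dollars.  The one-percent-per-station figure is the
# semi-elasticity reported in the text (Craft 1998).
baseline_losses = 5_000_000
semi_elasticity = 0.01

def expected_losses(stations):
    """Expected losses under an assumed log-linear effect of warning locations."""
    return baseline_losses * math.exp(-semi_elasticity * stations)

losses_1882 = expected_losses(80)   # 80 broadcast locations in 1882
losses_1883 = expected_losses(43)   # 43 broadcast locations in 1883

print(f"Expected losses with 80 stations: ${losses_1882:,.0f}")
print(f"Expected losses with 43 stations: ${losses_1883:,.0f}")
print(f"Implied extra losses from the cut: ${losses_1883 - losses_1882:,.0f}")
```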

There are additional indirect methods with which to confirm the preceding estimate of the value of early weather information. If storm warnings actually reduced the risk of damage to cargo and ships due to bad weather, then the cost of shipping cargo would be expected to decline. In particular, reductions in shipping prices due to savings in storm losses can be differentiated from other types of technological improvement by studying how fall shipping prices changed relative to summer shipping prices, since it was during the fall that ships were particularly vulnerable to storm-related accidents. Changes in the shipping prices of grain from Chicago to Buffalo during the summers and falls from the late 1860s to the late 1880s imply that storm warnings were valuable and are consistent with the more direct method of estimating reductions in shipping losses. Although marine insurance premium data for shipments on the Great Lakes are limited and difficult to interpret due to the waxing and waning of the insurance cartel’s cohesion, such data are also supportive of the overall interpretation.
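
The logic of comparing fall and summer freight rates is essentially a difference-in-differences calculation: storm risk matters mostly in the fall, so an effective warning system should show up as a narrowing of the fall premium over and above any general decline in rates. The figures below are hypothetical and serve only to illustrate the comparison.

```python
# Hypothetical grain freight rates (cents per bushel, Chicago to Buffalo).
# Storm risk is concentrated in the fall, so an effective warning system
# should shrink the fall premium relative to the summer rate.
rates = {
    "before": {"summer": 6.0, "fall": 9.0},   # hypothetical, pre-warning era
    "after":  {"summer": 4.0, "fall": 5.5},   # hypothetical, warning era
}

premium_before = rates["before"]["fall"] - rates["before"]["summer"]
premium_after = rates["after"]["fall"] - rates["after"]["summer"]

# The difference-in-differences isolates the change in the storm-risk premium
# from general improvements that lowered rates in both seasons.
did = premium_after - premium_before
print(f"Fall premium before: {premium_before:.1f} cents; after: {premium_after:.1f} cents")
print(f"Change in the storm-risk premium (difference-in-differences): {did:+.1f} cents")
```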

Given Army Signal Service budgets of about one million dollars for providing meteorological services to the entire United States, a reasonable lower bound for the rate of return on the creation of weather information from 1870 to 1888 is 64 percent. The figure includes no social benefits from any weather information other than Great Lakes storm warnings. This estimate implies that the creation and distribution of storm warnings by the United States Federal Government were a socially beneficial investment.

Transfer of Weather Services to the Department of Agriculture

The Allison Commission hearings in 1884 and 1885 sought to determine the appropriate organization of federal agencies whose activities included scientific research. The Allison Commission’s long report included testimony and discussion relating to the organization of the Army Signal Service, the United States Geological Survey, the Coast and Geodetic Survey, and the Navy Hydrographic Office. Weather forecasting required a reliable network of observers, some of whom were the sole Army personnel at a location. Advantages of a military organizational structure included a greater range of disciplinary responses, including courts-martial for soldiers, for deficient job performance. Problems of the military organization, however, included the limited ability to increase one’s rank while working for the Signal Service and tension between the civilian and military personnel. In 1891, after an unsuccessful Congressional attempt at reform in 1887, the Weather Bureau became a civilian organization when it joined the young Department of Agriculture.

Aviation and World War I

Interest in upper-air weather conditions grew rapidly after the turn of the century on account of two related developments: aviation and World War I. Safe use of aircraft depended on more precise knowledge of weather conditions (winds, storms, and visibility) between takeoff and landing locations. Not only were military aircraft introduced during World War I, but understanding wind conditions was also crucial to the use of poison gas on the front lines. In the most important change in the Weather Bureau’s organizational direction since its transfer to the Department of Agriculture, Congress passed the Air Commerce Act in 1926, which by 1932 led to 38% of the Weather Bureau’s budget being directed toward aerological research and support.

Transfer of the Weather Bureau to the Department of Commerce

Even though aerological expenditures by the Weather Bureau in support of aviation rivaled funding for general weather services by the late 1930s, the Weather Bureau came under increasing criticism from aviation interests. In 1940 the Weather Bureau was transferred to the Department of Commerce, where other support for aviation already originated. The transfer mirrored the declining role of agriculture in the United States and the movement toward a more urban economy. Subsequently renamed the National Weather Service, the agency has remained in the Department of Commerce since.

World War II

During World War II, weather forecasts assumed greater importance, as aircraft and rapid troop movements became key parts of military strategy. Accurate long-range artillery use also depended on knowledge of prevailing winds. For an example of the extensive use of weather forecasts and climatological information during wartime, consider Allied plans to strike the German oil refineries in Ploesti, Romania. In the winter of 1943 military weather teams parachuted into the mountains of Yugoslavia to relay weather data. Bombers from North Africa could only reach the refineries in the absence of headwinds in either direction of the sortie. Cloud cover en route was important for protection, clear skies were helpful for identification of targets, and southerly winds permitted the bombers to drop their ordnance on the first pass on the south side of the area’s infrastructure, allowing the winds to assist in spreading the fire. Historical data indicated that only March or August were possible windows. Though many aircraft were lost, the August 1 raid was considered a success.

Tide, wind, and cloud conditions were also crucial in the planning of the invasion of Normandy (planned for June 5 and postponed until June 6, 1944). The German High Command had been advised by its chief meteorologist that conditions were not opportune for an Allied invasion on the days following June 4. Dissension among American and British military forecasters nearly delayed the invasion further. Had it been deferred until the next date of favorable tide conditions, the invasion would have taken place during the worst June storm in twenty years in the English Channel.

Forecasting in Europe

A storm on November 14, 1854 destroyed the French warship Henri IV and damaged other British and French vessels on the Black Sea involved in the Crimean War. A report from the state-supported Paris Observatory indicated that barometric readings showed that the storm had passed across Europe in about four days. Urbain Leverrier, director of the Paris Observatory, concluded that had there been a telegraph line between Vienna and the Crimea, the British and French fleets could have received warnings. Although the United States weather network was preceded by storm-warning systems in the Netherlands in 1860, Great Britain in 1861, and France in 1863, the new United States observation network immediately dwarfed the European organizations in both financial resources and geographical magnitude.

Robert FitzRoy, captain of the Beagle during Darwin’s famous voyage, was appointed director of the Meteorological Department established by the British Board of Trade (a government organization) in 1854. The wreck of the well-constructed iron vessel Royal Charter in a storm, with much loss of life, in October 1859 provided another opportunity for a meteorological leader to argue that storms could be tracked and forecast. With support from the Prince Consort, FitzRoy and the Meteorological Department were granted approval to establish a storm-warning service. On February 6, 1861 the first warnings were issued, and by August 1861 weather forecasts were issued regularly. By 1863, the Meteorological Department had a budget of three thousand English pounds. Criticism arose from different groups. Scientists wished to establish meteorology on a sound theoretical foundation and differentiate it from astrology. At the time, many publishers of weather almanacs subscribed to various theories of the influence of the moon or other celestial bodies on weather. (This is not as outlandish as one might suppose; in 1875, the well-known economist William Stanley Jevons studied connections between sunspot activity, meteorology, and business cycles.) Some members of this second group supported the practice of forecasting but were critical of FitzRoy’s technique, perhaps hoping to become alternative sources of forecasts. Amidst the criticism, FitzRoy committed suicide in 1865. Forecasts and warnings were discontinued in 1866; the warnings resumed two years later, but general forecasts were suspended until 1877.

In 1862, Leverrier wrote the French Ministry of Public Education that French naval and commercial interests might be compromised by their dependence on warnings from the British Board of Trade. A storm-warning service in France commenced in July of 1863. Given the general eastward movement of storms, neither France nor Britain had the luxury of tracking storms well before they arrived, as would have been possible with the November 1854 storm in the Crimea and as the Army Signal Service soon would be able to do in America. On account of administrative difficulties that were to hinder effective functioning of the service until 1877, French warnings ceased in October 1865 but resumed in May of the next year. The French Central Bureau of Meteorology was founded only in 1878, with a budget of only $12,000.

After the initiation of the storm-warning systems that preceded the Army Signal Service weather network, Europe would not achieve meteorological prominence again until the Bergen School of meteorology developed new storm analysis techniques, incorporating cold and warm fronts, after World War I. In the difficult days in Norway during the conclusion of the Great War, meteorological information from the rest of Europe was unavailable. Theoretical physicist turned meteorological researcher Vilhelm Bjerknes appealed to Norway’s national interests in defense, in the development of commercial aviation, and in increased agricultural output to build a dense observation network, whose data helped yield a new paradigm for meteorology.

Conclusion

The first weather forecasts in the United States that were based on a large network of simultaneous observations provided information to society that was much more valuable than the cost of producing it. In the early winter of 1870, the scientist Increase Lapham and a Chicago businessman discussed the feasibility of establishing a private forecasting organization in Wisconsin or Illinois (see Craft 1999). But earlier attempts by private organizations in the United States had failed to sustain any private weather-forecasting service. In the contemporary United States, the Federal government both collects data and offers forecasts, while private weather organizations provide a variety of customized services.

Weather Forecasting Timeline

1743

Benjamin Franklin, using reports of numerous postmasters, determined the northeastward path of a hurricane from the West Indies.

1772-1777

Thomas Jefferson at Monticello, Virginia and James Madison at Williamsburg, Virginia collect a series of contemporaneous weather observations.

1814

Surgeon General Tilton issues an order directing Army surgeons to keep a diary of the weather in order to ascertain any influences of weather upon disease.

1817

Josiah Meigs, Commissioner of the General Land Office, requests officials at its land offices to record meteorological observations.

1846-1848

Matthew F. Maury, Superintendent of the U.S. Naval Observatory, publishes his first charts compiled from ships’ logs showing efficient sailing routes.

1847

Barometer used to issue storm warnings in Barbados.

1848

J. Jones of New York advertises meteorological reports costing between twelve and a half and twenty-five cents per city per day. There is no evidence the service was ever sold.

1848

Publication in the British Daily News of the first telegraphic daily weather report.

1849

The Smithsonian Institution begins a nearly three decade long project of collecting meteorological data with the goal of understanding storms.

1849

Captain Joseph Brooks, manager of the Portland Steamship Line, receives telegraphic reports three times a day from Albany, New York, and Plattsburg in order to determine if the line’s ships should remain in port in Maine.

1853-1855

Ebenezer E. Merriam of New York, using newspaper telegraphic reports, offers weather forecasts in New York’s newspapers on an apparently irregular basis.

1858

The U.S. Army Engineers begin collecting meteorological observations while surveying the Great Lakes.

1860

Christoph Buys Ballot issues first storm warnings in the Netherlands.

1861

Admiral Robert FitzRoy of the British Meteorological Office begins issuing storm-warnings.

1863

Urbain Leverrier, director of the Paris Observatory, organizes a storm-warning service.

1868

Cleveland Abbe of the Cincinnati Observatory unsuccessfully proposes a weather service of one hundred observation stations to be supported by the Cincinnati Chamber of Commerce, Associated Press, Western Union, and local newspapers.

1869

The Cincinnati Chamber of Commerce funds a three-month trial of the Cincinnati Observatory’s weather bulletin. The Chicago Board of Trade declines to participate.

1869

Increase A. Lapham publishes a list of the shipping losses on the Great Lakes during the 1868 and 1869 seasons.

1870

Congress passes a joint resolution directing the Secretary of War to establish a meteorological network for the creation of storm warnings on the Great Lakes and Atlantic Seaboard. Storm-warnings are offered on November 8. Forecasts begin the following February 19.

1872

Congressional appropriations bill extends Army Signal Service duties to provide forecasts for agricultural and commercial interests.

1880

Frost warnings offered for Louisiana sugar producers.

1881-1884

Army Signal Service expedition to Lady Franklin Bay in support of international polar weather research. Only seven of the twenty-five-member team survive.

1881

Special cotton-region weather reporting network established.

1891

Weather Bureau transferred to the Department of Agriculture.

1902

Daily weather forecasts sent by radio to Cunard Line steamships.

1905

First wireless weather report from a ship at sea.

1918

Norway expands its meteorological network and organization leading to the development of new forecasting theories centered on three-dimensional interaction of cold and warm fronts.

1919

American Meteorological Society founded.

1926

Air Commerce Act gives the Weather Bureau responsibility for providing weather services to aviation.

1934

First private sector meteorologist hired by a utility company.

1940

The Weather Bureau is transferred from the Department of Agriculture to the Department of Commerce.

1946

First private weather forecast companies begin service.

1960

The first meteorological satellite, Tiros I, enters orbit successfully.

1976

The United States launches its first geostationary weather satellites.

References

Abbe, Cleveland, Jr. “A Chronological Outline of the History of Meteorology in the United States.” Monthly Weather Review 37, no. 3-6 (1909): 87-89, 146-49, 178-80, 252-53.

Alter, J. Cecil. “National Weather Service Origins.” Bulletin of the Historical and Philosophical Society of Ohio 7, no. 3 (1949): 139-85.

Anderson, Katharine. “The Weather Prophets: Science and Reputation in Victorian Meteorology.” History of Science 37 (1999): 179-216.

Burton, Jim. “Robert Fitzroy and the Early History of the Meteorological Office.” British Journal for the History of Science 19 (1986): 147-76.

Chief Signal Officer. Report of the Chief Signal Officer. Washington: GPO, 1871-1890.

Craft, Erik. “The Provision and Value of Weather Information Services in the United States during the Founding Period of the Weather Bureau with Special Reference to Transportation on the Great Lakes.” Ph.D. diss., University of Chicago, 1995.

Craft, Erik. “The Value of Weather Information Services for Nineteenth-Century Great Lakes Shipping.” American Economic Review 88, no.5 (1998): 1059-1076.

Craft, Erik. “Private Weather Organizations and the Founding of the United States Weather Bureau.” Journal of Economic History 59, no. 4 (1999): 1063-1071.

Davis, John L. “Weather Forecasting and the Development of Meteorological Theory at the Paris Observatory.” Annals of Science 41 (1984): 359-82.

Fleming, James Rodger. Meteorology in America, 1800-1870. Baltimore: Johns Hopkins University Press, 1990.

Fleming, James Rodger, and Roy E. Goodman, editors. International Bibliography of Meteorology. Upland, Pennsylvania: Diane Publishing Co., 1994.

Friedman, Robert Marc. Appropriating the Weather: Vilhelm Bjerknes and the Construction of a Modern Meteorology. Ithaca: Cornell University Press, 1989.

Hughes, Patrick. A Century of Weather Service. New York: Gordon and Breach, 1970.

Miller, Eric R. “The Evolution of Meteorological Institutions in the United States.” Monthly Weather Review 59 (1931): 1-6.

Miller, Eric R. “New Light on the Beginnings of the Weather Bureau from the Papers of Increase A. Lapham.” Monthly Weather Review 59 (1931): 65-70.

Sah, Raaj. “Priorities of Developing Countries in Weather and Climate.” World Development 7 no. 3 (1979): 337-47.

Spiegler, David B. “A History of Private Sector Meteorology.” In Historical Essays on Meteorology, 1919-1995, edited by James Rodger Fleming, 417-41. Boston: American Meteorological Society, 1996.

Weber, Gustavus A. The Weather Bureau: Its History, Activities and Organization. New York: D. Appleton and Company, 1922.

Whitnah, Donald R. A History of the United States Weather Bureau. Urbana: University of Illinois Press, 1961.

Citation: Craft, Erik. “Economic History of Weather Forecasting”. EH.Net Encyclopedia, edited by Robert Whaples. October 6, 2001. URL http://eh.net/encyclopedia/an-economic-history-of-weather-forecasting/

Usury

Norman Jones, Utah State University

The question of when and if money can be lent at interest for a guaranteed return is one of the oldest moral and economic problems in Western Civilization. The Greeks argued about usury, Hebrews denounced it, Roman law controlled it, and Christians began pondering it in the late Roman Empire. Medieval canon lawyers adapted Greek and Roman ideas about usury to Christian theology, creating a body of Church law designed to control the sin of usury. By the early modern period the concept began to be secularized, but the issue of what usury is and when it occurs is still causing disputes in modern legal and theological systems.

Aristotle

The Greek philosophers wrestled with the question of whether money can be lent at interest. Most notably, Aristotle concluded that it could not. Aristotle defined money as a good that was consumed by use. Unlike houses and fields, which are not destroyed by use, money must be spent to be used. Therefore, as we cannot rent food, so we cannot rent money. Moreover, money does not reproduce. A house or a flock can produce new value by use, so it is not unreasonable to ask for a return on their use. Money, being barren, should not, therefore, be expected to produce excess value. Thus, interest is unnatural.

Roman Law

Roman lawyers were more subtle in their treatment of the problem. They recognized the right to lend and borrow for a specified return through the mutuum, a strict contract in which money, oil, or another fungible good could be lent on the expectation of an equal return in kind and quality of the substance loaned. Interest was not recognized in this obligation unless it was agreed upon by the parties ahead of time. Foenus was an illegal contract for interest without risk, with one exception: the foenus nauticum allowed lenders to contract for a certain return on money lent for large projects, such as voyages. It was the Latin foenus that was used interchangeably with usuram in Latin biblical translations. Nonetheless, Roman law did, in the Lex Unciaria of 88 B.C., recognize an interest rate of up to 12%. Made the maximum rate in 50 B.C. by a decree of the Senate, the centesima usura stood until Justinian lowered the rates in 533 A.D., creating a sliding scale with 12% applying only to the foenus nauticum, 8% to business loans, 6% to those not in business, and 4% to distinguished persons and farmers.

Biblical References

The Christians of the late Empire were not so flexible. There is a steady condemnation of lending at interest running through the patristic literature. St. Jerome declared usury to be the same as murder, echoing Cato and Seneca, since it consumed the life of the borrower. Christians, moreover, seemed required by God to condemn it. Exodus 22:25 forbade oppressing one’s neighbor with usury. Deuteronomy 23:20-21 said you could not charge your brother usury. Ezekiel 18:7-8 and 13 make it clear that the righteous do not lend at usury, and that usurers “shall not live.” Leviticus 25:35-36 says that if your brother is poor you should not charge him usury. The final Old Testament word on the issue came from the Psalmist, who charged the godly to aid their neighbors, not lend to them at interest. The strongest rejection of loans at interest came from Christ in Luke 6:35, where He says “Lend, hoping for nothing in return.”

Medieval Christians

Given God’s hostility to usury, it is hardly surprising that Christian theologians from the fourth century on defined lending for gain as a sin. Aquinas and his fellow scholastics amplified authors like St. Jerome on the subject, and Gratian built it into the code of Canon Law. Aquinas must have been gratified to find that Aristotle shared his hostility toward usury. By the late Middle Ages there was a consensus that lending at interest for guaranteed return was illegal and damnable. However, they also agreed that if the lender shared in the risk of the venture, the loan was legal. Consequently, laws against usury seldom interfered with merchant capitalism. Businessmen could always get loans if their contracts made them partners in risk. Extrinsic titles of the canon law, for instance, made it legal to charge for damnum emergens and lucrum cessans, losses sustained because someone else was using one’s money. The difference between the amount lent and the profit it might have made was paid as interesse. However, one had to prove the loss to charge interesse. It was also possible to write contracts which specified poena conventionalis, a penalty for late payment that did not demand proof of loss. Merchant bankers like the Medici did not charge interest per se, but they often received gifts from grateful clients.

Legal Ruses

Canon law and secular law held usury to be malum in se, an evil in itself that must be outlawed because God condemned it. Nonetheless, there were many legal ruses that allowed invisible illegal interest to be charged. A contract for a false sale, in which an inflated price was paid for a good, might be constructed. Or the appearance of risk might be incorporated in a contract by conditioning the payment on some eventuality such as the length of someone’s life. Only the poor, lacking personal credit, were forced to pledge collateral to get money.

Poor Men’s Banks

The oppression of the poor by usurers offended many good Christians. As an anti-Semitic countermeasure against the Jews, who were outside the canon law’s prohibitions, the papal governor of Perugia, Hermolaus Barbarus, invented the mons pietatis, or “poor men’s bank,” in 1461. These publicly run pawn shops, approved by Paul II in 1467, were nonprofit banks that lent to the deserving poor at very low rates of interest and, by the late fifteenth century, began to accept deposits. By the sixteenth century these banks had been spread by the Franciscans all over Europe, though not in England, where Parliament refused to legalize them.

Changing Interpretations in the Fifteenth Century

As the demand for capital grew, theologians became increasingly aware that lending at interest was not always theft. In the fifteenth century, Paris’s Jean Gerson and Tubingen’s Conrad Summenhardt, Gabriel Biel and John Eck argued that usury occurred only when the lender intended to oppress the borrower. Eck, supported by the Fugger banking family, became famous for his book Tractatus de contractu quinque de centum (1515), defending five percent as a harmless and therefore legal rate of interest as long as the loan was for a bona fide business opportunity. For these nominalists the proper measure of usury was the intent of the borrower and lender. If they were in charity with one another the loan was licit.

Luther

Eck’s position horrified more conservative people, who continued to see usury as an antisocial crime. Not surprisingly, Eck’s great enemy, Luther, refused to accept the idea that intention was a proper test for usury. Luther refused even to accept the extrinsic titles, insisting that anyone who charged interest was a thief and murderer and should not be buried in consecrated ground. He allowed only one exception to his anathema. If money was lent at interest to support orphans, widows, students and ministers it was good. Melanchthon was less conservative than Luther, admitting the extrinsic titles.

Calvin

Bourgeois reformers like Martin Bucer and John Calvin were much more sympathetic to Eck’s argument. John Calvin’s letter on usury of 1545 made it clear that when Christ said “lend hoping for nothing in return,” He meant that we should help the poor freely. Following the rule of equity, we should judge people by their circumstances, not by legal definitions. Humanist that he was, Calvin knew there were two Hebrew words translated as “usury.” One, neshek, meant “to bite”; the other, tarbit, meant “to take legitimate increase.” Based on these distinctions, Calvin argued that only “biting” loans were forbidden. Thus, one could lend at interest to business people who would make a profit using the money. To the working poor one could lend without interest, but expect the loan to be repaid. To the impoverished one should give without expecting repayment.

The arguments in Calvin’s letter on usury are amplified in Charles du Moulin’s Tractatus commerciorum et usurarum, redituumque pecunia constitutorum et monetarum, written in 1542 and published in Paris in 1546. Du Moulin (“Molinaeus”) developed a utility theory of value for money, rejecting Aquinas’ belief that money could not be rented because it was consumed.

This attack on the Thomist understanding of money was taken up by Spanish commentators. Domingo de Soto, concerned about social justice, suggested that Luke 6:35 was not a precept, since it has no relation to the justice of lending at interest. Luis de Molina, writing in the late sixteenth century, agreed. He suggested that there was no biblical text which actually prohibited lending money at interest.

Increasing Tolerance toward the Legality of Charging Interest

By the second half of the sixteenth century Catholics and Protestants alike were increasingly tolerant of the idea that the legality of loans at interest was determined by the intentions of the parties involved. Theologians were often reluctant to admit much latitude for usury, but secular law and commercial practice embraced the idea that loans at interest, made with good intentions, were legitimate. By then most places permitted some form of lending at interest, often relying on Roman Law reified in Civil Law to justify it. In the Dutch Republic and England the issue was relegated to conscience. The state ceased to meddle in usury unless it was antisocial, leaving individuals to decide for themselves whether their actions were sinful. At about the same time the image of the usurer in literature changed from a sinister, grasping sinner to a socially inept fool.

17th-Century Debate Turns to Acceptable Interest Rates

As social good became the proper test of a loan’s propriety, there emerged two distinct debates about usury. By the first third of the seventeenth century the issue of usury as a sin had been relegated to the conscience of the lender. The state was increasingly concerned only with whether or not the rate of interest was damagingly high. The Act against Usury passed by the English Parliament in 1624 demonstrates this concern with national economic well-being: it lowered the maximum rate from the 10% established in 1571 to 8%. An amendment to the Act announced that this toleration of usury did not repeal the “law of God in conscience.”

This era saw the emergence of a casuistic debate about usury and an economic debate about credit. Robert Filmer, the English political theorist, wrote a book proclaiming that matters of conscience need not be subjected to state control. His contemporaries in the first generation of economists, Gerard de Malynes and Thomas Mun, saw usury as a practical business problem. Malynes thought lending at interest was perfectly admissible if it was commercial credit; oppression of the poor by pawnbrokers was the evil usury condemned by God. Mun argued that there was no connection between usury and patterns of trade, and Edward Misselden saw interest rates as a matter of the money supply, not an oppression of the poor.

Most seventeenth-century Europeans knew usury was condemned by God, but many, while not admitting that usury should be legal, were espousing more radical views. Claudius Salmasius wrote a series of books with titles like De Usuris (1638) and De Modo Usurarum (1639) rejecting the Aristotelian definition of money as a good that was consumed. He insisted it could be rented. In this he was following Du Moulin’s argument from the sixteenth century. By the early eighteenth century Salmasius’s rejection of the traditional idea of usury was widely accepted. John Locke tried a slightly different argument, though to the same end. Lending at interest for productive purposes, he said, was no different from a landlord sharing the profits of a field with his tenant.

1700s: Worries about Usury Diminish, Lending at Interest Becomes Normal

By the eighteenth century the moral issue of usury was no longer of interest to most Protestant thinkers. In practice, lending at interest with collateral had become normal, as had deposit banking. It was regulated by states, and this regulation was seen as benefiting business and protecting the poor. Adam Smith thought that since money can be made by money, its use ought to be paid for. Nonetheless, he defended usury laws as necessary in order to encourage productive investment and discourage consumptive spending. A cap on interest rates makes money cheaper for productive borrowers, while forcing up the cost of money to those borrowing simply to consume, since they would be getting their money outside the regulated money market. The expense of money borrowed for consumption actually keeps many people from borrowing at all.

Debates among Catholics in the 1700s

Among Catholics the practice looked much the same, but in 1744 Scipio Maffei set off a debate with his three-volume defense of lending at interest, in which he suggested usury at moderate rates was not illicit, even if it was not charitable. This assertion was condemned by a papal encyclical, Vix Pervenit, in 1745. The encyclical reasserted the scholastic condemnation of usury, reinvigorating the tension between moral attitudes toward lending at interest and commercial necessity for doing it.

Nineteenth Century

In the early nineteenth century the Roman Congregations issued a series of rulings that took the pressure off. Faithful Catholics engaged in lending were not committing sin as long as they lent at a moderate rate. The moral condemnation of usury as an oppression of the poor did not disappear, however. It was adopted by socialists, whose antagonism toward capitalists convinced them that a market in money was evil. To them, usury was the “new slavery.”

Bentham’s Laissez-Faire Position

However, some economists were arguing that state regulation of credit was a distinctly bad thing. Jeremy Bentham wrote, in 1787, his Defence of Usury, in which he proclaimed a laissez-faire position and introduced his concept of utility, urging “that no man of ripe years and of sound mind, acting freely, and with his eyes open, ought to be hindered, with a view to his advantage, from making such bargain, in the way of obtaining money, as he thinks fit: nor, (what is a necessary consequence) any body hindered from supplying him, upon any terms he thinks proper to accede to.” Bentham’s argument, written against proposed legislation in the Irish Parliament, eventually won out in the English Parliament, which abolished the laws against usury in 1854.

Usury Laws in the United States

In the United States usury was regulated by each state as it saw fit. Clearly basing themselves on English legislation (usually the 1664 Act against Usury), colonies and states generally assumed that lending at “immoral” rates of interest was wrong and must be prevented by regulation. The laws were eased in the early nineteenth century. Many states, but not all, repealed their anti-usury legislation. Hard economic times in the post-Civil War era caused the return of anti-usury measures, but these statutes had little impact on normal commercial operations. Attempts to regulate interest rates were complicated by the competition among states with varying laws. Thus American usury laws tend to vary the admissible rate of interest according to local economic circumstances, with some states much more tolerant of high rates than others. In 1999, for instance, the legal rate of simple interest prescribed by state usury laws varied from 5% (Delaware and Wisconsin) to 15% (Washington and South Dakota). However, most state laws have complex definitions of usury that allow various rates for various types of transactions, which is why credit card companies can charge so much more than the legal usury rate. Moreover, during the 1980s, when interest rates had reached record highs, the U.S. Congress exempted national banks from state usury laws and small loan regulations, tying their rates to the prime interest rate instead.
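As a purely illustrative sketch of how such ceilings operate, the code below computes simple interest on a loan and checks the contracted rate against a state cap; the loan terms and the caps used are hypothetical, and the sketch ignores the transaction-type exceptions just described.

```python
# Simple (non-compounding) interest and a check against a hypothetical state ceiling.

def simple_interest(principal, annual_rate, years):
    """Total simple interest accrued over the life of the loan."""
    return principal * annual_rate * years

def is_usurious(annual_rate, state_cap):
    """True if the contracted annual rate exceeds the state's ceiling."""
    return annual_rate > state_cap

# Hypothetical example: a $1,000 two-year loan at 12% simple interest,
# tested against the 5% and 15% ceilings cited in the text for 1999.
principal, rate, years = 1_000, 0.12, 2
print(simple_interest(principal, rate, years))   # 240.0
print(is_usurious(rate, state_cap=0.05))         # True
print(is_usurious(rate, state_cap=0.15))         # False
```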

Islam and Usury

One of the striking developments in the twentieth century is the creation of a system of Islamic banks that do not lend money usuriously. The Qur’an forbids usury, or riba, and the prohibition of lending for interest without risk to the lender is expanded upon by a number of Hadith. Muslim scholars have followed the same Aristotelian path of analysis as did Christian theologians to understand the divine hostility to usury. In particular, they stress the consumable nature of money. This stress on consumption comes naturally, since the Qur’an says “O you who believe! Eat not Ribâ (usury)” (Al Imran 3:130).

One of the Islamic responses to the West in the past fifty years has been the rapid growth of banks serving Muslims that do not contract for a predetermined amount over and above the principal. These banks must share the risk with the borrower, and they must not make money from money.

Conclusion

Most nations continue to regulate usury, which is now, in the West, defined as contracting to charge interest on a loan without risk to the lender at an interest rate greater than that set by the law. However, moral arguments are still being made about whether or not contracting for any interest is permissible. Because both the Bible and the Qur’an can be read as forbidding usury, there will always be moral, as well as social and economic, reasons for arguing about the permissibility of lending at interest.

References

Bentham, Jeremy. Defence of Usury: Shewing the Impolicy of the Present Legal Restraints on Pecuniary Bargains; In a Series of Letters to a Friend. To Which is Added a Letter to Adam Smith, Esq., LL.D., on the Discouragements Opposed by the Above Restraints to the Progress of Inventive Industry. Fourth edition, 1818. http://www.econlib.org/library/Bentham/bnthUs.html

Divine, Thomas F. Interest: An Historical and Analytical Study of Economics and Modern Ethics. Milwaukee: Marquette University Press, 1959.

Gordon, Barry. Economic Analysis before Adam Smith: Hesiod to Lessius. London: Macmillan, 1975.

Jones, Norman. God and the Moneylenders: Usury and the Law in Early Modern England. Oxford: Blackwell, 1989.

Kerridge, Eric. Usury, Interest and the Reformation. Aldershot, Hants. and Burlington, VT: Ashgate, 2002.

Nelson, Benjamin. The Idea of Usury: From Tribal Brotherhood to Universal Otherhood. Chicago: University of Chicago Press, 1969.

Noonan, John T. The Scholastic Analysis of Usury. Cambridge, MA: Harvard University Press, 1957.

Rockoff, Hugh. Prodigals and Projectors: An Economic History of Usury Laws in the United States from Colonial Times to 1900. NBER Working Paper no. 9742, 2003. http://www.nber.org/papers/w9742

Savelli, Rodolfo. “Diritto Romano e Teologia Riformata: du Moulin di Fronte al Problema dell’Interesse del Denaro.” Materialli per una Storia della Cultura Giuridica 23, no. 2 (1993): 291-324.

Thireau, Jean-Louis. Charles du Moulin, 1500-1566: Etude sur les sources, la methode, les idee politiques et economiques d’un juriste de la Renaissance. Geneva: Droz, 1980.

Citation: Jones, Norman. “Usury”. EH.Net Encyclopedia, edited by Robert Whaples. February 10, 2008. URL http://eh.net/encyclopedia/usury/

An Overview of the Economic History of Uruguay since the 1870s

Luis Bértola, Universidad de la República — Uruguay

Uruguay’s Early History

Without silver or gold, without valuable species, scarcely peopled by gatherers and fishers, the Eastern Strand of the Uruguay River (Banda Oriental was the colonial name; República Oriental del Uruguay is the official name today) was, in the sixteenth and seventeenth centuries, distant and unattractive to the European nations that conquered the region. The major export product was the leather of wild descendants of cattle introduced in the early 1600s by the Spaniards. As cattle preceded humans, the state preceded society: Uruguay’s first settlement was Colonia del Sacramento, a Portuguese military fortress founded in 1680, placed precisely across from Buenos Aires, Argentina. Montevideo, also a fortress, was founded by the Spaniards in 1724. Uruguay was on the border between the Spanish and Portuguese empires, a feature which would be decisive for the creation, with strong British involvement, in 1828-1830, of an independent state.

Montevideo had the best natural harbor in the region, and rapidly became the end-point of the trans-Atlantic routes into the region, the base for a strong commercial elite, and for the Spanish navy in the region. During the first decades after independence, however, Uruguay was plagued by political instability, precarious institution building and economic retardation. Recurrent civil wars with intensive involvement by Britain, France, Portugal-Brazil and Argentina, made Uruguay a center for international conflicts, the most important being the Great War (Guerra Grande), which lasted from 1839 to 1851. At its end Uruguay had only about 130,000 inhabitants.

“Around the middle of the nineteenth century, Uruguay was dominated by the latifundium, with its ill-defined boundaries and enormous herds of native cattle, from which only the hides were exported to Great Britain and part of the meat, as jerky, to Brazil and Cuba. There was a shifting rural population that worked on the large estates and lived largely on the parts of beef carcasses that could not be marketed abroad. Often the landowners were also the caudillos of the Blanco or Colorado political parties, the protagonists of civil wars that a weak government was unable to prevent” (Barrán and Nahum, 1984, 655). This picture still holds, even if it has been excessively stylized, neglecting the importance of subsistence or domestic-market oriented peasant production.

Economic Performance in the Long Run

Despite its precarious beginnings, Uruguay’s per capita gross domestic product (GDP) growth from 1870 to 2002 shows an amazing persistence, with the long-run rate averaging around one percent per year. However, this apparent stability hides some important shifts. As shown in Figure 1, both GDP and population grew much faster before the 1930s; from 1930 to 1960 immigration vanished and population grew much more slowly, while decades of GDP stagnation and fast growth alternated; after the 1960s Uruguay became a net-emigration country, with low natural growth rates and still-spasmodic GDP growth.
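To see what this persistence implies cumulatively, here is a minimal Python sketch of the compounding arithmetic; the roughly one-percent average rate and the 1870-2002 span are taken from the text, and the resulting multiple is simply what such a constant rate would imply, not an independent estimate.

```python
# Compounding arithmetic implied by a long-run average per capita growth rate
# of roughly one percent per year over 1870-2002 (figures taken from the text).
years = 2002 - 1870            # 132 years
avg_growth = 0.01              # approximate long-run average annual growth rate

cumulative_multiple = (1 + avg_growth) ** years
print(round(cumulative_multiple, 2))   # about 3.7, i.e., per capita GDP nearly quadruples
```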

GDP growth shows a pattern characterized by Kuznets-like swings (Bértola and Lorenzo 2004), with extremely destructive downward phases, as shown in Table 1. This cyclical pattern is correlated with movements of the terms of trade (the relative price of exports versus imports), world demand and international capital flows. In the expansive phases exports performed well, due to increased demand and/or positive terms-of-trade shocks (1880s, 1900s, 1920s, 1940s and even during the Mercosur years from 1991 to 1998). Capital flows would sometimes follow these booms and prolong the cycle, or even be a decisive force in setting the cycle up, as financial flows were in the 1970s and 1990s. The usual outcome, however, has been an overvalued currency, which blurred the debt problem and threatened the balance of trade by overpricing exports. Crises have been the result of a combination of changing trade conditions, devaluation and over-indebtedness, as in the 1880s, early 1910s, late 1920s, 1950s, early 1980s and late 1990s.

Figure 1: Population and per capita GDP of Uruguay, 1870-2002 (1913=100)

Table 1: Swings in the Uruguayan Economy, 1870-2003

Period Per capita GDP fall (%) Length of recession (years) Time to pre-crisis levels (years) Time to next crisis (years)
1872-1875 26 3 15 16
1888-1890 21 2 19 25
1912-1915 30 3 15 19
1930-1933 36 3 17 24-27
1954/57-59 9 2-5 18-21 27-24
1981-1984 17 3 11 17
1998-2003 21 5

Sources: See Figure 1.

Besides its cyclical movement, the terms of trade showed a sharp positive trend in 1870-1913, a strongly fluctuating pattern around similar levels in 1913-1960 and a deteriorating trend since then. While the volume of exports grew quickly up to the 1920s, it stagnated in 1930-1960 and started to grow again after 1970. As a result, the purchasing power of exports grew fourfold in 1870-1913, fluctuated along with the terms of trade in 1930-1960, and exhibited moderate growth in 1970-2002.
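The purchasing power of exports referred to here is conventionally the export volume index multiplied by the terms-of-trade index. The sketch below illustrates that relationship with hypothetical index numbers chosen only for the arithmetic; they are not Uruguay’s actual series.

```python
# Purchasing power of exports = export volume index x terms-of-trade index.
# The index values below are hypothetical, used only to show how rising volumes
# and improving terms of trade compound into a fourfold gain in purchasing power.

def purchasing_power(volume_index, terms_of_trade_index, base=100.0):
    return volume_index * terms_of_trade_index / base

pp_1870 = purchasing_power(100, 100)   # base year: 100.0
pp_1913 = purchasing_power(250, 160)   # hypothetical later-year indices: 400.0 (a fourfold rise)
print(pp_1870, pp_1913)
```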

The Uruguayan economy was very open to trade in the period up to 1913, featuring high export shares, which naturally declined as the rapidly growing population filled in rather empty areas. In 1930-1960 the economy was increasingly and markedly closed to international trade, but since the 1970s the economy opened up to trade again. Nevertheless, exports, which earlier were mainly directed to Europe (beef, wool, leather, linseed, etc.), were increasingly oriented to Argentina and Brazil, in the context of bilateral trade agreements in the 1970s and 1980s and of Mercosur (the trading zone encompassing Argentina, Brazil, Paraguay and Uruguay) in the 1990s.

While industrial output kept pace with agrarian export-led growth during the first globalization boom before World War I, the industrial share in GDP increased in 1930-54, and was mainly domestic-market orientated. Deindustrialization has been profound since the mid-1980s. The service sector was always large: focusing on commerce, transport and traditional state bureaucracy during the first globalization boom; focusing on health care, education and social services, during the import-substituting industrialization (ISI) period in the middle of the twentieth century; and focusing on military expenditure, tourism and finance since the 1970s.

The income distribution changed markedly over time. During the first globalization boom before World War I, an already uneven distribution of income and wealth seems to have worsened, due to massive immigration and increasing demand for land, both rural and urban. However, by the 1920s the relative prices of land and labor changed their previous trend, reducing income inequality. The trend later favored industrialization policies, democratization, introduction of wage councils, and the expansion of the welfare state based on an egalitarian ideology. Inequality diminished in many respects: between sectors, within sectors, between genders and between workers and pensioners. While the military dictatorship and the liberal economic policy implemented since the 1970s initiated a drastic reversal of the trend toward economic equality, the globalizing movements of the 1980s and 1990s under democratic rule didn’t increase equality. Thus, inequality remains at the higher levels reached during the period of dictatorship (1973-85).

Comparative Long-run Performance

If the stable long-run rate of Uruguayan per capita GDP growth hides important internal transformations, Uruguay’s changing position in the international scene is even more remarkable. During the first globalization boom the world became more unequal: the United States forged ahead as the world leader (nearly followed by other settler economies); Asia and Africa lagged far behind. Latin America showed a confusing map, in which countries such as Argentina and Uruguay performed rather well, while others, such as the Andean region, lagged far behind (Bértola and Williamson 2003). Uruguay’s strong initial position tended to deteriorate in relation to the successful core countries during the late 1800s, as shown in Figure 2. This trend of negative relative growth was somewhat weak during the first half of the twentieth century, deepened significantly during the 1960s as the import-substituting industrialization model became exhausted, and has continued since the 1970s, despite policies favoring increased integration into the global economy.

Figure 2: Per capita GDP of Uruguay relative to four core countries, 1870-2002

If school enrollment and literacy rates are reasonable proxies for human capital, in the late 1800s both Argentina and Uruguay had a great handicap in relation to the United States, as shown in Table 2. The gap in literacy rates tended to disappear — as did this proxy’s ability to measure comparative levels of human capital. Nevertheless, school enrollment, which includes college-level and technical education, showed a catching-up trend until the 1960s, but reverted afterwards.

The gap in life-expectancy at birth has always been much smaller than the other development indicators. Nevertheless, some trends are noticeable: the gap increased in 1900-1930; decreased in 1930-1950; and increased again after the 1970s.

Table 2: Uruguayan Performance in Comparative Perspective, 1870-2000 (US = 100)

1870 1880 1890 1900 1910 1920 1930 1940 1950 1960 1970 1980 1990 2000

GDP per capita
Uruguay 101 65 63 27 32 27 33 27 26 24 19 18 15 16
Argentina 63 34 38 31 32 29 25 25 24 21 15 16
Brazil 23 8 8 8 8 8 7 9 9 13 11 10
Latin America 13 12 13 10 9 9 9 6 6
USA 100 100 100 100 100 100 100 100 100 100 100 100 100 100

Literacy rates
Uruguay 57 65 72 79 85 91 92 94 95 97 99
Argentina 57 65 72 79 85 91 93 94 94 96 98
Brazil 39 38 37 42 46 51 61 69 76 81 86
Latin America 28 30 34 37 42 47 56 65 71 77 83
USA 100 100 100 100 100 100 100 100 100 100 100

School enrollment
Uruguay 23 31 31 30 34 42 52 46 43
Argentina 28 41 42 36 39 43 55 44 45
Brazil 12 11 12 14 18 22 30 42
Latin America
USA 100 100 100 100 100 100 100 100 100

Life expectancy at birth
Uruguay 102 100 91 85 91 97 97 97 95 96 96
Argentina 81 85 86 90 88 90 93 94 95 96 95
Brazil 60 60 56 58 58 63 79 83 85 88 88
Latin America 65 63 58 58 59 63 71 77 81 88 87
USA 100 100 100 100 100 100 100 100 100 100 100

Sources: Per capita GDP: Maddison (2001) and Astorga, Bergés and FitzGerald (2003). Literacy rates and life expectancy: Astorga, Bergés and FitzGerald (2003). School enrollment: Bértola and Bertoni (1998).
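The “US = 100” entries in Table 2 are relative indices. The following minimal sketch shows that calculation; the absolute values used are hypothetical placeholders rather than the underlying Maddison or Astorga-Bergés-FitzGerald data.

```python
# Relative index with the United States as benchmark (US = 100).
# The absolute figures are hypothetical placeholders, not the source data.

def relative_to_us(country_value, us_value):
    return 100 * country_value / us_value

print(round(relative_to_us(country_value=2_500, us_value=10_000)))  # 25
```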

Uruguay during the First Globalization Boom: Challenge and Response

During the post-Great-War reconstruction after 1851, Uruguayan population grew rapidly (fueled by high natural rates and immigration) and so did per capita output. Productivity grew due to several causes including: the steam ship revolution, which critically reduced the price spread between Europe and America and eased access to the European market; railways, which contributed to the unification of domestic markets and reduced domestic transport costs; the diffusion and adaptation to domestic conditions of innovations in cattle-breeding and services; a significant reduction in transaction costs, related to a fluctuating but noticeable process of institutional building and strengthening of the coercive power of the state.

Wool and woolen products, hides and leather were exported mainly to Europe; salted beef (tasajo) to Brazil and Cuba. Livestock-breeding (both cattle and sheep) was intensive in natural resources and dominated by large estates. By the 1880s, the agrarian frontier was exhausted, land properties were fenced and property rights strengthened. Labor became abundant and concentrated in urban areas, especially around Montevideo’s harbor, which played an important role as a regional (supranational) commercial center. By 1908, it contained 40 percent of the nation’s population, which had risen to more than a million inhabitants, and provided the main part of Uruguay’s services, civil servants and the weak and handicraft-dominated manufacturing sector.

By the 1910s, Uruguayan competitiveness started to weaken. As the benefits of the old technological paradigm were eroding, the new one was not particularly beneficial for resource-intensive countries such as Uruguay. International demand shifted away from primary consumption, the population of Europe grew slowly and European countries struggled for self-sufficiency in primary production in a context of soaring world supply. Beginning in the 1920s, the cattle-breeding sector showed a very poor performance, due to lack of innovation away from natural pastures. In the 1930s, its performance deteriorated mainly due to unfavorable international conditions. Export volumes stagnated until the 1970s, while purchasing power fluctuated strongly following the terms of trade.

Inward-looking Growth and Structural Change

The Uruguayan economy grew inwards until the 1950s. The multiple exchange rate system was the main economic policy tool. Agrarian production was re-oriented towards wool, crops, dairy products and other industrial inputs, away from beef. The manufacturing industry grew rapidly and diversified significantly, with the help of protectionist tariffs. It was light, and lacked capital goods or technology-intensive sectors. Productivity growth hinged upon technology transfers embodied in imported capital goods and an intensive domestic adaptation process of mature technologies. Domestic demand also grew through an expanding public sector and the expansion of a corporate welfare state. The terms of trade substantially shaped protectionism, productivity growth and domestic demand. The government raised revenue by manipulating exchange rates, so that when export prices rose the state had a greater capacity to protect the manufacturing sector through low exchange rates for imports of capital goods, raw materials and fuel, and to spur productivity increases through imports of capital equipment. Protection, in turn, allowed industry to pay higher wages and thus expand domestic demand.

However, rent-seeking industries searching for protection and a weak clientelist state, crowded with civil servants recruited in exchange for political favors to the political parties, directed structural change towards a closed economy and inefficient management. The obvious limits to inward-looking growth in a country of only about two million inhabitants were exacerbated in the late 1950s as the terms of trade deteriorated. The clientelist political system, which both traditional parties had created while the state expanded at the national and local level, was now unable to absorb the increasing social conflicts, colored by sharp ideological confrontation, in a context of stagnation and huge fiscal deficits.

Re-globalization and Regional Integration

The dictatorship (1973-1985) started a period of increasing openness to trade and deregulation which has persisted until the present. Dynamic integration into the world market is still incomplete, however. An attempt to return to cattle-breeding exports, as the engine of growth, was hindered by the oil crises and the ensuing European response, which restricted meat exports to that destination. The export sector was re-orientated towards “non-traditional exports” — i.e., exports of industrial goods made of traditional raw materials, to which low-quality and low-wage labor was added. Exports were also stimulated by means of strong fiscal exemptions and negative real interest rates and were re-orientated to the regional market (Argentina and Brazil) and to other developing regions. At the end of the 1970s, this policy was replaced by the monetarist approach to the balance of payments. The main goal was to defeat inflation (which had continued above 50% since the 1960s) through deregulation of foreign trade and a pre-announced exchange rate, the “tablita.” A strong wave of capital inflows led to a transitory success, but the Uruguayan peso became more and more overvalued, thus limiting exports, encouraging imports and deepening the chronic balance of trade deficit. The “tablita” remained dependent on increasing capital inflows and obviously collapsed when the risk of a huge devaluation became real. Recession and the debt crisis dominated the scene of the early 1980s.

Democratic regimes since 1985 have combined natural resource intensive exports to the region and other emergent markets, with a modest intra-industrial trade mainly with Argentina. In the 1990s, once again, Uruguay was overexposed to financial capital inflows which fueled a rather volatile growth period. However, by the year 2000, Uruguay’s position in relation to the leaders of the world economy, as measured by per capita GDP, real wages, equity and education coverage, was much worse than it had been fifty years earlier.

Medium-run Prospects

In the 1990s Mercosur as a whole and each of its member countries exhibited a strong trade deficit with non-Mercosur countries. This was the result of a growth pattern fueled by and highly dependent on foreign capital inflows, combined with the traditional specialization in commodities. The whole Mercosur project is still mainly oriented toward price competitiveness. Nevertheless, the strongly divergent macroeconomic policies within Mercosur during the deep Argentine and Uruguayan crises at the beginning of the twenty-first century seem to have given way to increased coordination between Argentina and Brazil, thus making the region a more stable environment.

The big question is whether the ongoing political revival of Mercosur will be able to achieve convergent macroeconomic policies, success in international trade negotiations, and, above all, achievements in developing productive networks which may allow Mercosur to compete outside its home market with knowledge-intensive goods and services. On that hangs Uruguay’s chance to break away from its long-run divergent siesta.

References

Astorga, Pablo, Ame R. Bergés and Valpy FitzGerald. “The Standard of Living in Latin America during the Twentieth Century.” University of Oxford Discussion Papers in Economic and Social History 54 (2004).

Barrán, José P. and Benjamín Nahum. “Uruguayan Rural History.” Hispanic American Historical Review, 1984.

Bértola, Luis. The Manufacturing Industry of Uruguay, 1913-1961: A Sectoral Approach to Growth, Fluctuations and Crisis. Publications of the Department of Economic History, University of Göteborg, 61; Institute of Latin American Studies of Stockholm University, Monograph No. 20, 1990.

Bértola, Luis and Reto Bertoni. “Educación y aprendizaje en escenarios de convergencia y divergencia.” Documento de Trabajo, no. 46, Unidad Multidisciplinaria, Facultad de Ciencias Sociales, Universidad de la República, 1998.

Bértola, Luis and Fernando Lorenzo. “Witches in the South: Kuznets-like Swings in Argentina, Brazil and Uruguay since the 1870s.” In The Experience of Economic Growth, edited by J.L. van Zanden and S. Heikenen. Amsterdam: Aksant, 2004.

Bértola, Luis and Gabriel Porcile. “Argentina, Brasil, Uruguay y la Economía Mundial: una aproximación a diferentes regímenes de convergencia y divergencia.” In Ensayos de Historia Económica by Luis Bertola. Montevideo: Uruguay en la región y el mundo, 2000.

Bértola, Luis and Jeffrey Williamson. “Globalization in Latin America before 1940.” National Bureau of Economic Research Working Paper, no. 9687 (2003).

Bértola, Luis and others. El PBI uruguayo 1870-1936 y otras estimaciones. Montevideo, 1998.

Maddison, A. Monitoring the World Economy, 1820-1992. Paris: OECD, 1995.

Maddison, A. The World Economy: A Millennial Perspective. Paris: OECD, 2001.

Citation: Bertola, Luis. “An Overview of the Economic History of Uruguay since the 1870s”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/article/Bertola.Uruguay.final

Urban Decline (and Success) in the United States

Fred Smith and Sarah Allen, Davidson College

Introduction

Any discussion of urban decline must begin with a difficult task – defining what is meant by urban decline. Urban decline (or “urban decay”) is a term that evokes images of abandoned homes, vacant storefronts, and crumbling infrastructure, and if asked to name a city that has suffered urban decline, people often think of a city from the upper Midwest like Cleveland, Detroit, or Buffalo. Yet, while nearly every American has seen or experienced urban decline, the term is one that is descriptive and not easily quantifiable. Further complicating the story is this simple fact – metropolitan areas, like greater Detroit, may experience the symptoms of severe urban decline in one neighborhood while remaining economically robust in others. Indeed, the city of Detroit is a textbook case of urban decline, but many of the surrounding communities in metropolitan Detroit are thriving. An additional complication comes from the fact that modern American cities – cities like Dallas, Charlotte, and Phoenix – don’t look much like their early twentieth century counterparts. Phoenix of the early twenty-first century is an economically vibrant city, yet the urban core of Phoenix looks very, very different from the urban core found in “smaller” cities like Boston or San Francisco.[1] It is unlikely that a weekend visitor to downtown Phoenix would come away with the impression that Phoenix is a rapidly growing city, for downtown Phoenix does not contain the housing, shopping, or recreational venues that are found in downtown San Francisco or Boston.

There isn’t a single variable that will serve as a perfect choice for measuring urban decline, but this article will take an in-depth look at urban decline by focusing on the best measure of a city’s well-being – population. In order to provide a thorough understanding of urban decline, this article contains three additional sections. The next section employs data from a handful of sources to familiarize the reader with the location and severity of urban decline in the United States. Section three is dedicated to explaining the causes of urban decline in the U.S. Finally, the fourth section looks at the future of cities in the United States and provides some concluding remarks.

Urban Decline in the United States – Quantifying the Population Decline

Between 1950 and 2000 the population of the United States increased by approximately 120 million people, from 152 million to 272 million. Despite the dramatic increase in population experienced by the country as a whole, different cities and states experienced radically different rates of growth. Table 1 shows the population figures for a handful of U.S. cities for the years 1950 to 2000. (It should be noted that these figures are population totals for the cities in the list, not for the associated metropolitan areas.)

Table 1: Population for Selected U.S. Cities, 1950-2000

City 1950 1960 1970 1980 1990 2000 % Change 1950-2000
New York 7,891,957 7,781,984 7,895,563 7,071,639 7,322,564 8,008,278 1.5
Philadelphia 2,071,605 2,002,512 1,949,996 1,688,210 1,585,577 1,517,550 -26.7
Boston 801,444 697,177 641,071 562,994 574,283 589,141 -26.5
Chicago 3,620,962 3,550,404 3,369,357 3,005,072 2,783,726 2,896,016 -20.0
Detroit 1,849,568 1,670,144 1,514,063 1,203,339 1,027,974 951,270 -48.6
Cleveland 914,808 876,050 750,879 573,822 505,616 478,403 -47.7
Kansas City 456,622 475,539 507,330 448,159 435,146 441,545 -3.3
Denver 415,786 493,887 514,678 492,365 467,610 554,636 33.4
Omaha 251,117 301,598 346,929 314,255 335,795 390,007 55.3
Los Angeles 1,970,358 2,479,015 2,811,801 2,966,850 3,485,398 3,694,820 87.5
San Francisco 775,357 740,316 715,674 678,974 723,959 776,733 0.2
Seattle 467,591 557,087 530,831 493,846 516,259 563,374 20.5
Houston 596,163 938,219 1,233,535 1,595,138 1,630,553 1,953,631 227.7
Dallas 434,462 679,684 844,401 904,078 1,006,877 1,188,580 173.6
Phoenix 106,818 439,170 584,303 789,704 983,403 1,321,045 1136.7
New Orleans 570,445 627,525 593,471 557,515 496,938 484,674 -15.0
Atlanta 331,314 487,455 495,039 425,022 394,017 416,474 25.7
Nashville 174,307 170,874 426,029 455,651 488,371 545,524 213.0
Washington 802,178 763,956 756,668 638,333 606,900 572,059 -28.7
Miami 249,276 291,688 334,859 346,865 358,548 362,470 45.4
Charlotte 134,042 201,564 241,178 314,447 395,934 540,828 303.5

Source: U.S. Census Bureau.

Several trends emerge from the data in Table 1. The cities in the table are clustered together by region, and the cities at the top of the table – cities from the Northeast and Midwest – experience no significant population growth (New York City) or dramatic population loss (Detroit and Cleveland). These cities’ experiences stand in stark contrast to those of the cities located in the South and West – cities found farther down the list. Phoenix, Houston, Dallas, Charlotte, and Nashville all experience triple-digit population increases during the five decades from 1950 to 2000. Figure 1 displays this information even more dramatically:

Figure 1: Percent Change in Population, 1950 – 2000

Source: U.S. Census Bureau.

While Table 1 and Figure 1 clearly display the population trends within these cities, they do not provide any information about what was happening to the metropolitan areas in which these cities are located. Table 2 fills this gap. (Please note – these metropolitan areas do not correspond directly to the metropolitan areas identified by the U.S. Census Bureau. Rather, Jordan Rappaport – an economist at the Kansas City Federal Reserve Bank – created these metropolitan areas for his 2005 article “The Shared Fortunes of Cities and Suburbs.”)

Table 2: Population of Selected Metropolitan Areas, 1950 to 2000

Metropolitan Area 1950 1960 1970 2000 Percent Change 1950 to 2000
New York-Newark-Jersey City, NY 13,047,870 14,700,000 15,812,314 16,470,048 26.2
Philadelphia, PA 3,658,905 4,175,988 4,525,928 4,580,167 25.2
Boston, MA 3,065,344 3,357,607 3,708,710 4,001,752 30.5
Chicago-Gary, IL-IN 5,612,248 6,805,362 7,606,101 8,573,111 52.8
Detroit, MI 3,150,803 3,934,800 4,434,034 4,366,362 38.6
Cleveland, OH 1,640,319 2,061,668 2,238,320 1,997,048 21.7
Kansas City, MO-KS 972,458 1,232,336 1,414,503 1,843,064 89.5
Denver, CO 619,774 937,677 1,242,027 2,414,649 289.6
Omaha, NE 471,079 568,188 651,174 803,201 70.5
Los Angeles-Long Beach, CA 4,367,911 6,742,696 8,452,461 12,365,627 183.1
San Francisco-Oakland, CA 2,531,314 3,425,674 4,344,174 6,200,867 145.0
Seattle, WA 920,296 1,191,389 1,523,601 2,575,027 179.8
Houston, TX 1,021,876 1,527,092 2,121,829 4,540,723 344.4
Dallas, TX 780,827 1,119,410 1,555,950 3,369,303 331.5
Phoenix, AZ NA 663,510 967,522 3,251,876 390.1*
New Orleans, LA 754,856 969,326 1,124,397 1,316,510 74.4
Atlanta, GA 914,214 1,224,368 1,659,080 3,879,784 324.4
Nashville, TN 507,128 601,779 704,299 1,238,570 144.2
Washington, DC 1,543,363 2,125,008 2,929,483 4,257,221 175.8
Miami, FL 579,017 1,268,993 1,887,892 3,876,380 569.5
Charlotte, NC 751,271 876,022 1,028,505 1,775,472 136.3

* The percentage change is for the period from 1960 to 2000.

Source: Rappaport; http://www.kc.frb.org/econres/staff/jmr.htm

Table 2 highlights several of the difficulties in conducting a meaningful discussion about urban decline. First, by glancing at the metro population figures for Cleveland and Detroit, it becomes clear that while these cities were experiencing severe urban decay, the suburbs surrounding them were not. The Detroit metropolitan area grew more rapidly than the Boston, Philadelphia, or New York metro areas, and even the Cleveland metro area experienced growth between 1950 and 2000. Next, we can see from Tables 1 and 2 that some of the cities experiencing dramatic growth between 1950 and 2000 did not enjoy similar increases in population at the metro level. The Phoenix, Charlotte, and Nashville metro areas experienced tremendous growth, but their metro growth rates were not nearly as large as their city growth rates. This raises an important question – did these cities experience tremendous growth rates because the population was growing rapidly or because the cities were annexing large amounts of land from the surrounding suburbs? Table 3 helps to answer this question. In Table 3, land area, measured in square miles, is provided for each of the cities initially listed in Table 1. The data in Table 3 clearly indicate that Nashville and Charlotte, as well as Dallas, Phoenix, and Houston, owe some of their growth to the expansion of their physical boundaries. Charlotte, Phoenix, and Nashville are particularly obvious examples of this phenomenon, for each city increased its physical footprint by over seven hundred percent between 1950 and 2000.

Table 3: Land Area for Selected U.S. Cities, 1950 – 2000 (square miles)

City 1950 1960 1970 2000 Percent Change 1950 to 2000
New York, NY 315.1 300 299.7 303.3 -3.74
Philadelphia, PA 127.2 129 128.5 135.1 6.21
Boston, MA 47.8 46 46 48.4 1.26
Chicago, IL 207.5 222 222.6 227.1 9.45
Detroit, MI 139.6 138 138 138.8 -0.57
Cleveland, OH 75 76 75.9 77.6 3.47
Kansas City, MO 80.6 130 316.3 313.5 288.96
Denver, CO 66.8 68 95.2 153.4 129.64
Omaha, NE 40.7 48 76.6 115.7 184.28
Los Angeles, CA 450.9 455 463.7 469.1 4.04
San Francisco, CA 44.6 45 45.4 46.7 4.71
Seattle, WA 70.8 82 83.6 83.9 18.50
Houston, TX 160 321 433.9 579.4 262.13
Dallas, TX 112 254 265.6 342.5 205.80
Phoenix, AZ 17.1 187 247.9 474.9 2677.19
New Orleans, LA 199.4 205 197.1 180.6 -9.43
Atlanta, GA 36.9 136 131.5 131.7 256.91
Nashville, TN 22 29 507.8 473.3 2051.36
Washington, DC 61.4 61 61.4 61.4 0.00
Miami, FL 34.2 34 34.3 35.7 4.39
Charlotte, NC 30 64.8 76 242.3 707.67

Sources: Rappaport, http://www.kc.frb.org/econres/staff/jmr.htm; Gibson, Population of the 100 Largest Cities.
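
The distinction between annexation-driven growth and genuine population growth is easy to check with a little arithmetic. The sketch below is a minimal Python illustration using the land-area figures from Table 3; the density_change helper is a hypothetical convenience to which a reader could supply the city population figures from Table 1 to see whether residents per square mile actually rose.

```python
# A minimal sketch (Python) separating boundary expansion from population
# growth. The land areas (square miles) come from Table 3; the `density_change`
# helper is a hypothetical convenience for use with the city populations
# reported in Table 1.

def pct_change(start, end):
    """Percent change from start to end."""
    return 100.0 * (end - start) / start

# (city, land area in 1950, land area in 2000), from Table 3
land_area = [
    ("Phoenix, AZ",   17.1, 474.9),
    ("Nashville, TN", 22.0, 473.3),
    ("Charlotte, NC", 30.0, 242.3),
]

for city, a1950, a2000 in land_area:
    print(f"{city}: land area grew {pct_change(a1950, a2000):.1f}% between 1950 and 2000")

def density_change(pop_1950, pop_2000, area_1950, area_2000):
    """Percent change in residents per square mile; a positive value means the
    population grew faster than the city's boundaries did."""
    return pct_change(pop_1950 / area_1950, pop_2000 / area_2000)
```

Running it confirms the footprint growth cited above: roughly 2,677 percent for Phoenix, 2,051 percent for Nashville, and 708 percent for Charlotte.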

Taken together, Tables 1 through 3 paint a clear picture of what has happened in urban areas in the United States between 1950 and 2000: Cities in the Southern and Western U.S. have experienced relatively high rates of growth when they are compared to their neighbors in the Midwest and Northeast. And, as a consequence of this, central cities in the Midwest and Northeast have remained the same size or they have experienced moderate to severe urban decay. But, to complete this picture, it is worth considering some additional data. Table 4 presents regional population and housing data for the United States during the period from 1950 to 2000.

Table 4: Regional Population and Housing Data for the U.S., 1950 – 2000

Category | 1950 | 1960 | 1970 | 1980 | 1990 | 2000
Population Density (persons per square mile) | 50.9 | 50.7 | 57.4 | 64 | 70.3 | 79.6
Population by Region
  West | 19,561,525 | 28,053,104 | 34,804,193 | 43,172,490 | 52,786,082 | 63,197,932
  South | 47,197,088 | 54,973,113 | 62,795,367 | 75,372,362 | 85,445,930 | 100,236,820
  Midwest | 44,460,762 | 51,619,139 | 56,571,663 | 58,865,670 | 59,668,632 | 64,392,776
  Northeast | 39,477,986 | 44,677,819 | 49,040,703 | 49,135,283 | 50,809,229 | 53,594,378
Population by Region (percent of total)
  West | 13 | 15.6 | 17.1 | 19.1 | 21.2 | 22.5
  South | 31.3 | 30.7 | 30.9 | 33.3 | 34.4 | 35.6
  Midwest | 29.5 | 28.8 | 27.8 | 26 | 24 | 22.9
  Northeast | 26.2 | 24.9 | 24.1 | 21.7 | 20.4 | 19
Population Living in Non-Metropolitan Areas (millions) | 66.2 | 65.9 | 63 | 57.1 | 56 | 55.4
Population Living in Metropolitan Areas (millions) | 84.5 | 113.5 | 140.2 | 169.4 | 192.7 | 226
Percent in Suburbs in Metropolitan Area | 23.3 | 30.9 | 37.6 | 44.8 | 46.2 | 50
Percent in Central City in Metropolitan Area | 32.8 | 32.3 | 31.4 | 30 | 31.3 | 30.3
Percent Living in the Ten Largest Cities | 14.4 | 12.1 | 10.8 | 9.2 | 8.8 | 8.5
Percentage Minority by Region
  West | NA | NA | NA | 26.5 | 33.3 | 41.6
  South | NA | NA | NA | 25.7 | 28.2 | 34.2
  Midwest | NA | NA | NA | 12.5 | 14.2 | 18.6
  Northeast | NA | NA | NA | 16.6 | 20.6 | 26.6
Housing Units by Region
  West | 6,532,785 | 9,557,505 | 12,031,802 | 17,082,919 | 20,895,221 | 24,378,020
  South | 13,653,785 | 17,172,688 | 21,031,346 | 29,419,692 | 36,065,102 | 42,382,546
  Midwest | 13,745,646 | 16,797,804 | 18,973,217 | 22,822,059 | 24,492,718 | 26,963,635
  Northeast | 12,051,182 | 14,798,360 | 16,642,665 | 19,086,593 | 20,810,637 | 22,180,440

Source: Hobbs and Stoops (2002).

There are several items of particular interest in Table 4. Every region in the United States becomes more diverse between 1980 and 2000: no region has a minority population share greater than 26.5 percent in 1980, but by 2000 only the Midwest remains below that level. The U.S. population becomes increasingly urbanized over time, yet the percentage of Americans who live in central cities remains nearly constant. Thus, it is growth in the number of Americans living in suburban communities that has fueled the dramatic increase in “urban” residents. This finding is reinforced by looking at the figures for average population density for the United States as a whole, the figures listing the numbers of Americans living in metropolitan versus non-metropolitan areas, and the figures listing the percentage of Americans living in the ten largest cities in the United States.

Other Measures of Urban Decline

While the population decline documented in the first part of this section suggests that cities in the Northeast and Midwest experienced severe urban decline, anyone who has visited Detroit and Boston can tell you that urban decline has affected these cities’ downtowns in very different ways. The central city in Boston is, for the most part, economically vibrant. A visitor to Boston would find manicured public spaces as well as thriving retail, housing, and commercial sectors. Detroit’s downtown, by contrast, is still scarred by vacant office towers, abandoned retail space, and relatively little housing. Furthermore, the city’s public spaces do not compare favorably to those of Boston. While the leaders of Detroit have made some needed improvements to the city’s downtown in the past several years, the central city remains a mere shadow of its former self. Thus, the losses of population experienced by Detroit and Boston do not tell the full story about how urban decline has affected these cities. Both have lost population, yet Detroit has lost a great deal more: it no longer possesses a well-functioning urban economy.

To date, there have been relatively few attempts to quantify the loss of economic vitality in cities afflicted by urban decay. This is due, in part, to the complexity of the problem. There are few reliable historical measures of economic activity available at the city level. However, economists and other social scientists are beginning to better understand the process and the consequences of severe urban decline.

Economists Edward Glaeser and Joseph Gyourko (2005) developed a model that thoroughly explains the process of urban decline. One of their principal insights is that the durable nature of housing means that the process of urban decline will not mirror the process of urban expansion. In a growing city, the demand for housing is met through the construction of new dwellings. When a city faces a reduction in economic productivity and the resulting reduction in the demand for labor, workers begin to leave the city. Yet, when a city’s population begins to decline, housing units do not magically disappear from the urban landscape. Thus, in Glaeser and Gyourko’s model a declining city is characterized by a durable stock of housing interacting with reduced housing demand, which produces a rapid decline in the real price of housing. Empirical evidence supports the model’s predictions: in cities like Cleveland, Detroit, and Buffalo the real price of housing declined in the second half of the twentieth century. An important implication of the Glaeser and Gyourko model is that declining housing prices are likely to attract individuals who are poor and who have acquired relatively little human capital. The presence of these workers makes it difficult for a declining city like Detroit to reverse its economic decline, for it becomes relatively difficult to attract businesses that need workers with high levels of human capital.
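
To make the asymmetry concrete, here is a minimal sketch in Python of the durable-housing logic described above. It is an illustration under assumed numbers (the linear demand curve, the construction cost of 100, and the initial stock of 1,000 units are all invented for the example), not the calibration used by Glaeser and Gyourko.

```python
# A stylized sketch (Python) of the asymmetry in the Glaeser-Gyourko argument:
# housing is durable, so a drop in demand shows up as a lower price rather than
# a smaller housing stock. The linear demand curve and all numbers below are
# illustrative assumptions, not parameters from their paper.

CONSTRUCTION_COST = 100.0   # price at which builders are willing to add units

def price(demand_intercept, slope, quantity):
    """Willingness to pay for housing when `quantity` units are occupied."""
    return demand_intercept - slope * quantity

def market_outcome(demand_intercept, slope, existing_stock):
    """Return (price, housing stock) given a durable existing stock."""
    p_at_stock = price(demand_intercept, slope, existing_stock)
    if p_at_stock > CONSTRUCTION_COST:
        # Growing city: builders add units until the price falls to construction cost.
        new_stock = (demand_intercept - CONSTRUCTION_COST) / slope
        return CONSTRUCTION_COST, new_stock
    # Declining city: the stock cannot shrink, so the price absorbs the whole shock.
    return p_at_stock, existing_stock

stock = 1000.0
print(market_outcome(250.0, 0.1, stock))  # strong demand: (100.0, 1500.0)
print(market_outcome(160.0, 0.1, stock))  # weak demand:   (60.0, 1000.0)
```

In the strong-demand case the price is pinned near construction cost and the stock expands; in the weak-demand case the stock cannot shrink, so the entire adjustment shows up as a lower price.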

Complementing the theoretical work of Glaeser and Gyourko, Fred H. Smith (2003) used property values as a proxy for economic activity in order to quantify the urban decline experienced by Cleveland, Ohio. Smith found that the aggregate assessed value for the property in the downtown core of Cleveland fell from its peak of nearly $600 million in 1930 to a mere $45 million by 1980. (Both figures are expressed in 1980 dollars.) Economists William Collins and Robert Margo have also examined the impact of urban decline on property values. Their work focuses on how the value of owner occupied housing declined in cities that experienced a race riot in the 1960s, and, in particular, it focuses on the gap in property values that developed between white and black owned homes. Nonetheless, a great deal of work still remains to be done before the magnitude of urban decay in the United States is fully understood.

What Caused Urban Decline in the United States?

Having examined the timing and the magnitude of the urban decline experienced by U.S. cities, it is now necessary to consider why these cities decayed. In the subsections that follow, each of the principal causes of urban decline is considered in turn.

Decentralizing Technologies

In “Sprawl and Urban Growth,” Edward Glaeser and Matthew Kahn (2001) assert that “while many factors may have helped the growth of sprawl, it ultimately has only one root cause: the automobile” (p. 2). Urban sprawl is simply a popular term for the decentralization of economic activity, one of the principal symptoms of urban decline. So it should come as no surprise that many of the forces that have caused urban sprawl are in fact the same forces that have driven the decline of central cities. As Glaeser and Kahn suggest, the list of causal forces must begin with the emergence of the automobile.

In order to maximize profit, firm owners must choose their locations carefully. Input prices and transportation costs (for inputs and outputs) vary across locations. Firm owners ultimately face two important decisions about location, and economic forces dictate the choices made in each instance. First, owners must decide in which city they will do business. Then, they must decide where the business should be located within the chosen city. In each case, transportation costs and input costs weigh heavily in the owners’ decision making. For example, a business owner whose firm will produce steel must consider the costs of transporting inputs (e.g. iron ore), the costs of transporting the output (steel), and the cost of other inputs in the production process (e.g. labor). For steel firms operating in the late nineteenth century these concerns were balanced by choosing locations in the Midwest, either on the Great Lakes (e.g. Cleveland) or on major rivers (e.g. Pittsburgh). Cleveland and Pittsburgh were cities with plentiful labor and relatively low transport costs for both inputs and the output. However, steel firm owners choosing Cleveland or Pittsburgh also had to choose a location within these cities. Not surprisingly, the owners chose locations that minimized transportation costs. In Cleveland, for example, the steel mills were built near the shore of Lake Erie and relatively close to the main rail terminal. This minimized the cost of getting iron ore from ships that had come to the city via Lake Erie, and it also provided easy access to water or rail transportation for shipping the finished product. The cost of choosing a site near the rail terminal and the city’s docks was not insignificant: land close to the city’s transportation hub was in high demand, and, therefore, relatively expensive. It would have been cheaper for firm owners to buy land on the periphery of these cities, but they chose not to do so because the costs of transporting inputs and outputs to and from the transportation hub would have outweighed the savings from buying cheaper land on the periphery of the city. Ultimately, it was the absence of cheap intra-city transport that compressed economic activity into the center of an urban area.
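
The trade-off just described can be sketched in a few lines of Python. Everything in the example is an illustrative assumption (the land-price gradient, the tonnage, and the two freight rates, which loosely echo the per-ton-mile costs cited in the next paragraph); it simply shows how cheap intra-city trucking flips the cost-minimizing site from the hub to the periphery.

```python
# A minimal sketch (Python) of the within-city location trade-off: land gets
# cheaper with distance from the transportation hub, while the cost of hauling
# inputs and outputs to the hub rises with distance. All numbers are invented
# for illustration.

def total_cost(distance_miles, land_price_at_hub, land_gradient,
               tons_shipped, freight_cost_per_ton_mile):
    """Annual cost of a site located `distance_miles` from the rail terminal/docks."""
    land_cost = max(land_price_at_hub - land_gradient * distance_miles, 0.0)
    haulage_cost = tons_shipped * freight_cost_per_ton_mile * distance_miles
    return land_cost + haulage_cost

sites = [0, 2, 5, 10]  # candidate sites, in miles from the hub
for label, freight in [("expensive haulage (pre-truck)", 0.185),
                       ("cheap haulage (truck era)", 0.023)]:
    best = min(sites, key=lambda d: total_cost(d, 500_000, 40_000, 300_000, freight))
    print(f"{label}: cheapest site is {best} miles from the hub")
```

Under the high freight rate the central site wins despite its expensive land; under the low rate the peripheral site wins, which is exactly the within-city relocation described in the next paragraph.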

Yet, transportation costs and input prices have not simply varied across space; they have also changed over time. The introduction of the car and truck had a profound impact on transportation costs. In 1890, moving a ton of goods one mile cost 18.5 cents (measured in 2001 dollars). By 2003 the cost had fallen to 2.3 cents per ton-mile, again in 2001 dollars (Glaeser and Kahn 2001, p. 4). While the car and truck dramatically lowered transportation costs, they did not immediately affect firm owners’ choices about which city to choose as their base of operations. Rather, the immediate impact was felt in the choice of where within a city a firm should locate. The intra-city truck made it easy for a firm to locate on the periphery of the city, where land was plentiful and relatively cheap. Returning to the example from the previous paragraph, the introduction of the intra-city truck allowed the owners of steel mills in Cleveland to build new plants on the periphery of the urban area where land was much cheaper (Encyclopedia of Cleveland History). Similarly, the car made it possible for residents to move away from the city center and out to the periphery of the city, or even to newly formed suburbs. (The suburbanization of the urban population had begun in the late nineteenth century when streetcar lines extended from the central city out to the periphery of the city or to communities surrounding the city; the automobile simply accelerated the process of decentralization.) The retail price of a Ford Model T dropped considerably between 1910 and 1925, from approximately $1850 to $470 in constant 1925 dollars (roughly $21,260 and $5,400 in 2006 dollars), and the market responded accordingly. As Table 5 illustrates, the number of passenger car registrations increased dramatically during the twentieth century.

Table 5: Passenger Car Registrations in the United States, 1910-1980

Year | Millions of Registered Vehicles
1910 | 0.5
1920 | 8.1
1930 | 23.0
1940 | 27.5
1950 | 40.4
1960 | 61.7
1970 | 89.2
1980 | 131.6

Source: Muller, p. 36.
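
As a quick check on the magnitudes cited above, the short Python snippet below converts the two cost comparisons into percentage declines; the inputs are the figures given in the preceding paragraph.

```python
# Percentage declines implied by the figures quoted in the text: freight costs
# per ton-mile in 2001 dollars (1890 vs. 2003) and Ford Model T prices in
# constant 1925 dollars (1910 vs. 1925).

def pct_decline(old, new):
    return 100.0 * (old - new) / old

print(f"Freight per ton-mile: {pct_decline(18.5, 2.3):.1f}% decline")  # about 87.6%
print(f"Model T retail price: {pct_decline(1850, 470):.1f}% decline")  # about 74.6%
```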

While changes in transportation technology had a profound effect on firms’ and residents’ choices about where to locate within a given city, they also affected the choice of which city would be the best for the firm or resident. Americans began demanding more and improved roads to capitalize on the mobility made possible by the car. Also, the automotive, construction, and tourism-related industries lobbied state and federal governments to become heavily involved in funding road construction, a responsibility previously relegated to local governments. The landmark National Interstate and Defense Highway Act of 1956 signified a long-term commitment by the national government to unite the country through an extensive network of interstates, while also improving access between cities’ central business districts and outlying suburbs. As cars became affordable for the average American, and paved roads became increasingly ubiquitous, not only did the suburban frontier open up to a rising proportion of the population; it was now possible to live almost anywhere in the United States. (However, it is important to note that the widespread availability of air conditioning was a critical factor in Americans’ willingness to move to the South and West.)

Another factor that opened up the rest of the United States for urban development was a change in the cost of obtaining energy. Obtaining abundant, cheap energy is a concern for firm owners and for households. Historical constraints on production and residential locations continued to fall away in the late nineteenth and early twentieth century as innovations in energy production began to take hold. One of the most important of these advances was the spread of the alternating-current electric grid, which further expanded firms’ choices regarding plant location and layout. Energy could be generated at any site and could travel long distances through thin copper wires. Over a fifty-year period from 1890 to 1940, the proportion of goods manufactured using electrical power soared from 0.1 percent to 85.6 percent (Nye 1990). With the complementary advancements in transportation, factories now had the option of locating outside of the city where they could capture savings from cheaper land. The flexibility of electrical power also offered factories new freedom in the spatial organization of production. Whereas steam engines had required a vertical system of organization in multi-level buildings, the AC grid made possible a form of production that permanently transformed the face of manufacturing – the assembly line (Nye 1990).

The Great Migration

Technological advances were not bound by urban limits; they also extended into rural America where they had sweeping social and economic repercussions. Historically, the vast majority of African Americans had worked on Southern farms, first as slaves and then as sharecroppers. But progress in the mechanization of farming – particularly the development of the tractor and the mechanical cotton-picker – reduced the need for unskilled labor on farms. The dwindling need for farm laborers coupled with continuing racial repression in the South led hundreds of thousands of southern African Americans to migrate North in search of new opportunities. The overall result was a dramatic shift in the spatial distribution of African Americans. In 1900, more than three-fourths of black Americans lived in rural areas, and all but a handful of rural blacks lived in the South. By 1960, 73% of blacks lived in urban areas, and the majority of the urban blacks lived outside of the South (Cahill 1974).

Blacks had begun moving to Northern cities in large numbers at the onset of World War I, drawn by the lure of booming wartime industries. In the 1940s, Southern blacks began pouring into the industrial centers at more than triple the rate of the previous decade, bringing with them a legacy of poverty, poor education, and repression. The swell of impoverished and uneducated African Americans rarely received a friendly reception in Northern communities. Instead they frequently faced more of the treatment they had sought to escape (Groh 1972). Furthermore, the abundance of unskilled manufacturing jobs that had greeted the first waves of migrants had begun to dwindle. Manufacturing firms in the upper Midwest (the Rustbelt) faced increased competition from foreign firms, and many of the American firms that remained in business relocated to the suburbs or the Sunbelt to take advantage of cheap land. African Americans had difficulty accessing jobs at suburban locations, and the result for many was a “spatial mismatch”: they lived in the inner city, where employment opportunities were scarce, yet lacked access to the transportation that would allow them to commute to suburban jobs (Kain 1968). Institutionalized racism, which hindered blacks’ attempts to purchase real estate in the suburbs, as well as the proliferation of inner-city public housing projects, reinforced the spatial mismatch problem. For inner-city African Americans coping with high unemployment, high crime rates and urban disturbances such as the race riots of the 1960s were obvious symptoms of economic distress. High crime rates and the race riots simply accelerated the demographic transformation of Northern cities. White city residents had once been “pulled” to the suburbs by the availability of cheap land and cheap transportation when the automobile became affordable; now white residents were being “pushed” by racism and the desire to escape the poverty and crime that had become common in the inner city. Indeed, by 2000 more than 80 percent of Detroit’s residents were African American, a stark contrast with 1950, when only 16 percent of the population was black.

The American City in the Twenty-First Century

Some believe that technology, specifically advances in information technology, will render the city obsolete in the twenty-first century. Urban economists find these arguments unpersuasive (Glaeser 1998). Recent history shows that the way we interact with one another has changed dramatically in a very short period of time. E-mail, cell phones, and text messages belonged to the world of science fiction as recently as 1980. Clearly, information technology no longer requires us to locate in close proximity to the people we want to interact with. Thus, one can understand the temptation to think that we will no longer need to live so close to one another in New York, San Francisco or Chicago. Ultimately, a person or a firm will only locate in a city if the benefits from being in the city outweigh the costs. What is missing from this analysis, though, is that people and firms locate in cities for reasons that are not immediately obvious.

Economists point to economies of agglomeration as one of the main reasons that firms will continue to choose urban locations over rural locations. Economies of agglomeration exist when a firm’s productivity is enhanced (or its cost of doing business is lowered) because it is located in a cluster of complementary firms or in a densely populated area. A classic example of an urban area that displays substantial economies of agglomeration is “Silicon Valley” (near San Jose, California). Firms choosing to locate in Silicon Valley benefit from several sources of economies of agglomeration, but two of the most easily understood are knowledge spillovers and labor pooling. Knowledge spillovers in Silicon Valley occur because individuals who work at “computer firms” (firms producing software, hardware, etc.) are likely to interact with one another on a regular basis. These interactions can be informal – playing together on a softball team, running into one another at a child’s soccer game, etc. – but they are still very meaningful because they promote the exchange of ideas. Exchanging ideas and information makes it possible for workers to (potentially) increase their productivity at their own jobs. Another example of economies of agglomeration in Silicon Valley is the labor pooling that occurs there. Because workers who are trained in computer-related fields know that computer firms are located in Silicon Valley, they are more likely to choose to live in or around Silicon Valley. Thus, firms operating in Silicon Valley have an abundant supply of labor in close proximity, and, similarly, workers enjoy the opportunities associated with having several firms that can make use of their skills in a small geographic area. The clustering of computer industry workers and firms allows firms to save money when they need to hire another worker, and it makes it easier for workers who need a job to find one.
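
The labor-pooling half of this argument can be illustrated with a toy simulation. The sketch below (Python) is purely illustrative; the five skill specialties, the assumption that each firm hires in one randomly chosen specialty, and the firm counts are all invented for the example.

```python
# A toy simulation (Python) of labor pooling: with more computer firms in one
# place, a worker with a specialized skill is more likely to find a local
# vacancy that fits. The skill categories and hiring assumptions are invented
# for illustration.

import random

random.seed(0)
SKILLS = ["compilers", "chip design", "databases", "graphics", "networking"]

def chance_of_local_match(n_firms, trials=10_000):
    """Estimate the probability that at least one local firm posts a vacancy in
    the worker's specialty, assuming each firm hires in one random specialty."""
    matches = 0
    for _ in range(trials):
        worker_skill = random.choice(SKILLS)
        vacancies = {random.choice(SKILLS) for _ in range(n_firms)}
        matches += worker_skill in vacancies
    return matches / trials

for n in (1, 5, 20):
    print(f"{n:2d} local firms -> probability of a match is about {chance_of_local_match(n):.2f}")
```

With one local firm the chance of a match is about one in five; with twenty firms it is nearly certain, which is the sense in which a large cluster benefits both workers and employers.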

In addition to economies of agglomeration, there are other economic forces that make the disappearance of the city unlikely. Another benefit that some individuals associate with urban living is the diversity of products and experiences available in a city. For example, in a large city like Chicago it is possible to find deep dish pizza, thin crust pizza, Italian food, Persian food, Greek food, Swedish food, Indian food, Chinese food, and almost any other type of food that you might imagine. Why is all of this food available in Chicago but not in a small town in southern Illinois? Economists answer this question using the concept of demand density. Lots of people like Chinese food, so it is not uncommon to find a Chinese restaurant in a small town. Fewer people, though, have been exposed to Persian cuisine. While it is quite likely that the average American would like Persian food if it were available, most Americans haven’t had the opportunity to try it. Hence, the average American is unlikely to demand much Persian food in a given time period. So, individuals who are interested in operating a Persian restaurant logically choose to operate in Chicago instead of a small town in southern Illinois. While each individual living in Chicago may not demand Persian food any more frequently than the individuals living in the small town, the presence of so many people in a relatively small area makes it possible for the Persian restaurant to operate and thrive. Moreover, exposure to Persian food may change people’s tastes and preferences. Over time, the amount of Persian food demanded (on average) by each inhabitant of the city may increase.
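
Here is a minimal sketch of the demand-density idea, in Python. The break-even volume, the population figures, and the assumption that one resident in a thousand wants one Persian meal per week are all invented for illustration.

```python
# A minimal sketch (Python) of demand density: a niche restaurant needs enough
# nearby customers, so a low per-person demand can still support it in a dense
# city but not in a small town. All parameters are illustrative assumptions.

BREAK_EVEN_MEALS_PER_WEEK = 600     # meals the restaurant must sell to survive

def is_viable(population, share_who_eat_cuisine, meals_per_person_per_week):
    weekly_demand = population * share_who_eat_cuisine * meals_per_person_per_week
    return weekly_demand >= BREAK_EVEN_MEALS_PER_WEEK

# Same tastes in both places: one in a thousand residents eats Persian food once a week.
print(is_viable(2_700_000, 0.001, 1))   # large city: True  (2,700 meals per week)
print(is_viable(20_000,    0.001, 1))   # small town: False (20 meals per week)
```

The same per-person demand clears the break-even threshold in the dense market but falls far short of it in the small town.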

Individuals who value Persian food – or any of the other experiences that can only be found in a large city – will value the opportunity to live in a large city more than they will value the opportunity to live in a rural area. But the incredible diversity that a large city has to offer is a huge benefit to some individuals, not to everyone. Rural areas will continue to be populated as long as there are people who prefer the pleasures of low-density living. For these individuals, the pleasure of being able to walk in the woods or hike in the mountains may be more than enough compensation for living in a part of the country that doesn’t have a Persian restaurant.

As long as there are people (and firm owners) who believe that the benefits from locating in a city outweigh the costs, cities will continue to exist. The data shown above make it clear that Americans continue to value urban living. Indeed, the population figures for Chicago and New York suggest that in the 1990s more people were finding that there are net benefits to living in very large cities. The rapid expansion of cities in the South and Southwest simply reinforces this idea. To be sure, the urban living experience in Charlotte is not the same as the urban living experience in Chicago or New York. So, while the urban cores of cities like Detroit and Cleveland are not likely to return to their former size anytime soon, and urban decline will continue to be a problem for these cities in the foreseeable future, it remains clear that Americans enjoy the benefits of urban living and that the American city will continue to thrive in the future.

References

Cahill, Edward E. “Migration and the Decline of the Black Population in Rural and Non-Metropolitan Areas.” Phylon 35, no. 3 (1974): 284-92.

Casadesus-Masanell, Ramon. “Ford’s Model-T: Pricing over the Product Life Cycle.” ABANTE – Studies in Business Management 1, no. 2 (1998): 143-65.

Chudacoff, Howard and Judith Smith. The Evolution of American Urban Society, fifth edition. Upper Saddle River, NJ: Prentice Hall, 2000.

Collins, William and Robert Margo. “The Economic Aftermath of the 1960s Riots in American Cities: Evidence from Property Values.” Journal of Economic History 67, no. 4 (2007): 849-83.

Collins, William and Robert Margo. “Race and the Value of Owner-Occupied Housing, 1940-1990.” Regional Science and Urban Economics 33, no. 3 (2003): 255-86.

Cutler, David et al. “The Rise and Decline of the American Ghetto.” Journal of Political Economy 107, no. 3 (1999): 455-506.

Frey, William and Alden Speare, Jr. Regional and Metropolitan Growth and Decline in the United States. New York: Russell Sage Foundation, 1988.

Gibson, Campbell. “Population of the 100 Largest Cities and Other Urban Places in the United States: 1790 to 1990.” Population Division Working Paper, no. 27, U.S. Bureau of the Census, June 1998. Accessed at: http://www.census.gov/population/www/documentation/twps0027.html

Glaeser, Edward. “Are Cities Dying?” Journal of Economic Perspectives 12, no. 2 (1998): 139-60.

Glaeser, Edward and Joseph Gyourko. “Urban Decline and Durable Housing.” Journal of Political Economy 113, no. 2 (2005): 345-75.

Glaeser, Edward and Matthew Kahn. “Decentralized Employment and the Transformation of the American City.” Brookings-Wharton Papers on Urban Affairs, 2001.

Glaeser, Edward and Janet Kohlhase. “Cities, Regions, and the Decline of Transport Costs.” NBER Working Paper Series, National Bureau of Economic Research, 2003.

Glaeser, Edward and Albert Saiz. “The Rise of the Skilled City.” Brookings-Wharton Papers on Urban Affairs, 2004.

Glaeser, Edward and Jesse Shapiro. “Urban Growth in the 1990s: Is City Living Back?” Journal of Regional Science 43, no. 1 (2003): 139-65.

Groh, George. The Black Migration: The Journey to Urban America. New York: Weybright and Talley, 1972.

Gutfreund, Owen D. Twentieth Century Sprawl: Highways and the Reshaping of the American Landscape. Oxford: Oxford University Press, 2004.

Hanson, Susan, ed. The Geography of Urban Transportation. New York: Guilford Press, 1986.

Hobbs, Frank and Nicole Stoops. Demographic Trends in the Twentieth Century: Census 2000 Special Reports. Washington, DC: U.S. Census Bureau, 2002.

Kain, John F. “Housing Segregation, Negro Employment, and Metropolitan Decentralization.” Quarterly Journal of Economics 82, no. 2 (1968): 175-97.

Kim, Sukkoo. “Urban Development in the United States, 1690-1990.” NBER Working Paper Series, National Bureau of Economic Research, 1999.

Mieszkowski, Peter and Edwin Mills. “The Causes of Metropolitan Suburbanization.” Journal of Economic Perspectives 7, no. 3 (1993): 135-47.

Muller, Peter. “Transportation and Urban Form: Stages in the Spatial Evolution of the American Metropolis.” In The Geography of Urban Transportation, edited by Susan Hanson. New York: Guilford Press, 1986.

Nye, David. Electrifying America: Social Meanings of a New Technology, 1880-1940. Cambridge, MA: MIT Press, 1990.

Nye, David. Consuming Power: A Social History of American Energies. Cambridge, MA: MIT Press, 1998.

Rae, Douglas. City: Urbanism and Its End. New Haven: Yale University Press, 2003.

Rappaport, Jordan. “U.S. Urban Decline and Growth, 1950 to 2000.” Economic Review: Federal Reserve Bank of Kansas City, no. 3, 2003: 15-44.

Rodwin, Lloyd and Hidehiko Sazanami, eds. Deindustrialization and Regional Economic Transformation: The Experience of the United States. Boston: Unwin Hyman, 1989.

Smith, Fred H. “Decaying at the Core: Urban Decline in Cleveland, Ohio.” Research in Economic History 21 (2003): 135-84.

Stanback, Thomas M. Jr. and Thierry J. Noyelle. Cities in Transition: Changing Job Structures in Atlanta, Denver, Buffalo, Phoenix, Columbus (Ohio), Nashville, Charlotte. Totowa, NJ: Allanheld, Osmun, 1982.

Van Tassel, David D. and John J. Grabowski, eds. The Encyclopedia of Cleveland History. Bloomington: Indiana University Press, 1996. Available at http://ech.case.edu/


[1] Reporting the size of a “city” should be done with care. In day-to-day usage, many Americans might talk about the size (population) of Boston and assert that Boston is a larger city than Phoenix. Strictly speaking, this is not true. The 2000 Census reports that the population of Boston was 589,000 while Phoenix had a population of 1.3 million. However, the Boston metropolitan area contained 4.4 million inhabitants in 2000 – substantially more than the 3.3 million residents of the Phoenix metropolitan area.

Citation: Smith, Fred and Sarah Allen. “Urban Decline (and Success), US”. EH.Net Encyclopedia, edited by Robert Whaples. June 5, 2008. URL http://eh.net/encyclopedia/urban-decline-and-success-in-the-united-states/