
Historical Political Business Cycles in the United States

Jac C. Heckelman, Wake Forest University

Macroeconomic Performance and Elections

Analyzing American presidential elections as far back as 1916, Ray Fair (1978) has shown that macroeconomic conditions consistently affect party vote shares. Specifically, the incumbent party is predicted to improve its vote share when economic growth is high and inflation is low. Using no information other than the growth rate, inflation rate, time trend, and the identity of the incumbent party, Fair was able to correctly predict the winning party in 15 of the 16 presidential elections from 1916 through 1976.
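A stylized version of the type of vote equation Fair estimates may help make the mechanism concrete. The specification below is only an illustrative sketch built from the variables just listed; the notation, coefficients and lag structure are not Fair’s published estimates:

\[
V_t = \beta_0 + \beta_1 g_t + \beta_2 \pi_t + \beta_3 t + \beta_4 I_t + \varepsilon_t
\]

where \(V_t\) is the incumbent party’s vote share in election year \(t\), \(g_t\) is the election-year growth rate, \(\pi_t\) is the inflation rate, \(t\) is a time trend, \(I_t\) identifies the incumbent party, and \(\varepsilon_t\) is an error term. The expected signs are \(\beta_1 > 0\) and \(\beta_2 < 0\): an incumbent who can raise growth and hold down inflation just before the election raises its predicted vote share, which is the incentive behind the political business cycle.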

Given a strong connection between the economic environment and vote shares, incumbent politicians have an incentive to manipulate the economy as elections draw near. The notion that incumbents will alter the economic environment for their short-term political gain at the expense of long-term economic stability is referred to as generating a political business cycle. This theory of political business cycles has not received much empirical support in the myriad studies that concentrate on contemporary elections. Perhaps due to the lack of supporting evidence, and the belief that such manipulations were not possible before the advent of activist fiscal policy ushered in by the Keynesian revolution, there has been little attempt to test for political cycles in historical elections. There are, however, a few studies that do so, although their time samples and methodology differ widely.

National-Level Evidence on Historical Political Business Cycles

Adopting the standard procedure used in empirical studies of contemporary political business cycles, Heckelman and Whaples (1996) test for cycles during the period after the Civil War and before the Great Depression. They find little evidence that either nominal or real GNP, or the GNP deflator, was significantly different from its expected level during the year of, or the year after, a presidential election from 1869-1929.

Davidson, Fratianni, and von Hagen (1990) employ a long time series from 1905-1984. They fail to find consistent evidence of a traditional political business cycle, or of systematic differences by party control, in either policy targets or policy measures during this time. However, they also test for alterations to the economy based on recent previous conditions and find that trends were significantly altered prior to elections only when macroeconomic outcomes in the recent past had been unfavorable to the incumbent: rising inflation, a rising rate of unemployment, a growing deficit, and a decline in monetary growth. In contrast, there were no changes in the dynamics when previous outcomes were favorable (p. 47), meaning, for example, that declining unemployment did not suddenly fall by an even larger degree just prior to the election. They find no electoral effects on the growth of real per capita GNP. They also present limited evidence that unemployment and inflation patterns differ by party control, but only following recent unfavorable outcomes in each, and the changes are further limited to the post-World War II period.

Klein (1996) takes a different approach. Instead of focusing on the actual values of the economic variables, Klein analyzes business cycle turning points, as identified by the National Bureau of Economic Research. He finds that 26 of the 34 presidential elections held from 1854-1990 were during an identified expansionary period. While expansions typically end in the period right after an election, he does not find that contractions are more likely to end in the period before an election. Thus, his evidence for political business cycles is somewhat mixed. Klein also finds that turning points differ by party control. Expansions are more likely to end following Republican victories, and contractions are more likely to end soon after Democratic victories. These partisan findings are much stronger after World War I.

It is perhaps not surprising that partisan influences on the economy are not stable across these long time series. In the earlier part of the Davidson-Fratianni-von Hagen and Klein samples, the Republicans, as the party of Lincoln and McKinley, had a large constituency base composed of industrial workers and tended to support trade protectionism, the opposite of contemporary Republicans. It may still be true that significant differences in the structure of the business cycle occurred depending on which political party controlled policy, even in the period prior to the world wars, but since neither study examined these earlier periods in isolation as they did the later period, that remains speculative.

Richard Nixon’s First Term

The strongest evidence for a political business cycle remains the first term of the Nixon administration. Some scholars have even argued that this episode inspired Nordhaus’s (1975) early theoretical model of the political business cycle (Keech 1995, p. 54), on which most empirical tests are based. Keller and May (1984) present a case study of the policy cycle driven by Nixon from 1969-1972, summarizing his use of contractionary monetary and fiscal policy in the first two years, followed by wage and price controls in mid-1971, and finally rapid fiscal expansion and high growth in late 1971 and 1972. They claim only the expansion portion of the cycle is evidence of electoral manipulation, and that the early contraction is merely consistent with modern Republican Party ideology. Although the latter is true, it does not disprove the conclusion of almost every other political business cycle scholar, since it is not possible to pinpoint the motivation behind the policy change. Given the abandonment of ideology displayed by Nixon in the second half of his term, it seems more likely that the entire cycle, consistent with the predictions of a political policy cycle, was driven by electoral considerations rather than ideology.1

State-Level Evidence

Little evidence has been accumulated for state-level political business cycles. An exception for historical gubernatorial elections is Heckelman (1998). Comparing gainful employment rates across states with and without a gubernatorial election in the decennial census years 1870-1910, he finds evidence supporting the notion of a political employment cycle in the states. This evidence is limited to the case of pooling all the years together, and may be driven by the strong result found for 1890. There is no further evidence of a federal employment cycle during the presidential election years of 1880 and 1900, or of assistance directed at those states where the governor was of the same party as the sitting president.

Policy Cycles

Empirical studies of contemporary political cycles have recently turned more attention to policy, rather than business, cycles, since policy instruments would need to be manipulated in order to affect the economy. Lack of evidence of political business cycles would be consistent either with no attempted manipulation, or with policy cycles that did not have the desired effect due to other exogenous factors and the crudity of macroeconomic policy. There does appear to be strong evidence of modern policy cycles even when political business cycle evidence is weak or non-existent. (See for example Alesina, Roubini and Cohen 1997.) With the exception of the well-documented Nixonian policy cycles, there has been no attempt to document the occurrence of historical policy cycles. This remains the largest gap in the empirical literature and should prove a fertile ground for exploration.

New Deal Spending

There is, however, a related literature which examines New Deal spending from a political angle. Beginning with Gavin Wright’s (1974) study, scholars have generally concluded that allocations of spending across the states were directed more by Roosevelt’s electoral concerns than by economic need (Couch and Shughart 1998), since a disproportionate share of federal spending under the New Deal went to the potential swing states. Anderson and Tollison (1991) find that spending was also heavily influenced by congressional self-interest. In contrast, Wallis (1987) presents evidence that both political interest and economic need were important by noting that payments to Southern states were lower in part due to their reluctance to take advantage of federal matching grants. Most recently, Couch and Shughart (2000) test the matching grant hypothesis on one component of New Deal spending, namely the Works Progress Administration (WPA). They find that federal land ownership, political self-interest, and state economic need were all contributing factors in determining the allocation of WPA spending across the states. Wallis (1998) also showed that much of the prior empirical analysis of New Deal distributions depended critically on the inclusion or exclusion of Nevada, a state unique in its low population density and large proportion of federal land. The political aspects of New Deal spending are also summarized in Fishback’s (1999) review. Fleck (2001) and Wallis (2001) provide the most recent exchange on this subject.

References

Alesina, Alberto, Nouriel Roubini, and Gerald D. Cohen. Political Cycles and the Macroeconomy. Cambridge, MA: MIT Press, 1997.

Anderson, Gary M. and Robert D. Tollison. “Congressional Influence and Patterns of New Deal Spending.” Journal of Law and Economics 34, (1991): 161-175.

Couch, Jim F. and William F. Shughart. The Political Economy of the New Deal. Cheltenham, UK: Edward Elgar, 1998.

Couch, Jim F. and William F. Shughart. “New Deal Spending and the States: The Politics of Public Works.” In Public Choice Interpretations of American Economic History, edited by Jac C. Heckelman, John C. Moorhouse, and Robert Whaples, 105-122. Norwell, MA: Kluwer Academic Publishers, 2000.

Davidson, Lawrence S., Michele Fratianni and Jurgen von Hagen. “Testing for Political Business Cycles.” Journal of Policy Modeling 12, (1990): 35-59.

Drazen, Allan. Political Economy in Macroeconomics. Princeton: Princeton University Press, 2000.

Fair, Ray. “The Effects of Economic Events on Votes for the President.” Review of Economics and Statistics 60, (1978): 159-173.

Fishback, Price V. “Review of Jim Couch and William F. Shughart II, The Political Economy of the New Deal.” Economic History Services, June 21, 1999. URL: http://www.eh.net/bookreviews/library/0164.shtml

Fleck, Robert K. “Population, Land, Economic Conditions, and the Allocation of New Deal Spending.” Explorations in Economic History 38, (2001): 296-304.

Heckelman, Jac C. “Employment and Gubernatorial Elections during the Gilded Age.” Economics and Politics 10, (1998): 297-309.

Heckelman, Jac and Robert Whaples. “Political Business Cycles before the Great Depression.” Economics Letters 51, (1996): 247-251.

Keech, William R. Economic Politics: The Costs of Democracy. New York: Cambridge University Press, 1995.

Keller, Robert R. and Ann M. May. “The Presidential Political Business Cycle of 1972.” Journal of Economic History 44, (1984): 265-271.

Klein, Michael W. “Timing Is All: Elections and the Duration of the United States Business Cycles.” Journal of Money, Credit and Banking 28, (1996): 84-101.

Nordhaus, William D. “The Political Business Cycle.” Review of Economic Studies 42, (1975): 169-190.

Wallis, John J. “Employment, Politics, and Economic Recovery during the Great Depression.” Review of Economics and Statistics 69, (1987): 516-520.

Wallis, John J. “The Political Economy of New Deal Spending Revisited, Again: With and without Nevada.” Explorations in Economic History 35, (1998): 140-170.

Wallis, John J. “The Political Economy of New Deal Spending, Yet Again: A Reply to Fleck.” Explorations in Economic History 38, (2001): 305-314.

Wright, Gavin. “The Political Economy of New Deal Spending.” Review of Economics and Statistics 56, (1974): 30-38.

1 See also Drazen (2000, pp. 231-232) for a brief discussion of Nixon’s manipulation of taxation policy and Social Security payments.

Citation: Heckelman, Jac. “Historical Political Business Cycles in the United States”. EH.Net Encyclopedia, edited by Robert Whaples. August 14, 2001. URL http://eh.net/encyclopedia/historical-political-business-cycles-in-the-united-states/

The Bus Industry in the United States

Margaret Walsh, University of Nottingham

Despite its importance to everyday life, modern road transportation has received surprisingly little attention from historians. There have been some valuable studies of the automobile, its production and its impact on society and the economy. This article surveys the history of a branch of modern transportation that has been almost completely ignored: the motorized intercity bus.

Missing from History

Why has there been such neglect? Part of the explanation lies in an image problem. As the slowest form of motorized transportation and as the cheapest form of public transportation, buses have, since the middle of the twentieth century, been perceived as the option of those who cannot afford to travel by car, train or plane. They have thus become associated with the young, the elderly, the poor, minority groups and women. Historians have avoided contact with bus history as they have avoided contact with bus travel. They have preferred to pay attention to trains and rail companies, especially those of the nineteenth century. Particularly in the United States, where rail service has become geographically very limited, an ethos of pathos and romance is still associated with the ‘Iron Horse.’ Indeed there is an inverse relationship between the extent of academic and enthusiast knowledge of a mode of transportation and the extent of its use. But perhaps of equal importance in encouraging rail and air travel research and writing is the maintenance of business records. These materials have been made available in either public or company depositories and they offer ample evidence for writing splendid volumes, whether as corporate histories or as general interest reading. Bus records have not been easily accessible. Neither of the two major American bus carriers, Greyhound and Trailways, has an available corporate archive. Their historical materials deposited elsewhere are scattered and haphazard. Other company archives are few in number and thin in volume. Bus information seems to be as scarce as bus passengers in recent times. Nevertheless, enough materials do exist to demonstrate that the long-distance bus industry has offered a useful service and deserves to have its place in the nation’s history recognized.

The statistics on intercity passenger services provide the framework for understanding the growth and position of the motor bus in the United States. In 1910 railroad statistics were the only figures worthy of note. With 240,631 miles of rail track in operation, trains provided a network capable of bringing the nation together. In the second decade of the twentieth century, however, the automobile, now being mass-produced, became more readily available, and in the 1920s it became popular, with one car per 6.6 persons. Then two other motor vehicles, the bus and the truck, emerged in their own right, and even the plane offered some pioneering passenger trips. As Table 1 documents, by 1929, when figures for the distribution of intercity travel become available, the train had already lost out to the auto, though it retained its dominance as a public carrier. For most of the remainder of the century, except for the gasoline shortages during the Second World War, the private automobile accounted for over eighty percent of domestic intercity travel.

Table 1

Intercity Travel in the United States by Mode

(Billions of Passenger Miles, 1929-1999)

Each row gives the year followed by eight pairs of figures, each pair showing the amount (billions of passenger miles) and the percentage of total intercity travel, in this order: total intercity travel; private carrier total (1); private automobile; private air; public carrier total (1) (2); bus; rail; and public (commercial) air.
1929 216.0 100 175.0 81.0 175.0 81.0 - - 40.9 18.9 7.1 3.3 32.5 15.0 - -
1934 219.0 100 191.0 87.2 191.0 87.2 - - 27.5 12.6 7.4 3.4 18.8 8.6 0.2 0.1
1939 309.5 100 275.5 89.0 275.4 89.0 0.1 - 34.0 11.0 9.5 3.1 23.7 7.7 0.8 0.3
1944 309.3 100 181.4 58.6 181.4 58.6 - - 127.9 41.4 27.3 8.8 97.7 31.6 2.9 0.9
1949 478.0 100 410.2 85.8 409.4 85.6 0.8 0.2 67.8 14.2 24.0 5.0 36.0 7.5 7.8 1.6
1954 668.2 100 598.5 89.6 597.1 89.4 1.4 0.2 69.7 10.4 22.0 3.3 29.5 4.4 18.2 2.7
1959 762.8 100 689.5 90.4 687.4 90.1 2.1 0.3 73.3 9.6 20.4 2.7 22.4 2.9 30.5 4.0
1964 892.7 100 805.5 90.2 801.8 89.8 3.7 0.4 87.2 9.8 23.3 2.6 18.4 2.1 45.5 5.1
1969 1134.1 100 985.8 86.9 977.0 86.1 8.8 0.8 148.3 13.1 24.9 2.2 12.3 1.1 111.1 9.8
1974 1306.7 100 1133.1 86.7 1121.9 85.9 11.2 0.9 173.6 13.3 27.7 2.1 10.5 0.8 135.4 10.4
1979 1511.8 100 1259.8 83.3 1244.3 82.3 15.5 1.0 252.0 16.7 27.7 1.8 11.6 0.8 212.7 14.1
1984 1576.5 100 1290.4 81.9 1277.4 81.0 13.0 0.8 286.1 18.2 24.6 1.6 10.8 0.7 250.7 15.9
1989 1936.0 100 1563.9 80.8 1550.8 80.1 13.1 0.7 372.3 19.2 24.0 1.2 13.1 0.7 335.2 17.3
1994 2065.0 100 1634.6 79.2 1624.8 78.7 9.8 0.5 430.4 20.9 28.1 1.4 13.9 0.7 388.4 18.8
1999 2400.2 100 1863.4 77.6 1849.9 77.1 13.5 0.6 536.8 22.3 34.7 1.4 14.2 0.6 487.9 20.3

Sources: National Association of Motor Bus Operators. Bus Facts. 1966, pp. 6, 8; F. A. Smith, Transportation in America: Historical Compendium, 1939-1985. Washington DC: Eno Foundation for Transportation, 1986, p. 12; F. A. Smith, Transportation in America: A Statistical Analysis of Transportation in the United States. Washington DC: Eno Foundation for Transportation, 1990, p. 7; and Rosalyn A. Wilson, Transportation in America: Statistical Analysis of Transportation in the United States, eighteenth edition, with Historical Compendium, 1939-1999. Washington, DC: Eno Transportation Foundation, 2001, pp. 14-15.

(1) Percentages do not always sum to 100 because of rounding.

(2) Early public carrier totals include waterways as well as railroads, buses and airlines.

Although intercity bus travel climbed from nothing to over seven billion passenger miles in 1929, it was always the choice of a relatively small number of people. Following modest growth in the 1930s, ridership soared during World War II, peaking just above 27 billion passenger miles and attaining its highest-ever share of the market. After World War II, as intercity rail ridership plummeted, intercity bus ridership dropped by much less. Measured in billions of passenger miles, bus ridership plateaued in the last half of the twentieth century at a level close to its World War II peak. However, its share of the market continued to fall, decade by decade. From the 1960s the faster and more comfortable jet plane offered better options for the long-distance traveler, but most Americans still chose to travel by land in their own automobiles.

No particular date marks the beginning of the American intercity or long-distance bus industry, because so many individuals were attracted to it at about the same time, once they perceived that they could make a profit by carrying fare-paying passengers over public highways. Early records suggest bus travel developed from being an adventure into a realistic business proposition in the second decade of the twentieth century, when countless entrepreneurs scattered throughout the nation operated local services using automobile sedans, frequently known as ‘jitneys.’ Encouraged by their successes, ambitious pioneers in the 1920s developed longer networks either by connecting their routes with those of like-minded entrepreneurs or by buying out their rivals. They then needed to acquire larger, more comfortable and more reliable vehicles and to meet the requirements of state governments, which imposed regulations covering safety, competition, the financing of road construction and accounting procedures. Competition from the railroads threatened the well-being of promising bus companies. Some railroads decided to run subsidiary bus operations in the hope of squeezing out motor carriers. Others preferred to attack bus entrepreneurs through a propaganda campaign claiming that buses were competing unfairly because they did not pay sufficient taxes for road use. Bus owners fought back, both verbally and practically. Those who had gained enough experience and expertise to organize their firms systematically took advantage of the flexibility of vehicles that did not run on fixed tracks and of the lower running costs of coaches to provide a cheaper service. By the late 1920s regional bus lines had emerged and national lines appeared to be within reach.

The Impact of the Great Depression

The onset of the Great Depression, however, brought painful changes to this adolescent service sector. Many small carriers went out of business when passengers and ticket sales declined as unemployment grew and most Americans could not afford to travel. The larger companies, experiencing both a cash-flow and a capital shortage, had to reorganize their financial and administrative structures and had to ensure system-wide economies in order to survive. The travails of the only burgeoning national enterprise, Greyhound, illustrate the difficulties faced. Much of the corporation’s rapid expansion in the late 1920s had been financed by short-term loans, which could not be repaid as income fell. Two re-capitalization schemes, in 1930 and in 1933, were essential to meet current obligations. These involved loans from banks, negotiations with General Motors and a re-flotation of shares. The corporation then took constructive as well as defensive action. It rationalized its divisional structure to become more competitive and continued to spend heavily on advertising and other media promotions. The strenuous efforts paid off and Greyhound not only survived, but also gained in market strength. Smaller firms with less credibility and creditworthiness struggled to remain solvent and were unable to expand while the disposable incomes of Americans remained low.

Federal Government Legislation

The federal government had expressed concern about the extent and shape of the developing long-distance bus industry before the Great Depression shattered the national economy. Starting in 1925, a series of forty bills calling for the regulation of motor passenger carriers came before Congress. Congressional hearings and two major investigations of the motor transport industry by the Interstate Commerce Commission (ICC), in 1928 and 1932, made other suggestions for legislation, as did the Federal Coordinator of Transportation. But legislators felt under pressure from varied interest groups and were uncertain how to proceed. Emergency and short-term solutions came in the shape of the bus code of the National Industrial Recovery Act (NIRA) of 1933. But dissatisfaction with the code and the Supreme Court’s 1935 ruling that the NIRA was unconstitutional rallied support for specific legislation. The ensuing Motor Carrier Act (MCA) of 1935 entitled existing carriers to receive operating permits on filing applications and granted certificates to other firms only after an investigation or hearing established that their business was in the public interest. Certificates could be suspended, changed or revoked. All interstate bus operators now had to conform to regulations governing safety, finance, insurance, accounting and records, and they were required to consult the government over any rate changes.

Under the new regulations of the MCA, competition between long-distance operators was limited. Existing companies that had filed for permits protested against applications from new competitors on their routes. If it was established that services were adequate and traffic was light, new applications were often turned down. The general thrust of the new policy supported larger companies, which more easily met federal government standards. The Greyhound Corporation, with its structure reorganized and already providing a national service, held a virtual monopoly of long-distance service in parts of the country. The administrative agency, the Motor Carrier Bureau (MCB), was well aware of both the potential abuse of monopoly power and the economies of scale achievable by larger operations. It thus encouraged an amalgamation of independent carriers to form a new nationwide system, National Trailways. Ironically, this form of competition, which was officially encouraged in the bus industry, created a duopoly in many markets because most other operators were small companies that conducted much of their business in short-haul suburban and intra-regional transport. Influenced by historic concerns about regulating the railroads, the government had created a new public policy that insisted on competition within an industry even though that competition favored a small number of large firms. Even more ironically, by the mid 1930s competition among different modes of transportation meant that little constructive thought was given to a new national transportation policy that might coordinate these modes efficiently and effectively to use their natural advantages to best public effect.

For Better or Worse in the Second World War

War brought expansion to the bus industry, but under stressful conditions and with consequences that would have long-term implications. The need to carry both civilians and troops, combined with gasoline, rubber and parts shortages, forced Americans to move from their automobiles and onto public transportation. New records were set for passenger transportation. Seats were filled to capacity, with standing room only. Long-distance bus passenger miles doubled from 13.6 billion in 1941 to 26.9 billion in 1945. This business was not achieved in a free market. A wartime administrative bureau, the Office of Defense Transportation (ODT), created in December 1941, managed traffic flows throughout the war. It used relatively simple devices such as the rationing of parts, rubber allocation, speed limits, fuel control and the restriction of non-essential services to distribute scarce resources among transportation systems. Assisted by trade associations like the National Association of Motor Bus Operators (NAMBO), the ODT issued directives encouraging the full and rational use of passenger-carrying capacity.

Though bus companies abandoned competition with each other and with their long-standing rival, the railroads, they were unable to gain long-term benefits from their patriotic efforts to help win the war. Earnings rose, but it was impossible to reinvest much of them in the industry because of government curtailment of vehicle production and building construction. Hence buses were kept in service beyond their normal life expectancy and terminals were neither improved nor renovated. Speed limits of thirty-five miles per hour, imposed in 1942, meant longer hours for drivers and lengthened journeys for passengers, already frustrated and tired by waiting in crowded terminals. Despite the industry’s wartime propaganda exhorting Americans either not to travel or to do so at off-peak times and to be patient for the good of the country, the unfavorable impressions of inconvenience and discomfort of traveling by bus remained with many patrons.

Emerging from wartime conditions, bus managers considered that they could build on their increased business provided that they could both invest in new vehicles and buildings and persuade Americans that buses offered many advantages over automobiles for long-distance travel. They were essentially optimistic about the future of their business. But they had not reckoned on either post-war inflation or a lengthy federal government inquiry into the conduct of the industry. Funds accumulated during the war had been earmarked for investment in a variety of terminals and garages and for replacing and increasing rolling stock. New vehicles were ordered as soon as wartime restrictions were lifted, but deliveries were delayed by shortages of materials and strikes in production plants, and the new vehicles cost more than had been anticipated. The abandonment of effective wartime controls in 1946 brought rapid increases in prices and rents as consumers with huge pent-up savings chased scarce goods and housing. Older buses, which would typically have been retired, were retained. The double burden of depreciation charges on both new and restyled buses delayed the acquisition of more modern cruiser-type vehicles until the early 1950s. The normal investment in buildings was also held in check.

Post-war financial adjustments alone were not responsible for the slow progress towards modernization. The federal government inadvertently delayed infrastructure developments. The ICC was worried about the honest, efficient and cost-effective management of the intercity bus industry, its profit margins during and after the war and the lack of uniform bus fares. In July 1946 the agency instigated a comprehensive investigation of bus fares and charges in order to establish a fair national rate structure. The hearings concluded that the industry had conducted its affairs justly and that variations in fares were a result of local and regional conditions. In the future profit margins were to be established through a standard operating ratio, taken as the ratio of operating expenses to operating revenues. Bus operators were thus given a clean bill of health and a rate structure that suggested success in a competitive inter-modal marketplace. But the hearings were very lengthy, lasting until December 1949. During these years bus operators hesitated to take major decisions about future expansion. State governments also contributed to this climate of uncertainty. Multiple state registration fees and fuel taxes for vehicles crossing state boundaries increased both running and administrative costs for companies. Furthermore the lack of uniform size and weight limitations on vehicles between states had a negative influence on the selection of larger and more economical coaches and delayed the process of modernizing bus fleets. Entrepreneurs faced unusual problems in the post-war years, at a time when they needed to be forceful and dynamic.
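The standard operating ratio referred to above is simply

\[
\text{operating ratio} = \frac{\text{operating expenses}}{\text{operating revenues}}.
\]

As a purely hypothetical illustration (the figures are invented for the example and are not drawn from the ICC hearings), a carrier with $9.3 million of operating expenses and $10.0 million of operating revenues would have an operating ratio of 0.93, or 93 percent, leaving an operating margin of 7 percent of revenues; under the new rate structure, profit margins were to be judged by whether this ratio stayed at an acceptable level.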

These structural problems dominated bus company discussions at the expense of developing improved customer relations. Certainly time, effort and money were put into a vigorous advertising campaign telling the public that buses were available for both regular service and leisure-time activities. The latter offered great potential as people had money in their pockets and desired recreation and entertainment. Advertisements emphasized the reliability, safety, flexibility and comfort of bus journeys, while bus company employees were exhorted to develop a reputation for courtesy. But more proactive efforts were needed if new and old clients were to get on and stay on buses. The 25.8 million car registrations of 1945 had become 40.5 million by 1950 and then increased again to 52.1 million in 1955. The United States had achieved mass ownership and automobility. The federal government encouraged this personal mobility by promoting the construction of interstate highways in the Federal-Aid Highway Act (Interstate Highways Act) of 1956. Certainly buses also benefited from new high-speed roads, but increasingly the private automobile won the contest for short-distance travel under four hundred miles. Americans preferred to drive themselves whether or not the total cost of personal travel was higher than that of public transport. They valued the convenience of their own vehicles, and as more became suburban dwellers they were unwilling to go to bus terminals, often located in downtown city centers.

What could bus operators do either to conserve their position as passenger carriers or to advance it? Efforts to improve management and internal company restructuring offered some possibilities, while new publicity campaigns suggested other avenues for progress. The Greyhound Corporation, as the industry’s largest operator, took the lead in adopting a modern professional appearance. In the mid 1950s it sought to raise efficiency by reducing divisional groupings from thirteen to seven, thereby making more effective use of equipment, procedures and personnel. Managers and mechanics now had to undergo systematic training, whether at business schools or in engineering technologies. Theoretical learning was a necessary complement to practical experience. But these administrative changes were insufficient by themselves. Increased trade was sought in transport-related outlets, for example in carrying small freight and mail, in developing van lines and car rentals, and in making connections with airlines to offer surface travel. The closure of many railroad routes offered opportunities to seize their business, while road improvements and expansion created the possibility of new business. Yet more openings were envisaged as Greyhound and its major rival, Trailways, participated in the conglomerate movement. Greyhound, for example, not only ventured into bus and auxiliary transport services, but also moved into financial, food, consumer, pharmaceutical, equipment leasing and general activities. Trailways diversified into real estate, accident insurance, restaurants, car parking and ocean cargo shipping operations. The aim was to realize substantial benefits through exchange of clients and economies of scale.

The bus industry also adopted a fresh approach to consumer relations in the late 1950s and the 1960s. Again the Greyhound Corporation led the way. Its new advertising agency, Grey Advertising, developed a novel and long-lasting campaign using a real dog, ‘Lady Greyhound,’ rather than the traditional silhouette in bus publicity. The corporation was able to portray ‘Lady Greyhound’ as a caring and sharing personality as she gave press and radio ‘interviews,’ opened bus stations, civic events and charity functions and replied to the members of her fan club. The implication was that Greyhound and the bus industry were equally concerned ‘people.’ Greyhound also became the official bus line in the annual contest to find Mrs. America, a contest that emphasized homemaking skills. This promotion was clearly an effort to appeal to women, who comprised the majority of the bus industry’s passengers. More dramatic was the 1960s campaign to attract the young, foreign visitors, those who did not drive and the poorer groups in society. ‘Go Greyhound and Leave the Driving to Us’ and the offer of up to ninety-nine days of bus travel for $99.00 were attractive proposals. By now the bus industry was differentiating among its clients. There was a market for regular route travel among those who did not have access to an automobile or who preferred not to drive. This market could be increased by specific, well-publicized offers. There was also a potential market for specialized travel in the leisure sector. While middle-class Americans might not want to experience the inconvenience of scheduled journeys, they could be persuaded to charter a bus for special trips, for example outings by the church choir and the youth club or to sports events and art galleries. They could also be persuaded to join a tour group, as the price of the vacation would ensure like-minded and similarly well-off company. Indeed charter and special services’ income rose during the 1960s.

Not all passengers chose the national bus lines. Indeed there was considerable variety among American bus companies. In some ways smaller companies felt at a disadvantage, but in other ways they clearly won out. Regional operators, like Jefferson Lines in the Midwest or Peter Pan in New England and New York State, remained primarily in transportation services. They operated regular routes on an interstate basis, with charter and special services providing important financial returns. Their durability in business was related to their local reputation and standing for service, which they were able to exploit. Local companies like Badger Coaches in Madison and Milwaukee, Wisconsin or Wolf’s Bus Line of York Spring, Pennsylvania frequently relied on charter and special work, often within a two hundred mile radius. When they ran regular services, these were on intrastate routes. They frequently filled the gaps left by their larger counterparts. The bus industry was diversified.

The bus industry in the United States had always offered its services to a minority of the traveling public, but by the 1960s it had settled on catering to a smaller proportion of the nation’s travelers. For the rest of the century it would struggle to retain these customers. More people took the bus than took the train because the bus, as a flexible and relatively low-cost vehicle, was able to serve more urban and rural communities and to serve them economically. But in an era punctuated by economic crises and rising energy prices, the federal government first intervened to protect a special interest group and then stepped back from managing transportation policy in the public interest, with its concerns for communal values and social infrastructure. Though never acting consistently, it became more susceptible to the economic arguments for free market competition and to the personal concerns of Americans as individuals. The bus industry thus faced serious problems in its efforts to provide a well-run and effective service in a nation dominated by automobile owners and air travelers.

By the 1970s the economic difficulties faced by buses, and more urgently by trains, resulted in public investigations. The crisis in public ground transportation emerged first on the railroads because freight had been cross-subsidizing passengers for years and the companies had withdrawn from unprofitable passenger services whenever possible. Pressured by an active rail lobby and concerned to ensure a minimum route network, Congress intervened with a subsidy in 1970 and created the National Rail Passenger Corporation, better known as Amtrak, to run passenger operations. Though train services improved, continuing federal subsidies were required. Intercity bus operators were outraged both by the creation of Amtrak and by the ensuing cheaper rail fares, and complained about unfair competition throughout the decade. Their efforts to remain competitive with their long-standing rival, especially in the busy northeastern corridor of the United States, proved to be very tough, and the revenues of the large bus operators dropped. Losses, however, were not solely due to railroad activities. Airlines continued to enlarge their share of long-distance travel, stimulated by greater use of wide-bodied jet aircraft that increased speed and narrowed the relative price gap between plane and bus fares. At the same time automobile ownership and use continued to grow, with over a third of American households possessing two or more vehicles. Competition from both public and private modes of transport became very intense.

This competition, however, could not fully explain the plight of the American bus industry. The troubled economic conditions of the 1970s required organizational readjustments. In a period marked by high unemployment and high inflation, the bus industry found that its receipts did not match its higher production costs. Higher labor costs, significant increases in fuel costs and mounting charges for new vehicles meant that bus companies were unable to finance their operations from their profits. Outside investment funds were needed. But these were slow to materialize because the bus industry was perceived to be in difficulties. Both the trade association, the American Bus Association (ABA), and the major carriers discussed possible solutions, including cutting labor costs, finding methods of increasing productivity, promoting marketing drives for both regular route and special services, and taking on more small freight business. But these efforts were of little avail so long as the industry as a whole lacked federal government backing. Any improvements made by carriers needed to fit into a national transportation infrastructure that recognized the value of bus services as the only source of public transport in some communities. Individual travel and transportation decisions might be considered private decisions, but they had public value and consequences. Two main policies were possible in the 1970s: supporting the bus industry financially within the existing transportation structure, or altering the framework to stimulate more competition and thus, it was hoped, greater efficiency.

The bus industry initially favored government financial assistance as the way forward. In congressional hearings in 1977 bus delegates proposed a revitalization strategy that included capital grants, operating subsidies, tax concessions and regulatory reform aimed in particular at rate flexibility. The Surface Transportation Assistance Act (1978) authorized limited funds in the hope of some industry recovery. But this assistance had only a temporary impact in the late 1970s, because by then many government representatives, their advisors, economists and business managers were more interested in moving public policy away from government intervention, whether in terms of management, grants or planning. In an era of conservative politics the mood of the country moved in favor of free market enterprise. Within a few years much of the nation’s transport was partially deregulated. In 1978 the Airline Deregulation Act gave airlines considerable freedom in pricing policies and in entry to and exit from routes. In 1980 both trucks and railroads were substantially deregulated. In 1982 it was the turn of the buses. The Bus Regulatory Reform Act of that year did not completely deregulate the industry, but it did noticeably lessen governmental authority. Entry into the business was liberalized, state regulations about exit from unprofitable routes were eased and price flexibility was granted on fares.

The long-distance bus industry now faced a highly competitive transportation environment. Not only did companies engage in price warfare over potentially profitable bus routes while abandoning marginal routes, but they also had to contest for passengers with the new low-cost deregulated airlines and for package freight with trucks. Companies made considerable efforts to adjust to the new conditions by lowering prices, improving facilities, especially terminals, investing in new coaches, making rural connections with independent feeder lines and establishing computer systems to assist with ticketing and routing. Their most contentious adjustment came in the area of industrial relations. Here the larger operations ran into difficulties. Facing competition from smaller companies who had hired cheaper labor, they needed to negotiate wage reductions and new conditions with their unionized work force. In 1982 Trailways Lines agreed to a settlement with the Amalgamated Transit Union (ATU) that froze wages at a level already considerably lower than that of Greyhound, which then sought similar wage reductions. Resistance led to a seven-week strike in 1983. But the resulting settlement was relatively short-lived. Negotiations for a new drivers’ contract broke down and ended in more strike action in 1990. Violence followed as the company hired replacement drivers and continued to operate its buses. The ensuing costs of countering the violence, together with reduced income from services, instigated a financial crisis. Greyhound filed for bankruptcy under Chapter 11 in June 1990 to reorder its affairs. The restructured corporation emerged as a smaller operation able to compete in the deregulated world of transportation.

In the 1990s the long-distance bus industry reshaped itself to cater to a variety of markets. Composed of hundreds of operators, ranging from large to small, but primarily small, it remained an essential, albeit minor, part of the United States’ transportation network. Motor coaches provided regular route services to some 4,000 communities and had the capacity to serve all groups of people with their leisure, charter, small package, airport and commuter services. They were a vital ingredient of rural life and offered important intermodal links. Indeed for the country as a whole buses carried more commercial passengers than any of their transportation rivals. As a flexible and reasonably priced means of travel they found one niche catering to specific groups in society on scheduled routes and another in leisure activities. Though perceived as a secondary form of transportation, the bus industry in fact has provided and continues to provide crucial services for many Americans.

References

Crandall, Burton B. The Growth of the Intercity Bus Industry. Syracuse: Syracuse University, 1954.

Jackson, Carlton. Hounds of the Road: A History of the Greyhound Bus Company. Bowling Green, OH: Bowling Green University Popular Press, 1984.

Meier, Albert E. and John P. Hoschek. Over the Road. A History of Intercity Bus Transportation in the United States. Upper Montclair, NJ: Motor Bus Society, 1975.

Schisgall, Oscar. The Greyhound Story. From Hibbing to Everywhere. Chicago: J.C. Ferguson, 1985.

Taff, Charles A. Commercial Motor Transportation. Homewood, IL: Richard D. Irwin, Inc., 1951; 7th edition, Centreville, MD: Cornell Maritime Press, 1986.

Thompson, Gregory L. The Passenger Train in the Motor Age. California’s Rail and Bus Industries, 1910-1941. Columbus: Ohio State University Press, 1993.

Walsh, Margaret. Making Connections. The Long-Distance Bus Industry in the USA. Aldershot, UK: Ashgate Publishing, 2000.

Citation: Walsh, Margaret. “The Bus Industry in the United States”. EH.Net Encyclopedia, edited by Robert Whaples. January 27, 2003. URL http://eh.net/encyclopedia/the-bus-industry-in-the-united-states/

A Concise History of America’s Brewing Industry

Martin H. Stack, Rockhurst University

1650 to 1800: The Early Days of Brewing in America

Brewing in America dates to the first communities established by English and Dutch settlers in the early to mid seventeenth century. Dutch immigrants quickly recognized that the climate and terrain of present-day New York were particularly well suited to brewing beer and to growing hops and the barley used for malt, two of beer’s essential ingredients. A 1660 map of New Amsterdam details twenty-six breweries and taverns, a clear indication that producing and selling beer were popular and profitable trades in the American colonies (Baron, Chapter Three). Despite the early popularity of beer, other alcoholic beverages steadily grew in importance and by the early eighteenth century several of them had eclipsed beer commercially.

Between 1650 and the Civil War, the market for beer did not change a great deal: both production and consumption remained essentially local affairs. Bottling was expensive, and beer did not travel well. Nearly all beer was stored in, and then served from, wooden kegs. While there were many small breweries, it was not uncommon for households to brew their own beer. In fact, several of America’s founding fathers brewed their own beer, including George Washington and Thomas Jefferson (Baron, Chapters 13 and 16).

1800-1865: Brewing Begins to Expand

National production statistics are unavailable before 1810, an omission which reflects the rather limited importance of the early brewing industry. In 1810, America’s 140 commercial breweries collectively produced just over 180,000 barrels of beer.[1] During the next fifty years, total beer output continued to increase, but production remained small scale and local. This is not to suggest, however, that brewing could not prove profitable. In 1797, James Vassar founded a brewery in Poughkeepsie, New York whose successes echoed far beyond the brewing industry. After several booming years Vassar ceded control of the brewery to his two sons, Matthew and John. Following the death of his brother in an accident and a fire that destroyed the plant, Matthew Vassar rebuilt the brewery in 1811. Demand for his beer grew rapidly, and by the early 1840s, the Vassar brewery produced nearly 15,000 barrels of ale and porter annually, a significant amount for this period. Continued investment in the firm facilitated even greater production levels, and by 1860 its fifty employees turned out 30,000 barrels of beer, placing it amongst the nation’s largest breweries. Today, the Vassar name is better known for the college Matthew Vassar endowed in 1860 with earnings from the brewery (Baron, Chapter 17).

1865-1920: Brewing Emerges as a Significant Industry

While there were several hundred small-scale, local breweries in the 1840s and 1850s, beer did not become a mass-produced, mass-consumed beverage until the decades following the Civil War. Several factors contributed to beer’s emergence as the nation’s dominant alcoholic drink. First, widespread immigration from strong beer-drinking countries such as Britain, Ireland, and Germany contributed to the creation of a beer culture in the U.S. Second, America was becoming increasingly industrialized and urbanized during these years, and many workers in the manufacturing and mining sectors drank beer during work and after. Third, many workers began to receive higher wages and salaries during these years, enabling them to buy more beer. Fourth, beer benefited from members of the temperance movement who advocated lower-alcohol beer over higher-alcohol spirits such as rum or whiskey.[2] Fifth, a series of technological and scientific developments fostered greater beer production and the brewing of new styles of beer. For example, artificial refrigeration enabled brewers to brew during warm American summers, and pasteurization, the eponymous procedure developed by Louis Pasteur, helped extend packaged beer’s shelf life, making storage and transportation more reliable (Stack, 2000). Finally, American brewers began brewing lager beer, a style that had long been popular in Germany and other continental European countries. Traditionally, beer in America meant British-style ale. Ales are brewed with top-fermenting yeasts, and this category ranges from light pale ales to chocolate-colored stouts and porters. During the 1840s, American brewers began making German-style lager beers. In addition to requiring a longer maturation period than ales, lager beers use a bottom-fermenting yeast and are much more temperature sensitive. Lagers require a great deal of care and attention from brewers, but to the increasing numbers of nineteenth-century German immigrants, lager was synonymous with beer. As the nineteenth century wore on, lager production soared, and by 1900, lager outsold ale by a significant margin.

Together, these factors helped transform the market for beer. Total beer production increased from 3.6 million barrels in 1865 to over 66 million barrels in 1914. By 1910, brewing had grown into one of the leading manufacturing industries in America. Yet, this increase in output did not simply reflect America’s growing population. While the number of beer drinkers certainly did rise during these years, perhaps just as importantly, per capita consumption also rose dramatically, from under four gallons in 1865 to 21 gallons in the early 1910s.

Table 1: Industry Production and per Capita Consumption, 1865-1915

Year National Production (millions of barrels) Per Capita Consumption (gallons)
1865 3.7 3.4
1870 6.6 5.3
1875 9.5 6.6
1880 13.3 8.2
1885 19.2 10.5
1890 27.6 13.6
1895 33.6 15.0
1900 39.5 16.0
1905 49.5 18.3
1910 59.6 20.0
1915 59.8 18.7

Source: United States Brewers Association, 1979 Brewers Almanac, Washington, DC: 12-13.
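The per capita column in Table 1 can be reproduced from the production column with a simple calculation, assuming the conventional 31-gallon U.S. beer barrel and census population figures (the population number used below, roughly 92 million for 1910, is an approximation added here for illustration):

\[
\text{per capita consumption} = \frac{\text{barrels produced} \times 31 \text{ gallons per barrel}}{\text{population}}
\]

For 1910, 59.6 million barrels times 31 gallons is roughly 1,848 million gallons, which divided by about 92 million people gives approximately 20 gallons per person, matching the figure in the table.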

An equally impressive transformation was underway at the level of the firm. Until the 1870s and 1880s, American breweries had been essentially small scale, local operations. By the late nineteenth century, several companies began to increase their scale of production and scope of distribution. Pabst Brewing Company in Milwaukee and Anheuser-Busch in St. Louis became two of the nation’s first nationally-oriented breweries, and the first to surpass annual production levels of one million barrels. By utilizing the growing railroad system to distribute significant amounts of their beer into distant beer markets, Pabst, Anheuser-Busch and a handful of other enterprises came to be called “shipping” breweries. Though these firms became very powerful, they did not control the pre-Prohibition market for beer. Rather, an equilibrium emerged that pitted large and regional shipping breweries that incorporated the latest innovations in pasteurizing, bottling, and transporting beer against a great number of locally-oriented breweries that mainly supplied draught beer in wooden kegs to their immediate markets (Stack, 2000).

Table 2: Industry Production, the Number of Breweries, and Average Brewery Size

1865-1915

Year National Production (millions of barrels) Number of Breweries Average Brewery Size (barrels)
1865 3.7 2,252 1,643
1870 6.6 3,286 2,009
1875 9.5 2,783 3,414
1880 13.3 2,741 4,852
1885 19.2 2,230 8,610
1890 27.6 2,156 12,801
1895 33.6 1,771 18,972
1900 39.5 1,816 21,751
1905 49.5 1,847 26,800
1910 59.6 1,568 38,010
1915 59.8 1,345 44,461

Source: United States Brewers Association, 1979 Brewers Almanac, Washington DC: 12-13.
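The average brewery size column in Table 2 is simply national production divided by the number of breweries. For 1915, for example,

\[
\frac{59{,}800{,}000 \text{ barrels}}{1{,}345 \text{ breweries}} \approx 44{,}461 \text{ barrels per brewery},
\]

while the same calculation for 1865 gives roughly 1,600 barrels, underscoring how dramatically the scale of the typical brewery grew over these fifty years.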

Between the Civil War and national prohibition, the production and consumption of beer greatly outpaced that of spirits. Though consumption levels of absolute alcohol had peaked in the early 1800s, temperance and prohibition forces grew increasingly vocal and active as the century wore on, and by the late 1800s, they constituted one of the best-organized political pressure groups of the day (Kerr, Chapter 5, 1985). Their efforts culminated in the ratification of the Eighteenth Amendment on January 29, 1919, which, along with the Volstead Act, made the production and distribution of any beverages with more than one-half of one percent alcohol illegal. While estimates of alcohol activity during Prohibition’s thirteen-year reign, from 1920 to 1933, are imprecise, beer consumption almost certainly fell, though spirit consumption may have remained constant or actually even increased slightly (Rorabaugh, Appendices).

1920-1933: The Dark Years, Prohibition

The most important decision all breweries had to make after 1920 was what to do with their plants and equipment. As they grappled with this question, they made implicit bets as to whether Prohibition would prove to be merely a temporary irritant. Pessimists immediately divested themselves of all their brewing equipment, often at substantial losses. Other firms decided to carry on with related products, and so stay prepared for any modifications to the Volstead Act which would allow for beer. Schlitz, Blatz, Pabst, and Anheuser-Busch, the leading pre-Prohibition shippers, began producing near beer, a malt beverage with under one-half of one percent alcohol. While it was not a commercial success, its production allowed these firms to keep current their beer-making skills. Anheuser-Busch called its near beer “Budweiser” which was “simply the old Budweiser lager beer, brewed according to the traditional method, and then de-alcoholized. … August Busch took the same care in purchasing the costly materials as he had done during pre-prohibition days” (Krebs and Orthwein, 1953, 165). Anheuser-Busch and some of the other leading breweries were granted special licenses by the federal government for brewing alcohol greater than one half of one percent for “medicinal purposes” (Plavchan, 1969, 168). Receiving these licenses gave these breweries a competitive advantage, as they were able to keep their brewing staff active in beer-making.

The shippers, and some local breweries, also made malt syrup. While they officially advertised it as an ingredient for baking cookies, and while its production was left alone by the government, it was readily apparent to all that its primary use was for homemade beer.

Of perhaps equal importance to the day-to-day business activities of the breweries were their investment decisions. Here, as in so many other places, the shippers exhibited true entrepreneurial insight. Blatz, Pabst, and Anheuser-Busch all expanded their inventories of automobiles and trucks, which became key assets after repeal. In the 1910s, Anheuser-Busch invested in motorized vehicles to deliver beer; by the 1920s, it was building its own trucks in great numbers. While it never sought to become a major producer of delivery vehicles, its forward expansion in this area reflected its appreciation of the growing importance of motorized delivery, an insight on which it built after repeal.

The leading shippers also furthered their investments in bottling equipment and machinery, which were used in the production of near beer, root beer, ginger ale, and soft drinks. These products were not the commercial successes beer had been, but they gave breweries important experience in bottling. While 85 percent of pre-Prohibition beer was kegged, during Prohibition over 80 percent of near beer, and a smaller though growing percentage of soft drinks, was sold in bottles.

This remarkable increase in packaged product impelled breweries to refine their packaging skills and modify their retailing practices. As they sold near beer and soft drinks to drugstores and drink stands, they encountered new marketing problems (Cochran, 1948, 340). Experience gained during these years helped the shippers meet the radically different distribution requirements of the post-repeal beer market.

The shippers were learning about canning as well as bottling. In 1925, Blatz’s canned malt syrup sales were more than $1.3 million, significantly greater than its bulk sales. In the early 1920s, Anheuser-Busch used cans from the American Can Company for its malt syrup; that firm would gain national prominence in 1935 for helping to pioneer the beer can. Thus, the canning of malt syrup helped create the first contacts between the leading shipping brewers and the American Can Company (Plavchan, 1969, 178; Conny, 1990, 35-36; and American Can Company, 1969, 7-9).

These expensive investments in automobiles and bottling equipment were paid for in part by selling off branch properties, namely saloons (see Cochran, 1948; Plavchan, 1969; Krebs and Orthwein, 1953). Some breweries had equipped their saloons with furniture and bar fixtures, but as Prohibition wore on, they progressively divested themselves of these assets.

1933-1945: The Industry Reawakens after the Repeal of Prohibition

In April 1933 Congress amended the Volstead Act to allow for 3.2 percent beer. Eight months later, in December, the states ratified the Twenty-first Amendment, officially repealing Prohibition. From repeal until World War II, the brewing industry struggled to regain its pre-Prohibition fortunes. Prior to Prohibition, breweries had owned or controlled many saloons, which were the dominant retail outlets for alcohol. To prevent the excesses that had been attributed to saloons from recurring, post-repeal legislation forbade alcohol manufacturers from owning bars or saloons, requiring them instead to sell their beer to wholesalers, which in turn distributed the beverages to retailers.

Prohibition meant the end of many small breweries that had been profitable, and that, taken together, had posed a formidable challenge to the large shipping breweries. The shippers, who had much greater investments, were not as inclined to walk away from brewing.[3] After repeal, therefore, they reopened for business in a radically new environment, one in which their former rivals were absent or disadvantaged. From this favorable starting point, they continued to consolidate their position. Several hundred locally oriented breweries did reopen, but were unable to regain their pre-Prohibition competitive edge, and they quickly exited the market. From 1935 to 1940, the number of breweries fell by ten percent.

Table 3: U.S. Brewing Industry Data, 1910-1940

Year Number of Breweries Number of Barrels Produced (millions) Average Barrelage per Brewery Largest Firm Production (millions of barrels) Per Capita Consumption (gallons)
1910 1,568 59.5 37,946 1.5 20.1
1915 1,345 59.8 44,461 1.1 18.7
1934 756 37.7 49,867 1.1 7.9
1935 766 45.2 59,008 1.1 10.3
1936 739 51.8 70,095 1.3 11.8
1937 754 58.7 77,851 1.8 13.3
1938 700 56.3 80,429 2.1 12.9
1939 672 53.8 80,059 2.3 12.3
1940 684 54.9 80,263 2.5 12.5

Source: Cochran, 1948; Krebs and Orthwein, 1953; and United States Brewers Almanac, 1956.

Annual industry output, after struggling in 1934 and 1935, began to approach the levels reached in the 1910s. Yet these totals are somewhat misleading, as the population of the U.S. had risen from between 92 and 98 million in the 1910s to between 125 and 130 million in the 1930s (Brewers Almanac, 1956, 10). This translated directly into the lower per capita consumption levels reported in Table 3.
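
The columns of Table 3 fit together arithmetically: average barrelage is production divided by the number of breweries, and per capita consumption is production, converted to gallons at 31 gallons per barrel (footnote [1]), divided by population. Below is a minimal Python sketch of that check using the 1940 row; the population figure of roughly 132 million is an approximation, not a number taken from the source.

# Consistency check of the 1940 row of Table 3.
# Assumption: U.S. population of roughly 132 million in 1940 (approximate, not from the source).
BARREL_GALLONS = 31          # per footnote [1]

breweries_1940 = 684
barrels_1940 = 54.9e6        # barrels produced
population_1940 = 132e6      # approximate

avg_barrelage = barrels_1940 / breweries_1940
per_capita_gallons = barrels_1940 * BARREL_GALLONS / population_1940

print(f"Average barrelage per brewery: {avg_barrelage:,.0f}")        # ~80,263, matching Table 3
print(f"Implied per capita consumption: {per_capita_gallons:.1f}")   # ~12.9 gallons vs. 12.5 reported

The implied figure of roughly 12.9 gallons is close to the 12.5 gallons reported; the small gap presumably reflects the approximate population figure used here and the difference between barrels produced and barrels withdrawn for sale.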

The largest firms grew even larger in the years following repeal, quickly surpassing their pre-Prohibition annual production levels. The post-repeal industry leaders, Anheuser-Busch and Pabst, doubled their annual production levels from 1935 to 1940.

The growing importance of the leading shippers during this period should not be taken for granted, for it marked a momentous reversal of pre-Prohibition trends. While medium-sized breweries had dominated industry output in the years leading up to Prohibition, the shippers regained in the 1930s the dynamism they had manifested from the 1870s to the 1890s. Table 4 compares the fortunes of the shippers with those of the industry as a whole. From 1877 to 1895, Anheuser-Busch and Pabst, the two most prominent shippers, grew much faster than the industry, and their successes helped pull the industry along. This picture changed during the years 1895 to 1915, when the industry grew much faster than the shippers (Stack, 2000). With the repeal of Prohibition, the tide turned again: from 1934 to 1940, the brewing industry grew very slowly, while Anheuser-Busch and Pabst enjoyed tremendous increases in their annual sales.

Table 4: Percentage Change in Output among Shipping Breweries, 1877-1940

Period Anheuser-Busch Pabst Industry
1877-1895 1,106% 685% 248%
1895-1914 58% -23% 78%
1934-1940 173% 87% 26%

Source: Cochran, 1948; Krebs and Orthwein, 1953; and Brewers Almanac, 1956.

National and regional shippers increasingly dominated the market. Breweries such as Anheuser-Busch, Pabst, and Schlitz came to exemplify the modern business enterprise described by Alfred Chandler (Chandler, 1977), adeptly integrating mass production and mass distribution.

Table 5: Leading Brewery Output Levels, 1938-1940

Brewery Plant Location(s) 1938 (barrels) 1939 (barrels) 1940 (barrels)
Anheuser-Busch St. Louis, MO 2,087,000 2,306,000 2,468,000
Pabst Brewing Milwaukee, WI; Peoria Heights, IL 1,640,000 1,650,000 1,730,000
Jos. Schlitz Milwaukee, WI 1,620,000 1,651,083 1,570,000
F & M Schafer Brooklyn, NY 1,025,000 1,305,000 1,390,200
P. Ballantine Newark, NJ 1,120,000 1,289,425 1,322,346
Jacob Ruppert New York, NY 1,417,000 1,325,350 1,228,400
Falstaff Brewing St. Louis, MO; New Orleans, LA; Omaha, NE 622,000 622,004 684,537
Duquesne Brewing Pittsburgh, PA; Carnegie, PA; McKees Rock, PA 625,000 680,000 690,000
Theo. Hamm Brewing St. Paul, MN 750,000 780,000 694,200
Liebman Breweries Brooklyn, NY 625,000 632,558 670,198

Source: Fein, 1942, 35.

World War I had presented a direct threat to the brewing industry. Government officials used wartime emergencies to impose grain rationing, a step that led to the lowering of beer’s alcohol level to 2.75 percent. World War II had a completely different effect on the industry: rather than output falling, beer production rose from 1941 to 1945.

Table 6: Production and Per Capita Consumption, 1940-1945

Year Number of Breweries Number of barrels withdrawn (millions) Per Capita Consumption (gallons)
1940 684 54.9 12.5
1941 574 55.2 12.3
1942 530 63.7 14.1
1943 491 71.0 15.8
1944 469 81.7 18.0
1945 468 86.6 18.6

Source: 1979 USBA, 12-14.

During the war, the industry mirrored the nation at large by casting off its sluggish depression-era growth. As the war economy boomed, consumers, both troops and civilians, used some of their wages for beer, and per capita consumption grew by 50 percent between 1940 and 1945.

1945-1980: Following World War II, the Industry Continues to Grow and to Consolidate

Yet the take-off registered during World War II was not sustained during the ensuing decades. Total production continued to grow, but at a slower rate than the overall population.

Table 7: Production and per Capita Consumption, 1945-1980

Year Number of Breweries Number of barrels withdrawn (millions) Per Capita Consumption (gallons)
1945 468 86.6 18.6
1950 407 88.8 17.2
1955 292 89.8 15.9
1960 229 94.5 15.4
1965 197 108.0 16.0
1970 154 134.7 18.7
1975 117 157.9 21.1
1980 101 188.4 23.1

Source: 1993 USBA, 7-8.

The period following WWII was characterized by great industry consolidation. Total output continued to grow, though per capita consumption fell into the 1960s before rebounding to levels above 21 gallons per capita in the 1970s, the highest rates in the nation’s history. Not since the 1910s had consumption levels topped 21 gallons a year; however, there was a significant difference. Prior to Prohibition, most consumers bought their beer from local or regional firms, and over 85 percent of the beer was served from casks in saloons. Following World War II, two significant changes radically altered the market for beer. First, the total number of breweries operating fell dramatically. This signaled the growing importance of the large national breweries. While many of these firms (Anheuser-Busch, Pabst, Schlitz, and Blatz) had grown into prominence in the late nineteenth century, the scale of their operations grew tremendously in the years after the repeal of Prohibition. From the mid-1940s to 1980, the five largest breweries saw their share of the national market grow from 19 to 75 percent (Adams, 125).

Table 8: Concentration of the Brewing Industry, 1947-1981

Year Five Largest (%) Ten Largest (%) Herfindahl Index[4]
1947 19.0 28.2 140
1954 24.9 38.3 240
1958 28.5 45.2 310
1964 39.0 58.2 440
1968 47.6 63.2 690
1974 64.0 80.8 1080
1978 74.3 92.3 1292
1981 75.9 93.9 1614

Source: Adams, 1995, 125.
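
As footnote [4] explains, the Herfindahl index in Table 8 is the sum of the squared market shares of the largest firms. The short Python sketch below illustrates the calculation; the market shares used are hypothetical, chosen only to show how a value on the order of the 1981 figure can arise, and are not the actual firm-level shares behind Table 8.

# Herfindahl index: sum of squared market shares, in percentage points (see footnote [4]).
def herfindahl(shares_pct):
    return sum(s ** 2 for s in shares_pct)

# Hypothetical shares for the ten largest firms -- illustrative only, not data from the source.
hypothetical_shares = [28, 22, 13, 8, 5, 4, 3, 2, 2, 1]
print(herfindahl(hypothetical_shares))   # 1560, the same order of magnitude as the 1981 entry in Table 8

Because shares are squared, the index rises sharply as output concentrates in a few firms, which is why it climbs more than tenfold in Table 8 even though the five-firm share only quadruples.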

The other important change concerned how beer was sold. Prior to Prohibition, nearly all beer was sold on-tap in bars or saloons; only about 10-15 percent of beer was bottled, and it was much more expensive than draught beer. In 1935, a few years after repeal, the American Can Company successfully canned beer for the first time. The spread of home refrigeration helped spur consumer demand for canned and bottled beer, and from 1935 onwards draught beer’s share of sales fell markedly.

Table 9: Packaged vs. Draught Sales, 1935-1980

Year Packaged Sales (bottled and canned) as a Percentage of Total Sales Draught Sales as a Percentage of Total Sales
1935 30 70
1940 52 48
1945 64 36
1950 72 28
1955 78 22
1960 81 19
1965 82 18
1970 86 14
1975 88 12
1980 88 12

Source: 1979 USBA, 20; 1993 USBA, 14.

The rise of packaged beer contributed to the growing industry consolidation detailed in Table 8.

1980-2000: Continued Growth, the Microbrewery Movement, and International Dimensions of the Brewing Industry

From 1980 to 2000, beer production continued to rise, reaching nearly 200 million barrels in 2000. Per capita consumption hit its highest recorded level in 1981 with 23.8 gallons. Since then, though, consumption levels have dropped a bit, and during the 1990s, consumption was typically in the 21-22 gallon range.

Table 10: Production and Per Capita Consumption, 1980-1990

Year Number of Breweries Number of barrels withdrawn (millions) Per Capita Consumption (gallons)
1980 101 188.4 23.1
1985 105 193.8 22.7
1990 286 201.7 22.6

Source: 1993 USBA, 7-8.

Beginning around 1980, the long decline in the number of breweries slowed and then was reversed. Judging solely by the number of breweries in operation, it appeared that a significant change had occurred: the number of firms began to increase, and by the late 1990s, hundreds of new breweries were operating in the U.S. However, this number is rather misleading: the overall industry remained very concentrated, with a three-firm concentration ratio of 81 percent in 2000.

Table 11: Production Levels of the Leading Breweries, 2000

Brewery Production (millions of barrels)
Anheuser-Busch 99.2
Miller 39.8
Coors 22.7
Total Domestic Sales 199.4

Source: Beverage Industry, May 2003, 19.
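
The 81 percent three-firm concentration ratio cited above follows directly from Table 11: the combined output of Anheuser-Busch, Miller, and Coors divided by total domestic sales. A quick check in Python:

# Three-firm concentration ratio for 2000, from Table 11 (millions of barrels).
big_three = 99.2 + 39.8 + 22.7      # Anheuser-Busch, Miller, Coors
total_domestic = 199.4
print(f"CR3 = {100 * big_three / total_domestic:.0f}%")   # -> CR3 = 81%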

Although entrepreneurs and beer enthusiasts began hundreds of new breweries during this period, most of them were very small, with annual production levels of between 5,000 and 100,000 barrels. Reflecting their small size, these new firms were nicknamed microbreweries. Collectively, microbreweries have grown to account for approximately 5-7 percent of the total beer market.

Microbreweries represented a new strategy in the brewing industry: rather than competing on the basis of price or advertising, they attempted to compete on the basis of inherent product characteristics. They emphasized the freshness of locally produced beer; they experimented with much stronger malt and hop flavors; they tried new and long-discarded brewing recipes, often reintroducing styles that had been popular in America decades earlier. Together, these breweries have had an influence much greater than their market share would suggest. The big three breweries, Anheuser-Busch, Miller, and Coors, have all tried to incorporate ideas from the microbrewery movement. They have introduced new marquee brands intended to compete for some of this market, and when these failed, they bought shares in, or outright control of, some microbreweries.

A final dimension of the brewing industry that has been changing concerns the emerging global market for beer. Until very recently, America was the biggest beer market in the world; as a result, American breweries have not historically looked abroad for additional sales, preferring to expand their share of the domestic market.[5] In the 1980s, Anheuser-Busch began to systematically evaluate its market position. While it had done very well in the U.S., it had not tapped markets overseas; as a result, it began a series of international business dealings. It gradually moved from exporting small amounts of its flagship brand Budweiser to entering into licensing accords whereby breweries in a range of countries such as Ireland, Japan, and Argentina began to brew Budweiser for sale in their domestic markets. In 1995, it established its first breweries outside of the U.S., one in England for the European market and the other in China, to service the growing markets in China and East Asia.[6]

While U.S. breweries such as Anheuser-Busch have only recently begun to explore the opportunities abroad, foreign firms have long appreciated the significance of the American market. Beginning in the late 1990s, imports began to increase their market share and by the early 2000s, they accounted for approximately 12 percent of the large U.S. market. Imports and microbrews typically cost more than the big three’s beers and they provide a wider range of flavors and tastes. One of the most interesting developments in the international market for beer occurred in 2002 when South African Breweries (SAB), the dominant brewery in South Africa, and an active firm in Europe, acquired Miller, the second largest brewery in the U.S. Though not widely discussed in the U.S., this may portend a general move towards increased global integration in the world market for beer.

Annotated Bibliography

Adams, Walter and James Brock, editors. The Structure of American Industry, ninth edition. Englewood Cliffs, New Jersey: Prentice Hall, 1995.

Apps, Jerry. Breweries of Wisconsin. Madison, WI: University of Wisconsin Press, 1992. Detailed examination of the history of breweries and brewing in Wisconsin.

Baron, Stanley. Brewed In America: A History of Beer and Ale in the United States. Boston: Little, Brown, and Co., 1962: Very good historical overview of brewing in America, from the Pilgrims through the post-World War II era.

Baum, Dan. Citizen Coors: A Grand Family Saga of Business, Politics, and Beer. New York: Harper Collins, 2000. Very entertaining story of the Coors family and the brewery they made famous.

Beverage Industry (May 2003): 19-20.

Blum, Peter. Brewed In Detroit: Breweries and Beers since 1830. Detroit: Wayne State University Press, 1999. Very good discussion of Detroit’s major breweries and how they evolved. Particularly strong on the Stroh brewery.

Cochran, Thomas. Pabst Brewing Company: The History of an American Business. New York: New York University Press, 1948: A very insightful, well-researched, and well- written history of one of America’s most important breweries. It is strongest on the years leading up to Prohibition.

Downard, William. The Cincinnati Brewing Industry: A Social and Economic History. Ohio University Press, 1973: A good history of brewing in Cincinnati; particularly strong in the years prior to Prohibition.

Downard, William. Dictionary of the History of the American Brewing and Distilling Industries. Westport, CT: Greenwood Press, 1980: Part dictionary and part encyclopedia, a useful compendium of terms, people, and events relating to the brewing and distilling industries.

Duis, Perry. The Saloon: Public Drinking in Chicago and Boston, 1880-1920. Urbana: University of Illinois Press, 1983: An excellent overview of the institution of the saloon in pre-Prohibition America.

Eckhardt, Fred. The Essentials of Beer Style. Portland, OR: Fred Eckhardt Communications, 1995: A helpful introduction into the basics of how beer is made and how beer styles differ.

Ehert, George. Twenty-Five Years of Brewing. New York: Gast Lithograph and Engraving, 1891: An interesting snapshot of an important late nineteenth century New York City brewery.

Elzinga, Kenneth. “The Beer Industry.” In The Structure of American Industry, ninth edition, edited by W. Adams and J. Brock. Englewood Cliffs, New Jersey: Prentice Hall, 1995: A good overview summary of the history, structure, conduct, and performance of America’s brewing industry.

Fein, Edward. “The 25 Leading Brewers in the United States Produce 41.5% of the Nation’s Total Beer Output.” Brewers Digest 17 (October 1942): 35.

Greer, Douglas. “Product Differentiation and Concentration in the Brewing Industry,” Journal of Industrial Economics 29 (1971): 201-19.

Greer, Douglas. “The Causes of Concentration in the Brewing Industry,” Quarterly Review of Economics and Business 21 (1981): 87-106.

Greer, Douglas. “Beer: Causes of Structural Change.” In Industry Studies, second edition, edited by Larry Duetsch, Armonk, New York: M.E. Sharpe, 1998.

Hernon, Peter and Terry Ganey. Under the Influence: The Unauthorized Story of the Anheuser-Busch Dynasty. New York: Simon and Schuster, 1991: Somewhat sensationalistic history of the family that has controlled America’s largest brewery, but some interesting pieces on the brewery are included.

Horowitz, Ira and Ann Horowitz. “Firms in a Declining Market: The Brewing Case.” Journal of Industrial Economics 13 (1965): 129-153.

Jackson, Michael. The New World Guide To Beer. Philadelphia: Running Press, 1988: Good overview of the international world of beer and of America’s place in the international beer market.

Keithan, Charles. The Brewing Industry. Washington D.C: Federal Trade Commission, 1978.

Kerr, K. Austin. Organized for Prohibition. New Haven: Yale Press, 1985: Excellent study of the rise of the Anti-Saloon League in the United States.

Kostka, William. The Pre-prohibition History of Adolph Coors Company: 1873-1933. Golden, CO: self-published book, Adolph Coors Company, 1973: A self-published book by the Coors company that provides some interesting insights into the origins of the Colorado brewery.

Krebs, Roland and Orthwein, Percy. Making Friends Is Our Business: 100 Years of Anheuser-Busch. St. Louis, MO: self-published book, Anheuser-Busch, 1953: A self-published book by the Anheuser-Busch brewery that has some nice illustrations and data on firm output levels. The story is nicely told but rather self-congratulatory.

“Large Brewers Boost Share of U.S. Beer Business,” Brewers Digest, 15 (July 1940): 55-57.

Leisley, Bruce. A History of Leisley Brewing. North Newton Kansas: Mennonite Press, 1975: A short but useful history of the Leisley Brewing Company. This was the author’s undergraduate thesis.

Lender, Mark and James Martin. Drinking in America. New York: The Free Press, 1987: Good overview of the social history of drinking in America.

McGahan, Ann. “The Emergence of the National Brewing Oligopoly: Competition in the American Market, 1933-58.” Business History Review 65 (1991): 229-284: Excellent historical analysis of the origins of the brewing oligopoly following the repeal of Prohibition.

McGahan, Ann. “Cooperation in Prices and Capacities: Trade Associations in Brewing after Repeal.” Journal of Law and Economics 38 (1995): 521-559.

Meier, Gary and Meier, Gloria. Brewed in the Pacific Northwest: A History of Beer Making in Oregon and Washington. Seattle: Fjord Press, 1991: A survey of the history of brewing in the Pacific Northwest.

Miller, Carl. Breweries of Cleveland. Cleveland, OH: Schnitzelbank Press, 1998: Good historical overview of the brewing industry in Cleveland.

Norman, Donald. Structural Change and Performance in the U.S. Brewing Industry. Ph.D. dissertation, UCLA, 1975.

One Hundred Years of Brewing. Chicago and New York: Arno Press Reprint, 1903 (Reprint 1974): A very important work. Very detailed historical discussion of the American brewing industry through the end of the nineteenth century.

Persons, Warren. Beer and Brewing In America: An Economic Study. New York: United Brewers Industrial Foundation, 1940.

Plavchan, Ronald. A History of Anheuser-Busch, 1852-1933. Ph.D. dissertation, St. Louis University, 1969: Apart from Cochran’s analysis of Pabst, one of a very few detailed business histories of a major American brewery.

Research Company of America. A National Survey of the Brewing Industry. Self-published, 1941: A well-researched industry analysis with a wealth of information and data.

Rorbaugh, William. The Alcoholic Republic: An American Tradition. New York: Oxford University Press, 1979: Excellent scholarly overview of drinking habits in America.

Rubin, Jay. “The Wet War: American Liquor, 1941-1945.” In Alcohol, Reform, and Society, edited by J. Blocker. Westport, CT: Greenwood Press, 1979: Interesting discussion of American drinking during World War II.

Salem, Frederick. Beer: Its History and Its Economic Value as a National Beverage. New York: Arno Press, 1880 (Reprint 1972): Early but valuable discussion of the American brewing industry.

Scherer, F.M. Industry Structure, Strategy, and Public Policy. New York: Harper Collins, 1996: A very good essay on the brewing industry.

Shih, Ko Ching and C. Ying Shih. American Brewing Industry and the Beer Market. Brookfield, WI, 1958: Good overview of the industry with some excellent data tables.

Skilnik, Bob. The History of Beer and Brewing in Chicago: 1833-1978. Pogo Press, 1999: Good overview of the history of brewing in Chicago.

Smith, Greg. Beer in America: The Early Years, 1587 to 1840. Boulder, CO: Brewers Publications, 1998: Well written account of beer’s development in America, from the Pilgrims to mid-nineteenth century.

Stack, Martin. “Local and Regional Breweries in America’s Brewing Industry, 1865-1920.” Business History Review 74 (Autumn 2000): 435-63.

Thomann, Gallus. American Beer: Glimpses of Its History and Description of Its Manufacture. New York: United States Brewing Association, 1909: Interesting account of the state of the brewing industry at the turn of the twentieth century.

United States Brewers Association. Annual Year Book, 1909-1921. Very important primary source document published by the leading brewing trade association.

United States Brewers Foundation. Brewers Almanac, published annually, 1941-present: Very important primary source document published by the leading brewing trade association.

Van Wieren, Dale. American Breweries II. West Point, PA: Eastern Coast Brewiana Association, 1995. Comprehensive historical listing of every brewery in every state, arranged by city within each state.


[1] A barrel of beer is 31 gallons. One Hundred Years of Brewing, Chicago and New York: Arno Press Reprint, 1974: 252.

[2] During the nineteenth century, there were often distinctions between temperance advocates, who differentiated between spirits and beer, and prohibition supporters, who campaigned on the need to eliminate all alcohol.

[3] The major shippers may have been taken aback by the loss suffered by Lemp, one of the leading pre-Prohibition shipping breweries. Lemp was sold at auction in 1922 at a loss of 90 percent on the investment (Baron, 1962, 315).

[4] The Herfindahl Index sums the squared market shares of the fifty largest firms.

[5] China overtook the United States as the world’s largest beer market in 2002.

[6] http://www.anheuser-busch.com/over/international.html


Between 1650 and the Civil War, the market for beer did not change a great deal: both production and consumption remained essentially local affairs. Bottling was expensive, and beer did not travel well. Nearly all beer was stored in, and then served from, wooden kegs. While there were many small breweries, it was not uncommon for households to brew their own beer. In fact, several of America’s founding fathers brewed their own beer, including George Washington and Thomas Jefferson (Baron, Chapters 13 and 16).

1800-1865: Brewing Begins to Expand

National production statistics are unavailable before 1810, an omission which reflects the rather limited importance of the early brewing industry. In 1810, America’s 140 commercial breweries collectively produced just over 180,000 barrels of beer.[1] During the next fifty years, total beer output continued to increase, but production remained small scale and local. This is not to suggest, however, that brewing could not prove profitable. In 1797, James Vassar founded a brewery in Poughkeepsie, New York whose successes echoed far beyond the brewing industry. After several booming years Vassar ceded control of the brewery to his two sons, Matthew and John. Following the death of his brother in an accident and a fire that destroyed the plant, Matthew Vassar rebuilt the brewery in 1811. Demand for his beer grew rapidly, and by the early 1840s, the Vassar brewery produced nearly 15,000 barrels of ale and porter annually, a significant amount for this period. Continued investment in the firm facilitated even greater production levels, and by 1860 its fifty employees turned out 30,000 barrels of beer, placing it amongst the nation’s largest breweries. Today, the Vassar name is better known for the college Matthew Vassar endowed in 1860 with earnings from the brewery (Baron, Chapter 17).

1865-1920: Brewing Emerges as a Significant Industry

While there were several hundred small-scale, local breweries in the 1840s and 1850s, beer did not become a mass-produced, mass-consumed beverage until the decades following the Civil War. Several factors contributed to beer’s emergence as the nation’s dominant alcoholic drink. First, widespread immigration from strong beer-drinking countries such as Britain, Ireland, and Germany contributed to the creation of a beer culture in the United States. Second, America was becoming increasingly industrialized and urbanized during these years, and many workers in the manufacturing and mining sectors drank beer during and after work. Third, many workers began to receive higher wages and salaries during these years, enabling them to buy more beer. Fourth, beer benefited from members of the temperance movement who advocated lower-alcohol beer over higher-alcohol spirits such as rum or whiskey.[2] Fifth, a series of technological and scientific developments fostered greater beer production and the brewing of new styles of beer. For example, artificial refrigeration enabled brewers to brew during warm American summers, and pasteurization, the eponymous procedure developed by Louis Pasteur, helped extend packaged beer’s shelf life, making storage and transportation more reliable (Stack, 2000). Finally, American brewers began brewing lager beer, a style that had long been popular in Germany and other continental European countries.

Traditionally, beer in America meant British-style ale. Ales are brewed with top-fermenting yeasts, and this category ranges from light pale ales to chocolate-colored stouts and porters. During the 1840s, American brewers began making German-style lager beers. In addition to requiring a longer maturation period than ales, lager beers use a bottom-fermenting yeast and are much more temperature sensitive. Lagers require a great deal of care and attention from brewers, but to the increasing numbers of nineteenth-century German immigrants, lager was synonymous with beer. As the nineteenth century wore on, lager production soared, and by 1900, lager outsold ale by a significant margin.

Together, these factors helped transform the market for beer. Total beer production increased from 3.6 million barrels in 1865 to over 66 million barrels in 1914. By 1910, brewing had grown into one of the leading manufacturing industries in America. Yet, this increase in output did not simply reflect America’s growing population. While the number of beer drinkers certainly did rise during these years, perhaps just as importantly, per capita consumption also rose dramatically, from under four gallons in 1865 to 21 gallons in the early 1910s.

Table 1: Industry Production and per Capita Consumption, 1865-1915

Year National Production (millions of barrels) Per Capita Consumption (gallons)
1865 3.7 3.4
1870 6.6 5.3
1875 9.5 6.6
1880 13.3 8.2
1885 19.2 10.5
1890 27.6 13.6
1895 33.6 15.0
1900 39.5 16.0
1905 49.5 18.3
1910 59.6 20.0
1915 59.8 18.7

Source: United States Brewers Association, 1979 Brewers Almanac, Washington, DC: 12-13.

An equally impressive transformation was underway at the level of the firm. Until the 1870s and 1880s, American breweries had been essentially small scale, local operations. By the late nineteenth century, several companies began to increase their scale of production and scope of distribution. Pabst Brewing Company in Milwaukee and Anheuser-Busch in St. Louis became two of the nation’s first nationally-oriented breweries, and the first to surpass annual production levels of one million barrels. By utilizing the growing railroad system to distribute significant amounts of their beer into distant beer markets, Pabst, Anheuser-Busch and a handful of other enterprises came to be called “shipping” breweries. Though these firms became very powerful, they did not control the pre-Prohibition market for beer. Rather, an equilibrium emerged that pitted large and regional shipping breweries that incorporated the latest innovations in pasteurizing, bottling, and transporting beer against a great number of locally-oriented breweries that mainly supplied draught beer in wooden kegs to their immediate markets (Stack, 2000).

Table 2: Industry Production, the Number of Breweries, and Average Brewery Size

1865-1915

Year National Production (millions of barrels) Number of Breweries Average Brewery Size (barrels)
1865 3.7 2,252 1,643
1870 6.6 3,286 2,009
1875 9.5 2,783 3,414
1880 13.3 2,741 4,852
1885 19.2 2,230 8,610
1890 27.6 2,156 12,801
1895 33.6 1,771 18,972
1900 39.5 1,816 21,751
1905 49.5 1,847 26,800
1910 59.6 1,568 38,010
1915 59.8 1,345 44,461

Source: United States Brewers Association, 1979 Brewers Almanac, Washington DC: 12-13.
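
The “Average Brewery Size” column in Tables 2 and 3 is derived arithmetic: national production divided by the number of breweries. The short Python sketch below is not part of the original article; it simply reproduces a few of Table 2’s values to make the unit, barrels per brewery, explicit.

```python
# A minimal sketch (not from the original source) checking the arithmetic behind
# Table 2: average brewery size is national production divided by the number of
# breweries. Production figures below are in millions of barrels.
table_2 = {
    1865: (3.7, 2252),
    1890: (27.6, 2156),
    1915: (59.8, 1345),
}

for year, (production_millions, breweries) in sorted(table_2.items()):
    avg_barrels = production_millions * 1_000_000 / breweries
    print(f"{year}: roughly {avg_barrels:,.0f} barrels per brewery")
# Prints roughly 1,643 (1865), 12,801 (1890), and 44,461 (1915) barrels per brewery,
# matching Table 2 once the column is read in barrels rather than thousands of barrels.
```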

Between the Civil War and national prohibition, the production and consumption of beer greatly outpaced those of spirits. Though consumption levels of absolute alcohol had peaked in the early 1800s, temperance and prohibition forces grew increasingly vocal and active as the century wore on, and by the late 1800s, they constituted one of the best-organized political pressure groups of the day (Kerr, Chapter 5, 1985). Their efforts culminated in the ratification of the Eighteenth Amendment on January 29, 1919, which, along with the Volstead Act, made the production and distribution of any beverage with more than one-half of one percent alcohol illegal. While estimates of alcohol activity during Prohibition’s thirteen-year reign (1920 to 1933) are imprecise, beer consumption almost certainly fell, though spirit consumption may have remained constant or even increased slightly (Rorbaugh, Appendices).

1920-1933: The Dark Years, Prohibition

The most important decision all breweries had to make after 1920 was what to do with their plants and equipment. As they grappled with this question, they made implicit bets as to whether Prohibition would prove to be merely a temporary irritant. Pessimists immediately divested themselves of all their brewing equipment, often at substantial losses. Other firms decided to carry on with related products, and so stay prepared for any modification to the Volstead Act that would allow for beer. Schlitz, Blatz, Pabst, and Anheuser-Busch, the leading pre-Prohibition shippers, began producing near beer, a malt beverage with under one-half of one percent alcohol. While it was not a commercial success, its production allowed these firms to keep their beer-making skills current. Anheuser-Busch called its near beer “Budweiser,” which was “simply the old Budweiser lager beer, brewed according to the traditional method, and then de-alcoholized. … August Busch took the same care in purchasing the costly materials as he had done during pre-prohibition days” (Krebs and Orthwein, 1953, 165). Anheuser-Busch and some of the other leading breweries were granted special licenses by the federal government to brew beer with more than one-half of one percent alcohol for “medicinal purposes” (Plavchan, 1969, 168). Receiving these licenses gave these breweries a competitive advantage, as they were able to keep their brewing staff active in beer-making.

The shippers, and some local breweries, also made malt syrup. While they officially advertised it as an ingredient for baking cookies, and while its production was left alone by the government, it was readily apparent to all that its primary use was for homemade beer.

Of perhaps equal importance to the day-to-day business activities of the breweries were their investment decisions. Here, as in so many other places, the shippers exhibited true entrepreneurial insight. Blatz, Pabst, and Anheuser-Busch all expanded their inventories of automobiles and trucks, which became key assets after repeal. In the 1910s, Anheuser-Busch invested in motorized vehicles to deliver beer; by the 1920s, it was building its own trucks in great numbers. While it never sought to become a major producer of delivery vehicles, its forward expansion in this area reflected its appreciation of the growing importance of motorized delivery, an insight on which it built after repeal.

The leading shippers also furthered their investments in bottling equipment and machinery, which was used in the production of near beer, root beer, ginger ale, and soft drinks. These products were not the commercial successes beer had been, but they gave breweries important experience in bottling. While 85 percent of pre-Prohibition beer was kegged, during Prohibition over 80 percent of near beer and a smaller, though growing, percentage of soft drinks was sold in bottles.

This remarkable increase in packaged product impelled breweries to refine their packaging skills and modify their retailing practice. As they sold near beer and soft drinks to drugstores and drink stands, they encountered new marketing problems (Cochran, 1948, 340). Experience gained during these years helped the shippers meet radically different distribution requirements of the post-repeal beer market.

They were learning about canning as well as bottling. In 1925, Blatz’s canned malt syrup sales were more than $1.3 million, significantly greater than its bulk sales. In the early 1920s, Anheuser-Busch bought the cans for its malt syrup from the American Can Company, a firm that would gain national prominence in 1935 for helping to pioneer the beer can. Thus, the canning of malt syrup helped create the first contacts between the leading shipping brewers and American Can Company (Plavchan, 1969, 178; Conny, 1990, 35-36; and American Can Company, 1969, 7-9).

These expensive investments in automobiles and bottling equipment were paid for in part by selling off branch properties, namely saloons (See Cochran, 1948; Plavchan, 1969; Krebs and Orthwein, 1953). Some had equipped their saloons with furniture and bar fixtures, but as Prohibition wore on, they progressively divested themselves of these assets.

1933-1945: The Industry Reawakens after the Repeal of Prohibition

In April 1933 Congress amended the Volstead Act to allow for 3.2 percent beer. Eight months later, in December, the states ratified the Twenty-first Amendment, officially repealing Prohibition. From repeal until World War II, the brewing industry struggled to regain its pre-Prohibition fortunes. Prior to Prohibition, breweries owned or controlled many saloons, which were the dominant retail outlets for alcohol. To prevent the excesses that had been attributed to saloons from recurring, post-repeal legislation forbade alcohol manufacturers from owning bars or saloons, requiring them instead to sell their beer to wholesalers that in turn would distribute their beverages to retailers.

Prohibition meant the end of many small breweries that had been profitable, and that, taken together, had posed a formidable challenge to the large shipping breweries. The shippers, who had much greater investments, were not as inclined to walk away from brewing.[3] After repeal, therefore, they reopened for business in a radically new environment, one in which their former rivals were absent or disadvantaged. From this favorable starting point, they continued to consolidate their position. Several hundred locally oriented breweries did reopen, but were unable to regain their pre-Prohibition competitive edge, and they quickly exited the market. From 1935 to 1940, the number of breweries fell by ten percent.

Table 3: U.S. Brewing Industry Data, 1910-1940

Year Number of Breweries Number of Barrels Produced (millions) Average Barrelage per Brewery Largest Firm Production (millions of barrels) Per Capita Consumption (gallons)
1910 1,568 59.5 37,946 1.5 20.1
1915 1,345 59.8 44,461 1.1 18.7
1934 756 37.7 49,867 1.1 7.9
1935 766 45.2 59,008 1.1 10.3
1936 739 51.8 70,095 1.3 11.8
1937 754 58.7 77,851 1.8 13.3
1938 700 56.3 80,429 2.1 12.9
1939 672 53.8 80,059 2.3 12.3
1940 684 54.9 80,263 2.5 12.5

Source: Cochran, 1948; Krebs and Orthwein, 1953; and United States Brewers Almanac, 1956.

Annual industry output, after struggling in 1934 and 1935, began to approach the levels reached in the 1910s. Yet these total increases are somewhat misleading, as the population of the U.S. had risen from between 92 and 98 million in the 1910s to between 125 and 130 million in the 1930s (Brewers Almanac, 1956, 10). This translated directly into the lower per capita consumption levels reported in Table 3.

The largest firms grew even larger in the years following repeal, quickly surpassing their pre-Prohibition annual production levels. The post-repeal industry leaders, Anheuser-Busch and Pabst, doubled their annual production levels from 1935 to 1940.

The growing importance of the leading shippers during this period should not be taken for granted: it marked a momentous reversal of pre-Prohibition trends. While medium-sized breweries dominated industry output in the years leading up to Prohibition, the shippers regained in the 1930s the dynamism they had manifested from the 1870s to the 1890s. Table 4 compares the fortunes of the shippers in relation to the industry as a whole. From 1877 to 1895, Anheuser-Busch and Pabst, the two most prominent shippers, grew much faster than the industry, and their successes helped pull the industry along. This picture changed during the years 1895 to 1915, when the industry grew much faster than the shippers (Stack, 2000). With the repeal of Prohibition, the tides changed again: from 1934 to 1940, the brewing industry grew very slowly, while Anheuser-Busch and Pabst enjoyed tremendous increases in their annual sales.

Table 4: Percentage Change in Output among Shipping Breweries, 1877-1940

Period Anheuser-Busch Pabst Industry
1877-1895 1,106% 685% 248%
1895-1914 58% -23% 78%
1934-1940 173% 87% 26%

Source: Cochran, 1948; Krebs and Orthwein, 1953; and Brewers Almanac, 1956.

National and regional shippers increasingly dominated the market. Breweries such as Anheuser-Busch, Pabst and Schlitz came to exemplify the modern business enterprise, as described by Alfred Chandler (Chandler, 1977), which adeptly integrated mass production and mass distribution.

Table 5: Leading Brewery Output Levels, 1938-1940

Brewery Plant Location(s) 1938 (barrels) 1939 (barrels) 1940 (barrels)
Anheuser-Busch St. Louis, MO 2,087,000 2,306,000 2,468,000
Pabst Brewing Milwaukee, WI; Peoria Heights, IL 1,640,000 1,650,000 1,730,000
Jos. Schlitz Milwaukee, WI 1,620,000 1,651,083 1,570,000
F & M Schafer Brooklyn, NY 1,025,000 1,305,000 1,390,200
P. Ballantine Newark, NJ 1,120,000 1,289,425 1,322,346
Jacob Ruppert New York, NY 1,417,000 1,325,350 1,228,400
Falstaff Brewing St. Louis, MO; New Orleans, LA; Omaha, NE 622,000 622,004 684,537
Duquesne Brewing Pittsburgh, PA; Carnegie, PA; McKees Rock, PA 625,000 680,000 690,000
Theo. Hamm Brewing St. Paul, MN 750,000 780,000 694,200
Liebman Breweries Brooklyn, NY 625,000 632,558 670,198

Source: Fein, 1942, 35.

World War I had presented a direct threat to the brewing industry: government officials used wartime emergencies to impose grain rationing, a step that led to the lowering of beer’s alcohol level to 2.75 percent. World War II had a completely different effect on the industry: rather than falling, beer production rose from 1941 to 1945.

Table 6: Production and Per Capita Consumption, 1940-1945

Year Number of Breweries Number of barrels withdrawn (millions) Per Capita Consumption (gallons)
1940 684 54.9 12.5
1941 574 55.2 12.3
1942 530 63.7 14.1
1943 491 71.0 15.8
1944 469 81.7 18.0
1945 468 86.6 18.6

Source: 1979 USBA, 12-14.

During the war, the industry mirrored the nation at large by casting off its sluggish depression-era growth. As the war economy boomed, consumers, both troops and civilians, used some of their wages for beer, and per capita consumption grew by 50 percent between 1940 and 1945.

1945-1980: Following World War II, the Industry Continues to Grow and to Consolidate

Yet the takeoff registered during World War II was not sustained in the ensuing decades. Total production continued to grow, but at a slower rate than the overall population.

Table 7: Production and per Capita Consumption, 1945-1980

Year Number of Breweries Number of barrels withdrawn (millions) Per Capita Consumption (gallons)
1945 468 86.6 18.6
1950 407 88.8 17.2
1955 292 89.8 15.9
1960 229 94.5 15.4
1965 197 108.0 16.0
1970 154 134.7 18.7
1975 117 157.9 21.1
1980 101 188.4 23.1

Source: 1993 USBA, 7-8.

The period following World War II was characterized by great industry consolidation. Total output continued to grow, though per capita consumption fell into the 1960s before rebounding to levels above 21 gallons per capita in the 1970s, among the highest rates in the nation’s history. Not since the 1910s had consumption levels topped 21 gallons a year, but there was a significant difference. Prior to Prohibition, most consumers bought their beer from local or regional firms, and over 85 percent of the beer was served from casks in saloons. Following World War II, two significant changes radically altered the market for beer. First, the total number of breweries operating fell dramatically. This signaled the growing importance of the large national breweries. While many of these firms (Anheuser-Busch, Pabst, Schlitz, and Blatz) had grown into prominence in the late nineteenth century, the scale of their operations grew tremendously in the years after the repeal of Prohibition. From the mid-1940s to 1980, the five largest breweries saw their share of the national market grow from 19 to 75 percent (Adams, 125).

Table 8: Concentration of the Brewing Industry, 1947-1981

Year Five Largest (%) Ten Largest (%) Herfindahl Index[4]
1947 19.0 28.2 140
1954 24.9 38.3 240
1958 28.5 45.2 310
1964 39.0 58.2 440
1968 47.6 63.2 690
1974 64.0 80.8 1080
1978 74.3 92.3 1292
1981 75.9 93.9 1614

Source: Adams, 1995, 125.
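
Footnote [4] defines the Herfindahl index used in Table 8 as the sum of the squared market shares of the largest firms. The sketch below only illustrates that formula; the market shares in it are hypothetical and are not drawn from Adams’s underlying data.

```python
# A minimal sketch of the Herfindahl index reported in Table 8 (see footnote [4]):
# the sum of squared market shares, with shares expressed in percentage points.
# The shares below are hypothetical illustrations, not Adams's underlying firm data.
def herfindahl(shares_percent):
    """Sum of squared market shares (shares given in percent)."""
    return sum(share ** 2 for share in shares_percent)

hypothetical_shares = [28, 20, 14, 8, 6, 5, 4, 3, 2, 2]  # ten largest firms, made up
print(herfindahl(hypothetical_shares))  # 1538, the same order of magnitude as 1981's 1614
```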

The other important change concerned how beer was sold. Prior to Prohibition, nearly all beer was sold on tap in bars or saloons; while approximately 10-15 percent of the beer was bottled, it was much more expensive than draught beer. In 1935, a few years after repeal, the American Can Company successfully canned beer for the first time. The spread of home refrigeration helped spur consumer demand for canned and bottled beer, and from 1935 onward, draught beer’s share of sales fell markedly.

Table 9: Packaged vs. Draught Sales, 1935-1980

Year Packaged sales (bottled and canned) as a percentage of total sales Draught sales as a percentage of total sales
1935 30 70
1940 52 48
1945 64 36
1950 72 28
1955 78 22
1960 81 19
1965 82 18
1970 86 14
1975 88 12
1980 88 12

Source: 1979 USBA, 20; 1993 USBA, 14.

The rise of packaged beer contributed to the growing industry consolidation detailed in Table 8.

1980-2000: Continued Growth, the Microbrewery Movement, and International Dimensions of the Brewing Industry

From 1980 to 2000, beer production continued to rise, reaching nearly 200 million barrels in 2000. Per capita consumption hit its highest recorded level in 1981 with 23.8 gallons. Since then, though, consumption levels have dropped a bit, and during the 1990s, consumption was typically in the 21-22 gallon range.

Table 10: Production and Per Capita Consumption, 1980-1990

Year Number of Breweries Number of barrels withdrawn (millions) Per Capita Consumption (gallons)
1980 101 188.4 23.1
1985 105 193.8 22.7
1990 286 201.7 22.6

Source: 1993 USBA, 7-8.

Beginning around 1980, the long decline in the number of breweries slowed and then was reversed. Judging solely by the number of breweries in operation, it appeared that a significant change had occurred: the number of firms began to increase, and by the late 1990s, hundreds of new breweries were operating in the U.S. However, this number is rather misleading: the overall industry remained very concentrated, with a three-firm concentration ratio of 81 percent in 2000.

Table 11: Production Levels of the Leading Breweries, 2000

Brewery Production (millions of barrels)
Anheuser-Busch 99.2
Miller 39.8
Coors 22.7
Total Domestic Sales 199.4

Source: Beverage Industry, May 2003, 19.
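
The 81 percent three-firm concentration ratio mentioned above follows directly from Table 11: the combined output of Anheuser-Busch, Miller, and Coors divided by total domestic sales. A minimal sketch of that calculation:

```python
# A minimal sketch of the three-firm concentration ratio cited in the text,
# computed directly from Table 11 (output in millions of barrels, 2000).
big_three = {"Anheuser-Busch": 99.2, "Miller": 39.8, "Coors": 22.7}
total_domestic_sales = 199.4

cr3 = sum(big_three.values()) / total_domestic_sales * 100
print(f"Three-firm concentration ratio, 2000: {cr3:.0f} percent")  # roughly 81 percent
```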

Although entrepreneurs and beer enthusiasts began hundreds of new breweries during this period, most of them were very small, with annual production levels of between 5,000 and 100,000 barrels. Reflecting their small size, these new firms were nicknamed microbreweries. Collectively, microbreweries have grown to account for approximately 5-7 percent of the total beer market.

Microbreweries represented a new strategy in the brewing industry: rather than competing on the basis of price or advertising, they attempted to compete on the basis of inherent product characteristics. They emphasized the freshness of locally produced beer; they experimented with much stronger malt and hop flavors; they tried new and long-discarded brewing recipes, often reintroducing styles that had been popular in America decades earlier. Together, these breweries have had an influence much greater than their market share would suggest. The big three breweries, Anheuser Busch, Miller, and Coors, have all tried to incorporate ideas from the microbrewery movement. They have introduced new marquee brands intended to compete for some of this market, and when this failed, they have bought shares in or outright control of some microbreweries.

A final dimension of the brewing industry that has been changing concerns the emerging global market for beer. Until very recently, America was the biggest beer market in the world; as a result, American breweries have not historically looked abroad for additional sales, preferring to expand their share of the domestic market.[5] In the 1980s, Anheuser-Busch began to systematically evaluate its market position. While it had done very well in the U.S., it had not tapped markets overseas; as a result, it began a series of international business dealings. It gradually moved from exporting small amounts of its flagship brand Budweiser to entering into licensing accords whereby breweries in a range of countries such as Ireland, Japan, and Argentina began to brew Budweiser for sale in their domestic markets. In 1995, it established its first breweries outside the U.S., one in England for the European market and the other in China, to service the growing markets in China and East Asia.[6]

While U.S. breweries such as Anheuser-Busch have only recently begun to explore the opportunities abroad, foreign firms have long appreciated the significance of the American market. In the late 1990s, imports began to increase their market share, and by the early 2000s they accounted for approximately 12 percent of the large U.S. market. Imports and microbrews typically cost more than the big three’s beers, and they provide a wider range of flavors and tastes. One of the most interesting developments in the international market for beer occurred in 2002, when South African Breweries (SAB), the dominant brewery in South Africa and an active firm in Europe, acquired Miller, the second-largest brewery in the U.S. Though not widely discussed in the U.S., this may portend a general move toward increased global integration in the world market for beer.

Annotated Bibliography

Adams, Walter and James Brock, editors. The Structure of American Industry, ninth edition. Englewood Cliffs, New Jersey: Prentice Hall, 1995.

Apps, Jerry. Breweries of Wisconsin. Madison, WI: University of Wisconsin Press, 1992. Detailed examination of the history of breweries and brewing in Wisconsin.

Baron, Stanley. Brewed In America: A History of Beer and Ale in the United States. Boston: Little, Brown, and Co, 1962: Very good historical overview of brewing in America, from the Pilgrims through the post-World War II era.

Baum, Dan. Citizen Coors: A Grand Family Saga of Business, Politics, and Beer. New York: Harper Collins, 2000. Very entertaining story of the Coors family and the brewery they made famous.

Beverage Industry (May 2003): 19-20.

Blum, Peter. Brewed In Detroit: Breweries and Beers since 1830. Detroit: Wayne State University Press, 1999. Very good discussion of Detroit’s major breweries and how they evolved. Particularly strong on the Stroh brewery.

Cochran, Thomas. Pabst Brewing Company: The History of an American Business. New York: New York University Press, 1948: A very insightful, well-researched, and well- written history of one of America’s most important breweries. It is strongest on the years leading up to Prohibition.

Downard, William. The Cincinnati Brewing Industry: A Social and Economic History. Ohio University Press, 1973: A good history of brewing in Cincinnati; particularly strong in the years prior to Prohibition.

Downard, William. Dictionary of the History of the American Brewing and Distilling Industries. Westport, CT: Greenwood Press, 1980: Part dictionary and part encyclopedia, a useful compendium of terms, people, and events relating to the brewing and distilling industries.

Duis, Perry. The Saloon: Public Drinking in Chicago and Boston, 1880-1920. Urbana: University of Illinois Press, 1983: An excellent overview of the institution of the saloon in pre-Prohibition America.

Eckhardt, Fred. The Essentials of Beer Style. Portland, OR: Fred Eckhardt Communications, 1995: A helpful introduction into the basics of how beer is made and how beer styles differ.

Ehert, George. Twenty-Five Years of Brewing. New York: Gast Lithograph and Engraving, 1891: An interesting snapshot of an important late nineteenth century New York City brewery.

Elzinga, Kenneth. “The Beer Industry.” In The Structure of American Industry, ninth edition, edited by W. Adams and J. Brock. Englewood Cliffs, New Jersey: Prentice Hall, 1995: A good overview summary of the history, structure, conduct, and performance of America’s brewing industry.

Fein, Edward. “The 25 Leading Brewers in the United States Produce 41.5% of the Nation’s Total Beer Output.” Brewers Digest 17 (October 1942): 35.

Greer, Douglas. “Product Differentiation and Concentration in the Brewing Industry,” Journal of Industrial Economics 29 (1971): 201-19.

Greer, Douglas. “The Causes of Concentration in the Brewing Industry,” Quarterly Review of Economics and Business 21 (1981): 87-106.

Greer, Douglas. “Beer: Causes of Structural Change.” In Industry Studies, second edition, edited by Larry Duetsch, Armonk, New York: M.E. Sharpe, 1998.

Hernon, Peter and Terry Ganey. Under the Influence: The Unauthorized Story of the Anheuser-Busch Dynasty. New York: Simon and Schuster, 1991: Somewhat sensationalistic history of the family that has controlled America’s largest brewery, but some interesting pieces on the brewery are included.

Horowitz, Ira and Ann Horowitz. “Firms in a Declining Market: The Brewing Case.” Journal of Industrial Economics 13 (1965): 129-153.

Jackson, Michael. The New World Guide To Beer. Philadelphia: Running Press, 1988: Good overview of the international world of beer and of America’s place in the international beer market.

Keithan, Charles. The Brewing Industry. Washington D.C: Federal Trade Commission, 1978.

Kerr, K. Austin. Organized for Prohibition. New Haven: Yale Press, 1985: Excellent study of the rise of the Anti-Saloon League in the United States.

Kostka, William. The Pre-prohibition History of Adolph Coors Company: 1873-1933. Golden, CO: self-published book, Adolph Coors Company, 1973: A self-published book by the Coors company that provides some interesting insights into the origins of the Colorado brewery.

Krebs, Roland and Orthwein, Percy. Making Friends Is Our Business: 100 Years of Anheuser-Busch. St. Louis, MO: self-published book, Anheuser-Busch, 1953: A self-published book by the Anheuser-Busch brewery that has some nice illustrations and data on firm output levels. The story is nicely told but rather self-congratulatory.

“Large Brewers Boost Share of U.S. Beer Business,” Brewers Digest, 15 (July 1940): 55-57.

Leisley, Bruce. A History of Leisley Brewing. North Newton Kansas: Mennonite Press, 1975: A short but useful history of the Leisley Brewing Company. This was the author’s undergraduate thesis.

Lender, Mark and James Martin. Drinking in America. New York: The Free Press, 1987: Good overview of the social history of drinking in America.

McGahan, Ann. “The Emergence of the National Brewing Oligopoly: Competition in the American Market, 1933-58.” Business History Review 65 (1991): 229-284: Excellent historical analysis of the origins of the brewing oligopoly following the repeal of Prohibition.

McGahan, Ann. “Cooperation in Prices and Capacities: Trade Associations in Brewing after Repeal.” Journal of Law and Economics 38 (1995): 521-559.

Meier, Gary and Meier, Gloria. Brewed in the Pacific Northwest: A History of Beer Making in Oregon and Washington. Seattle: Fjord Press, 1991: A survey of the history of brewing in the Pacific Northwest.

Miller, Carl. Breweries of Cleveland. Cleveland, OH: Schnitzelbank Press, 1998: Good historical overview of the brewing industry in Cleveland.

Norman, Donald. Structural Change and Performance in the U.S. Brewing Industry. Ph.D. dissertation, UCLA, 1975.

One Hundred Years of Brewing. Chicago and New York: Arno Press Reprint, 1903 (Reprint 1974): A very important work. Very detailed historical discussion of the American brewing industry through the end of the nineteenth century.

Persons, Warren. Beer and Brewing In America: An Economic Study. New York: United Brewers Industrial Foundation, 1940.

Plavchan, Ronald. A History of Anheuser-Busch, 1852-1933. Ph.D. dissertation, St. Louis University, 1969: Apart from Cochran’s analysis of Pabst, one of a very few detailed business histories of a major American brewery.

Research Company of America. A National Survey of the Brewing Industry. Self-published, 1941: A well-researched industry analysis with a wealth of information and data.

Rorbaugh, William. The Alcoholic Republic: An American Tradition. New York: Oxford University Press, 1979: Excellent scholarly overview of drinking habits in America.

Rubin, Jay. “The Wet War: American Liquor, 1941-1945.” In Alcohol, Reform, and Society, edited by J. Blocker. Westport, CT: Greenwood Press, 1979: Interesting discussion of American drinking during World War II.

Salem, Frederick. Beer: Its History and Its Economic Value as a National Beverage. New York: Arno Press, 1880 (Reprint 1972): Early but valuable discussion of the American brewing industry.

Scherer, F.M. Industry Structure, Strategy, and Public Policy. New York: Harper Collins, 1996: A very good essay on the brewing industry.

Shih, Ko Ching and C. Ying Shih. American Brewing Industry and the Beer Market. Brookfield, WI, 1958: Good overview of the industry with some excellent data tables.

Skilnik, Bob. The History of Beer and Brewing in Chicago: 1833-1978. Pogo Press, 1999: Good overview of the history of brewing in Chicago.

Smith, Greg. Beer in America: The Early Years, 1587 to 1840. Boulder, CO: Brewers Publications, 1998: Well written account of beer’s development in America, from the Pilgrims to mid-nineteenth century.

Stack, Martin. “Local and Regional Breweries in America’s Brewing Industry, 1865-1920.” Business History Review 74 (Autumn 2000): 435-63.

Thomann, Gallus. American Beer: Glimpses of Its History and Description of Its Manufacture. New York: United States Brewing Association, 1909: Interesting account of the state of the brewing industry at the turn of the twentieth century.

United States Brewers Association. Annual Year Book, 1909-1921. Very important primary source document published by the leading brewing trade association.

United States Brewers Foundation. Brewers Almanac, published annually, 1941-present: Very important primary source document published by the leading brewing trade association.

Van Wieren, Dale. American Breweries II. West Point, PA: Eastern Coast Brewiana Association, 1995. Comprehensive historical listing of every brewery in every state, arranged by city within each state.


[1] A barrel of beer is 31 gallons. One Hundred Years of Brewing, Chicago and New York: Arno Press Reprint, 1974: 252.

[2] During the nineteenth century, there were often distinctions between temperance advocates, who differentiated between spirits and beer, and prohibition supporters, who campaigned on the need to eliminate all alcohol.

[3] The major shippers may have been taken aback by the loss suffered by Lemp, one of the leading pre-Prohibition shipping breweries. Lemp was sold at auction in 1922 at a loss of 90 percent on the investment (Baron, 1962, 315).

[4] The Herfindahl Index sums the squared market shares of the fifty largest firms.

[5] China overtook the United States as the world’s largest beer market in 2002.

[6] http://www.anheuser-busch.com/over/international.html

Citation: Stack, Martin. “A Concise History of America’s Brewing Industry”. EH.Net Encyclopedia, edited by Robert Whaples. July 4, 2003. URL http://eh.net/encyclopedia/a-concise-history-of-americas-brewing-industry/

The Economic Impact of the Black Death

David Routt, University of Richmond

The Black Death was the largest demographic disaster in European history. From its arrival in Italy in late 1347 through its clockwise movement across the continent to its petering out in the Russian hinterlands in 1353, the magna pestilencia (great pestilence) killed between seventeen and twenty-eight million people. Its gruesome symptoms and deadliness have fixed the Black Death in popular imagination; moreover, uncovering the disease’s cultural, social, and economic impact has engaged generations of scholars. Despite growing understanding of the Black Death’s effects, definitive assessment of its role as historical watershed remains a work in progress.

A Controversy: What Was the Black Death?

In spite of enduring fascination with the Black Death, even the identity of the disease behind the epidemic remains a point of controversy. Aware that fourteenth-century eyewitnesses described a disease more contagious and more deadly than bubonic plague (Yersinia pestis), the bacillus traditionally associated with the Black Death, dissident scholars in the 1970s and 1980s proposed typhus or anthrax or mixes of typhus, anthrax, or bubonic plague as the culprit. The new millennium brought other challenges to the link between the Black Death and bubonic plague, such as an unknown and probably unidentifiable bacillus, an Ebola-like haemorrhagic fever or, at the pseudoscientific fringes of academia, a disease of interstellar origin.

Proponents of the Black Death as bubonic plague have minimized differences between modern bubonic plague and the fourteenth-century plague through painstaking analysis of the Black Death’s movement and behavior and by hypothesizing that the fourteenth-century plague was a hypervirulent strain of bubonic plague, yet bubonic plague nonetheless. DNA analysis of human remains from known Black Death cemeteries was intended to eliminate doubt, but the inability to replicate initially positive results has left uncertainty. The new analytical tools used and new evidence marshaled in this lively controversy have enriched understanding of the Black Death while underscoring the elusiveness of certitude regarding phenomena many centuries past.

The Rate and Structure of Mortality

The Black Death’s socioeconomic impact stemmed, however, from sudden mortality on a staggering scale, regardless of what bacillus caused it. Assessment of the plague’s economic significance begins with determining the rate of mortality for the initial onslaught in 1347-53 and its frequent recurrences for the balance of the Middle Ages, then unraveling how the plague chose victims according to age, sex, affluence, and place.

Imperfect evidence unfortunately hampers efforts to know precisely who and how many perished. Many of the Black Death’s contemporary observers, living in an epoch of famine and political, military, and spiritual turmoil, described the plague apocalyptically. A chronicler famously closed his narrative with blank parchment membranes should anyone survive to continue it. Others believed as few as one in ten survived. One writer claimed that only fourteen people were spared in London. Although sober eyewitnesses offered more plausible figures, in light of the medieval preference for narrative dramatic force over numerical veracity, chroniclers’ estimates are considered evidence of the Black Death’s battering of the medieval psyche, not an accurate barometer of its demographic toll.

Even non-narrative and presumably dispassionate, systematic evidence (legal and governmental documents, ecclesiastical records, commercial archives) presents challenges. No medieval scribe dragged his quill across parchment for the demographer’s pleasure and convenience. With a paucity of censuses, estimates of population and tracing of demographic trends have often relied on indirect indicators of demographic change (e.g., activity in the land market, levels of rents and wages, size of peasant holdings) or evidence treating only a segment of the population (e.g., assignment of new priests to vacant churches, payments by peasants to take over holdings of the deceased). Even the rare census-like record, such as England’s Domesday Book (1086) or the Poll Tax Returns (1377), either enumerates only heads of households, excludes slices of the populace, ignores regions, or does some combination of all these. To compensate for these imperfections, the demographer relies on potentially debatable assumptions about the size of the medieval household, the representativeness of a discrete group of people, the density of settlement in an undocumented region, the level of tax evasion, and so forth.

A bewildering array of estimates for mortality from the plague of 1347-53 is the result. The first outbreak of the Black Death indisputably was the deadliest, but the death rate varied widely according to place and social stratum. National estimates of mortality for England, where the evidence is fullest, range from five percent, to 23.6 percent among aristocrats holding land from the king, to forty to forty-five percent of the kingdom’s clergy, to over sixty percent in a recent estimate. The picture for the continent likewise is varied. Regional mortality in Languedoc (France) was forty to fifty percent, while sixty to eighty percent of Tuscans (Italy) perished. Urban death rates were mostly higher but no less disparate, e.g., half in Orvieto (Italy), Siena (Italy), and Volterra (Italy), fifty to sixty-six percent in Hamburg (Germany), fifty-eight to sixty-eight percent in Perpignan (France), sixty percent for Barcelona’s (Spain) clerical population, and seventy percent in Bremen (Germany). The Black Death was often highly arbitrary in how it killed in a narrow locale, which no doubt broadened the spectrum of mortality rates. Two of Durham Cathedral Priory’s manors, for instance, had respective death rates of twenty-one and seventy-eight percent (Shrewsbury, 1970; Russell, 1948; Waugh, 1991; Ziegler, 1969; Benedictow, 2004; Le Roy Ladurie, 1976; Bowsky, 1964; Pounds, 1974; Emery, 1967; Gyug, 1983; Aberth, 1995; Lomas, 1989).

Credible death rates between one quarter and three quarters complicate reaching a Europe-wide figure. Neither a casual and unscientific averaging of available estimates to arrive at a probably misleading composite death rate nor a timid placing of mortality somewhere between one and two thirds is especially illuminating. Scholars confronting the problem’s complexity before venturing estimates once favored one third as a reasonable aggregate death rate. Since the early 1970s, demographers have found higher levels of mortality plausible, and a European mortality rate of one half is now considered defensible, a figure not too distant from less fanciful contemporary observations.

While the Black Death of 1347-53 inflicted demographic carnage, had it been an isolated event, European population might have recovered to its former level in a generation or two and its economic impact would have been moderate. The disease’s long-term demographic and socioeconomic legacy arose from its recurrence. When both national and local epidemics are taken into account, England endured thirty plague years between 1351 and 1485, a pattern mirrored on the continent, where Perugia was struck nineteen times and Hamburg, Cologne, and Nuremberg at least ten times each in the fifteenth century. The deadliness of outbreaks declined (perhaps ten to twenty percent in the second plague, the pestis secunda of 1361-62, ten to fifteen percent in the third plague, the pestis tertia of 1369, and as low as five and rarely above ten percent thereafter) and outbreaks became more localized; however, the Black Death’s persistence ensured that demographic recovery would be slow and the socioeconomic consequences deeper. Europe’s population in 1430 may have been fifty to seventy-five percent lower than in 1290 (Cipolla, 1994; Gottfried, 1983).

Enumeration of corpses does not adequately reflect the Black Death’s demographic impact. Who perished was as significant as how many; in other words, the structure of mortality influenced the time and rate of demographic recovery. The plague’s preference for urbanite over peasant, man over woman, poor over affluent, and, perhaps most significantly, young over mature shaped its demographic toll. Eyewitnesses so universally reported disproportionate death among the young in the plague’s initial recurrence (1361-62) that it became known as the Children’s Plague (pestis puerorum, mortalité des enfants). If this preference for youth reflected natural resistance to the disease among plague survivors, the Black Death may have ultimately resembled a lower-mortality childhood disease, a reality that magnified both its demographic and psychological impact.

The Black Death pushed Europe into a long-term demographic trough. Notwithstanding anecdotal reports of nearly universal pregnancy of women in the wake of the magna pestilencia, demographic stagnancy characterized the rest of the Middle Ages. Population growth recommenced at different times in different places, but rarely earlier than the second half of the fifteenth century and in many places not until c. 1550.

The European Economy on the Cusp of the Black Death

Like the plague’s death toll, its socioeconomic impact resists categorical measurement. The Black Death’s timing made a facile labeling of it as a watershed in European economic history nearly inevitable. It arrived near the close of an ebullient high Middle Ages (c. 1000 to c. 1300) in which urban life reemerged, long-distance commerce revived, business and manufacturing innovated, manorial agriculture matured, and population burgeoned, doubling or tripling. The Black Death simultaneously portended an economically stagnant, depressed late Middle Ages (c. 1300 to c. 1500). However, even if this simplistic and somewhat misleading portrait of the medieval economy is accepted, isolating the Black Death’s economic impact from manifold factors at play is a daunting challenge.

Cognizant of a qualitative difference between the high and late Middle Ages, students of the medieval economy have offered varied explanations, some mutually exclusive, others not, some favoring the less dramatic, less visible, yet inexorable factor as an agent of change rather than a catastrophic demographic shift. For some, a cooling climate undercut agricultural productivity, a downturn that rippled throughout the predominantly agrarian economy. For others, exploitative political, social, and economic institutions enriched an idle elite and deprived working society of the wherewithal and incentive to be innovative and productive. Yet others associate monetary factors with the fourteenth- and fifteenth-century economic doldrums.

The particular concerns of the twentieth century unsurprisingly induced some scholars to view the medieval economy through a Malthusian lens. In this reconstruction of the Middle Ages, population growth pressed against the society’s ability to feed itself by the mid-thirteenth century. Rising impoverishment and contracting holdings compelled the peasant to cultivate inferior, low-fertility land and to convert pasture to arable production, which inevitably reduced the number of livestock and made manure for fertilizer scarcer. These strategies boosted gross productivity in the immediate term yet drove grain yields downward in the longer term, exacerbating the imbalance between population and food supply; redressing the imbalance became inevitable. This idea’s adherents see signs of demographic correction from the mid-thirteenth century onward, possibly arising in part from marriage practices that reduced fertility. A more potent correction came with subsistence crises. Miserable weather in 1315 destroyed crops, and the ensuing Great Famine (1315-22) reduced northern Europe’s population by perhaps ten to fifteen percent. Poor harvests, moreover, bedeviled England and Italy to the eve of the Black Death.

These factors (climate, imperfect institutions, monetary imbalances, overpopulation) diminish the Black Death’s role as a transformative socioeconomic event. In other words, socioeconomic changes already driven by other causes would have occurred anyway, merely more slowly, had the plague never struck Europe. This conviction fosters receptiveness to lower estimates of the Black Death’s deadliness. Recent scrutiny of the Malthusian analysis, especially studies of agriculture in source-rich eastern England, has, however, rehabilitated the Black Death as an agent of socioeconomic change. Growing awareness of the use of “progressive” agricultural techniques and of alternative, non-grain economies less susceptible to a Malthusian population-versus-resources dynamic has undercut the notion of an absolutely overpopulated Europe and has encouraged acceptance of higher rates of mortality from the plague (Campbell, 1983; Bailey, 1989).

The Black Death and the Agrarian Economy

The lion’s share of the Black Death’s effect was felt in the economy’s agricultural sector, unsurprising in a society in which, except in the most urbanized regions, nine of ten people eked out a living from the soil.

A village struck by the plague underwent a profound though brief disordering of the rhythm of daily life. Strong administrative and social structures, the power of custom, and innate human resiliency restored the village’s routine by the following year in most cases: fields were plowed; crops were sown, tended, and harvested; labor services were performed by the peasantry; the village’s lord collected dues from tenants. Behind this seeming normalcy, however, lord and peasant were adjusting to the Black Death’s principal economic consequence: a much smaller agricultural labor pool. Before the plague, rising population had kept wages low and rents and prices high, an economic reality advantageous to the lord in dealing with the peasant and inclining many a peasant to cleave to demeaning yet secure dependent tenure.

As the Black Death swung the balance in the peasant’s favor, the literate elite bemoaned a disintegrating social and economic order. William of Dene, William Langland, John Gower, and others polemically evoked nostalgia for the peasant who knew his place, worked hard, demanded little, and squelched pride, while condemning their present, in which land lay unplowed and only an immediate pang of hunger goaded a lazy, disrespectful, grasping peasant to do a moment’s desultory work (Hatcher, 1994).

Moralizing exaggeration aside, the rural worker indeed demanded and received higher payments in cash (nominal wages) in the plague’s aftermath. Wages in England rose by twelve to twenty-eight percent from the 1340s to the 1350s and by twenty to forty percent from the 1340s to the 1360s. Immediate hikes were sometimes more drastic. During the plague year (1348-49) at Fornham All Saints (Suffolk), the lord paid the pre-plague rate of 3d. per acre for more than half of the hired reaping, but the rest cost 5d., an increase of 67 percent. The reaper, moreover, enjoyed more and larger tips in cash and perquisites in kind to supplement the wage. At Cuxham (Oxfordshire), a plowman making 2s. weekly before the plague demanded 3s. in 1349 and 10s. in 1350 (Farmer, 1988; Farmer, 1991; West Suffolk Record Office 3/15.7/2.4; Harvey, 1965).

In some instances, the initial hikes in nominal or cash wages subsided in the years further out from the plague, and any benefit they conferred on the wage laborer was for a time undercut by another economic change fostered by the plague. Grave mortality ensured that the European supply of currency in gold and silver increased on a per capita basis, which in turn unleashed substantial inflation in prices that did not subside in England until the mid-1370s and even later in many places on the continent. The inflation reduced the purchasing power (real wage) of the wage laborer so significantly that, even with higher cash wages, his earnings either bought him no more or often substantially less than before the magna pestilencia (Munro, 2003; Aberth, 2001).
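
The mechanism described in the preceding paragraph can be stated compactly: the real wage is the nominal wage deflated by a price index, so a price level rising faster than cash wages lowers purchasing power. The sketch below is only an illustration using invented numbers; it is not a reconstruction of the wage or price series in the works cited.

```python
# A purely illustrative sketch of the real-wage argument above: the real wage is the
# nominal (cash) wage deflated by a price index, so if prices rise faster than cash
# wages, purchasing power falls even though the cash wage has gone up. The numbers
# here are invented for the example, not estimates from Munro (2003) or Aberth (2001).
def real_wage(nominal_wage, price_index, base_index=100.0):
    return nominal_wage * base_index / price_index

pre_plague = real_wage(nominal_wage=100, price_index=100)   # baseline purchasing power
post_plague = real_wage(nominal_wage=125, price_index=150)  # cash wage +25%, prices +50%
print(pre_plague, round(post_plague, 1))  # 100.0 versus roughly 83.3
```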

The lord, however, was confronted not only by the roving wage laborer on whom he relied for occasional and labor-intensive seasonal tasks but also by the peasant bound to the soil who exchanged customary labor services, rent, and dues for holding land from the lord. A pool of labor services greatly reduced by the Black Death enabled the servile peasant to bargain for less onerous responsibilities and better conditions. At Tivetshall (Norfolk), vacant holdings deprived its lord of sixty percent of his week-work and all his winnowing services by 1350-51. A fifth of winter and summer week-work and a third of reaping services vanished at Redgrave (Suffolk) in 1349-50 due to the magna pestilencia. If a lord did not make concessions, a peasant often gravitated toward any better circumstance beckoning elsewhere. At Redgrave, for instance, the loss of services in 1349-50 directly due to the plague was followed in 1350-51 by an equally damaging wave of holdings abandoned by surviving tenants. For the medieval peasant, never so tightly bound to the manor as once imagined, the Black Death nonetheless fostered far greater rural mobility. Beyond loss of labor services, the deceased or absentee peasant paid no rent or dues and rendered no fees for use of manorial monopolies such as mills and ovens, and the lord’s revenues shrank. The income of English lords contracted by twenty percent from 1347 to 1353 (Norfolk Record Office WAL 1247/288×1; University of Chicago Bacon 335-6; Gottfried, 1983).

Faced with these disorienting circumstances, the lord often ultimately had to decide how or even whether the pre-plague status quo could be reestablished on his estate. Not capitalistic in the sense of maximizing productivity for reinvestment of profits to enjoy yet more lucrative future returns, the medieval lord nonetheless valued stable income sufficient for aristocratic ostentation and consumption. A recalcitrant peasantry, diminished dues and services, and climbing wages undermined the material foundation of the noble lifestyle, jostled the aristocratic sense of proper social hierarchy, and invited a response.

In exceptional circumstances, a lord sometimes kept the peasant bound to the land. Because the nobility in Spanish Catalonia had already tightened control of the peasantry before the Black Death, because underdeveloped commercial agriculture provided the peasantry narrow options, and because the labor—intensive demesne agriculture common elsewhere was largely absent, the Catalan lord through a mix of coercion (physical intimidation, exorbitant fees to purchase freedom) and concession (reduced rents, conversion of servile dues to less humiliating fixed cash payments) kept the Catalan peasant in place. In England and elsewhere on the continent, where labor services were needed to till the demesne, such a conservative approach was less feasible. This, however, did not deter some lords from trying. The lord of Halesowen (Worcestershire) not only commanded the servile tenant to perform the full range of services but also resuscitated labor obligations in abeyance long before the Black Death, tantamount to an unwillingness to acknowledge anything had changed (Freedman, 1991; Razi, 1981).

Europe’s political elite also looked to legal coercion not only to contain rising wages and to limit the peasant’s mobility but also to allay a sense of disquietude and disorientation arising from the Black Death’s buffeting of pre—plague social realities. England’s Ordinance of Laborers (1349) and Statute of Laborers (1351) called for a return to the wages and terms of employment of 1346. Labor legislation was likewise promulgated by the Córtes of Aragon and Castile, the French crown, and cities such as Siena, Orvieto, Pisa, Florence, and Ragusa. The futility of capping wages by legislative fiat is evident in the French crown’s 1351 revision of its 1349 enactment to permit a wage increase of one third. Perhaps only in England, where effective government permitted robust enforcement, did the law slow wage increases for a time (Aberth, 2001; Gottfried, 1983; Hunt and Murray, 1999; Cohn, 2007).

Once knee-jerk conservatism and legislative palliatives failed to revivify pre-plague socioeconomic arrangements, the lord cast about for a modus vivendi in a new world of abundant land and scarce labor. A sober triage of the available sources of labor, whether casual wage labor, a manor’s permanent stipendiary staff (famuli), or the dependent peasant, led to revision of managerial policy. The abbot of Saint Edmund’s, for example, focused on reconstitution of the permanent staff on his manors. Despite mortality and flight, the abbot by and large achieved his goal by the mid-1350s. While labor legislation may have facilitated this, the abbot’s provision of more frequent and lucrative seasonal rewards, coupled with the payment of grain stipends in more valuable and marketable cereals such as wheat, no doubt helped secure the loyalty of famuli while circumventing statutory limits on higher wages. With this core of labor solidified, the focus turned to preserving the most essential labor services, especially those associated with the labor-intensive harvesting season. Less vital labor services were commuted for cash payments, and ad hoc wage labor was then hired to fill gaps. The cultivation of the demesne continued, though not on the pre-plague scale.

For a time, in fact, circumstances helped the lord continue direct management of the demesne. The general inflation of the quarter-century following the plague, as well as poor harvests in the 1350s and 1360s, boosted grain prices and partially compensated for more expensive labor. This so-called “Indian summer” of demesne agriculture ended quickly in the mid-1370s in England and subsequently on the continent, when the post-plague inflation gave way to deflation and abundant harvests drove prices for commodities downward, where they remained, aside from brief intervals of inflation, for the rest of the Middle Ages. Recurrences of the plague, moreover, placed further stress on new managerial policies. For the lord who successfully persuaded new tenants to take over vacant holdings, as happened at Chevington (Suffolk) by the late 1350s, the pestis secunda of 1361-62 often inflicted a decisive blow: a second recovery at Chevington never materialized (West Suffolk Records Office 3/15.3/2.9-2.23).

Under unremitting pressure, the traditional cultivation of the demesne ceased to be viable for lord after lord: a centuries—old manorial system gradually unraveled and the nature of agriculture was transformed. The lord’s earliest concession to this new reality was curtailment of cultivated acreage, a trend that accelerated with time. The 590.5 acres sown on average at Great Saxham (Suffolk) in the late 1330s was more than halved (288.67 acres) in the 1360s, for instance (West Suffolk Record Office, 3/15.14/1.1, 1.7, 1.8).

Beyond reducing the demesne to a size commensurate with available labor, the lord could explore types of husbandry less labor—intensive than traditional grain agriculture. Greater domestic manufacture of woolen cloth and growing demand for meat enabled many English lords to reduce arable production in favor of sheep—raising, which required far less labor. Livestock husbandry likewise became more significant on the continent. Suitable climate, soil, and markets made grapes, olives, apples, pears, vegetables, hops, hemp, flax, silk, and dye—stuffs attractive alternatives to grain. In hope of selling these cash crops, rural agriculture became more attuned to urban demand and urban businessmen and investors more intimately involved in what and how much of it was grown in the countryside (Gottfried, 1983; Hunt and Murray, 1999).

The lord also looked to reduce losses from demesne acreage no longer under the plow and from the vacant holdings of onetime tenants. Measures adopted to achieve this end initiated a process that gained momentum with each passing year until the face of the countryside was transformed and manorialism was dead. The English landlord, hopeful for a return to the pre—plague regime, initially granted brief terminal leases of four to six years at fixed rates for bits of demesne and for vacant dependent holdings. Leases over time lengthened to ten, twenty, thirty years, or even a lifetime. In France and Italy, the lord often resorted to métayage or mezzadria leasing, a type of sharecropping in which the lord contributed capital (land, seed, tools, plow teams) to the lessee, who did the work and surrendered a fraction of the harvest to the lord.

Disillusioned by growing obstacles to profitable cultivation of the demesne, the lord, especially in the late fourteenth century and the early fifteenth, adopted a more sweeping type of leasing, the placing of the demesne or even the entire manor “at farm” (ad firmam). A “farmer” (firmarius) paid the lord a fixed annual “farm” (firma) for the right to exploit the lord’s property and take whatever profit he could. The distant or unprofitable manor was usually “farmed” first and other manors followed until a lord’s personal management of his property often ceased entirely. The rising popularity of this expedient made direct management of demesne by lord rare by c. 1425. The lord often became a rentier bound to a fixed income. The tenurial transformation was completed when the lord sold to the peasant his right of lordship, a surrender to the peasant of outright possession of his holding for a fixed cash rent and freedom from dues and services. Manorialism, in effect, collapsed and was gone from western and central Europe by 1500.

The landlord’s discomfort ultimately benefited the peasantry. Lower prices for foodstuffs and greater purchasing power from the last quarter of the fourteenth century onward, progressive disintegration of demesnes, and waning customary land tenure enabled the enterprising, ambitious peasant to lease or purchase property and become a substantial landed proprietor. The average size of the peasant holding grew in the late Middle Ages. Due to the peasant’s generally improved standard of living, the century and a half following the magna pestilencia has been labeled a “golden age” in which the most successful peasant became a “yeoman” or “kulak” within the village community. Freed from labor service, holding a fixed copyhold lease, and enjoying greater disposable income, the peasant exploited his land exclusively for his personal benefit and often pursued leisure and some of the finer things in life. Consumption of meat by England’s humbler social strata rose substantially after the Black Death, a shift in consumer tastes that reduced demand for grain and helped make viable the shift toward pastoralism in the countryside. Late medieval sumptuary legislation, intended to keep the humble from dressing above his station and retain the distinction between low— and highborn, attests both to the peasant’s greater income and the desire of the elite to limit disorienting social change (Dyer, 1989; Gottfried, 1983; Hunt and Murray, 1999).

The Black Death, moreover, profoundly altered the contours of settlement in the countryside. Catastrophic loss of population led to abandonment of less attractive fields, contraction of existing settlements, and even wholesale desertion of villages. More than 1300 English villages vanished between 1350 and 1500. French and Dutch villagers abandoned isolated farmsteads and huddled in smaller villages while their Italian counterparts vacated remote settlements and shunned less desirable fields. The German countryside was mottled with abandoned settlements. Two thirds of named villages disappeared in Thuringia, Anhalt, and the eastern Harz mountains, one fifth in southwestern Germany, and one third in the Rhenish palatinate, abandonment far exceeding loss of population and possibly arising from migration from smaller to larger villages (Gottfried, 1983; Pounds, 1974).

The Black Death and the Commercial Economy

As with agriculture, assessment of the Black Death’s impact on the economy’s commercial sector is a complex problem. The vibrancy of the high medieval economy is generally conceded. As the first millennium gave way to the second, urban life revived, trade and manufacturing flourished, merchant and craft gilds emerged, commercial and financial innovations proliferated (e.g., partnerships, maritime insurance, double—entry bookkeeping, fair letters, letters of credit, bills of exchange, loan contracts, merchant banking, etc.). The integration of the high medieval economy reached its zenith c. 1250 to c. 1325 with the rise of large companies with international interests, such as the Bonsignori of Siena and the Buonaccorsi of Florence and the emergence of so—called “super companies” such as the Florentine Bardi, Peruzzi, and Acciaiuoli (Hunt and Murray, 1999).

How to characterize the late medieval economy has been more fraught with controversy, however. Historians a century past, uncomprehending of how their modern world could be rooted in a retrograde economy, imagined an entrepreneurially creative and expansive late medieval economy. Succeeding generations of historians darkened this optimistic portrait and fashioned a late Middle Ages of unmitigated decline, an “age of adversity” in which the economy was placed under the rubric “depression of the late Middle Ages.” The historiographical pendulum now swings away from this interpretation and a more nuanced picture has emerged that gives the Black Death’s impact on commerce its full due but emphasizes the variety of the plague’s impact from merchant to merchant, industry to industry, and city to city. Success or failure was equally possible after the Black Death and the game favored adaptability, creativity, nimbleness, opportunism, and foresight.

Once the magna pestilencia had passed, the city had to cope with a labor supply even more greatly decimated than in the countryside due to a generally higher urban death rate. The city, however, could reverse some of this damage by attracting, as it had for centuries, new workers from the countryside, a phenomenon that deepened the crisis for the manorial lord and contributed to changes in rural settlement. A resurgence of the slave trade occurred in the Mediterranean, especially in Italy, where the female slave from Asia or Africa entered domestic service in the city and the male slave toiled in the countryside. Finding more labor was not, however, a panacea. A peasant or slave performed an unskilled task adequately but could not necessarily replace a skilled laborer. The gross loss of talent due to the plague caused a decline in per capita productivity by skilled labor remediable only by time and training (Hunt and Murray, 1999; Miskimin, 1975).

Another immediate consequence of the Black Death was dislocation of the demand for goods. A suddenly and sharply smaller population ensured a glut of manufactured and trade goods, whose prices plummeted for a time. The businessman who successfully weathered this short—term imbalance in supply and demand then had to reshape his business’ output to fit a declining or at best stagnant pool of potential customers.

The Black Death transformed the structure of demand as well. While the standard of living of the peasant improved, chronically low prices for grain and other agricultural products from the late fourteenth century may have deprived the peasant of the additional income to purchase enough manufactured or trade items to fill the hole in commercial demand. In the city, however, the plague concentrated wealth, often substantial family fortunes, in fewer and often younger hands, a circumstance that, when coupled with lower prices for grain, left greater per capita disposable income. The plague’s psychological impact, moreover, it is believed, influenced how this windfall was used. Pessimism and the specter of death spurred an individualistic pursuit of pleasure, a hedonism that manifested itself in the purchase of luxuries, especially in Italy. Even with a reduced population, the gross volume of luxury goods manufactured and sold rose, a pattern of consumption that endured even after the extra income had been spent within a generation or so after the magna pestilencia.

Like the manorial lord, the affluent urban bourgeois sometimes employed structural impediments to block the ambitious parvenu from joining his ranks and becoming a competitor. A tendency toward limiting the status of gild master to the son or son—in—law of a sitting master, evident in the first half of the fourteenth century, gained further impetus after the Black Death. The need for more journeymen after the plague was conceded in the shortening of terms of apprenticeship, but the newly minted journeyman often discovered that his chance of breaking through the glass ceiling and becoming a master was virtually nil without an entrée through kinship. Women also were banished from gilds as unwanted competition. The urban wage laborer, by and large controlled by the gilds, was denied membership and had no access to urban structures of power, a potent source of frustration. While these measures may have permitted the bourgeois to hold his ground for a time, the winds of change were blowing in the city as well as the countryside and gild monopolies and gild restrictions were fraying by the close of the Middle Ages.

In the new climate created by the Black Death, the individual businessman did retain an advantage: the business judgment and techniques honed during the high Middle Ages. This was crucial in a contracting economy in which gross productivity never attained its high medieval peak and in which the prevailing pattern was boom and bust on a roughly generational basis. A fluctuating economy demanded adaptability and the most successful post—plague businessman not merely weathered bad times but located opportunities within adversity and exploited them. The post—plague entrepreneur’s preference for short—term rather than long—term ventures, once believed a product of a gloomy despondency caused by the plague and exacerbated by endemic violence, decay of traditional institutions, and nearly continuous warfare, is now viewed as a judicious desire to leave open entrepreneurial options, to manage risk effectively, and to take advantage of whatever better opportunity arose. The successful post—plague businessman observed markets closely and responded to them while exercising strict control over his concern, looking for greater efficiency, and trimming costs (Hunt and Murray, 1999).

The fortunes of the textile industry, a trade singularly susceptible to contracting markets and rising wages, best underscore the importance of flexibility. Competition among textile manufacturers, already great even before the Black Death due to excess productive capacity, was magnified when England entered the market for low- and medium-quality woolen cloth after the magna pestilencia and was exporting forty thousand pieces annually by 1400. The English took advantage of proximity to raw material, the wool England itself produced, a pattern increasingly common in late medieval business. When English producers were undeterred by a Flemish embargo on English cloth, the Flemish and Italians, the textile trade’s other principal players, were compelled to adapt in order to compete. Flemish producers that emphasized higher-grade, luxury textiles or that purchased, improved, and resold cheaper English cloth prospered, while those that stubbornly competed head-to-head with the English in lower-quality woolens suffered. The Italians not only produced luxury woolens, improved their domestically produced wool, found sources for wool outside England (Spain), and increased production of linen but also produced silks and cottons, once only imported into Europe from the East (Hunt and Murray, 1999).

The new mentality of the successful post-plague businessman is exemplified by the Florentines Gregorio Dati and Buonaccorso Pitti and especially by the celebrated merchant of Prato, Francesco di Marco Datini. The large companies and super companies, some of which failed even before the Black Death, were not well suited to the post-plague commercial economy. Datini’s family business, with its limited geographical ambitions, exercised better control, was more nimble and flexible as opportunities vanished or materialized, and managed risk more effectively, all keys to success. Through voluminous correspondence with his business associates, subordinates, and agents, and through conspicuously careful and regular accounting, Datini grasped the reins of his concern tightly. He insulated himself from undue risk by never committing too heavily to any individual venture, by dividing cargoes among ships or by insuring them, by never lending money to notoriously uncreditworthy princes, and by remaining as apolitical as he could. His energy and drive to complete every business venture likewise served him well and made him an exemplar for commercial success in a challenging era (Origo, 1957; Hunt and Murray, 1999).

The Black Death and Popular Rebellion

The late medieval popular uprising, a phenomenon with undeniable economic ramifications, is often linked with the demographic, cultural, social, and economic reshuffling caused by the Black Death; however, the connection between pestilence and revolt is neither exclusive nor linear. Any single uprising is rarely susceptible to a single—cause analysis and just as rarely was a single socioeconomic interest group the fomenter of disorder. The outbreak of rebellion in the first half of the fourteenth century (e.g., in urban [1302] and maritime [1325—28] Flanders and in English monastic towns [1326—27]) indicates the existence of socioeconomic and political disgruntlement well before the Black Death.

Some explanations for popular uprising, such as the placing of immediate stresses on the populace and the cumulative effect of centuries of oppression by manorial lords, are now largely dismissed. At the times of greatest stress, the Great Famine and the Black Death, disorder but no large-scale, organized uprising materialized. Manorial oppression likewise is difficult to defend when the peasant in the plague’s aftermath was often enjoying better pay, reduced dues and services, broader opportunities, and a higher standard of living. Detailed study of the participants in the revolts most often labeled “peasant” uprisings has revealed the central involvement and apparent common cause of urban and rural tradesmen and craftsmen, not only manorial serfs.

The Black Death may indeed have made its greatest contribution to popular rebellion by expanding the peasant’s horizons and fueling a sense of grievance at the pace of change, not at its absence. The plague may also have undercut adherence to the notion of a divinely—sanctioned, static social order and buffeted a belief that preservation of manorial socioeconomic arrangements was essential to the survival of all, which in turn may have raised receptiveness to the apocalyptic socially revolutionary message of preachers like England’s John Ball. After the Black Death, change was inevitable and apparent to all.

The reasons for any individual rebellion were complex. Measures in the environs of Paris to check wage hikes caused by the plague doubtless fanned discontent and contributed to the outbreak of the Jacquerie of 1358, but high taxation to finance the Hundred Years’ War, depredation by marauding mercenary bands in the French countryside, and the peasantry’s conviction that the nobility had failed them in war also roiled popular discontent. In the related urban revolt led by Étienne Marcel (1355-58), tensions arose from the Parisian bourgeoisie’s discontent with the war’s progress, the crown’s imposition of regressive sales and head taxes, and devaluation of the currency rather than from change attributable to the Black Death.

In the English Peasants’ Rebellion of 1381, continued enforcement of the Statute of Laborers no doubt rankled and perhaps made the peasantry more open to provocative sermonizing, but labor legislation had not halted higher wages or improvement in the standard of living for the peasant. Discontent may instead have arisen from an unsatisfying pace of improvement in the peasant’s lot. The regressive Poll Taxes of 1380 and 1381 also contributed to the discontent. It is furthermore noteworthy that the rebellion began in relatively affluent eastern England, not in the poorer west or north.

In the Ciompi revolt in Florence (1378-83), the restrictive gild regulations and denial of political voice to workers that followed the Black Death raised tensions; however, Florence’s war with the papacy and an economic slump in the 1370s, which resulted in devaluation of the penny in which the worker was paid, were equally if not more important in fomenting unrest. Once the value of the penny was restored to its former level in 1383, the rebellion in fact subsided.

In sum, the Black Death played some role in each uprising but, as with many medieval phenomena, it is difficult to gauge its importance relative to other causes. Perhaps the plague’s greatest contribution to unrest lay in its fostering of a shrinking economy that for a time was less able to absorb socioeconomic tensions than had the growing high medieval economy. The rebellions in any event achieved little. Promises made to the rebels were invariably broken and brutal reprisals often followed. The lot of the lower socioeconomic strata was improved incrementally by the larger economic changes already at work. Viewed from this perspective, the Black Death may have had more influence in resolving the worker’s grievances than in spurring revolt.

Conclusion

The European economy at the close of the Middle Ages (c. 1500) differed fundamentally from the pre-plague economy. In the countryside, a freer peasant derived greater material benefit from his toil. Fixed rents if not outright ownership of land had largely displaced customary dues and services and, despite low grain prices, the peasant more readily fed himself and his family from his own land and produced a surplus for the market. Yields improved as reduced population permitted a greater focus on fertile lands and more frequent fallowing, to the peasant’s benefit. More pronounced socioeconomic gradations developed among peasants as some, particularly the more prosperous, exploited the changed circumstances, especially the availability of land. The peasant’s gain was the lord’s loss. As the Middle Ages waned, the lord was commonly a pure rentier whose income was subject to the depredations of inflation.

In trade and manufacturing, the relative ease of success during the high Middle Ages gave way to greater competition, which rewarded better business practices and leaner, meaner, and more efficient concerns. Greater sensitivity to the market and the cutting of costs ultimately rewarded the European consumer with a wider range of goods at better prices.

In the long term, the demographic restructuring caused by the Black Death perhaps fostered the possibility of new economic growth. The pestilence returned Europe’s population to roughly its level c. 1100. As one scholar notes, the Black Death, unlike other catastrophes, destroyed people but not property, and the attenuated population was left with the whole of Europe’s resources to exploit, resources far more substantial by 1347 than they had been two and a half centuries earlier, when they had been created from the ground up. In this environment, survivors also benefited from the technological and commercial skills developed during the course of the high Middle Ages. Viewed from another perspective, the Black Death was a cataclysmic event and retrenchment was inevitable, but it ultimately diminished economic impediments and opened new opportunity.

References and Further Reading:

Aberth, John. “The Black Death in the Diocese of Ely: The Evidence of the Bishop’s Register.” Journal of Medieval History 21 (1995): 275—87.

Aberth, John. From the Brink of the Apocalypse: Confronting Famine, War, Plague, and Death in the Later Middle Ages. New York: Routledge, 2001.

Aberth, John. The Black Death: The Great Mortality of 1348—1350, a Brief History with Documents . Boston and New York: Bedford/St. Martin’s, 2005.

Aston, T. H. and C. H. E. Philpin, eds. The Brenner Debate: Agrarian Class Structure and Economic Development in Pre—Industrial Europe. Cambridge: Cambridge University Press, 1985.

Bailey, Mark D. “Demographic Decline in Late Medieval England: Some Thoughts on Recent Research.” Economic History Review 49 (1996): 1—19.

Bailey, Mark D. A Marginal Economy? East Anglian Breckland in the Later Middle Ages. Cambridge: Cambridge University Press, 1989.

Benedictow, Ole J. The Black Death, 1346—1353: The Complete History. Woodbridge, Suffolk: Boydell Press, 2004.

Bleukx, Koenraad. “Was the Black Death (1348—49) a Real Plague Epidemic? England as a Case Study.” In Serta Devota in Memoriam Guillelmi Lourdaux. Pars Posterior: Cultura Medievalis, edited by W. Verbeke, M. Haverals, R. de Keyser, and J. Goossens, 64—113. Leuven: Leuven University Press, 1995.

Blockmans, Willem P. “The Social and Economic Effects of Plague in the Low Countries, 1349—1500.” Revue Belge de Philologie et d’Histoire 58 (1980): 833—63.

Bolton, Jim L. “‘The World Upside Down’: Plague as an Agent of Economic and Social Change.” In The Black Death in England, edited by M. Ormrod and P. Lindley. Stamford: Paul Watkins, 1996.

Bowsky, William M. “The Impact of the Black Death upon Sienese Government and Society.” Speculum 38 (1964): 1—34.

Campbell, Bruce M. S. “Agricultural Progress in Medieval England: Some Evidence from Eastern Norfolk.” Economic History Review 36 (1983): 26—46.

Campbell, Bruce M. S., ed. Before the Black Death: Studies in the ‘Crisis’ of the Early Fourteenth Century. Manchester: Manchester University Press, 1991.

Cipolla, Carlo M. Before the Industrial Revolution: European Society and Economy, 1000—1700, Third edition. New York: Norton, 1994.

Cohn, Samuel K. The Black Death Transformed: Disease and Culture in Early Renaissance Europe. London: Edward Arnold, 2002.

Cohn, Samuel K. “After the Black Death: Labour Legislation and Attitudes toward Labour in Late-Medieval Western Europe.” Economic History Review 60 (2007): 457-85.

Davis, David E. “The Scarcity of Rats and the Black Death.” Journal of Interdisciplinary History 16 (1986): 455—70.

Davis, R. A. “The Effect of the Black Death on the Parish Priests of the Medieval Diocese of Coventry and Lichfield.” Bulletin of the Institute of Historical Research 62 (1989): 85—90.

Drancourt, Michel, Gerard Aboudharam, Michel Signoli, Olivier Detour, and Didier Raoult. “Detection of 400-Year-Old Yersinia Pestis DNA in Human Dental Pulp: An Approach to the Diagnosis of Ancient Septicemia.” Proceedings of the National Academy of Sciences of the United States of America 95 (1998): 12637-40.

Dyer, Christopher. Standards of Living in the Middle Ages: Social Change in England, c. 1200—1520. Cambridge: Cambridge University Press, 1989.

Emery, Richard W. “The Black Death of 1348 in Perpignan.” Speculum 42 (1967): 611—23.

Farmer, David L. “Prices and Wages.” In The Agrarian History of England and Wales, Vol. II, edited by H. E. Hallam, 715-817. Cambridge: Cambridge University Press, 1988.

Farmer, D. L. “Prices and Wages, 1350-1500.” In The Agrarian History of England and Wales, Vol. III, edited by E. Miller, 431-94. Cambridge: Cambridge University Press, 1991.

Flinn, Michael W. “Plague in Europe and the Mediterranean Countries.” Journal of European Economic History 8 (1979): 131—48.

Freedman, Paul. The Origins of Peasant Servitude in Medieval Catalonia. New York: Cambridge University Press, 1991.

Gottfried, Robert. The Black Death: Natural and Human Disaster in Medieval Europe. New York: Free Press, 1983.

Gyug, Richard. “The Effects and Extent of the Black Death of 1348: New Evidence for Clerical Mortality in Barcelona.” Mediæval Studies 45 (1983): 385—98.

Harvey, Barbara F. “The Population Trend in England between 1300 and 1348.” Transactions of the Royal Historical Society 4th ser. 16 (1966): 23—42.

Harvey, P. D. A. A Medieval Oxfordshire Village: Cuxham, 1240—1400. London: Oxford University Press, 1965.

Hatcher, John. “England in the Aftermath of the Black Death.” Past and Present 144 (1994): 3—35.

Hatcher, John and Mark Bailey. Modelling the Middle Ages: The History and Theory of England’s Economic Development. Oxford: Oxford University Press, 2001.

Hatcher, John. Plague, Population, and the English Economy 1348—1530. London and Basingstoke: MacMillan Press Ltd., 1977.

Herlihy, David. The Black Death and the Transformation of the West, edited by S. K. Cohn. Cambridge and London: Cambridge University Press, 1997.

Horrox, Rosemary, transl. and ed. The Black Death. Manchester: Manchester University Press, 1994.

Hunt, Edwin S. and James M. Murray. A History of Business in Medieval Europe, 1200-1550. Cambridge: Cambridge University Press, 1999.

Jordan, William C. The Great Famine: Northern Europe in the Early Fourteenth Century. Princeton: Princeton University Press, 1996.

Lehfeldt, Elizabeth, ed. The Black Death. Boston: Houghton and Mifflin, 2005.

Lerner, Robert E. The Age of Adversity: The Fourteenth Century. Ithaca: Cornell University Press, 1968.

Le Roy Ladurie, Emmanuel. The Peasants of Languedoc, transl. J. Day. Urbana: University of Illinois Press, 1976.

Lomas, Richard A. “The Black Death in County Durham.” Journal of Medieval History 15 (1989): 127—40.

McNeill, William H. Plagues and Peoples. Garden City, New York: Anchor Books, 1976.

Miskimin, Harry A. The Economy of the Early Renaissance, 1300—1460. Cambridge: Cambridge University Press, 1975.

Morris, Christopher. “The Plague in Britain.” Historical Journal 14 (1971): 205-15.

Munro, John H. “The Symbiosis of Towns and Textiles: Urban Institutions and the Changing Fortunes of Cloth Manufacturing in the Low Countries and England, 1270—1570.” Journal of Early Modern History 3 (1999): 1—74.

Munro, John H. “Wage—Stickiness, Monetary Changes, and the Real Incomes in Late—Medieval England and the Low Countries, 1300—1500: Did Money Matter?” Research in Economic History 21 (2003): 185—297.

Origo, Iris. The Merchant of Prato: Francesco di Marco Datini, 1335-1410. Boston: David R. Godine, 1957, 1986.

Platt, Colin. King Death: The Black Death and its Aftermath in Late—Medieval England. Toronto: University of Toronto Press, 1996.

Poos, Lawrence R. A Rural Society after the Black Death: Essex 1350—1575. Cambridge: Cambridge University Press, 1991.

Postan, Michael M. The Medieval Economy and Society: An Economic History of Britain in the Middle Ages. Harmondsworth, Middlesex: Penguin, 1975.

Pounds, Norman J. D. An Economic History of Europe. London: Longman, 1974.

Raoult, Didier, Gerard Aboudharam, Eric Crubézy, Georges Larrouy, Bertrand Ludes, and Michel Drancourt. “Molecular Identification by ‘Suicide PCR’ of Yersinia Pestis as the Agent of Medieval Black Death.” Proceedings of the National Academy of Sciences of the United States of America 97 (7 Nov. 2000): 12800—3.

Razi, Zvi. “Family, Land, and the Village Community in Later Medieval England.” Past and Present 93 (1981): 3-36.

Russell, Josiah C. British Medieval Population. Albuquerque: University of New Mexico Press, 1948.

Scott, Susan and Christopher J. Duncan. Return of the Black Death: The World’s Deadliest Serial Killer. Chicester, West Sussex and Hoboken, NJ: Wiley, 2004.

Shrewsbury, John F. D. A History of Bubonic Plague in the British Isles. Cambridge: Cambridge University Press, 1970.

Twigg, Graham. The Black Death: A Biological Reappraisal. London: Batsford Academic and Educational, 1984.

Waugh, Scott L. England in the Reign of Edward III. Cambridge: Cambridge University Press, 1991.

Ziegler, Philip. The Black Death. London: Penguin, 1969, 1987.

Citation: Routt, David. “The Economic Impact of the Black Death”. EH.Net Encyclopedia, edited by Robert Whaples. July 20, 2008. URL http://eh.net/encyclopedia/the-economic-impact-of-the-black-death/

Bimetallism

Angela Redish, University of British Columbia

A bimetallic monetary standard can be defined as one in which coins of two different metals are legal tender. Such standards were commonplace in Western economies throughout most of the last millennium, although their details differed. Under a typical bimetallic standard, coins of gold and silver were produced by the Mint under orders of the sovereign, and they were given exchange values that reflected their intrinsic value. For example, in the late eighteenth century the US established a coinage system comprising a silver Dollar, containing 371.25 Troy grains of silver, and a gold Eagle, containing 247.5 Troy grains of gold. The relative market value of gold to silver at that time was 15:1, and the legal tender value of the silver Dollar was $1 and of the Eagle $10, reflecting their relative values (ten silver Dollars would contain 3712.5 grains of silver, which is 15 times the 247.5-grain weight of the gold Eagle).
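
A minimal sketch of the arithmetic behind that coinage follows; the grain weights are those quoted above, and the code itself is only illustrative:

```python
# Implied mint ratio of the early US bimetallic coinage (illustrative sketch).
silver_grains_per_dollar = 371.25   # Troy grains of silver in one silver Dollar
gold_grains_per_eagle = 247.5       # Troy grains of gold in one $10 gold Eagle

# Silver content of ten Dollars, the face value of one Eagle
silver_grains_per_ten_dollars = 10 * silver_grains_per_dollar   # 3712.5 grains

mint_ratio = silver_grains_per_ten_dollars / gold_grains_per_eagle
print(mint_ratio)   # 15.0 -- the legal ratio matches the 15:1 market ratio of the day
```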

The mint typically bought gold and silver freely, that is, from anyone willing to sell at the mint price, which usually was slightly lower than the value of the coins produced, to pay for the costs of coining and, sometimes, profits or seignorage as well. As in the case of the gold standard – a more well-understood commodity money standard – bimetallism provided a nominal anchor for the monetary system. The prices of gold and silver were determined by their relative supply and demand (both monetary and non-monetary), and this determined the stock of money and the general price level.

In a world where the principal components of the money stock were full-bodied coins (that is, the coins circulated at roughly their intrinsic value, and there were no bank notes), bimetallic standards had the merit that they enabled currencies to have coins for high- and low-valued transactions without having exceptionally large or small coins. There was, however, a difficulty with bimetallic standards. If coins were to be given values according to the relative market value of gold to silver, a change in the market value of the metals would disrupt the monetary system. Essentially, one of three things would happen. First, the coin whose relative market value had risen could be withdrawn from circulation, making the monetary system either all gold or all silver. This phenomenon is predicted by Gresham’s Law – bad money drives out good – named after Sir Thomas Gresham, the advisor to Elizabeth I of England, who noted the behavior in the sixteenth century. Another possibility was that both gold and silver coins would continue to circulate but not at par values: the coin whose value had risen would circulate at a premium. Finally, perhaps the coins would both continue to circulate at par. The debate over which of these possibilities was in fact more prevalent, and in theory more likely, continues.
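
The Gresham’s Law case can be sketched in a few lines. The numbers and the decision rule below are a simplified illustration of the arbitrage logic, not a model of any particular coinage:

```python
# Which metal tends to be withdrawn when the market ratio drifts from the legal ratio?
# (Simplified illustration of Gresham's Law; ratios are ounces of silver per ounce of gold.)

LEGAL_RATIO = 15.0  # silver-to-gold ratio written into the coinage law

def circulation_outcome(market_ratio: float) -> str:
    if market_ratio > LEGAL_RATIO:
        # Gold buys more silver in the market than at the mint, so gold coin is undervalued:
        # holders melt, hoard, or export gold and spend silver.
        return "gold withdrawn; silver circulates"
    if market_ratio < LEGAL_RATIO:
        # Silver is worth more as metal than as coin, so silver is withdrawn instead.
        return "silver withdrawn; gold circulates"
    return "both metals circulate at par"

print(circulation_outcome(15.5))  # gold withdrawn; silver circulates
print(circulation_outcome(14.5))  # silver withdrawn; gold circulates (the mid-1800s outcome)
```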

Europe during the Dark Ages minted only silver coins – and only pennies made of debased (low purity) silver at that. With the rise of commerce in the Mediterranean, a need for larger denominations and a more reliable medium of exchange emerged, leading to the minting first of pure silver coins and then of gold coins. The gold coins were first minted in Florence – the well-known Florin – and in Venice – the Ducat – in the mid-thirteenth century. Many other countries also introduced their own gold coins, but the Florin and Ducat became the dollar of their time and were used in global commerce at least until the sixteenth century. Meanwhile, most local or retail trade was conducted in the locally produced silver coinage.

At the beginning of the nineteenth century most Western economies used bimetallic standards, but by the end of the century the gold standard – that is, a monometallic standard – covered the West and much of the rest of the global economy. There is now a considerable literature on why this transition occurred, and on its merits.

The underlying factors affecting the choice of monetary standard were technological change and globalization. In the early nineteenth century steam engines were harnessed to rolling mills and coining presses. This mechanization made it possible to produce coins that were virtually uniform in dimension and that had very high definition impressions on their faces. Such coins were much more difficult to counterfeit, so it became feasible to produce coins that were not full-bodied and yet would not be counterfeited. Similarly, convertible bank notes became more common as a medium of exchange. Both factors meant that it was possible to have high- and low-denomination money without bimetallism. In the early nineteenth century Britain formally rejected bimetallism and fixed the value of the pound in terms of its gold content only.

The nineteenth century saw dramatic reductions in transportation costs and a resulting integration of economies that is only now being recreated. This raised the benefits of a common currency; indeed, in the 1860s a world monetary conference endorsed a world money based on a gold coin. (The proposal was not ratified and fell victim to the Franco-Prussian War!) The choice of gold may have reflected its higher value – more prestige – and the desire to emulate the economic successes of Britain.

However, the major transition occurred between 1850 (when only Britain and Portugal were on the gold standard) and 1880, by which time the US and almost all of Western Europe had adopted gold. A key factor in this timing was the California gold rush in the mid-nineteenth century. This increased the world gold supply and caused a fall in the relative price of gold. As Gresham’s Law predicted, the result was a withdrawal of silver from circulation both in the US and in Europe. Gold became the de facto money as it became unprofitable to sell silver to the Mint. In earlier times the monetary authorities would have responded by altering the weight of the gold coins; in the environment of the mid-nineteenth century, however, the response was to provide low-value coins by producing token coins on government account.

In the US, subsidiary token coins were introduced in 1853, and in 1873 Congress passed a new Coinage Act that precluded the minting of the silver dollar. The silver dollar had not in fact been minted for decades, but the Act was subsequently derided as the Crime of ’73 on the grounds that it had inadvertently led to the adoption of the gold standard. Belgium, Italy and Switzerland, whose gold and silver coinages were identical to those of France, each adopted different subsidiary coinages, but in 1865 joined with France to form the Latin Monetary Union (LMU) to create a uniform subsidiary coinage. In 1871 newly unified Germany adopted the gold standard and financed the acquisition of gold (i.e., the sale of the existing silver coins) through the indemnity it imposed on France at the conclusion of the Franco-Prussian War. In order to avoid providing a sink for German silver, the French refused to buy silver, leading all the LMU countries to abandon bimetallism.

Although the gold standard was entrenched by 1880, during the last two decades of the nineteenth century there were attempts in both the US and Europe to return to bimetallism. The arguments were both theoretical and partisan. A significant motivation was the rise in the relative price of gold after 1870 (in part due to the increased monetary demand for gold), which generated a secular deflation. Furthermore, new silver discoveries reduced the price of silver, so that if the previous bimetallic standards had remained in place there would have been inflation.

In the US, Westerners with nominal debts, who felt penalized by the deflation, supported William Jennings Bryan, who campaigned for the Presidency in 1896 on the slogan that Americans should not be “crucified on a cross of gold.” However, Bryan lost the election, gold discoveries in the late 1890s generated a gradual inflation, and in 1900 the US passed the Gold Standard Act, cementing the adoption of the gold standard.

In Europe the debate focused on the welfare properties of bimetallism, with advocates arguing that international bimetallism – in which all countries adopted the same relative prices for gold and silver – would alleviate the problems associated with Gresham’s Law, and that bimetallism would promote greater price stability than the gold standard provided. The LMU countries were at the forefront of the promotion of bimetallism, but Britain and Germany were never really on board, and the requisite degree of international co-operation was not forthcoming. By 1900 bimetallism was dead.

Further Reading

Bordo, Michael D. “Bimetallism.” In The New Palgrave Encyclopedia of Money and Finance edited by Peter K. Newman, Murray Milgate and John Eatwell. New York: Stockton Press, 1992.

Flandreau, Marc. L’or du monde: La France et la Stabilité du Système Monétaire International, 1848-1873. Paris: L’Harmattan, 1995.

Friedman, Milton. “Bimetallism Revisited.” Journal of Economic Perspectives 4 (1990): 95-104.

Garber, Peter M. “Nominal Contracts in a Bimetallic Standard.” American Economic Review 76 (1986): 1012-30.

Redish, Angela. Bimetallism: An Economic and Historical Analysis. New York: Cambridge University Press, 2000.

Rockoff, Hugh. “The Wizard of Oz as a Monetary Allegory.” Journal of Political Economy 98 (1990): 739-60.

Rolnick, Arthur J. and Warren E. Weber. “Gresham’s Law or Gresham’s Fallacy?” Journal of Political Economy 94 (1986): 185-99.

Citation: Redish, Angela. “Bimetallism”. EH.Net Encyclopedia, edited by Robert Whaples. August 14, 2001. URL http://eh.net/encyclopedia/bimetallism/

The Economic History of Major League Baseball

Michael J. Haupert, University of Wisconsin — La Crosse

“The reason baseball calls itself a game is because it’s too screwed up to be a business” — Jim Bouton, author and former MLB player

Origins

The origin of modern baseball is usually traced to the organization of the New York Knickerbocker Base Ball Club in 1842. The rules they played by evolved into the rules of the organized leagues surviving today. In 1845 they organized into a dues-paying club in order to rent the Elysian Fields in Hoboken, New Jersey to play their games on a regular basis. Clubs of this era were typically amateur in name but almost always featured a few players who were covertly paid. The National Association of Base Ball Players was organized in 1858 in recognition of the profit potential of baseball. The first admission fee (50 cents) was charged that year for an all-star game between the Brooklyn and New York clubs. The association formalized playing rules and created an administrative structure. The original association had 22 teams and was decidedly amateur in theory, if not in practice, banning direct financial compensation for players. In reality, of course, the ban was freely and wantonly ignored: teams paid players under the table, and players regularly jumped from one club to another for better financial remuneration.

The Demand for Baseball

Before there were professional players, there was a recognition of people’s willingness to pay to see grown men play baseball. The demand for baseball extends beyond attendance at live games to television, radio and print. As with most other forms of entertainment, the demand ranges from casual interest to a fanatical following. Many tertiary industries have grown up around the demand for baseball, and sports in general, including the sports magazine trade, dedicated sports television and radio stations, tour companies specializing in sports trips, and an active memorabilia industry. While not all of this is devoted exclusively to baseball, it is indicative of the passion for sports, including baseball.

A live baseball game is consumed at the same time as the last stage of its production. Like an airline seat or a hotel room, it is a highly perishable good that cannot be inventoried. The result is that price discrimination can be employed. Since the earliest days of paid attendance, teams have discriminated based on seat location and on the sex and age of the patron. The first “ladies day,” which offered free admission to any woman accompanied by a man, was offered by the Gotham club in 1883. The tradition would last for nearly a century. Teams have only recently begun to exploit the full potential of price discrimination by varying ticket prices according to the expected quality, date and time of the game.
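
The revenue logic of that kind of price discrimination can be sketched with entirely hypothetical numbers for the fan segments and their willingness to pay:

```python
# Why a perishable ticket invites price discrimination (all figures hypothetical).
# (seats demanded, willingness to pay in $) for three notional fan segments
segments = {
    "premium seats, marquee opponent": (5_000, 60.0),
    "ordinary seats, weekday game":    (20_000, 15.0),
    "promotional / ladies day":        (10_000, 5.0),
}

uniform_price = 15.0
# At a single price, low-value fans stay home and high-value fans keep their surplus.
uniform_revenue = sum(q for q, wtp in segments.values() if wtp >= uniform_price) * uniform_price
segmented_revenue = sum(q * wtp for q, wtp in segments.values())

print(uniform_revenue)    # 375000.0
print(segmented_revenue)  # 650000.0 -- charging each segment closer to its valuation raises revenue
```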

Baseball and the Media

Telegraph Rights

Baseball and the media have enjoyed a symbiotic relationship since newspapers began regularly covering games in the 1860s. Games in progress were broadcast by telegraph to saloons as early as the 1890s. In 1897 the first sale of broadcast rights took place. Each team received $300 in free telegrams as part of a league-wide contract to transmit game play-by-play over the telegraph wire. In 1913 Western Union paid each team $17,000 per year over five years for the rights to broadcast the games. The movie industry purchased the rights to film and show the highlights of the 1910 World Series for $500. In 1911 the owners managed to increase that rights fee to $3500.

Radio

It is hard to imagine that Major League Baseball (MLB) teams once saw the media as a threat to the value of their franchises. But originally they resisted putting their games on the radio for fear that customers would stay home and listen to the game for free rather than come to the park. They soon discovered that radio (and eventually television) provided free advertising that attracted even more fans, as well as an additional source of revenue. By 2002, media revenue exceeded gate revenue for the average MLB team.

Originally, local radio broadcasts were the only source of media revenue. National radio broadcasts of regular season games were added in 1950 by the Liberty Broadcasting System. The contract lasted only one year, however, before radio reverted to local broadcasting. The World Series, by contrast, has been nationally broadcast since 1922. For national broadcasts, the league negotiates a contract with a provider and splits the proceeds equally among all the teams. Thus, national radio and television contracts enrich the pot for all teams on an equal basis.

In the early days of radio, teams saw the broadcasting of their games as free publicity, and charged little or nothing for the rights. The Chicago Cubs were the first team to regularly broadcast their home games, giving them away to local radio in 1925. It would be another fourteen years, however, before every team began regular radio broadcasts of their games.

Television

1939 was also the year that the first game was televised on an experimental basis. In 1946 the New York Yankees became the first team with a local television contract when they sold the rights to their games for $75,000. By the end of the century they sold those same rights for $52 million per season. By 1951 the World Series was a television staple, and by 1955 all teams sold at least some of their games to local television. In 1966 MLB followed the lead of the NFL and sold its first national television package, netting $300,000 per team. The latest national television contract paid $24 million to each team in 2002.

Table 1: MLB Television Revenue, Ticket Prices and Average Player Salary, 1964-2002
(real, inflation-adjusted values are in 2002 dollars)

| Year | TV revenue, nominal ($ millions) | TV revenue, real ($ millions) | Avg. ticket price, nominal ($) | Avg. ticket price, real ($) | Avg. player salary, nominal ($) | Avg. player salary, real ($) |
|------|------|------|------|------|------|------|
| 1964 | 21.28 | 123 | 2.25 | 13.01 | 14,863.00 | 85,909 |
| 1965 | 25.67 | 146 | 2.29 | 13.02 | 14,341.00 | 81,565 |
| 1966 | 27.04 | 149 | 2.35 | 12.95 | 17,664.00 | 97,335 |
| 1967 | 28.93 | 156 | 2.37 | 12.78 | 19,000.00 | 102,454 |
| 1968 | 31.04 | 160 | 2.44 | 12.58 | 20,632.00 | 106,351 |
| 1969 | 38.04 | 186 | 2.61 | 12.76 | 24,909.00 | 121,795 |
| 1970 | 38.09 | 176 | 2.72 | 12.57 | 29,303.00 | 135,398 |
| 1971 | 40.70 | 180 | 2.91 | 12.87 | 31,543.00 | 139,502 |
| 1972 | 41.09 | 176 | 2.95 | 12.64 | 34,092.00 | 146,026 |
| 1973 | 42.39 | 171 | 2.98 | 12.02 | 36,566.00 | 147,506 |
| 1974 | 43.25 | 157 | 3.10 | 11.25 | 40,839.00 | 148,248 |
| 1975 | 44.21 | 147 | 3.30 | 10.97 | 44,676.00 | 148,549 |
| 1976 | 50.01 | 158 | 3.45 | 10.90 | 52,300.00 | 165,235 |
| 1977 | 52.21 | 154 | 3.69 | 10.88 | 74,000.00 | 218,272 |
| 1978 | 52.31 | 144 | 3.98 | 10.96 | 97,800.00 | 269,226 |
| 1979 | 54.50 | 135 | 4.12 | 10.21 | 121,900.00 | 301,954 |
| 1980 | 80.00 | 174 | 4.45 | 9.68 | 146,500.00 | 318,638 |
| 1981 | 89.10 | 176 | 4.93 | 9.74 | 196,500.00 | 388,148 |
| 1982 | 117.60 | 219 | 5.17 | 9.63 | 245,000.00 | 456,250 |
| 1983 | 153.70 | 277 | 5.69 | 10.25 | 289,000.00 | 520,839 |
| 1984 | 268.40 | 464 | 5.81 | 10.04 | 325,900.00 | 563,404 |
| 1985 | 280.50 | 468 | 6.08 | 10.14 | 368,998.00 | 615,654 |
| 1986 | 321.60 | 527 | 6.21 | 10.18 | 410,517.00 | 672,707 |
| 1987 | 349.80 | 553 | 6.21 | 9.82 | 402,579.00 | 636,438 |
| 1988 | 364.10 | 526 | 6.21 | 8.97 | 430,688.00 | 622,197 |
| 1989 | 246.50 | 357 | n/a | n/a | 489,539.00 | 708,988 |
| 1990 | 659.30 | 907 | n/a | n/a | 589,483.00 | 810,953 |
| 1991 | 664.30 | 877 | 8.84 | 11.67 | 845,383.00 | 1,116,063 |
| 1992 | 363.00 | 465 | 9.41 | 12.05 | 1,012,424.00 | 1,296,907 |
| 1993 | 618.25 | 769 | 9.73 | 12.10 | 1,062,780.00 | 1,321,921 |
| 1994 | 716.05 | 868 | 10.62 | 12.87 | 1,154,486.00 | 1,399,475 |
| 1995 | 516.40 | 609 | 10.76 | 12.69 | 1,094,440.00 | 1,290,693 |
| 1996 | 706.30 | 810 | 11.32 | 12.98 | 1,101,455.00 | 1,263,172 |
| 1997 | n/a | n/a | 12.06 | 13.51 | 1,314,420.00 | 1,472,150 |
| 1998 | n/a | n/a | 13.58 | 14.94 | 1,378,506.00 | 1,516,357 |
| 1999 | n/a | n/a | 14.45 | 15.61 | 1,726,282.68 | 1,864,385 |
| 2000 | n/a | n/a | 16.22 | 16.87 | 1,987,543.03 | 2,067,045 |
| 2001 | 1,291.06 | 1,310 | 17.20 | 17.45 | 2,343,710.00 | 2,378,093 |
| 2002 | n/a | n/a | 17.85 | 17.85 | 2,385,903.07 | 2,385,903 |

Notes: For 1989 and 1992, national TV data only (no local TV included); n/a indicates the value was not reported. Real values are calculated using the Consumer Price Index.
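
The deflation behind the “real” columns is a straightforward CPI adjustment. The sketch below uses approximate CPI annual averages; since the exact series used for the table is not given, the figures here are illustrative:

```python
# Converting nominal values to 2002 dollars with the Consumer Price Index.
# CPI values are approximate annual averages (1982-84 = 100), used here only for illustration.
cpi = {1964: 31.0, 1980: 82.4, 2002: 179.9}

def to_2002_dollars(nominal: float, year: int) -> float:
    return nominal * cpi[2002] / cpi[year]

print(round(to_2002_dollars(2.25, 1964), 2))   # ~13.06, close to the $13.01 shown for the 1964 ticket
print(round(to_2002_dollars(4.45, 1980), 2))   # ~9.72, close to the $9.68 shown for the 1980 ticket
```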

As the importance of local media contracts grew, so did the problems associated with them. As cable and pay-per-view television became more popular, teams found them attractive sources of revenue. A fledgling cable channel could make its reputation by carrying the local ball team. In a large enough market, this could result in substantial payments to the local team. These local contracts paid only the home team, not all teams. The problem from MLB’s point of view was not the income but the variance in that income. That variance has increased over time and is the primary source of the gap in payrolls, which is linked to the gap in team quality cited as the “competitive balance problem.” In 1962 the MLB average for local media income was $640,000, ranging from a low of $300,000 (Washington) to a high of $1.2 million (New York Yankees). In 2001 the average team garnered $19 million from local radio and television contracts, but the gap between the bottom and top had widened to an incredible $51.5 million: the Montreal Expos received $536,000 for their local broadcast rights while the New York Yankees received more than $52 million for theirs. Revenue sharing has resulted in a redistribution of some of these funds from the wealthiest to the poorest teams, but its impact on the competitive balance problem remains to be seen.

Franchise values

Baseball has been about profits since the first admission fee was charged. The first professional league, the National Association, founded in 1871, charged a $10 franchise fee. The latest teams to join MLB paid $130 million apiece for the privilege in 1998.

Early Ownership Patterns

The value of franchises has mushroomed over time. In the early part of the twentieth century, owning a baseball team was a career choice for a wealthy sportsman. In some instances, it was a natural choice for someone with a financial interest in a related business, such as a brewery, that provided complementary goods. More commonly, the operation of a baseball team was a full time occupation of the owner, who was usually one individual, occasionally a partnership, but never a corporation.

Corporate Ownership

This model of ownership has since changed. The typical owner of a baseball team is now either a conglomerate, such as Disney, AOL Time Warner, or the Chicago Tribune Company, or a wealthy individual who owns a (sometimes) related business and operates the baseball team on the side – perhaps as a hobby, or as a complementary business. This transition began to occur when the tax benefits of owning a baseball team became significant enough that they were worth more to a wealthy conglomerate than to a family owner. A baseball team that can show a negative bottom line while delivering a positive cash flow can provide significant tax benefits by offsetting income from another business. Another advantage of corporate ownership is the ability to cross-market products. For example, the Tribune Company owns the Chicago Cubs and is able to use the team as part of its television programming. If it is more profitable for the company to show income on the Tribune ledger than the Cubs ledger, then it decreases the payment made to the team for the broadcast rights to its games. If a team owner does not have another source of income, then the ability to show a loss on a baseball team does not provide a tax break on other income. One important source of the tax advantage of owning a franchise comes from the ability to depreciate the value of player contracts. In 1935 the IRS ruled that baseball teams could depreciate the value of their player contracts. This is an anomaly, since labor is not a depreciating asset.
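
A rough sketch of the tax arithmetic described above, with entirely hypothetical numbers, shows why a paper loss can be valuable to an owner with other income:

```python
# Hypothetical illustration of the depreciation tax shelter (all figures invented).
team_cash_flow = 5.0           # $ millions of cash the team actually generates
contract_depreciation = 9.0    # $ millions of non-cash depreciation claimed on player contracts
other_business_income = 20.0   # $ millions from the owner's other businesses
tax_rate = 0.40                # illustrative marginal tax rate

team_book_loss = team_cash_flow - contract_depreciation      # -4.0: the team "loses money" on paper
taxable_income = other_business_income + team_book_loss      # 16.0 rather than 20.0
tax_saving = -team_book_loss * tax_rate                      # about 1.6

print(team_book_loss, taxable_income, round(tax_saving, 2))  # -4.0 16.0 1.6
```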

Table 2: Comparative Prices for MLB Salaries, Tickets and Franchise Values for Selected Years

Nominal values
year Salary ($000) Average ticket price Average franchise value ($ millions)
minimum mean maximum
1920 5 20 1.00 0.794
1946 11.3 18.5 1.40 2.5
1950 13.3 45 1.54 2.54
1960 3 16 85 1.96 5.58
1970 12 29.3 78 2.72 10.13
1980 30 143.8 1300 4.45 32.1
1985 60 371.2 2130 6.08 40
1991 100 851.5 3200 8.84 110
1994 109 1153 5975 10.62 111
1997 150 1370 10,800 12.06 194
2001 200 2261 22,000 18.42 286
Real values (2002 dollars)
year Salary ($000) Average ticket price Average franchise value ($ millions)
minimum mean maximum
1920 44.85 179.4 8.97 7.12218
1946 104.299 170.755 12.922 23.075
1950 99.351 336.15 11.5038 18.9738
1960 18.24 97.28 516.8 11.9168 33.9264
1970 55.44 135.366 360.36 12.5664 46.8006
1980 65.4 313.484 2834 9.701 69.978
1985 100.2 619.904 3557.1 10.1536 66.8
1991 132 1123.98 4224 11.6688 145.2
1994 131.89 1395.13 7229.75 12.8502 134.31
1997 168 1534.4 12096 13.5072 217.28
2001 202 2283.61 22220 18.6042 288.86

The most significant change in the value of franchises has occurred in the last decade as a function of new stadium construction. The construction of a new stadium creates additional sources of revenue for a team owner, which impacts the value of the franchise. It is the increase in the value of franchises which is the most profitable part of ownership. Eight new stadiums were constructed between 1991 and 1999 for existing MLB teams. The average franchise value for the teams in those stadiums increased twenty percent the year the new stadium opened.


The Market Structure of MLB and Players’ Organizations

Major League Baseball is a highly successful oligopoly of professional baseball teams. The teams have successfully protected themselves against competition from other leagues for more than 125 years. The closest call came in 1903, when the established National League merged with its rival, the American League (a former minor league, the Western League, which had renamed itself the American League in 1900), to form the structure that exists to this day. MLB lost some of its power in 1976, when it lost its monopsonistic control over the player labor market, but it retains its monopolistic hold on the number and location of franchises. Franchise owners must now share a greater percentage of their revenue with the hired help, whereas prior to 1976 they controlled how much of the revenue to divert to the players.

The owners of professional baseball teams have acted in unison since the very beginning. They conspired to hold down the salaries of players with a secret reserve agreement in 1878. This created a monopsony whereby a player could only bargain with the team that originally signed him. This stranglehold on the labor market would last a century.

The baseball labor market is one of extremes. Baseball players began their labor history as amateurs whose skills quickly came to be in high demand. For some, this translated into a career. Ultimately, all players became victims of a well-organized and obstinate cartel. Players lost their ability to bargain and offer their services competitively for a century. Despite several attempts to organize, and a few attempts to create additional demand for their services from outside sources, they failed to win the right to sell their labor to the employer of their choice.

Beginning of Professionalization

The first team of baseball players to be openly paid was the 1869 Redstockings of Cincinnati. Prior to that, teams were organized as amateur squads who played for the pride of their hometown, club or college. The stakes in these games were bragging rights, often a trophy or loving cup, and occasionally a cash prize put up by a benefactor, or as a wager between the teams. It was inevitable that professional players would soon follow.

The first known professional players were paid under the table. The desire to win had eclipsed the desire to observe good sportsmanship, and the first step down the slope toward full professionalization of the sport had been taken. Just a few years later, in 1869, the first professional team was established. The Redstockings are as famous for being the first professional team as they are for their record and barnstorming accomplishments. The team was openly professional, and thus served as a worthy goal for other teams, amateur, semi-professional, and professional alike. The Cincinnati squad spent the next year barnstorming across America, taking on, and defeating, all challengers. In the process they drew attention to the game of baseball, and played a key part in its growing popularity. Just two years later, the first entirely professional baseball league would be established.

National Association of Professional Baseball Players

The formation of the National Association of Professional Base Ball Players in 1871 created a different level of competition for baseball players. The professional organization, which originally included nine teams, broke away from the National Association of Base Ball Players, which used amateur players. The amateur league folded three years after the split. The professional league was reorganized and renamed the National League in 1876. Originally, professional teams competed to sign players, and the best were rewarded handsomely, earning as much as $4500 per season. This was good money, given that a skilled laborer might earn $1200-$1500 per year for a 60-hour work week.

This system, however, proved to be problematic. Teams competed so fiercely for players that they regularly raided each other’s rosters. It was not uncommon for players to jump from one team to another during the season for a pay increase. This not only cost team owners money, but also created havoc with the integrity of the game, as players moved among teams, causing dramatic mid-season swings in the quality of teams.

Beginning of the Reserve Clause, 1878-79

During the winter of 1878-79, team owners gathered to discuss the problem of player roster jumping. They made a secret agreement among themselves not to raid one another’s rosters during the season. Furthermore, they agreed to restrain themselves during the off-season as well. Each owner would circulate to the other owners a list of five players he intended to keep on his roster the following season, and by agreement none of the owners would offer a contract to any of these “reserved” players. Hence the reserve clause was born. It would take nearly a century before it was struck down. In the meantime, it expanded from five players (about half the team) to the entire roster (1883) and became a formal contract clause (1887) agreed to by the players. Owners would ultimately make such a convincing case for the necessity of the reserve clause that players themselves testified in its favor at the Celler Anti-monopoly Hearings in 1951.

In 1892 the minor leagues agreed to a system that allowed National League teams to draft players from minor league rosters. The agreement was a response to the minor leagues’ failure to get the NL to honor their reserve lists. In other words, what was good for the goose was not good for the gander: while NL owners agreed to honor their reserve lists among one another, they showed no such respect for the reserve lists of teams in other organized professional leagues. They believed they were at the top of the pyramid, where all the best players should be, and therefore they would get those players when they wanted them. Under the draft agreement the minor league teams allowed NL teams to select players from their rosters for fixed payments. The NL sacrificed some money, but it restored a bit of order to the process and eliminated expensive bidding wars among its teams for the services of minor league players.

The Players League

The first revolt by the players came in 1890, when they formed their own league, called the Players League, to compete with the National League and its rival, the American Association (AA), founded in 1882. The Players League was the first and only example of a cooperative league. The league featured profit sharing with players, an abolition of unilateral contract transfers, and no reserve clause. The competing league caused a bidding war for talent, leading to salary increases for the best players. The “war” ended after just one season, when the National League and American Association agreed to allow owners of some Players League teams to buy existing franchises. The following year, the NL and AA merged by buying out four AA franchises for $130,000 and merging the other four into the National League, to form a single twelve-team circuit.

Syndicates

This proved to be an unwieldy league arrangement, however, and some of the franchises proved financially unstable. In order to preserve the structure of the league and avoid the bankruptcy of some teams, syndicate ownership evolved, in which owners purchased a controlling interest in two teams. This did not help the stability of the league. Instead, syndicates used one team to train young players and feed the best of them to the other team. This period in league history exhibits some of the greatest disparities between the best and worst teams in the league. In 1899 the Cleveland Spiders, the poor stepsister in the Cleveland-St. Louis syndicate, lost a record 134 of 154 games, a level of futility that has never been equaled. In 1900 the NL was reduced to eight teams, buying out four of the existing franchises (three of them original AA franchises) for $60,000.

Western League Competes with National League

Syndicate ownership was ended in 1900 as the final part of the reorganization of the NL. The reorganization also prompted the minor Western League to declare major league status and to move some teams into NL markets (Chicago, Boston, St. Louis, Philadelphia and Manhattan) for direct competition. All-out competition followed in 1901, complete with roster raiding, salary increases, and team jumping, much to the benefit of the players. Syndicate ownership appeared again in 1902, when the owners of the Pittsburgh franchise purchased an interest in the Philadelphia club. Owners briefly entertained the idea of turning the entire league into a syndicate, transferring players to the markets where they would be most valuable, but the idea was dropped for fear that the game would lose credibility and attendance would fall. In 1910 syndicate ownership was formally banned, though it did occur again in 2002, when the Montreal franchise was purchased by the other 29 MLB franchises as part of a three-way franchise swap involving Boston, Miami and Montreal. MLB is currently looking to sell the franchise and move it to a more profitable market.

National and American Leagues End Competition

Team owners quickly saw the light, and in 1903 they made an agreement to honor one another’s rosters. Once more the labor wars were ended, this time in an agreement that would establish the major leagues as an organization of two cooperating leagues: the National League and the American League, each with eight teams, located in the largest cities east of the Mississippi (with the exception of St. Louis), and each league honoring the reserved rosters of teams in the other. This structure would prove remarkably stable: no franchise changes occurred until 1953, when the Boston Braves moved to Milwaukee, becoming the first team to relocate in half a century.

Franchise Numbers and Movements

The location and number of franchises has been a tightly controlled issue for teams since leagues were first organized. Though franchise movements were not rare in the early days of the league, they have always been under the control of the league, not the individual franchise owners. An owner is accepted into the league, but may not change the location of his or her franchise without the approval of the other members of the league. In addition, moving the location of a franchise within the vicinity of another franchise requires the permission of the affected franchise. As a result, MLB franchises have been very stable over time in regard to location. The size of the league has also been stable. From the merger of the AL and NL in 1903 until 1961, the league retained the same sixteen teams. Since that time, expansion has occurred fairly regularly, increasing to its present size of 30 teams with the latest round of expansion in 1998. In 2001, the league proposed going in the other direction, suggesting that it would contract by two teams in response to an alleged fiscal crisis and breakdown in competitive balance. Those plans were postponed at least four years by the labor agreement signed in 2002.

Table 3: MLB Franchise Sales Data by Decade

Decade   Average purchase price in millions (2002 dollars)   Average annual rate of increase in franchise sales price   Average annual rate of return on DJIA (includes capital appreciation and annual dividends)   Average tenure of ownership of MLB franchise in years   Number of franchise sales
1910s .585(10.35) 6 6
1920s 1.02(10.4) 5.7 14.8 12 9
1930s .673(8.82) -4.1 -0.3 19.5 4
1940s 1.56(15.6) 8.8 10.8 15.5 11
1950s 3.52(23.65) 8.5 16.7 13.5 10
1960s 7.64(43.45) 8.1 7.4 16 10
1970s 12.62(41.96) 5.1 7.7 10 9
1980s 40.7(67.96) 12.4 14.0 11 12
1990s 172.71(203.68) 15.6 12.6 15.8 14

Note: 2002 values calculated using the Consumer Price Index for decade midpoint

Negro Leagues

Because African Americans were excluded from MLB until Jackie Robinson broke the color barrier in 1947, separate professional leagues existed for black players. The first was formed in 1920, and the last survived until 1960, though their fate had been sealed by the integration of the major and minor leagues.

Relocations

As revenues dried up or new markets beckoned due to shifts in population and the decreasing cost of transcontinental transportation, franchises began relocating in the second half of the twentieth century. The period from 1953 to 1972 saw a spate of franchise relocations: teams moved to Kansas City, Minneapolis, Baltimore, Los Angeles, Oakland, Dallas and San Francisco in pursuit of new markets. Most of these moves involved one team moving out of a market it shared with another team. The last team to relocate was the Washington D.C. franchise, which moved to suburban Dallas in 1972. It was the second time in just over a decade that a franchise had moved from the nation’s capital; the original franchise, a charter member of the American League, had moved to Minneapolis in 1961. While there have been no relocations since then, there have been plenty of threats to relocate, frequently made by teams trying to get a new stadium built with public financing.

There were still occasional challenges to the reserve clause. Until the 1960s, however, these came in the form of rival leagues creating competition for players, not challenges to the legality of the reserve clause itself.

Federal League and the 1922 Supreme Court Antitrust Exemption

In 1914 the Federal League debuted. The new league did not recognize the reserve clause of the existing leagues, and raided their rosters, successfully luring some of the best players to the rival league with huge salary increases. Other players benefited from the new competition, and were able to win handsome raises from their NL and AL employers in return for not jumping leagues. The Federal League folded after two seasons when some of the franchise owners were granted access to the major leagues. No new teams were added, but a few owners were allowed to purchase existing NL and AL teams.

The first attack on the organizational structure of the major leagues to reach the U.S. Supreme Court came when the shunned owner of the Baltimore club of the Federal League sued major league baseball for violation of antitrust law. Federal Baseball Club of Baltimore v. National League eventually reached the Supreme Court, which in 1922 rendered its famous decision that baseball was not interstate commerce and therefore was exempt from antitrust law.

Early Strike and Labor Relations Problems

The first player strike actually occurred in 1912, when the Detroit Tigers, in a show of support for their embattled star Ty Cobb, refused to take the field unless what they regarded as an unfair suspension of Cobb was lifted. When warned that the team faced the prospect of a forfeit and a $5000 fine if it did not field a team, owner Frank Navin recruited local amateur players to suit up for the Tigers. The result was not surprising: a 24-2 victory for the Philadelphia Athletics.

This was not an organized strike against the system per se, but it was indicative of the problems in labor relations between players and owners. Cobb’s suspension was determined by the owner of the team, with no chance for a hearing for Cobb and no guidance from any existing labor agreement regarding suspensions. The owner was in total control and could mete out whatever punishment, for whatever length of time, he deemed appropriate.

Mexican League

The next competing league appeared in 1946 from an unusual source: Mexico. Again, as in previous league wars, the competition benefited the players. In this case the players who benefited most were those who were able to use Mexican League offers as leverage to gain better contracts from their major league teams. Those who accepted offers from Mexican League teams would ultimately regret it. The league was under-financed, the playing and travel conditions were far below major league standards, and the wrath of the major leagues was deep. When the first paychecks were missed, the players began to head back to the U.S. They found no jobs waiting for them, however: Major League Baseball Commissioner Happy Chandler blacklisted them from the league. This led to a lawsuit, Gardella v. MLB. Danny Gardella, one of the blacklisted players, sued MLB for restraint of trade after being prevented from returning to the league because he had accepted a Mexican League offer for the 1946 season. The case was eventually settled out of court after a federal appeals court sided with Gardella in 1949. While many of the blacklisted players ultimately returned to the major leagues, they lost several years of their careers in the process.

Player Organizations

The first organization of baseball players came in 1885, in part as a response to the reserve clause enacted by owners. The National Brotherhood of Professional Base Ball Players was not particularly successful, however. In fact, just two years later the players agreed to the reserve clause, and it became part of the standard player’s contract for the next 90 years.

In 1900 another player organization was founded, the Players Protective Association. Competition broke out the next year, when the Western League declared itself a major league, and became the American League. It would merge with the National League for the 1903 season, and the brief period of roster raiding and increasing player salaries ended, as both leagues agreed to recognize one another’s rosters and reserve clauses. The Players Protective Association faded into obscurity amid the brief period of increased competition and player salaries.

Failure and Consequences of the American Baseball Guild

In 1946 the foundation was laid for the current Major League Baseball Players Association (MLBPA). Labor lawyer Robert Murphy created the American Baseball Guild, a players’ organization, after holding secret talks with players. Ultimately, the players voted not to form a union and instead followed the encouragement of the owners, forming their own committee of player representatives to bargain directly with the owners. The negotiations produced changes to the standard labor contract, which up to this point had been pretty much dictated by the owners. It contained such features as the right to waive a player with only ten days’ notice, the right to unilaterally decrease salary from one year to the next by any amount, and of course the reserve clause.

The players did not make major headway with the owners, but they did garner some concessions. Among them were a maximum pay cut of 25%, a minimum salary of $5000, a promise by the owners to create a pension plan, and $25 per week in living expenses for spring training camp. Until 1947, players received only expense money for spring training, no salary. The players today, despite their multimillion-dollar contracts, still receive “Murphy money” for spring training as well as a meal allowance for each day they are on the road traveling with the club.

Facing eight antitrust lawsuits in 1950, MLB requested Congress to pass a general immunity bill for all professional sports leagues. The request ultimately led to MLB’s inclusion in the Celler Anti-monopoly hearings in 1951. However, no legislative action was recommended. In fact, the owners by this time had so thoroughly convinced the players of the necessity of the reserve clause to the very survival of MLB that several players testified in favor of the monopsonistic structure of the league. They cited it as necessary to maintain the competitive balance among the teams that made the league viable. In 1957 the House Antitrust Subcommittee revisited the issue, once again recommending no change in the status quo.

Impacts of the Reserve Clause

Simon Rottenberg was the first economist to look seriously into professional baseball, with the publication of his classic 1956 article “The Baseball Players’ Labor Market.” His conclusion, not surprisingly, was that the reserve clause transferred wealth from the players to owners but had only a marginal impact on where the best players ended up. They would end up playing for the teams best positioned to exploit their talents for the benefit of paying customers, in other words, the biggest markets, primarily New York. Given the quality of the New York teams (one in Manhattan, one in the Bronx and one in Brooklyn) during the era of Rottenberg’s study, his conclusion seems rather obvious. During the decade preceding his study, the three New York teams consistently performed better than their rivals. The New York Yankees won eight of ten American League pennants, and the two National League New York entries won eight of ten NL pennants (six for the Brooklyn Dodgers, two for the New York Giants).

Foundation of the Major League Baseball Players Association

The current players organization, the Major League Baseball Players Association, was formed in 1954. It remained in the background, however, until the players hired Marvin Miller in 1966 to head the organization. Hiring Miller, a former negotiator for the United Steelworkers union, would turn out to be a stroke of genius. Miller began with a series of small gains for players, including increases in the minimum salary, pension contributions by owners, and limits on the maximum salary reduction owners could impose. The first test of the big item, the reserve clause, reached the Supreme Court in 1972.

Free Agency, Arbitration and the Reserve Clause

Curt Flood

Curt Flood, a star player for the St. Louis Cardinals, had been traded to the Philadelphia Phillies in 1970. Flood did not want to move from St. Louis, and informed both teams and the commissioner’s office that he did not intend to leave; he would play out his contract in St. Louis. Commissioner Bowie Kuhn ruled that Flood had no right to act in this way, and ordered him to play for Philadelphia or not play at all. Flood chose the latter and sued MLB for violation of antitrust laws. The case reached the Supreme Court in 1972, and the court sided with MLB in Flood v. Kuhn. The court acknowledged that the 1922 ruling that MLB was exempt from antitrust law was an anomaly and should be overturned, but it refused to overturn the decision itself, arguing instead that if Congress wanted to rectify the anomaly, it should do so. The court therefore stood pat, and the owners felt the case was settled permanently: the reserve clause had once again withstood legal challenge. They could not, however, have been more mistaken. While the reserve clause has never been overturned in a court of law, it would soon be drastically altered at the bargaining table, ultimately leading to a revolution in the way baseball talent is dispersed and revenues are shared in the professional sports industry.

Curt Flood lost the legal battle, but the players ultimately won the war, and are no longer restrained by the reserve clause beyond the first two years of their major league contract. In a series of labor market victories beginning in the wake of the Flood decision in 1972 and continuing through the rest of the century, the players won the right to free agency (i.e. to bargain with any team for their services) after six years of service, escalating pension contributions, salary arbitration (after two to three seasons, depending on their service time), individual contract negotiations with agent representatives, hearing committees for disciplinary actions, reductions in maximum salary cuts, increases in travel money and improved travel conditions, the right to have disputes between players and owners settled by an independent arbitrator, and a limit to the number of times their contract could be assigned to a minor league team. Of course the biggest victory was free agency.

Impact of Free Agency – Salary Gains

The right to bargain with other teams for their services changed the landscape of the industry dramatically. No longer were players shackled to one team forever, subject to the whims of the owner for their salary and status. Now they were free to bargain with any and all teams. The impact on salaries was incredible. The average salary skyrocketed from $45,000 in 1975 to $289,000 in 1983.

Table 4: Maximum and Average MLB Player Salaries by Decade

(real values in 2002 dollars)

Period   Highest salary, nominal / real   Year   Player (Team)   Average salary, nominal / real   Notes
1800s   $12,500 / $246,250   1892   King Kelly (Boston NL)   $3,054 / $60,163.80   22 observations
1900s   $10,000 / $190,000   1907   Honus Wagner (Pittsburgh Pirates)   $6,523 / $123,937.00   13 observations
1910s   $20,000 / $360,000   1913   Frank Chance (New York Yankees)   $2,307 / $41,526.00   339 observations
1920s   $80,000 / $717,600   1927   Ty Cobb (Philadelphia Athletics)   $6,992 / $72,017.60   340 observations
1930s   $84,098.33 / $899,852   1930   Babe Ruth (New York Yankees)   $7,748 / $82,903.60   210 observations
1940s   $100,000 / $755,000   1949   Joe DiMaggio (New York Yankees)   $11,197 / $84,537.35   Average salary calculated using 1949 and 1943 seasons plus 139 additional observations
1950s   $125,000 / $772,500   1959   Ted Williams (Boston Red Sox)   $12,340 / $76,261.20   Average salary estimate based on average of 1949 and 1964 salaries
1960s   $111,000 / $572,164.95   1968   Curt Flood (St. Louis Cardinals)   $18,568 / $95,711.34   624 observations
1970s   $561,500 / $1,656,215.28   1977   Mike Schmidt (Philadelphia Phillies)   $55,802 / $164,595.06   2208 observations
1980s   $2,766,666 / $4,006,895.59   1989   Orel Hershiser (Dodgers), Frank Viola (Twins)   $333,686 / $483,269.38   approx. 6500 observations
1990s   $11,949,794 / $12,905,777.52   1999   Albert Belle (Baltimore Orioles)   $1,160,548 / $1,253,391.84   approx. 7000 observations
2000s   $22,000,000 / $22,322,742.55   2001   Alex Rodriguez (Texas Rangers)   $2,165,627 / $2,197,397.00   2250 observations

Real values based on 2002 Consumer Price Index.

Over the long haul, the changes have been even more dramatic. The average salary increased from $45,000 in 1975 to $2.4 million in 2002, while the minimum salary increased from $6000 to $200,000 and the salary of the highest-paid player increased from $240,000 to $22 million. This is a 5200% increase in the average salary. Of course, not all of that increase is due to free agency. Revenues increased during this period by nearly 1800%, from an average of $6.4 million to $119 million, primarily due to the 2800% increase in television revenue over the same period. Ticket prices increased by 439% while attendance doubled (the number of MLB teams increased from 24 to 30).
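
The growth figures in the preceding paragraph are simple percentage changes on the nominal values quoted there. A minimal sketch of the arithmetic, using only the numbers from the paragraph above:

```python
# Percentage growth of selected MLB financial measures, 1975-2002,
# using the nominal figures quoted in the text.

def pct_increase(old, new):
    """Percentage increase from old to new."""
    return (new - old) / old * 100

figures = {
    "average salary": (45_000, 2_400_000),             # ~5,233%, quoted as 5200%
    "average team revenue": (6_400_000, 119_000_000),  # ~1,759%, quoted as nearly 1800%
}

for name, (old, new) in figures.items():
    print(f"{name}: {pct_increase(old, new):,.0f}% increase")
```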

Strikes and Lockouts

Miller organized the players and unified them as no one had done before. The first test of their resolve came in 1972, when the owners refused to bargain on pension and salary issues. The players responded by going out on the first league-wide strike in American professional sports history. The strike began during spring training and carried on into the season; the owners finally conceded in early April, after nearly 100 games had been lost to the strike. The labor stoppage became the favorite weapon of the players, who would employ it again in 1981, 1985, and 1994. The 1994 strike cancelled the World Series for the first time since 1904 and carried over into the 1995 season. The owners preempted strikes in two other labor disputes, locking out the players in 1976 and 1989. After each work stoppage the players won the concessions they demanded and fended off attempts by owners to reverse previous player gains, particularly in the areas of free agency and arbitration. From the first strike in 1972 through 1994, every time the labor agreement between the two sides expired, a work stoppage ensued. That pattern was broken in August 2002, when the two sides agreed to a new labor contract without a work stoppage for the first time.

Catfish Hunter

The first player to become a free agent did so due to a technicality. In 1974 Catfish Hunter, a pitcher for the Oakland Athletics, negotiated a contract with the owner, Charles Finley, which required Finley to make a payment into a trust fund for Hunter on a certain date. When Finley missed the date and then tried to pay Hunter directly instead of honoring the clause, Hunter and Miller filed a complaint charging that the contract should be null and void because Finley had broken it. The case went to an arbitrator, who sided with Hunter and voided the contract, making Hunter a free agent. In a bidding frenzy, Hunter ultimately signed what was then a record contract with the New York Yankees. It set precedents both for its length, five years guaranteed, and for its annual salary of $750,000. Prior to the dawning of free agency, it was rare for a player to get anything more than a one-year contract, and a guaranteed contract was virtually unheard of. If a player was injured or fell off in performance, an owner would slash his salary or release him and vacate the remainder of his contract.

The End of the Reserve Clause – Messersmith and McNally

The first real test of the reserve clause came in 1975 when, on the advice of Miller, Andy Messersmith played the season without signing a contract. Dave McNally also refused to sign a contract, though he had unofficially retired at the time. Up to this time the reserve clause had meant that a team could renew a player’s contract at its discretion; the only change in the clause since 1879 had been in the maximum amount by which an owner could reduce a player’s salary. To test the clause, which allowed teams to maintain contractual rights to players in perpetuity, Messersmith and McNally refused to sign contracts, and their teams automatically renewed their contracts from the previous season, per the reserve clause. The players’ argument was that if no contract had been signed, then there was no reserve clause, and Messersmith and McNally would be free to negotiate with any team at the end of the season. Arbitrator Peter Seitz struck down the reserve clause on December 23, 1975, clearing the way for players to become free agents and sell their services to the highest bidder. Messersmith and McNally thus became the first players to challenge the reserve clause and successfully escape it. The baseball labor market changed permanently and dramatically in favor of the players, and has never turned back.

Current Labor Arrangements

The baseball labor market as it exists today is a result of bargaining between owners and players. Owners ultimately conceded the reserve clause and negotiated a short period of exclusivity for a team with a player. The argument they put forward was that the cost of developing players was so high, they needed a window of time when they could recoup those investments. The existing situation allows them six years. A player is bound to his original team for the first six years of his MLB contract, after which he can become a free agent – though some players bargain away that right by signing long-term contracts before the end of their sixth year.

During that six-year period, however, players are not bound to the salary whims of the owners. The minimum salary will rise to $300,000 in 2003, there is a 10% maximum salary cut from one year to the next, and after two seasons players are eligible to have their contract decided by an independent arbitrator if they cannot come to an agreement with the team.

Arbitration

After their successful strike in 1972, the players had substantially increased their bargaining position. The next year they claimed a major victory when the owners agreed to a system of salary arbitration for players who did not yet qualify for free agency. Arbitration, won by the players in 1973, has since proved to be one of the costliest concessions the owners ever made. It requires each side to submit a final offer to an arbitrator, who must choose one or the other; the arbitrator may not compromise between the two offers. Once the arbitrator has chosen, both sides are obligated to accept that contract.

Once eligible for arbitration, a player, while not a free agent, does stand to reap a financial windfall. If a player and owner (realistically, a player’s agent and the owner’s agent, the general manager) cannot agree on a contract, either side may file for arbitration. If the other side does not agree to go to arbitration, then the player becomes a free agent and may bargain with any team. If arbitration is accepted, then both sides are bound to accept the contract awarded by the arbitrator. In practice, most of these contracts are settled before they reach the arbitrator: a player will file for arbitration, both sides will submit their final contract offers, and the two sides will then usually settle somewhere in between those offers. If they do not settle, the arbitrator must hear the case and make a decision. Both sides argue their point, which essentially boils down to comparing the player to other players in the league and their salaries. The arbitrator then decides which of the two final offers is closer to the market value for that player, and picks that one.
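
The decision rule just described, awarding whichever final offer is closer to the player’s estimated market value, can be stated in a few lines of code. The sketch below is a stylized illustration only; the market-value estimate is reduced to a simple average of comparable players’ salaries, and all of the figures are hypothetical.

```python
# Stylized final-offer ("baseball") arbitration, per the rule described in the text:
# the arbitrator must pick either the player's or the team's final offer, with no
# compromise, choosing whichever is closer to the player's estimated market value.

def estimate_market_value(comparable_salaries):
    """Crude market-value estimate: the average salary of comparable players."""
    return sum(comparable_salaries) / len(comparable_salaries)

def arbitrate(player_offer, team_offer, comparable_salaries):
    market_value = estimate_market_value(comparable_salaries)
    if abs(player_offer - market_value) <= abs(team_offer - market_value):
        return "player", player_offer
    return "team", team_offer

# Hypothetical case: the player asks for $4.2 million, the team offers $2.8 million,
# and three comparable players earn $3.0, $3.6 and $4.0 million.
winner, award = arbitrate(4_200_000, 2_800_000, [3_000_000, 3_600_000, 4_000_000])
print(winner, f"${award:,.0f}")  # market value ~ $3.53M; the player's ask is closer and wins
```

Because the arbitrator cannot split the difference, each side has an incentive to submit a moderate final offer, which helps explain why most filed cases settle before a hearing.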

Collusion under Ueberroth

The owners, used to nearly a century of one-sided labor negotiations, quickly grew tired of the new economics of the player labor market. They went through a series of labor negotiators, each one faring as poorly as the next, until they hit upon a different solution. Beginning in 1986, under the guidance of commissioner Peter Ueberroth, they tried collusion to stem the increase in player salaries. Teams agreed not to bid on one another’s free agents. The strategy worked, for a while. During the next two seasons player salaries grew at lower rates, and high-profile free agents routinely had difficulty finding anybody interested in their services. The players filed a complaint, charging the owners with a violation of the labor agreement signed by owners and players in 1981, which prohibited collusive action. They filed separate collusion charges for each of the three seasons from 1985 to 1987, and won each time. The rulings resulted in the voiding of the final years of some players’ contracts, thus awarding them “second look” free agency status, and levied fines in excess of $280 million on the owners. The result was a return to unfettered free agency for the players, a massive financial windfall for the affected players, a black eye for the owners, and the end of the line for Commissioner Ueberroth.

Table 5: Average MLB Payroll as a Percentage of Total Team Revenues for Selected Years

Year Percentage
1929 35.3
1933 35.9
1939 32.4
1943 24.8
1946 22.1
1950 17.6
1974 20.5
1977 25.1
1980 39.1
1985 39.7
1988 34.2
1989 31.6
1990 33.4
1991 42.9
1992 50.7
1994 60.5
1997 53.6
2001 54.1

Exploitation Patterns

Economist Andrew Zimbalist calculated the degree of market exploitation of baseball players for the years 1986-89, a decade after free agency began and during the years of collusion, using a measure of the marginal revenue product of players. The marginal revenue product (MRP) of a player is the additional revenue a team receives due to the addition of that player to the team. It is estimated by calculating the impact of the player on the performance of the team, and the subsequent impact of team performance on total revenue. Zimbalist found that, on average, the degree of exploitation, as measured by the ratio of marginal revenue product to salary, declined each year, from 1.32 in 1986 to 1.01 in 1989. The degree of exploitation, however, was not uniform across players. Not surprisingly, it decreased as players obtained the leverage to bargain. Exploitation was highest for players in their first two years, before they were arbitration eligible; it fell for players in the two-to-five year category, between arbitration and free agency; and it disappeared altogether for players with six or more years of experience. In fact, Zimbalist found that this last group of players was overpaid in all four years, with an average MRP of less than 75% of salary by 1989. No similar study has been done for players before free agency, in part due to the paucity of salary data from before that time.
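
The two-step logic described above, from player performance to team performance to team revenue, can be sketched briefly. This is only a stylized illustration of the idea behind an MRP calculation, not Zimbalist’s actual estimation procedure, and every coefficient and salary below is invented for the example.

```python
# Stylized marginal revenue product (MRP) calculation:
#   step 1: the player's contribution to team performance (extra wins),
#   step 2: the revenue value of those extra wins,
# then compare MRP to salary; a ratio above 1 indicates "exploitation."

def marginal_revenue_product(extra_wins, revenue_per_win):
    """Revenue attributable to the player's effect on team performance."""
    return extra_wins * revenue_per_win

def exploitation_ratio(mrp, salary):
    """Greater than 1: paid less than MRP. Less than 1: overpaid."""
    return mrp / salary

# Three hypothetical players of equal ability but different bargaining leverage.
players = [
    ("pre-arbitration player", 3.0, 2_000_000, 250_000),       # extra wins, $/win, salary
    ("arbitration-eligible player", 3.0, 2_000_000, 3_000_000),
    ("free agent", 3.0, 2_000_000, 7_000_000),
]

for name, wins, rev_per_win, salary in players:
    mrp = marginal_revenue_product(wins, rev_per_win)
    ratio = exploitation_ratio(mrp, salary)
    print(f"{name}: MRP ${mrp:,.0f}, salary ${salary:,.0f}, MRP/salary = {ratio:.2f}")
```

With identical on-field contributions, only the bargaining environment differs, and the ratios fall from well above one to below one, mirroring the pattern Zimbalist found across service-time classes.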

Negotiations under the Reserve Clause

Player contracts have changed dramatically since free agency. Players used to be subject to whatever salary the owner offered. The only recourse for a player was to hold out for a better salary. This strategy seldom worked, because the owner had great influence on the media and usually was able to turn the public against the player, adding another source of pressure on the player to sign for the terms offered by the team. The pressure of no payday (a payday that, while less than the player’s MRP, still exceeded his opportunity cost by a fair amount) was usually sufficient to keep most holdouts short. The owner influenced the media because sports reporters were actually paid by the teams in cash or in kind, traveled with them, and enjoyed a relatively luxurious lifestyle for their chosen occupation, one that could be halted by edict of the team at any time. The team controlled media passes and access, and therefore had nearly total control over who covered the team. It was a comfortable lifestyle for a reporter, and spreading owner propaganda on occasion was seldom seen as an unacceptable price to pay.

Recent Concerns

The major labor issue in the game has shifted from player exploitation, the cry until free agency was granted, to competitive imbalance. Today, critics of the salary structure point to its impact on the competitive balance of the league as a way of criticizing rising payrolls. Many fans of the game openly pine for a return to “the good old days,” when players played for the love of the game. It should be recognized, however, that the game has always been a business. All that has changed is the amount of money at stake and how it is divided between the employers and their employees.

Suggested Readings

A Club Owner. “The Baseball Trust.” Literary Digest, December 7, 1912.

Burk, Robert F. Much More Than a Game: Players, Owners, and American Baseball since 1921. Chapel Hill: University of North Carolina Press, 2001.

Burk, Robert F. Never Just a Game: Players, Owners, and American Baseball to 1920. Chapel Hill: University of North Carolina Press, 1994.

Dworkin, James B. Owners versus Players: Baseball and Collective Bargaining. Dover, MA: Auburn House, 1981.

Haupert, Michael. Baseball financial database.

Haupert, Michael and Ken Winter. “Pay Ball: Estimating the Profitability of the New York Yankees 1915-37.” Essays in Economic and Business History 21 (2002).

Helyar, John. Lords of the Realm: The Real History of Baseball. New York: Villard Books, 1994.

Korr, Charles. The End of Baseball as We Knew It: The Players Union, 1960-1981. Champaign: University of Illinois Press, 2002.

Kuhn, Bowie. Hardball: The Education of a Baseball Commissioner. New York: Times Books, 1987.

Lehn, Ken. “Property Rights, Risk Sharing, and Player Disability in Major League Baseball.” Journal of Law and Economics 25, no. 2 (October 1982): 273-79.

Lowe, Stephen. The Kid on the Sandlot: Congress and Professional Sports, 1910-1992. Bowling Green: Bowling Green University Press, 1995.

Lowenfish, Lee. “A Tale of Many Cities: The Westward Expansion of Major League Baseball in the 1950s.” Journal of the West 17 (July 1978).

Lowenfish, Lee. “What Were They Really Worth?” The Baseball Research Journal 20 (1991): 81-2.

Lowenfish, Lee. The Imperfect Diamond: A History of Baseball’s Labor Wars. New York: Da Capo Press, 1980.

Miller, Marvin. A Whole Different Ball Game: The Sport and Business of Baseball. New York: Birch Lane Press, 1991.

Noll, Roger G. and Andrew S. Zimbalist, editors. Sports Jobs and Taxes: Economic Impact of Sports Teams and Facilities. Washington, D.C.: Brookings Institution, 1997.

Noll, Roger, editor. Government and the Sports Business. Washington, D.C.: Brookings Institution, 1974.

Okkonen, Mark. The Federal League of 1914-1915: Baseball’s Third Major League. Cleveland: Society for American Baseball Research, 1989.

Orenstein, Joshua B. “The Union Association of 1884: A Glorious Failure.” The Baseball Research Journal 19 (1990): 3-6.

Pearson, Daniel M. Baseball in 1889: Players v Owners. Bowling Green, OH: Bowling Green State University Popular Press, 1993.

Quirk, James. “An Economic Analysis of Team Movements in Professional Sports.” Law and Contemporary Problems 38 (Winter-Spring 1973): 42-66.

Rottenberg, Simon. “The Baseball Players’ Labor Market.” Journal of Political Economy 64, no. 3 (December 1956): 242-60.

Scully, Gerald. The Business of Major League Baseball. Chicago: University of Chicago Press, 1989.

Sherony, Keith, Michael Haupert and Glenn Knowles. “Competitive Balance in Major League Baseball: Back to the Future.” Nine: A Journal of Baseball History & Culture 9, no. 2 (Spring 2001): 225-36.

Sommers, Paul M., editor. Diamonds Are Forever: The Business of Baseball. Washington, D.C.: Brookings Institution, 1992.

Sullivan, Neil J. The Diamond in the Bronx: Yankee Stadium and the Politics of New York. New York: Oxford University Press, 2001.

Sullivan, Neil J. The Diamond Revolution. New York: St. Martin’s Press, 1992.

Sullivan, Neil J. The Dodgers Move West. New York: Oxford University Press, 1987.

Thorn, John and Peter Palmer, editors. Total Baseball. New York: HarperPerennial, 1993.

Voigt, David Q. The League That Failed. Lanham, MD: Scarecrow Press, 1998.

White, G. Edward. Creating the National Pastime: Baseball Transforms Itself, 1903-1953. Princeton: Princeton University Press, 1996.

Wood, Allan. 1918: Babe Ruth and the World Champion Boston Red Sox. New York: Writers Club Press, 2000.

Zimbalist, Andrew. Baseball and Billions. New York: Basic Books, 1992.

Zingg, Paul. “Bitter Victory: The World Series of 1918: A Case Study in Major League Labor-Management Relations.” Nine: A Journal of Baseball History and Social Policy Perspectives 1, no. 2 (Spring 1993): 121-41.

Zweig, Jason. “Wild Pitch: How American Investors Financed the Growth of Baseball.” Friends of Financial History 43 (Summer 1991).

Citation: Haupert, Michael. “The Economic History of Major League Baseball”. EH.Net Encyclopedia, edited by Robert Whaples. December 3, 2007. URL http://eh.net/encyclopedia/the-economic-history-of-major-league-baseball/

A History of the Bar Code

Stephen A. Brown, Uniform Code Council

Beginnings of the Bar Code

In 1949, a young graduate student was wrestling with the concept of automatically capturing information about a product. He believed that the dots and dashes of Morse code would be a good model, but he could not figure out how to use those familiar patterns to solve his problem. Then, one day as he relaxed at the beach, he idly drew dots and dashes in the sand. As his fingers elongated the dashes he looked at the result and said, “Hey, I’ve got it.”

Three years later, that graduate student, Joseph Woodland, and his partner received a patent on what began as lines in the sand, and the linear bar code was born. Much to the inventor’s surprise, however, it was not a rapid commercial success. Fifteen years were to pass before the first commercial use of the bar code, and that use was not a particularly successful one.

Bar codes were placed on the sides of railroad freight cars. As the freight car rolled past a trackside scanner, it was to be identified and, inferentially, its destination and cargo. The system failed, however, to take into account that freight cars bounced as they passed the scanner. Consequently, the accuracy of the scanning was poor.

The Technology of the Bar Code

A linear bar code is a binary code (1s and 0s). The lines and spaces are of varying thicknesses and printed in different combinations. To be scanned, there must be accurate printing and adequate contrast between the bars and spaces. Scanners employ various technologies to “read” codes; the two most common are lasers and cameras. Scanners may be fixed-position, like most supermarket checkout scanners, or hand-held devices, often used for taking inventories. There should be (but typically is not) a distinction drawn between the code, which is a structure for the conveyance of data, and the symbol, the machine-readable representation of the code. The code is text, which can be translated into a multiplicity of languages (English, French, Japanese) or into a machine-readable symbol.
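
To make the code/symbol distinction concrete, the sketch below turns a short string of digits (the code) into a sequence of bar and space widths (a symbol). The two-width encoding table is invented purely for illustration; it is not the actual U.P.C., Code 39, or any other standardized pattern set.

```python
# Toy illustration of the difference between a code (the data) and a symbol
# (its machine-readable representation as bars and spaces). The pattern table
# below is made up for this sketch; real symbologies define their own patterns.

TOY_PATTERNS = {
    "0": "1121", "1": "1211", "2": "2111", "3": "1112", "4": "2211",
    "5": "1221", "6": "1122", "7": "2121", "8": "2112", "9": "1212",
}  # alternating bar/space widths: 1 = narrow, 2 = wide

def encode(code: str) -> str:
    """Translate a digit string (the code) into bar/space widths (the symbol)."""
    return " ".join(TOY_PATTERNS[digit] for digit in code)

print(encode("04211"))  # the same code could just as well be printed or spoken as text
```

A scanner does the reverse: it measures the widths, recovers the digits, and hands the code, not the symbol, to the store’s computer.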

Notwithstanding its inauspicious beginning, the bar code has become a remarkable success, a workhorse in many and varied applications. One of the first successful bar codes, Code 39, developed by Dr. David Allais, is widely used in logistical and defense applications. Code 39 is still in use today, although it is less sophisticated than some of the newer bar codes. Code 128 and Interleaved 2 of 5 are other codes that attained some success in niche markets.

Bar Codes Are Now Everywhere

Today, bar codes are everywhere. Rental car companies keep track of their fleets by means of bar codes on the car bumper. Airlines track passenger luggage, reducing the chance of loss (believe it or not). Researchers have placed tiny bar codes on individual bees to track the insects’ mating habits. NASA relies on bar codes to monitor the thousands of heat tiles that need to be replaced after every space shuttle trip, and the movement of nuclear waste is tracked with a bar-code inventory system. Bar codes even appear on humans! Fashion designers stamp bar codes on their models to help coordinate fashion shows. (The codes store information about what outfits each model should be wearing and when they are due on the runway.) In the late 1990s in Tokyo, there was a fad among high school girls for temporary tattoos shaped like bar codes.

The Universal Product Code

The best-known and most widespread use of bar codes has been on consumer products. The Universal Product Code, or U.P.C., is unique because the user community developed it. Most technological innovations are first invented and then a need is found for the invention. The U.P.C. is a response to a business need first identified by the US grocery industry in the early 1970s.

Believing that automating the grocery checkout process could reduce labor costs, improve inventory control, speed up the process, and improve customer service, six industry associations, representing both product manufacturers and supermarkets, created an industry-wide committee of industry leaders. Their two-year effort resulted in the announcement of the Universal Product Code and the U.P.C. bar code symbol on April 1, 1973. The U.P.C. made its first commercial appearance on a package of Wrigley’s gum sold in Marsh’s Supermarket in Troy, Ohio in June 1974.

Economic studies conducted for the grocery industry committee projected over $40 million in savings to the industry from scanning by the mid-1970s. Those numbers were not achieved in that time frame, and there were those who predicted the demise of bar code scanning. The usefulness of the bar code required the adoption of expensive scanners by a critical mass of retailers while manufacturers simultaneously adopted bar code labels. Neither wanted to move first, and results looked unpromising for the first couple of years, with Business Week eulogizing “The Supermarket Scanner That Failed.”

Economic Impact of the U.P.C.

As scanning spread, however, the $40 million projection began to look very small. A 1999 analysis by Price Waterhouse Coopers estimated that the U.P.C. represents $17 billion in savings to the grocery industry annually. Even more astounding, the study concluded that the industry had not yet taken advantage of billions of dollars of additional potential savings that could be derived from maximizing the use of the U.P.C.

The big winners – as one should have expected given the competitive nature of the markets involved – were consumers, since U.P.C. scanning generated efficiencies and productivity improvements that led to lower costs and/or greater customer service. Ironically, consumer advocates initially resisted the innovation and jeopardized its success by insisting that retailers forego substantial cost savings by continuing to mark prices on individual units. While the rise of bar coding benefited both manufacturers and retailers, it was the retailer who benefited the most. In addition to the labor savings, retailers now had access to detailed product movement data, which they turned into a profit center by selling the data to their suppliers.

Current Level of Use

The developers of the U.P.C. believed that there would be fewer than 10,000 companies, almost all in the US grocery industry, who would ever use the U.P.C. Today, there are over one million companies in more than 100 countries in over twenty different industry sectors enjoying the benefits of scanning, thanks to the U.P.C. U.P.C. symbols are everywhere in the retail environment. They can also be found in industries as diverse as construction, utilities, and cosmetics. Today, the U.P.C. is also spreading up the supply chain to use by the suppliers of raw materials. At the dawn of the twenty-first century, the Uniform Code Council, Inc., the administrator of the U.P.C., could say with confidence that the U.P.C. symbol was being scanned over five billion times a day.

But innovation is dynamic. The linear bar code continues to evolve. Today, there are two-dimensional bar codes such as PDF 417 and MaxiCode capable of incorporating the Gettysburg Address in a symbol one-quarter of an inch square. RSS and Composite symbologies will enable the bar code identification of very small items such as individual pills or a single strawberry.

Future Uses

The future of automatic identification, however, probably lies in radio frequency identification (RFID). Tiny transmitters embedded in items do not require a line of sight to the scanner, nor are they subject to degradation by exposure. RFID is already in use in retail stores to help prevent shoplifting and on toll roads to speed traffic; the primary deterrent to wider use has been the cost of the silicon chips required. Today the five-cent chip is close at hand. If the cost can be reduced to less than one cent a chip, in the future your breakfast cereal box will be a radio transmitter.

© Stephen A. Brown, April 2001

References

Books

Brown, Stephen A. Revolution at the Checkout Counter: The Explosion of the Bar Code. Cambridge: Harvard University Press, 1997

Harmon, Craig K. Lines of Communication. Peterborough: Helmers Publishing, 1994

Nelson, Benjamin. Punchcards to Barcodes: A 200-Year Journey. Peterborough: Helmers Publishing, 1997

Other

Collins, Jim. “A Quick Scan on Bar Codes,” Attache, January 1998

Green, Alan. “Big Brother is Scanning You,” Regardies, December 1990

Leibowitz, Ed. “Bar Codes: Reading Between the Lines.” Smithsonian, February 1999.

Price Waterhouse Coopers, “17 Billion Reasons to Say Thanks: The 25th Anniversary of the U.P.C. and Its Impact on the Grocery Industry.” 1999

See also the Uniform Code Council, Inc.’s homepage: http://www.uc-council.org/

Citation: Brown, Stephen. “A History of the Bar Code”. EH.Net Encyclopedia, edited by Robert Whaples. August 14, 2001. URL http://eh.net/encyclopedia/a-history-of-the-bar-code/

Bankruptcy Law in the United States

Bradley Hansen, Mary Washington College

Since 1996 over a million people a year have filed for bankruptcy in the United States. Most seek a discharge of debts in exchange for having their assets liquidated for the benefit of their creditors. The rest seek the assistance of bankruptcy courts in working out arrangements with their creditors. The law has not always been so kind to insolvent debtors. Throughout most of the nineteenth century there was no bankruptcy law in the United States, and most debtors found it impossible to receive a discharge from their debts. Early in the century debtors could have expected even harsher treatment, such as imprisonment for debt.

Table 1. Chronology of Bankruptcy Law in The United States, 1789-1978

Date Event
1789 The Constitution empowers Congress to enact uniform laws on the subject of bankruptcy.
1800 First bankruptcy law is enacted. The law allows only for involuntary bankruptcy of traders.
1803 First bankruptcy law is repealed amid complaints of excessive expenses and corruption.
1841 Second bankruptcy law is enacted in the wake of the Panics of 1837 and 1839. The law allows both voluntary and involuntary bankruptcy.
1843 1841 Bankruptcy Act is repealed, amid complaints about expenses and corruption.
1867 Prompted by demands arising from financial failures during the Panic of 1857 and the Civil War, Congress enacts the third bankruptcy law.
1874 The 1867 Bankruptcy Act is amended to allow for compositions.
1878 The 1867 Bankruptcy Law is repealed.
1881 The National Convention of Boards of Trade is formed to lobby for bankruptcy legislation.
1889 The National Convention of Representatives of Commercial Bodies is formed to lobby for bankruptcy legislation. The president of the Convention, Jay L. Torrey, drafts a bankruptcy bill.
1898 Congress passes a bankruptcy bill based on the Torrey bill.
1933-34 The 1898 Bankruptcy Act is amended to include railroad reorganization, corporate reorganization, and individual debtor arrangements.
1938 The Chandler Act amends the 1898 Bankruptcy Act, creating a menu of options for both business and non-business debtors.
1978 The 1898 Bankruptcy Act is replaced by The Bankruptcy Reform Act.

To say that there was no bankruptcy law in the United States for most of the nineteenth century is not to say that there were no laws governing insolvency or the collection of debts. Americans have always relied on credit and have always had laws governing the collection of debts. Debtor-creditor laws and their enforcement are important because they influence the supply and demand for credit. Laws that do not encourage the repayment of debts increase risk for creditors and reduce the supply of credit. On the other hand, laws that are too strict also have costs. Strict laws such as imprisonment for debt can discourage entrepreneurs from experimenting. Many of America’s most famous entrepreneurs, such as Henry Ford, failed at least once before making their fortunes.

Over the last two hundred years the United States has shifted from a legal regime that was primarily directed at the strict enforcement of debt contracts to one that provides numerous means to alter the terms of debt contracts. As the economy developed, groups of people became convinced that strict enforcement of credit contracts was unfair, inefficient, contrary to the public interest, or simply not in their own self-interest. Periodic financial crises in the nineteenth century generated demands for bankruptcy laws to discharge debts. They also led to the introduction of voluntary bankruptcy and the extension of the right to file for bankruptcy to all individuals. The expansion of interstate commerce in the late nineteenth century led to demands for a uniform and efficient bankruptcy law throughout the United States. The rise of railroads gave rise to a demand for corporate reorganization. The expansion of consumer credit in the twentieth century and the rise in consumer bankruptcy cases led to the introduction of arrangements into bankruptcy law, and continue to fuel demands for revision of bankruptcy law today.

Origins of American Bankruptcy Law

Like much of American law, the origins of both state laws for the collection of debt and federal bankruptcy law can be found in England. State laws are, in general, derived from common law procedures for the collection of debt. Under the common law a variety of procedures evolved to aid a creditor in collecting a debt. Generally, the creditor can obtain a judgment from a court for the amount that he is owed and then have a legal official seize some of the debtor’s property or wages to satisfy this judgment. In the past a defaulting debtor could also be placed in prison to coerce repayment. Bankruptcy law does not replace other collection laws but does supersede them. Creditors still use procedures such as garnishing a debtor’s wages, but if the debtor or another creditor files for bankruptcy such collection efforts are stopped.

Under the U.S. Constitution, adopted in 1789, bankruptcy law became a federal matter in the United States. Two clauses of the Constitution influenced the evolution of bankruptcy law. First, in Article One, Section Eight, Congress was empowered to enact uniform laws on the subject of bankruptcy. Second, the Contract Clause prohibited states from passing laws that impair the obligation of contracts. Courts have generally interpreted these clauses so as to give wide latitude to the federal government to alter the obligations of debt contracts while restricting state governments. States, however, are not completely barred from altering the terms of contracts. In its 1827 decision in Ogden v. Saunders the Supreme Court declared that states could pass laws granting a discharge for debts incurred after the law was passed; however, a state discharge cannot be binding on creditors who are citizens of other states.

The evolution of bankruptcy law in the United States can be divided into two periods. In the first period, which encompasses most of the nineteenth century, Congress enacted three laws in the wake of financial crises. In each case the law was repealed within a few years amid complaints of high costs and corruption. The second period begins in 1881 when associations of merchants and manufacturers banded together to form a national association to lobby for a federal bankruptcy law. In contrast to previous demands for bankruptcy law, which were prompted largely by crises, late nineteenth century demands for bankruptcy law were for a permanent law suited to the needs of a commercial nation. In 1898 the Act to Establish a Uniform System of Bankruptcy was enacted and the United States has had a bankruptcy law ever since.

The Temporary Bankruptcy Acts of 1800, 1841 and 1867

Congress first exercised its power to enact uniform laws on bankruptcy in 1800. The debates in the Annals of Congress are brief but suggest that the demand for the law arose from individuals who were in financial distress. The law was modeled after the English bankruptcy law of the time. The law applied only to traders. Creditors could file a bankruptcy petition against a debtor, the debtor’s assets would be divided on a pro rata basis among his creditors, and the debtor would receive a discharge. Although debtors could not file a voluntary bankruptcy petition, it was generally believed that many debtors asked a friendly creditor to petition them into the bankruptcy court so that they could obtain a discharge. The law was intended to remain in effect for five years. Complaints that the law was expensive to administer, that it was difficult and costly to travel to federal courts, and that the law provided opportunities for fraud led to its repeal after only two years. Similar complaints were to follow the passage of subsequent bankruptcy laws.

Bankruptcy law largely disappeared from national politics until the Panic of 1839. A few petitions and memorials were sent to Congress in the wake of the Panic of 1819, but no law was passed. The Panic of 1839 and the recession that followed it brought forward a flood of petitions and memorials for bankruptcy legislation. Memorials typically declared that many business people had been brought to ruin by economic conditions beyond their control, not through any fault of their own. In the wake of the Panic, Whigs made the attack on Democratic economic policies and the passage of bankruptcy relief central parts of their platform. After gaining control of Congress and the Presidency, the Whigs pushed through the 1841 Bankruptcy Act. The law went into effect February 2, 1842.

Like its predecessor, the Bankruptcy Act of 1841 was short-lived. The law was repealed March 3, 1843. The rapid about-face on bankruptcy was the result of the collapse of a bargain between Northern and Southern Whigs. Democrats overwhelmingly opposed the passage of the Act and supported its repeal. Southern Whigs also generally opposed a federal bankruptcy law. Northern Whigs appear to have obtained the Southern Whigs’ votes for passage by agreeing to distribute the proceeds from the sales of federal lands to the states. A majority of Southern Whigs voted for passage but then reversed their votes the next year. Despite its short life, over 41,000 petitions for bankruptcy, most of them voluntary, were filed under the 1841 law.

The primary innovations of the Bankruptcy Act of 1841 were the introduction of voluntary bankruptcy and the widening of the scope of occupations that could use the law. With the introduction of voluntary bankruptcy, debtors no longer had to resort to the assistance of a friendly creditor. Unlike the previous law in which only traders could become bankrupts, under the 1841 Act traders, bankers, brokers, factors, underwriters, and marine insurers could be made involuntary bankrupts and any person could apply for voluntary bankruptcy.

After repeal of the Bankruptcy Act of 1841, the subject of bankruptcy again disappeared from congressional consideration until the Panic of 1857, when appeals for a bankruptcy law resurfaced. The financial distress caused to Northern merchants by the Civil War further fueled demands for bankruptcy legislation. Though demands for a bankruptcy law persisted throughout the War, considerable opposition also existed to passing a law before the War was over. In the first Congress after the end of the War, the Bankruptcy Act of 1867 was enacted. The 1867 Act was amended several times and lasted longer than its predecessors. An 1874 amendment added compositions to bankruptcy law for the first time. Under the composition provision a debtor could offer a plan to distribute his assets among his creditors to settle the case. Again, complaints of excessive fees and expenses led to the repeal of the Bankruptcy Act in 1878. Table 2 shows the number of petitions filed under the 1867 law between 1867 and 1872.

Table 2. Bankruptcy Petitions, 1867-1872

Year Petitions
1867 7,345
1868 29,539
1869 5,921
1870 4,301
1871 5,438
1872 6,074

Source: Expenses of Proceedings in Bankruptcy In United States Courts. Senate Executive Document 19 (43-1) 1580.

During the first three quarters of the nineteenth century the demand for bankruptcy legislation rose with financial panics and fell as they passed. Many people came to believe that the forces that brought people to insolvency were often beyond their control and that to give them a fresh start was not only fair but in the best interest of society. Burdened with debts they had no hope of paying, debtors had no incentive to be productive, since creditors would take anything they earned. Freed from these debts they could once again become productive members of society. The spread of the belief that debtors should not be subjected to the harshest elements of debt collection law can also be seen in numerous state laws enacted during the nineteenth century. Homestead and exemption laws declared property that creditors could not take. Stay and moratoria laws were passed during recessions to stall collection efforts. Over the course of the nineteenth century, states also abolished imprisonment for debt.

Demand For A Permanent Bankruptcy Law

The repeal of the 1867 Bankruptcy Act was followed almost immediately by a well-organized movement to obtain a new bankruptcy law. A national campaign by merchants and manufacturers to obtain bankruptcy legislation began in 1881 when the New York Board of Trade and Transportation organized a National Convention of Boards of Trade. The participants at the Convention endorsed a bankruptcy bill prepared by John Lowell, a judge from Massachusetts. They continued to lobby for the bill throughout the 1880s.

After failing to obtain passage of the Lowell bill, associations of merchants and manufacturers met again in 1889. Under the name of The National Convention of Representatives of Commercial Bodies they held meetings in St. Louis and in Minneapolis. The president of the Convention, a lawyer and businessman named Jay Torrey, drafted a bill that the Convention lobbied for throughout the 1890s. The bill allowed both voluntary and involuntary petitions, though wage earners and farmers could not be made involuntary bankrupts. The bill was primarily directed at liquidation but did include a provision for composition. A composition had to be approved by a majority of creditors in both number and value. In a compromise with states’ rights advocates, the bill declared that exemptions would be determined by the states.

The merchants and manufacturers who organized the conventions provided credit to their customers whenever they delivered goods in advance of payment. They were troubled by three features of state debtor-creditor laws. First, the details of collection laws varied from state to state, forcing them to learn the laws in all the states in which they wished to sell goods. Second, many state laws discriminated against foreign creditors, that is, creditors who were not citizens of the state. Third, many of the state laws provided for a first-come, first-served distribution of assets rather than a pro rata division. Under the first-come, first-served rule, the first creditor to go to court could claim all the assets necessary to pay his debts, leaving the last to receive nothing. The rule thus created incentives for creditors to race to be the first to file a claim. The effect of this rule was described by Jay Torrey: “If a creditor suspects his debtor is in financial trouble, he usually commences an attachment suit, and as a result the debtor is thrown into liquidation irrespective of whether he is solvent or insolvent. This course is ordinarily imperative because if he does not pursue that course some other creditor will.” Thus the law could actually precipitate business failures. As interstate commerce expanded in the late nineteenth century, more merchants and manufacturers experienced these three problems.
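The gap between the two distribution rules is easy to see with a small numerical illustration. The sketch below uses purely hypothetical claim amounts (no actual case is being reproduced) to compare how a failed debtor's remaining assets would be split under a first-come, first-served rule and under the pro rata rule that the convention delegates favored.

```python
# Hypothetical illustration: $10,000 of remaining assets divided among
# three creditors owed $8,000, $6,000, and $6,000 (listed in filing order).
# The claim amounts are invented for illustration only.

assets = 10_000
claims = [("Creditor A", 8_000), ("Creditor B", 6_000), ("Creditor C", 6_000)]

# First-come, first-served: each creditor, in filing order, is paid in full
# until the assets run out; later creditors may get nothing.
remaining = assets
fcfs = {}
for name, owed in claims:
    paid = min(owed, remaining)
    fcfs[name] = paid
    remaining -= paid

# Pro rata: every creditor receives the same fraction of what it is owed.
total_owed = sum(owed for _, owed in claims)
pro_rata = {name: round(assets * owed / total_owed) for name, owed in claims}

print(fcfs)      # {'Creditor A': 8000, 'Creditor B': 2000, 'Creditor C': 0}
print(pro_rata)  # {'Creditor A': 4000, 'Creditor B': 3000, 'Creditor C': 3000}
```

Under the first rule the creditor who wins the race to the courthouse is made whole while the slowest receives nothing, which is precisely the incentive to sue that Torrey described; under the pro rata rule every creditor collects fifty cents on the dollar and the race disappears.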

Merchants and manufacturers also found it easier to form a national organization in the late nineteenth century because of the growth of trade associations, boards of trade, chambers of commerce and other commercial organizations. By forming a national organization composed of businessmen’s associations from all over the country, merchants and manufacturers were able to act in unison in drafting and lobbying for a bankruptcy bill. The bill they drafted not only provided uniformity and a pro rata distribution, but was also designed to prevent the excessive fees and expenses that had been a major complaint against previous bankruptcy laws.

As early as 1884, the Republican Party supported the bankruptcy bills put forward by the merchants and manufacturers. A majority in both the Republican and Democratic parties supported bankruptcy legislation during the late nineteenth century. It nevertheless took nearly twenty years to enact legislation because the two parties supported different versions of bankruptcy law. The Democratic Party supported bills that were purely voluntary (creditors could not initiate proceedings) and temporary (the law would only remain in effect for a few years). The requirement that the law be temporary was crucial to Democrats because a vote for a permanent bankruptcy law would have been a vote for the expansion of federal power and against states’ rights, a central component of Democratic policy. Throughout the 1880s and 1890s, votes on bankruptcy split strictly along party lines. The majority of Republicans preferred the status quo to the Democrats’ bills and the majority of Democrats preferred the status quo to the Republican bills. Because control of Congress was split between the two parties for most of the last quarter of the nineteenth century, neither side could force through its version of bankruptcy law. This period of divided government ended with the 55th Congress, in which the Bankruptcy Act of 1898 was passed.

Railroad Receivership and the Origins of Corporate Reorganization

The 1898 Bankruptcy Act was designed to aid creditors in liquidation of an insolvent debtor’s assets, but one of the important features of current bankruptcy law is the provision for reorganization of insolvent corporations. To find the origins of corporate reorganization one has to look outside the early evolution of bankruptcy law and look instead at the evolution of receiverships for insolvent railroads. A receiver is an individual appointed by a court to take control of some property, but courts in the nineteenth century developed this tool as a means to reorganize troubled railroads. The first reorganization through receivership occurred in 1846, when a Georgia court appointed a receiver over the insolvent Munroe Railway Co. and successfully reorganized it as the Macon and Western Railway. In the last two decades of the nineteenth century the number of receiverships increased dramatically; see Table 3. In theory, courts were supposed to appoint an indifferent party as receiver, and the receiver was merely to conserve the railroad while the best means to liquidate it was ascertained. In fact, judges routinely appointed the president, vice-president or other officers of the insolvent railway and assigned them the task of getting the railroad back on its feet. The object of the receivership was typically a sale of the railroad as a whole. But the sale was at least partly a fiction. The sole bidder was usually a committee of the bondholders using their bonds as payment. Thus the receivership involved a financial reorganization of the firm in which the bond and stock holders of the railroad traded in their old securities for new ones. The task of the reorganizers was to find a plan acceptable to the bondholders. For example, in the Wabash receivership of 1886, first mortgage bondholders ultimately agreed to exchange their 7 percent bonds for new ones of 5 percent. The sale resulted in the creation of a new railroad with the assets of the old. Often the transformation was simply a matter of changing “Railway” to “Railroad” in the name of the corporation. Throughout the late nineteenth and early twentieth centuries judges denied other corporations the right to reorganize through receivership. They emphasized that railroads were special because of their importance to the public.

Unlike the credit supplied by merchants and manufacturers, much of the debt of railroads was secured. For example, bondholders might have a mortgage that said they could claim a specific line of track if the railroad failed to make its bond payments. If a railroad became insolvent different groups of bondholders might claim different parts of the railroad. Such piecemeal liquidation of a business presented two problems in the case of railroads. First, many people believed that piecemeal liquidation would destroy much of the value of the assets. In his 1859 Treatise on the Law of Railways, Isaac Redfield explained that, “The railway, like a complicated machine, consists of a great number of parts, the combined action of which is necessary to produce revenue.” Second, railroads were regarded as quasi-public corporations. They were given subsidies and special privileges. Their charters often stated that their corporate status had been granted in exchange for service to the public. Courts were reluctant to treat railroads like other enterprises when they became insolvent and instead used receivership proceedings to make sure that the railroad continued to operate while its finances were reorganized.

Table 3. Railroad Receiverships, 1870-1897

Year    Receiverships Established    Mileage in Receivership    Percentage of Mileage Put in Receivership
1870 3 531 1
1871 4 644 1.07
1872 4 535 0.81
1873 10 1,357 1.93
1874 33 4,414 6.1
1875 43 7,340 9.91
1876 25 4,714 6.14
1877 33 3,090 3.91
1878 27 2,371 2.9
1879 12 1,102 1.27
1880 13 940 1.01
1881 5 110 0.11
1882 13 912 0.79
1883 12 2,041 1.68
1884 40 8,731 6.96
1885 44 7,523 5.86
1886 12 1,602 1.17
1887 10 1,114 0.74
1888 22 3,205 2.05
1889 24 3,784 2.35
1890 20 2,460 1.48
1891 29 2,017 1.18
1892 40 4,313 2.46
1893 132 27,570 15.51
1894 50 4,139 2.31
1895 32 3,227 1.78
1896 39 3,715 2.03
1897 21 1,536 0.83

Source: Swain, H. H. “Economic Aspects of Railroad Receivership.” Economic Studies 3, (1898): 53-161.

Depression Era Bankruptcy Reforms

Reorganization and bankruptcy were brought together by the amendments to the 1898 Bankruptcy Act during the Great Depression. By the late 1920s, a number of problems had become apparent with both the bankruptcy law and receivership. Table 4 shows the number of bankruptcy petitions filed each year since the law was enacted. The use of consumer credit expanded rapidly in the 1920s and so did wage earner bankruptcy cases. As Table 5 shows, voluntary bankruptcy by wage earners became an increasingly large proportion of bankruptcy petitions. Unlike mercantile bankruptcy cases, in many wage earner cases there were no assets. Expecting no return, many creditors paid little attention to bankruptcy cases, and corruption spread in the bankruptcy courts. An investigation into bankruptcy in the southern district of New York recorded numerous abuses and led to the disbarment of more than a dozen lawyers. In the wake of the investigation President Hoover appointed Thomas Thacher to investigate bankruptcy procedure in the United States. The Thacher Report recommended that an administrative staff be created to oversee bankruptcies. The bankruptcy administrators would be empowered to investigate bankrupts and reject requests for discharge. The report also suggested that many debtors could pay their debts if given an opportunity to work out an arrangement with their creditors, and it recommended that procedures for the adjustment or extension of debts be added to the law. Corporate lawyers also identified three problems with corporate receiverships. First, it was necessary to obtain an ancillary receivership in each federal district in which the corporation had assets. Second, some creditors might withhold their approval of a reorganization plan in exchange for a better deal for themselves. Third, judges were unwilling to apply reorganization through receivership to corporations other than railroads. Consequently, the Thacher Report suggested that procedures for corporate reorganization also be incorporated into bankruptcy law.

Table 4. Bankruptcy Petitions Filed, 1899-1997

Year    Voluntary    Involuntary    Total    Petitions per 10,000 Population    Percentage Involuntary
1899 20,994 1,452 22,446 3.00 6.47
1900 20,128 1,810 21,938 2.88 8.25
1901 17,015 1,992 19,007 2.45 10.48
1902 16,374 2,108 18,482 2.33 11.41
1903 14,308 2,567 16,875 2.09 15.21
1904 13,784 3,298 17,082 2.08 19.31
1905 13,852 3,094 16,946 2.02 18.26
1906 10,526 2,446 12,972 1.52 18.86
1907 11,127 3,033 14,160 1.63 21.42
1908 13,109 4,709 17,818 2.01 26.43
1909 13,638 4,380 18,018 1.99 24.31
1910 14,059 3,994 18,053 1.95 22.12
1911 14,907 4,431 19,338 2.06 22.91
1912 15,313 4,432 19,745 2.07 22.45
1913 16,361 4,569 20,930 2.15 21.83
1914 17,924 5,035 22,959 2.32 21.93
1915 21,979 5,653 27,632 2.75 20.46
1916 23,027 4,341 27,368 2.68 15.86
1917 21,161 3,677 24,838 2.41 14.80
1918 17,261 3,124 20,385 1.98 15.32
1919 12,035 2,013 14,048 1.34 14.33
1920 11,333 2,225 13,558 1.27 16.41
1921 16,645 6,167 22,812 2.10 27.03
1922 28,879 9,286 38,165 3.47 24.33
1923 33,922 7,832 41,754 3.73 18.76
1924 36,977 6,542 43,519 3.81 15.03
1925 39,328 6,313 45,641 3.94 13.83
1926 40,962 5,412 46,374 3.95 11.67
1927 43,070 5,688 48,758 4.10 11.67
1928 47,136 5,928 53,064 4.40 11.17
1929 51,930 5,350 57,280 4.70 9.34
1930 57,299 5,546 62,845 5.11 8.82
1931 58,780 6,555 65,335 5.27 10.03
1932 62,475 7,574 70,049 5.61 10.81
1933 56,049 6,207 62,256 4.96 9.97
1934 58,888 4.66
1935 69,153 5.43
1936 60,624 4.73
1937 55,842 1,643 57,485 4.46 2.86
1938 55,137 2,169 57,306 4.41 3.78
1939 48,865 2,132 50,997 3.90 4.18
1940 43,902 1,752 45,654 3.46 3.84
1941 47,581 1,491 49,072 3.69 3.04
1942 44,366 1,295 45,661 3.41 2.84
1943 30,913 649 31,562 2.35 2.06
1944 17,629 277 17,906 1.35 1.55
1945 11,101 264 11,365 0.86 2.38
1946 8,293 268 8,561 0.61 3.13
1947 9,657 697 10,354 0.72 6.73
1948 13,546 1,029 14,575 1.00 7.06
1949 18,882 1,240 20,122 1.35 6.16
1950 25,263 1,369 26,632 1.76 5.14
1951 26,594 1,099 27,693 1.81 3.97
1952 25,890 1,059 26,949 1.73 3.93
1953 29,815 1,064 30,879 1.95 3.45
1954 41,335 1,398 42,733 2.65 3.27
1955 47,650 1,249 48,899 2.98 2.55
1956 50,655 1,240 51,895 3.10 2.39
1957 60,335 1,189 61,524 3.61 1.93
1958 76,048 1,413 77,461 4.47 1.82
1959 85,502 1,288 86,790 4.90 1.48
1960 94,414 1,296 95,710 5.43 1.35
1961 124,386 1,444 125,830 6.99 1.15
1962 122,499 1,382 123,881 6.77 1.12
1963 128,405 1,409 129,814 6.99 1.09
1964 141,828 1,339 143,167 7.60 0.94
1965 149,820 1,317 151,137 7.91 0.87
1966 161,840 1,165 163,005 8.42 0.72
1967 173,884 1,241 175,125 8.95 0.71
1968 164,592 1,001 165,593 8.39 0.60
1969 154,054 946 155,000 7.77 0.61
1970 161,366 1,085 162,451 8.07 0.67
1971 167,149 1,215 168,364 8.26 0.72
1972 152,840 1,094 153,934 7.33 0.71
1973 144,929 985 145,914 6.89 0.68
1974 156,958 1,009 157,967 7.39 0.64
1975 208,064 1,266 209,330 9.69 0.60
1976 207,926 1,141 209,067 9.59 0.55
1977 180,062 1,132 181,194 8.23 0.62
1978 167,776 995 168,771 7.58 0.59
1979 182,344 915 183,259 8.14 0.50
1980 359,768 1,184 360,952 15.85 0.33
1981 358,997 1,332 360,329 15.67 0.37
1982 366,331 1,535 367,866 15.84 0.42
1983 373,064 1,670 374,734 15.99 0.45
1984 342,848 1,447 344,295 14.57 0.42
1985 362,939 1,597 364,536 15.29 0.44
1986 476,214 1,642 477,856 19.86 0.34
1987 559,658 1,620 561,278 23.12 0.29
1988 593,158 1,409 594,567 24.27 0.24
1989 641,528 1,465 642,993 25.71 0.23
1990 723,886 1,598 725,484 29.03 0.22
1991 878,626 1,773 880,399 34.85 0.20
1992 971,047 1,443 972,490 38.08 0.15
1993 917,350 1,384 918,734 35.60 0.15
1994 844,087 1,170 845,257 32.43 0.14
1995 856,991 1,113 858,104 32.62 0.13
1996 1,040,915 1,195 1,042,110 39.26 0.11
1997 1,315,782 1,217 1,316,999 49.16 0.09

Sources: 1899-1938: Annual Report of the Attorney General of the United States; 1939-1997: Statistical Abstract of the United States, various years. The Report of the Attorney General did not provide the numbers of voluntary and involuntary petitions from 1934 to 1936.

Table 5. Wage Earner Bankruptcy and No Asset Cases, 1899-1933

Year    Wage Earner Cases    Percentage of Cases with No Assets
1899 5,288 51.12
1900 7,516 40.52
1901 7,068 48.99
1902 6,859 47.25
1903 4,852 41.36
1904 5,291 40.55
1905 5,426 40.75
1906 2,748 42.29
1907 3,257 42.11
1908 3,492 40.29
1909 3,528 38.46
1910 4,366 36.49
1911 4,139 48.14
1912 4,161 50.70
1913 4,863 49.63
1914 5,773 49.96
1915 6,632 49.88
1916 6,418 53.29
1917 7,787 57.12
1918 8,230 57.05
1919 6,743 64.53
1920 5,601 67.41
1921 5,897 65.66
1922 7,550 52.70
1923 10,173 61.10
1924 13,126 62.17
1925 14,444 61.23
1926 16,770 64.02
1927 18,494 64.86
1928 21,510 63.19
1929 25,478 67.34
1930 28,979 68.44
1931 29,698 69.15
1932 29,742 66.25
1933 27,385 62.76

Source: Annual Report of the Attorney General of the United States, various years.

In 1933, Congress enacted amendments that allowed farmers and wage earners to seek arrangements. Arrangements offered more flexibility than compositions. Debtors could offer to pay all or part of their debts over a longer period of time. Congress also added section 77, which provided for railroad reorganization. Section 77 solved two of the problems that had plagued corporate reorganization. Bankruptcy courts had jurisdiction over the debtor’s assets throughout the country, so that ancillary receiverships were not needed. The amendment also alleviated the holdout problem by making a two-thirds vote of a class of creditors binding on all members of the class. In 1934, Congress extended reorganization to non-railroad corporations as well. The Thacher Report’s recommendation for a bankruptcy administrator was not enacted, largely because of opposition from bankruptcy lawyers. The 1898 Bankruptcy Act had created a well-organized group with a vested interest in the evolution of the law: bankruptcy lawyers.

Although the 1933-34 reforms were ones that bankruptcy lawyers and judges had wanted, many of them believed that the law could be further improved. In 1932, The Commercial Law League, the American Bar Association, the National Association of Credit Management and the National Association of Referees in Bankruptcy joined together to form the National Bankruptcy Conference. The culmination of their efforts was the Chandler Act of 1938. The Chandler Act created a menu of options for both individual and corporate debtors. Debtors could choose traditional liquidation. They could seek an arrangement with their creditors through Chapter 10 of the Act. They could attempt to obtain an extension through Chapter 12. A corporation could seek an arrangement through Chapter 11 or reorganization through Chapter 10. Chapter 11 only allowed corporations to alter their unsecured debt, whereas Chapter 10 allowed reorganization of both secured and unsecured debt. However, corporations tended to prefer Chapter 11 because Chapter 10 required Securities and Exchange Commission review for all publicly traded firms with more than $250,000 in liabilities.

By 1938 modern American bankruptcy law had obtained its central features. The law dealt with all types of individuals and businesses. It allowed both voluntary and involuntary petitions. It enabled debtors to choose liquidation and a discharge, or to choose some type of readjustment of their debts. By 1939, the vast majority of bankruptcy cases were, as they are now, voluntary consumer bankruptcy cases. After 1939 involuntary bankruptcy cases never again rose above 2,000 (see Table 4). The decline of involuntary bankruptcy cases appears to have been associated with the decline in business failures. According to Dun and Bradstreet, the number of failures per 10,000 listed concerns averaged 100 per year from 1870 to 1933. From 1934 to 1988 the failure rate averaged 50 per 10,000 concerns. The failure rate did not rise above 70 per 10,000 listed concerns again until the 1980s. Also, the number of failures, which had averaged over 20,000 a year in the 1920s, did not reach 20,000 a year again until the 1980s. The mercantile failures that had so troubled late nineteenth century merchants and manufacturers were much less of a problem after the Great Depression.

Table 6. Business Failures, 1870-1997

Year    Failures    Failures per 10,000 Firms
1870 3,546 83
1871 2,915 64
1872 4,069 81
1873 5,183 105
1874 5,830 104
1875 7,740 128
1876 9,092 142
1877 8,872 139
1878 10,478 158
1879 6,658 95
1880 4,735 63
1881 5,582 71
1882 6,738 82
1883 9,184 106
1884 10,968 121
1885 10,637 116
1886 9,834 101
1887 9,634 97
1888 10,679 103
1889 10,882 103
1890 10,907 99
1891 12,273 107
1892 10,344 89
1893 15,242 130
1894 13,885 123
1895 13,197 112
1896 15,088 133
1897 13,351 125
1898 12,186 111
1899 9,337 82
1900 10,774 92
1901 11,002 90
1902 11,615 93
1903 12,069 94
1904 12,199 92
1905 11,520 85
1906 10,682 77
1907 11,725 83
1908 15,690 108
1909 12,924 87
1910 12,652 84
1911 13,441 88
1912 15,452 100
1913 16,037 98
1914 18,280 118
1915 22,156 133
1916 16,993 100
1917 13,855 80
1918 9,982 59
1919 6,451 37
1920 8,881 48
1921 19,652 102
1922 23,676 120
1923 18,718 93
1924 20,615 100
1925 21,214 100
1926 21,773 101
1927 23,146 106
1928 23,842 109
1929 22,909 104
1930 26,355 122
1931 28,285 133
1932 31,822 154
1933 19,859 100
1934 12,091 61
1935 12,244 62
1936 9,607 48
1937 9,490 46
1938 12,836 61
1939 14,768 70
1940 13,619 63
1941 11,848 55
1942 9,405 45
1943 3,221 16
1944 1,222 7
1945 809 4
1946 1,129 5
1947 3,474 14
1948 5,250 20
1949 9,246 34
1950 9,162 34
1951 8,058 31
1952 7,611 29
1953 8,862 33
1954 11,086 42
1955 10,969 42
1956 12,686 48
1957 13,739 52
1958 14,964 56
1959 14,053 52
1960 15,445 57
1961 17,075 64
1962 15,782 61
1963 14,374 56
1964 13,501 53
1965 13,514 53
1966 13,061 52
1967 12,364 49
1968 9,636 39
1969 9,154 37
1970 10,748 44
1971 10,326 42
1972 9,566 38
1973 9,345 36
1974 9,915 38
1975 11,432 43
1976 9,628 35
1977 7,919 28
1978 6,619 24
1979 7,564 28
1980 11,742 42
1981 16,794 61
1982 24,908 88
1983 31,334 110
1984 52,078 107
1985 57,078 115
1986 61,616 120
1987 61,111 102
1988 57,098 98
1989 50,631 65
1990 60,747 74
1991 88,140 107
1992 97,069 110
1993 86,133 96
1994 71,558 86
1995 71,128 82
1996 71,931 86
1997 84,342 89

Source: United States. Historical Statistics of the United States: Bicentennial Edition. 1975; and United States. Statistical Abstract of the United States. Washington D.C.: GPO. Various years.

The Bankruptcy Reform Act of 1978

In contrast to the decline in business failures, personal bankruptcy climbed steadily. Prompted by a rise in personal bankruptcy in the 1960s, Congress initiated an investigation of bankruptcy law that culminated in the Bankruptcy Reform Act of 1978, which replaced the much amended 1898 Bankruptcy Act. The Bankruptcy Reform Act, also known as the Bankruptcy Code or just “the Code,” maintains the menu of options for debtors embodied in the Chandler Act. It provides Chapter 7 liquidation for businesses and individuals, Chapter 11 reorganization, Chapter 13 adjustment of debts for individuals with regular income, and Chapter 12 readjustment for farmers. In 1991, seventy-one percent of all cases were Chapter 7 and twenty-seven percent were Chapter 13. Many of the changes introduced by the Code made bankruptcy, especially Chapter 13, more attractive to debtors, and the number of bankruptcy petitions climbed rapidly after the law was enacted. Lobbying by creditor groups and a Supreme Court decision that ruled certain administrative parts of the Act unconstitutional led to the Bankruptcy Amendments and Federal Judgeship Act of 1984. The 1984 amendments attempted to roll back some of the pro-debtor provisions of the Code. Because bankruptcy filings continued their rapid ascent after the 1984 amendments, recent studies have tended to look toward changes in other factors, such as consumer finance, to explain the explosion in bankruptcy cases.

Bankruptcy law continues to evolve. To understand the evolution of bankruptcy law is to understand why groups of people came to believe that existing debt collection law was inadequate and to see how those people were able to use courts and legislatures to change the law. In the early nineteenth century demands were largely driven by victims of financial crises. In the late nineteenth century, merchants and manufacturers demanded a law that would facilitate interstate commerce. Unlike its predecessors, the 1898 Bankruptcy Act was not repealed after a few years and over time it gave rise to a group with a vested interest in bankruptcy law, bankruptcy lawyers. Bankruptcy lawyers have played a prominent role in drafting and lobbying for bankruptcy reform since the 1930s. Credit card companies and customers may be expected to play a significant role in changing bankruptcy law in the future.

References

Balleisen, Edward. Navigating Failure: Bankruptcy and Commercial Society in Antebellum America. Chapel Hill: University of North Carolina Press. 2001.

Balleisen, Edward. “Vulture Capitalism in Antebellum America: The 1841 Federal Bankruptcy Act and the Exploitation of Financial Distress.” Business History Review 70, Spring (1996): 473-516.

Berglof, Erik and Howard Rosenthal. “The Political Economy of American Bankruptcy: The Evidence from Roll Call Voting, 1800-1978.” Working paper, Princeton University, 1999.

Coleman, Peter J. Debtors and Creditors in America: Insolvency, Imprisonment for Debt, and Bankruptcy, 1607-1900. Madison: The State Historical Society of Wisconsin. 1974.

Hansen, Bradley. “The Political Economy of Bankruptcy: The 1898 Act to Establish A Uniform System of Bankruptcy.” Essays in Economic and Business History 15, (1997):155-71.

Hansen, Bradley. “Commercial Associations and the Creation of a National Economy: The Demand for Federal Bankruptcy Law.” Business History Review 72, Spring (1998): 86-113.

Hansen, Bradley. “The People’s Welfare and the Origins of Corporate Reorganization: The Wabash Receivership Reconsidered.” Business History Review 74, Autumn (2000): 377-405.

Martin, Albro. “Railroads and the Equity Receivership: An Essay on Institutional Change.” Journal of Economic History 34, (1974): 685-709.

Matthews, Barbara. Forgive Us Our Debts: Bankruptcy And Insolvency in America, 1763-1841. Ph. D. diss. Brown University. 1994.

Moss, David and Gibbs A. Johnson. “The Rise of Consumer Bankruptcy: Evolution, Revolution or Both?” American Bankruptcy Law Journal 73, Spring (1999): 311-51.

Sandage, Scott. Deadbeats, Drunkards and Dreamers: A Cultural History of Failure in America, 1819-1893. Ph. D. diss. Rutgers University. 1995.

Skeel, David A. “An Evolutionary Theory of Corporate Law and Corporate Bankruptcy.” Vanderbilt Law Review, 51 (1998):1325-1398.

Skeel, David A. “The Genius of the 1898 Bankruptcy Act.” Bankruptcy Developments Journal 15, (1999): 321-341.

Skeel, David A. Debt’s Dominion: A History of Bankruptcy Law in America. Princeton: Princeton University Press. 2001.

Sullivan, Theresa, Elizabeth Warren and Jay Westbrook. As We Forgive Our Debtors: Bankruptcy and Consumer Credit in America. Oxford: Oxford University Press. 1989.

Swain, H.H. “Economic Aspects of Railroad Receivership.” Economic Studies 3, (1898): 53-161.

Tufano, Peter. “Business Failure, Judicial Intervention, and Financial Innovation: Restructuring U. S. Railroads in the Nineteenth Century.” Business History Review 71, Spring (1997):1-40.

United States. Report of the Attorney-General. Washington D.C.: GPO. Various years.

United States. Statistical Abstract of the United States. Washington D.C.: GPO. Various years.

United States. Historical Statistics of the United States: Bicentennial Edition. 1975.

Warren, Charles. Bankruptcy In United States History. Cambridge: Harvard University Press. 1935.

Citation: Hansen, Bradley. “Bankruptcy Law in the United States”. EH.Net Encyclopedia, edited by Robert Whaples. August 14, 2001. URL http://eh.net/encyclopedia/bankruptcy-law-in-the-united-states/

Banking in the Western U.S.

Lynne Pierson Doti, Chapman University

Banking in the western United States has had a distinctive history, marked by a great variety of banking arrangements and generally loose regulation.

California Banks in the Gold Rush Era

In the early frontier years, private individuals and outposts of the Hudson’s Bay Company and other trading companies provided banking services. As states west of the Mississippi began developing after the 1840s, capital flowed fairly readily from the east coast and also from foreign sources. This was particularly true of California. Gold was discovered in early 1848, and the population exploded. San Francisco and Sacramento quickly became cities. By 1852, numerous banks representing investors from St. Louis, Boston, New York and even other countries were operating in San Francisco. Lazard Freres, a French bank, has remained there to the present time. St. Louis, an earlier financial center of the west, and New York banks were well represented until the mid-1850s, when a bank panic forced the reevaluation of distant branches or subsidiaries. As California grew, spurred at first by gold production and then by the Nevada silver discoveries in the 1860s, San Francisco became the financial center of the western states.

One of the largest banks in California in the 1850s was started by D.O. Mills, a young New York bank employee who came to California to mine gold. Soon tiring of mining, he opened a mercantile establishment in Sacramento. Mills began storing gold for the miners, and later began buying gold and issuing notes that circulated as money. Within a few years he changed the sign on his building from “store” to “bank.” The Bank of D.O. Mills survived into the 1920s. Merchants started many early western banks in just this manner, since the lack of regulation or enforcement meant that potential depositors needed the security of a trusted, widely respected individual. A previous business often was the route to this trust.

Although the character of the individuals in control was of foremost importance, housing the bank in a solid structure also reassured customers. Because depositors worried about “wildcat banks,” which accepted deposits and then relocated far away to discourage withdrawals, it was hard to gather deposits without proof of stability. As a result, the bank was often the most solid structure in town. Although there are a few spectacular instances of bankers leaving town with deposits, the system generally worked extremely well with minimal regulation.

Large Banks Come to Dominate in the Late 1800s

Just as a few New York banks dominated financial markets on the east coast, a few large San Francisco banks — created first by the “Silver Kings” made rich by the Virginia City, Nevada mines, then by the railroad barons — dominated the West Coast financial world in the late 1800s. For example, in 1865, William Ralston started the Bank of California, which quickly branched into Nevada, and then Oregon and other western states. As with New York City’s large banks, correspondent relationships with smaller banks, along with direct deposits and loans in larger amounts from customers located far from the bank’s offices, allowed the Bank of California to become important throughout the far west.

Although the western banking network had many ties to the East by the end of the nineteenth century, financial crises still reached the west coast with a lag of a few years. For example, in 1893, New York experienced a panic in which customers rushed to withdraw their funds, fearing that the banks would fail. The New York panic spread quickly to the other east coast banks, but reached San Francisco only in 1895.

A. P. Giannini and Branch Banking

By 1930, A.P. Giannini was the most powerful financier in the Western U.S. He had started his first bank, the Bank of Italy, in 1905 to appeal to the Italian immigrants of North Beach in San Francisco. Having many connections and having learned about customer service in the produce industry helped him turn the setback of the 1906 earthquake into an advantage. As the city burned, Giannini loaded the contents of his safe into a produce wagon and relocated. While some other bankers waited in frustration for their safes to cool enough to open, Giannini was making loans. Perhaps this impressed upon him the benefits of diverse locations. Diversification meant that if business was bad in one area, it might be better in another. He opened his first branch in 1907 in the Mission district of San Francisco. After 1915 he began to add new offices and buy other banks at a rate that became alarming to rival banks.

Giannini was a pioneer of branch banking, maneuvering around state and federal regulators to eventually establish over one thousand branches in California. He dreamed of a bank with branches around the world, but this did not occur in his lifetime. However, his banking system, consolidated in 1930 as Bank of America, N.T. and S.A., moved from California into neighboring states in the 1970s and (along with Citibank of New York) created the pressure that eventually led to interstate branching in the 1990s. The 1994 Riegle-Neal Interstate Banking and Branching Efficiency Act allowed banks to combine across state lines. Bank of America was purchased by NationsBank of North Carolina in 1999, creating a truly national bank using the name “Bank of America.”

Restrictions on Branch Banking

Although California regulators, undoubtedly spurred on by rival bankers, tried to confine Giannini’s operations to a small geographic area in the 1920s, they never actually banned branch banking for state-chartered banks. This was atypical, even in the less regulated west. National banks were banned from opening branches under the National Banking Act of 1864, confirming the general attitude, carefully cultivated by local monopolist bankers, that branching only existed to drain funds from the countryside to finance growth in the cities.

Among the western states, Texas, Oregon, Washington, Utah, Colorado, New Mexico and Idaho had severe restrictions on branching and only reduced or removed these limits during the Depression, when acquisition by a stronger bank became the only alternative to failure for many rural banks. It also became clear that banks with branches were surviving much better than unit banks (those with only one location). Nevada, Idaho, Oregon and Washington removed most of their onerous restrictions on branching during the 1930s, but Texas, Wyoming, Montana, New Mexico and Colorado remained almost entirely branchless until late in the twentieth century. Colorado was the last of the fifty states to allow branch banking. In the states where branch banking was limited, the number of small banks grew as the economy grew, but many failed in bad times.

Methods of Avoiding Branch Banking Restrictions

Mergers and chain banking were substitutes for branch banking, and both came in waves generated by economic or technological change. The first merger movement came in the mid-1890s in Portland, Denver, Seattle and Salt Lake City, when many banks experienced problems due to the panic. Another wave of mergers came in the 1920s, inspired by Giannini’s expansion (though still limited in unit banking states). Chain banking — common ownership of several legally independent banks — was another way to achieve diversification without branching. In Nevada, George Wingfield built a chain of twelve banks starting in 1908 as part of his business empire. However, low prices for wool put a majority of his customers in trouble, and the banks were permanently closed in 1933. The chain system survived, however. The three Walker brothers started chain banks in Utah in 1859, which survived until their acquisition by another chain system in the 1950s. First Interstate Bank, in spite of the common name for all its banks, was a chain of twenty-one banks in eleven western states, all owned by First Interstate Bancorporation. Joe Pinola, chairman of the chain, created the appearance of one bank operating in several states fifteen years before it became legal to actually have a single bank operating across state lines.

Developments after World War II

World War II brought population increases for the western states and many of the military personnel and workers in war industries settled permanently in the west after the war. The increases in population occurred disproportionately in the suburbs. A rash of new banks, and new branches, followed. Deposit insurance enacted in the 1930s now replaced the need for reassuring edifices and bank buildings acquired a new, often inexpensive demeanor. Drive-in banking facilities became common. Treats provided for the children and, sometimes, the dogs confined to the car were almost a requirement.

During the Depression and World War II, Giannini continued his expansion. To facilitate diversification he created Transamerica Corporation as a holding company for Bank of America, several insurance companies and about two hundred other banks. In 1948, the Securities and Exchange Commission and the Federal Reserve Board charged Transamerica with monopolizing western banking. In 1952, Transamerica won the case, but the attention precipitated the Bank Holding Company Act of 1956, which made bank holding companies subject to the same restrictions in crossing state lines as individual banks. Transamerica sold off its banks and remained a powerhouse in the insurance industry into the twenty-first century.

Savings and Loan Failures

From the mid-1980s into the mid-1990s, a large number of savings and loan institutions failed, and because California and Texas were among the states with the largest numbers of these institutions, they produced some notable disasters. Sharply rising interest rates and changing regulations brought on this national phenomenon. Established institutions with large long-term loans on the books at low interest rates could not attract deposits without paying high interest rates. Because the real estate market had escalated in the west in the postwar period, western banks and savings and loans had proportionately larger amounts of the low-interest loans. They became desperate to find a way to increase their earnings. Many of them ventured into unfamiliar territory, including auto loans, business lending and real estate development. California’s Columbia Savings and Loan, for example, tried investing in high-interest, high-risk (junk) bonds in a spectacularly unsuccessful quest for high earnings. Unscrupulous businessmen, who manipulated insurance guarantees to make large profits at little risk to themselves, purchased some weak institutions. One of the most notorious villains of the disaster, Charles Keating, purchased California’s Lincoln Savings & Loan. He expanded the thrift by offering high interest rates on large deposits, and later by selling bonds to confused retirees to replace these deposits. The funds were invested in several of Keating’s own projects, including an extravagant hotel in Arizona. Contributions to several Congressmen and taxpayer-insured losses totaling about $2 billion made Keating a notorious national figure.

California had the most losses in the savings and loan industry of any state, but Arizona and Colorado were also high. The problem worsened as troubled banks tried to unload repossessed real estate. The federal government ultimately paid the bill to bail out depositors, and established the Resolution Trust Corporation (RTC) to absorb the real estate holdings of failed institutions.

Conclusions

Banking in the American west has often been innovative. The more varied, but generally lower, level of regulation has permitted a wide range of banking experiments in the west. These innovations have often shaped national legislation and produced models eventually followed by many eastern states. The less stringent legal environment has produced both the best and the worst banking in the nation, and it has also shaped banking in the rest of the country and the rest of the world.

References

Doti, Lynne Pierson. “Banking in California: Some Evidence on Structure 1878-1905.” Ph.D. dissertation, University of California, Riverside, 1978.

Doti, Lynne Pierson. “Banking in California: The First Branching Era.” Journal of the West 23, no. 2 (April 1984): 65-71.

Doti, Lynne Pierson and Larry Schweikart. Banking in the American West: From Gold Rush to Deregulation. Norman, OK: University of Oklahoma Press, 1991.

Doti, Lynne Pierson. “Nationwide Branching: Some Lessons from California.” Essays in Economic and Business History 9 (May 1991): 141-161.

Doti, Lynne Pierson and Larry Schweikart. California Bankers, 1848-1993. Needham Heights, MA: Ginn Press, 1994.

Schweikart, Larry. A History of Banking in Arizona. Tucson: University of Arizona Press, 1982.

Schweikart, Larry, editor. Encyclopedia of American Business History and Biography: Banking and Finance, 1913-1989. New York: Facts on File, 1990.

Citation: Doti, Lynne. “Banking in the Western US”. EH.Net Encyclopedia, edited by Robert Whaples. June 10, 2003. URL http://eh.net/encyclopedia/banking-in-the-western-u-s/

Savings and Loan Industry (U.S.)

David Mason, Young Harris College

The savings and loan industry is the leading source of institutional finance for residential home mortgages in America. From the appearance of the first thrift in Philadelphia in 1831, savings and loans (S&Ls) have been primarily local lenders focused on helping people of modest means to acquire homes. This mission was severely compromised by the financial scandals that enveloped the industry in the 1980s, and although the industry was badly tarnished by these events, S&Ls continue to thrive.

Origins of the Thrift Industry

The thrift industry traces its origins to the British building society movement that emerged in the late eighteenth century. American thrifts (known then as “building and loans” or “B&Ls”) shared many of the same basic goals of their foreign counterparts — to help working-class men and women save for the future and purchase homes. A person became a thrift member by subscribing to shares in the organization, which were paid for over time in regular monthly installments. When enough monthly payments had accumulated, the members were allowed to borrow funds to buy homes. Because the amount each member could borrow was equal to the face value of the subscribed shares, these loans were actually advances on the unpaid shares. The member repaid the loan by continuing to make the regular monthly share payments as well as loan interest. This interest plus any other fees minus operating expenses (which typically accounted for only one to two percent of revenues) determined the profit of the thrift, which the members received as dividends.
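A stylized arithmetic sketch may make the share-accumulation mechanism clearer. The figures below are hypothetical assumptions chosen only for illustration, not data from any actual association: a member subscribes to shares with a face value of $1,000, pays dues of $10 a month, and borrows the full face value at 6 percent to buy a home.

```python
# Stylized sketch of a building and loan share-accumulation loan.
# All figures are hypothetical and chosen only for illustration.

face_value = 1_000.00      # face value of the member's subscribed shares
monthly_dues = 10.00       # regular monthly share payment
annual_loan_rate = 0.06    # interest on the advance against the shares

# The loan is an advance on the unpaid shares: the member keeps paying the
# monthly dues (which build up the share balance) plus interest on the advance.
monthly_interest = face_value * annual_loan_rate / 12

months = 0
paid_in = 0.0
while paid_in < face_value:
    paid_in += monthly_dues   # dividends credited to shares are ignored here,
    months += 1               # so this overstates the time to maturity

print(f"Monthly outlay: ${monthly_dues + monthly_interest:.2f} "
      f"(${monthly_dues:.2f} dues + ${monthly_interest:.2f} interest)")
print(f"Shares reach face value after about {months} months, "
      f"at which point they cancel the loan")
```

In practice the dividends credited to the member's shares shortened the time to maturity, but the basic logic is the same: the loan was extinguished not by a separate repayment schedule but by the maturing of the member's own shares.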

For the first forty years following the formation of the first thrift in 1831, B&Ls were few in number and found in only a handful of Midwestern and Eastern states. This situation changed in the late nineteenth century as urban growth (and the demand for housing) related to the Second Industrial Revolution caused the number of thrifts to explode. By 1890, cities like Philadelphia, Chicago, and New York each had over three hundred thrifts, and B&Ls could be found in every state of the union, as well as the territory of Hawaii.

Differences between Thrifts and Commercial Banks

While industrialization gave a major boost to the growth of the thrift industry, there were other reasons why these associations could thrive alongside larger commercial banks in the nineteenth and early twentieth centuries. First, thrifts were not-for-profit cooperative organizations that were typically managed by the membership. Second, thrifts in the nineteenth century were very small; the average B&L held less than $90,000 in assets and had fewer than 200 members, which reflected the fact that these were local institutions that served well-defined groups of aspiring homeowners.

Another major difference was in the assets of these two institutions. Bank mortgages were short term (three to five years) and were repaid interest only, with the entire principal due at maturity. In contrast, thrift mortgages were longer term (eight to twelve years) and the borrower repaid both the principal and interest over time. This type of loan, known as the amortizing mortgage, was commonplace by the late nineteenth century, and was especially beneficial to borrowers with limited resources. Also, while banks offered a wide array of products to individuals and businesses, thrifts often made only home mortgages, primarily to working-class men and women.
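The contrast in payment patterns can be illustrated with the standard level-payment amortization formula. The loan size, rate, and terms below are hypothetical assumptions used only to show the shape of the two schedules, not historical averages.

```python
# Hypothetical comparison of an interest-only bank mortgage with an
# amortizing thrift mortgage. All figures are illustrative assumptions.

principal = 3_000.00
rate = 0.06          # annual interest rate, assumed equal for both loans
bank_term = 5        # years, interest only, principal due at maturity
thrift_term = 12     # years, level payments of principal and interest

# Bank loan: pay interest each year, then the whole principal at the end.
bank_annual_payment = principal * rate                   # $180.00 per year
bank_balloon = principal                                 # $3,000 due in year 5

# Thrift loan: equal annual payments from the amortization formula
# payment = P * r / (1 - (1 + r)^-n)
thrift_annual_payment = principal * rate / (1 - (1 + rate) ** -thrift_term)

print(f"Bank loan: ${bank_annual_payment:.2f} a year for {bank_term} years, "
      f"plus a ${bank_balloon:,.2f} balloon payment")
print(f"Thrift loan: ${thrift_annual_payment:.2f} a year for {thrift_term} years, "
      f"no balloon payment")
```

The amortizing borrower pays more each year but is never confronted with a large balloon payment at maturity, which helps explain why this type of loan was especially beneficial to borrowers with limited resources.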

There was also a significant difference in the liabilities of banks and thrifts. Banks held primarily short-term deposits (like checking accounts) that could be withdrawn on demand by accountholders. In contrast, thrift deposits (called share accounts) were longer term, and because thrift members were also the owners of the association, B&Ls often had the legal right to take up to thirty days to honor any withdrawal request, and could even charge penalties for early withdrawals. Offsetting this disadvantage was the fact that because profits were distributed as direct credits to member share balances, thrift members earned compound interest on their savings.

A final distinction between thrifts and banks was that thrift leaders believed they were part of a broader social reform effort and not a financial industry. According to thrift leaders, B&Ls not only helped people become better citizens by making it easier to buy a home, they also taught the habits of systematic savings and mutual cooperation which strengthened personal morals. This attitude of social uplift was so pervasive that the official motto of the national thrift trade association was “The American Home. Safeguard of American Liberties” and its leaders consistently referred to their businesses as being part of a “movement” as late as the 1930s.

The “Nationals” Crisis

The early popularity of B&Ls led to the creation of a new type of thrift in the 1880s called the “national” B&L. While these associations employed the basic operating procedures used by traditional B&Ls, there were several critical differences. First, the “nationals” were often for-profit businesses formed by bankers or industrialists that employed promoters to form local branches to sell shares to prospective members. The members made their share payments at their local branch, and the money was sent to the home office where it was pooled with other funds members could borrow from to buy homes. The most significant difference between the “nationals” and traditional B&Ls was that the “nationals” promised to pay savings rates up to four times greater than any other financial institution. While the “nationals” also charged unusually high fees and late payment fines as well as higher rates on loans, the promise of high returns caused the number of “nationals” to surge. When the effects of the Depression of 1893 resulted in a decline in members, the “nationals” experienced a sudden reversal of fortunes. Because a steady stream of new members was critical for a “national” to pay both the interest on savings and the hefty salaries for the organizers, the falloff in payments caused dozens of “nationals” to fail, and by the end of the nineteenth century nearly all the “nationals” were out of business.

The “nationals” crisis had several important effects on the thrift industry, the first of which was the creation of the first state regulations governing B&Ls, designed both to prevent another “nationals” crisis and to make thrift operations more uniform. Significantly, thrift leaders were often responsible for securing these new guidelines. The second major change was the formation of a national trade association to not only protect B&L interests, but also promote business growth. These changes, combined with improved economic conditions, ushered in a period of prosperity for thrifts, as seen below:

Year    Number of B&Ls    B&L Assets (000,000)
1888 3,500 $300
1900 5,356 $571
1914 6,616 $1,357

Source: Carroll D. Wright, Ninth Annual Report of the Commissioner of Labor: Building and Loan Associations (Washington, D.C.: USGPO, 1894), 214; Josephine Hedges Ewalt, A Business Reborn: The Savings and Loan Story, 1930-1960 (Chicago: American Savings and Loan Institute Publishing Co., 1962), 391. (All monetary figures in this study are in current dollars.)

The Thrift Trade Association and Business Growth

The national trade association that emerged from the “nationals” crisis became a prominent force in shaping the thrift industry. Its leaders took an active role in unifying the thrift industry and modernizing not only its operations but also its image. The trade association led efforts to create more uniform accounting, appraisal, and lending procedures. It also spearheaded the drive to have all thrifts refer to themselves as “savings and loans” not B&Ls, and to convince managers of the need to assume more professional roles as financiers.

The consumerism of the 1920s fueled strong growth for the industry, so that by 1929 thrifts provided 22 percent of all mortgages. At the same time, the average thrift held $704,000 in assets, and more than one hundred thrifts had over $10 million in assets each. Similarly, the percentage of Americans belonging to B&Ls rose steadily so that by the end of the decade 10 percent of the population belonged to a thrift, up from just 4 percent in 1914. Significantly, many of these members were upper- and middle-class men and women who joined to invest money safely and earn good returns. These changes led to broad industry growth as seen below:

Year Number of B&Ls Assets (000,000)
1914 6,616 $1,357
1924 11,844 $4,766
1930 11,777 $8,829

Source: Ewalt, A Business Reborn, 391

The Depression and Federal Regulation

The success during the “Roaring Twenties” was tempered by the financial catastrophe of the Great Depression. Thrifts, like banks, suffered from loan losses, but in comparison to their larger counterparts, thrifts tended to survive the 1930s with greater success. Because banks held demand deposits, these institutions were more susceptible to “runs” by depositors, and as a result between 1931 and 1932 almost 20 percent of all banks went out of business while just over 2 percent of all thrifts met a similar fate. While the number of thrifts did fall by the late 1930s, the industry was able to quickly recover from the turmoil of the Great Depression as seen below:

Year Number of B&Ls Assets (000,000)
1930 11,777 $8,829
1937 9,225 $5,682
1945 6,149 $8,747

Source: Savings and Loan Fact Book, 1955 (Chicago: United States Savings and Loan League, 1955), 39.

Even though fewer thrifts failed than banks, the industry still experienced significant foreclosures and problems attracting funds. As a result, some thrift leaders looked to the federal government for assistance. In 1932, the thrift trade association worked with Congress to create a federal home loan bank that would make loans to thrifts facing fund shortages. By 1934, the other two major elements of federal involvement in the thrift business, a system of federally-chartered thrifts and a federal deposit insurance program, were in place.

The creation of federal regulation was the most significant accomplishment for the thrift industry in the 1930s. While thrift leaders initially resisted regulation, in part because they feared the loss of business independence, their attitudes changed when they saw the benefits regulation gave to commercial banks. As a result, the industry quickly assumed an active role in the design and implementation of thrift oversight. In the years that followed, relations between thrift leaders and federal regulators became so close that some critics alleged that the industry had effectively “captured” their regulatory agencies.

The Postwar “Glory Years”

By all measures, the two decades that followed the end of World War II were the most successful period in the history of the thrift industry. The return of millions of servicemen eager to take up their prewar lives led to a dramatic increase in new families, and this “baby boom” caused a surge in new (mostly suburban) home construction. By the 1940s S&Ls (the name change occurred in the late 1930s) provided the majority of the financing for this expansion. The result was strong industry expansion that lasted through the early 1960s. In addition to meeting the demand for mortgages, thrifts expanded their sources of revenue and achieved greater asset growth by entering into residential development and consumer lending areas. Finally, innovations like drive-up teller windows and the ubiquitous “time and temperature” signs helped solidify the image of S&Ls as consumer-friendly, community-oriented institutions.

By 1965, the industry bore little resemblance to the business that had existed in the 1940s. S&Ls controlled 26 percent of consumer savings and provided 46 percent of all single-family home loans (tremendous gains over the comparable figures of 7 percent and 23 percent, respectively, for 1945), and this increase in business led to a considerable increase in size, as seen below:

Year  Number of S&Ls  Assets ($ millions)
1945 6,149 $8,747
1952 6,004 $22,585
1959 6,223 $63,401
1965 6,071 $129,442

Source: Savings and Loan Fact Book, 1966 (Chicago: United States Savings and Loan League, 1966), 92-94.

This expansion, however, was not uniform. More than a third of all thrifts had less than $5 million in assets each, while the one hundred largest thrifts held an average of $340 million each; three S&Ls approached $5 billion in assets. While rapid expansion in states like California accounts for part of this disparity, other, more controversial, practices also fueled individual thrift growth. Some thrifts attracted funds by issuing stock to the public and becoming publicly held corporations. Another important trend involved raising the rates paid on savings to lure deposits, a practice that resulted in periodic “rate wars” between thrifts and even commercial banks. These wars became so severe that in 1966 Congress took the highly unusual step of setting limits on savings rates for both commercial banks and S&Ls. Although thrifts were given the ability to pay slightly higher rates than banks, the move signaled an end to the days of easy growth for the thrift industry.

Moving from Regulation to Deregulation

The thirteen years following the enactment of rate controls presented thrifts with a number of unprecedented challenges, chief among them finding ways to continue to expand in an economy characterized by slow growth, high interest rates, and inflation. These conditions, which came to be known as “stagflation,” wreaked havoc with thrift finances for a variety of reasons. Because regulators controlled the rates thrifts could pay on savings, when interest rates rose depositors often withdrew their funds and placed them in accounts that earned market rates, a process known as disintermediation. At the same time, rising rates and a slow-growth economy made it harder for people to qualify for mortgages, which in turn limited the industry’s ability to generate income.

In response to these complex economic conditions, thrift managers came up with several innovations, such as alternative mortgage instruments and interest-bearing checking accounts, as a way to retain funds and generate lending business. Such actions allowed the industry to continue to record steady asset growth and profitability during the 1970s even though the actual number of thrifts was falling, as seen below.

Year  Number of S&Ls  Assets ($ millions)
1965 6,071 $129,442
1970 5,669 $176,183
1974 5,023 $295,545
1979 4,709 $579,307

Source: Savings and Loan Fact Book, 1980 (Chicago: United States Savings and Loan League, 1980), 48-51.

Despite such growth, there were still clear signs that the industry was chafing under the constraints of regulation. This was especially true of the large S&Ls in the western United States, which yearned for additional lending powers to ensure continued growth. At the same time, major changes in financial markets, including the emergence of new competitors and new technologies, fueled the need to revise federal regulations governing thrifts. Despite several efforts to modernize these laws in the 1970s, few substantive changes were enacted.

The S&L Crisis of the 1980s

In 1979 the financial health of the thrift industry was again challenged by a return of high interest rates and inflation, sparked this time by a doubling of oil prices. Because the sudden nature of these changes threatened to cause hundreds of S&L failures, Congress finally acted on deregulating the thrift industry. It passed two laws (the Depository Institutions Deregulation and Monetary Control Act of 1980 and the Garn-St. Germain Act of 1982) that not only allowed thrifts to offer a wider array of savings products, but also significantly expanded their lending authority. These changes were intended to allow S&Ls to “grow” out of their problems, and as such represented the first time that the government explicitly sought to increase S&L profits as opposed to promoting housing and homeownership. Other changes in thrift oversight included authorizing the use of more lenient accounting rules for reporting financial condition and eliminating restrictions on the minimum number of S&L stockholders. Such policies, combined with an overall decline in regulatory oversight (known as forbearance), would later be cited as factors in the collapse of the thrift industry.

While thrift deregulation was intended to give S&Ls the ability to compete effectively with other financial institutions, it also contributed to the worst financial crisis since the Great Depression as seen below:

S&L Failures
Period  Number of Failures  Assets of Failed S&Ls ($ millions)
1980-82  118  $43,101
1983-85  137  $39,136
1986-87  118  $32,248
1988  205  $100,705
1989  327  $135,245

Industry Totals
Year  Total S&Ls  Industry Assets ($ millions)
1980  3,993  $603,777
1983  3,146  $813,770
1985  3,274  $1,109,789
1988  2,969  $1,368,843
1989  2,616  $1,186,906

Source: Statistics on failures: Norman Strunk and Fred Case, Where Deregulation Went Wrong (Chicago: United States League of Savings Institutions, 1988), 10; Lawrence White, The S&L Debacle: Public Policy Lessons for Bank and Thrift Regulation (New York: Oxford University Press, 1991), 150; Managing the Crisis: The FDIC and RTC Experience, 1980‑1994 (Washington, D.C.: Resolution Trust Corporation, 1998), 795, 798; Historical Statistics on Banking, Bank and Thrift Failures, FDIC web page http://www2.fdic.gov/hsob accessed 31 August 2000; Total industry statistics: 1999 Fact Book: A Statistical Profile on the United States Thrift Industry. (Washington, D.C.: Office of Thrift Supervision, June 2000), 1, 4.

The level of thrift failures at the start of the 1980s was the largest since the Great Depression, and the primary cause of these insolvencies was losses incurred when interest rates rose suddenly. Even after interest rates had stabilized and economic growth returned by the mid-1980s, however, thrift failures continued to grow. One reason for this latest round of failures was lender misconduct and fraud. The first such failure tied directly to fraud was Empire Savings of Mesquite, Texas, in March 1984, an insolvency that eventually cost taxpayers nearly $300 million. Another prominent fraud-related failure was Lincoln Savings and Loan, headed by Charles Keating. When Lincoln came under regulatory scrutiny in 1987, Senators Dennis DeConcini, John McCain, Alan Cranston, John Glenn, and Donald Riegle (all of whom had received campaign contributions from Keating and would become known as the “Keating Five”) questioned the appropriateness of the investigation. The subsequent Lincoln failure is estimated to have cost taxpayers over $2 billion. By the end of the decade, government officials estimated that lender misconduct had cost taxpayers more than $75 billion, and the taint of fraud severely tarnished the overall image of the savings and loan industry.

Because most S&Ls were insured by the Federal Savings & Loan Insurance Corporation (FSLIC), few depositors actually lost money when thrifts failed. This was not true for thrifts covered by state deposit insurance funds, and the fragility of these state systems became apparent during the S&L crisis. In 1985, the anticipated failure of Home State Savings Bank of Cincinnati, Ohio sparked a series of deposit runs that threatened to bankrupt that state’s insurance program and eventually prompted the governor to close all S&Ls in the state. Maryland, which also operated a state insurance program, experienced a similar panic when reports of fraud surfaced at Old Court Savings and Loan in Baltimore. In the aftermath of the failures in these two states, all other state deposit insurance funds were terminated and the thrifts they covered were placed under the FSLIC. Eventually, even the FSLIC began to run out of money, and in 1987 the General Accounting Office declared the fund insolvent. Although Congress recapitalized the FSLIC when it passed the Competitive Equality Banking Act, it also authorized regulators to delay closing technically insolvent S&Ls as a way to limit insurance payoffs. The unfortunate consequence of such a policy was that allowing troubled thrifts to remain open and grow eventually increased the losses when failure did occur.

In 1989, the federal government finally created a program to resolve the S&L crisis. In August, Congress passed the Financial Institutions Reform, Recovery, and Enforcement Act (FIRREA), a measure that both bailed out the industry and began the process of re-regulation. FIRREA abolished the Federal Home Loan Bank Board and transferred S&L regulation to the newly created Office of Thrift Supervision. It also terminated the FSLIC and moved the deposit insurance function to the FDIC. Finally, the Resolution Trust Corporation was created to dispose of the assets held by failed thrifts, while S&Ls still in business were placed under stricter oversight. Among the new regulations thrifts had to meet were higher net worth standards and a “Qualified Thrift Lender Test” that forced them to hold at least 70 percent of their assets in areas related to residential real estate.

By the time the S&L crisis ended in the early 1990s, it was by most measures the most expensive financial collapse in American history. Between 1980 and 1993, 1,307 S&Ls with more than $603 billion in assets went bankrupt, at a cost to taxpayers of nearly $500 billion. It should be noted that S&Ls were not the only institutions to suffer in the 1980s, as the decade also witnessed the failure of 1,530 commercial banks controlling more than $230 billion in assets.

Explaining the S&L Crisis

One reason why so many thrifts failed in the 1980s lay in the way the industry was deregulated. S&Ls historically were specialized financial institutions that used relatively long-term deposits to fund long-term mortgages. When thrifts began to lose funds to accounts that paid higher interest rates, initial deregulation focused on loosening deposit restrictions so thrifts could also offer higher rates. Unfortunately, because thrifts still lacked the authority to make variable-rate mortgages, many S&Ls were unable to generate higher income to offset these higher expenses. While the Garn-St. Germain Act tried to correct this problem, the changes it authorized were exceptionally broad and included virtually every type of lending power.
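
To make this mismatch concrete, the short sketch below works through a stylized thrift balance sheet. All of the figures (a $100 million book of fixed-rate mortgages yielding 6 percent, funded by $95 million in deposits whose cost rises from 5 to 9 percent) are hypothetical and chosen only to illustrate the squeeze described above; they are not drawn from any actual institution.

    # Stylized illustration of the interest-rate squeeze on a thrift that
    # funds long-term, fixed-rate mortgages with short-term deposits.
    # All figures are hypothetical and chosen only to show the mechanism.

    mortgage_assets = 100_000_000   # fixed-rate mortgage portfolio ($)
    mortgage_yield = 0.06           # locked-in yield on those mortgages
    deposits = 95_000_000           # deposits funding the portfolio ($)

    def net_interest_income(deposit_rate: float) -> float:
        """Interest earned on mortgages minus interest paid on deposits."""
        return mortgage_assets * mortgage_yield - deposits * deposit_rate

    for rate in (0.05, 0.07, 0.09):
        print(f"deposit cost {rate:.0%}: net interest income "
              f"${net_interest_income(rate):,.0f}")

    # The spread is positive at a 5% deposit cost but turns into a growing
    # loss at 7% and 9%: mortgage income is fixed, funding costs are not.

Because the asset side could not reprice, every rise in funding costs flowed straight through to the bottom line, which is why variable-rate lending authority mattered so much to the industry.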

The S&L crisis was magnified by the fact that deregulation was accompanied by an overall reduction in regulatory oversight. As a result, unscrupulous thrift managers were able to dodge regulatory scrutiny or use an S&L for their own personal gain. This, in turn, was related to another reason why S&Ls failed: insider fraud and mismanagement. Because most thrifts were covered by federal deposit insurance, some lenders facing insolvency embarked on a “go for broke” lending strategy that involved making high-risk loans as a way to recover from their problems. The rationale was that if the risky loan worked, the thrift would make money, and if the loan went bad, insurance would cover the losses.
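
The “go for broke” rationale can be restated as a simple expected-value comparison. The sketch below uses entirely hypothetical numbers (a $10 million speculative loan with a 25 percent chance of a $5 million payoff) to show why the owners of an already insolvent, insured thrift might take a gamble that loses money overall.

    # Hypothetical expected-value comparison behind "go for broke" lending
    # at an insured but insolvent thrift.  Numbers are illustrative only.

    loan = 10_000_000               # size of the speculative loan ($)
    p_success = 0.25                # assumed chance the project pays off
    payoff_if_success = 5_000_000   # profit to the thrift if it works ($)

    # Counting the full downside, the gamble is a bad bet:
    ev_overall = p_success * payoff_if_success - (1 - p_success) * loan
    print(f"expected value counting all losses: ${ev_overall:,.0f}")

    # But the owners of an insolvent, insured thrift keep the upside while
    # deposit insurance absorbs the downside, so their expected value is positive:
    ev_owners = p_success * payoff_if_success
    print(f"expected value to the thrift's owners: ${ev_owners:,.0f}")

    # The first figure is negative (about -$6.25 million) while the owners'
    # is +$1.25 million -- the asymmetry that made high-risk lending attractive.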

One of the most common causes of insolvency, however, was that many thrift managers lacked the experience or knowledge needed to evaluate properly the risks associated with lending in deregulated areas. This applied to any S&L that made secured or unsecured loans other than traditional residential mortgages, since each type of financing entailed unique risks that required specific skills and expertise to identify and mitigate. Such factors meant that bad loans, and in turn thrift failures, could easily result from well-intentioned decisions based on incorrect information.

The S&L Industry in the 21st Century

Although the thrift crisis of the 1980s severely tarnished the S&L image, the industry survived the period and, now under greater government regulation, is once again growing. At the start of the twenty-first century, America’s 1,103 thrift institutions control more than $863 billion in assets, and remain the second-largest repository for consumer savings. While thrift products and services are virtually indistinguishable from those offered by commercial banks (thrifts can even call themselves banks), these institutions have achieved great success by marketing themselves as community-oriented home lending specialists. This strategy is intended to appeal to consumers disillusioned with the emergence of large multi-state banking conglomerates. Despite this rebound, the thrift industry (like the commercial banking industry) continues to face competitive challenges from nontraditional banking services, innovations in financial technology, and the potential for increased regulation.

References

Barth, James. The Great Savings and Loan Debacle. Washington, D.C.: AEI Press, 1991.

Bodfish, Morton, editor. History of Building and Loan in the United States. Chicago: United States Building and Loan League, 1932.

Ewalt, Josephine Hedges. A Business Reborn: The Savings and Loan Story, 1930-1960. Chicago: American Savings and Loan Institute Press, 1962.

Lowy, Martin. High Rollers: Inside the Savings and Loan Debacle. New York: Praeger, 1991.

Mason, David L. “From Building and Loans to Bail-Outs: A History of the American Savings and Loan Industry, 1831-1989.” Ph.D. dissertation, Ohio State University, 2001.

Riegel, Robert and J. Russell Doubman. The Building‑and‑Loan Association. New York: J. Wiley & Sons, Inc., 1927.

Rom, Mark Carl. Public Spirit in the Thrift Tragedy. Pittsburgh: University of Pittsburgh Press, 1996.

Strunk, Norman and Fred Case. Where Deregulation Went Wrong: A Look at the Causes Behind Savings and Loan Failures in the 1980s. Chicago: United States League of Savings Institutions, 1988.

White, Lawrence J. The S&L Debacle: Public Policy Lessons for Bank and Thrift Regulation. New York: Oxford University Press, 1991.

Citation: Mason, David. “Savings and Loan Industry, US”. EH.Net Encyclopedia, edited by Robert Whaples. June 10, 2003. URL http://eh.net/encyclopedia/savings-and-loan-industry-u-s/