
A Concise History of America’s Brewing Industry

Martin H. Stack, Rockhurst University

1650 to 1800: The Early Days of Brewing in America

Brewing in America dates to the first communities established by English and Dutch settlers in the early to mid seventeenth century. Dutch immigrants quickly recognized that the climate and terrain of present-day New York were particularly well suited to brewing beer and growing malt and hops, two of beer’s essential ingredients. A 1660 map of New Amsterdam details twenty-six breweries and taverns, a clear indication that producing and selling beer were popular and profitable trades in the American colonies (Baron, Chapter Three). Despite the early popularity of beer, other alcoholic beverages steadily grew in importance and by the early eighteenth century several of them had eclipsed beer commercially.

Between 1650 and the Civil War, the market for beer did not change a great deal: both production and consumption remained essentially local affairs. Bottling was expensive, and beer did not travel well. Nearly all beer was stored in, and then served from, wooden kegs. While there were many small breweries, it was not uncommon for households to brew their own beer. In fact, several of America’s founding fathers brewed their own beer, including George Washington and Thomas Jefferson (Baron, Chapters 13 and 16).

1800-1865: Brewing Begins to Expand

National production statistics are unavailable before 1810, an omission which reflects the rather limited importance of the early brewing industry. In 1810, America’s 140 commercial breweries collectively produced just over 180,000 barrels of beer.[1] During the next fifty years, total beer output continued to increase, but production remained small scale and local. This is not to suggest, however, that brewing could not prove profitable. In 1797, James Vassar founded a brewery in Poughkeepsie, New York, whose successes echoed far beyond the brewing industry. After several booming years, Vassar ceded control of the brewery to his two sons, Matthew and John. Following the death of his brother in an accident and a fire that destroyed the plant, Matthew Vassar rebuilt the brewery in 1811. Demand for his beer grew rapidly, and by the early 1840s, the Vassar brewery produced nearly 15,000 barrels of ale and porter annually, a significant amount for this period. Continued investment in the firm facilitated even greater production levels, and by 1860 its fifty employees turned out 30,000 barrels of beer, placing it amongst the nation’s largest breweries. Today, the Vassar name is better known for the college Matthew Vassar endowed in 1860 with earnings from the brewery (Baron, Chapter 17).

1865-1920: Brewing Emerges as a Significant Industry

While there were several hundred small-scale, local breweries in the 1840s and 1850s, beer did not become a mass-produced, mass-consumed beverage until the decades following the Civil War. Several factors contributed to beer’s emergence as the nation’s dominant alcoholic drink. First, widespread immigration from strong beer-drinking countries such as Britain, Ireland, and Germany contributed to the creation of a beer culture in the U.S. Second, America was becoming increasingly industrialized and urbanized during these years, and many workers in the manufacturing and mining sectors drank beer during work and after. Third, many workers began to receive higher wages and salaries during these years, enabling them to buy more beer. Fourth, beer benefited from members of the temperance movement who advocated lower-alcohol beer over higher-alcohol spirits such as rum or whiskey.[2] Fifth, a series of technological and scientific developments fostered greater beer production and the brewing of new styles of beer. For example, artificial refrigeration enabled brewers to brew during warm American summers, and pasteurization, the eponymous procedure developed by Louis Pasteur, helped extend packaged beer’s shelf life, making storage and transportation more reliable (Stack, 2000). Finally, American brewers began brewing lager beer, a style that had long been popular in Germany and other continental European countries. Traditionally, beer in America meant British-style ale. Ales are brewed with top-fermenting yeasts, and this category ranges from light pale ales to chocolate-colored stouts and porters. During the 1840s, American brewers began making German-style lager beers. In addition to requiring a longer maturation period than ales, lager beers use a bottom-fermenting yeast and are much more temperature sensitive. Lagers require a great deal of care and attention from brewers, but to the increasing numbers of nineteenth-century German immigrants, lager was synonymous with beer. As the nineteenth century wore on, lager production soared, and by 1900, lager outsold ale by a significant margin.

Together, these factors helped transform the market for beer. Total beer production increased from 3.6 million barrels in 1865 to over 66 million barrels in 1914. By 1910, brewing had grown into one of the leading manufacturing industries in America. Yet, this increase in output did not simply reflect America’s growing population. While the number of beer drinkers certainly did rise during these years, perhaps just as importantly, per capita consumption also rose dramatically, from under four gallons in 1865 to 21 gallons in the early 1910s.

Table 1: Industry Production and per Capita Consumption, 1865-1915

Year National Production (millions of barrels) Per Capita Consumption (gallons)
1865 3.7 3.4
1870 6.6 5.3
1875 9.5 6.6
1880 13.3 8.2
1885 19.2 10.5
1890 27.6 13.6
1895 33.6 15.0
1900 39.5 16.0
1905 49.5 18.3
1910 59.6 20.0
1915 59.8 18.7

Source: United States Brewers Association, 1979 Brewers Almanac, Washington, DC: 12-13.

An equally impressive transformation was underway at the level of the firm. Until the 1870s and 1880s, American breweries had been essentially small scale, local operations. By the late nineteenth century, several companies began to increase their scale of production and scope of distribution. Pabst Brewing Company in Milwaukee and Anheuser-Busch in St. Louis became two of the nation’s first nationally-oriented breweries, and the first to surpass annual production levels of one million barrels. By utilizing the growing railroad system to distribute significant amounts of their beer into distant beer markets, Pabst, Anheuser-Busch and a handful of other enterprises came to be called “shipping” breweries. Though these firms became very powerful, they did not control the pre-Prohibition market for beer. Rather, an equilibrium emerged that pitted large and regional shipping breweries that incorporated the latest innovations in pasteurizing, bottling, and transporting beer against a great number of locally-oriented breweries that mainly supplied draught beer in wooden kegs to their immediate markets (Stack, 2000).

Table 2: Industry Production, the Number of Breweries, and Average Brewery Size, 1865-1915

Year National Production (millions of barrels) Number of Breweries Average Brewery Size (barrels)
1865 3.7 2,252 1,643
1870 6.6 3,286 2,009
1875 9.5 2,783 3,414
1880 13.3 2,741 4,852
1885 19.2 2,230 8,610
1890 27.6 2,156 12,801
1895 33.6 1,771 18,972
1900 39.5 1,816 21,751
1905 49.5 1,847 26,800
1910 59.6 1,568 38,010
1915 59.8 1,345 44,461

Source: United States Brewers Association, 1979 Brewers Almanac, Washington DC: 12-13.
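The final column of Table 2 is simply national production divided by the number of breweries; as a quick check of the units (barrels per brewery), the 1915 row works out to

\[ \frac{59.8 \text{ million barrels}}{1{,}345 \text{ breweries}} \approx 44{,}461 \text{ barrels per brewery.} \]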

Between the Civil War and national prohibition, the production and consumption of beer greatly outpaced those of spirits. Though consumption levels of absolute alcohol had peaked in the early 1800s, temperance and prohibition forces grew increasingly vocal and active as the century wore on, and by the late 1800s, they constituted one of the best-organized political pressure groups of the day (Kerr, Chapter 5, 1985). Their efforts culminated in the ratification of the Eighteenth Amendment on January 29, 1919, which, along with the Volstead Act, made the production and distribution of any beverage with more than one-half of one percent alcohol illegal. While estimates of alcohol activity during Prohibition’s thirteen-year reign, from 1920 to 1933, are imprecise, beer consumption almost certainly fell, though spirit consumption may have remained constant or even increased slightly (Rorbaugh, Appendices).

1920-1933: The Dark Years, Prohibition

The most important decision all breweries had to make after 1920 was what to do with their plants and equipment. As they grappled with this question, they made implicit bets as to whether Prohibition would prove to be merely a temporary irritant. Pessimists immediately divested themselves of all their brewing equipment, often at substantial losses. Other firms decided to carry on with related products, and so stay prepared for any modifications to the Volstead Act which would allow for beer. Schlitz, Blatz, Pabst, and Anheuser-Busch, the leading pre-Prohibition shippers, began producing near beer, a malt beverage with under one-half of one percent alcohol. While it was not a commercial success, its production allowed these firms to keep current their beer-making skills. Anheuser-Busch called its near beer “Budweiser,” which was “simply the old Budweiser lager beer, brewed according to the traditional method, and then de-alcoholized. … August Busch took the same care in purchasing the costly materials as he had done during pre-prohibition days” (Krebs and Orthwein, 1953, 165). Anheuser-Busch and some of the other leading breweries were granted special licenses by the federal government to brew beverages with more than one-half of one percent alcohol for “medicinal purposes” (Plavchan, 1969, 168). Receiving these licenses gave these breweries a competitive advantage, as they were able to keep their brewing staffs active in beer-making.

The shippers, and some local breweries, also made malt syrup. While they officially advertised it as an ingredient for baking cookies, and while its production was left alone by the government, it was readily apparent to all that its primary use was for homemade beer.

Of perhaps equal importance to the day-to-day business activities of the breweries were their investment decisions. Here, as in so many other places, the shippers exhibited true entrepreneurial insight. Blatz, Pabst, and Anheuser-Busch all expanded their inventories of automobiles and trucks, which became key assets after repeal. In the 1910s, Anheuser-Busch invested in motorized vehicles to deliver beer; by the 1920s, it was building its own trucks in great numbers. While it never sought to become a major producer of delivery vehicles, its forward expansion in this area reflected its appreciation of the growing importance of motorized delivery, an insight on which it built after repeal.

The leading shippers also furthered their investments in bottling equipment and machinery, which was used in the production of near beer, root beer, ginger ale, and soft drinks. These products were not the commercial successes beer had been, but they gave breweries important experience in bottling. While 85 percent of pre-Prohibition beer was kegged, during Prohibition over 80 percent of near beer and a smaller, though growing, percentage of soft drinks was sold in bottles.

This remarkable increase in packaged product impelled breweries to refine their packaging skills and modify their retailing practice. As they sold near beer and soft drinks to drugstores and drink stands, they encountered new marketing problems (Cochran, 1948, 340). Experience gained during these years helped the shippers meet radically different distribution requirements of the post-repeal beer market.

They were learning about canning as well as bottling. In 1925, Blatz’s canned malt syrup sales were more than $1.3 million, significantly greater than its bulk sales. In the early 1920s, Anheuser-Busch used cans from the American Can Company, a firm that would gain national prominence in 1935 for helping to pioneer the beer can, to package its malt syrup. Thus, the canning of malt syrup helped create the first contacts between the leading shipping brewers and American Can Company (Plavchan, 1969, 178; Conny, 1990, 35-36; and American Can Company, 1969, 7-9).

These expensive investments in automobiles and bottling equipment were paid for in part by selling off branch properties, namely saloons (see Cochran, 1948; Plavchan, 1969; Krebs and Orthwein, 1953). Some breweries had equipped their saloons with furniture and bar fixtures, but as Prohibition wore on, they progressively divested themselves of these assets.

1933-1945: The Industry Reawakens after the Repeal of Prohibition

In April 1933, Congress amended the Volstead Act to allow for 3.2 percent beer. Eight months later, in December, Congress and the states ratified the Twenty-first Amendment, officially repealing Prohibition. From repeal until World War II, the brewing industry struggled to regain its pre-Prohibition fortunes. Prior to Prohibition, breweries owned or controlled many saloons, which were the dominant retail outlets for alcohol. To prevent the excesses that had been attributed to saloons from recurring, post-repeal legislation forbade alcohol manufacturers from owning bars or saloons, requiring them instead to sell their beer to wholesalers that in turn would distribute their beverages to retailers.

Prohibition meant the end of many small breweries that had been profitable, and that, taken together, had posed a formidable challenge to the large shipping breweries. The shippers, who had much greater investments, were not as inclined to walk away from brewing.[3] After repeal, therefore, they reopened for business in a radically new environment, one in which their former rivals were absent or disadvantaged. From this favorable starting point, they continued to consolidate their position. Several hundred locally oriented breweries did reopen, but were unable to regain their pre-Prohibition competitive edge, and they quickly exited the market. From 1935 to 1940, the number of breweries fell by ten percent.

Table 3: U.S. Brewing Industry Data, 1910-1940

Year Number of Breweries Number of Barrels Produced (millions) Average Barrelage per Brewery Largest Firm Production (millions of barrels) Per Capita Consumption (gallons)
1910 1,568 59.5 37,946 1.5 20.1
1915 1,345 59.8 44,461 1.1 18.7
1934 756 37.7 49,867 1.1 7.9
1935 766 45.2 59,008 1.1 10.3
1936 739 51.8 70,095 1.3 11.8
1937 754 58.7 77,851 1.8 13.3
1938 700 56.3 80,429 2.1 12.9
1939 672 53.8 80,059 2.3 12.3
1940 684 54.9 80,263 2.5 12.5

Source: Cochran, 1948; Krebs and Orthwein, 1953; and United States Brewers Almanac, 1956.

Annual industry output, after struggling in 1934 and 1935, began to approach the levels reached in the 1910s. Yet these total increases are somewhat misleading, as the population of the U.S. had risen from between 92 and 98 million in the 1910s to between 125 and 130 million in the 1930s (Brewers Almanac, 1956, 10). This translated directly into the lower per capita consumption levels reported in Table 3.
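A rough back-of-the-envelope check illustrates the arithmetic behind Table 3. Assuming that essentially all output was consumed domestically, and using the 31-gallon barrel from note 1 together with a population of roughly 130 million, the 1940 figures imply

\[ \frac{54.9 \text{ million barrels} \times 31 \text{ gallons per barrel}}{\approx 130 \text{ million people}} \approx 13 \text{ gallons per person,} \]

on the order of the 12.5 gallons reported in the table; the modest gap presumably reflects exports, inventory changes, and the difference between barrels produced and barrels actually consumed.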

The largest firms grew even larger in the years following repeal, quickly surpassing their pre-Prohibition annual production levels. The post-repeal industry leaders, Anheuser-Busch and Pabst, doubled their annual production levels from 1935 to 1940.

The growing importance of the leading shippers during this period should not be taken for granted, for it marked a momentous reversal of pre-Prohibition trends. While medium-sized breweries dominated industry output in the years leading up to Prohibition, the shippers regained in the 1930s the dynamism they had manifested from the 1870s to the 1890s. Table 4 compares the fortunes of the shippers in relation to the industry as a whole. From 1877 to 1895, Anheuser-Busch and Pabst, the two most prominent shippers, grew much faster than the industry, and their successes helped pull the industry along. This picture changed during the years 1895 to 1915, when the industry grew much faster than the shippers (Stack, 2000). With the repeal of Prohibition, the tides changed again: from 1934 to 1940, the brewing industry grew very slowly, while Anheuser-Busch and Pabst enjoyed tremendous increases in their annual sales.

Table 4: Percentage Change in Output among Shipping Breweries, 1877-1940

Period Anheuser-Busch Pabst Industry
1877-1895 1,106% 685% 248%
1895-1914 58% -23% 78%
1934-1940 173% 87% 26%

Source: Cochran, 1948; Krebs and Orthwein, 1953; and Brewers Almanac, 1956.

National and regional shippers increasingly dominated the market. Breweries such as Anheuser-Busch, Pabst and Schlitz came to exemplify the modern business enterprise, as described by Alfred Chandler (Chandler, 1977), which adeptly integrated mass production and mass distribution.

Table 5: Leading Brewery Output Levels, 1938-1940

Brewery Plant Location(s) 1938 (bls) 1939 (bls) 1940 (bls)
Anheuser-Busch St. Louis, MO 2,087,000 2,306,000 2,468,000
Pabst Brewing Milwaukee, WI; Peoria Heights, IL 1,640,000 1,650,000 1,730,000
Jos. Schlitz Milwaukee, WI 1,620,000 1,651,083 1,570,000
F & M Schafer Brooklyn, NY 1,025,000 1,305,000 1,390,200
P. Ballantine Newark, NJ 1,120,000 1,289,425 1,322,346
Jacob Ruppert New York, NY 1,417,000 1,325,350 1,228,400
Falstaff Brewing St. Louis, MO; New Orleans, LA; Omaha, NE 622,000 622,004 684,537
Duquesne Brewing Pittsburgh, PA; Carnegie, PA; McKees Rock, PA 625,000 680,000 690,000
Theo. Hamm Brewing St. Paul, MN 750,000 780,000 694,200
Liebman Breweries Brooklyn, NY 625,000 632,558 670,198

Source: Fein, 1942, 35.

World War One had presented a direct threat to the brewing industry. Government officials used war-time emergencies to impose grain rationing, a step that led to the lowering of the alcohol level of beer to 2.75 percent. World War Two had a completely different effect on the industry: rather than output falling, beer production rose from 1941 to 1945.

Table 6: Production and Per Capita Consumption, 1940-1945


Year Number of Breweries Number of barrels withdrawn (millions) Per Capita Consumption (gallons)
1940 684 54.9 12.5
1941 574 55.2 12.3
1942 530 63.7 14.1
1943 491 71.0 15.8
1944 469 81.7 18.0
1945 468 86.6 18.6

Source: 1979 USBA, 12-14.

During the war, the industry mirrored the nation at large by casting off its sluggish depression-era growth. As the war economy boomed, consumers, both troops and civilians, used some of their wages for beer, and per capita consumption grew by 50 percent between 1940 and 1945.

1945-1980: Following World War II, the Industry Continues to Grow and to Consolidate

Yet the takeoff registered during World War II was not sustained during the ensuing decades. Total production continued to grow, but at a slower rate than the overall population.

Table 7: Production and per Capita Consumption, 1945-1980

Year Number of Breweries Number of barrels withdrawn (millions) Per Capita Consumption (gallons)
1945 468 86.6 18.6
1950 407 88.8 17.2
1955 292 89.8 15.9
1960 229 94.5 15.4
1965 197 108.0 16.0
1970 154 134.7 18.7
1975 117 157.9 21.1
1980 101 188.4 23.1

Source: 1993 USBA, 7-8.

The period following WWII was characterized by great industry consolidation. Total output continued to grow, though per capita consumption fell into the 1960s before rebounding to levels above 21 gallons per capita in the 1970s, the highest rates in the nation’s history. Not since the 1910s had consumption levels topped 21 gallons a year; however, there was a significant difference. Prior to Prohibition, most consumers bought their beer from local or regional firms, and over 85 percent of the beer was served from casks in saloons. Following World War II, two significant changes radically altered the market for beer. First, the total number of breweries operating fell dramatically. This signaled the growing importance of the large national breweries. While many of these firms — Anheuser-Busch, Pabst, Schlitz, and Blatz — had grown into prominence in the late nineteenth century, the scale of their operations grew tremendously in the years after the repeal of Prohibition. From the mid-1940s to 1980, the five largest breweries saw their share of the national market grow from 19 to 75 percent (Adams, 125).

Table 8: Concentration of the Brewing Industry, 1947-1981

Year Five Largest (%) Ten Largest (%) Herfindahl Index[4]
1947 19.0 28.2 140
1954 24.9 38.3 240
1958 28.5 45.2 310
1964 39.0 58.2 440
1968 47.6 63.2 690
1974 64.0 80.8 1080
1978 74.3 92.3 1292
1981 75.9 93.9 1614

Source: Adams, 1995, 125.

The other important change concerned how beer was sold. Prior to Prohibition, nearly all beer was sold on tap in bars or saloons; while approximately 10-15 percent of the beer was bottled, it was much more expensive than draught beer. In 1935, a few years after repeal, the American Can Company successfully canned beer for the first time. The spread of home refrigeration helped spur consumer demand for canned and bottled beer, and from 1935 onwards, draught beer sales fell markedly.

Table 9: Packaged vs. Draught Sales, 1935-1980

Year Packaged sales (bottled and canned) as a percentage of total sales Draught sales as a percentage of total sales
1935 30 70
1940 52 48
1945 64 36
1950 72 28
1955 78 22
1960 81 19
1965 82 18
1970 86 14
1975 88 12
1980 88 12

Source: 1979 USBA, 20; 1993 USBA, 14.

The rise of packaged beer contributed to the growing industry consolidation detailed in Table 8.

1980-2000: Continued Growth, the Microbrewery Movement, and International Dimensions of the Brewing Industry

From 1980 to 2000, beer production continued to rise, reaching nearly 200 million barrels in 2000. Per capita consumption hit its highest recorded level in 1981 with 23.8 gallons. Since then, though, consumption levels have dropped a bit, and during the 1990s, consumption was typically in the 21-22 gallon range.

Table 10: Production and Per Capita Consumption, 1980-1990

Year Number of Breweries Number of barrels withdrawn (millions) Per Capita Consumption (gallons)
1980 101 188.4 23.1
1985 105 193.8 22.7
1990 286 201.7 22.6

Source: 1993 USBA, 7-8.

Beginning around 1980, the long decline in the number of breweries slowed and then was reversed. Judging solely by the number of breweries in operation, it appeared that a significant change had occurred: the number of firms began to increase, and by the late 1990s, hundreds of new breweries were operating in the U.S. However, this number is rather misleading: the overall industry remained very concentrated, with a three-firm concentration ratio in 2000 of 81 percent.

Table 11: Production Levels of the Leading Breweries, 2000

Brewery Production (millions of barrels)
Anheuser-Busch 99.2
Miller 39.8
Coors 22.7
Total Domestic Sales 199.4

Source: Beverage Industry, May 2003, 19.
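The 81 percent three-firm concentration ratio cited above follows directly from Table 11:

\[ CR_3 = \frac{99.2 + 39.8 + 22.7}{199.4} = \frac{161.7}{199.4} \approx 0.81, \text{ or } 81 \text{ percent.} \]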

Although entrepreneurs and beer enthusiasts began hundreds of new breweries during this period, most of them were very small, with annual production levels between 5,000 and 100,000 barrels. Reflecting their small size, these new firms were nicknamed microbreweries. Collectively, microbreweries have grown to account for approximately 5-7 percent of the total beer market.

Microbreweries represented a new strategy in the brewing industry: rather than competing on the basis of price or advertising, they attempted to compete on the basis of inherent product characteristics. They emphasized the freshness of locally produced beer; they experimented with much stronger malt and hop flavors; they tried new and long-discarded brewing recipes, often reintroducing styles that had been popular in America decades earlier. Together, these breweries have had an influence much greater than their market share would suggest. The big three breweries, Anheuser-Busch, Miller, and Coors, have all tried to incorporate ideas from the microbrewery movement. They have introduced new marquee brands intended to compete for some of this market, and when this failed, they have bought shares in or outright control of some microbreweries.

A final dimension of the brewing industry that has been changing concerns the emerging global market for beer. Until very recently, America was the biggest beer market in the world; as a result, American breweries have not historically looked abroad for additional sales, preferring to expand their share of the domestic market.[5] In the 1980s, Anheuser-Busch began to systematically evaluate its market position. While it had done very well in the U.S., it had not tapped markets overseas; as a result, it began a series of international business dealings. It gradually moved from exporting small amounts of its flagship brand Budweiser to entering into licensing accords whereby breweries in a range of countries such as Ireland, Japan, and Argentina began to brew Budweiser for sale in their domestic markets. In 1995, it established its first breweries outside of the U.S., one in England for the European market and the other in China, to service the growing markets in China and East Asia.[6]

While U.S. breweries such as Anheuser-Busch have only recently begun to explore the opportunities abroad, foreign firms have long appreciated the significance of the American market. Beginning in the late 1990s, imports increased their market share, and by the early 2000s, they accounted for approximately 12 percent of the large U.S. market. Imports and microbrews typically cost more than the big three’s beers, and they provide a wider range of flavors and tastes. One of the most interesting developments in the international market for beer occurred in 2002, when South African Breweries (SAB), the dominant brewery in South Africa and an active firm in Europe, acquired Miller, the second largest brewery in the U.S. Though not widely discussed in the U.S., this may portend a general move towards increased global integration in the world market for beer.

Annotated Bibliography

Adams, Walter and James Brock, editors. The Structure of American Industry, ninth edition. Englewood Cliffs, New Jersey: Prentice Hall, 1995.

Apps, Jerry. Breweries of Wisconsin. Madison, WI: University of Wisconsin Press, 1992. Detailed examination of the history of breweries and brewing in Wisconsin.

Baron, Stanley. Brewed In America: A History of Beer and Ale in the United States. Boston: Little, Brown, and Co, 1962: Very good historical overview of brewing in America, from the Pilgrims through the post-World War II era.

Baum, Dan. Citizen Coors: A Grand Family Saga of Business, Politics, and Beer. New York: Harper Collins, 2000. Very entertaining story of the Coors family and the brewery they made famous.

Beverage Industry (May 2003): 19-20.

Blum, Peter. Brewed In Detroit: Breweries and Beers since 1830. Detroit: Wayne State University Press, 1999. Very good discussion of Detroit’s major breweries and how they evolved. Particularly strong on the Stroh brewery.

Cochran, Thomas. Pabst Brewing Company: The History of an American Business. New York: New York University Press, 1948: A very insightful, well-researched, and well- written history of one of America’s most important breweries. It is strongest on the years leading up to Prohibition.

Downard, William. The Cincinnati Brewing Industry: A Social and Economic History. Ohio University Press, 1973: A good history of brewing in Cincinnati; particularly strong in the years prior to Prohibition.

Downard, William. Dictionary of the History of the American Brewing and Distilling Industries. Westport, CT: Greenwood Press, 1980: Part dictionary and part encyclopedia, a useful compendium of terms, people, and events relating to the brewing and distilling industries.

Duis, Perry. The Saloon: Public Drinking in Chicago and Boston, 1880-1920. Urbana: University of Illinois Press, 1983: An excellent overview of the institution of the saloon in pre-Prohibition America.

Eckhardt, Fred. The Essentials of Beer Style. Portland, OR: Fred Eckhardt Communications, 1995: A helpful introduction into the basics of how beer is made and how beer styles differ.

Ehret, George. Twenty-Five Years of Brewing. New York: Gast Lithograph and Engraving, 1891: An interesting snapshot of an important late nineteenth century New York City brewery.

Elzinga, Kenneth. “The Beer Industry.” In The Structure of American Industry, ninth edition, edited by W. Adams and J. Brock. Englewood Cliffs, New Jersey: Prentice Hall, 1995: A good overview summary of the history, structure, conduct, and performance of America’s brewing industry.

Fein, Edward. “The 25 Leading Brewers in the United States Produce 41.5% of the Nation’s Total Beer Output.” Brewers Digest 17 (October 1942): 35.

Greer, Douglas. “Product Differentiation and Concentration in the Brewing Industry,” Journal of Industrial Economics 29 (1971): 201-19.

Greer, Douglas. “The Causes of Concentration in the Brewing Industry,” Quarterly Review of Economics and Business 21 (1981): 87-106.

Greer, Douglas. “Beer: Causes of Structural Change.” In Industry Studies, second edition, edited by Larry Duetsch, Armonk, New York: M.E. Sharpe, 1998.

Hernon, Peter and Terry Ganey. Under the Influence: The Unauthorized Story of the Anheuser-Busch Dynasty. New York: Simon and Schuster, 1991: Somewhat sensationalistic history of the family that has controlled America’s largest brewery, but some interesting pieces on the brewery are included.

Horowitz, Ira and Ann Horowitz. “Firms in a Declining Market: The Brewing Case.” Journal of Industrial Economics 13 (1965): 129-153.

Jackson, Michael. The New World Guide To Beer. Philadelphia: Running Press, 1988: Good overview of the international world of beer and of America’s place in the international beer market.

Keithan, Charles. The Brewing Industry. Washington D.C: Federal Trade Commission, 1978.

Kerr, K. Austin. Organized for Prohibition. New Haven: Yale Press, 1985: Excellent study of the rise of the Anti-Saloon League in the United States.

Kostka, William. The Pre-prohibition History of Adolph Coors Company: 1873-1933. Golden, CO: self-published book, Adolph Coors Company, 1973: A self-published book by the Coors company that provides some interesting insights into the origins of the Colorado brewery.

Krebs, Roland and Orthwein, Percy. Making Friends Is Our Business: 100 Years of Anheuser-Busch. St. Louis, MO: self-published book, Anheuser-Busch, 1953: A self-published book by the Anheuser-Busch brewery that has some nice illustrations and data on firm output levels. The story is nicely told but rather self-congratulatory.

“Large Brewers Boost Share of U.S. Beer Business,” Brewers Digest, 15 (July 1940): 55-57.

Leisley, Bruce. A History of Leisley Brewing. North Newton Kansas: Mennonite Press, 1975: A short but useful history of the Leisley Brewing Company. This was the author’s undergraduate thesis.

Lender, Mark and James Martin. Drinking in America. New York: The Free Press, 1987: Good overview of the social history of drinking in America.

McGahan, Ann. “The Emergence of the National Brewing Oligopoly: Competition in the American Market, 1933-58.” Business History Review 65 (1991): 229-284: Excellent historical analysis of the origins of the brewing oligopoly following the repeal of Prohibition.

McGahan, Ann. “Cooperation in Prices and Capacities: Trade Associations in Brewing after Repeal.” Journal of Law and Economics 38 (1995): 521-559.

Meier, Gary and Meier, Gloria. Brewed in the Pacific Northwest: A History of Beer Making in Oregon and Washington. Seattle: Fjord Press, 1991: A survey of the history of brewing in the Pacific Northwest.

Miller, Carl. Breweries of Cleveland. Cleveland, OH: Schnitzelbank Press, 1998: Good historical overview of the brewing industry in Cleveland.

Norman, Donald. Structural Change and Performance in the U.S. Brewing Industry. Ph.D. dissertation, UCLA, 1975.

One Hundred Years of Brewing. Chicago and New York: Arno Press Reprint, 1903 (Reprint 1974): A very important work. Very detailed historical discussion of the American brewing industry through the end of the nineteenth century.

Persons, Warren. Beer and Brewing In America: An Economic Study. New York: United Brewers Industrial Foundation, 1940.

Plavchan, Ronald. A History of Anheuser-Busch, 1852-1933. Ph.D. dissertation, St. Louis University, 1969: Apart from Cochran’s analysis of Pabst, one of a very few detailed business histories of a major American brewery.

Research Company of America. A National Survey of the Brewing Industry. Self-published, 1941: A well-researched industry analysis with a wealth of information and data.

Rorbaugh, William. The Alcoholic Republic: An American Tradition. New York: Oxford University Press, 1979: Excellent scholarly overview of drinking habits in America.

Rubin, Jay. “The Wet War: American Liquor, 1941-1945.” In Alcohol, Reform, and Society, edited by J. Blocker. Westport, CT: Greenwood Press, 1979: Interesting discussion of American drinking during World War II.

Salem, Frederick. Beer: Its History and Its Economic Value as a National Beverage. New York: Arno Press, 1880 (Reprint 1972): Early but valuable discussion of the American brewing industry.

Scherer, F.M. Industry Structure, Strategy, and Public Policy. New York: Harper Collins, 1996: A very good essay on the brewing industry.

Shih, Ko Ching and C. Ying Shih. American Brewing Industry and the Beer Market. Brookfield, WI, 1958: Good overview of the industry with some excellent data tables.

Skilnik, Bob. The History of Beer and Brewing in Chicago: 1833-1978. Pogo Press, 1999: Good overview of the history of brewing in Chicago.

Smith, Greg. Beer in America: The Early Years, 1587 to 1840. Boulder, CO: Brewers Publications, 1998: Well written account of beer’s development in America, from the Pilgrims to mid-nineteenth century.

Stack, Martin. “Local and Regional Breweries in America’s Brewing Industry, 1865-1920.” Business History Review 74 (Autumn 2000): 435-63.

Thomann, Gallus. American Beer: Glimpses of Its History and Description of Its Manufacture. New York: United States Brewing Association, 1909: Interesting account of the state of the brewing industry at the turn of the twentieth century.

United States Brewers Association. Annual Year Book, 1909-1921. Very important primary source document published by the leading brewing trade association.

United States Brewers Foundation. Brewers Almanac, published annually, 1941-present: Very important primary source document published by the leading brewing trade association.

Van Wieren, Dale. American Breweries II. West Point, PA: Eastern Coast Brewiana Association, 1995. Comprehensive historical listing of every brewery in every state, arranged by city within each state.


[1] A barrel of beer is 31 gallons. One Hundred Years of Brewing, Chicago and New York: Arno Press Reprint, 1974: 252.

[2] During the nineteenth century, there were often distinctions between temperance advocates, who differentiated between spirits and beer, and prohibition supporters, who campaigned on the need to eliminate all alcohol.

[3] The major shippers may have been taken aback by the loss suffered by Lemp, one of the leading pre-Prohibition shipping breweries. Lemp was sold at auction in 1922 at a loss of 90 percent on the investment (Baron, 1962, 315).

[4] The Herfindahl Index sums the squared market shares of the fifty largest firms.
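In symbols, if $s_i$ denotes the market share of firm $i$ measured in percentage points, then

\[ HHI = \sum_{i=1}^{50} s_i^{2}. \]

As an illustration, a market divided evenly among fifty firms of 2 percent each would score $50 \times 2^{2} = 200$, while a pure monopoly would score $100^{2} = 10{,}000$; the rise from 140 in 1947 to 1,614 in 1981 in Table 8 therefore indicates substantial consolidation.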

[5] China overtook the United States as the world’s largest beer market in 2002.

[6] http://www.anheuser-busch.com/over/international.html

tively extract content, Imported Full Body :( May need to used a more carefully tuned import template.–>

Martin H. Stack, Rockhurst Universtiy

1650 to 1800: The Early Days of Brewing in America

Brewing in America dates to the first communities established by English and Dutch settlers in the early to mid seventeenth century. Dutch immigrants quickly recognized that the climate and terrain of present-day New York were particularly well suited to brewing beer and growing malt and hops, two of beer’s essential ingredients. A 1660 map of New Amsterdam details twenty-six breweries and taverns, a clear indication that producing and selling beer were popular and profitable trades in the American colonies (Baron, Chapter Three). Despite the early popularity of beer, other alcoholic beverages steadily grew in importance and by the early eighteenth century several of them had eclipsed beer commercially.

Between 1650 and the Civil War, the market for beer did not change a great deal: both production and consumption remained essentially local affairs. Bottling was expensive, and beer did not travel well. Nearly all beer was stored in, and then served from, wooden kegs. While there were many small breweries, it was not uncommon for households to brew their own beer. In fact, several of America’s founding fathers brewed their own beer, including George Washington and Thomas Jefferson (Baron, Chapters 13 and 16).

1800-1865: Brewing Begins to Expand

National production statistics are unavailable before 1810, an omission which reflects the rather limited importance of the early brewing industry. In 1810, America’s 140 commercial breweries collectively produced just over 180,000 barrels of beer.[1] During the next fifty years, total beer output continued to increase, but production remained small scale and local. This is not to suggest, however, that brewing could not prove profitable. In 1797, James Vassar founded a brewery in Poughkeepsie, New York whose successes echoed far beyond the brewing industry. After several booming years Vassar ceded control of the brewery to his two sons, Matthew and John. Following the death of his brother in an accident and a fire that destroyed the plant, Matthew Vassar rebuilt the brewery in 1811. Demand for his beer grew rapidly, and by the early 1840s, the Vassar brewery produced nearly 15,000 barrels of ale and porter annually, a significant amount for this period. Continued investment in the firm facilitated even greater production levels, and by 1860 its fifty employees turned out 30,000 barrels of beer, placing it amongst the nation’s largest breweries. Today, the Vassar name is better known for the college Matthew Vassar endowed in 1860 with earnings from the brewery (Baron, Chapter 17).

1865-1920: Brewing Emerges as a Significant Industry

While there were several hundred small scale, local breweries in the 1840s and 1850s, beer did not become a mass-produced, mass-consumed beverage until the decades following the Civil War. Several factors contributed to beer’s emergence as the nation’s dominant alcoholic drink. First, widespread immigration from strong beer drinking countries such as Britain, Ireland, and Germany contributed to the creation of a beer culture in the U.S.. Second, America was becoming increasingly industrialized and urbanized during these years, and many workers in the manufacturing and mining sectors drank beer during work and after. Third, many workers began to receive higher wages and salaries during these years, enabling them to buy more beer. Fourth, beer benefited from members of the temperance movement who advocated lower alcohol beer over higher alcohol spirits such as rum or whiskey.[2] Fifth, a series of technological and scientific developments fostered greater beer production and the brewing of new styles of beer. For example, artificial refrigeration enabled brewers to brew during warm American summers, and pasteurization, the eponymous procedure developed by Louis Pasteur, helped extend packaged beer’s shelf life, making storage and transportation more reliable (Stack, 2000). Finally, American brewers began brewing lager beer, a style that had long been popular in Germany and other continental European countries. Traditionally, beer in America meant British-style ale. Ales are brewed with top fermenting yeasts, and this category ranges from light pale ales to chocolate-colored stouts and porters. During the 1840s, American brewers began making German-style lager beers. In addition to requiring a longer maturation period than ales, lager beers use a bottom fermenting yeast and are much more temperature sensitive. Lagers require a great deal of care and attention from brewers, but to the increasing numbers of nineteenth century German immigrants, lager was synonymous with beer. As the nineteenth century wore on, lager production soared, and by 1900, lager outsold ale by a significant margin.

Together, these factors helped transform the market for beer. Total beer production increased from 3.6 million barrels in 1865 to over 66 million barrels in 1914. By 1910, brewing had grown into one of the leading manufacturing industries in America. Yet, this increase in output did not simply reflect America’s growing population. While the number of beer drinkers certainly did rise during these years, perhaps just as importantly, per capita consumption also rose dramatically, from under four gallons in 1865 to 21 gallons in the early 1910s.

Table 1: Industry Production and per Capita Consumption, 1865-1915

width=”540″>

Year National Production (millions of barrels) Per Capita Consumption (gallons)
1865 3.7 3.4
1870 6.6 5.3
1875 9.5 6.6
1880 13.3 8.2
1885 19.2 10.5
1890 27.6 13.6
1895 33.6 15.0
1900 39.5 16.0
1905 49.5 18.3
1910 59.6 20.0
1915 59.8 18.7

Source: United States Brewers Association, 1979 Brewers Almanac, Washington, DC: 12-13.

An equally impressive transformation was underway at the level of the firm. Until the 1870s and 1880s, American breweries had been essentially small scale, local operations. By the late nineteenth century, several companies began to increase their scale of production and scope of distribution. Pabst Brewing Company in Milwaukee and Anheuser-Busch in St. Louis became two of the nation’s first nationally-oriented breweries, and the first to surpass annual production levels of one million barrels. By utilizing the growing railroad system to distribute significant amounts of their beer into distant beer markets, Pabst, Anheuser-Busch and a handful of other enterprises came to be called “shipping” breweries. Though these firms became very powerful, they did not control the pre-Prohibition market for beer. Rather, an equilibrium emerged that pitted large and regional shipping breweries that incorporated the latest innovations in pasteurizing, bottling, and transporting beer against a great number of locally-oriented breweries that mainly supplied draught beer in wooden kegs to their immediate markets (Stack, 2000).

Table 2: Industry Production, the Number of Breweries, and Average Brewery Size

1865-1915

width=”504″>

Year National Production (millions of barrels) Number of Breweries Average Brewery Size (thousands of barrels)
1865 3.7 2,252 1,643
1870 6.6 3,286 2,009
1875 9.5 2,783 3,414
1880 13.3 2,741 4,852
1885 19.2 2,230 8,610
1890 27.6 2,156 12,801
1895 33.6 1,771 18,972
1900 39.5 1,816 21,751
1905 49.5 1,847 26,800
1910 59.6 1,568 38,010
1915 59.8 1,345 44,461

Source: United States Brewers Association, 1979 Brewers Almanac, Washington DC: 12-13.

Between the Civil War and national prohibition, the production and consumption of beer greatly outpaced spirits. Though consumption levels of absolute alcohol had peaked in the early 1800s, temperance and prohibition forces grew increasingly vocal and active as the century wore on, and by the late 1800s, they constituted one of the best-organized political pressure groups of the day (Kerr, Chapter 5, 1985). Their efforts culminated in the ratification of the Eighteenth Amendment on January 29, 1919 that, along with the Volstead Act, made the production and distribution of any beverages with more than one-half of one percent alcohol illegal. While estimates of alcohol activity during Prohibition’s thirteen year reign — from 1920 to 1933 — are imprecise, beer consumption almost certainly fell, though spirit consumption may have remained constant or actually even increased slightly (Rorbaugh, Appendices).

1920-1933: The Dark Years, Prohibition

The most important decision all breweries had to make after 1920 was what to do with their plants and equipment. As they grappled with this question, they made implicit bets as to whether Prohibition would prove to be merely a temporary irritant. Pessimists immediately divested themselves of all their brewing equipment, often at substantial losses. Other firms decided to carry on with related products, and so stay prepared for any modifications to the Volstead Act which would allow for beer. Schlitz, Blatz, Pabst, and Anheuser-Busch, the leading pre-Prohibition shippers, began producing near beer, a malt beverage with under one-half of one percent alcohol. While it was not a commercial success, its production allowed these firms to keep current their beer-making skills. Anheuser-Busch called its near beer “Budweiser” which was “simply the old Budweiser lager beer, brewed according to the traditional method, and then de-alcoholized. … August Busch took the same care in purchasing the costly materials as he had done during pre-prohibition days” (Krebs and Orthwein, 1953, 165). Anheuser-Busch and some of the other leading breweries were granted special licenses by the federal government for brewing alcohol greater than one half of one percent for “medicinal purposes” (Plavchan, 1969, 168). Receiving these licensees gave these breweries a competitive advantage as they were able to keep their brewing staff active in beer-making.

The shippers, and some local breweries, also made malt syrup. While they officially advertised it as an ingredient for baking cookies, and while its production was left alone by the government, it was readily apparent to all that its primary use was for homemade beer.

Of perhaps equal importance to the day-to-day business activities of the breweries were their investment decisions. Here, as in so many other places, the shippers exhibited true entrepreneurial insight. Blatz, Pabst, and Anheuser-Busch all expanded their inventories of automobiles and trucks, which became key assets after repeal. In the 1910s, Anheuser-Busch invested in motorized vehicles to deliver beer; by the 1920s, it was building its own trucks in great numbers. While it never sought to become a major producer of delivery vehicles, its forward expansion in this area reflected its appreciation of the growing importance of motorized delivery, an insight which they built on after repeal.

The leading shippers also furthered their investments in bottling equipment and machinery, which was used in the production of near beer, root beer, ginger ale, and soft drinks. These products were not the commercial successes beer had been, but they gave breweries important experience in bottling. While 85 percent of pre-Prohibition beer was kegged, during Prohibition over 80 percent of near beer and a smaller, though growing, percentage of soft drinks was sold in bottles.

This remarkable increase in packaged product impelled breweries to refine their packaging skills and modify their retailing practice. As they sold near beer and soft drinks to drugstores and drink stands, they encountered new marketing problems (Cochran, 1948, 340). Experience gained during these years helped the shippers meet radically different distribution requirements of the post-repeal beer market.

They were learning about canning as well as bottling. In 1925, Blatz’s canned malt syrup sales were more than $1.3 million, significantly greater than its bulk sales. Anheuser-Busch used cans from the American Can Company for its malt syrup in the early 1920s, a firm which would gain national prominence in 1935 for helping to pioneer the beer can. Thus, the canning of malt syrup helped create the first contacts between the leading shipping brewers and American Can Company (Plavchan, 1969, 178; Conny, 1990, 35-36; and American Can Company, 1969, 7-9).

These expensive investments in automobiles and bottling equipment were paid for in part by selling off branch properties, namely saloons (See Cochran, 1948; Plavchan, 1969; Krebs and Orthwein, 1953). Some had equipped their saloons with furniture and bar fixtures, but as Prohibition wore on, they progressively divested themselves of these assets.

1933-1945: The Industry Reawakens after the Repeal of Prohibition

In April 1933 Congress amended the Volstead Act to allow for 3.2 percent beer. Eight months later, in December, Congress and the states ratified the Twenty-first Amendment, officially repealing Prohibition. From repeal until World War II, the brewing industry struggled to regain its pre-Prohibition fortunes. Prior to prohibition, breweries owned or controlled many saloons, which were the dominant retail outlets for alcohol. To prevent the excesses that had been attributed to saloons from reoccurring, post-repeal legislation forbade alcohol manufacturers from owning bars or saloons, requiring them instead to sell their beer to wholesalers that in turn would distribute their beverages to retailers.

Prohibition meant the end of many small breweries that had been profitable, and that, taken together, had posed a formidable challenge to the large shipping breweries. The shippers, who had much greater investments, were not as inclined to walk away from brewing.[3] After repeal, therefore, they reopened for business in a radically new environment, one in which their former rivals were absent or disadvantaged. From this favorable starting point, they continued to consolidate their position. Several hundred locally oriented breweries did reopen, but were unable to regain their pre-Prohibition competitive edge, and they quickly exited the market. From 1935 to 1940, the number of breweries fell by ten percent.

Table 3: U.S. Brewing Industry Data, 1910-1940

Year Number of Breweries Number of Barrels Produced (millions) Average Barrelage per Brewery Largest Firm Production (millions of barrels) Per Capita Consumption (gallons)
1910 1,568 59.5 37,946 1.5 20.1
1915 1,345 59.8 44,461 1.1 18.7
1934 756 37.7 49,867 1.1 7.9
1935 766 45.2 59,008 1.1 10.3
1936 739 51.8 70,095 1.3 11.8
1937 754 58.7 77,851 1.8 13.3
1938 700 56.3 80,429 2.1 12.9
1939 672 53.8 80,059 2.3 12.3
1940 684 54.9 80,263 2.5 12.5

Source: Cochran, 1948; Krebs and Orthwein, 1953; and United States Brewers Almanac, 1956.

Annual industry output, after struggling in 1934 and 1935, began to approach the levels reached in the 1910s. Yet, these total increases are somewhat misleading, as the population of the U.S. had risen from 92 to 98 million in the 1910s to 125 to 130 million in the 1930s (Brewers Almanac, 1956, 10). This translated directly into the lower per capita consumption levels reported in Table 3.

The largest firms grew even larger in the years following repeal, quickly surpassing their pre-Prohibition annual production levels. The post-repeal industry leaders, Anheuser-Busch and Pabst, doubled their annual production levels from 1935 to 1940.

To take for granted the growing importance of the leading shippers during this period is to ignore their momentous reversal of pre-Prohibition trends. While medium-sized breweries dominated the industry output in the years leading up to Prohibition, the shippers regained in the 1930s the dynamism they manifested from the 1870s to the 1890s. Table 4 compares the fortunes of the shippers in relation to the industry as a whole. From 1877 to 1895, Anheuser-Busch and Pabst, the two most prominent shippers, grew much faster than the industry, and their successes helped pull the industry along. This picture changed during the years 1895 to 1915, when the industry grew much faster than the shippers (Stack, 2000). With the repeal of Prohibition, the tides changed again: from 1934 to 1940, the brewing industry grew very slowly, while Anheuser-Busch and Pabst enjoyed tremendous increases in their annual sales.

Table 4: Percentage Change in Output among Shipping Breweries, 1877-1940

Period Anheuser-Busch Pabst Industry
1877-1895 1,106% 685% 248%
1895-1914 58% -23% 78%
1934-1940 173% 87% 26%

Source: Cochran, 1948; Krebs and Orthwein, 1953; and Brewers Almanac, 1956.

National and regional shippers increasingly dominated the market. Breweries such as Anheuser-Busch, Pabst and Schlitz came to exemplify the modern business enterprise, as described by Alfred Chandler (Chandler, 1977), which adeptly integrated mass production and mass distribution.

Table 5: Leading Brewery Output Levels, 1938-1940

Brewery Plant Location 1938 (bls) 1939 (bls) 1940 (bls)
Anheuser-Busch St. Louis, MO 2,087,000 2,306,000 2,468,000
Pabst Brewing Milwaukee, WI

Peoria Heights, IL

1,640,000 1,650,000 1,730,000
Jos. Schlitz Milwaukee, WI 1,620,000 1,651,083 1,570,000
F & M Schafer Brooklyn, NY 1,025,000 1,305,000 1,390,200
P. Ballantine Newark, NJ 1,120,000 1,289,425 1,322,346
Jacob Ruppert New York, NY 1,417,000 1,325,350 1,228,400
Falstaff Brewing St. Louis, MO

New Orleans, LA

Omaha, NE

622,000 622,004 684,537
Duquesne Brewing Pittsburgh, PA

Carnegie, PA

McKees Rock, PA

625,000 680,000 690,000
Theo. Hamm Brewing St. Paul, MN 750,000 780,000 694,200
Liebman Breweries Brooklyn, NY 625,000 632,558 670,198

Source: Fein, 1942, 35.

World War One had presented a direct threat to the brewing industry. Government officials used war-time emergencies to impose grain rationing, a step that led to the lowering of the alcohol level of beer to 2.75 percent. World War Two had a completely different effect on the industry: rather than output falling, beer production rose from 1941 to 1945.

Table 6: Production and Per Capita Consumption, 1940-1945

width=”607″>

Year Number of Breweries Number of barrels withdrawn (millions) Per Capita Consumption (gallons)
1940 684 54.9 12.5
1941 574 55.2 12.3
1942 530 63.7 14.1
1943 491 71.0 15.8
1944 469 81.7 18.0
1945 468 86.6 18.6

Source: 1979 USBA, 12-14.

During the war, the industry mirrored the nation at large by casting off its sluggish depression-era growth. As the war economy boomed, consumers, both troops and civilians, used some of their wages for beer, and per capita consumption grew by 50 percent between 1940 and 1945.

1945-1980: Following World War II, the Industry Continues to Grow and to Consolidate

Yet, the take-off registered during the World War II was not sustained during the ensuing decades. Total production continued to grow, but at a slower rate than overall population.

Table 7: Production and per Capita Consumption, 1945-1980

width=”607″>

Year Number of Breweries Number of barrels withdrawn (millions) Per Capita Consumption (gallons)
1945 468 86.6 18.6
1950 407 88.8 17.2
1955 292 89.8 15.9
1960 229 94.5 15.4
1965 197 108.0 16.0
1970 154 134.7 18.7
1975 117 157.9 21.1
1980 101 188.4 23.1

Source: 1993 USBA, 7-8.

The period following WWII was characterized by great industry consolidation. Total output continued to grow, though per capita consumption fell into the 1960s before rebounding to levels above 21 gallons per capita in the 1970s, the highest rates in the nation’s history. Not since the 1910s, had consumption levels topped 21 gallons a year; however, there was a significant difference. Prior to Prohibition most consumers bought their beer from local or regional firms and over 85 percent of the beer was served from casks in saloons. Following World War II, two significant changes radically altered the market for beer. First, the total number of breweries operating fell dramatically. This signaled the growing importance of the large national breweries. While many of these firms — Anheuser-Busch, Pabst, Schlitz, and Blatz — had grown into prominence in the late nineteenth century, the scale of their operations grew tremendously in the years after the repeal of prohibition. From the mid 1940s to 1980, the five largest breweries saw their share of the national market grow from 19 to 75 percent (Adams, 125).

Table 8: Concentration of the Brewing Industry, 1947-1981

Year Five Largest (%) Ten Largest (%) Herfindahl Index[4]
1947 19.0 28.2 140
1954 24.9 38.3 240
1958 28.5 45.2 310
1964 39.0 58.2 440
1968 47.6 63.2 690
1974 64.0 80.8 1080
1978 74.3 92.3 1292
1981 75.9 93.9 1614

Source: Adams, 1995, 125.

The other important change concerned how beer was sold. Prior to Prohibition, nearly all beer was sold on-tap in bars or saloons; while approximately 10-15 percent of the beer was bottled, it was much more expensive than draught beer. In 1935, a few years after repeal, the American Can Company successfully canned beer for the first time. The spread of home refrigeration helped spur consumer demand for canned and bottled beer, and from 1935 onwards, draught beer sales have fallen markedly.

Table 9: Packaged vs. Draught Sales, 1935-1980

Year Packaged sales as a percentage of total sales (bottled and canned) Draught sales as a percentage of total sales
1935 30 70
1940 52 48
1945 64 36
1950 72 28
1955 78 22
1960 81 19
1965 82 18
1970 86 14
1975 88 12
1980 88 12

Source: 1979 USBA, 20; 1993 USBA, 14.

The rise of packaged beer contributed to the growing industry consolidation detailed in Table 8.

1980-2000: Continued Growth, the Microbrewery Movement, and International Dimensions of the Brewing Industry

From 1980 to 2000, beer production continued to rise, reaching nearly 200 million barrels in 2000. Per capita consumption hit its highest recorded level, 23.8 gallons, in 1981. Since then, consumption has declined somewhat, and during the 1990s it was typically in the 21-22 gallon range.

Table 10: Production and Per Capita Consumption, 1980-1990

Year Number of Breweries Number of barrels withdrawn (millions) Per Capita Consumption (gallons)
1980 101 188.4 23.1
1985 105 193.8 22.7
1990 286 201.7 22.6

Source: 1993 USBA, 7-8.

Beginning around 1980, the long decline in the number of breweries slowed and then reversed. Judging solely by the number of breweries in operation, it appeared that a significant change had occurred: the number of firms began to increase, and by the late 1990s, hundreds of new breweries were operating in the U.S. However, this count is rather misleading: the overall industry remained very concentrated, with a three-firm concentration ratio in 2000 of 81 percent.

Table 11: Production Levels of the Leading Breweries, 2000

Brewery Production (millions of barrels)
Anheuser-Busch 99.2
Miller 39.8
Coors 22.7
Total Domestic Sales 199.4

Source: Beverage Industry, May 2003, 19.
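
The 81 percent three-firm concentration ratio cited above can be recovered directly from these figures, taking total domestic sales as the denominator:

```latex
% Three-firm concentration ratio for 2000, computed from Table 11
CR_3 = \frac{99.2 + 39.8 + 22.7}{199.4} = \frac{161.7}{199.4} \approx 0.81
```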

Although entrepreneurs and beer enthusiasts began hundreds of new breweries during this period, most of them were very small, with annual production levels of between 5,000 and 100,000 barrels. Reflecting their small size, these new firms were nicknamed microbreweries. Collectively, microbreweries have grown to account for approximately 5-7 percent of the total beer market.

Microbreweries represented a new strategy in the brewing industry: rather than competing on the basis of price or advertising, they attempted to compete on the basis of inherent product characteristics. They emphasized the freshness of locally produced beer; they experimented with much stronger malt and hop flavors; they tried new and long-discarded brewing recipes, often reintroducing styles that had been popular in America decades earlier. Together, these breweries have had an influence much greater than their market share would suggest. The big three breweries, Anheuser-Busch, Miller, and Coors, have all tried to incorporate ideas from the microbrewery movement. They have introduced new marquee brands intended to compete for some of this market, and when these brands failed to catch on, they have bought shares in, or outright control of, some microbreweries.

A final dimension of the brewing industry that has been changing concerns the emerging global market for beer. Until very recently, America was the biggest beer market in the world: as a result, American breweries have not historically looked abroad for additional sales, preferring to expand their share of the domestic market.[5] In the 1980s, Anheuser-Busch began to systematically evaluate its market position. While it had done very well in the U.S., it had not tapped markets overseas; as a result, it began a series of international business dealings. It gradually moved from exporting small amounts of its flagship brand Budweiser to entering into licensing accords whereby breweries in a range of countries such as Ireland, Japan, and Argentina began to brew Budweiser for sale in their domestic markets. In 1995, it established its first breweries outside the U.S., one in England for the European market and the other in China, to service the growing markets in China and East Asia.[6]

While U.S. breweries such as Anheuser-Busch have only recently begun to explore the opportunities abroad, foreign firms have long appreciated the significance of the American market. Beginning in the late 1990s, imports began to increase their market share, and by the early 2000s they accounted for approximately 12 percent of the large U.S. market. Imports and microbrews typically cost more than the big three’s beers, but they provide a wider range of flavors and tastes. One of the most interesting developments in the international market for beer occurred in 2002, when South African Breweries (SAB), the dominant brewery in South Africa and an active firm in Europe, acquired Miller, the second-largest brewery in the U.S. Though not widely discussed in the U.S., this acquisition may portend a general move towards increased global integration in the world market for beer.

Annotated Bibliography

Adams, Walter and James Brock, editors. The Structure of American Industry, ninth edition. Englewood Cliffs, New Jersey: Prentice Hall, 1995.

Apps, Jerry. Breweries of Wisconsin. Madison, WI: University of Wisconsin Press, 1992. Detailed examination of the history of breweries and brewing in Wisconsin.

Baron, Stanley. Brewed In America: A History of Beer and Ale in the United States. Boston: Little, Brown, and Co., 1962: Very good historical overview of brewing in America, from the Pilgrims through the post-World War II era.

Baum, Dan. Citizen Coors: A Grand Family Saga of Business, Politics, and Beer. New York: Harper Collins, 2000. Very entertaining story of the Coors family and the brewery they made famous.

Beverage Industry (May 2003): 19-20.

Blum, Peter. Brewed In Detroit: Breweries and Beers since 1830. Detroit: Wayne State University Press, 1999. Very good discussion of Detroit’s major breweries and how they evolved. Particularly strong on the Stroh brewery.

Cochran, Thomas. Pabst Brewing Company: The History of an American Business. New York: New York University Press, 1948: A very insightful, well-researched, and well-written history of one of America’s most important breweries. It is strongest on the years leading up to Prohibition.

Downard, William. The Cincinnati Brewing Industry: A Social and Economic History. Ohio University Press, 1973: A good history of brewing in Cincinnati; particularly strong in the years prior to Prohibition.

Downard, William. Dictionary of the History of the American Brewing and Distilling Industries. Westport, CT: Greenwood Press, 1980: Part dictionary and part encyclopedia, a useful compendium of terms, people, and events relating to the brewing and distilling industries.

Duis, Perry. The Saloon: Public Drinking in Chicago and Boston, 1880-1920. Urbana: University of Illinois Press, 1983: An excellent overview of the institution of the saloon in pre-Prohibition America.

Eckhardt, Fred. The Essentials of Beer Style. Portland, OR: Fred Eckhardt Communications, 1995: A helpful introduction into the basics of how beer is made and how beer styles differ.

Ehret, George. Twenty-Five Years of Brewing. New York: Gast Lithograph and Engraving, 1891: An interesting snapshot of an important late nineteenth century New York City brewery.

Elzinga, Kenneth. “The Beer Industry.” In The Structure of American Industry, ninth edition, edited by W. Adams and J. Brock. Englewood Cliffs, New Jersey: Prentice Hall, 1995: A good overview summary of the history, structure, conduct, and performance of America’s brewing industry.

Fein, Edward. “The 25 Leading Brewers in the United States Produce 41.5% of the Nation’s Total Beer Output.” Brewers Digest 17 (October 1942): 35.

Greer, Douglas. “Product Differentiation and Concentration in the Brewing Industry,” Journal of Industrial Economics 29 (1971): 201-19.

Greer, Douglas. “The Causes of Concentration in the Brewing Industry,” Quarterly Review of Economics and Business 21 (1981): 87-106.

Greer, Douglas. “Beer: Causes of Structural Change.” In Industry Studies, second edition, edited by Larry Duetsch, Armonk, New York: M.E. Sharpe, 1998.

Hernon, Peter and Terry Ganey. Under the Influence: The Unauthorized Story of the Anheuser-Busch Dynasty. New York: Simon and Schuster, 1991: Somewhat sensationalistic history of the family that has controlled America’s largest brewery, but some interesting pieces on the brewery are included.

Horowitz, Ira and Ann Horowitz. “Firms in a Declining Market: The Brewing Case.” Journal of Industrial Economics 13 (1965): 129-153.

Jackson, Michael. The New World Guide To Beer. Philadelphia: Running Press, 1988: Good overview of the international world of beer and of America’s place in the international beer market.

Keithan, Charles. The Brewing Industry. Washington D.C: Federal Trade Commission, 1978.

Kerr, K. Austin. Organized for Prohibition. New Haven: Yale University Press, 1985: Excellent study of the rise of the Anti-Saloon League in the United States.

Kostka, William. The Pre-prohibition History of Adolph Coors Company: 1873-1933. Golden, CO: self-published book, Adolph Coors Company, 1973: A self-published book by the Coors company that provides some interesting insights into the origins of the Colorado brewery.

Krebs, Roland and Percy Orthwein. Making Friends Is Our Business: 100 Years of Anheuser-Busch. St. Louis, MO: self-published book, Anheuser-Busch, 1953: A self-published book by the Anheuser-Busch brewery that has some nice illustrations and data on firm output levels. The story is nicely told but rather self-congratulatory.

“Large Brewers Boost Share of U.S. Beer Business,” Brewers Digest, 15 (July 1940): 55-57.

Leisley, Bruce. A History of Leisley Brewing. North Newton, Kansas: Mennonite Press, 1975: A short but useful history of the Leisley Brewing Company. This was the author’s undergraduate thesis.

Lender, Mark and James Martin. Drinking in America. New York: The Free Press, 1987: Good overview of the social history of drinking in America.

McGahan, Ann. “The Emergence of the National Brewing Oligopoly: Competition in the American Market, 1933-58.” Business History Review 65 (1991): 229-284: Excellent historical analysis of the origins of the brewing oligopoly following the repeal of Prohibition.

McGahan, Ann. “Cooperation in Prices and Capacities: Trade Associations in Brewing after Repeal.” Journal of Law and Economics 38 (1995): 521-559.

Meier, Gary and Gloria Meier. Brewed in the Pacific Northwest: A History of Beer Making in Oregon and Washington. Seattle: Fjord Press, 1991: A survey of the history of brewing in the Pacific Northwest.

Miller, Carl. Breweries of Cleveland. Cleveland, OH: Schnitzelbank Press, 1998: Good historical overview of the brewing industry in Cleveland.

Norman, Donald. Structural Change and Performance in the U.S. Brewing Industry. Ph.D. dissertation, UCLA, 1975.

One Hundred Years of Brewing. Chicago and New York: Arno Press Reprint, 1903 (Reprint 1974): A very important work. Very detailed historical discussion of the American brewing industry through the end of the nineteenth century.

Persons, Warren. Beer and Brewing In America: An Economic Study. New York: United Brewers Industrial Foundation, 1940.

Plavchan, Ronald. A History of Anheuser-Busch, 1852-1933. Ph.D. dissertation, St. Louis University, 1969: Apart from Cochran’s analysis of Pabst, one of a very few detailed business histories of a major American brewery.

Research Company of America. A National Survey of the Brewing Industry. Self-published, 1941: A well-researched industry analysis with a wealth of information and data.

Rorbaugh, William. The Alcoholic Republic: An American Tradition. New York: Oxford University Press, 1979: Excellent scholarly overview of drinking habits in America.

Rubin, Jay. “The Wet War: American Liquor, 1941-1945.” In Alcohol, Reform, and Society, edited by J. Blocker. Westport, CT: Greenwood Press, 1979: Interesting discussion of American drinking during World War II.

Salem, Frederick. Beer: Its History and Its Economic Value as a National Beverage. New York: Arno Press, 1880 (Reprint 1972): Early but valuable discussion of the American brewing industry.

Scherer, F.M. Industry Structure, Strategy, and Public Policy. New York: Harper Collins, 1996: A very good essay on the brewing industry.

Shih, Ko Ching and C. Ying Shih. American Brewing Industry and the Beer Market. Brookfield, WI, 1958: Good overview of the industry with some excellent data tables.

Skilnik, Bob. The History of Beer and Brewing in Chicago: 1833-1978. Pogo Press, 1999: Good overview of the history of brewing in Chicago.

Smith, Greg. Beer in America: The Early Years, 1587 to 1840. Boulder, CO: Brewers Publications, 1998: Well written account of beer’s development in America, from the Pilgrims to mid-nineteenth century.

Stack, Martin. “Local and Regional Breweries in America’s Brewing Industry, 1865-1920.” Business History Review 74 (Autumn 2000): 435-63.

Thomann, Gallus. American Beer: Glimpses of Its History and Description of Its Manufacture. New York: United States Brewing Association, 1909: Interesting account of the state of the brewing industry at the turn of the twentieth century.

United States Brewers Association. Annual Year Book, 1909-1921. Very important primary source document published by the leading brewing trade association.

United States Brewers Foundation. Brewers Almanac, published annually, 1941-present: Very important primary source document published by the leading brewing trade association.

Van Wieren, Dale. American Breweries II. West Point, PA: Eastern Coast Brewiana Association, 1995. Comprehensive historical listing of every brewery in every state, arranged by city within each state.


[1] A barrel of beer is 31 gallons. One Hundred Years of Brewing, Chicago and New York: Arno Press Reprint, 1974: 252.

[2] During the nineteenth century, there were often distinctions between temperance advocates, who differentiated between spirits and beer, and prohibition supporters, who campaigned on the need to eliminate all alcohol.

[3] The major shippers may have been taken aback by the loss suffered by Lemp, one of the leading pre-Prohibition shipping breweries. Lemp was sold at auction in 1922 at a loss of 90 percent on the investment (Baron, 1962, 315).

[4] The Herfindahl Index sums the squared market shares of the fifty largest firms.

[5] China overtook the United States as the world’s largest beer market in 2002.

[6] http://www.anheuser-busch.com/over/international.html

Citation: Stack, Martin. “A Concise History of America’s Brewing Industry”. EH.Net Encyclopedia, edited by Robert Whaples. July 4, 2003. URL http://eh.net/encyclopedia/a-concise-history-of-americas-brewing-industry/

The Economic Impact of the Black Death

David Routt, University of Richmond

The Black Death was the largest demographic disaster in European history. From its arrival in Italy in late 1347 through its clockwise movement across the continent to its petering out in the Russian hinterlands in 1353, the magna pestilencia (great pestilence) killed between seventeen and twenty-eight million people. Its gruesome symptoms and deadliness have fixed the Black Death in popular imagination; moreover, uncovering the disease’s cultural, social, and economic impact has engaged generations of scholars. Despite growing understanding of the Black Death’s effects, definitive assessment of its role as historical watershed remains a work in progress.

A Controversy: What Was the Black Death?

In spite of enduring fascination with the Black Death, even the identity of the disease behind the epidemic remains a point of controversy. Aware that fourteenth-century eyewitnesses described a disease more contagious and deadlier than bubonic plague (Yersinia pestis), the bacillus traditionally associated with the Black Death, dissident scholars in the 1970s and 1980s proposed typhus or anthrax or mixes of typhus, anthrax, or bubonic plague as the culprit. The new millennium brought other challenges to the Black Death-bubonic plague link, such as an unknown and probably unidentifiable bacillus, an Ebola-like haemorrhagic fever or, at the pseudoscientific fringes of academia, a disease of interstellar origin.

Proponents of Black Death as bubonic plague have minimized differences between modern bubonic and the fourteenth-century plague through painstaking analysis of the Black Death’s movement and behavior and by hypothesizing that the fourteenth-century plague was a hypervirulent strain of bubonic plague, yet bubonic plague nonetheless. DNA analysis of human remains from known Black Death cemeteries was intended to eliminate doubt but inability to replicate initially positive results has left uncertainty. New analytical tools used and new evidence marshaled in this lively controversy have enriched understanding of the Black Death while underscoring the elusiveness of certitude regarding phenomena many centuries past.

The Rate and Structure of Mortality

The Black Death’s socioeconomic impact stemmed, however, from sudden mortality on a staggering scale, regardless of what bacillus caused it. Assessment of the plague’s economic significance begins with determining the rate of mortality for the initial onslaught in 1347-53 and its frequent recurrences for the balance of the Middle Ages, then unraveling how the plague chose victims according to age, sex, affluence, and place.

Imperfect evidence unfortunately hampers knowing precisely who and how many perished. Many of the Black Death’s contemporary observers, living in an epoch of famine and political, military, and spiritual turmoil, described the plague apocalyptically. One chronicler famously left blank membranes of parchment at the close of his narrative in case anyone survived to continue it. Others believed as few as one in ten survived. One writer claimed that only fourteen people were spared in London. Although sober eyewitnesses offered more plausible figures, in light of the medieval preference for narrative dramatic force over numerical veracity, chroniclers’ estimates are considered evidence of the Black Death’s battering of the medieval psyche, not an accurate barometer of its demographic toll.

Even non-narrative and presumably dispassionate, systematic evidence — legal and governmental documents, ecclesiastical records, commercial archives — presents challenges. No medieval scribe dragged his quill across parchment for the demographer’s pleasure and convenience. With a paucity of censuses, estimates of population and tracing of demographic trends have often relied on indirect indicators of demographic change (e.g., activity in the land market, levels of rents and wages, size of peasant holdings) or evidence treating only a segment of the population (e.g., assignment of new priests to vacant churches, payments by peasants to take over holdings of the deceased). Even the rare census-like record, like England’s Domesday Book (1086) or the Poll Tax Return (1377), either enumerates only heads of households or excludes slices of the populace or ignores regions or some combination of all these. To compensate for these imperfections, the demographer relies on potentially debatable assumptions about the size of the medieval household, the representativeness of a discrete group of people, the density of settlement in an undocumented region, the level of tax evasion, and so forth.

A bewildering array of estimates for mortality from the plague of 1347-53 is the result. The first outbreak of the Black Death indisputably was the deadliest, but the death rate varied widely according to place and social stratum. National estimates of mortality for England, where the evidence is fullest, range from five percent, to 23.6 percent among aristocrats holding land from the king, to forty to forty-five percent of the kingdom’s clergy, to over sixty percent in a recent estimate. The picture for the continent likewise is varied. Regional mortality in Languedoc (France) was forty to fifty percent, while sixty to eighty percent of Tuscans (Italy) perished. Urban death rates were mostly higher but no less disparate, e.g., half in Orvieto (Italy), Siena (Italy), and Volterra (Italy), fifty to sixty-six percent in Hamburg (Germany), fifty-eight to sixty-eight percent in Perpignan (France), sixty percent for Barcelona’s (Spain) clerical population, and seventy percent in Bremen (Germany). The Black Death was often highly arbitrary in how it killed in a narrow locale, which no doubt broadened the spectrum of mortality rates. Two of Durham Cathedral Priory’s manors, for instance, had respective death rates of twenty-one and seventy-eight percent (Shrewsbury, 1970; Russell, 1948; Waugh, 1991; Ziegler, 1969; Benedictow, 2004; Le Roy Ladurie, 1976; Bowsky, 1964; Pounds, 1974; Emery, 1967; Gyug, 1983; Aberth, 1995; Lomas, 1989).

Credible death rates between one quarter and three quarters complicate reaching a Europe-wide figure. Neither a casual and unscientific averaging of available estimates to arrive at a probably misleading composite death rate nor a timid placing of mortality somewhere between one and two thirds is especially illuminating. Scholars confronting the problem’s complexity before venturing estimates once favored one third as a reasonable aggregate death rate. Since the early 1970s demographers have found higher levels of mortality plausible and European mortality of one half is considered defensible, a figure not too distant from less fanciful contemporary observations.

While the Black Death of 1347—53 inflicted demographic carnage, had it been an isolated event European population might have recovered to its former level in a generation or two and its economic impact would have been moderate. The disease's long-term demographic and socioeconomic legacy arose from its recurrence. When both national and local epidemics are taken into account, England endured thirty plague years between 1351 and 1485, a pattern mirrored on the continent, where Perugia was struck nineteen times and Hamburg, Cologne, and Nuremberg at least ten times each in the fifteenth century. The deadliness of outbreaks declined — perhaps ten to twenty percent in the second plague (pestis secunda) of 1361—2, ten to fifteen percent in the third plague (pestis tertia) of 1369, and as low as five and rarely above ten percent thereafter — and outbreaks became more localized; the Black Death's persistence nonetheless ensured that demographic recovery would be slow and that its socioeconomic consequences would run deeper. Europe's population in 1430 may have been fifty to seventy-five percent lower than in 1290 (Cipolla, 1994; Gottfried, 1983).
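
To see why repeated outbreaks mattered so much, a rough compounding exercise helps. The sketch below is purely illustrative: the mortality rates, intervals between outbreaks, and growth rate are hypothetical placeholders loosely patterned on the ranges quoted above, not figures drawn from the literature.

```python
# Illustrative sketch: cumulative effect of recurring epidemics on population.
# All rates are hypothetical, chosen only to echo the ranges mentioned in the
# text (roughly half lost in the first outbreak, smaller losses thereafter),
# with modest natural growth assumed between outbreaks.

population = 100.0               # index value, c. 1347 = 100
annual_growth = 1.005            # assumed 0.5% natural increase per year

# (years of recovery before the outbreak, share of population lost in it)
outbreaks = [(0, 0.45), (10, 0.15), (7, 0.12), (10, 0.08), (12, 0.07), (15, 0.05)]

for years_between, mortality in outbreaks:
    population *= annual_growth ** years_between   # slow recovery
    population *= (1 - mortality)                  # next epidemic strikes

print(f"population index after repeated outbreaks: {population:.0f}")
# More than half a century on, the index is still far below its starting
# level, consistent with the slow recovery the text describes.
```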

Enumeration of corpses does not adequately reflect the Black Death's demographic impact. Who perished was as significant as how many; in other words, the structure of mortality influenced the timing and rate of demographic recovery. The plague's preference for urbanite over peasant, man over woman, poor over affluent, and, perhaps most significantly, young over mature shaped its demographic toll. Eyewitnesses so universally reported disproportionate death among the young in the plague's initial recurrence (1361—2) that it became known as the Children's Plague (pestis puerorum, mortalité des enfants). If this preference for youth reflected natural resistance to the disease among plague survivors, the Black Death may have ultimately come to resemble a lower-mortality childhood disease, a reality that magnified both its demographic and psychological impact.

The Black Death pushed Europe into a long-term demographic trough. Notwithstanding anecdotal reports of nearly universal pregnancy of women in the wake of the magna pestilencia, demographic stagnancy characterized the rest of the Middle Ages. Population growth recommenced at different times in different places but rarely earlier than the second half of the fifteenth century and in many places not until c. 1550.

The European Economy on the Cusp of the Black Death

Like the plague's death toll, its socioeconomic impact resists categorical measurement. The Black Death's timing made a facile labeling of it as a watershed in European economic history nearly inevitable. It arrived near the close of an ebullient high Middle Ages (c. 1000 to c. 1300) in which urban life reemerged, long-distance commerce revived, business and manufacturing innovated, manorial agriculture matured, and population burgeoned, doubling or tripling. The Black Death simultaneously portended an economically stagnant, depressed late Middle Ages (c. 1300 to c. 1500). However, even if this simplistic and somewhat misleading portrait of the medieval economy is accepted, isolating the Black Death's economic impact from manifold factors at play is a daunting challenge.

Cognizant of a qualitative difference between the high and late Middle Ages, students of medieval economy have offered varied explanations, some mutually exclusive, others not, some favoring the less dramatic, less visible, yet inexorable factor as an agent of change rather than a catastrophic demographic shift. For some, a cooling climate undercut agricultural productivity, a downturn that rippled throughout the predominantly agrarian economy. For others, exploitative political, social, and economic institutions enriched an idle elite and deprived working society of wherewithal and incentive to be innovative and productive. Yet others associate monetary factors with the fourteenth- and fifteenth-century economic doldrums.

The particular concerns of the twentieth century unsurprisingly induced some scholars to view the medieval economy through a Malthusian lens. In this reconstruction of the Middle Ages, population growth pressed against society's ability to feed itself by the mid-thirteenth century. Rising impoverishment and contracting holdings compelled the peasant to cultivate inferior, low-fertility land and to convert pasture to arable production, which inevitably reduced the number of livestock and made manure for fertilizer scarcer. These expedients boosted gross productivity in the immediate term but drove grain yields downward in the longer term, exacerbating the imbalance between population and food supply; a correction became inevitable. This idea's adherents see signs of demographic correction from the mid-thirteenth century onward, possibly arising in part from marriage practices that reduced fertility. A more potent correction came with subsistence crises. Miserable weather in 1315 destroyed crops, and the ensuing Great Famine (1315—22) reduced northern Europe's population by perhaps ten to fifteen percent. Poor harvests, moreover, bedeviled England and Italy to the eve of the Black Death.

These factors — climate, imperfect institutions, monetary imbalances, overpopulation — diminish the Black Death's role as a transformative socioeconomic event. In other words, socioeconomic changes already driven by other causes would have occurred anyway, merely more slowly, had the plague never struck Europe. This conviction fosters receptiveness to lower estimates of the Black Death's deadliness. Recent scrutiny of the Malthusian analysis, especially studies of agriculture in source-rich eastern England, has, however, rehabilitated the Black Death as an agent of socioeconomic change. Growing awareness of the use of "progressive" agricultural techniques and of alternative, non-grain economies less susceptible to a Malthusian population-versus-resources dynamic has undercut the notion of an absolutely overpopulated Europe and has encouraged acceptance of higher rates of mortality from the plague (Campbell, 1983; Bailey, 1989).

The Black Death and the Agrarian Economy

The lion’s share of the Black Death’s effect was felt in the economy’s agricultural sector, unsurprising in a society in which, except in the most urbanized regions, nine of ten people eked out a living from the soil.

A village struck by the plague underwent a profound though brief disordering of the rhythm of daily life. Strong administrative and social structures, the power of custom, and innate human resiliency restored the village's routine by the following year in most cases: fields were plowed; crops were sown, tended, and harvested; labor services were performed by the peasantry; and the village's lord collected dues from tenants. Behind this seeming normalcy, however, lord and peasant were adjusting to the Black Death's principal economic consequence: a much smaller agricultural labor pool. Before the plague, rising population had kept wages low and rents and prices high, an economic reality advantageous to the lord in dealing with the peasant and inclining many a peasant to cleave to demeaning yet secure dependent tenure.

As the Black Death swung the balance in the peasant's favor, the literate elite bemoaned a disintegrating social and economic order. William of Dene, William Langland, John Gower, and others polemically evoked nostalgia for the peasant who knew his place, worked hard, demanded little, and squelched pride, while they condemned their present, in which land lay unplowed and only an immediate pang of hunger goaded a lazy, disrespectful, grasping peasant to do a moment's desultory work (Hatcher, 1994).

Moralizing exaggeration aside, the rural worker indeed demanded and received higher payments in cash (nominal wages) in the plague's aftermath. Wages in England rose from twelve to twenty-eight percent from the 1340s to the 1350s and twenty to forty percent from the 1340s to the 1360s. Immediate hikes were sometimes more drastic. During the plague year (1348—49) at Fornham All Saints (Suffolk), the lord paid the pre-plague rate of 3d. per acre for more than half of the hired reaping, but the rest cost 5d., an increase of 67 percent. The reaper, moreover, enjoyed more and larger tips in cash and perquisites in kind to supplement the wage. At Cuxham (Oxfordshire), a plowman making 2s. weekly before the plague demanded 3s. in 1349 and 10s. in 1350 (Farmer, 1988; Farmer, 1991; West Suffolk Record Office 3/15.7/2.4; Harvey, 1965).

In some instances, the initial hikes in nominal or cash wages subsided in the years further out from the plague, and any benefit they conferred on the wage laborer was for a time undercut by another economic change fostered by the plague. Grave mortality ensured that the European supply of currency in gold and silver increased on a per-capita basis, which in turn unleashed substantial inflation in prices that did not subside in England until the mid-1370s and even later in many places on the continent. The inflation reduced the purchasing power (real wage) of the wage laborer so significantly that, even with higher cash wages, his earnings either bought him no more or often substantially less than before the magna pestilencia (Munro, 2003; Aberth, 2001).
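
The squeeze on purchasing power follows from the standard deflation formula: the real wage is the nominal wage divided by a price index. A minimal sketch, using invented wage and price-index numbers rather than figures from Munro or Farmer, illustrates how inflation could swallow a sizable cash raise.

```python
# Illustrative sketch: how post-plague inflation could offset nominal wage gains.
# The wage and price-index numbers are hypothetical, chosen only to mirror the
# pattern described in the text (cash wages up, prices up faster for a time).

def real_wage(nominal_wage, price_index, base_index=100.0):
    """Deflate a nominal wage by a price index (base period = base_index)."""
    return nominal_wage * base_index / price_index

# Hypothetical laborer: pre-plague wage 2.0 (pence per day), price index 100.
pre_plague = real_wage(2.0, 100.0)    # 2.00 in base-period pence

# Post-plague: nominal wage up 40 percent, but prices up 60 percent.
post_plague = real_wage(2.8, 160.0)   # 1.75 in base-period pence

print(f"pre-plague real wage:  {pre_plague:.2f}")
print(f"post-plague real wage: {post_plague:.2f}")
# Despite the higher cash wage, purchasing power has fallen by about 12.5%.
```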

The lord, however, was confronted not only by the roving wage laborer on whom he relied for occasional and labor-intensive seasonal tasks but also by the peasant bound to the soil who exchanged customary labor services, rent, and dues for holding land from the lord. A pool of labor services greatly reduced by the Black Death enabled the servile peasant to bargain for less onerous responsibilities and better conditions. At Tivetshall (Norfolk), vacant holdings deprived its lord of sixty percent of his week-work and all his winnowing services by 1350—51. A fifth of winter and summer week-work and a third of reaping services vanished at Redgrave (Suffolk) in 1349—50 due to the magna pestilencia. If a lord did not make concessions, a peasant often gravitated toward any better circumstance beckoning elsewhere. At Redgrave, for instance, the loss of services in 1349—50 directly due to the plague was followed in 1350—51 by an equally damaging wave of holdings abandoned by surviving tenants. For the medieval peasant, never so tightly bound to the manor as once imagined, the Black Death nonetheless fostered far greater rural mobility. Beyond the loss of labor services, the deceased or absentee peasant paid no rent or dues and rendered no fees for use of manorial monopolies such as mills and ovens, and the lord's revenues shrank accordingly. The income of English lords contracted by twenty percent from 1347 to 1353 (Norfolk Record Office WAL 1247/288×1; University of Chicago Bacon 335—6; Gottfried, 1983).

Faced with these disorienting circumstances, the lord often ultimately had to decide how or even whether the pre-plague status quo could be reestablished on his estate. Not capitalistic in the sense of maximizing productivity for reinvestment of profits to enjoy yet more lucrative future returns, the medieval lord nonetheless valued stable income sufficient for aristocratic ostentation and consumption. A recalcitrant peasantry, diminished dues and services, and climbing wages undermined the material foundation of the noble lifestyle, jostled the aristocratic sense of proper social hierarchy, and invited a response.

In exceptional circumstances, a lord sometimes kept the peasant bound to the land. Because the nobility in Spanish Catalonia had already tightened control of the peasantry before the Black Death, because underdeveloped commercial agriculture provided the peasantry narrow options, and because the labor-intensive demesne agriculture common elsewhere was largely absent, the Catalan lord, through a mix of coercion (physical intimidation, exorbitant fees to purchase freedom) and concession (reduced rents, conversion of servile dues to less humiliating fixed cash payments), kept the Catalan peasant in place. In England and elsewhere on the continent, where labor services were needed to till the demesne, such a conservative approach was less feasible. This, however, did not deter some lords from trying. The lord of Halesowen (Worcestershire) not only commanded the servile tenant to perform the full range of services but also resuscitated labor obligations in abeyance long before the Black Death, tantamount to an unwillingness to acknowledge that anything had changed (Freedman, 1991; Razi, 1981).

Europe's political elite also looked to legal coercion not only to contain rising wages and to limit the peasant's mobility but also to allay a sense of disquietude and disorientation arising from the Black Death's buffeting of pre-plague social realities. England's Ordinance of Laborers (1349) and Statute of Laborers (1351) called for a return to the wages and terms of employment of 1346. Labor legislation was likewise promulgated by the Cortes of Aragon and Castile, the French crown, and cities such as Siena, Orvieto, Pisa, Florence, and Ragusa. The futility of capping wages by legislative fiat is evident in the French crown's 1351 revision of its 1349 enactment to permit a wage increase of one third. Perhaps only in England, where effective government permitted robust enforcement, did the law slow wage increases for a time (Aberth, 2001; Gottfried, 1983; Hunt and Murray, 1999; Cohn, 2007).

Once knee-jerk conservatism and legislative palliatives failed to revivify pre-plague socioeconomic arrangements, the lord cast about for a modus vivendi in a new world of abundant land and scarce labor. A sober triage of the available sources of labor, whether casual wage labor, a manor's permanent stipendiary staff (famuli), or the dependent peasantry, led to a revision of managerial policy. The abbot of Saint Edmund's, for example, focused on reconstituting the permanent staff (famuli) on his manors. Despite mortality and flight, the abbot by and large achieved his goal by the mid-1350s. While labor legislation may have facilitated this, the abbot's provision of more frequent and lucrative seasonal rewards, coupled with the payment of grain stipends in more valuable and marketable cereals such as wheat, no doubt helped secure the loyalty of the famuli while circumventing statutory limits on higher wages. With this core of labor solidified, the focus turned to preserving the most essential labor services, especially those associated with the labor-intensive harvesting season. Less vital labor services were commuted for cash payments, and ad hoc wage labor was then hired to fill gaps. The cultivation of the demesne continued, though not on the pre-plague scale.

For a time, in fact, circumstances helped the lord continue direct management of the demesne. The general inflation of the quarter-century following the plague, as well as poor harvests in the 1350s and 1360s, boosted grain prices and partially compensated for more expensive labor. This so-called "Indian summer" of demesne agriculture ended quickly in the mid-1370s in England and subsequently on the continent, when the post-plague inflation gave way to deflation and abundant harvests drove prices for commodities downward, where they remained, aside from brief intervals of inflation, for the rest of the Middle Ages. Recurrences of the plague, moreover, placed further stress on new managerial policies. For the lord who successfully persuaded new tenants to take over vacant holdings, as happened at Chevington (Suffolk) by the late 1350s, the pestis secunda of 1361—62 often inflicted a decisive blow: a second recovery at Chevington never materialized (West Suffolk Records Office 3/15.3/2.9—2.23).

Under unremitting pressure, the traditional cultivation of the demesne ceased to be viable for lord after lord: a centuries-old manorial system gradually unraveled and the nature of agriculture was transformed. The lord's earliest concession to this new reality was a curtailment of cultivated acreage, a trend that accelerated with time. At Great Saxham (Suffolk), for instance, the 590.5 acres sown on average in the late 1330s were more than halved, to 288.67 acres, in the 1360s (West Suffolk Record Office, 3/15.14/1.1, 1.7, 1.8).

Beyond reducing the demesne to a size commensurate with available labor, the lord could explore types of husbandry less labor-intensive than traditional grain agriculture. Greater domestic manufacture of woolen cloth and growing demand for meat enabled many English lords to reduce arable production in favor of sheep-raising, which required far less labor. Livestock husbandry likewise became more significant on the continent. Where climate, soil, and markets were suitable, grapes, olives, apples, pears, vegetables, hops, hemp, flax, silk, and dyestuffs were attractive alternatives to grain. In the hope of selling these cash crops, rural agriculture became more attuned to urban demand, and urban businessmen and investors became more intimately involved in what was grown in the countryside and in what quantity (Gottfried, 1983; Hunt and Murray, 1999).

The lord also looked to reduce losses from demesne acreage no longer under the plow and from the vacant holdings of onetime tenants. Measures adopted to achieve this end initiated a process that gained momentum with each passing year until the face of the countryside was transformed and manorialism was dead. The English landlord, hopeful for a return to the pre-plague regime, initially granted short fixed-term leases of four to six years at fixed rates for bits of demesne and for vacant dependent holdings. Leases over time lengthened to ten, twenty, or thirty years, or even a lifetime. In France and Italy, the lord often resorted to métayage or mezzadria leasing, a type of sharecropping in which the lord contributed capital (land, seed, tools, plow teams) to the lessee, who did the work and surrendered a fraction of the harvest to the lord.

Disillusioned by growing obstacles to profitable cultivation of the demesne, the lord, especially in the late fourteenth century and the early fifteenth, adopted a more sweeping type of leasing: the placing of the demesne or even the entire manor "at farm" (ad firmam). A "farmer" (firmarius) paid the lord a fixed annual "farm" (firma) for the right to exploit the lord's property and take whatever profit he could. The distant or unprofitable manor was usually "farmed" first, and other manors followed until a lord's personal management of his property often ceased entirely. The rising popularity of this expedient made direct management of the demesne by the lord rare by c. 1425. The lord often became a rentier bound to a fixed income. The tenurial transformation was completed when the lord sold to the peasant his right of lordship, a surrender to the peasant of outright possession of his holding for a fixed cash rent and freedom from dues and services. Manorialism, in effect, collapsed and was gone from western and central Europe by 1500.

The landlord's discomfort ultimately benefited the peasantry. Lower prices for foodstuffs and greater purchasing power from the last quarter of the fourteenth century onward, the progressive disintegration of demesnes, and waning customary land tenure enabled the enterprising, ambitious peasant to lease or purchase property and become a substantial landed proprietor. The average size of the peasant holding grew in the late Middle Ages. Owing to the peasant's generally improved standard of living, the century and a half following the magna pestilencia has been labeled a "golden age" in which the most successful peasant became a "yeoman" or "kulak" within the village community. Freed from labor service, holding a fixed copyhold lease, and enjoying greater disposable income, the peasant exploited his land exclusively for his personal benefit and often pursued leisure and some of the finer things in life. Consumption of meat by England's humbler social strata rose substantially after the Black Death, a shift in consumer tastes that reduced demand for grain and helped make viable the shift toward pastoralism in the countryside. Late medieval sumptuary legislation, intended to keep the humble from dressing above his station and to retain the distinction between low- and highborn, attests both to the peasant's greater income and to the elite's desire to limit disorienting social change (Dyer, 1989; Gottfried, 1983; Hunt and Murray, 1999).

The Black Death, moreover, profoundly altered the contours of settlement in the countryside. Catastrophic loss of population led to abandonment of less attractive fields, contraction of existing settlements, and even wholesale desertion of villages. More than 1300 English villages vanished between 1350 and 1500. French and Dutch villagers abandoned isolated farmsteads and huddled in smaller villages while their Italian counterparts vacated remote settlements and shunned less desirable fields. The German countryside was mottled with abandoned settlements. Two thirds of named villages disappeared in Thuringia, Anhalt, and the eastern Harz mountains, one fifth in southwestern Germany, and one third in the Rhenish palatinate, abandonment far exceeding loss of population and possibly arising from migration from smaller to larger villages (Gottfried, 1983; Pounds, 1974).

The Black Death and the Commercial Economy

As with agriculture, assessment of the Black Death's impact on the economy's commercial sector is a complex problem. The vibrancy of the high medieval economy is generally conceded. As the first millennium gave way to the second, urban life revived, trade and manufacturing flourished, merchant and craft gilds emerged, and commercial and financial innovations proliferated (e.g., partnerships, maritime insurance, double-entry bookkeeping, fair letters, letters of credit, bills of exchange, loan contracts, and merchant banking). The integration of the high medieval economy reached its zenith c. 1250 to c. 1325 with the rise of large companies with international interests, such as the Bonsignori of Siena and the Buonaccorsi of Florence, and the emergence of so-called "super companies" such as the Florentine Bardi, Peruzzi, and Acciaiuoli (Hunt and Murray, 1999).

How to characterize the late medieval economy has been more fraught with controversy, however. Historians of a century ago, unable to comprehend how their modern world could be rooted in a retrograde economy, imagined an entrepreneurially creative and expansive late medieval economy. Succeeding generations of historians darkened this optimistic portrait and fashioned a late Middle Ages of unmitigated decline, an "age of adversity" in which the economy was placed under the rubric "depression of the late Middle Ages." The historiographical pendulum now swings away from this interpretation, and a more nuanced picture has emerged that gives the Black Death's impact on commerce its full due but emphasizes the variety of the plague's impact from merchant to merchant, industry to industry, and city to city. Success or failure was equally possible after the Black Death, and the game favored adaptability, creativity, nimbleness, opportunism, and foresight.

Once the magna pestilencia had passed, the city had to cope with a labor supply even more severely depleted than in the countryside, owing to a generally higher urban death rate. The city, however, could reverse some of this damage by attracting, as it had for centuries, new workers from the countryside, a phenomenon that deepened the crisis for the manorial lord and contributed to changes in rural settlement. A resurgence of the slave trade occurred in the Mediterranean, especially in Italy, where the female slave from Asia or Africa entered domestic service in the city and the male slave toiled in the countryside. Finding more labor was not, however, a panacea. A peasant or slave performed an unskilled task adequately but could not necessarily replace a skilled laborer. The gross loss of talent due to the plague caused a decline in the per capita productivity of skilled labor, remediable only by time and training (Hunt and Murray, 1999; Miskimin, 1975).

Another immediate consequence of the Black Death was dislocation of the demand for goods. A suddenly and sharply smaller population ensured a glut of manufactured and trade goods, whose prices plummeted for a time. The businessman who successfully weathered this short-term imbalance in supply and demand then had to reshape his business' output to fit a declining or at best stagnant pool of potential customers.

The Black Death transformed the structure of demand as well. While the standard of living of the peasant improved, chronically low prices for grain and other agricultural products from the late fourteenth century may have deprived the peasant of the additional income needed to purchase enough manufactured or trade items to fill the hole in commercial demand. In the city, however, the plague concentrated wealth, often substantial family fortunes, in fewer and often younger hands, a circumstance that, when coupled with lower prices for grain, left greater per capita disposable income. The plague's psychological impact, moreover, is believed to have influenced how this windfall was used. Pessimism and the specter of death spurred an individualistic pursuit of pleasure, a hedonism that manifested itself in the purchase of luxuries, especially in Italy. Even with a reduced population, the gross volume of luxury goods manufactured and sold rose, a pattern of consumption that endured even after the windfall itself had been spent, within a generation or so of the magna pestilencia.

Like the manorial lord, the affluent urban bourgeois sometimes employed structural impediments to block the ambitious parvenu from joining his ranks and becoming a competitor. A tendency toward limiting the status of gild master to the son or son-in-law of a sitting master, evident in the first half of the fourteenth century, gained further impetus after the Black Death. The need for more journeymen after the plague was conceded in the shortening of terms of apprenticeship, but the newly minted journeyman often discovered that his chance of breaking through the glass ceiling and becoming a master was virtually nil without an entrée through kinship. Women also were banished from gilds as unwanted competition. The urban wage laborer, by and large controlled by the gilds, was denied membership and had no access to urban structures of power, a potent source of frustration. While these measures may have permitted the bourgeois to hold his ground for a time, the winds of change were blowing in the city as well as the countryside and gild monopolies and gild restrictions were fraying by the close of the Middle Ages.

In the new climate created by the Black Death, the individual businessman did retain an advantage: the business judgment and techniques honed during the high Middle Ages. This was crucial in a contracting economy in which gross productivity never attained its high medieval peak and in which the prevailing pattern was boom and bust on a roughly generational basis. A fluctuating economy demanded adaptability and the most successful post-plague businessman not merely weathered bad times but located opportunities within adversity and exploited them. The post-plague entrepreneur's preference for short-term rather than long-term ventures, once believed a product of a gloomy despondency caused by the plague and exacerbated by endemic violence, decay of traditional institutions, and nearly continuous warfare, is now viewed as a judicious desire to leave open entrepreneurial options, to manage risk effectively, and to take advantage of whatever better opportunity arose. The successful post-plague businessman observed markets closely and responded to them while exercising strict control over his concern, looking for greater efficiency, and trimming costs (Hunt and Murray, 1999).

The fortunes of the textile industry, a trade singularly susceptible to contracting markets and rising wages, best underscore the importance of flexibility. Competition among textile manufacturers, already great even before the Black Death due to excess productive capacity, was magnified when England entered the market for low- and medium-quality woolen cloth after the magna pestilencia and was exporting forty thousand pieces annually by 1400. The English took advantage of proximity to raw material, the wool England itself produced, a pattern increasingly common in late medieval business. When English producers were undeterred by a Flemish embargo on English cloth, the Flemish and Italians, the textile trade's other principal players, were compelled to adapt in order to compete. Flemish producers that emphasized higher-grade, luxury textiles or that purchased, improved, and resold cheaper English cloth prospered, while those that stubbornly competed head-to-head with the English in lower-quality woolens suffered. The Italians not only produced luxury woolens, improved their domestically produced wool, found sources for wool outside England (Spain), and increased production of linen but also produced silks and cottons, once only imported into Europe from the East (Hunt and Murray, 1999).

The new mentality of the successful post-plague businessman is exemplified by the Florentines Gregorio Dati and Buonaccorso Pitti and especially by the celebrated merchant of Prato, Francesco di Marco Datini. The large companies and super companies, some of which failed even before the Black Death, were not well suited to the post-plague commercial economy. Datini's family business, with its limited geographical ambitions, exercised better control, was more nimble and flexible as opportunities vanished or materialized, and managed risk more effectively, all keys to success. Datini, through voluminous correspondence with his business associates, subordinates, and agents and through conspicuously careful and regular accounting, grasped the reins of his concern tightly. He insulated himself from undue risk by never committing too heavily to any individual venture, by dividing cargoes among ships or by insuring them, by never lending money to notoriously uncreditworthy princes, and by remaining as apolitical as he could. His energy and drive to complete every business venture likewise served him well and made him an exemplar of commercial success in a challenging era (Origo, 1957; Hunt and Murray, 1999).

The Black Death and Popular Rebellion

The late medieval popular uprising, a phenomenon with undeniable economic ramifications, is often linked with the demographic, cultural, social, and economic reshuffling caused by the Black Death; however, the connection between pestilence and revolt is neither exclusive nor linear. Any single uprising is rarely susceptible to a single—cause analysis and just as rarely was a single socioeconomic interest group the fomenter of disorder. The outbreak of rebellion in the first half of the fourteenth century (e.g., in urban [1302] and maritime [1325—28] Flanders and in English monastic towns [1326—27]) indicates the existence of socioeconomic and political disgruntlement well before the Black Death.

Some explanations for popular uprising, such as the placing of immediate stresses on the populace and the cumulative effect of centuries of oppression by manorial lords, are now largely dismissed. At the times of greatest stress, the Great Famine and the Black Death, there was disorder but no large-scale, organized uprising. Manorial oppression likewise is difficult to defend when the peasant in the plague's aftermath was often enjoying better pay, reduced dues and services, broader opportunities, and a higher standard of living. Detailed study of the participants in the revolts most often labeled "peasant" uprisings has revealed the central involvement and apparent common cause of urban and rural tradesmen and craftsmen, not only manorial serfs.

The Black Death may indeed have made its greatest contribution to popular rebellion by expanding the peasant's horizons and fueling a sense of grievance at the pace of change, not at its absence. The plague may also have undercut adherence to the notion of a divinely sanctioned, static social order and buffeted the belief that preservation of manorial socioeconomic arrangements was essential to the survival of all, which in turn may have raised receptiveness to the apocalyptic, socially revolutionary message of preachers like England's John Ball. After the Black Death, change was inevitable and apparent to all.

The reasons for any individual rebellion were complex. Measures in the environs of Paris to check wage hikes caused by the plague doubtless fanned discontent and contributed to the outbreak of the Jacquerie of 1358, but high taxation to finance the Hundred Years' War, depredation by marauding mercenary bands in the French countryside, and the peasantry's conviction that the nobility had failed them in war also roiled popular discontent. In the related urban revolt led by Étienne Marcel (1355—58), tensions arose from the Parisian bourgeoisie's discontent with the war's progress, the crown's imposition of regressive sales and head taxes, and the devaluation of currency rather than from change attributable to the Black Death.

In the English Peasants' Rebellion of 1381, continued enforcement of the Statute of Laborers no doubt rankled and perhaps made the peasantry more receptive to provocative sermonizing, but labor legislation had not halted higher wages or the improvement in the peasant's standard of living. Discontent more likely arose from the unsatisfying pace of improvement in the peasant's lot. The regressive Poll Taxes of 1380 and 1381 also contributed to the discontent. It is furthermore noteworthy that the rebellion began in relatively affluent eastern England, not in the poorer west or north.

In the Ciompi revolt in Florence (1378—83), restrictive gild regulations and the denial of a political voice to workers raised tensions heightened by the Black Death; however, Florence's war with the papacy and an economic slump in the 1370s, which resulted in the devaluation of the penny in which the worker was paid, were equally if not more important in fomenting unrest. Once the value of the penny was restored to its former level in 1383, the rebellion in fact subsided.

In sum, the Black Death played some role in each uprising but, as with many medieval phenomena, it is difficult to gauge its importance relative to other causes. Perhaps the plague’s greatest contribution to unrest lay in its fostering of a shrinking economy that for a time was less able to absorb socioeconomic tensions than had the growing high medieval economy. The rebellions in any event achieved little. Promises made to the rebels were invariably broken and brutal reprisals often followed. The lot of the lower socioeconomic strata was improved incrementally by the larger economic changes already at work. Viewed from this perspective, the Black Death may have had more influence in resolving the worker’s grievances than in spurring revolt.

Conclusion

The European economy at the close of the Middle Ages (c. 1500) differed fundamentally from the pre-plague economy. In the countryside, a freer peasant derived greater material benefit from his toil. Fixed rents, if not outright ownership of land, had largely displaced customary dues and services, and, despite low grain prices, the peasant more readily fed himself and his family from his own land and produced a surplus for the market. Yields improved as reduced population permitted a greater focus on fertile lands and more frequent fallowing, a beneficial phenomenon for the peasant. More pronounced socioeconomic gradations developed among peasants as some, especially the more prosperous, exploited the changed circumstances, especially the availability of land. The peasant's gain was the lord's loss. As the Middle Ages waned, the lord was commonly a pure rentier whose income was subject to the depredations of inflation.

In trade and manufacturing, the relative ease of success during the high Middle Ages gave way to greater competition, which rewarded better business practices and leaner, meaner, and more efficient concerns. Greater sensitivity to the market and the cutting of costs ultimately rewarded the European consumer with a wider range of goods at better prices.

In the long term, the demographic restructuring caused by the Black Death perhaps fostered the possibility of new economic growth. The pestilence returned Europe's population roughly to its level c. 1100. As one scholar notes, the Black Death, unlike other catastrophes, destroyed people but not property, and the attenuated population was left with the whole of Europe's resources to exploit, resources far more substantial by 1347 than they had been two and a half centuries earlier, when they had been created from the ground up. In this environment, survivors also benefited from the technological and commercial skills developed during the course of the high Middle Ages. Viewed from another perspective, the Black Death was a cataclysmic event and retrenchment was inevitable, but it ultimately diminished economic impediments and opened new opportunity.

References and Further Reading:

Aberth, John. “The Black Death in the Diocese of Ely: The Evidence of the Bishop’s Register.” Journal of Medieval History 21 (1995): 275—87.

Aberth, John. From the Brink of the Apocalypse: Confronting Famine, War, Plague, and Death in the Later Middle Ages. New York: Routledge, 2001.

Aberth, John. The Black Death: The Great Mortality of 1348—1350, a Brief History with Documents. Boston and New York: Bedford/St. Martin's, 2005.

Aston, T. H. and C. H. E. Philpin, eds. The Brenner Debate: Agrarian Class Structure and Economic Development in Pre—Industrial Europe. Cambridge: Cambridge University Press, 1985.

Bailey, Mark D. “Demographic Decline in Late Medieval England: Some Thoughts on Recent Research.” Economic History Review 49 (1996): 1—19.

Bailey, Mark D. A Marginal Economy? East Anglian Breckland in the Later Middle Ages. Cambridge: Cambridge University Press, 1989.

Benedictow, Ole J. The Black Death, 1346—1353: The Complete History. Woodbridge, Suffolk: Boydell Press, 2004.

Bleukx, Koenraad. “Was the Black Death (1348—49) a Real Plague Epidemic? England as a Case Study.” In Serta Devota in Memoriam Guillelmi Lourdaux. Pars Posterior: Cultura Medievalis, edited by W. Verbeke, M. Haverals, R. de Keyser, and J. Goossens, 64—113. Leuven: Leuven University Press, 1995.

Blockmans, Willem P. “The Social and Economic Effects of Plague in the Low Countries, 1349—1500.” Revue Belge de Philologie et d’Histoire 58 (1980): 833—63.

Bolton, Jim L. “‘The World Upside Down': Plague as an Agent of Economic and Social Change.” In The Black Death in England, edited by M. Ormrod and P. Lindley. Stamford: Paul Watkins, 1996.

Bowsky, William M. “The Impact of the Black Death upon Sienese Government and Society.” Speculum 38 (1964): 1—34.

Campbell, Bruce M. S. “Agricultural Progress in Medieval England: Some Evidence from Eastern Norfolk.” Economic History Review 36 (1983): 26—46.

Campbell, Bruce M. S., ed. Before the Black Death: Studies in the ‘Crisis’ of the Early Fourteenth Century. Manchester: Manchester University Press, 1991.

Cipolla, Carlo M. Before the Industrial Revolution: European Society and Economy, 1000—1700, Third edition. New York: Norton, 1994.

Cohn, Samuel K. The Black Death Transformed: Disease and Culture in Early Renaissance Europe. London: Edward Arnold, 2002.

Cohn, Samuel K. "After the Black Death: Labour Legislation and Attitudes toward Labour in Late—Medieval Western Europe." Economic History Review 60 (2007): 457—85.

Davis, David E. “The Scarcity of Rats and the Black Death.” Journal of Interdisciplinary History 16 (1986): 455—70.

Davis, R. A. “The Effect of the Black Death on the Parish Priests of the Medieval Diocese of Coventry and Lichfield.” Bulletin of the Institute of Historical Research 62 (1989): 85—90.

Drancourt, Michel, Gerard Aboudharam, Michel Signoli, Olivier Detour, and Didier Raoult. "Detection of 400—Year—Old Yersinia Pestis DNA in Human Dental Pulp: An Approach to the Diagnosis of Ancient Septicemia." Proceedings of the National Academy of Sciences of the United States of America 95 (1998): 12637—40.

Dyer, Christopher. Standards of Living in the Middle Ages: Social Change in England, c. 1200—1520. Cambridge: Cambridge University Press, 1989.

Emery, Richard W. “The Black Death of 1348 in Perpignan.” Speculum 42 (1967): 611—23.

Farmer, David L. "Prices and Wages." In The Agrarian History of England and Wales, Vol. II, edited by H. E. Hallam, 715—817. Cambridge: Cambridge University Press, 1988.

Farmer, D. L. "Prices and Wages, 1350—1500." In The Agrarian History of England and Wales, Vol. III, edited by E. Miller, 431—94. Cambridge: Cambridge University Press, 1991.

Flinn, Michael W. “Plague in Europe and the Mediterranean Countries.” Journal of European Economic History 8 (1979): 131—48.

Freedman, Paul. The Origins of Peasant Servitude in Medieval Catalonia. New York: Cambridge University Press, 1991.

Gottfried, Robert. The Black Death: Natural and Human Disaster in Medieval Europe. New York: Free Press, 1983.

Gyug, Richard. “The Effects and Extent of the Black Death of 1348: New Evidence for Clerical Mortality in Barcelona.” Mediæval Studies 45 (1983): 385—98.

Harvey, Barbara F. “The Population Trend in England between 1300 and 1348.” Transactions of the Royal Historical Society 4th ser. 16 (1966): 23—42.

Harvey, P. D. A. A Medieval Oxfordshire Village: Cuxham, 1240—1400. London: Oxford University Press, 1965.

Hatcher, John. “England in the Aftermath of the Black Death.” Past and Present 144 (1994): 3—35.

Hatcher, John and Mark Bailey. Modelling the Middle Ages: The History and Theory of England’s Economic Development. Oxford: Oxford University Press, 2001.

Hatcher, John. Plague, Population, and the English Economy 1348—1530. London and Basingstoke: MacMillan Press Ltd., 1977.

Herlihy, David. The Black Death and the Transformation of the West, edited by S. K. Cohn. Cambridge and London: Cambridge University Press, 1997.

Horrox, Rosemary, transl. and ed. The Black Death. Manchester: Manchester University Press, 1994.

Hunt, Edwin S. and James M. Murray. A History of Business in Medieval Europe, 1200—1550. Cambridge: Cambridge University Press, 1999.

Jordan, William C. The Great Famine: Northern Europe in the Early Fourteenth Century. Princeton: Princeton University Press, 1996.

Lehfeldt, Elizabeth, ed. The Black Death. Boston: Houghton and Mifflin, 2005.

Lerner, Robert E. The Age of Adversity: The Fourteenth Century. Ithaca: Cornell University Press, 1968.

Le Roy Ladurie, Emmanuel. The Peasants of Languedoc, transl. J. Day. Urbana: University of Illinois Press, 1976.

Lomas, Richard A. “The Black Death in County Durham.” Journal of Medieval History 15 (1989): 127—40.

McNeill, William H. Plagues and Peoples. Garden City, New York: Anchor Books, 1976.

Miskimin, Harry A. The Economy of the Early Renaissance, 1300—1460. Cambridge: Cambridge University Press, 1975.

Morris, Christopher. "The Plague in Britain." Historical Journal 14 (1971): 205—15.

Munro, John H. “The Symbiosis of Towns and Textiles: Urban Institutions and the Changing Fortunes of Cloth Manufacturing in the Low Countries and England, 1270—1570.” Journal of Early Modern History 3 (1999): 1—74.

Munro, John H. “Wage—Stickiness, Monetary Changes, and the Real Incomes in Late—Medieval England and the Low Countries, 1300—1500: Did Money Matter?” Research in Economic History 21 (2003): 185—297.

Origo, Iris. The Merchant of Prato: Francesco di Marco Datini, 1335—1410. Boston: David R. Godine, 1957, 1986.

Platt, Colin. King Death: The Black Death and its Aftermath in Late—Medieval England. Toronto: University of Toronto Press, 1996.

Poos, Lawrence R. A Rural Society after the Black Death: Essex 1350—1575. Cambridge: Cambridge University Press, 1991.

Postan, Michael M. The Medieval Economy and Society: An Economic History of Britain in the Middle Ages. Harmondsworth, Middlesex: Penguin, 1975.

Pounds, Norman J. D. An Economic History of Europe. London: Longman, 1974.

Raoult, Didier, Gerard Aboudharam, Eric Crubézy, Georges Larrouy, Bertrand Ludes, and Michel Drancourt. “Molecular Identification by ‘Suicide PCR’ of Yersinia Pestis as the Agent of Medieval Black Death.” Proceedings of the National Academy of Sciences of the United States of America 97 (7 Nov. 2000): 12800—3.

Razi, Zvi. "Family, Land, and the Village Community in Later Medieval England." Past and Present 93 (1981): 3—36.

Russell, Josiah C. British Medieval Population. Albuquerque: University of New Mexico Press, 1948.

Scott, Susan and Christopher J. Duncan. Return of the Black Death: The World's Deadliest Serial Killer. Chichester, West Sussex and Hoboken, NJ: Wiley, 2004.

Shrewsbury, John F. D. A History of Bubonic Plague in the British Isles. Cambridge: Cambridge University Press, 1970.

Twigg, Graham. The Black Death: A Biological Reappraisal. London: Batsford Academic and Educational, 1984.

Waugh, Scott L. England in the Reign of Edward III. Cambridge: Cambridge University Press, 1991.

Ziegler, Philip. The Black Death. London: Penguin, 1969, 1987.

Citation: Routt, David. “The Economic Impact of the Black Death”. EH.Net Encyclopedia, edited by Robert Whaples. July 20, 2008. URL http://eh.net/encyclopedia/the-economic-impact-of-the-black-death/

The Economic History of Major League Baseball

Michael J. Haupert, University of Wisconsin — La Crosse

“The reason baseball calls itself a game is because it’s too screwed up to be a business” — Jim Bouton, author and former MLB player

Origins

The origin of modern baseball is usually dated to the formal organization of the New York Knickerbocker Base Ball Club in 1842. The rules they played by evolved into the rules of the organized leagues surviving today. In 1845 they organized into a dues-paying club in order to rent the Elysian Fields in Hoboken, New Jersey, to play their games on a regular basis. Teams of this era were typically amateur in name but almost always featured a few players who were covertly paid. The National Association of Base Ball Players was organized in 1858 in recognition of the profit potential of baseball. The first admission fee (50 cents) was charged that year for an all-star game between the Brooklyn and New York clubs. The association formalized playing rules and created an administrative structure. The original association had 22 teams and was decidedly amateur in theory, if not in practice, banning direct financial compensation for players. In reality, of course, the ban was freely and wantonly ignored, with teams paying players under the table and players regularly jumping from one club to another for better financial remuneration.

The Demand for Baseball

Before there were professional players, there was a recognition of the willingness of people to pay to see grown men play baseball. The demand for baseball extends beyond the attendance at live games to television, radio and print. As with most other forms of entertainment, the demand ranges from casual interest to a fanatical following. Many tertiary industries have grown around the demand for baseball, and sports in general, including the sports magazine trade, dedicated sports television and radio stations, tour companies specializing in sports trips, and an active memorabilia industry. While not all of this is devoted exclusively to baseball, it is indicative of the passion for sports, including baseball.

A live baseball game is consumed at the same time as the last stage of its production. Like an airline seat or a hotel room, it is a highly perishable good that cannot be inventoried. The result is that price discrimination can be employed. Since the earliest days of paid attendance, teams have discriminated based on seat location and on the sex and age of the patron. The first "ladies day," which offered free admission to any woman accompanied by a man, was offered by the Gotham club in 1883. The tradition would last for nearly a century. Teams have only recently begun to exploit the full potential of price discrimination by varying ticket prices according to the expected quality, date, and time of the game.
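
A small numerical sketch shows why charging different prices to different groups of fans can raise revenue relative to a single uniform price. The segments, seat counts, and willingness-to-pay figures below are invented for illustration and are not drawn from MLB data.

```python
# Illustrative sketch of price discrimination at the ballpark.
# Two hypothetical buyer segments with different willingness to pay;
# all numbers are invented and not drawn from MLB data.

segments = {
    "box seats": {"fans": 5_000,  "willing_to_pay": 20.0},
    "bleachers": {"fans": 15_000, "willing_to_pay": 6.0},
}

# Single price: to fill the park, the club can charge no more than the
# bleacher fan is willing to pay.
single_price_revenue = 6.0 * sum(s["fans"] for s in segments.values())

# Discriminating prices: each segment is charged up to its willingness to pay.
discriminating_revenue = sum(
    s["fans"] * s["willing_to_pay"] for s in segments.values()
)

print(f"single price revenue:   ${single_price_revenue:,.0f}")
print(f"discriminating revenue: ${discriminating_revenue:,.0f}")
# 6 * 20,000 = 120,000 versus 5,000*20 + 15,000*6 = 190,000:
# charging by segment captures more of the fans' willingness to pay.
```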

Baseball and the Media

Telegraph Rights

Baseball and the media have enjoyed a symbiotic relationship since newspapers began regularly covering games in the 1860s. Games in progress were broadcast by telegraph to saloons as early as the 1890s. In 1897 the first sale of broadcast rights took place. Each team received $300 in free telegrams as part of a league-wide contract to transmit game play-by-play over the telegraph wire. In 1913 Western Union paid each team $17,000 per year over five years for the rights to broadcast the games. The movie industry purchased the rights to film and show the highlights of the 1910 World Series for $500. In 1911 the owners managed to increase that rights fee to $3500.

Radio

It is hard to imagine that Major League Baseball (MLB) teams once saw the media as a threat to the value of their franchises. But originally they resisted putting their games on the radio for fear that customers would stay home and listen to the game for free rather than come to the park. They soon discovered that radio (and eventually television) provided free advertising that attracted even more fans as well as an additional source of revenue. By 2002, media revenue exceeded gate revenue for the average MLB team.

Originally, local radio broadcasts were the only source of media revenue. National radio broadcasts of regular season games were added in 1950 by the Liberty Broadcasting System. The contract lasted only one year, however, before radio reverted to local broadcasting. The World Series, by contrast, has been nationally broadcast since 1922. For national broadcasts, the league negotiates a contract with a provider and splits the proceeds equally among all the teams. Thus, national radio and television contracts enrich the pot for all teams on an equal basis.

In the early days of radio, teams saw the broadcasting of their games as free publicity, and charged little or nothing for the rights. The Chicago Cubs were the first team to regularly broadcast their home games, giving them away to local radio in 1925. It would be another fourteen years, however, before every team began regular radio broadcasts of their games.

Television

1939 was also the year that the first game was televised on an experimental basis. In 1946 the New York Yankees became the first team with a local television contract when they sold the rights to their games for $75,000. By the end of the century they sold those same rights for $52 million per season. By 1951 the World Series was a television staple, and by 1955 all teams sold at least some of their games to local television. In 1966 MLB followed the lead of the NFL and sold its first national television package, netting $300,000 per team. The latest national television contract paid $24 million to each team in 2002.

Table 1:

MLB Television Revenue, Ticket Prices and Average Player Salary 1964-2002

(real (inflation-adjusted) values are in 2002 dollars)

Year  Total TV revenue (millions of $)  Average ticket price  Average player salary
nominal real nominal real nominal real
1964 $ 21.28 $ 123 $ 2.25 $13.01 $ 14,863.00 $ 85,909
1965 $ 25.67 $ 146 $ 2.29 $13.02 $ 14,341.00 $ 81,565
1966 $ 27.04 $ 149 $ 2.35 $12.95 $ 17,664.00 $ 97,335
1967 $ 28.93 $ 156 $ 2.37 $12.78 $ 19,000.00 $ 102,454
1968 $ 31.04 $ 160 $ 2.44 $12.58 $ 20,632.00 $ 106,351
1969 $ 38.04 $ 186 $ 2.61 $12.76 $ 24,909.00 $ 121,795
1970 $ 38.09 $ 176 $ 2.72 $12.57 $ 29,303.00 $ 135,398
1971 $ 40.70 $ 180 $ 2.91 $12.87 $ 31,543.00 $ 139,502
1972 $ 41.09 $ 176 $ 2.95 $12.64 $ 34,092.00 $ 146,026
1973 $ 42.39 $ 171 $ 2.98 $12.02 $ 36,566.00 $ 147,506
1974 $ 43.25 $ 157 $ 3.10 $11.25 $ 40,839.00 $ 148,248
1975 $ 44.21 $ 147 $ 3.30 $10.97 $ 44,676.00 $ 148,549
1976 $ 50.01 $ 158 $ 3.45 $10.90 $ 52,300.00 $ 165,235
1977 $ 52.21 $ 154 $ 3.69 $10.88 $ 74,000.00 $ 218,272
1978 $ 52.31 $ 144 $ 3.98 $10.96 $ 97,800.00 $ 269,226
1979 $ 54.50 $ 135 $ 4.12 $10.21 $ 121,900.00 $ 301,954
1980 $ 80.00 $ 174 $ 4.45 $9.68 $ 146,500.00 $ 318,638
1981 $ 89.10 $ 176 $ 4.93 $9.74 $ 196,500.00 $ 388,148
1982 $ 117.60 $ 219 $ 5.17 $9.63 $ 245,000.00 $ 456,250
1983 $ 153.70 $ 277 $ 5.69 $10.25 $ 289,000.00 $ 520,839
1984 $ 268.40 $ 464 $ 5.81 $10.04 $ 325,900.00 $ 563,404
1985 $ 280.50 $ 468 $ 6.08 $10.14 $ 368,998.00 $ 615,654
1986 $ 321.60 $ 527 $ 6.21 $10.18 $ 410,517.00 $ 672,707
1987 $ 349.80 $ 553 $ 6.21 $9.82 $ 402,579.00 $ 636,438
1988 $ 364.10 $ 526 $ 6.21 $8.97 $ 430,688.00 $ 622,197
1989 $ 246.50 $ 357 $ 489,539.00 $ 708,988
1990 $ 659.30 $ 907 $ 589,483.00 $ 810,953
1991 $ 664.30 $ 877 $ 8.84 $11.67 $ 845,383.00 $ 1,116,063
1992 $ 363.00 $ 465 $ 9.41 $12.05 $1,012,424.00 $ 1,296,907
1993 $ 618.25 $ 769 $ 9.73 $12.10 $1,062,780.00 $ 1,321,921
1994 $ 716.05 $ 868 $ 10.62 $12.87 $1,154,486.00 $ 1,399,475
1995 $ 516.40 $ 609 $ 10.76 $12.69 $1,094,440.00 $ 1,290,693
1996 $ 706.30 $ 810 $ 11.32 $12.98 $1,101,455.00 $ 1,263,172
1997 $ 12.06 $13.51 $1,314,420.00 $ 1,472,150
1998 $ 13.58 $14.94 $1,378,506.00 $ 1,516,357
1999 $ 14.45 $15.61 $1,726,282.68 $ 1,864,385
2000 $ 16.22 $16.87 $1,987,543.03 $ 2,067,045
2001 $1,291.06 $ 1,310 $ 17.20 $17.45 $2,343,710.00 $ 2,378,093
2002 $ 17.85 $17.85 $2,385,903.07 $ 2,385,903

Notes: For 1989 and 1992, national TV data only; no local TV revenue is included. Real values are calculated using the Consumer Price Index.
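
The note indicates that nominal figures are deflated into 2002 dollars with the Consumer Price Index, i.e., real value = nominal value × (CPI in 2002 / CPI in the original year). The sketch below shows that conversion; the index values are approximate stand-ins used only for illustration, not the official BLS series.

```python
# Minimal sketch of the nominal-to-real conversion used in Table 1.
# real_2002 = nominal * (CPI_2002 / CPI_year). The CPI values here are
# approximate placeholders, not the official BLS series.

CPI = {1964: 31.0, 1980: 82.4, 2002: 179.9}   # illustrative index values

def to_2002_dollars(nominal, year, cpi=CPI):
    """Convert a nominal dollar amount from `year` into 2002 dollars."""
    return nominal * cpi[2002] / cpi[year]

# Example: a $2.25 ticket in 1964 and a $4.45 ticket in 1980.
print(f"1964 ticket in 2002 dollars: ${to_2002_dollars(2.25, 1964):.2f}")
print(f"1980 ticket in 2002 dollars: ${to_2002_dollars(4.45, 1980):.2f}")
# Both results land close to the real ticket prices shown in the table.
```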

As the importance of local media contracts grew, so did the problems associated with them. As cable and pay-per-view television became more popular, teams found them attractive sources of revenue. A fledgling cable channel could make its reputation by carrying the local ball team. In a large enough market, this could result in substantial payments to the local team. These local contracts did not pay all teams, only the home team. The problem from MLB's point of view was not the income but the variance in that income. That variance has increased over time and is the primary source of the gap in payrolls, which is linked to the gap in team quality cited as the "competitive balance problem." In 1962 the MLB average for local media income was $640,000, ranging from a low of $300,000 (Washington) to a high of $1.2 million (New York Yankees). In 2001, the average team garnered $19 million from local radio and television contracts, but the gap between the bottom and the top had widened to an incredible $51.5 million: the Montreal Expos received $536,000 for their local broadcast rights while the New York Yankees received more than $52 million for theirs. Revenue sharing has resulted in a redistribution of some of these funds from the wealthiest to the poorest teams, but the impact of this on the competitive balance problem remains to be seen.


Franchise values

Baseball has been about profits since the first admission fee was charged. The first professional league, the National Association, founded in 1871, charged a $10 franchise fee. The latest teams to join MLB paid $130 million apiece for the privilege in 1998.

Early Ownership Patterns

The value of franchises has mushroomed over time. In the early part of the twentieth century, owning a baseball team was a career choice for a wealthy sportsman. In some instances, it was a natural choice for someone with a financial interest in a related business, such as a brewery, that provided complementary goods. More commonly, the operation of a baseball team was the full-time occupation of the owner, who was usually one individual, occasionally a partnership, but never a corporation.

Corporate Ownership

This model of ownership has since changed. The typical owner of a baseball team is now either a conglomerate, such as Disney, AOL Time Warner, or the Chicago Tribune Company, or a wealthy individual who owns a (sometimes) related business and operates the baseball team on the side, perhaps as a hobby or as a complementary business. This transition began when the tax benefits of owning a baseball team became significant enough that they were worth more to a wealthy conglomerate than to a family owner. A baseball team that can show a negative bottom line while delivering a positive cash flow can provide significant tax benefits by offsetting income from another business. Another advantage of corporate ownership is the ability to cross-market products. For example, the Tribune Company owns the Chicago Cubs and is able to use the team as part of its television programming. If it is more profitable for the company to show income on the Tribune ledger than on the Cubs ledger, then it decreases the payment made to the team for the broadcast rights to its games. If a team owner does not have another source of income, then the ability to show a loss on a baseball team does not provide a tax break on other income. One important source of the tax advantage of owning a franchise is the ability to depreciate the value of player contracts. In 1935 the IRS ruled that baseball teams could depreciate the value of their player contracts. This is an anomaly, since labor is not a depreciating asset.
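
The mechanics of the tax advantage can be sketched with a stylized example: a non-cash depreciation charge against player contracts turns a cash-positive team into a paper loss, and that loss shelters income from the owner's other business. All figures below are invented, and the treatment is deliberately simplified; it is not a model of any actual tax code.

```python
# Stylized sketch of the tax shelter described above. All numbers are invented
# and the treatment is deliberately simplified (no actual tax rules modeled).

team_revenue          = 60.0   # $ millions
team_cash_expenses    = 55.0   # payroll, stadium costs, travel, etc.
contract_depreciation = 10.0   # non-cash write-off of player contract value

team_cash_flow   = team_revenue - team_cash_expenses         # +5.0: cash positive
team_book_income = team_cash_flow - contract_depreciation    # -5.0: paper loss

other_business_income = 40.0
tax_rate              = 0.40

tax_without_team = other_business_income * tax_rate
tax_with_team    = (other_business_income + team_book_income) * tax_rate

print(f"team cash flow:               {team_cash_flow:+.1f}")
print(f"team book income:             {team_book_income:+.1f}")
print(f"tax saved by owning the team: {tax_without_team - tax_with_team:.1f}")
# The paper loss of 5.0 shelters 5.0 of other income, saving 2.0 in tax,
# even though the team itself generated positive cash.
```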

Table 2: Comparative Prices for MLB Salaries, Tickets and Franchise Values for Selected Years

Nominal values
Year | Minimum salary ($000) | Mean salary ($000) | Maximum salary ($000) | Average ticket price ($) | Average franchise value ($ millions)
1920 | n/a | 5 | 20 | 1.00 | 0.794
1946 | n/a | 11.3 | 18.5 | 1.40 | 2.5
1950 | n/a | 13.3 | 45 | 1.54 | 2.54
1960 | 3 | 16 | 85 | 1.96 | 5.58
1970 | 12 | 29.3 | 78 | 2.72 | 10.13
1980 | 30 | 143.8 | 1300 | 4.45 | 32.1
1985 | 60 | 371.2 | 2130 | 6.08 | 40
1991 | 100 | 851.5 | 3200 | 8.84 | 110
1994 | 109 | 1153 | 5975 | 10.62 | 111
1997 | 150 | 1370 | 10,800 | 12.06 | 194
2001 | 200 | 2261 | 22,000 | 18.42 | 286
Real values (2002 dollars)
Year | Minimum salary ($000) | Mean salary ($000) | Maximum salary ($000) | Average ticket price ($) | Average franchise value ($ millions)
1920 | n/a | 44.85 | 179.4 | 8.97 | 7.12218
1946 | n/a | 104.299 | 170.755 | 12.922 | 23.075
1950 | n/a | 99.351 | 336.15 | 11.5038 | 18.9738
1960 | 18.24 | 97.28 | 516.8 | 11.9168 | 33.9264
1970 | 55.44 | 135.366 | 360.36 | 12.5664 | 46.8006
1980 | 65.4 | 313.484 | 2834 | 9.701 | 69.978
1985 | 100.2 | 619.904 | 3557.1 | 10.1536 | 66.8
1991 | 132 | 1123.98 | 4224 | 11.6688 | 145.2
1994 | 131.89 | 1395.13 | 7229.75 | 12.8502 | 134.31
1997 | 168 | 1534.4 | 12096 | 13.5072 | 217.28
2001 | 202 | 2283.61 | 22220 | 18.6042 | 288.86

The most significant change in the value of franchises has occurred in the last decade as a result of new stadium construction. A new stadium creates additional sources of revenue for a team owner, which raises the value of the franchise, and it is this appreciation in franchise value that is the most profitable part of ownership. Eight new stadiums were constructed between 1991 and 1999 for existing MLB teams, and the average franchise value for the teams in those stadiums increased twenty percent in the year the new stadium opened.


The Market Structure of MLB and Players’ Organizations

Major League Baseball is a highly successful oligopoly of professional baseball teams. The teams have successfully protected themselves against competition from other leagues for more than 125 years. The closest call came when two rival leagues, the established National League and the American League (a former minor league, the Western League, renamed in 1900), merged in 1903 to form the structure that exists to this day. MLB lost some of its power in 1976 when it lost its monopsonistic control over the player labor market, but it retains its monopolistic hold on the number and location of franchises. Franchise owners must now share a greater percentage of their revenue with the hired help, whereas prior to 1976 they controlled how much of the revenue to divert to the players.

The owners of professional baseball teams have acted in unison since the very beginning. They conspired to hold down the salaries of players with a secret reserve agreement in 1878. This created a monopsony whereby a player could only bargain with the team that originally signed him. This stranglehold on the labor market would last a century.

The baseball labor market is one of extremes. Baseball players began their labor history as amateurs whose skills quickly became highly demanded. For some, this translated into a career. Ultimately, all players became victims of a well-organized and obstinate cartel. Players lost their ability to bargain and offer their services competitively for a century. Despite several attempts to organize and a few attempts to create additional demand for their services from outside sources, they failed to win the right to sell their labor to the employer of their choice.

Beginning of Professionalization

The first team of baseball players to be openly paid was the 1869 Redstockings of Cincinnati. Prior to that, teams were organized as amateur squads who played for the pride of their hometown, club or college. The stakes in these games were bragging rights, often a trophy or loving cup, and occasionally a cash prize put up by a benefactor, or as a wager between the teams. It was inevitable that professional players would soon follow.

The first known professional players were paid under the table. The desire to win had eclipsed the desire to observe good sportsmanship, and the first step down the slope toward full professionalization of the sport had been taken. Just a few years later, in 1869, the first professional team was established. The Redstockings are as famous for being the first professional team as they are for their record and barnstorming accomplishments. The team was openly professional, and thus served as a worthy goal for other teams, amateur, semi-professional, and professional alike. The Cincinnati squad spent the next year barnstorming across America, taking on, and defeating, all challengers. In the process they drew attention to the game of baseball, and played a key part in its growing popularity. Just two years later, the first entirely professional baseball league would be established.

National Association of Professional Baseball Players

The formation of the National Association of Professional Base Ball Players in 1871 created a different level of competition for baseball players. The professional organization, which originally included nine teams, broke away from the National Association of Base Ball Players, which used amateur players. The amateur league folded three years after the split. The professional league was reorganized and renamed the National League in 1876. Originally, professional teams competed to sign players, and the best were rewarded handsomely, earning as much as $4500 per season. This was good money, given that a skilled laborer might earn $1200-$1500 per year for a 60-hour work week.

This system, however, proved to be problematic. Teams competed so fiercely for players that they regularly raided each other’s rosters. It was not uncommon for players to jump from one team to another during the season for a pay increase. This not only cost team owners money, but also created havoc with the integrity of the game, as players moved among teams, causing dramatic mid-season swings in the quality of teams.

Beginning of the Reserve Clause, 1878-79

During the winter of 1878-79, team owners gathered to discuss the problem of player roster jumping. They made a secret agreement among themselves not to raid one another's rosters during the season. Furthermore, they agreed to restrain themselves during the off-season as well. Each owner would circulate to the other owners a list of five players he intended to keep on his roster the following season. By agreement, none of the owners would offer a contract to any of these "reserved" players. Hence, the reserve clause was born. It would take nearly a century before this was struck down. In the meantime, it went from five players (about half the team) to the entire team (1883) and to a formal contract clause (1887) agreed to by the players. Owners would ultimately make such a convincing case for the necessity of the reserve clause that players themselves testified to its necessity in the Celler Anti-monopoly Hearings in 1951.

In 1892 the minor league teams agreed to a system that allowed the National League teams to draft players from their rosters. This agreement was a response to their failure to get the NL to honor their reserve lists. In other words, what was good for the goose was not good for the gander: while NL owners agreed to honor one another's reserve lists, they paid no such honor to the reserve lists of teams in other organized, professional leagues. They believed they were at the top of the pyramid, where all the best players should be, and therefore they would get those players when they wanted them. As part of the draft agreement, the minor league teams allowed the NL teams to select players from their rosters for fixed payments. The NL sacrificed some money, but restored a bit of order to the process, not to mention eliminating expensive bidding wars among teams for the services of minor league players.

The Players League

The first revolt by the players came in 1890, when they formed their own league, called the Players League, to compete with the National League and its rival, the American Association (AA), founded in 1882. The Players League was the first and only example of a cooperative league. The league featured profit sharing with players, an abolition of unilateral contract transfers, and no reserve clause. The competing league caused a bidding war for talent, leading to salary increases for the best players. The “war” ended after just one season, when the National League and American Association agreed to allow owners of some Players League teams to buy existing franchises. The following year, the NL and AA merged by buying out four AA franchises for $130,000 and merging the other four into the National League, to form a single twelve-team circuit.

Syndicates

This proved to be an unwieldy league arrangement, however, and some of the franchises proved financially unstable. In order to preserve the structure of the league and avoid the bankruptcy of some teams, syndicate ownership evolved, in which owners purchased a controlling interest in two teams. This did not help the stability of the league. Instead, it became a situation in which a syndicate used one team to train young players and feed the best of them to the other team. This period in league history exhibits some of the greatest disparities between the best and worst teams in the league. In 1899 the Cleveland Spiders, the poor stepsister in the Cleveland-St. Louis syndicate, lost a record 134 out of 154 games, a level of futility that has never been equaled. In 1900 the NL was reduced to eight teams, buying out four of the existing franchises (three of them original AA franchises) for $60,000.

Western League Competes with National League

Syndicate ownership was ended in 1900 as the final part of the reorganization of the NL. The reorganization also prompted the minor Western League to declare major league status and move some teams into NL markets for direct competition (Chicago, Boston, St. Louis, Philadelphia and Manhattan). All-out competition followed in 1901, complete with roster raiding, salary increases, and team jumping, much to the benefit of the players. Syndicate ownership appeared again in 1902 when the owners of the Pittsburgh franchise purchased an interest in the Philadelphia club. Owners briefly entertained the idea of turning the entire league into a syndicate, transferring players to the market where they might be most valuable. The idea was dropped, however, for fear that the game would lose credibility, resulting in a decrease in attendance. In 1910 syndicate ownership was formally banned, though it did occur again in 2002, when the Montreal franchise was purchased by the other 29 MLB franchises as part of a three-way franchise swap involving Boston, Miami and Montreal. MLB is currently looking to sell the franchise and move it to a more profitable market.

National and American Leagues End Competition

Team owners quickly saw the light, and in 1903 they made an agreement to honor one another's rosters. Once more the labor wars ended, this time in an agreement that established the major leagues as an organization of two cooperating leagues: the National League and the American League, each with eight teams located in the largest cities east of the Mississippi (with the exception of St. Louis), and each honoring the reserved rosters of teams in the other. This structure would prove remarkably stable: no franchise changed cities until 1953, when the Boston Braves moved to Milwaukee, the first relocation in half a century.

Franchise Numbers and Movements

The location and number of franchises has been a tightly controlled issue for teams since leagues were first organized. Though franchise movements were not rare in the early days of the league, they have always been under the control of the league, not the individual franchise owners. An owner is accepted into the league, but may not change the location of his or her franchise without the approval of the other members of the league. In addition, moving the location of a franchise within the vicinity of another franchise requires the permission of the affected franchise. As a result, MLB franchises have been very stable over time in regard to location. The size of the league has also been stable. From the merger of the AL and NL in 1903 until 1961, the league retained the same sixteen teams. Since that time, expansion has occurred fairly regularly, increasing to its present size of 30 teams with the latest round of expansion in 1998. In 2001, the league proposed going in the other direction, suggesting that it would contract by two teams in response to an alleged fiscal crisis and breakdown in competitive balance. Those plans were postponed at least four years by the labor agreement signed in 2002.

Table 3: MLB Franchise Sales Data by Decade

Decade | Average purchase price in millions (2002 dollars) | Average annual rate of increase in franchise sales price (%) | Average annual rate of return on DJIA, including capital appreciation and annual dividends (%) | Average tenure of ownership of MLB franchise (years) | Number of franchise sales
1910s | .585 (10.35) | n/a | n/a | 6 | 6
1920s | 1.02 (10.4) | 5.7 | 14.8 | 12 | 9
1930s | .673 (8.82) | -4.1 | -0.3 | 19.5 | 4
1940s | 1.56 (15.6) | 8.8 | 10.8 | 15.5 | 11
1950s | 3.52 (23.65) | 8.5 | 16.7 | 13.5 | 10
1960s | 7.64 (43.45) | 8.1 | 7.4 | 16 | 10
1970s | 12.62 (41.96) | 5.1 | 7.7 | 10 | 9
1980s | 40.7 (67.96) | 12.4 | 14.0 | 11 | 12
1990s | 172.71 (203.68) | 15.6 | 12.6 | 15.8 | 14

Note: 2002 values calculated using the Consumer Price Index for decade midpoint

Negro Leagues

Because African Americans were excluded from participating in MLB until Jackie Robinson broke the color barrier in 1947, separate professional leagues existed for black players. The first was formed in 1920, and the last survived until 1960, though their future was doomed by the integration of the major and minor leagues.

Relocations

As revenues dried up or new markets beckoned due to shifts in population and the decreasing cost of transcontinental transportation, franchises began relocating in the second half of the twentieth century. The period from 1953 to 1972 saw a spate of franchise relocation: teams moved to Kansas City, Minneapolis, Baltimore, Los Angeles, Oakland, Dallas and San Francisco in pursuit of new markets. Most of these moves involved one team leaving a market it shared with another team. The last team to relocate was the Washington D.C. franchise, which moved to suburban Dallas in 1972, the second time in just over a decade that a franchise had left the nation's capital. The original franchise, a charter member of the American League, had moved to Minneapolis in 1961. While there have been no relocations since then, there have been plenty of threats to relocate, frequently made by teams trying to get a new stadium built with public financing.

There were still a couple of challenges to the reserve clause. Until the 1960s, these came in the form of rival leagues creating competition for players, not legal challenges to the clause itself.

Federal League and the 1922 Supreme Court Antitrust Exemption

In 1914 the Federal League debuted. The new league did not recognize the reserve clause of the existing leagues, and raided their rosters, successfully luring some of the best players to the rival league with huge salary increases. Other players benefited from the new competition, and were able to win handsome raises from their NL and AL employers in return for not jumping leagues. The Federal League folded after two seasons when some of the franchise owners were granted access to the major leagues. No new teams were added, but a few owners were allowed to purchase existing NL and AL teams.

The first attack on the organizational structure of the major leagues to reach the US Supreme Court came when the shunned owner of the Baltimore club of the Federal League sued major league baseball for violation of antitrust law. Federal Baseball Club of Baltimore v. the National League eventually reached the Supreme Court, which in 1922 rendered its famous decision that baseball was not interstate commerce and therefore was exempt from antitrust laws.

Early Strike and Labor Relations Problems

The first player strike actually occurred in 1912. The Detroit Tigers, in a show of solidarity with their embattled star Ty Cobb, refused to take the field in protest of what they regarded as an unfair suspension of Cobb unless the suspension was lifted. When warned that the team faced the prospect of a forfeit and a $5000 fine if it did not field a team, owner Frank Navin recruited local amateur players to suit up for the Tigers. The results were not surprising: a 24-2 victory for the visiting Philadelphia Athletics.

This was not an organized strike against the system per se, but it was indicative of the problems existent in the labor relations between players and owners. Cobb’s suspension was determined by the owner of the team, with no chance for a hearing for Cobb, and with no guidance from any existing labor agreement regarding suspensions. The owner was in total control, and could mete out whatever punishment for whatever length he deemed appropriate.

Mexican League

The next competing league appeared in 1946 from an unusual source: Mexico. Again, as in previous league wars, the competition benefited the players. In this case the players who benefited most were those who were able to use Mexican League offers as leverage to gain better contracts from their major league teams. Those who accepted offers from Mexican League teams would ultimately regret it. The league was under-financed, the playing and travel conditions were far below major league standards, and the wrath of the major leagues was deep. When the first paychecks were missed, the players began to head back to the U.S. However, they found no jobs waiting for them: Major League Baseball Commissioner Happy Chandler blacklisted them from the league. This led to a lawsuit, Gardella v. MLB, brought by Danny Gardella, one of the blacklisted players, who sued MLB for restraint of trade after being prevented from returning to the league after accepting a Mexican League offer for the 1946 season. The case was eventually settled out of court after a Federal Appeals court sided with Gardella in 1949. While many of the players ultimately returned to the major leagues, they lost several years of their careers in the process.

Player Organizations

The first organization of baseball players came in 1885, in part a response to the reserve clause enacted by owners. The National Brotherhood of Professional Base Ball Players was not particularly successful, however. In fact, just two years later the players agreed to the reserve clause, and it became a part of the standard player's contract for the next 90 years.

In 1900 another player organization was founded, the Players Protective Association. Competition broke out the next year, when the Western League declared itself a major league, and became the American League. It would merge with the National League for the 1903 season, and the brief period of roster raiding and increasing player salaries ended, as both leagues agreed to recognize one another’s rosters and reserve clauses. The Players Protective Association faded into obscurity amid the brief period of increased competition and player salaries.

Failure and Consequences of the American Baseball Guild

In 1946 the foundation was laid for the current Major League Baseball Players Association (MLBPA). Labor lawyer Robert Murphy created the American Baseball Guild, a players' organization, after holding secret talks with players. Ultimately, the players voted not to form a union, and instead followed the encouragement of the owners and formed their own committee of player representatives to bargain directly with the owners. The outcome of the negotiations was a set of changes to the standard labor contract. Up to this point, the contract had been pretty much dictated by the owners. It contained such features as the right to waive a player with only ten days' notice, the right to unilaterally decrease salary from one year to the next by any amount, and of course the reserve clause.

The players did not make major headway with the owners, but they did garner some concessions. Among them were a maximum pay cut of 25%, a minimum salary of $5000, a promise by the owners to create a pension plan, and $25 per week in living expenses for spring training camp. Until 1947, players received only expense money for spring training, no salary. The players today, despite their multimillion-dollar contracts, still receive “Murphy money” for spring training as well as a meal allowance for each day they are on the road traveling with the club.

Facing eight antitrust lawsuits in 1950, MLB asked Congress to pass a general immunity bill for all professional sports leagues. The request ultimately led to MLB's inclusion in the Celler Anti-monopoly hearings in 1951, but no legislative action was recommended. In fact, the owners by this time had so thoroughly convinced the players of the necessity of the reserve clause to the very survival of MLB that several players testified in favor of the monopsonistic structure of the league, citing it as necessary to maintain the competitive balance among teams that made the league viable. In 1957 the House Antitrust Subcommittee revisited the issue, once again recommending no change in the status quo.

Impacts of the Reserve Clause

Simon Rottenberg was the first economist to seriously look into professional baseball with the publication of his classic 1956 article “The Baseball Players’ Labor Market.” His conclusion, not surprisingly, was that the reserve clause transferred wealth from the players to owners, but had only a marginal impact on where the best players ended up. They would end up playing for the teams in the market in the best position to exploit their talents for the benefit of paying customers – in other words, the biggest markets: primarily New York. Given the quality of the New York teams (one in Manhattan, one in the Bronx and one in Brooklyn) during the era of Rottenberg’s study, his conclusion seems rather obvious. During the decade preceding his study, the three New York teams consistently performed better than their rivals. The New York Yankees won eight of ten American League pennants, and the two National League New York entries won eight of ten NL pennants (six for the Brooklyn Dodgers, two for the New York Giants).

Foundation of the Major League Baseball Players Association

The current players organization, the Major League Baseball Players Association, was formed in 1954. It remained in the background, however, until the players hired Marvin Miller in 1966 to head the organization. Hiring Miller, a former negotiator for the U.S. steel workers, would turn out to be a stroke of genius. Miller began with a series of small gains for players, including increases in the minimum salary, pension contributions by owners and limits to the maximum salary reduction owners could impose. The first test of the big item – the reserve clause – reached the Supreme Court in 1972.

Free Agency, Arbitration and the Reserve Clause

Curt Flood

Curt Flood, a star player for the St. Louis Cardinals, had been traded to the Philadelphia Phillies in 1970. Flood did not want to move from St. Louis, and informed both teams and the commissioner's office that he did not intend to leave; he would play out his contract in St. Louis. Commissioner Bowie Kuhn ruled that Flood had no right to act in this way, and ordered him to play for Philadelphia or not play at all. Flood chose the latter and sued MLB for violation of antitrust laws. The case reached the Supreme Court in 1972, and the court sided with MLB in Flood v. Kuhn. The court acknowledged that the 1922 ruling exempting MLB from antitrust law was an anomaly and should be overturned, but it refused to overturn the decision itself, arguing instead that if Congress wanted to rectify the anomaly, it should do so. Therefore the court stood pat, and the owners felt the case was settled permanently: the reserve clause had once again withstood legal challenge. They could not have been more mistaken. While the reserve clause has never been overturned in a court of law, it would soon be drastically altered at the bargaining table, ultimately leading to a revolution in the way baseball talent is dispersed and revenues are shared in the professional sports industry.

Curt Flood lost the legal battle, but the players ultimately won the war, and are no longer restrained by the reserve clause beyond the first two years of their major league contract. In a series of labor market victories beginning in the wake of the Flood decision in 1972 and continuing through the rest of the century, the players won the right to free agency (i.e. to bargain with any team for their services) after six years of service, escalating pension contributions, salary arbitration (after two to three seasons, depending on their service time), individual contract negotiations with agent representatives, hearing committees for disciplinary actions, reductions in maximum salary cuts, increases in travel money and improved travel conditions, the right to have disputes between players and owners settled by an independent arbitrator, and a limit to the number of times their contract could be assigned to a minor league team. Of course the biggest victory was free agency.

Impact of Free Agency – Salary Gains

The right to bargain with other teams for their services changed the landscape of the industry dramatically. No longer were players shackled to one team forever, subject to the whims of the owner for their salary and status. Now they were free to bargain with any and all teams. The impact on salaries was incredible. The average salary skyrocketed from $45,000 in 1975 to $289,000 in 1983.

Table 4: Maximum and Average MLB Player Salaries by Decade

(real values in 2002 dollars)

Period | Highest salary (nominal) | Highest salary (real) | Year | Player | Team | Average salary (nominal) | Average salary (real) | Notes
1800s | $12,500 | $246,250 | 1892 | King Kelly | Boston NL | $3,054 | $60,163.80 | 22 observations
1900s | $10,000 | $190,000 | 1907 | Honus Wagner | Pittsburgh Pirates | $6,523 | $123,937 | 13 observations
1910s | $20,000 | $360,000 | 1913 | Frank Chance | New York Yankees | $2,307 | $41,526 | 339 observations
1920s | $80,000 | $717,600 | 1927 | Ty Cobb | Philadelphia Athletics | $6,992 | $72,017.60 | 340 observations
1930s | $84,098.33 | $899,852 | 1930 | Babe Ruth | New York Yankees | $7,748 | $82,903.60 | 210 observations
1940s | $100,000 | $755,000 | 1949 | Joe DiMaggio | New York Yankees | $11,197 | $84,537.35 | Average salary calculated using 1949 and 1943 seasons plus 139 additional observations
1950s | $125,000 | $772,500 | 1959 | Ted Williams | Boston Red Sox | $12,340 | $76,261.20 | Average salary estimate based on average of 1949 and 1964 salaries
1960s | $111,000 | $572,164.95 | 1968 | Curt Flood | St. Louis Cardinals | $18,568 | $95,711.34 | 624 observations
1970s | $561,500 | $1,656,215.28 | 1977 | Mike Schmidt | Philadelphia Phillies | $55,802 | $164,595.06 | 2208 observations
1980s | $2,766,666 | $4,006,895.59 | 1989 | Orel Hershiser, Frank Viola | Dodgers, Twins | $333,686 | $483,269.38 | approx. 6500 observations
1990s | $11,949,794 | $12,905,777.52 | 1999 | Albert Belle | Baltimore Orioles | $1,160,548 | $1,253,391.84 | approx. 7000 observations
2000s | $22,000,000 | $22,322,742.55 | 2001 | Alex Rodriguez | Texas Rangers | $2,165,627 | $2,197,397 | 2250 observations

Real values based on 2002 Consumer Price Index.

Over the long haul, the changes have been even more dramatic. The average salary increased from $45,000 in 1975 to $2.4 million in 2002, while the minimum salary increased from $6000 to $200,000 and the highest paid player increased from $240,000 to $22 million. This is a 5200% increase in the average salary. Of course, not all of that increase is due to free agency. Revenues increased during this period by nearly 1800% from an average of $6.4 million to $119 million, primarily due to the 2800% increase in television revenue over the same period. Ticket prices increased by 439% while attendance doubled (the number of MLB teams increased from 24 to 30).
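
The growth rates quoted above follow from the simple percentage-change formula; the short sketch below reproduces them from the start and end values given in the text (the dollar figures are the article's, and the rounding to round numbers is mine).

# Percentage increase = (end - start) / start * 100
def pct_increase(start, end):
    return (end - start) / start * 100

print(round(pct_increase(45_000, 2_400_000)))       # average salary: ~5233%, i.e. roughly 5200%
print(round(pct_increase(6_000, 200_000)))          # minimum salary: ~3233%
print(round(pct_increase(240_000, 22_000_000)))     # highest salary: ~9067%
print(round(pct_increase(6_400_000, 119_000_000)))  # average team revenue: ~1759%, i.e. nearly 1800%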

Strikes and Lockouts

Miller organized the players and unified them as no one had done before. The first test of their resolve came in 1972, when the owners refused to bargain on pension and salary issues. The players responded by going out on the first league-wide strike in American professional sports history. The strike began during spring training, and carried on into the season. The owners finally conceded in early April after nearly 100 games were lost to the strike. The labor stoppage became the favorite weapon of the players, who would employ it again in 1981, 1985, and 1994. The last of these strikes cancelled the World Series for the first time since 1904, and carried on into the 1995 season. The owners preempted strikes in two other labor disputes, locking out the players in 1976 and 1989. After each work stoppage, the players won the concessions they demanded and fended off attempts by owners to reverse previous player gains, particularly in the areas of free agency and arbitration. From the first strike in 1972 through 1994, every time the labor agreement between the two sides expired, a work stoppage ensued. In August of 2002 that pattern was broken when the two sides agreed to a new labor contract for the first time without a work stoppage.

Catfish Hunter

The first player to become a free agent did so due to a technicality. In 1974 Catfish Hunter, a pitcher for the Oakland Athletics, negotiated a contract with the owner, Charles Finley, which required Finley to make a payment into a trust fund for Hunter by a certain date. When Finley missed the date and then tried to pay Hunter directly instead of honoring the clause, Hunter and Miller filed a complaint charging that the contract should be null and void because Finley had broken it. The case went to an arbitrator, who sided with Hunter and voided the contract, making Hunter a free agent. In a bidding frenzy, Hunter ultimately signed what was then a record contract with the New York Yankees. It set precedents both for its length (five years, guaranteed) and for its annual salary of $750,000. Prior to the dawning of free agency, it was rare for a player to get anything more than a one-year contract, and a guaranteed contract was virtually unheard of. If a player was injured or fell off in performance, an owner would slash his salary or release him and vacate the remainder of his contract.

The End of the Reserve Clause – Messersmith and McNally

The first real test of the reserve clause came in 1975, when, on the advice of Miller, Andy Messersmith played the season without signing a contract. Dave McNally also refused to sign a contract, though he had unofficially retired at the time. Up to this time, the reserve clause meant that a team could renew a player's contract at its discretion; the only change in the clause since 1879 had been to the maximum amount by which the owner could reduce the player's salary. In order to test the clause, which allowed teams to maintain contractual rights to players in perpetuity, Messersmith and McNally refused to sign contracts, and their teams automatically renewed their contracts from the previous season, per the reserve clause. The players' argument was that if no contract was signed, then there was no reserve clause, and Messersmith and McNally would therefore be free to negotiate with any team at the end of the season. Arbitrator Peter Seitz agreed, striking down the reserve clause on December 23, 1975 and clearing the way for players to become free agents and sell their services to the highest bidder. Messersmith and McNally became the first players to challenge and successfully escape the reserve clause. The baseball labor market changed permanently and dramatically in favor of the players, and has never turned back.

Current Labor Arrangements

The baseball labor market as it exists today is a result of bargaining between owners and players. Owners ultimately conceded the reserve clause and negotiated a short period of exclusivity for a team with a player. The argument they put forward was that the cost of developing players was so high, they needed a window of time when they could recoup those investments. The existing situation allows them six years. A player is bound to his original team for the first six years of his MLB contract, after which he can become a free agent – though some players bargain away that right by signing long-term contracts before the end of their sixth year.

During that six-year period however, players are not bound to the salary whims of the owners. The minimum salary will rise to $300,000 in 2003, there is a 10% maximum salary cut from one year to the next, and after two seasons players are eligible to have their contract decided by an independent arbitrator if they cannot come to an agreement with the team.

Arbitration

After their successful strike in 1972, the players had substantially strengthened their bargaining position. The next year they claimed a major victory when the owners agreed to a system of salary arbitration for players who did not yet qualify for free agency. Arbitration, won by the players in 1973, has since proved to be one of the costliest concessions the owners ever made. Arbitration requires each side to submit a final offer to an arbitrator, who must then choose one offer or the other; the arbitrator may not compromise between the offers. Once a choice is made, both sides are obligated to accept that contract.

Once eligible for arbitration, a player, while not a free agent, does stand to reap a financial windfall. If a player and owner (realistically, a player’s agent and the owner’s agent – the general manager) cannot agree on a contract, either side may file for arbitration. If the other does not agree to go to arbitration, then the player becomes a free agent, and may bargain with any team. If arbitration is accepted, then both sides are bound to accept the contract awarded by the arbitrator. In practice, most of the contracts are settled before they reach the arbitrator. A player will file for arbitration, both sides will submit their final contract offers to the arbitrator, and then will usually settle somewhere in between the final offers. If they do not settle, then the arbitrator must hear the case and make a decision. Both sides will argue their point, which essentially boils down to comparing the player to other players in the league and their salaries. The arbitrator then decides which of the two final offers is closer to the market value for that player, and picks that one.
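
The decision rule described above (often called final-offer arbitration) can be summarized in a few lines. The sketch below is illustrative only: the dollar figures are hypothetical, and a real arbitrator weighs comparable-player salaries rather than a single "market value" number.

# Final-offer arbitration: the arbitrator must pick whichever final offer
# lies closer to his or her estimate of the player's market value.
def final_offer_arbitration(player_offer, team_offer, estimated_market_value):
    if abs(player_offer - estimated_market_value) <= abs(team_offer - estimated_market_value):
        return player_offer
    return team_offer

# Hypothetical case: the player files at $4.2 million, the team counters at $3.0 million,
# and comparable players suggest a market value around $3.8 million.
award = final_offer_arbitration(4_200_000, 3_000_000, 3_800_000)
print(f"arbitrator awards ${award:,}")   # the player's figure, since it is closer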

Collusion under Ueberroth

The owners, used to nearly a century of one-sided labor negotiations, quickly grew tired of the new economics of the player labor market. They went through a series of labor negotiators, each one faring as poorly as the next, until they hit upon a different solution. Beginning in 1986, under the guidance of commissioner Peter Ueberroth, they tried collusion to stem the increase in player salaries: teams agreed not to bid on one another's free agents. The strategy worked, for a while. During the next two seasons, player salaries grew at lower rates and high-profile free agents routinely had difficulty finding anybody interested in their services. The players filed a complaint, charging the owners with violating the labor agreement signed by owners and players in 1981, which prohibited collusive action. They filed separate collusion charges for each of the three seasons from 1985 to 1987, and won each time. The rulings voided the final years of some players' contracts, awarding them "second look" free agency status, and levied fines in excess of $280 million on the owners. The result was a return to unfettered free agency for the players, a massive financial windfall for the affected players, a black eye for the owners, and the end of the line for Commissioner Ueberroth.

Table 5:

Average MLB Payroll as a Percentage of Total Team Revenues for Selected Years

Year Percentage
1929 35.3
1933 35.9
1939 32.4
1943 24.8
1946 22.1
1950 17.6
1974 20.5
1977 25.1
1980 39.1
1985 39.7
1988 34.2
1989 31.6
1990 33.4
1991 42.9
1992 50.7
1994 60.5
1997 53.6
2001 54.1

Exploitation Patterns

Economist Andrew Zimbalist calculated the degree of market exploitation for baseball players for the years 1986-89, a decade after free agency began and during the years of collusion, using a measure of the marginal revenue product of players. The marginal revenue product (MRP) of a player is the additional revenue a team receives due to the addition of that player to the team; it is calculated from the impact of the player on the performance of the team and the subsequent impact of team performance on total revenue. He found that on average, the degree of exploitation, measured as the ratio of marginal revenue product to salary, declined each year, from 1.32 in 1986 to 1.01 in 1989. The degree of exploitation, however, was not uniform across players. Not surprisingly, it decreased as players obtained the leverage to bargain. It was highest for players in their first two years, before they were arbitration eligible, fell for players in the two-to-five year category, between arbitration and free agency, and disappeared altogether for players with six or more years of experience. In fact, Zimbalist found that this last group of players was overpaid in all four years, with an average MRP of less than 75% of salary in 1989. No similar study has been done for players before free agency, in part due to the paucity of salary data before this time.
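
A compact way to restate Zimbalist's measure: the exploitation ratio is MRP divided by salary, so a ratio above 1 means the player generates more revenue than he is paid. The figures below are hypothetical and serve only to show how the ratio reads.

# Exploitation ratio = marginal revenue product / salary (Zimbalist's measure).
def exploitation_ratio(mrp, salary):
    return mrp / salary

# Hypothetical players:
print(exploitation_ratio(2_000_000, 1_000_000))  # 2.0  -> paid half of what he generates (exploited)
print(exploitation_ratio(1_010_000, 1_000_000))  # 1.01 -> roughly the 1989 league average
print(exploitation_ratio(  750_000, 1_000_000))  # 0.75 -> overpaid, like the veteran free agents in 1989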

Negotiations under the Reserve Clause

Player contracts have changed dramatically since free agency. Players used to be subject to whatever salary the owner offered, and the only recourse for a player was to hold out for a better salary. This strategy seldom worked, because the owner had great influence over the media and usually was able to turn the public against the player, adding another source of pressure on the player to sign for the terms offered by the team. The pressure of no payday (a payday that, while less than the player's MRP, still exceeded his opportunity cost by a fair amount) was usually sufficient to minimize the length of most holdouts. The owner influenced the media because sports reporters were actually paid by the teams in cash or in kind, traveled with them, and enjoyed a relatively luxurious lifestyle for their chosen occupation, a lifestyle that could be halted by edict of the team at any time. The team controlled media passes and access, and therefore had nearly total control over who covered the team. It was a comfortable lifestyle for a reporter, and spreading owner propaganda on occasion was seldom seen as an unacceptable price to pay.

Recent Concerns

The major labor issue in the game has shifted from player exploitation, the cry until free agency was granted, to competitive imbalance. Today, critics of the salary structure point to its impact on the competitive balance of the league as a way of criticizing rising payrolls. Many fans of the game openly pine for a return to "the good old days," when players played for the love of the game. It should be recognized, however, that the game has always been a business. All that has changed is the amount of money at stake and how it is divided between the employers and their employees.

Suggested Readings

A Club Owner. “The Baseball Trust.” Literary Digest, December 7, 1912.

Burk, Robert F. Much More Than a Game: Players, Owners, and American Baseball since 1921. Chapel Hill: University of North Carolina Press, 2001.

Burk, Robert F. Never Just a Game: Players, Owners, and American Baseball to 1920. Chapel Hill: University of North Carolina Press, 1994.

Dworkin, James B. Owners versus Players: Baseball and Collective Bargaining. Dover, MA: Auburn House, 1981.

Haupert, Michael. Baseball financial database.

Haupert, Michael and Ken Winter. “Pay Ball: Estimating the Profitability of the New York Yankees 1915-37.” Essays in Economic and Business History 21 (2002).

Helyar, John. Lords of the Realm: The Real History of Baseball. New York: Villard Books, 1994.

Korr, Charles. The End of Baseball as We Knew It: The Players Union, 1960-1981. Champaign: University of Illinois Press, 2002.

Kuhn, Bowie. Hardball: The Education of a Baseball Commissioner. New York: Times Books, 1987.

Lehn, Ken. “Property Rights, Risk Sharing, and Player Disability in Major League Baseball.” Journal of Law and Economics 25, no. 2 (October 1982): 273-79.

Lowe, Stephen. The Kid on the Sandlot: Congress and Professional Sports, 1910-1992. Bowling Green: Bowling Green University Press, 1995.

Lowenfish, Lee. “A Tale of Many Cities: The Westward Expansion of Major League Baseball in the 1950s.” Journal of the West 17 (July 1978).

Lowenfish, Lee. “What Were They Really Worth?” The Baseball Research Journal 20 (1991): 81-2.

Lowenfish, Lee. The Imperfect Diamond: A History of Baseball’s Labor Wars. New York: Da Capo Press, 1980.

Miller, Marvin. A Whole Different Ball Game: The Sport and Business of Baseball. New York: Birch Lane Press, 1991.

Noll, Roger G. and Andrew S. Zimbalist, editors. Sports Jobs and Taxes: Economic Impact of Sports Teams and Facilities. Washington, D.C.: Brookings Institution, 1997.

Noll, Roger, editor. Government and the Sports Business. Washington, D.C.: Brookings Institution, 1974.

Okkonen, Mark. The Federal League of 1914-1915: Baseball’s Third Major League. Cleveland: Society of American Baseball Research, 1989.

Orenstein, Joshua B. “The Union Association of 1884: A Glorious Failure.” The Baseball Research Journal 19 (1990): 3-6.

Pearson, Daniel M. Baseball in 1889: Players v Owners. Bowling Green, OH: Bowling Green State University Popular Press, 1993.

Quirk, James. “An Economic Analysis of Team Movements in Professional Sports.” Law and Contemporary Problems 38 (Winter-Spring 1973): 42-66.

Rottenberg, Simon. “The Baseball Players’ Labor Market.” Journal of Political Economy 64, no. 3 (December 1956): 242-60.

Scully, Gerald. The Business of Major League Baseball. Chicago: University of Chicago Press, 1989.

Sherony, Keith, Michael Haupert and Glenn Knowles. “Competitive Balance in Major League Baseball: Back to the Future.” Nine: A Journal of Baseball History & Culture 9, no. 2 (Spring 2001): 225-36.

Sommers, Paul M., editor. Diamonds Are Forever: The Business of Baseball. Washington, D.C.: Brookings Institution, 1992.

Sullivan, Neil J. The Diamond in the Bronx: Yankee Stadium and the Politics of New York. New York: Oxford University Press, 2001.

Sullivan, Neil J. The Diamond Revolution. New York: St. Martin’s Press, 1992.

Sullivan, Neil J. The Dodgers Move West. New York: Oxford University Press, 1987.

Thorn, John and Peter Palmer, editors. Total Baseball. New York: HarperPerennial, 1993.

Voigt, David Q. The League That Failed. Lanham, MD: Scarecrow Press, 1998.

White, G. Edward. Creating the National Pastime: Baseball Transforms Itself, 1903-1953. Princeton: Princeton University Press, 1996.

Wood, Allan. 1918: Babe Ruth and the World Champion Boston Red Sox. New York: Writers Club Press, 2000.

Zimbalist, Andrew. Baseball and Billions. New York: Basic Books, 1992.

Zingg, Paul. “Bitter Victory: The World Series of 1918: A Case Study in Major League Labor-Management Relations.” Nine: A Journal of Baseball History and Social Policy Perspectives 1, no. 2 (Spring 1993): 121-41.

Zweig, Jason. “Wild Pitch: How American Investors Financed the Growth of Baseball.” Friends of Financial History 43 (Summer 1991).

Citation: Haupert, Michael. “The Economic History of Major League Baseball”. EH.Net Encyclopedia, edited by Robert Whaples. December 3, 2007. URL
http://eh.net/encyclopedia/the-economic-history-of-major-league-baseball/

US Banking History, Civil War to World War II

Richard S. Grossman, Wesleyan University

The National Banking Era Begins, 1863

The National Banking Acts of 1863 and 1864

The National Banking era was ushered in by the passage of the National Currency (later renamed the National Banking) Acts of 1863 and 1864. The Acts marked a decisive change in the monetary system, confirmed a quarter-century-old trend in bank chartering arrangements, and also played a role in financing the Civil War.

Provision of a Uniform National Currency

As its original title suggests, one of the main objectives of the legislation was to provide a uniform national currency. Prior to the establishment of the national banking system, the national currency supply consisted of a confusing patchwork of bank notes issued under a variety of rules by banks chartered under different state laws. Notes of sound banks circulated side-by-side with notes of banks in financial trouble, as well as those of banks that had failed (not to mention forgeries). In fact, bank notes frequently traded at a discount, so that a one-dollar note of a smaller, less well-known bank (or, for that matter, of a bank at some distance) would likely have been valued at less than one dollar by someone receiving it in a transaction. The confusion was such as to lead to the publication of magazines that specialized in printing pictures, descriptions, and prices of various bank notes, along with information on whether or not the issuing bank was still in existence.

Under the legislation, newly created national banks were empowered to issue national bank notes backed by a deposit of US Treasury securities with their chartering agency, the Department of the Treasury’s Comptroller of the Currency. The legislation also placed a tax on notes issued by state banks, effectively driving them out of circulation. Bank notes were of uniform design and, in fact, were printed by the government. The amount of bank notes a national bank was allowed to issue depended upon the bank’s capital (which was also regulated by the act) and the amount of bonds it deposited with the Comptroller. The relationship between bank capital, bonds held, and note issue was changed by laws in 1874, 1882, and 1900 (Cagan 1963, James 1976, and Krooss 1969).
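
As a rough illustration of the rule described above, the sketch below computes a bank's allowable note issue as a fraction of the bonds it has deposited, capped in relation to its capital. The fraction and the cap are deliberately left as parameters because, as noted, the statutory relationship changed in 1874, 1882, and 1900; the particular numbers used here are assumptions for illustration only.

# Illustrative only: the exact statutory ratios varied over the period.
def allowable_note_issue(bonds_deposited, capital,
                         bond_fraction=0.90,   # notes per dollar of bonds deposited (assumption)
                         capital_cap=1.00):    # maximum notes relative to capital (assumption)
    return min(bond_fraction * bonds_deposited, capital_cap * capital)

# A hypothetical national bank with $100,000 of capital and $80,000 of
# Treasury bonds on deposit with the Comptroller of the Currency:
print(allowable_note_issue(80_000, 100_000))   # 72,000 under these assumed parameters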

Federal Chartering of Banks

A second element of the Act was the introduction of bank charters issued by the federal government. From the earliest days of the Republic, banking had been considered primarily the province of state governments.[1] Originally, individuals who wished to obtain banking charters had to approach the state legislature, which then decided if the applicant was of sufficient moral standing to warrant a charter and if the region in question needed an additional bank. These decisions may well have been influenced by bribes and political pressure, both from the prospective banker and from established bankers who may have hoped to block the entry of new competitors.

An important shift in state banking practice had begun with the introduction of free banking laws in the 1830s. Beginning with laws passed in Michigan (1837) and New York (1838), free banking laws changed the way banks obtained charters. Rather than apply to the state legislature and receive a decision on a case-by-case basis, individuals could obtain a charter by filling out some paperwork and depositing a prescribed amount of specified bonds with the state authorities. By 1860, over one half of the states had enacted some type of free banking law (Rockoff 1975). By regularizing and removing legislative discretion from chartering decisions, the National Banking Acts spread free banking on a national level.

Financing the Civil War

A third important element of the National Banking Acts was that they helped the Union government pay for the war. Adopted in the midst of the Civil War, the requirement for banks to deposit US bonds with the Comptroller maintained the demand for Union securities and helped finance the war effort.[2]

Development and Competition with State Banks

The National Banking system grew rapidly at first (Table 1). Much of the increase came at the expense of the state-chartered banking systems, which contracted over the same period, largely because they were no longer able to issue notes. The expansion of the new system did not lead to the extinction of the old: the growth of deposit-taking, combined with less stringent capital requirements, convinced many state bankers that they could do without either the ability to issue banknotes or a federal charter, and led to a resurgence of state banking in the 1880s and 1890s. Under the original acts, the minimum capital requirement for national banks was $50,000 for banks in towns with a population of 6000 or less, $100,000 for banks in cities with a population ranging from 6000 to 50,000, and $200,000 for banks in cities with populations exceeding 50,000. By contrast, the minimum capital requirement for a state bank was often as low as $10,000. The difference in capital requirements may have been an important factor in the resurgence of state banking: in 1877 only about one-fifth of state banks had a capital of less than $50,000; by 1899 the proportion was over three-fifths. Recognizing this competition, the Gold Standard Act of 1900 reduced the minimum capital necessary for national banks. It is questionable whether regulatory competition (both between states and between states and the federal government) kept regulators on their toes or encouraged a “race to the bottom,” that is, lower and looser standards.
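
The tiered capital requirement described above can be written as a simple lookup on town population. The sketch below encodes only the original thresholds given in the text; the reduced minimums introduced by the Gold Standard Act of 1900 are not shown.

# Minimum capital for a national bank under the original National Banking Acts,
# as a function of the population of the town or city where it was located.
def national_bank_minimum_capital(population):
    if population <= 6_000:
        return 50_000
    elif population <= 50_000:
        return 100_000
    else:
        return 200_000

print(national_bank_minimum_capital(4_000))    # 50,000
print(national_bank_minimum_capital(25_000))   # 100,000
print(national_bank_minimum_capital(120_000))  # 200,000
# By contrast, the text notes a state bank charter could require as little as $10,000.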

Table 1: Numbers and Assets of National and State Banks, 1863-1913

Year | Number of national banks | Number of state banks | Assets of national banks ($ millions) | Assets of state banks ($ millions)
1863 66 1466 16.8 1185.4
1864 467 1089 252.2 725.9
1865 1294 349 1126.5 165.8
1866 1634 297 1476.3 154.8
1867 1636 272 1494.5 151.9
1868 1640 247 1572.1 154.6
1869 1619 259 1564.1 156.0
1870 1612 325 1565.7 201.5
1871 1723 452 1703.4 259.6
1872 1853 566 1770.8 264.5
1873 1968 277 1851.2 178.9
1874 1983 368 1851.8 237.4
1875 2076 586 1913.2 395.2
1876 2091 671 1825.7 405.9
1877 2078 631 1774.3 506.9
1878 2056 510 1770.4 388.8
1879 2048 648 2019.8 427.6
1880 2076 650 2035.4 481.8
1881 2115 683 2325.8 575.5
1882 2239 704 2344.3 633.8
1883 2417 788 2364.8 724.5
1884 2625 852 2282.5 760.9
1885 2689 1015 2421.8 802.0
1886 2809 891 2474.5 807.0
1887 3014 1471 2636.2 1003.0
1888 3120 1523 2731.4 1055.0
1889 3239 1791 2937.9 1237.3
1890 3484 2250 3061.7 1374.6
1891 3652 2743 3113.4 1442.0
1892 3759 3359 3493.7 1640.0
1893 3807 3807 3213.2 1857.0
1894 3770 3810 3422.0 1782.0
1895 3715 4016 3470.5 1954.0
1896 3689 3968 3353.7 1962.0
1897 3610 4108 3563.4 1981.0
1898 3582 4211 3977.6 2298.0
1899 3583 4451 4708.8 2707.0
1900 3732 4659 4944.1 3090.0
1901 4165 5317 5675.9 3776.0
1902 4535 5814 6008.7 4292.0
1903 4939 6493 6286.9 4790.0
1904 5331 7508 6655.9 5244.0
1905 5668 8477 7327.8 6056.0
1906 6053 9604 7784.2 6636.0
1907 6429 10761 8476.5 7190.0
1908 6824 12062 8714.0 6898.0
1909 6926 12398 9471.7 7407.0
1910 7145 13257 9896.6 7911.0
1911 7277 14115 10383 8412.0
1912 7372 14791 10861.7 9005.0
1913 7473 15526 11036.9 9267.0

Source: U.S. Department of the Treasury. Annual Report of the Comptroller of the Currency (1931), pp. 3, 5. State bank columns include data on state-chartered commercial banks and loan and trust companies.

Capital Requirements and Interest Rates

The relatively high minimum capital requirement for national banks may have contributed to regional interest rate differentials in the post-Civil War era. The period from the Civil War through World War I saw a substantial decline in interregional interest rate differentials. According to Lance Davis (1965), the decline in difference between regional interest rates can be explained by the development and spread of the commercial paper market, which increased the interregional mobility of funds. Richard Sylla (1969) argues that the high minimum capital requirements established by the National Banking Acts represented barriers to entry and therefore led to local monopolies by note-issuing national banks. These local monopolies in capital-short regions led to the persistence of interest rate spreads.[3] (See also James 1976b.)

Bank Failures

Financial crises were a common occurrence in the National Banking era. O.M.W. Sprague (1910) classified the main financial crises during the era as occurring in 1873, 1884, 1890, 1893, and 1907, with those of 1873, 1893, and 1907 being regarded as full-fledged crises and those of 1884 and 1890 as less severe.

Contemporary observers complained of both the persistence and the ill effects of bank failures under the new system.[4] The numbers and assets of failed national and non-national banks during the National Banking era are shown in Table 2. Suspensions (temporary closures of banks unable to meet demand for their liabilities) were even more numerous during this period.

Table 2: Bank Failures, 1865-1913

Year | Number of failed national banks | Number of failed other banks | Assets of failed national banks ($ millions) | Assets of failed other banks ($ millions)
1865 1 5 0.1 0.2
1866 2 5 1.8 1.2
1867 7 3 4.9 0.2
1868 3 7 0.5 0.2
1869 2 6 0.7 0.1
1870 0 1 0.0 0.0
1871 0 7 0.0 2.3
1872 6 10 5.2 2.1
1873 11 33 8.8 4.6
1874 3 40 0.6 4.1
1875 5 14 3.2 9.2
1876 9 37 2.2 7.3
1877 10 63 7.3 13.1
1878 14 70 6.9 26.0
1879 8 20 2.6 5.1
1880 3 10 1.0 1.6
1881 0 9 0.0 0.6
1882 3 19 6.0 2.8
1883 2 27 0.9 2.8
1884 11 54 7.9 12.9
1885 4 32 4.7 3.0
1886 8 13 1.6 1.3
1887 8 19 6.9 2.9
1888 8 17 6.9 2.8
1889 8 15 0.8 1.3
1890 9 30 2.0 10.7
1891 25 44 9.0 7.2
1892 17 27 15.1 2.7
1893 65 261 27.6 54.8
1894 21 71 7.4 8.0
1895 36 115 12.1 11.3
1896 27 78 12.0 10.2
1897 38 122 29.1 17.9
1898 7 53 4.6 4.5
1899 12 26 2.3 7.8
1900 6 32 11.6 7.7
1901 11 56 8.1 6.4
1902 2 43 0.5 7.3
1903 12 26 6.8 2.2
1904 20 102 7.7 24.3
1905 22 57 13.7 7.0
1906 8 37 2.2 6.6
1907 7 34 5.4 13.0
1908 24 132 30.8 177.1
1909 9 60 3.4 15.8
1910 6 28 2.6 14.5
1911 3 56 1.1 14.0
1912 8 55 5.0 7.8
1913 6 40 7.6 6.2

Source: U.S. Department of the Treasury. Annual Report of the Comptroller of the Currency (1931), pp. 6, 8.

The largest number of failures occurred in the years following the financial crisis of 1893. The number and assets of national and non-national bank failures remained high for four years following the crisis, a period which coincided with the free silver agitation of the mid-1890s, before returning to pre-1893 levels. Other crises were also accompanied by an increase in the number and assets of bank failures. The earliest peak during the national banking era accompanied the onset of the crisis of 1873. Failures subsequently fell, but rose again in the trough of the depression that followed the 1873 crisis. The panic of 1884 saw a slight increase in failures, while the financial stringency of 1890 was followed by a more substantial increase. Failures peaked again following several minor panics around the turn of the century and again at the time of the crisis of 1907.

Among the alleged causes of crises during the national banking era were an inelastic money supply, which could not accommodate seasonal and other stresses on the money market, and the pyramiding of reserves. Under the National Banking Acts, a portion of banks’ required reserves could be held in national banks in larger cities (“reserve city banks”). Reserve city banks could, in turn, hold a portion of their required reserves in “central reserve city banks,” national banks in New York, Chicago, and St. Louis. In practice, this led to the build-up of reserve balances in New York City. Increased demands for funds in the interior of the country during the autumn harvest season led to substantial outflows of funds from New York, which contributed to tight money market conditions and, sometimes, to panics (Miron 1986).[5]
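
The pyramiding mechanism can be sketched as a chain of deposits: country banks hold part of their reserves as balances at reserve city banks, which in turn hold part of theirs at central reserve city banks, so a dollar of country-bank deposits ultimately supports a claim on New York. The reserve ratios and pass-through shares below are illustrative assumptions, not the statutory figures.

# Illustrative sketch of reserve pyramiding (ratios are assumptions, not the statutory ones).
country_deposits = 1_000_000
country_reserve_ratio = 0.15        # required reserves for a country bank (assumption)
share_held_in_reserve_city = 0.60   # share of those reserves kept as balances at a reserve city bank (assumption)

reserve_city_balance = country_deposits * country_reserve_ratio * share_held_in_reserve_city

reserve_city_reserve_ratio = 0.25     # required reserves for a reserve city bank (assumption)
share_held_in_central_reserve = 0.50  # share kept as balances at a central reserve city bank (assumption)

new_york_balance = reserve_city_balance * reserve_city_reserve_ratio * share_held_in_central_reserve

print(f"country bank balance held at a reserve city bank: {reserve_city_balance:,.0f}")
print(f"claim ultimately resting on New York: {new_york_balance:,.0f}")
# An autumn demand for cash in the interior unwinds this chain, draining funds from New York.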

Attempted Remedies for Banking Crises

Causes of Bank Failures

Bank failures occur when banks are unable to meet the demands of their creditors (in earlier times these were note holders; later on, they were more often depositors). Banks typically do not hold 100 percent of their liabilities in reserves, instead holding some fraction of their demandable liabilities in reserve: as long as the flows of funds into and out of the bank are more or less in balance, the bank is in little danger of failing. A withdrawal of deposits that exceeds the bank’s reserves, however, can lead to the bank’s temporary suspension (inability to pay) or, if protracted, failure. The surge in withdrawals can have a variety of causes, including depositor concern about the bank’s solvency (ability to pay depositors), as well as worries about other banks’ solvency that lead to a general distrust of all banks.[6]
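
A minimal numerical sketch of the fractional-reserve mechanics just described (all figures hypothetical): the bank is safe as long as withdrawals stay within its reserves, and suspends payment once they do not.

# Hypothetical fractional-reserve bank facing a surge of withdrawals.
deposits = 1_000_000
reserve_ratio = 0.15               # fraction of demandable liabilities held as reserves (assumption)
reserves = deposits * reserve_ratio

withdrawals = 200_000              # depositor withdrawals during a scare (assumption)
if withdrawals <= reserves:
    print(f"demands met; reserves remaining: {reserves - withdrawals:,.0f}")
else:
    print(f"suspension: shortfall of {withdrawals - reserves:,.0f}")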

Clearinghouses

Bankers and policy makers attempted a number of different responses to banking panics during the National Banking era. One method of dealing with panics was for the bankers of a city to pool their resources through the local bankers’ clearinghouse and jointly guarantee the payment of every member bank’s liabilities (see Gorton (1985a, b)).

Deposit Insurance

Another method of coping with panics was deposit insurance. Eight states (Oklahoma, Kansas, Nebraska, Texas, Mississippi, South Dakota, North Dakota, and Washington) adopted deposit insurance systems between 1908 and 1917 (six other states had adopted some form of deposit insurance in the nineteenth century: New York, Vermont, Indiana, Michigan, Ohio, and Iowa). These systems were not particularly successful, in part because they lacked diversification: because these systems operated statewide, when a panic fell full force on a state, the deposit insurance system did not have adequate resources to handle every failure. When the agricultural depression of the 1920s hit, a number of these systems failed (Federal Deposit Insurance Corporation 1998).

Double Liability

Another measure adopted to curtail bank risk-taking, and through risk-taking, bank failures, was double liability (Grossman 2001). Under double liability, shareholders who had invested in banks that failed were liable to lose not only the money they had invested, but could be called on by a bank’s receiver to contribute an additional amount equal to the par value of the shares (hence the term “double liability,” although clearly the loss to the shareholder need not have been double if the par and market values of shares were different). Other states instituted triple liability, where the receiver could call on twice the par value of shares owned. Still others had unlimited liability, while others had single, or regular limited, liability.[7] It was argued that banks with double liability would be more risk averse, since shareholders would be liable for a greater payment if the firm went bankrupt.
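A worked example makes the liability regimes concrete. The par value and shareholding below are hypothetical, and the shares are assumed to have been bought at par.

    # Hypothetical shareholder exposure under different liability regimes.
    par_value = 100                 # assumed par value per share
    shares = 10
    invested = par_value * shares   # assume the shares were bought at par

    single = invested                              # limited liability: lose the investment only
    double = invested + 1 * par_value * shares     # plus a receiver's call of up to par
    triple = invested + 2 * par_value * shares     # plus a call of up to twice par

    print(single, double, triple)   # 1000 2000 3000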

By 1870, multiple (i.e., double, triple, and unlimited) liability was already the rule for state banks in eighteen states, principally in the Midwest, New England, and Middle Atlantic regions, as well as for national banks. By 1900, multiple liability was the law for state banks in thirty-two states. By this time, the main pockets of single liability were in the south and west. By 1930, only four states had single liability.

Double liability appears to have been successful (Grossman 2001), at least during less-than-turbulent times. During the 1890-1930 period, state banks in states where banks were subject to double (or triple, or unlimited) liability typically undertook less risk than their counterparts in single (limited) liability states in normal years. However, in years in which bank failures were quite high, banks in multiple liability states appeared to take more risk than their limited liability counterparts. This may have resulted from the fact that legislators in more crisis-prone states were more likely to have already adopted double liability. Whatever its advantages or disadvantages, the Great Depression spelled the end of double liability: by 1941, virtually every state had repealed double liability for state-chartered banks.

The Crisis of 1907 and Founding of the Federal Reserve

The crisis of 1907, which had been brought under control by a coalition of trust companies and other chartered banks and clearing-house members led by J.P. Morgan, led to a reconsideration of the monetary system of the United States. Congress set up the National Monetary Commission (1908-12), which undertook a massive study of the history of banking and monetary arrangements in the United States and in other economically advanced countries.[8]

The eventual result of this investigation was the Federal Reserve Act (1913), which established the Federal Reserve System as the central bank of the US. Unlike other countries that had one central bank (e.g., Bank of England, Bank of France), the Federal Reserve Act provided for a system of between eight and twelve reserve banks (twelve were eventually established under the act, although during debate over the act, some had called for as many as one reserve bank per state). This provision, like the rejection of the first two attempts at a central bank, resulted, in part, from Americans’ antipathy towards centralized monetary authority. The Federal Reserve was established to manage the monetary affairs of the country, to hold the reserves of banks and to regulate the money supply. At the time of its founding, each of the reserve banks had a high degree of independence. As a result of the crises surrounding the Great Depression, Congress passed the Banking Act of 1935, which, among other things, centralized Federal Reserve power (including the power to engage in open market operations) in a Washington-based Board of Governors (and Federal Open Market Committee), relegating the heads of the individual reserve banks to a more consultative role in the operation of monetary policy.

The Goal of an “Elastic Currency”

The stated goals of the Federal Reserve Act were: “. . . to furnish an elastic currency, to furnish the means of rediscounting commercial paper, to establish a more effective supervision of banking in the United States, and for other purposes.” Furnishing an “elastic currency” was an important goal of the act, since none of the components of the money supply (gold and silver certificates, national bank notes) were able to expand or contract particularly rapidly. The inelasticity of the money supply, along with the seasonal fluctuations in money demand, had led to a number of the panics of the National Banking era. These panic-inducing seasonal fluctuations resulted from the large flows of money out of New York and other money centers to the interior of the country to pay for the newly harvested crops. If monetary conditions were already tight before the drain of funds to the nation’s interior, the autumnal movement of funds could, and did, precipitate panics.[9]

Growth of the Bankers’ Acceptance Market

The act also fostered the growth of the bankers’ acceptance market. Bankers’ acceptances were essentially short-dated IOUs, issued by banks on behalf of clients that were importing (or otherwise purchasing) goods. These acceptances were sent to the seller who could hold on to them until they matured, and receive the face value of the acceptance, or could discount them, that is, receive the face value minus interest charges. By allowing the Federal Reserve to rediscount commercial paper, the act facilitated the growth of this short-term money market (Warburg 1930, Broz 1997, and Federal Reserve Bank of New York 1998). In the 1920s, the various Federal Reserve banks began making large-scale purchases of US Treasury obligations, marking the beginnings of Federal Reserve open market operations.[10]
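The discounting arithmetic described above can be illustrated with a short, hypothetical example; the face value, rate, and 360-day count convention are assumptions for illustration, not figures from the period.

    # Hypothetical discounting of a bankers' acceptance.
    face_value = 10_000.0        # amount payable to the holder at maturity
    discount_rate = 0.05         # assumed annual discount rate
    days_to_maturity = 90

    discount = face_value * discount_rate * days_to_maturity / 360   # assumed 360-day convention
    proceeds = face_value - discount
    print(proceeds)   # 9875.0: the seller receives this now rather than 10000.0 at maturity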

The Federal Reserve and State Banking

The establishment of the Federal Reserve did not end the competition between the state and national banking systems. While national banks were required to be members of the new Federal Reserve System, state banks could also become members of the system on equal terms. Further, the Federal Reserve Act, bolstered by the Act of June 21, 1917, ensured that state banks could become member banks without losing any competitive advantages they might hold over national banks. Depending upon the state, state banking law sometimes gave state banks advantages in the areas of branching,[11] trust operations,[12] interlocking managements, loan and investment powers,[13] safe deposit operations, and the arrangement of mergers.[14] Where state banking laws were especially liberal, banks had an incentive to give up their national bank charter and seek admission to the Federal Reserve System as a state member bank.

McFadden Act

The McFadden Act (1927) addressed some of the competitive inequalities between state and national banks. It gave national banks charters of indeterminate length, allowing them to compete with state banks for trust business. It expanded the range of permissible investments, including real estate investment, and allowed investment in the stock of safe deposit companies. The Act greatly restricted the ability of member banks, whether state or nationally chartered, to open or maintain out-of-town branches.

The Great Depression: Panic and Reform

The Great Depression was the longest, most severe economic downturn in the history of the United States.[15] The banking panics of 1930, 1931, and 1933 were the most severe banking disruption ever to hit the United States, with more than one quarter of all banks closing. Data on the number of bank suspensions during this period is presented in Table 3.

Table 3: Bank Suspensions, 1921-33

Number of Bank Suspensions
All Banks National Banks
1921 505 52
1922 367 49
1923 646 90
1924 775 122
1925 618 118
1926 976 123
1927 669 91
1928 499 57
1929 659 64
1930 1352 161
1931 2294 409
1932 1456 276
1933 5190 1475

Source: Bremer (1935).

Note: 1933 figures include 4507 non-licensed banks (1400 non-licensed national banks). Non-licensed banks consist of banks operating on a restricted basis or not in operation, but not in liquidation or receivership.

The first banking panic erupted in October 1930. According to Friedman and Schwartz (1963, pp. 308-309), it began with failures in Missouri, Indiana, Illinois, Iowa, Arkansas, and North Carolina and quickly spread to other areas of the country. Friedman and Schwartz report that 256 banks with $180 million of deposits failed in November 1930, while 352 banks with over $370 million of deposits failed in the following month (the largest of which was the Bank of United States which failed on December 11 with over $200 million of deposits). The second banking panic began in March of 1931 and continued into the summer.[16] The third and final panic began at the end of 1932 and persisted into March of 1933. During the early months of 1933, a number of states declared banking holidays, allowing banks to close their doors and therefore freeing them from the requirement to redeem deposits. By the time President Franklin Delano Roosevelt was inaugurated on March 4, 1933, state-declared banking holidays were widespread. The following day, the president declared a national banking holiday.

Beginning on March 13, the Secretary of the Treasury began granting licenses to banks to reopen for business.

Federal Deposit Insurance

The crises led to the implementation of several major reforms in banking. Among the most important of these was the introduction of federal deposit insurance under the Banking (Glass-Steagall) Act of 1933. The Act established the Federal Deposit Insurance Corporation, originally as an explicitly temporary program (the FDIC was made permanent by the Banking Act of 1935); insurance became effective January 1, 1934. Member banks of the Federal Reserve (which included all national banks) were required to join the FDIC. Within six months, 14,000 out of 15,348 commercial banks, representing 97 percent of bank deposits, had subscribed to federal deposit insurance (Friedman and Schwartz, 1963, 436-437).[17] Coverage under the initial act was limited to a maximum of $2500 of deposits for each depositor. Table 4 documents the increase in the limit from the act’s inception until 1980, when it reached its current $100,000 level.

Table 4: FDIC Insurance Limit

1934 (January) $2500
1934 (July) $5000
1950 $10,000
1966 $15,000
1969 $20,000
1974 $40,000
1980 $100,000
Source: http://www.fdic.gov/

Additional Provisions of the Glass-Steagall Act

An important goal of the New Deal reforms was to enhance the stability of the banking system. Because the involvement of commercial banks in securities underwriting was seen as having contributed to banking instability, the Glass-Steagall Act of 1933 forced the separation of commercial and investment banking.[18] Additionally, the Acts (1933 for member banks, 1935 for other insured banks) established Regulation Q, which forbade banks from paying interest on demand deposits (i.e., checking accounts) and set limits on the interest rates paid on time deposits. It was argued that paying interest on demand deposits introduced unhealthy competition.

Recent Responses to New Deal Banking Laws

In a sense, contemporary debates on banking policy stem largely from the reforms of the post-Depression era. Although several of the reforms introduced in the wake of the 1931-33 crisis have survived into the twenty-first century, almost all of them have been subject to intense scrutiny in the last two decades. For example, several court decisions, along with the Financial Services Modernization Act (Gramm-Leach-Bliley) of 1999, have blurred the previously strict separation between different financial service industries (particularly, although not limited to, commercial and investment banking).

FSLIC

The Savings and Loan crisis of the 1980s, resulting from a combination of deposit insurance-induced moral hazard and deregulation, led to the dismantling of the Depression-era Federal Savings and Loan Insurance Corporation (FSLIC) and the transfer of Savings and Loan insurance to the Federal Deposit Insurance Corporation.

Further Reading

Bernanke, Ben S. “Nonmonetary Effects of the Financial Crisis in Propagation of the Great Depression.” American Economic Review 73 (1983): 257-76.

Bordo, Michael D., Claudia Goldin, and Eugene N. White, editors. The Defining Moment: The Great Depression and the American Economy in the Twentieth Century. Chicago: University of Chicago Press, 1998.

Bremer, C. D. American Bank Failures. New York: Columbia University Press, 1935.

Broz, J. Lawrence. The International Origins of the Federal Reserve System. Ithaca: Cornell University Press, 1997.

Cagan, Phillip. “The First Fifty Years of the National Banking System: An Historical Appraisal.” In Banking and Monetary Studies, edited by Deane Carson, 15-42. Homewood: Richard D. Irwin, 1963.

Cagan, Phillip. The Determinants and Effects of Changes in the Stock of Money. New York: National Bureau of Economic Research, 1965.

Calomiris, Charles W. and Gary Gorton. “The Origins of Banking Panics: Models, Facts, and Bank Regulation.” In Financial Markets and Financial Crises, edited by R. Glenn Hubbard, 109-73. Chicago: University of Chicago Press, 1991.

Davis, Lance. “The Investment Market, 1870-1914: The Evolution of a National Market.” Journal of Economic History 25 (1965): 355-399.

Dewald, William G. “The National Monetary Commission: A Look Back.” Journal of Money, Credit and Banking 4 (1972): 930-956.

Eichengreen, Barry. “Mortgage Interest Rates in the Populist Era.” American Economic Review 74 (1984): 995-1015.

Eichengreen, Barry. Golden Fetters: The Gold Standard and the Great Depression, 1919-1939, Oxford: Oxford University Press, 1992.

Federal Deposit Insurance Corporation. “A Brief History of Deposit Insurance in the United States.” Washington: FDIC, 1998. http://www.fdic.gov/bank/historical/brief/brhist.pdf

Federal Reserve. The Federal Reserve: Purposes and Functions. Washington: Federal Reserve Board, 1994. http://www.federalreserve.gov/pf/pdf/frspurp.pdf

Federal Reserve Bank of New York. U.S. Monetary Policy and Financial Markets. New York, 1998. http://www.ny.frb.org/pihome/addpub/monpol/chapter2.pdf

Friedman, Milton and Anna J. Schwartz. A Monetary History of the United States, 1867-1960. Princeton: Princeton University Press, 1963.

Goodhart, C.A.E. The New York Money Market and the Finance of Trade, 1900-1913. Cambridge: Harvard University Press, 1969.

Gorton, Gary. “Bank Suspensions of Convertibility.” Journal of Monetary Economics 15 (1985a): 177-193.

Gorton, Gary. “Clearing Houses and the Origin of Central Banking in the United States.” Journal of Economic History 45 (1985b): 277-283.

Grossman, Richard S. “Deposit Insurance, Regulation, and Moral Hazard in the Thrift Industry: Evidence from the 1930s.” American Economic Review 82 (1992): 800-821.

Grossman, Richard S. “The Macroeconomic Consequences of Bank Failures under the National Banking System.” Explorations in Economic History 30 (1993): 294-320.

Grossman, Richard S. “The Shoe That Didn’t Drop: Explaining Banking Stability during the Great Depression.” Journal of Economic History 54, no. 3 (1994): 654-82.

Grossman, Richard S. “Double Liability and Bank Risk-Taking.” Journal of Money, Credit, and Banking 33 (2001): 143-159.

James, John A. “The Conundrum of the Low Issue of National Bank Notes.” Journal of Political Economy 84 (1976a): 359-67.

James, John A. “The Development of the National Money Market, 1893-1911.” Journal of Economic History 36 (1976b): 878-97.

Kent, Raymond P. “Dual Banking between the Two Wars.” In Banking and Monetary Studies, edited by Deane Carson, 43-63. Homewood: Richard D. Irwin, 1963.

Kindleberger, Charles P. Manias, Panics, and Crashes: A History of Financial Crises. New York: Basic Books, 1978.

Krooss, Herman E., editor. Documentary History of Banking and Currency in the United States. New York: Chelsea House Publishers, 1969.

Minsky, Hyman P. Can “It” Happen Again? Essays on Instability and Finance. Armonk, NY: M.E. Sharpe, 1982.

Miron, Jeffrey A. “Financial Panics, the Seasonality of the Nominal Interest Rate, and the Founding of the Fed.” American Economic Review 76 (1986): 125-38.

Mishkin, Frederic S. “Asymmetric Information and Financial Crises: A Historical Perspective.” In Financial Markets and Financial Crises, edited by R. Glenn Hubbard, 69-108. Chicago: University of Chicago Press, 1991.

Rockoff, Hugh. The Free Banking Era: A Reexamination. New York: Arno Press, 1975.

Rockoff, Hugh. “Banking and Finance, 1789-1914.” In The Cambridge Economic History of the United States. Volume 2. The Long Nineteenth Century, edited by Stanley L. Engerman and Robert E. Gallman, 643-84. New York: Cambridge University Press, 2000.

Sprague, O. M. W. History of Crises under the National Banking System. Washington, DC: Government Printing Office, 1910.

Sylla, Richard. “Federal Policy, Banking Market Structure, and Capital Mobilization in the United States, 1863-1913.” Journal of Economic History 29 (1969): 657-686.

Temin, Peter. Did Monetary Forces Cause the Great Depression? New York: Norton, 1976.

Temin, Peter. Lessons from the Great Depression. Cambridge: MIT Press, 1989.

Warburg, Paul M. The Federal Reserve System: Its Origin and Growth: Reflections and Recollections, 2 volumes. New York: Macmillan, 1930.

White, Eugene N. The Regulation and Reform of American Banking, 1900-1929. Princeton: Princeton University Press, 1983.

White, Eugene N. “Before the Glass-Steagall Act: An Analysis of the Investment Banking Activities of National Banks.” Explorations in Economic History 23 (1986): 33-55.

White, Eugene N. “Banking and Finance in the Twentieth Century.” In The Cambridge Economic History of the United States. Volume 3. The Twentieth Century, edited by Stanley L. Engerman and Robert E. Gallman, 743-802. New York: Cambridge University Press, 2000.

Wicker, Elmus. The Banking Panics of the Great Depression. New York: Cambridge University Press, 1996.

Wicker, Elmus. Banking Panics of the Gilded Age. New York: Cambridge University Press, 2000.


[1] The two exceptions were the First and Second Banks of the United States. The First Bank, which was chartered by Congress at the urging of Alexander Hamilton, in 1791, was granted a 20-year charter, which Congress allowed to expire in 1811. The Second Bank was chartered just five years after the expiration of the first, but Andrew Jackson vetoed the charter renewal in 1832 and the bank ceased to operate with a national charter when its 20-year charter expired in 1836. The US remained without a central bank until the founding of the Federal Reserve in 1914. Even then, the Fed was not founded as one central bank, but as a collection of twelve regional reserve banks. American suspicion of concentrated financial power has not been limited to central banking: in contrast to the rest of the industrialized world, twentieth century US banking was characterized by large numbers of comparatively small, unbranched banks.

[2] The relationship between the enactment of the National Bank Acts and the Civil War was perhaps even deeper. Hugh Rockoff suggested the following to me: “There were western states where the banking system was in trouble because the note issue was based on southern bonds, and people in those states were looking to the national government to do something. There were also conservative politicians who were afraid that they wouldn’t be able to get rid of the greenback (a perfectly uniform [government issued wartime] currency) if there wasn’t a private alternative that also promised uniformity…. It has even been claimed that by setting up a national system, banks in the South were undermined — as a war measure.”

[3] Eichengreen (1984) argues that regional mortgage interest rate differentials resulted from differences in risk.

[4] There is some debate over the direction of causality between banking crises and economic downturns. According to monetarists Friedman and Schwartz (1963) and Cagan (1965), the monetary contraction associated with bank failures magnifies real economic downturns. Bernanke (1983) argues that bank failures raise the cost of credit intermediation and therefore have an effect on the real economy through non-monetary channels. An alternative view, articulated by Sprague (1910), Fisher (1933), Temin (1976), Minsky (1982), and Kindleberger (1978), maintains that bank failures and monetary contraction are primarily a consequence, rather than a cause, of sluggishness in the real economy which originates in non-monetary sources. See Grossman (1993) for a summary of this literature.

[5] See Calomiris and Gorton (1991) for an alternative view.

[6] See Mishkin (1991) on asymmetric information and financial crises.

[7] Still other states had “voluntary liability,” whereby each bank could choose single or double liability.

[8] See Dewald (1972) on the National Monetary Commission.

[9] Miron (1986) demonstrates the decline in the seasonality of interest rates following the founding of the Fed.

[10] Other Fed activities included check clearing.

[11] According to Kent (1963, p. 48), starting in 1922 the Comptroller allowed national banks to open “offices” to receive deposits, cash checks, and receive applications for loans in head office cities of states that allowed state-chartered banks to establish branches.

[12] Prior to 1922, national bank charters had lives of only 20 years. This severely limited their ability to compete with state banks in the trust business. (Kent 1963, p. 49)

[13] National banks were subject to more severe limitations on lending than most state banks. These restrictions included a limit on the amount that could be loaned to one borrower as well as limitations on real estate lending. (Kent 1963, pp. 50-51)

[14] Although the Bank Consolidation Act of 1918 provided for the merger of two or more national banks, it made no provision for the merger of a state and national bank. Kent (1963, p. 51).

[15] References touching on banking and financial aspects of the Great Depression in the United States include Friedman and Schwartz (1963), Temin (1976, 1989), Kindleberger (1978), Bernanke (1983), Eichengreen (1992), and Bordo, Goldin, and White (1998).

[16] During this period, the failures of the Credit-Anstalt, Austria’s largest bank, and the Darmstädter und Nationalbank (Danat Bank), a large German bank, marked the beginning of the financial crisis in Europe. The European financial crisis led to Britain’s suspension of the gold standard in September 1931. See Grossman (1994) on the European banking crisis of 1931. The best source on the gold standard in the interwar years is Eichengreen (1992).

[17] Interestingly, federal deposit insurance was made optional for savings and loan institutions at about the same time. The majority of S&L’s did not elect to adopt deposit insurance until after 1950. See Grossman (1992).

[18] See, however, White (1986) for an alternative view.

Citation: Grossman, Richard. “US Banking History, Civil War to World War II”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL
http://eh.net/encyclopedia/us-banking-history-civil-war-to-world-war-ii/

The Economic History of Australia from 1788: An Introduction

Bernard Attard, University of Leicester

Introduction

The economic benefits of establishing a British colony in Australia in 1788 were not immediately obvious. The Government’s motives have been debated but the settlement’s early character and prospects were dominated by its original function as a jail. Colonization nevertheless began a radical change in the pattern of human activity and resource use in that part of the world, and by the 1890s a highly successful settler economy had been established on the basis of a favorable climate in large parts of the southeast (including Tasmania) and the southwest corner; the suitability of land for European pastoralism and agriculture; an abundance of mineral wealth; and the ease with which these resources were appropriated from the indigenous population. This article will focus on the creation of a colonial economy from 1788 and its structural change during the twentieth century. To simplify, it will divide Australian economic history into four periods, two of which overlap. These are defined by the foundation of the ‘bridgehead economy’ before 1820; the growth of a colonial economy between 1820 and 1930; the rise of manufacturing and the protectionist state between 1891 and 1973; and the experience of liberalization and structural change since 1973. The article will conclude by suggesting briefly some of the similarities between Australia and other comparable settler economies, as well as the ways in which it has differed from them.

The Bridgehead Economy, 1788-1820

The description ‘bridgehead economy’ was used by one of Australia’s foremost economic historians, N. G. Butlin to refer to the earliest decades of British occupation when the colony was essentially a penal institution. The main settlements were at Port Jackson (modern Sydney, 1788) in New South Wales and Hobart (1804) in what was then Van Diemen’s Land (modern Tasmania). The colony barely survived its first years and was largely neglected for much of the following quarter-century while the British government was preoccupied by the war with France. An important beginning was nevertheless made in the creation of a private economy to support the penal regime. Above all, agriculture was established on the basis of land grants to senior officials and emancipated convicts, and limited freedoms were allowed to convicts to supply a range of goods and services. Although economic life depended heavily on the government Commissariat as a supplier of goods, money and foreign exchange, individual rights in property and labor were recognized, and private markets for both started to function. In 1808, the recall of the New South Wales Corps, whose officers had benefited most from access to land and imported goods (thus hopelessly entangling public and private interests), coupled with the appointment of a new governor, Lachlan Macquarie, in the following year, brought about a greater separation of the private economy from the activities and interests of the colonial government. With a significant increase in the numbers transported after 1810, New South Wales’ future became more secure. As laborers, craftsmen, clerks and tradesmen, many convicts possessed the skills required in the new settlements. As their terms expired, they also added permanently to the free population. Over time, this would inevitably change the colony’s character.

Natural Resources and the Colonial Economy, 1820-1930

Pastoral and Rural Expansion

For Butlin, the developments around 1810 were a turning point in the creation of a ‘colonial’ economy. Many historians have preferred to view those during the 1820s as more significant. From that decade, economic growth was based increasingly upon the production of fine wool and other rural commodities for markets in Britain and the industrializing economies of northwestern Europe. This growth was interrupted by two major depressions during the 1840s and 1890s and stimulated in complex ways by the rich gold discoveries in Victoria in 1851, but the underlying dynamics were essentially unchanged. At different times, the extraction of natural resources, whether maritime before the 1840s or later gold and other minerals, was also important. Agriculture, local manufacturing and construction industries expanded to meet the immediate needs of growing populations, which concentrated increasingly in the main urban centers. The colonial economy’s structure, growth of population and significance of urbanization are illustrated in tables 1 and 2. The opportunities for large profits in pastoralism and mining attracted considerable amounts of British capital, while expansion generally was supported by enormous government outlays for transport, communication and urban infrastructures, which also depended heavily on British finance. As the economy expanded, large-scale immigration became necessary to satisfy the growing demand for workers, especially after the end of convict transportation to the eastern mainland in 1840. The costs of immigration were subsidized by colonial governments, with settlers coming predominantly from the United Kingdom and bringing skills that contributed enormously to the economy’s growth. All this provided the foundation for the establishment of free colonial societies. In turn, the institutions associated with these — including the rule of law, secure property rights, and stable and democratic political systems — created conditions that, on balance, fostered growth. In addition to New South Wales, four other British colonies were established on the mainland: Western Australia (1829), South Australia (1836), Victoria (1851) and Queensland (1859). Van Diemen’s Land (Tasmania after 1856) became a separate colony in 1825. From the 1850s, these colonies acquired responsible government. In 1901, they federated, creating the Commonwealth of Australia.

Table 1
The Colonial Economy: Percentage Shares of GDP, 1891 Prices, 1861-1911

Pastoral Other rural Mining Manuf. Building Services Rent
1861 9.3 13.0 17.5 14.2 8.4 28.8 8.6
1891 16.1 12.4 6.7 16.6 8.5 29.2 10.3
1911 14.8 16.7 9.0 17.1 5.3 28.7 8.3

Source: Haig (2001), Table A1. Totals do not sum to 100 because of rounding.

Table 2
Colonial Populations (thousands), 1851-1911

Australia Colonies Cities
NSW Victoria Sydney Melbourne
1851 257 100 46 54 29
1861 669 198 328 96 125
1891 1,704 608 598 400 473
1911 2,313 858 656 648 593

Source: McCarty (1974), p. 21; Vamplew (1987), POP 26-34.

The process of colonial growth began with two related developments. First, in 1820, Macquarie responded to land pressure in the districts immediately surrounding Sydney by relaxing restrictions on settlement. Soon the outward movement of herdsmen seeking new pastures became uncontrollable. From the 1820s, the British authorities also encouraged private enterprise by the wholesale assignment of convicts to private employers and easy access to land. In 1831, the principles of systematic colonization popularized by Edward Gibbon Wakefield (1796-1862) were put into practice in New South Wales with the substitution of land sales for grants in order to finance immigration. This, however, did not affect the continued outward movement of pastoralists who simply occupied land where they could find it beyond the official limits of settlement. By 1840, they had claimed a vast swathe of territory two hundred miles in depth running from Moreton Bay in the north (the site of modern Brisbane) through the Port Phillip District (the future colony of Victoria, whose capital Melbourne was marked out in 1837) to Adelaide in South Australia. The absence of any legal title meant that these intruders became known as ‘squatters’ and the terms of their tenure were not finally settled until 1846 after a prolonged political struggle with the Governor of New South Wales, Sir George Gipps.

The impact of the original penal settlements on the indigenous population had been enormous. The consequences of squatting after 1820 were equally devastating as the land and natural resources upon which indigenous hunter-gathering activities and environmental management depended were appropriated on a massive scale. Aboriginal populations collapsed in the face of disease, violence and forced removal until they survived only on the margins of the new pastoral economy, on government reserves, or in the arid parts of the continent least touched by white settlement. The process would be repeated again in northern Australia during the second half of the century.

For the colonists this could happen because Australia was considered terra nullius, vacant land freely available for occupation and exploitation. The encouragement of private enterprise, the reception of Wakefieldian ideas, and the wholesale spread of white settlement were all part of a profound transformation in official and private perceptions of Australia’s prospects and economic value as a British colony. Millennia of fire-stick management to assist hunter-gathering had created inland grasslands in the southeast that were ideally suited to the production of fine wool. Both the physical environment and the official incentives just described raised expectations of considerable profits to be made in pastoral enterprise and attracted a growing stream of British capital in the form of organizations like the Australian Agricultural Company (1824); new corporate settlements in Western Australia (1829) and South Australia (1836); and, from the 1830s, British banks and mortgage companies formed to operate in the colonies. By the 1830s, wool had overtaken whale oil as the colony’s most important export, and by 1850 New South Wales had displaced Germany as the main overseas supplier to British industry (see table 3). Allowing for the colonial economy’s growing complexity, the cycle of growth based upon land settlement, exports and British capital would be repeated twice. The first pastoral boom ended in a depression which was at its worst during 1842-43. Although output continued to grow during the 1840s, the best land had been occupied in the absence of substantial investment in fencing and water supplies. Without further geographical expansion, opportunities for high profits were reduced and the flow of British capital dried up, contributing to a wider downturn caused by drought and mercantile failure.

Table 3
Imports of Wool into Britain (thousands of bales), 1830-50

German Australian
1830 74.5 8.0
1840 63.3 41.0
1850 30.5 137.2

Source: Sinclair (1976), p. 46

When pastoral growth revived during the 1860s, borrowed funds were used to fence properties and secure access to water. This in turn allowed a further extension of pastoral production into the more environmentally fragile semi-arid interior districts of New South Wales, particularly during the 1880s. As the mobs of sheep moved further inland, colonial governments increased the scale of their railway construction programs, some competing to capture the freight to ports. Technical innovation and government sponsorship of land settlement brought greater diversity to the rural economy (see table 4). Exports of South Australian wheat started in the 1870s. The development of drought resistant grain varieties from the turn of the century led to an enormous expansion of sown acreage in both the southeast and southwest. From the 1880s, sugar production increased in Queensland, although mainly for the domestic market. From the 1890s, refrigeration made it possible to export meat, dairy products and fruit.

Table 4
Australian Exports (percentages of total value of exports), 1881-1928/29

Wool Minerals Wheat, flour Butter Meat Fruit
1881-90 54.1 27.2 5.3 0.1 1.2 0.2
1891-1900 43.5 33.1 2.9 2.4 4.1 0.3
1901-13 34.3 35.4 9.7 4.1 5.1 0.5
1920/21-1928/29 42.9 8.8 20.5 5.6 4.6 2.2

Source: Sinclair (1976), p. 166

Gold and Its Consequences

Alongside rural growth and diversification, the remarkable gold discoveries in central Victoria in 1851 brought increased complexity to the process of economic development. The news sparked an immediate surge of gold seekers into the colony, which was soon reinforced by a flood of overseas migrants. Until the 1870s, gold displaced wool as Australia’s most valuable export. Rural industries either expanded output (wheat in South Australia) or, in the case of pastoralists, switched production to meat and tallow, to supply a much larger domestic market. Minerals had been extracted since earliest settlement and, while yields on the Victorian gold fields soon declined, rich mineral deposits continued to be found. During the 1880s alone these included silver, lead and zinc at Broken Hill in New South Wales; copper at Mount Lyell in Tasmania; and gold at Charters Towers and Mount Morgan in Queensland. From 1893, what eventually became the richest goldfields in Australia were discovered at Coolgardie in Western Australia. The mining industry’s overall contribution to output and exports is illustrated in tables 1 and 4.

In Victoria, the deposits of easily extracted alluvial gold were soon exhausted and mining was taken over by companies that could command the financial and organizational resources needed to work the deep lodes. But the enormous permanent addition to the colonial population caused by the gold rush had profound effects throughout eastern Australia, dramatically accelerating the growth of the local market and workforce, and deeply disturbing the social balance that had emerged during the decade before. Between 1851 and 1861, the Australian population more than doubled. In Victoria it increased sevenfold; Melbourne outgrew Sydney, Chicago and San Francisco (see table 2). Significantly enlarged populations required social infrastructure, political representation, employment and land; and the new colonial legislatures were compelled to respond. The way this was played out varied between colonies but the common outcomes were the introduction of manhood suffrage, access to land through ‘free selection’ of small holdings, and, in the Victorian case, the introduction of a protectionist tariff in 1865. The particular age structure of the migrants of the 1850s also had long-term effects on the building cycle, notably in Victoria. The demand for housing accelerated during the 1880s, as the children of the gold generation matured and established their own households. With pastoral expansion and public investment also nearing their peaks, the colony experienced a speculative boom which added to the imbalances already being caused by falling export prices and rising overseas debt. The boom ended with the wholesale collapse of building companies, mortgage banks and other financial institutions during 1891-92 and the stoppage of much of the banking system during 1893.

The depression of the 1890s was worst in Victoria. Its impact on employment was softened by the Western Australian gold discoveries, which drew population away, but the colonial economy had grown to such an extent since the 1850s that the stimulus provided by the earlier gold finds could not be repeated. Severe drought in eastern Australia from the mid-1890s until 1903 caused the pastoral industry to contract. Yet, as we have seen, technological innovation also created opportunities for other rural producers, who were now heavily supported by government with little direct involvement by foreign investors. The final phase of rural expansion, with its associated public investment in rural (and increasingly urban) infrastructure continued until the end of the 1920s. Yields declined, however, as farmers moved onto the most marginal land. The terms of trade also deteriorated with the oversupply of several commodities in world markets after the First World War. As a result, the burden of servicing foreign debt rose once again. Australia’s position as a capital importer and exporter of natural resources meant that the Great Depression arrived early. From late 1929, the closure of overseas capital markets and collapse of export prices forced the Federal Government to take drastic measures to protect the balance of payments. The falls in investment and income transmitted the contraction to the rest of the economy. By 1932, average monthly unemployment amongst trade union members was over 22 percent. Although natural resource industries continued to have enduring importance as earners of foreign exchange, the Depression finally ended the long period in which land settlement and technical innovation had together provided a secure foundation for economic growth.

Manufacturing and the Protected Economy, 1891-1973

The ‘Australian Settlement’

There is a considerable chronological overlap between the previous section, which surveyed the growth of a colonial economy during the nineteenth century based on the exploitation of natural resources, and this one because it is a convenient way of approaching the two most important developments in Australian economic history between Federation and the 1970s: the enormous increase in government regulation after 1901 and, closely linked to this, the expansion of domestic manufacturing, which from the Second World War became the most dynamic part of the Australian economy.

The creation of the Commonwealth of Australia on 1 January 1901 broadened the opportunities for public intervention in private markets. The new Federal Government was given clearly-defined but limited powers over obviously ‘national’ matters like customs duties. The rest, including many affecting economic development and social welfare, remained with the states. The most immediate economic consequence was the abolition of inter-colonial tariffs and the establishment of a single Australian market. But the Commonwealth also soon set about transferring to the national level several institutions that the different colonies had experimented with during the 1890s. These included arrangements for the compulsory arbitration of industrial disputes by government tribunals, which also had the power to fix wages, and a discriminatory ‘white Australia’ immigration policy designed to exclude non-Europeans from the labor market. Both were partly responses to organized labor’s electoral success during the 1890s. Urban business and professional interests had always been represented in colonial legislatures; during the 1910s, rural producers also formed their own political parties. Subsequently, state and federal governments were typically formed by either the Australian Labor Party or coalitions of urban conservatives and the Country Party. The constituencies they each represented were thus able to influence the regulatory structure to protect themselves against the full impact of market outcomes, whether in the form of import competition, volatile commodity prices or uncertain employment conditions. The institutional arrangements they created have been described as the ‘Australian settlement’ because they balanced competing producer interests and arguably provided a stable framework for economic development until the 1970s, despite the inevitable costs.

The Growth of Manufacturing

An important part of the ‘Australian settlement’ was the imposition of a uniform federal tariff and its eventual elaboration into a system of ‘protection all round’. The original intended beneficiaries were manufacturers and their employees; indeed, when the first protectionist tariff was introduced in 1907, its operation was linked to the requirement that employers pay their workers ‘fair and reasonable wages’. Manufacturing’s actual contribution to economic growth before Federation has been controversial. The population influx of the 1850s widened opportunities for import-substitution but the best evidence suggests that manufacturing grew slowly as the industrial workforce increased (see table 1). Production was small-scale and confined largely to the processing of rural products and raw materials; assembly and repair-work; or the manufacture of goods for immediate consumption (e.g. soap and candle-making, brewing and distilling). Clothing and textile output was limited to a few lines. For all manufacturing, growth was restrained by the market’s small size and the limited opportunities for technical change it afforded.

After Federation, production was stimulated by several factors: rural expansion, the increasing use of agricultural machinery and refrigeration equipment, and the growing propensity of farm incomes to be spent locally. The removal of inter-colonial tariffs may also have helped. The statistical evidence indicates that between 1901 and the outbreak of the First World War manufacturing grew faster than the economy as a whole, while output per worker increased. But manufacturers also aspired mainly to supply the domestic market and expended increasing energy on retaining privileged access. Tariffs rose considerably between the two world wars. Some sectors became more capital intensive, particularly with the establishment of a local steel industry, the beginnings of automobile manufacture, and the greater use of electricity. But, except during the first half of the 1920s, there was little increase in labor productivity, and the inter-war expansion of textile manufacturing reflected the heavy bias towards import substitution. Not until the Second World War and after did manufacturing growth accelerate and extend to those sectors most characteristic of an advanced industrial economy (table 5). Amongst these were automobiles, chemicals, electrical and electronic equipment, and iron-and-steel. Growth was sustained during the 1950s by similar factors to those operating in other countries during the ‘long boom’, including a growing stream of American direct investment, access to new and better technology, and stable conditions of full employment.

Table 5
Manufacturing and the Australian Economy, 1913-1949

1938-39 prices
Manufacturing share of GDP % Manufacturing annual rate of growth % GDP, annual rate of growth %
1913/14 21.9
1928/29 23.6 2.6 2.1
1948/49 29.8 3.4 2.2

Calculated from Haig (2001), Table A2. Rates of change are average annual changes since the previous year shown in the first column.

Manufacturing peaked in the mid-1960s at about 28 percent of national output (measured in 1968-69 prices) but natural resource industries remained the most important suppliers of exports. Since the 1920s, over-supply in world markets and the need to compensate farmers for manufacturing protection had meant that virtually all rural industries, with the exception of wool, had been drawn into a complicated system of subsidies, price controls and market interventions at both federal and state levels. The post-war boom in the world economy increased demand for commodities, benefiting rural producers but also creating new opportunities for Australian miners. Most important of all, the first surge of breakneck growth in East Asia opened a vast new market for iron ore, coal and other mining products. Britain’s significance as a trading partner had declined markedly since the 1950s. By the end of the 1960s, Japan had overtaken it as Australia’s largest customer, while the United States was now the main provider of imports.

The mining bonanza contributed to the boom conditions experienced generally after 1950. The Federal Government played its part by using the full range of macroeconomic policies that were also increasingly familiar in similar western countries to secure stability and full employment. It encouraged high immigration, relaxing the entry criteria to allow in large numbers of southern Europeans, who added directly to the workforce, but also brought knowledge and experience. With state governments, the Commonwealth increased expenditure on education significantly, effectively entering the field for the first time after 1945. Access to secondary education was widened with the abandonment of fees in government schools and federal finance secured an enormous expansion of university places, especially after 1960. Some weaknesses remained. Enrolment rates after primary school were below those in many industrial countries and funding for technical education was poor. Despite this, the Australian population’s rising levels of education and skill continued to be important additional sources of growth. Finally, although government advisers expressed misgivings, industry policy remained determinedly interventionist. While state governments competed to attract manufacturing investment with tax and other incentives, by the 1960s protection had reached its highest level, with Australia playing virtually no part in the General Agreement on Tariffs and Trade (GATT), despite being an original signatory. The effects of rising tariffs since 1900 were evident in the considerable decline in Australia’s openness to trade (Table 6). Yet, as the post-war boom approached its end, the country still relied upon commodity exports and foreign investment to purchase the manufactures it was unable to produce itself. The impossibility of sustaining growth in this way was already becoming clear, even though the full implications would only be felt during the decades to come.

Table 6
Trade (Exports Plus Imports)
as a Share of GDP, Current Prices, %

1900/1 44.9
1928/29 36.9
1938/38 32.7
1964/65 33.3
1972/73 29.5

Calculated from Vamplew (1987), ANA 119-129.

Liberalization and Structural Change, 1973-2005

From the beginning of the 1970s, instability in the world economy and weakness at home ended Australia’s experience of the post-war boom. During the following decades, manufacturing’s share in output (table 7) and employment fell, while the long-term relative decline of commodity prices meant that natural resources could no longer be relied on to cover the cost of imports, let alone the long-standing deficits in payments for services, migrant remittances and interest on foreign debt. Until the early 1990s, Australia also suffered from persistent inflation and rising unemployment (which remained permanently higher, see chart 1). As a consequence, per capita incomes fluctuated during the 1970s, and the economy contracted in absolute terms during 1982-83 and 1990-91.

Even before the 1970s, new sources of growth and rising living standards had been needed, but the opportunities for economic change were restricted by the elaborate regulatory structure that had evolved since Federation. During that decade itself, policy and outlook were essentially defensive and backward looking, despite calls for reform and some willingness to alter the tariff. Governments sought to protect employment in established industries, while dependence on mineral exports actually increased as a result of the commodity booms at the decade’s beginning and end. By the 1980s, however, it was clear that the country’s existing institutions were failing and fundamental reform was required.

Table 7
The Australian Economy, 1974-2004

A. Percentage shares of value-added, constant prices

1974 1984 1994 2002
Agriculture 4.4 4.3 3.0 2.7
Manufacturing 18.1 15.2 13.3 11.8
Other industry, inc. mining 14.2 14.0 14.6 14.4
Services 63.4 66.4 69.1 71.1

B. Per capita GDP, annual average rate of growth %, constant prices

1973-84 1.2
1984-94 1.7
1994-2004 2.5

Calculated from World Bank, World Development Indicators (Sept. 2005).

Figure 1
Unemployment, 1971-2005, percent

Source: Reserve Bank of Australia (1988); Reserve Bank of Australia, G07Hist.xls. Survey data at August. The method of data collection changed in 1978.

The catalyst was the resumption of the relative fall of commodity prices since the Second World War, which meant that the cost of purchasing manufactured goods inexorably rose for primary producers. The decline had been temporarily reversed by the oil shocks of the 1970s but, from the 1980/81 financial year until the decade’s end, the value of Australia’s merchandise imports exceeded that of merchandise exports in every year but two. The overall deficit on current account measured as a proportion of GDP also became permanently higher, averaging around 4.7 percent. During the 1930s, deflation had been followed by the further closing of the Australian economy. There was no longer much scope for this. Manufacturing had stagnated since the 1960s, suffering especially from the inflation of wage and other costs during the 1970s. It was particularly badly affected by the recession of 1982-83, when unemployment rose to almost ten percent, its highest level since the Great Depression. In 1983, a new federal Labor Government led by Bob Hawke sought to engineer a recovery through an ‘Accord’ with the trade union movement which aimed at creating employment by holding down real wages. But under Hawke and his Treasurer, Paul Keating — who warned colorfully that otherwise the country risked becoming a ‘banana republic’ — Labor also started to introduce broader reforms to increase the efficiency of Australian firms by improving their access to foreign finance and exposing them to greater competition. Costs would fall and exports of more profitable manufactures increase, reducing the economy’s dependence on commodities. During the 1980s and 1990s, the reforms deepened and widened, extending to state governments and continuing with the election of a conservative Liberal-National Party government under John Howard in 1996, as each act of deregulation invited further measures to consolidate them and increase their effectiveness. Key reforms included the floating of the Australian dollar and the deregulation of the financial system; the progressive removal of protection of most manufacturing and agriculture; the dismantling of the centralized system of wage-fixing; taxation reform; and the promotion of greater competition and better resource use through privatization and the restructuring of publicly-owned corporations, the elimination of government monopolies, and the deregulation of sectors like transport and telecommunications. In contrast with the 1930s, the prospects of further domestic reform were improved by an increasingly favorable international climate. Australia contributed by joining other nations in the Cairns Group to negotiate reductions of agricultural protection during the Uruguay round of GATT negotiations and by promoting regional liberalization through the Asia Pacific Economic Cooperation (APEC) forum.

Table 8
Exports and Openness, 1983-2004

Shares of total exports, % (Rural, Resource, Manuf. and Other are goods exports); final column: exports plus imports as a share of GDP, %
Rural Resource Manuf. Other Services Exports+imports/GDP
1983 30 34 9 3 24 26
1989 23 37 11 5 24 27
1999 20 34 17 4 24 37
2004 18 33 19 6 23 39

Calculated from: Reserve Bank of Australia, G10Hist.xls and H03Hist.xls; World Bank, World Development Indicators (Sept. 2005). Chain volume measures, except shares of GDP, 1983, which are at current prices.

The extent to which institutional reform had successfully brought about long-term structural change was still not clear at the end of the century. Recovery from the 1982-83 recession was based upon a strong revival of employment. By contrast, the uninterrupted growth experienced since 1992 arose from increases in the combined productivity of workers and capital. If this persisted, it was a historic change in the sources of growth from reliance on the accumulation of capital and the increase of the workforce to improvements in the efficiency of both. From the 1990s, the Australian economy also became more open (table 8). Manufactured goods increased their share of exports, while rural products continued to decline. Yet, although growth was more broadly-based, rapid and sustained (table 7), the country continued to experience large trade and current account deficits, which were augmented by the considerable increase of foreign debt after financial deregulation during the 1980s. Unemployment also failed to return to its pre-1974 level of around 2 percent, although much of the permanent rise occurred during the mid to late 1970s. In 2005, it remained 5 percent (Figure 1). Institutional reform clearly contributed to these changes in economic structure and performance but they were also influenced by other factors, including falling transport costs, the communications and information revolutions, the greater openness of the international economy, and the remarkable burst of economic growth during the century’s final decades in southeast and east Asia, above all China. Reform was also complemented by policies to provide the skills needed in a technologically-sophisticated, increasingly service-oriented economy. Retention rates in the last years of secondary education doubled during the 1980s, followed by a sharp increase of enrolments in technical colleges and universities. By 2002, total expenditure on education as a proportion of national income had caught up with the average of member countries of the OECD (Table 9). Shortages were nevertheless beginning to be experienced in the engineering and other skilled trades, raising questions about some priorities and the diminishing relative financial contribution of government to tertiary education.

Table 9
Tertiary Enrolments and Education Expenditure, 2002

                   Tertiary enrolments,      Education expenditure
                   gross, percent            as a proportion of GDP, percent
Australia                63.22                        6.0
OECD                     61.68                        5.8
United States            70.67                        7.2

Source: World Bank, World Development Indicators (Sept. 2005); OECD (2005). Gross enrolments are total enrolments, regardless of age, as a proportion of the population in the relevant official age group. OECD enrolments are for fifteen high-income members only.

Summing Up: The Australian Economy in a Wider Context

Virtually since the beginning of European occupation, the Australian economy had provided the original British colonizers, generations of migrants, and the descendants of both with a remarkably high standard of living. Towards the end of the nineteenth century, this was by all measures the highest in the world (see table 10). After 1900, national income per member of the population slipped behind that of several countries, but continued to compare favorably with most. In 2004, Australia was ranked behind only Norway and Sweden in the United Nations’ Human Development Index. Economic historians have differed over the sources of growth that made this possible. Butlin emphasized the significance of local factors like the unusually high rate of urbanization and the expansion of domestic manufacturing. In important respects, however, Australia was subject to the same forces as other European settler societies in New Zealand and Latin America, and its development bore striking similarities to theirs. From the 1820s, its economy grew as one frontier of an expanding western capitalism. With its close institutional ties to, and complementarities with, the most dynamic parts of the world economy, it drew capital and migrants from them, supplied them with commodities, and shared the benefits of their growth. Like other settler societies, it sought population growth as an end in itself and, from the turn of the nineteenth century, aspired to the creation of a national manufacturing base. Finally, when openness to the world economy appeared to threaten growth and living standards, governments intervened to regulate and protect with broader social objectives in mind. But there were also striking contrasts with other settler economies, notably those in Latin America like Argentina, with which Australia has frequently been compared. In particular, Australia responded to successive challenges to growth by finding new opportunities for wealth creation with a minimum of political disturbance, social conflict or economic instability, while sharing a rising national income as widely as possible.

Table 10
Per capita GDP in Australia, United States and Argentina
(1990 international dollars)

          Australia     United States     Argentina
1870        3,641           2,457           1,311
1890        4,433           3,396           2,152
1950        7,493           9,561           4,987
1998       20,390          27,331           9,219

Sources: Australia: GDP from Haig (2001) as converted in Maddison (2003); all other data from Maddison (1995) and (2001).

From the mid-twentieth century, Australia’s experience also resembled that of many advanced western countries. This included the post-war willingness to use macroeconomic policy to maintain growth and full employment; and, after the 1970s, the abandonment of much government intervention in private markets while at the same time retaining strong social services and seeking to improve education and training. Australia also experienced a similar relative decline of manufacturing, permanent rise of unemployment, and transition to a more service-based economy typical of high income countries. By the beginning of the new millennium, services accounted for over 70 percent of national income (table 7). Australia remained vulnerable as an exporter of commodities and importer of capital but its endowment of natural resources and the skills of its population were also creating opportunities. The country was again favorably positioned to take advantage of growth in the most dynamic parts of the world economy, particularly China. With the final abandonment of the White Australia policy during the 1970s, it had also started to integrate more closely with its region. This was further evidence of the capacity to change that allowed Australians to face the future with confidence.

References:

Anderson, Kym. “Australia in the International Economy.” In Reshaping Australia’s Economy: Growth with Equity and Sustainability, edited by John Nieuwenhuysen, Peter Lloyd and Margaret Mead, 33-49. Cambridge: Cambridge University Press, 2001.

Blainey, Geoffrey. The Rush that Never Ended: A History of Australian Mining, fourth edition. Melbourne: Melbourne University Press, 1993.

Borland, Jeff. “Unemployment.” In Reshaping Australia’s Economy: Growth with Equity and Sustainable Development, edited by John Nieuwenhuysen, Peter Lloyd and Margaret Mead, 207-228. Cambridge: Cambridge University Press, 2001.

Butlin, N. G. Australian Domestic Product, Investment and Foreign Borrowing 1861-1938/39. Cambridge: Cambridge University Press, 1962.

Butlin, N.G. Economics and the Dreamtime, A Hypothetical History. Cambridge: Cambridge University Press, 1993.

Butlin, N.G. Forming a Colonial Economy: Australia, 1810-1850. Cambridge: Cambridge University Press, 1994.

Butlin, N.G. Investment in Australian Economic Development, 1861-1900. Cambridge: Cambridge University Press, 1964.

Butlin, N. G., A. Barnard and J. J. Pincus. Government and Capitalism: Public and Private Choice in Twentieth Century Australia. Sydney: George Allen and Unwin, 1982.

Butlin, S. J. Foundations of the Australian Monetary System, 1788-1851. Sydney: Sydney University Press, 1968.

Chapman, Bruce, and Glenn Withers. “Human Capital Accumulation: Education and Immigration.” In Reshaping Australia’s Economy: Growth with Equity and Sustainability, edited by John Nieuwenhuysen, Peter Lloyd and Margaret Mead, 242-267. Cambridge: Cambridge University Press, 2001.

Dowrick, Steve. “Productivity Boom: Miracle or Mirage?” In Reshaping Australia’s Economy: Growth with Equity and Sustainability, edited by John Nieuwenhuysen, Peter Lloyd and Margaret Mead, 19-32. Cambridge: Cambridge University Press, 2001.

Economist. “Has he got the ticker? A survey of Australia.” 7 May 2005.

Haig, B. D. “Australian Economic Growth and Structural Change in the 1950s: An International Comparison.” Australian Economic History Review 18, no. 1 (1978): 29-45.

Haig, B.D. “Manufacturing Output and Productivity 1910 to 1948/49.” Australian Economic History Review 15, no. 2 (1975): 136-61.

Haig, B.D. “New Estimates of Australian GDP: 1861-1948/49.” Australian Economic History Review 41, no. 1 (2001): 1-34.

Haig, B. D., and N. G. Cain. “Industrialization and Productivity: Australian Manufacturing in the 1920s and 1950s.” Explorations in Economic History 20, no. 2 (1983): 183-98.

Jackson, R. V. Australian Economic Development in the Nineteenth Century. Canberra: Australian National University Press, 1977.

Jackson, R.V. “The Colonial Economies: An Introduction.” Australian Economic History Review 38, no. 1 (1998): 1-15.

Kelly, Paul. The End of Certainty: The Story of the 1980s. Sydney: Allen and Unwin, 1992.

Macintyre, Stuart. A Concise History of Australia. Cambridge: Cambridge University Press, 1999.

McCarthy, J. W. “Australian Capital Cities in the Nineteenth Century.” In Urbanization in Australia; The Nineteenth Century, edited by J. W. McCarthy and C. B. Schedvin, 9-39. Sydney: Sydney University Press, 1974.

McLean, I.W. “Australian Economic Growth in Historical Perspective.” The Economic Record 80, no. 250 (2004): 330-45.

Maddison, Angus. Monitoring the World Economy 1820-1992. Paris: OECD, 1995.

Maddison, Angus. The World Economy: A Millennial Perspective. Paris: OECD, 2001.

Maddison, Angus. The World Economy: Historical Statistics. Paris: OECD, 2003.

Meredith, David, and Barrie Dyster. Australia in the Global Economy: Continuity and Change. Cambridge: Cambridge University Press, 1999.

Nicholas, Stephen, editor. Convict Workers: Reinterpreting Australia’s Past. Cambridge: Cambridge University Press, 1988.

OECD. Education at a Glance 2005 – Tables. OECD, 2005 [cited 9 February 2006]. Available from http://www.oecd.org/document/11/0,2340,en_2825_495609_35321099_1_1_1_1,00.html.

Pope, David, and Glenn Withers. “The Role of Human Capital in Australia’s Long-Term Economic Growth.” Paper presented to 24th Conference of Economists, Adelaide, 1995.

Reserve Bank of Australia. “Australian Economic Statistics: 1949-50 to 1986-7: I Tables.” Occasional Paper No. 8A (1988).

Reserve Bank of Australia. Current Account – Balance of Payments – H1 [cited 29 November 2005]. Available from http://www.rba.gov.au/Statistics/Bulletin/H01bhist.xls.

Reserve Bank of Australia. Gross Domestic Product – G10 [cited 29 November 2005]. Available from http://www.rba.gov.au/Statistics/Bulletin/G10hist.xls.

Reserve Bank of Australia. Unemployment – Labour Force – G1 [cited 2 February 2006]. Available from http://www.rba.gov.au/Statistics/Bulletin/G07hist.xls.

Schedvin, C. B. Australia and the Great Depression: A Study of Economic Development and Policy in the 1920s and 1930s. Sydney: Sydney University Press, 1970.

Schedvin, C.B. “Midas and the Merino: A Perspective on Australian Economic History.” Economic History Review 32, no. 4 (1979): 542-56.

Sinclair, W. A. The Process of Economic Development in Australia. Melbourne: Longman Cheshire, 1976.

United Nations Development Programme. Human Development Index [cited 29 November 2005]. Available from http://hdr.undp.org/statistics/data/indicators.cfm?x=1&y=1&z=1.

Vamplew, Wray, editor. Australians: Historical Statistics. Australians: A Historical Library, series edited by Alan D. Gilbert and K. S. Inglis. Sydney: Fairfax, Syme and Weldon Associates, 1987.

White, Colin. Mastering Risk: Environment, Markets and Politics in Australian Economic History. Melbourne: Oxford University Press, 1992.

World Bank. World Development Indicators ESDS International, University of Manchester, September 2005 [cited 29 November 2005]. Available from http://www.esds.ac.uk/International/Introduction.asp.

Citation: Attard, Bernard. “The Economic History of Australia from 1788: An Introduction”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL
http://eh.net/encyclopedia/the-economic-history-of-australia-from-1788-an-introduction/

The History of the Aerospace Industry

Glenn E. Bugos, The Prologue Group

The aerospace industry ranks among the world’s largest manufacturing industries in terms of people employed and value of output. Yet even beyond its sheer size, the aerospace industry was one of the defining industries of the twentieth century. As a socio-political phenomenon, aerospace has inflamed the imaginations of youth around the world, inspired new schools of industrial design, decisively bolstered both the self-image and power of the nation state, and shrunk the effective size of the globe. As an economic phenomenon, aerospace has consumed a major share of research and development funds across many fields, subsidized innovation in a vast array of component technologies, evoked new forms of production, spurred construction of enormous manufacturing complexes, inspired technology-sensitive managerial techniques, supported dependent regional economies, and justified the deeper incursion of national governments into their economies. No other industry has so persistently and intimately interacted with the bureaucratic apparatus of the nation state.

Aerospace technology permeates many other industries — travel and tourism, logistics, telecommunications, electronics and computing, advanced materials, civil construction, capital goods manufacture, and defense supply. Here, the aerospace industry is defined by those firms that design and build vehicles that fly through our atmosphere and outer space.

The First Half-Century

Aircraft remained experimental apparatus for five years after the Wright brothers’ famous first flight in December 1903. In 1908 the Wrights secured a contract from the U.S. Army for a single aircraft, and also licensed their patents to allow the Astra Company to manufacture aircraft in France. Glenn Curtiss of New York began selling his own aircraft in 1909, prompting many American aircraft hobbyists to turn entrepreneurial.

Europeans took a clear early lead in aircraft manufacture. By the outbreak of the Great War in August 1914, French firms had built more than 2,000 aircraft, German firms had built about 1,000, and Britain slightly fewer. American firms had built fewer than a hundred, most of these one of a kind. Even then aircraft embodied diverse materials built to close tolerances, and the managers of the American wartime manufacturing effort failed to realize the need for special facilities and trained workers. American warplanes ultimately arrived too late to have much military impact or to impart much momentum to an industry. When contracts were cancelled with the armistice the industry collapsed, leading to the reconfiguration of every significant aircraft firm. By contrast, seven firms built more than 22,500 of the 400-horsepower Liberty engines, and their efforts laid the foundation for an efficient and well-concentrated aircraft engine industry — led by Wright Aeronautical Company and Curtiss Aeroplane and Motor.

Still, the war induced some infrastructure that moved the industry beyond its fragmented roots. National governments funded testing laboratories — like the National Advisory Committee for Aeronautics established in May 1915 in the United States — that also disseminated scientific information of explicit use to industry. Universities began to offer engineering degrees specific to aircraft. American aircraft designers formed a patent pool in July 1917 — administered by the Manufacturers Aircraft Association — whereby all aircraft firms cross-licensed key patents and paid into the pool without fear of infringement suits. The post-war glut of light aircraft, like the Curtiss Jenny trainers in America, allowed anyone who dreamed of flying to become a pilot.

Most of the companies that survived the war remained entrepreneurial in spirit, led by designers more interested in advancing the state of the art than in mass production. During the 1920s, aircraft assumed their modern shape. Monoplanes superseded biplanes, stressed-skin cantilevered wings replaced externally braced wings, radial air-cooled engines turned variable-pitch propellers, and enclosed fuselages and cowlings gave aircraft their sleek aerodynamic shape. By the mid-1930s, metal replaced wood as the material of choice in aircraft construction, so new types of component suppliers fed the aircraft manufacturers.

Likewise, the customers of aircraft grew more sophisticated in matching designs to their needs. Militaries formed air arms specifically to exploit this new technology and became dedicated procurers of aircraft. Air transport companies began flying passengers in the 1920s, though all those airlines were kept afloat by government airmail contracts. European nations developed airmail routes around their colonies — served by flag-carriers like the British Overseas Airways Corporation, Lufthansa, and Aeropostale. Pan Am’s routes to Asia and Latin America, linked by flying boats built by Sikorsky, Douglas and Lockheed, were the equivalent in the American empire.

The United States was the only country with a large indigenous airmail system, and it drove the structure of the industry during the 1920s. The Kelly Air Mail Act of 1925 gave airmail business to hundreds of small pilot-owned firms that hopped from airport to airport. Gradually, these operations were consolidated into larger airlines. In 1928 — in a mix of stock market euphoria and aviation enthusiasm following Charles Lindbergh’s transatlantic flight — Wall Street financiers formed holding companies that integrated airlines with the manufacture of aircraft and engines. United Aircraft and Transport, for example, combined United Airlines with Boeing; North American Aviation and the Aviation Corporation were organized along similar lines. These holding companies struggled for profitability following the stock market crash of 1929, and were ultimately undone in 1934 through legislation that split manufacturers from airlines — a separation that continued thereafter.

The United States was also the only country large enough for air travel to challenge rail travel, and in the 1930s airlines competed for passengers by forging alliances with aircraft manufacturers. The Boeing 247 airliner, based on its B-9 bomber design, marked the start of American dominance in transport aircraft. The Douglas DC-3, introduced in 1935, gave airlines their first shot at solvency by carrying people rather than mail. Many advances in aircraft design during the 1930s addressed the comfort, efficiency and safety of air travel — cabin pressurization, retractable landing gear, better instrumentation and better navigational devices around airports. Britain and Germany produced the best large bombers at the start of the 1930s, though by the start of World War II American designs were better; American firms, however, were producing very few of them.

During the 1930s, the European states had begun ramping up production of military aircraft, training pilots to fly them, and building airfields to host them. Once the war began, though, factories were bombed and supply lines cut off. As it became less likely they would overwhelm their enemies with vast fleets of aircraft, German and British aircraft firms instead invested in research and engineering to create better aircraft. Under the exigency of war, Europeans developed the strategic missile, the jet engine, better radar, all-weather navigation aids, and more nimble fighters. The German Messerschmitt 262 fighter aircraft — which combined a strong turbine engine with the innovation of swept wings — approached the speed of sound. The Europeans also innovated in tactics and logistics to use fewer aircraft more effectively. The discipline of operations research grew out of British needs to use patrol aircraft more efficiently. Though American designers also proved innovative in the crucible of war, American firms clearly triumphed in mass production.

In the six-year period 1940 through 1945, American firms built 300,718 military aircraft, including 95,272 in 1944 alone. In the previous six-year period, American firms built only 19,587 aircraft, most of those civil. In 1943, the aviation industry was America’s largest producer and employer — with 1,345,600 people bent to the task of making aircraft. A vast array of firms — especially automobile makers — fed this rapid escalation of production. Engineers disaggregated aircraft into smaller parts to parcel out to subcontractors, managed distributed manufacturing, and devised the concept of the learning curve to forecast when cost reductions kicked in. By the end of the war, Americans firmly believed in the doctrine of air power. They invested in their belief, and for the next half-century Americans would set the agenda for the aircraft industry around the world. Mass production, though, slipped from that agenda. On VJ Day the American military cancelled all orders for aircraft, and assembly lines ground to a halt. Total sales by American aircraft firms were $16 billion in 1944; by 1947 they were only $1.2 billion. Production never again reached World War II levels, despite a minor blip for the wars in Korea and Vietnam. Instead, research ruled the industry.

The Cold War

The Berlin airlift of 1948 marked the start of the Cold War between the United States and the Soviet Union, a symbolic conflict in which perceptions of aerial might played a key role. Once they divested themselves of their surplus plants, American aircraft firms rushed to incorporate into their designs the technological advances of World War II. The preeminent symbol of these efforts, and of the nature of the Cold War, was the massive Boeing B-47 long-range strategic bomber, with six engines and swept wings. Boeing built 2,000 B-47s, following its first flight in December 1947, and emerged as the dominant builder of strategic bombers and large airliners — like the B-52 and the 707. Also symbolizing this conflict was the needle-thin rocket-powered Bell X-1 which, in October 1947, became the first aircraft to break the sound barrier. The X-1 was the first in the X-series of experimental aircraft — sleek, specially built research aircraft that jousted with Soviet aircraft to set speed and altitude records. More importantly, the aerospace industry made new types of vehicles to join the half-century old propeller-driven airplane in the skies.

New technologies prompted a massive restructuring of the industry. Established airframe firms shifted from manufacturing to research, while the military channeled funds to technology-specific startup firms. For example, Sikorsky, Hiller and Bell quickly dominated the market for a new type of airframe known as the helicopter. Electronics specialists like Raytheon, Sperry, and Hughes became prime contractors for the new guided missiles, while airframe manufacturers subcontracted to them. Turbojet engines were the most disruptive new technology. Turbojets had little in common with piston engines, so two firms specializing in steam turbines — General Electric and Westinghouse — grabbed the bulk of jet engine orders until Pratt & Whitney caught up. Aircraft firms also struggled to modify their airframes for the greater speeds and altitudes possible with jet engines. Those firms that failed were superseded by those that succeeded — notably McDonnell Aircraft and Lockheed.

Intercontinental ballistic missile programs, started in 1954, fueled the micro-level restructuring of the industry. ICBMs were touted as “winning weapons” to replace massive numbers of aircraft, so missile firms invested in smaller but better factories — with clean rooms and test chambers — rather than in cavernous assembly buildings. Because of the complexity of the designs, the reliability required of each part, and the hurry in which the missiles had to be designed and built, new management models emerged from the military and aerospace firms. The Aerospace Corporation, Space Technology Laboratories of TRW Inc., and Lockheed Missiles & Space were three firms that proclaimed proprietary expertise in this new aerospace management. The ICBM efforts introduced, to all high-tech industries worldwide, the ideal and techniques of program management and systems engineering. When Europeans fretted over The American Challenge in the 1960s, they meant not so much American technology as management methods like these that generated technical innovation so relentlessly. Young men flocked to aerospace because it was cool and cutting-edge.

Also revolutionary were the spacecraft and the rockets that lifted them into orbit. The neologism “aerospace” reflected the shape of the money that flowed into the industry following the Soviet launch of Sputnik in October 1957. The U.S. Aircraft Industries Association changed its name to the Aerospace Industries Association of America, so the public might think it natural that the firms that built aircraft should also build vehicles to travel through air-less space. Furthermore, the laboratories of the National Advisory Committee for Aeronautics formed the kernel of the National Aeronautics and Space Administration, then bent the efforts of academic aeronautics toward hypersonics and space travel. In 1961, NASA got the mission to send an American to the Moon and return him safely to Earth before the decade was out. NASA built enormous space ports in Florida and Texas, enhanced its arsenal of research laboratories, bolstered its own network of hardware contractors, opened up new areas of material science, and pioneered new methods of reliability testing. Following the success of Apollo, in the 1970s NASA invested ahead of demand to create the space shuttle for regular access to space, then struggled to find ways to industrialize space.

Program management and systems engineering were applied to military aircraft in the 1960s, as the Defense Department took a more active role in telling the industry what to make and how to make it. Because of a uniformity in contracting rules, this was one of the few epochs in which the aerospace industry approached monopsony — dominated by a single customer. This systems engineering mentality drove greater design costs up-front. Aircraft grew more expensive, so the fewer produced were expected to have longer lives with more frequent remanufacturing. To get more diverse types of engineering talent involved in design, the Defense Department insisted that airframe firms — former competitors — team to win aircraft contracts. Key members in these teams were avionics firms, as airframes became little more than platforms to take electronic equipment aloft. Fewer contracts meant that Congress, voicing concern over the defense industrial base, made more procurement decisions than experts in the military or NASA. Meanwhile, profits among American aerospace firms remained high compared with almost any other industry.

Amidst all the other shocks to the American economy in the 1970s, in 1975 the United States would record its last trade surplus of the twentieth century. While other American industries lost ground to European or Japanese competitors, American aircraft remained in consistent demand. Since the mid-1960s, aerospace products have comprised between six and ten percent of all American merchandise exports. The U.S. Export-Import Bank was nicknamed the “Boeing Bank” for its willingness to lend other countries money to buy American airliners. Yet increasingly, the aerospace industry was seen as a cause of American economic failure. Critics argued that so much federal research and development funding filtered through the aerospace firms that innovation was distorted and American consumer products suffered. Conglomerates formed in the late 1960s around aerospace firms — like LTV and Litton — suggested that their core competence was not aerospace systems but the ability to read government contracting trends. Aerospace firms that were not consolidated in the mid-1970s, after aircraft lost in Vietnam were replaced, pursued diversification, strong in the belief that the engineering skill that made American aircraft so dominant could also make world-class buses and microwave ovens. They failed. Waste, fraud and abuse dominated discussion of military aerospace. Persistent cost overruns and delays suggested no one in the industry took efficiency seriously.

Matters got worse in the 1980s. Republican administrations channeled enormous funds into the aerospace firms dotting the American sunbelt, without a concomitant increase in aircraft actually built. Efforts to build a space-based missile defense system symbolized the accepted futility of this spend-up. Likewise, NASA poured money into Space Shuttle operations without an increase in flights. NASA engineers sketched, then resketched plans for an international space station to create a permanent base in space. American aerospace firms seemed overly mature, and European firms took advantage.

An International Industry

International politics has always played a role in aviation. Aircraft in flight easily transcended national borders, so governments jointly developed navigation systems and airspace protocols. Spacecraft overflew national borders within seconds so nations set up international bodies to allocate portions of near-earth space. INTELSAT, an international consortium modeled on COMSAT (the American consortium that governed operations of commercial satellites) standardized the operation of geosynchronous satellites to start the commercialization of space. Those who dreamed of space colonization also dreamed it might be free of earthly politics. Internationalization more clearly reshaped aerospace by helping firms from other countries find the economies of scale they needed to forge a place in an industry so clearly dominated by American firms.

Only the Soviet Union challenged the American aerospace industry. In some areas, like heavy-lift rockets and space medicine, the Soviets outpaced the Americans. But the Soviets and Americans fought solely in the realm of perceptions of military might, not on any military or economic battleground. The Soviets also sold military aircraft and civil transports but, with few exceptions, an airline bought either Soviet or American aircraft because of alliance politics rather than efficiencies in the marketplace. Even in civil aircraft, the Soviet Union invested far more than it earned in return. In 1991, when the Soviet Union fractured into smaller states and the subsidies disappeared, the once mighty Soviet aerospace firms were reduced to paupers. European firms then stood as more serious competitors, largely because they had developed a global understanding of the industry.

Following World War II, the European aircraft industry was in shards. Germany, Italy, and Japan were prohibited from making any aircraft of significance. French and British firms remained strong and innovative, though these firms sold mostly to their nations’ militaries and airlines. Neither could buy as many aircraft as their American counterparts, and European firms could not sufficiently amortize their engineering costs. During the 1960s, European governments allowed aircraft and missile firms to fail or consolidate into clear ‘national champions’: British Aircraft Corporation, Hawker Siddeley Aviation, and Rolls-Royce in Britain; Aerospatiale, Dassault, SNECMA and Matra in France; Messerschmitt-Bölkow-Blohm and VFW in Germany; and CASA in Spain. Then governments asked their national champions to join transnational consortia intent on building specific types of aircraft — like the PANAVIA Tornado fighter, the launch vehicles and satellites of the European Space Agency or, most successfully, the Airbus airliners. The matrix of many national firms participating variously in many transnational projects meant that the European industry operated neither as monopoly nor monopsony.

Meanwhile international travel grew rapidly, and airlines became some of the world’s largest employers. By the late 1950s, the major airlines had transitioned to Boeing- or Douglas-built jet airliners — which carried twice as many passengers at twice the speed in greater comfort. Between 1960 and 1974 passenger volume on international flights grew sixfold. The Boeing 747, a jumbo jet with 360 seats, took international air travel to a new level of excitement when introduced in January 1970. Each nation had at least one airline, and each airline had slightly different requirements for the aircraft it used. Boeing and McDonnell Douglas pioneered new methods of mass customization to build aircraft to these specifications. The Airbus A300 first flew in September 1972, and European governments continued to subsidize the Airbus Industrie consortium as it struggled for customers. In the 1980s, air travel again enjoyed a growth spurt that Boeing and Douglas could not immediately satisfy, and Airbus found its market. By the 1990s, the Airbus consortium had built a contractor network with tentacles around the world, had developed a family of successful airliners, and split the market with American producers.

Aerospace extends beyond the most industrialized nations. Walt Rostow, in his widely read book on economic development, used aviation imagery to suggest a trajectory of industrial growth. The imagery was not lost on newly industrializing countries like Brazil, Israel, Taiwan, South Korea, Singapore or Indonesia. They too entered the industry, opportunistically, by setting up depots to maintain the aircraft they bought abroad. Then they took subcontracts from American and European firms to learn how to manage their own projects to high standards. Nations at war — in the Middle East, Africa, and Asia — proved ready customers for the simple and inexpensive aircraft these newcomers produced. Missiles, likewise, if derived from proven designs, were generally easy and cheap to produce. By 1971, fourteen nations could build short-range and air-defense missiles. By the 1990s more than thirty nations had some capacity to manufacture complete aircraft. Some made only small, general-purpose aircraft — which represented a tiny fraction of the total dollar value of the industry but proved immensely important to the military and communication needs of developing states. The leaders of almost every nation have seen aircraft as a leading sector — one that creates spin-offs and sets the pace of technological advance in an entire economy.

A Post-Cold War World

When the Cold War ended, the aerospace industry changed dramatically. After the record run-up in the federal deficit during the 1980s, by 1992 the United States Congress demanded a peace dividend and slashed funding for defense procurement. By 1994, demand for civil airliners had also entered a cyclical downturn. Aerospace-dependent regions — notably Los Angeles and Seattle — suffered recession, then rebuilt their economies around different industries. Aerospace employed 1.3 million Americans in 1989, or 8.8 percent of everyone working in manufacturing; by 1995 aerospace employed only 796,000 people, or 4.3 percent of everyone working in a manufacturing industry. As it had for decades, in 1985 aerospace employed about one-fifth of all American scientists and engineers engaged in research and development; by 1999 it employed only seven percent.

Rather than diversify or shed capacity haphazardly, aerospace firms focused. They divested or merged feverishly in 1995 and 1996, hoping to find the best consolidation partners before the federal government concluded that further mergers would harm competition. GE sold its aerospace division to Martin Marietta, which then sold itself to Lockheed. Boeing bought the aerospace units of Rockwell International, and then acquired McDonnell Douglas. Northrop bought Grumman. Lockheed Martin and Boeing both ended up with about ten percent of all government aerospace contracts, though joint ventures and teaming remained significant. The concentration in the American industry made it look like the European industry, except that at the margins new venture-backed firms sprang up to develop new hybrid aircraft. Funding for space vehicles held fairly steady as new firms found new uses for satellites in communications, defense, and remote sensing of the earth. NASA reconfigured its relations with industry around the mantra of “faster, better, and cheaper,” especially in the creation of reusable launch vehicles.

Throughout the Cold War, total sales by aerospace firms divided roughly as follows: one-half aircraft, split fairly evenly between military and civil; one-quarter space vehicles; one-tenth missiles; and the rest ground support equipment. When spending for aerospace recovered in the late 1990s, there was the first significant shift toward sales of civil aircraft. After a century of development, there are strong signs that the aircraft and space industries are finally breaking free of their military vassalage. There are also strong signs that the industry is becoming global — trans-Atlantic mergers, increasing standardization of parts and operations, aerospace imports and exports rising in lockstep. More likely, as it has been for a century, aerospace will remain intimately tied to the nation state.

Bibliography

Aerospace Industries Association of America, Inc., Washington D.C. Aerospace Facts & Figures. This is an annual statistical series, dating back to 1945, about developments in the aerospace industry.

Bilstein, Roger E. The American Aerospace Industry: From Workshop to Global Enterprise. New York: Twayne Publishers, 1996.

Bromberg, Joan Lisa. NASA and the Space Industry. Baltimore: Johns Hopkins University Press, 1999.

Bugos, Glenn E. Engineering the F-4 Phantom II: Parts Into Systems. Annapolis: Naval Institute Press, 1996.

Hayward, Keith. The World Aerospace Industry: Collaboration and Competition. London: Duckworth, 1994.

Pattillo, Donald M. Pushing the Envelope: The American Aircraft Industry. Ann Arbor: University of Michigan Press, 1998.

Pisano, Dominick and Cathleen Lewis, editors. Air and Space History: An Annotated Bibliography. New York: Garland, 1988.

Rae, John B. Climb to Greatness: The American Aircraft Industry, 1920-1960. Cambridge: MIT Press, 1968.

Stekler, Herman O. The Structure and Performance of the Aerospace Industry. Berkeley: University of California Press, 1965.

Vander Meulen, Jacob. The Politics of Aircraft: Building an American Military Industry. Lawrence: University Press of Kansas, 1991.

Citation: Bugos, Glenn. “History of the Aerospace Industry”. EH.Net Encyclopedia, edited by Robert Whaples. August 28, 2001. URL http://eh.net/encyclopedia/the-history-of-the-aerospace-industry/

Chicago Business and Industry: From Fur Trade to E-Commerce

Reviewer(s):Owen, Laura J.

Published by EH.Net (September 2013)

Janice L. Reiff, editor, Chicago Business and Industry: From Fur Trade to E-Commerce. Chicago: University of Chicago Press, 2013. xii + 377 pp. $22.50 (paperback), ISBN: 978-0-226-70936-9.

Reviewed for EH.Net by Laura J. Owen, Department of Economics, DePaul University.

Janice Reiff (Associate Professor of History, UCLA) draws on entries from the Encyclopedia of Chicago to weave a history of Chicago’s economy. Reiff seeks to explore the reciprocal relationship between the metropolitan area and its economy. The city represents the centralization of economic activity, but the types and locations of this economic activity in turn shape the fabric of the city. Chicago Business and Industry: From Fur Trade to E-Commerce is organized into four sections, topically arranging the encyclopedic entries.

Section 1, “The Economic Geography,” includes three essays which explore how Chicago’s location allowed it to develop from a fur trading center to a global city. Illustrating Reiff’s idea of reciprocity, the essays also detail how the distribution of new economic activities continually shaped the local environments and neighborhoods of the city. The section includes maps (from the Newberry Library collection) providing excellent visuals on the role transportation improvements have played in linking Chicago to an ever-expanding world economy and the new shapes of local retail activity as the city has expanded.

The second section begins with two longer essays, “The Business of Chicago” (Peter A. Coclanis) and “Innovation, Invention and Chicago Business” (Louis P. Cain), which trace pivotal events and players in the development of Chicago’s economy. These are followed by more than fifty shorter entries on Chicago industries ranging from Accounting to Wholesaling. There is a wealth of information here and many interesting stories that could enrich the teaching of Economic History. Examples range from the origins of futures contracts during the Civil War to the role that the mail order giants (Ward and Sears) played in the distribution of musical instruments to rural America.

“Chicago’s Business” consists of seventy-five pages of short entries on specific firms ranging from Abbott Laboratories to LaSalle National Bank to Zenith Electronics. The alphabetic order of sections two and three makes it easy to search for specific industries or firms, but does not allow the reader to readily see possible connections between industries producing similar goods and services, or firms operating within the same industry. Grouping the entries on firms by industry or sector (goods producing, services) would encourage the reader to make these connections.

Section four, “Working in Chicago,” begins with several longer entries focusing on types of work available, work culture, leisure activities of workers, and how educational institutions responded to the needs of the workplace. These are followed by shorter entries on workers in specific occupations and industries. Again, there are many fascinating details that could be incorporated into the teaching of the transformation of work in the nineteenth and twentieth centuries. For example, “Schooling for Work” (Arthur Zilversmit) addresses several features of vocational training in the U.S. that help explain why it was often less successful than its counterpart in some European countries.

The book concludes with an eight-page bibliography and detailed index.

The primary drawback of the book is that it reads as an encyclopedia. In the Introduction (pp. 1-6) Reiff provides a brief description of the theme of each of the four sections, but leaves it to the reader to see how the selected entries support these themes. An introduction at the beginning of each section could guide the reader through the selected entries and minimize the choppy experience of reading multiple short pieces. Individual section introductions could be used to make connections between entries that are missing in the alphabetic organization.

In the introduction, Reiff encourages readers “to explore that larger picture [created by the many particular stories] by following their own paths through the essays” (p. 3). The 30-page index should facilitate this type of exploration. However, exploration on specific topics (beyond what is included in the book) is hindered by the bibliography’s placement at the end of the volume with no linkage to the specific entries or sections. Interested readers can overcome this obstacle by consulting the online version of the Encyclopedia of Chicago in which entries are immediately followed by bibliographic references.

After finding this volume on the bookshelf, one hopes that readers will discover the complete work from which it is drawn. Reiff, along with Ann Durkin Keating and James R. Grossman, edited the Encyclopedia of Chicago, and the print version was adapted into The Electronic Encyclopedia of Chicago and linked to the online resources of the Chicago Historical Society. This online resource is the real treasure, allowing teachers to pull in specific Chicago examples to courses (ranging from U.S. economic history to urban economics) and providing students with a wealth of ideas for research projects.

References:

Janice L. Reiff, Ann Durkin Keating, and James R. Grossman, editors, Encyclopedia of Chicago, University of Chicago Press, 2004.

The Electronic Encyclopedia of Chicago, Chicago Historical Society, 2005, [http://www.encyclopedia.chicagohistory.org/].

Laura Owen (Associate Professor of Economics, DePaul University) is currently working on a project examining hours of work in the U.S. and Canada in the second half of the twentieth century.

Copyright (c) 2013 by EH.Net. All rights reserved. This work may be copied for non-profit educational uses if proper credit is given to the author and the list. For other permission, please contact the EH.Net Administrator (administrator@eh.net). Published by EH.Net (September 2013). All EH.Net reviews are archived at http://www.eh.net/BookReview

Subject(s):Business History
Industry: Manufacturing and Construction
Urban and Regional History
Geographic Area(s):North America
Time Period(s):19th Century
20th Century: Pre WWII
20th Century: WWII and post-WWII

Labour-Intensive Industrialization in Global History

Reviewer(s):Horn, Jeff

Published by EH.Net (May 2013)

Gareth Austin and Kaoru Sugihara, editors, Labour-Intensive Industrialization in Global History. London: Routledge, 2013. xiv + 314 pp. $140 (hardcover), ISBN: 978-0-415-45552-7.

Reviewed for EH.Net by Jeff Horn, Department of History, Manhattan College.

This timely and important book gathers together a number of challenges to Anglo- and Euro-centric explanations of the process of industrialization in various states and regions around the globe. This volume, which appears in the Routledge Explorations in Economic History series, developed out of interactions at conferences, large and small, between 2001 and 2012. Collectively, these authors seek to test historically and then extend conceptually Kaoru Sugihara’s influential argument that the East Asian path featuring labor-intensive, resource-saving industrialization is diffusing globally and that this model offers a more realistic means of improving living standards without destroying the environment in areas that have not yet industrialized (p. i).

The co-editors, Gareth Austin, Senior Lecturer in Economic History at the London School of Economics, and Sugihara, Professor of Economic History at Kyoto University, have assembled a superb team of scholars: Jan de Vries, Michel Hau, Colin M. Lewis, Kenneth Pomeranz, Tirthankar Roy, Osamu Saito, Nicolas Stoskopf, Masayuki Tanimoto, and Pierre van der Eng. Inevitably, the various authors have done so with more or less success, but the overall product is valuable, both in gathering these studies in one place and in challenging a number of existing orthodoxies about the actual and theoretical roles of labor, capital and factor endowments in industrialization. The editors contend that: 1) the diffusion of the East Asian model has reduced regional inequalities between East and West caused by industrialization and colonialism; 2) the diffusion of labor-intensive industrialization has generated the majority of today’s global employment in manufacturing; and 3) the Western path to development has not been and is not, at present, the sole route to industrialization, though they do acknowledge that the Western model strongly impacted the other paths (p. 6).

The introduction competently sets forth the essential issues. The next chapter is Sugihara’s latest refinement of his interpretation of East Asian industrial experiences. His emphasis on improving the quality of labor as the “vital element” in achieving global transformation is particularly welcome (p. 21). In Chapter 3, de Vries provides a well-argued and balanced examination of “The Industrious Revolutions in East and West” that focuses on the role of markets as the chief difference between the behavior of households in these two regions (p. 80). With an emphasis on the role of skill intensity, Saito posits in Chapter 4 that proto-industrialization should be understood as one form of labor-intensive industrialization capable of generating Smithian growth (p. 85). These four “framing” chapters are provocative statements of arguments that have been made elsewhere, but here they are explicitly in dialogue, which sharpens the analysis considerably.

Based on diverse Asian examples, the editors argue that “labour-intensive industrialization is transferable to labour-surplus economies through trade, investment and industrial and education policies” (p. 5). This theme is explored fruitfully in chapters by Roy, Pomeranz, Tanimoto, and van der Eng constituting the middle third of the volume. The authors emphasize in India (Roy), China (Pomeranz), Japan (Tanimoto) and Indonesia (van der Eng) the existence, persistence and competitiveness of small-scale, labor-intensive industry both before and during industrialization. They all find that the industrial success of this model is based on relatively cheap and relatively abundant labor of relatively high quality. The argument works best for India, China and Japan. In the case of Indonesia, van der Eng explores, in somewhat roundabout fashion, why export-driven, labor-intensive industrialization was impossible before oil prices fell in the 1980s, necessitating the development of new opportunities (p. 195).

Outside southern and eastern Asia, the East Asian model runs into difficulties, as Austin himself acknowledges (pp. 291-92). In Chapter 9, Austin analyzes the role of labor intensity in first retarding, then supporting manufacturing in West Africa before concluding overly optimistically that conditions may be shifting in West Africa’s favor (pp. 223-25). He highlights the role of markets by demonstrating that cheap labor alone is not enough to support labor-intensive manufacturing (p. 218). Lewis demonstrates that Latin America was chronically short of both capital and labor, which explains why there was no transition from labor-intensive colonial-era to capital-intensive modern industry (p. 244). He explains that historical complaints about labor quality or labor scarcity actually reflect a lack of work, rather than any objective workforce deficiency (pp. 244-46). Lewis’s history is far more convincing than his analysis of contemporary success in Latin America, which seems to come out of nowhere in his account. The volume then returns to Europe for Hau and Stoskopf’s discussion of nineteenth-century Alsace centered on demographic factors. Although this piece contains much interesting data, it is too short. Unfortunately, Hau and Stoskopf do not link their piece to the broader arguments, either theoretically or historiographically, that they evoke. The book concludes with Austin’s extended historiographical examination of the interplay of labor-intensive industrialization and global economic development. He explicitly seeks to expand Sugihara’s two paths theory by expounding a third model based on environmental constraints on the use of land and labor in West Africa (p. 292). To limit the Western bias of Alexander Gerschenkron’s examination of “late developing” countries, Austin convincingly emphasizes the importance of factor ratios (p. 297).

This volume presents an exciting set of economic explanations of global industrial development that fit the historical evidence far better than standard Anglo- or Euro-centric accounts. A few shortcomings, however, require comment. For a book that criticizes explanations that assign a relatively passive role to labor (p. i), it is startling that every single one of these authors deals solely with labor as an abstract collective. Readers never meet a worker, even briefly, to illustrate a point. In short, this volume presents a faceless version of labor without individual laborers, which undermines some of the effectiveness of the focus on work and weakens the argument. The related issues of global context and change over time are also muddied by the order of presentation. To go from a twentieth-century chapter like van der Eng’s to Austin and Lewis, who both examine far longer periods, before returning to nineteenth-century Alsace is disconcerting and is not dealt with sufficiently in the introduction. It should be noted, however, that these critiques do not undermine the basic worth of either the project itself or its conclusions. This volume profoundly challenges existing orthodoxies and should provoke groundbreaking further research. At the very least, these linked accounts are too firmly grounded in historical experience to be ignored and must be taken into account in any explanation of global industrial development in the past, present or future. Despite its hefty price, this book merits purchase by every academic library.

Jeff Horn is Professor of History at Manhattan College. He will soon complete Economic Development in Early-Modern France: The Privilege of Liberty, 1650-1800, under contract with Cambridge University Press. jeff.horn@manhattan.edu

Copyright (c) 2013 by EH.Net. All rights reserved. This work may be copied for non-profit educational uses if proper credit is given to the author and the list. For other permission, please contact the EH.Net Administrator (administrator@eh.net). Published by EH.Net (May 2013). All EH.Net reviews are archived at http://www.eh.net/BookReview

Subject(s):Industry: Manufacturing and Construction
Geographic Area(s):General, International, or Comparative
Time Period(s):17th Century
18th Century
19th Century
20th Century: Pre WWII
20th Century: WWII and post-WWII

Why Australia Prospered: The Shifting Sources of Economic Growth

Author(s):McLean, Ian W.
Reviewer(s):Harper, Ian

Published by EH.Net (May 2013)

Ian W. McLean, Why Australia Prospered: The Shifting Sources of Economic Growth. Princeton, NJ: Princeton University Press, 2012. xvi + 281 pp. $35 (cloth), ISBN: 978-0-691-15467-1.

Reviewed for EH.Net by Ian Harper, Deloitte Access Economics.

There was a time not so long ago when the study of Australian economic history was taken more seriously than it is today. Australia’s major universities boasted separate departments of economic history, in which some of the authors familiar to any student of Australian economic history studied and taught. Occasionally professional economic historians took their place alongside economists in departments of economics, as is true of the author of this fine book, who taught for many years at the University of Adelaide in South Australia.

Why Australia Prospered is the first major survey of Australia’s modern economic history to appear in many years, and it is an outstanding piece of scholarship. Indeed, in his comment on the dust-cover, E.L. Jones, one of Australia’s internationally distinguished economic historians, confidently predicts that “it will become the standard work on Australian economic history.”

The book spans the full gamut of Australia’s story from the settlement of the penal colony of New South Wales by the British in 1788 to current debates about the future of Australian prosperity in the wake of the China-driven resources boom. A great strength of the book is its value to readers interested in Australia’s contemporary economic challenges as much as to those keen to understand more of what distinguishes Australia’s historical experience from that of similar “settler” economies like Canada, the United States and Argentina, with which Australia is often compared.

Australia remains one of the most prosperous countries in the world, easily within the top ten ranked by GDP per capita and second only to Norway according to the United Nations Human Development Index. By any account, this is a story of remarkable success considering the comparatively short time period since European settlement and the fact, as McLean sensitively explains, that Australia’s modern economy was “built from scratch,” there being no economic interaction of significance between the Australian Aborigines and the newly-arrived British colonists.

McLean tells the story sequentially, partly for convenience but partly to demonstrate one of his themes: that the basis of Australia’s prosperity has shifted over time. There is no single reason why Australians are rich, and McLean firmly repudiates the popular view that Australia is just a vast quarry or farm, and that Australians are wealthy for no better reason than their fortunate initial endowment of resource-laden and/or arable land; in other words, that there is little more to Australia’s economic development than dumb luck.

Australia’s bountiful resource endowment might have amounted to far less without the kick-start provided by favorable demographics and labor force participation rates characteristic of a convict settlement, generous financial “aid” from Britain and strong trade links with the Mother Country, as well as the early adoption of representative and responsible self-government. After all, Argentina stands as the classic counterexample of a settler economy with a resource endowment to rival Australia’s but whose economic prosperity began to lag Australia’s in the late nineteenth century and has fallen further behind ever since.

The run-up from near-starvation in the first decade of colonial existence to enjoying the world’s highest material living standard (as measured by GDP per capita) just one century later in the 1880s is familiar. It is a story of land, exploited first for wool production, then mineral extraction beginning with gold, and subsequently for agriculture, especially following the development of refrigerated shipping. But then the first of a series of major shocks, sourced from beyond her shores rather than within, hit Australia, beginning with the devastating depression of 1893-95 and followed by two world wars and another depression, all within the span of the next half-century.

The seemingly relentless rise of Australian living standards slowed to a crawl. The annual average growth rate of per capita GDP fell by half during the two-and-a-half decades following 1890, and barely reached 0.1 percent per annum between 1913 and 1939. Not until after the Second World War did prosperity levels once again begin to lift. The latest long boom, which has seen Australian living standards ascend several places up global league tables, commenced as recently as 1990, during which time Australia, almost uniquely among developed economies, has avoided recession altogether.

As Australia’s economic fortunes waxed and waned, domestic economic institutions and policies evolved. McLean makes much of this historical dialectic, more so than the orthodox telling. This applies most especially to his interpretation of Australia’s long dalliance with industrial tariff protection, commencing in 1908 during the lengthy aftermath of the 1890s depression and concluding only in the 1980s.

Most economists regard this episode as the triumph of vested interests over rational economic policy. Yet, in McLean's rendition, such policy innovations say more about the resilience of Australian political institutions in re-casting the social trade-off between stability and growth as circumstances warrant or allow. Australia's later abandonment of tariff protection and floating of the Australian dollar are interpreted in a similar vein. This is the only respect in which McLean's narrative has divided professional opinion in Australia, with proponents and detractors of Australia's proclivity towards economic interventionism voicing their approval or disapproval with equal brio.

Ian McLean has written a timely and masterful account of the long sweep of Australia's economic history, which will be relished by anyone interested in the unique circumstances of this country's remarkable economic development. Written for the non-specialist, the narrative is accessible, brisk and appropriately, if sparsely, illustrated with charts and tables. There is an extensive bibliography and index.

Ian Harper is Emeritus Professor of Economics at the University of Melbourne and a Partner at Deloitte Access Economics. His latest book, Economics for Life (Acorn Press, 2011), includes a chapter on Australian economic history. Email correspondence should be sent to iaharper@deloitte.com.au.

Copyright (c) 2013 by EH.Net. All rights reserved. This work may be copied for non-profit educational uses if proper credit is given to the author and the list. For other permission, please contact the EH.Net Administrator (administrator@eh.net). Published by EH.Net (May 2013). All EH.Net reviews are archived at http://www.eh.net/BookReview

Subject(s):Economic Development, Growth, and Aggregate Productivity
Economywide Country Studies and Comparative History
Geographic Area(s):Australia/New Zealand, incl. Pacific Islands
Time Period(s):18th Century
19th Century
20th Century: Pre WWII
20th Century: WWII and post-WWII

The Chosen Few: How Education Shaped Jewish History, 70-1492

Author(s):Botticini, Maristella
Eckstein, Zvi
Reviewer(s):Chiswick, Carmel U.

Published by EH.Net (January 2013)

Maristella Botticini and Zvi Eckstein, The Chosen Few: How Education Shaped Jewish History, 70-1492. Princeton, NJ: Princeton University Press, 2012. xvii + 323 pp. $39.50 (hardcover), ISBN: 978-0-691-14487-0.

Reviewed for EH.Net by Carmel U. Chiswick, Department of Economics, George Washington University.

The Chosen Few by Maristella Botticini (Bocconi University) and Zvi Eckstein (Tel Aviv University) reminds us, for those who need reminding, how Cliometrics can transform our understanding of historical events. They examine Jewish history from an economic perspective with results that are both innovative and insightful.

The book is structured around a skeleton of straightforward economic theory, fleshed out with data, quantitative and qualitative, obtained from an extraordinary array of documentary evidence. The historical period covered is a few decades short of 1,500 years, requiring us to step back and look through a very broad lens, yet the proof offers details on everyday economic life and on the timing of events. The economic model is simple but not simplistic, presented elegantly without bells and whistles, sophisticated but accessible to a general reader. (Technical language is wisely confined to appendices that spell out the model mathematically and present details on statistics that are new or controversial.) And the overall result is a new perspective that will change forever the way we understand the economic history of Jews over a broad spectrum of time and space.

The economic model developed by Botticini and Eckstein uses a human capital approach to look at the way investments in religious education interact with occupational choice and earnings. At the beginning of their story, approximately in the first century, Judaism was in transition from a religion centered on the Temple in Jerusalem to a synagogue-based religion that could be observed anywhere that Jews lived. Part of this transition required that every Jewish male learn to read from the Torah, making basic literacy a part of religious training that began at the age of 5 or 6 and encouraging further study for those so inclined. This meant that even ordinary Jewish men (and sometimes women) could read, and perhaps write, at a time when literacy was rare among the common people.

Botticini and Eckstein develop a model placing Jewish literacy within its economic context. When urbanization and commercialization raised the demand for occupations in which reading and writing were an advantage, the religious training of Jews gave them a comparative advantage. This meant that investments in Jewish religious education earned a reward in the marketplace as long as Jews moved into those occupations, which of course they did. In contrast, when urban and commercial economies declined, Jewish religious training lost its economic advantage. This deceptively simple model is the framework for understanding the economic incentives not only for Jewish occupational clustering but also for the strength of Jewish attachment to Judaism.

At the beginning of their story, in the year 70, Botticini and Eckstein estimate a population of some 5 million Jews (about the size of today's American Jewish population), half of whom lived in the Land of Israel under Roman rule and the rest in various places in Mesopotamia, Persia, Egypt, Asia Minor and the Balkans. Within the next century the Jewish population dropped by nearly half, and by the year 650 there were only one million Jews, living mostly in Mesopotamia and Persia. Throughout this period most Jews, like most non-Jews, were farmers, a fact that Botticini and Eckstein document in some detail. War and famine, including the exile that dispersed Jews after the Romans destroyed Jerusalem, explain no more than half of this decline, which was considerably greater than the general population decline during this period. As long as Jews remained farmers, however, the literacy requirement provided benefits only in the religious sphere and not in the secular economy. Many farmers responded by not investing in their children's Jewish education, and most of their descendants assimilated into the surrounding (often Christian) populations. Those who remained Jews would have been a self-selected group of people with either strong preferences for religious Judaism or a high ability for reading and writing.

For the second part of their period, approximately 750 to 1150, the same model explains why Jews (most of whom still lived in Persia and Mesopotamia) shifted from rural to urban occupations, from a community of farmers to one of craftsmen and merchants. Under the Muslim Caliphates cities grew, trade thrived, and the demand for occupations benefitting from literacy grew accordingly. Literacy skills Jews acquired as part of their religious education transferred readily to these urban occupations and were rewarded with high earnings, generating an income effect that supported a Golden Age of Jewish culture. Those Jews who remained farmers were a self-selected group of persons who invested little in religious education and eventually assimilated into the general (Muslim) population.

Botticini and Eckstein look at demographic trends during this period of prosperity and cultural flowering, observing that the Jewish population not only grew in size but dispersed to cities all along the trading routes from India to Iberia, from Yemen to Europe. They are at pains to show that these migrations were not motivated by push factors like discrimination or expulsion, but rather by the pull of new opportunities for urban craftsmen and merchants. In most places the Jewish community concentrated in large cities, but in Europe, where the cities were too small to support much activity in high-level urban occupations, the Jewish communities were smaller and scattered more widely in many towns. The Mongol invasions of the thirteenth century destroyed the cities of the Middle East, devastated its commerce, and dramatically reduced demand for urban occupations throughout the region. Jewish religious education no longer yielded secular benefits in the impoverished Muslim economy, and the number of Jews declined as they assimilated into the surrounding population to avoid costly investments in religious human capital.

The Golden Age of Jewish culture in the Muslim world created a spiritual and intellectual legacy on which European Jewry could build. In particular, the Talmud and Responsa literature (correspondence ruling on religious observance in everyday business and family matters) discussed the application of ancient (biblical) rules to contemporary activities. This literature took Jewish religious studies well beyond basic literacy to develop literary sophistication and hone decision-making skills. After the Mongol invasions destroyed the Muslim commercial economy, Europe became the new center of Jewish learning that nurtured these skills. During the fourteenth century Spain had the most sophisticated economy in Europe, and Spanish Jewry flourished in both religious culture and secular occupations.

Wherever they lived, Jewish communities maintained an active correspondence with each other on religious matters, creating networks that benefitted commercial activities as well. These networks meant that urban Jews living in capital-scarce countries could borrow from Jews in more prosperous communities. After the Mongol invasions, when European economies began to expand in the fourteenth and fifteenth centuries, imperfect capital markets created arbitrage opportunities that made money lending an especially profitable business. Botticini and Eckstein argue convincingly that Jewish trading networks, mercantile experience, and universal literacy gave European Jews a comparative advantage in a very profitable profession for which few non-Jews had the relevant skills. They thus argue that religiously-motivated education (creating literacy and decision-making skills transferable to secular occupations) and religiously-motivated correspondence networks explain why money lending had become the dominant occupation of European Jews by the fifteenth century.

Botticini and Eckstein's simple yet sophisticated human capital analysis provides new insights into Jewish history for the fourteen centuries covered in this book. In the last chapter of The Chosen Few they promise us a new book carrying the analysis forward for the next 500 years, from 1492 to the present. Judging from the economic success of modern Jews, 80 percent of whom now live in the United States or Israel, their model suggests strong complementarity between skills developed by a Jewish religious education and those associated with business management and scientific investigation.

Intentional or not, The Chosen Few follows an expositional style that suggests this very hypothesis. Like the Talmud, each topic is introduced by a statement of fact (evidence) followed by questions about what those facts mean and how to explain them. The authors then consider a number of opinions (hypotheses), including their own, and discuss the pros and cons of each with respect to internal consistency and historical evidence. This methodology yields a very convincing Cliometric analysis that we can expect to inform all future economic histories of the Jews between 70 and 1492.

Carmel U. Chiswick is Research Professor of Economics, George Washington University, and Professor Emerita, University of Illinois at Chicago. She has published widely on the economics of religion, especially on Jews, and much of her work on this subject is collected in C. Chiswick, The Economics of American Judaism (Routledge, 2008).

Copyright (c) 2013 by EH.Net. All rights reserved. This work may be copied for non-profit educational uses if proper credit is given to the author and the list. For other permission, please contact the EH.Net Administrator (administrator@eh.net). Published by EH.Net (January 2013). All EH.Net reviews are archived at http://www.eh.net/BookReview

Subject(s):Education and Human Resource Development
Geographic Area(s):Europe
Middle East
Time Period(s):Ancient
Medieval