
The Depression of 1893

David O. Whitten, Auburn University

The Depression of 1893 was one of the worst in American history with the unemployment rate exceeding ten percent for half a decade. This article describes economic developments in the decades leading up to the depression; the performance of the economy during the 1890s; domestic and international causes of the depression; and political and social responses to the depression.

The Depression of 1893 can be seen as a watershed event in American history. It was accompanied by violent strikes, the climax of the Populist and free silver political crusades, the creation of a new political balance, the continuing transformation of the country’s economy, major changes in national policy, and far-reaching social and intellectual developments. Business contraction shaped the decade that ushered out the nineteenth century.

Unemployment Estimates

One way to measure the severity of the depression is to examine the unemployment rate. Table 1 provides estimates of unemployment, which are derived from data on output — annual unemployment was not directly measured until 1929, so there is no consensus on the precise magnitude of the unemployment rate of the 1890s. Despite the differences in the two series, however, it is obvious that the Depression of 1893 was an important event. The unemployment rate exceeded ten percent for five or six consecutive years. The only other time this occurred in the history of the US economy was during the Great Depression of the 1930s.

Timing and Depth of the Depression

The National Bureau of Economic Research estimates that the economic contraction began in January 1893 and continued until June 1894. The economy then grew until December 1895, but it was then hit by a second recession that lasted until June 1897. Estimates of annual real gross national product (which adjust for this period’s deflation) are fairly crude, but they generally suggest that real GNP fell about 4% from 1892 to 1893 and another 6% from 1893 to 1894. By 1895 the economy had grown past its earlier peak, but real GNP fell about 2.5% from 1895 to 1896. During this period population grew at about 2% per year, so real GNP per person didn’t surpass its 1892 level until 1899. Immigration, which had averaged over 500,000 people per year in the 1880s and which would surpass one million people per year in the first decade of the 1900s, averaged only 270,000 from 1894 to 1898.
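The per-person arithmetic behind these rough figures can be sketched as follows. This is an illustrative calculation only, using the declines and the 2% population growth rate stated above, not an independent estimate:

```python
# Index real GNP in 1892 at 100 and apply the article's rough declines.
gnp_1892 = 100.0
gnp_1893 = gnp_1892 * (1 - 0.04)   # "fell about 4% from 1892 to 1893"
gnp_1894 = gnp_1893 * (1 - 0.06)   # "another 6% from 1893 to 1894"
print(round(gnp_1894, 1))          # about 90.2: a ~10% cumulative drop

# With population compounding at roughly 2% a year, real GNP in 1899
# had to exceed its 1892 level by seven years of population growth
# before output per person recovered.
pop_factor = 1.02 ** 7
print(round(pop_factor, 3))        # about 1.149, i.e. GNP ~15% above 1892
```

This helps explain why per-person output did not regain its 1892 level until 1899 even though total output passed its earlier peak by 1895.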

Table 1
Estimates of Unemployment during the 1890s

Year Lebergott Romer
1890 4.0% 4.0%
1891 5.4 4.8
1892 3.0 3.7
1893 11.7 8.1
1894 18.4 12.3
1895 13.7 11.1
1896 14.5 12.0
1897 14.5 12.4
1898 12.4 11.6
1899 6.5 8.7
1900 5.0 5.0

Source: Romer, 1986

The depression struck an economy that was more like the economy of 1993 than that of 1793. By 1890, the US economy generated one of the highest levels of output per person in the world — below that in Britain, but higher than the rest of Europe. Agriculture no longer dominated the economy, producing only about 19 percent of GNP, well below the 30 percent produced in manufacturing and mining. Agriculture’s share of the labor force, which had been about 74% in 1800 and 60% in 1860, had fallen to roughly 40% in 1890. As Table 2 shows, only the South remained a predominantly agricultural region. Throughout the country few families were self-sufficient; most relied on selling their output or labor in the market — unlike those living in the country one hundred years earlier.

Table 2
Agriculture’s Share of the Labor Force by Region, 1890

Northeast 15%
Middle Atlantic 17%
Midwest 43%
South Atlantic 63%
South Central 67%
West 29%

Economic Trends Preceding the 1890s

Between 1870 and 1890 the number of farms in the United States rose by nearly 80 percent, to 4.5 million, and increased by another 25 percent by the end of the century. Farm property value grew by 75 percent, to $16.5 billion, and by 1900 had increased by another 25 percent. The advancing checkerboard of tilled fields in the nation’s heartland represented a vast indebtedness. Nationwide, about 29 percent of farmers were encumbered by mortgages. One contemporary observer estimated 2.3 million farm mortgages nationwide in 1890 worth over $2.2 billion. But farmers in the plains were much more likely to be in debt. Kansas croplands were mortgaged to 45 percent of their true value, those in South Dakota to 46 percent, in Minnesota to 44, in Montana to 41, and in Colorado to 34 percent. Debt covered a comparable proportion of all farmlands in those states. Under favorable conditions the millions of dollars of annual charges on farm mortgages could be borne, but a declining economy brought foreclosures and tax sales.

Railroads opened new areas to agriculture, linking these to rapidly changing national and international markets. Mechanization, the development of improved crops, and the introduction of new techniques increased productivity and fueled a rapid expansion of farming operations. The output of staples skyrocketed. Yields of wheat, corn, and cotton doubled between 1870 and 1890 though the nation’s population rose by only two-thirds. Grain and fiber flooded the domestic market. Moreover, competition in world markets was fierce: Egypt and India emerged as rival sources of cotton; other areas poured out a growing stream of cereals. Farmers in the United States read the disappointing results in falling prices. Over 1870-73, corn and wheat averaged $0.463 and $1.174 per bushel and cotton $0.152 per pound; twenty years later they brought but $0.412 and $0.707 a bushel and $0.078 a pound. In 1889 corn fell to ten cents in Kansas, about half the estimated cost of production. Some farmers in need of cash to meet debts tried to increase income by increasing output of crops whose overproduction had already demoralized prices and cut farm receipts.
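The size of those price declines is easy to verify directly from the quoted averages:

```python
# Percentage declines implied by the article's price averages:
# 1870-73 averages versus prices twenty years later.
prices = {
    "corn ($/bushel)":  (0.463, 0.412),
    "wheat ($/bushel)": (1.174, 0.707),
    "cotton ($/pound)": (0.152, 0.078),
}
for crop, (early, late) in prices.items():
    drop = (early - late) / early * 100
    print(f"{crop}: {drop:.0f}% decline")
```

The computed declines are roughly 11 percent for corn, 40 percent for wheat, and 49 percent for cotton, which is why the burden fell hardest on cotton and wheat growers.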

Railroad construction was an important spur to economic growth. Expansion peaked between 1879 and 1883, when eight thousand miles a year, on average, were built, including the Southern Pacific, Northern Pacific, and Santa Fe. An even higher peak was reached in the late 1880s, and the roads provided important markets for lumber, coal, iron, steel, and rolling stock.

The post-Civil War generation saw an enormous growth of manufacturing. Industrial output rose by some 296 percent, reaching in 1890 a value of almost $9.4 billion. In that year the nation’s 350,000 industrial firms employed nearly 4,750,000 workers. Iron and steel paced the progress of manufacturing. Farm and forest continued to provide raw materials for such established enterprises as cotton textiles, food, and lumber production. Heralding the machine age, however, was the growing importance of extractives — raw materials for a lengthening list of consumer goods and for producing and fueling locomotives, railroad cars, industrial machinery and equipment, farm implements, and electrical equipment for commerce and industry. The swift expansion and diversification of manufacturing allowed a growing independence from European imports and was reflected in the prominence of new goods among US exports. Already the value of American manufactures was more than half the value of European manufactures and twice that of Britain.

Onset and Causes of the Depression

The depression, which was signaled by a financial panic in 1893, has been blamed on the deflation dating back to the Civil War, the gold standard and monetary policy, underconsumption (the economy was producing goods and services at a higher rate than society was consuming, and the resulting inventory accumulation led firms to reduce employment and cut back production), a general economic unsoundness (a reference less to tangible economic difficulties and more to a feeling that the economy was not running properly), and government extravagance.

Economic indicators signaling an 1893 business recession in the United States were largely obscured. The economy had improved during the previous year. Business failures had declined, and the average liabilities of failed firms had fallen by 40 percent. The country’s position in international commerce was improved. During the late nineteenth century, the United States had a negative net balance of payments. Passenger and cargo fares paid to foreign ships that carried most American overseas commerce, insurance charges, tourists’ expenditures abroad, and returns to foreign investors ordinarily more than offset the effect of a positive merchandise balance. In 1892, however, improved agricultural exports had reduced the previous year’s net negative balance from $89 million to $20 million. Moreover, output of non-agricultural consumer goods had risen by more than 5 percent, and business firms were believed to have an ample backlog of unfilled orders as 1893 opened. The number of checks cleared between banks in the nation at large and outside New York, factory employment, wholesale prices, and railroad freight ton mileage advanced through the early months of the new year.

Yet several monthly series of indicators showed that business was falling off. Building construction had peaked in April 1892, later moving irregularly downward, probably in reaction to overbuilding. The decline continued until the turn of the century, when construction volume finally turned up again. Weakness in building was transmitted to the rest of the economy, dampening general activity through restricted investment opportunities and curtailed demand for construction materials. Meanwhile, a similar uneven downward drift in business activity after spring 1892 was evident from a composite index of cotton takings (cotton turned into yarn, cloth, etc.) and raw silk consumption, rubber imports, tin and tin plate imports, pig iron manufactures, bituminous and anthracite coal production, crude oil output, railroad freight ton mileage, and foreign trade volume. Pig iron production had crested in February, followed by stock prices and business incorporations six months later.

The economy exhibited other weaknesses as the March 1893 date for Grover Cleveland’s inauguration to the presidency drew near. One of the most serious was in agriculture. Storm, drought, and overproduction during the preceding half-dozen years had reversed the remarkable agricultural prosperity and expansion of the early 1880s in the wheat, corn, and cotton belts. Wheat prices tumbled twenty cents per bushel in 1892. Corn held steady, but at a low figure and on a fall of one-eighth in output. Twice as great a decline in production dealt a severe blow to the hopes of cotton growers: the season’s short crop canceled gains anticipated from a recovery of one cent in prices to 8.3 cents per pound, close to the average level of recent years. Midwestern and Southern farming regions seethed with discontent as growers watched staple prices fall by as much as two-thirds after 1870 and all farm prices by two-fifths; meanwhile, the general wholesale index fell by one-fourth. The situation was grave for many. Farmers’ terms of trade had worsened, and dollar debts willingly incurred in good times to permit agricultural expansion were becoming unbearable burdens. Debt payments and low prices restricted agrarian purchasing power and demand for goods and services. Significantly, both output and consumption of farm equipment began to fall as early as 1891, marking a decline in agricultural investment. Moreover, foreclosure of farm mortgages reduced the ability of mortgage companies, banks, and other lenders to convert their earning assets into cash because the willingness of investors to buy mortgage paper was reduced by the declining expectation that they would yield a positive return.

Slowing investment in railroads was an additional deflationary influence. Railroad expansion had long been a potent engine of economic growth, ranging from 15 to 20 percent of total national investment in the 1870s and 1880s. Construction was a rough index of railroad investment. The amount of new track laid yearly peaked at 12,984 miles in 1887, after which it fell off steeply. Capital outlays rose through 1891 to provide needed additions to plant and equipment, but the rate of growth could not be sustained. Unsatisfactory earnings and a low return for investors indicated the system was overbuilt and overcapitalized, and reports of mismanagement were common. In 1892, only 44 percent of rail shares outstanding returned dividends, although twice that proportion of bonds paid interest. In the meantime, the completion of trunk lines dried up local capital sources. Political antagonism toward railroads, spurred by the roads’ immense size and power and by real and imagined discrimination against small shippers, made the industry less attractive to investors. Declining growth reduced investment opportunity even as rail securities became less appealing. Capital outlays fell in 1892 despite easy credit during much of the year. The markets for ancillary industries, like iron and steel, felt the impact of falling railroad investment as well; at times in the 1880s rails had accounted for 90 percent of the country’s rolled steel output. In an industry whose expansion had long played a vital role in creating new markets for suppliers, lagging capital expenditures loomed large in the onset of depression.

European Influences

European depression was a further source of weakness as 1893 began. Recession struck France in 1889, and business slackened in Germany and England the following year. Contemporaries dated the English downturn from a financial panic in November. Monetary stringency was a basic cause of economic hard times. Because specie — gold and silver — was regarded as the only real money, and paper money was available in multiples of the specie supply, when people viewed the future with doubt they stockpiled specie and rejected paper. The availability of specie was limited, so the longer hard times prevailed the more difficult it was for anyone to secure hard money. In addition to monetary stringency, the collapse of extensive speculations in Australian, South African, and Argentine properties, and a sharp break in securities prices, marked the advent of severe contraction. The great banking house of Baring and Brothers, caught with excessive holdings of Argentine securities in a falling market, shocked the financial world by suspending business on November 20, 1890. Within a year of the crisis, commercial stagnation had settled over most of Europe. The contraction was severe and long-lived. In England many indices fell to 80 percent of capacity; wholesale prices overall declined nearly 6 percent in two years and had declined 15 percent by 1894. An index of the prices of principal industrial products declined by almost as much. In Germany, contraction lasted three times as long as the average for the period 1879-1902. Not until mid-1895 did Europe begin to revive. Full prosperity returned a year or more later.

Panic in the United Kingdom and falling trade in Europe brought serious repercussions in the United States. The immediate result was near panic in New York City, the nation’s financial center, as British investors sold their American stocks to obtain funds. Uneasiness spread through the country, fostered by falling stock prices, monetary stringency, and an increase in business failures. Liabilities of failed firms during the last quarter of 1890 were $90 million — twice those in the preceding quarter. Only the normal year’s end grain exports, destined largely for England, averted a gold outflow.

Circumstances moderated during the early months of 1891, although gold flowed to Europe, and business failures remained high. Credit eased, if slowly: in response to pleas for relief, the federal treasury began the premature redemption of government bonds to put additional money into circulation, and the end of the harvest trade reduced demand for credit. Commerce quickened in the spring. Perhaps anticipation of brisk trade during the harvest season stimulated the revival of investment and business; in any event, the harvest of 1891 buoyed the economy. A bumper American wheat crop coincided with poor yields in Europe, increasing exports and the inflow of specie: US exports in fiscal 1892 were $150 million greater than in the preceding year, a full 1 percent of gross national product. The improved market for American crops was primarily responsible for a brief cycle of prosperity in the United States that Europe did not share. Business thrived until signs of recession began to appear in late 1892 and early 1893.

The business revival of 1891-92 only delayed an inevitable reckoning. While domestic factors led in precipitating a major downturn in the United States, the European contraction operated as a powerful depressant. Commercial stagnation in Europe decisively affected the flow of foreign investment funds to the United States. Although foreign investment in this country and American investment abroad rose overall during the 1890s, changing business conditions reversed both flows for a time, as Americans sold off foreign holdings and foreigners sold off their holdings of American assets. Initially, contraction abroad forced European investors to sell substantial holdings of American securities; then the rate of new foreign investment fell off. The repatriation of American securities prompted gold exports, deflating the money stock and depressing prices. A reduced inflow of foreign capital slowed expansion and may have exacerbated the declining growth of the railroads; undoubtedly, it dampened aggregate demand.

As foreign investors sold their holdings of American stocks for hard money, specie left the United States. Funds secured through foreign investment in domestic enterprise were important in helping the country meet its usual balance of payments deficit. A reduced inflow of investment funds during the 1890s was one of the factors that, together with a continued negative balance of payments, forced the United States to export gold almost continuously from 1892 to 1896. The impact of depression abroad on the flow of capital to this country can be inferred from the history of new capital issues in Britain, the source of perhaps 75 percent of overseas investment in the United States. British issues varied as shown in Table 3.

Table 3
British New Capital Issues, 1890-1898 (millions of pounds, sterling)

1890 142.6
1891 104.6
1892 81.1
1893 49.1
1894 91.8
1895 104.7
1896 152.8
1897 157.3
1898 150.2

Source: Hoffmann, p. 193

Simultaneously, the share of new British investment sent abroad fell from one-fourth in 1891 to one-fifth two years later. Over that same period, British net capital flows abroad declined by about 60 percent; not until 1896 and 1897 did they resume earlier levels.

Thus, the recession that began in 1893 had deep roots. The slowdown in railroad expansion, decline in building construction, and foreign depression had reduced investment opportunities, and, following the brief upturn effected by the bumper wheat crop of 1891, agricultural prices fell, as did exports and commerce in general. By the end of 1893, business failures numbering 15,242, averaging $22,751 in liabilities, had been reported. Plagued by successive contractions of credit, many essentially sound firms that would have survived under ordinary circumstances failed. Liabilities totaled a staggering $357 million. This was the crisis of 1893.

Response to the Depression

The financial crises of 1893 accelerated the recession that was evident early in the year into a major contraction that spread throughout the economy. Investment, commerce, prices, employment, and wages remained depressed for several years. Changing circumstances and expectations, and a persistent federal deficit, subjected the treasury gold reserve to intense pressure and generated sharp counterflows of gold. The treasury was driven four times between 1894 and 1896 to resort to bond issues totaling $260 million to obtain specie to augment the reserve. Meanwhile, restricted investment, income, and profits spelled low consumption, widespread suffering, and occasionally explosive labor and political struggles. An extensive but incomplete revival occurred in 1895. The Democratic nomination of William Jennings Bryan for the presidency on a free silver platform the following year amid an upsurge of silverite support contributed to a second downturn peculiar to the United States. Europe, just beginning to emerge from depression, was unaffected. Only in mid-1897 did recovery begin in this country; full prosperity returned gradually over the ensuing year and more.

The economy that emerged from the depression differed profoundly from that of 1893. Consolidation and the influence of investment bankers were more advanced. The nation’s international trade position was more advantageous: huge merchandise exports assured a positive net balance of payments despite large tourist expenditures abroad, foreign investments in the United States, and a continued reliance on foreign shipping to carry most of America’s overseas commerce. Moreover, new industries were rapidly moving to ascendancy, and manufactures were coming to replace farm produce as the staple products and exports of the country. The era revealed the outlines of an emerging industrial-urban economic order that portended great changes for the United States.

Hard times intensified social sensitivity to a wide range of problems accompanying industrialization by making them more severe. Those whom the depression struck hardest, as well as much of the general public and the major Protestant churches, shored up their civic consciousness about currency and banking reform, regulation of business in the public interest, and labor relations. Although nineteenth-century liberalism and the tradition of administrative nihilism that it favored remained viable, public opinion began slowly to swing toward the governmental activism and interventionism associated with modern industrial societies, erecting in the process the intellectual foundation for the reform impulse that would be called Progressivism in twentieth-century America. Most important of all, these opposed tendencies in thought set the boundaries within which Americans debated the most vital questions of their shared experience for the next century. The depression stood as a reminder of the perils of business slumps and of the claims of commonweal above avarice and principle above principal.

Government responses to depression during the 1890s exhibited elements of complexity, confusion, and contradiction. Yet they also showed a pattern that confirmed the transitional character of the era and clarified the role of the business crisis in the emergence of modern America. Hard times, intimately related to developments issuing in an industrial economy characterized by increasingly vast business units and concentrations of financial and productive power, were a major influence on society, thought, politics, and thus, unavoidably, government. Awareness of, and proposals of means for adapting to, deep-rooted changes attending industrialization, urbanization, and other dimensions of the current transformation of the United States long antedated the economic contraction of the nineties.

Selected Bibliography

*I would like to thank Douglas Steeples, retired dean of the College of Liberal Arts and professor of history, emeritus, Mercer University. Much of this article has been taken from Democracy in Desperation: The Depression of 1893 by Douglas Steeples and David O. Whitten, which was declared an Exceptional Academic Title by Choice. Democracy in Desperation includes the most recent and extensive bibliography for the depression of 1893.

Clanton, Gene. Populism: The Humane Preference in America, 1890-1900. Boston: Twayne, 1991.

Friedman, Milton, and Anna Jacobson Schwartz. A Monetary History of the United States, 1867-1960. Princeton: Princeton University Press, 1963.

Goodwyn, Lawrence. Democratic Promise: The Populist Movement in America. New York: Oxford University Press, 1976.

Grant, H. Roger. Self Help in the 1890s Depression. Ames: Iowa State University Press, 1983.

Higgs, Robert. The Transformation of the American Economy, 1865-1914. New York: Wiley, 1971.

Himmelberg, Robert F. The Rise of Big Business and the Beginnings of Antitrust and Railroad Regulation, 1870-1900. New York: Garland, 1994.

Hoffmann, Charles. The Depression of the Nineties: An Economic History. Westport, CT: Greenwood Publishing, 1970.

Jones, Stanley L. The Presidential Election of 1896. Madison: University of Wisconsin Press, 1964.

Kindleberger, Charles Poor. Manias, Panics, and Crashes: A History of Financial Crises. Revised Edition. New York: Basic Books, 1989.

Kolko, Gabriel. Railroads and Regulation, 1877-1916. Princeton: Princeton University Press, 1965.

Lamoreaux, Naomi R. The Great Merger Movement in American Business, 1895-1904. New York: Cambridge University Press, 1985.

Rees, Albert. Real Wages in Manufacturing, 1890-1914. Princeton, NJ: Princeton University Press, 1961.

Ritter, Gretchen. Goldbugs and Greenbacks: The Antimonopoly Tradition and the Politics of Finance in America. New York: Cambridge University Press, 1997.

Romer, Christina. “Spurious Volatility in Historical Unemployment Data.” Journal of Political Economy 94, no. 1 (1986): 1-37.

Schwantes, Carlos A. Coxey’s Army: An American Odyssey. Lincoln: University of Nebraska Press, 1985.

Steeples, Douglas, and David Whitten. Democracy in Desperation: The Depression of 1893. Westport, CT: Greenwood Press, 1998.

Timberlake, Richard. “Panic of 1893.” In Business Cycles and Depressions: An Encyclopedia, edited by David Glasner. New York: Garland, 1997.

White, Gerald Taylor. Years of Transition: The United States and the Problems of Recovery after 1893. University, AL: University of Alabama Press, 1982.

Citation: Whitten, David. “Depression of 1893.” EH.Net Encyclopedia, edited by Robert Whaples. August 14, 2001. URL http://eh.net/encyclopedia/the-depression-of-1893/

The Company Town

Lawrence W. Boyd, University of Hawaii

The company town was an economic institution that was part of the market for labor. In a company town a single firm provided its employees with goods and services, hired police, collected garbage, dispensed justice, and answered (or failed to answer) complaints from residents. The economy of the company town was totally “privatized” — community services that today are provided by municipal governments were provided by the profit-maximizing firm, which ran the company town. Property rights were defined in such a way that companies could exclude competition by other firms that wished to provide goods and services to their employees.

Although company towns were most closely associated with the coal mining industry, they existed in a number of other industries as well. For example, Homestead, Pennsylvania was a company town situated next to the Homestead Steel Mill. Similarly, Pullman, Illinois was a company town for workers employed at the factory that produced Pullman railroad cars. Because of the large and persistent labor struggles associated with the coal industry, the focus of interest, both historical and scholarly, has been on coal company towns.

Prevalence of Company Towns in the Coal Industry

The prevalence of company towns, at least in the coal industry, was related to the settlement of regions where mines were developed. When mines opened in isolated regions they needed to provide housing and other necessities to their employees. Thus the proportion of miners living in company towns was lower in more settled regions than in less settled ones. In the early 1920s the United States Coal Commission found that in Southern Appalachia (West Virginia, Eastern Kentucky, Tennessee, Maryland, Virginia, and Alabama) and in the Rocky Mountains 65 to 80 percent of miners lived in company towns. In most of the Midwest only 10 to 20 percent of miners lived in company towns. In Ohio 25 percent lived in company towns, while in Pennsylvania 50 percent did.

Property Rights in Company Towns

Property rights in company towns were governed to a certain extent by the leases for the company houses that miners rented. These leases were also something like “tied” contracts in that miners rented their homes only so long as they were employed by the company, or at least had a good relationship with it. Leases generally allowed for quick termination, usually on five days’ notice, rather than the longer notice periods common elsewhere. Many leases prevented non-employees from living in or trespassing on company housing. In some leases companies reserved the right of entry into the property and the right to make and enforce regulations on the roads leading into the property.

These rights were commonly enforced during strikes, when strikers were evicted from their homes. Sometimes mine owners prevented peddlers or deliveries from independent stores from entering the towns. A practice that seemed to be more common early on was requiring workers to purchase supplies at the company store. This practice appears to have mostly disappeared by the 1920s.

Complaints about Company Towns

The relationship between employees and owners in company towns could be contentious, and this in turn generated a series of complaints that were surprisingly similar across time. The first mention of coal company towns can be found in Friedrich Engels’ The Condition of the Working Class in England in 1844, which described dangerous working conditions, cheating on the weighing of coal for piece-rate workers, high prices, and unsanitary conditions.

Prices in Coal Company Towns

At various times prices charged by mining companies, or profits derived from the sale of goods and services, were deemed excessive. In its 1910 Census of Mines, the United States Census found that many mines reported revenues from their mining operations lower than their expenses and that they stayed in business through sales of goods and services to their employees. Lawrence Lynch, in a 1913 article in the Political Science Quarterly, found prices in company stores excessive. He used the example of black powder, which companies sold to their employees at an average markup of 62 percent.

The United States Coal Commission surveyed stores in mining districts during 1922. It found that stores in the southern West Virginia mining districts of New River and Kanawha had prices that were approximately 9 and 5 percent higher, respectively, than in the nearby city of Charleston. In other mining districts the prices were equal to or lower than they were in the nearby city. The Commission also compared prices between independent and company stores and found that company-store prices were 4.2 percent higher in the southern West Virginia districts and 7 percent higher in Alabama.

Price Fishback’s Findings

Price Fishback pointed out that a competitive labor market would limit monopolistic pricing policies on the part of firms that operated company towns and produce a single real wage for all miners participating in this market. Thus, if miners were free to move among mines and companies competed for employees, mine owners would have been prevented from exploiting their employees.

Fishback suggested that the 4.2 percent price differential on food that existed between company and independent stores represented the maximum company stores could charge due to their favorable location. Furthermore, this difference was partly offset by higher wages and transportation costs.

Fishback also found that the use of scrip, or advances by the company used to finance living expenses, was limited. Companies seldom carried debt longer than two weeks. Furthermore, most miners had no more than 60 percent of their pay deducted as a result of the use of scrip. This meant that miners were not tied to the company through exorbitant debt. In addition, he found that rents on company housing were normal in comparison to rents in independent communities. He also found that companies that offered exceptional sanitation, such as flush toilets, paid lower wages (roughly 3.4 percent less). This suggests that wages adjusted to differences in amenities offered by different company towns.

Lawrence Boyd’s Findings

Lawrence Boyd used the data in the 1923 Coal Commission Report to develop a price index for food that was comparable across districts, something the Commission had not done in its original report. The Commission had decided not to compare living conditions across districts because:

It would blur the picture of living conditions beyond recognition if the information were assembled in one composite photograph. Notwithstanding their common occupational interest, the American mountaineer miner of West Virginia, the Slovakian on Pennsylvania’s plateaus and hills, the Alabama negro, the Italian miner in Illinois towns, differ greatly in habits and ideals. (United States Coal Commission, Report, 1453)

Boyd found that close to 90 percent of the items listed in the Commission’s Report could also be found in other districts. Thus a price index for these items could be constructed that allowed a comparison between those districts where most miners lived in company towns and those where company towns were less prevalent.

Boyd found that the price index was as much as 15 percent lower in districts, such as Illinois and Ohio, where the fewest miners lived in company towns. Furthermore, districts with fewer miners in company towns tended to have higher nominal incomes. When the price index was used to deflate nominal incomes, real incomes were approximately 70 percent higher in a district like Illinois. Boyd suggests this indicates that miners were mobile within mining districts but not between them.
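Boyd's deflation step is simple arithmetic: real income is nominal income divided by the district price index. The sketch below uses hypothetical round numbers, not figures from the Commission's Report, chosen only to reproduce the magnitudes cited above (prices about 15 percent higher in southern West Virginia, real incomes about 70 percent higher in Illinois):

```python
# HYPOTHETICAL illustration of deflating nominal incomes by a district
# food-price index; the figures are not from the Coal Commission's Report.
def real_income(nominal_income, price_index):
    """Deflate a nominal income by a district price index."""
    return nominal_income / price_index

wv_nominal, wv_index = 100.0, 1.15   # southern West Virginia: higher prices
il_nominal, il_index = 148.0, 1.00   # Illinois: higher pay, lower prices

ratio = real_income(il_nominal, il_index) / real_income(wv_nominal, wv_index)
print(f"Illinois real income is {100 * (ratio - 1):.0f}% higher")
```

With these illustrative inputs the real-income gap (about 70 percent) is much larger than the nominal gap, which is the heart of Boyd's finding.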

Boyd’s comparisons covered only food, which comprised approximately 45 percent of miners’ budgets in southern West Virginia and about 30 percent in districts around Illinois. The Coal Commission reported that miners in Southern Appalachia spent nearly twice as much on flour as they did on rent. Unfortunately, records of other prices, as well as wages, are not available, due in part to a fire that destroyed some of the archives of the U.S. Coal Commission.

The Decline of Company Towns

Company towns declined as an institution as automobiles and highways became more common. As the Coal Commission found in 1923, company towns were limited to areas newly settled and thus isolated from previous patterns of settlement. As areas became more settled and connected to transportation networks company towns declined. Many former company towns simply were abandoned when mines shut down. Others became incorporated or unincorporated communities. Former company towns such as Homestead are now suburbs of cities like Pittsburgh. Anniston, Alabama, on the other hand, started out as a company town and was transformed into a public town. Grace Gates documents this transition and describes the role of diversification of industry and commerce in this transformation. The necessity that workers live close to their place of employment ended with the development of modern transportation networks.

Conclusions

What can be said about the company town is that it does not appear to have risen to the level of exploitation many commentators have assumed. As Fishback concludes, coal miners were able to protect themselves from exploitation through both “voice and exit”: they could engage in collective action, such as strikes or joining a union, or take individual action by quitting and moving to a better location. Thus the functioning of a competitive labor market could blunt the worst aspects of the company town.

References

Boyd, Lawrence W. “The Coal Company Town.” Ph.D. Dissertation, West Virginia University, 1993.

Fishback, Price V. Soft Coal, Hard Choices: The Economic Welfare of Bituminous Coal Miners 1890-1930. New York: Oxford University Press, 1992.

Gates, Grace Hooten. The Model City of the New South: Anniston, Alabama, 1872-1900. Tuscaloosa, AL: University of Alabama Press, 1996.

United States Coal Commission, Report, 5 parts, Senate Document 195, 68th Congress, Second Session. Washington, DC: U. S. Government Printing Office, 1925.

Citation: Boyd, Lawrence. “The Company Town”. EH.Net Encyclopedia, edited by Robert Whaples. January 30, 2003. URL http://eh.net/encyclopedia/the-company-town/

The US Coal Industry in the Nineteenth Century

Sean Patrick Adams, University of Florida

Introduction

The coal industry was a major foundation for American industrialization in the nineteenth century. As a fuel source, coal provided a cheap and efficient source of power for steam engines, furnaces, and forges across the United States. As an economic pursuit, coal spurred technological innovations in mine technology, energy consumption, and transportation. When mine managers brought increasing sophistication to the organization of work in the mines, coal miners responded by organizing into industrial trade unions. The influence of coal was so pervasive in the United States that by the advent of the twentieth century, it became a necessity of everyday life. In an era when smokestacks equaled progress, the smoky air and sooty landscape of industrial America owed a great deal to the growth of the nation’s coal industry. By the close of the nineteenth century, many Americans across the nation read about the latest struggle between coal companies and miners by the light of a coal-gas lamp and in the warmth of a coal-fueled furnace, in a house stocked with goods brought to them by coal-fired locomotives. In many ways, this industry served as a major factor in American industrial growth throughout the nineteenth century.

The Antebellum American Coal Trade

Although coal had served as a major source of energy in Great Britain for centuries, British colonists had little use for North America’s massive reserves of coal prior to American independence. With abundant supplies of wood, water, and animal fuel, there was little need to use mineral fuel in seventeenth and eighteenth-century America. But as colonial cities along the eastern seaboard grew in population and in prestige, coal began to appear in American forges and furnaces. Most likely this coal was imported from Great Britain, but a small domestic trade developed in the bituminous fields outside of Richmond, Virginia and along the Monongahela River near Pittsburgh, Pennsylvania.

The Richmond Basin

Following independence from Britain, imported coal became less common in American cities and the domestic trade became more important. Economic nationalists such as Tench Coxe, Albert Gallatin, and Alexander Hamilton all suggested that the nation’s coal trade — at that time centered in the Richmond coal basin of eastern Virginia — would serve as a strategic resource for the nation’s growth and independence. Although it labored under these weighty expectations, the coal trade of eastern Virginia was hampered by its existence on the margins of the Old Dominion’s plantation economy. Colliers of the Richmond Basin used slave labor effectively in their mines, but scrambled to fill out their labor force, especially during peak periods of agricultural activity. Transportation networks in the region also restricted the growth of coal mining. Turnpikes proved too expensive for the coal trade and the James River and Kanawha Canal failed to make necessary improvements in order to accommodate coal barge traffic and streamline the loading, conveyance, and distribution of coal at Richmond’s tidewater port. Although the Richmond Basin was the nation’s first major coalfield, miners there found growth potential to be limited.

The Rise of Anthracite Coal

At the same time that the Richmond Basin’s coal trade declined in importance, a new type of mineral fuel entered urban markets of the American seaboard. Anthracite coal has higher carbon content and is much harder than bituminous coal, thus earning the nickname “stone coal” in its early years of use. In 1803, Philadelphians watched a load of anthracite coal actually squelch a fire during a trial run, and city officials used the load of “stone coal” as attractive gravel for sidewalks. Following the War of 1812, however, a series of events paved the way for anthracite coal’s acceptance in urban markets. Colliers like Jacob Cist saw the shortage of British and Virginia coal in urban communities as an opportunity to promote the use of “stone coal.” Philadelphia’s American Philosophical Society and Franklin Institute enlisted the aid of the area’s scientific community to disseminate information to consumers on the particular needs of anthracite. The opening of several links between Pennsylvania’s anthracite fields via the Lehigh Coal and Navigation Company (1820), the Schuylkill Navigation Company (1825), and the Delaware and Hudson (1829) ensured that the flow of anthracite from mine to market would be cheap and fast. “Stone coal” became less a geological curiosity by the 1830s and instead emerged as a valuable domestic fuel for heating and cooking, as well as a powerful source of energy for urban blacksmiths, bakers, brewers, and manufacturers. As demonstrated in Figure 1, Pennsylvania anthracite dominated urban markets by the late 1830s. By 1840, annual production had topped one million tons, or about ten times the annual production of the Richmond bituminous field.

Figure One: Percentage of Seaboard Coal Consumption by Origin, 1822-1842

Sources:
Hunt’s Merchant’s Magazine and Commercial Review 8 (June 1843): 548;
Alfred Chandler, “Anthracite Coal and the Beginnings of the Industrial Revolution,” p. 154.

The Spread of Coal Mining

The antebellum period also saw the expansion of coal mining into many more states than Pennsylvania and Virginia, as North America contains a variety of workable coalfields. Ohio’s bituminous fields employed 7,000 men and raised about 320,000 tons of coal in 1850 — only three years later the state’s miners had increased production to over 1,300,000 tons. In Maryland, the George’s Creek bituminous region began to ship coal to urban markets by the Baltimore and Ohio Railroad (1842) and the Chesapeake and Ohio Canal (1850). The growth of St. Louis provided a major boost to the coal industries of Illinois and Missouri, and by 1850 colliers in the two states raised about 350,000 tons of coal annually. By the advent of the Civil War, coal industries appeared in at least twenty states.

Organization of Antebellum Mines

Throughout the antebellum period, coal mining firms tended to be small and labor intensive. The seams that were first worked in the anthracite fields of eastern Pennsylvania or the bituminous fields in Virginia, western Pennsylvania, and Ohio tended to lie close to the surface. A skilled miner and a handful of laborers could easily raise several tons of coal a day through the use of a “drift” or “slope” mine that intersected a vein of coal along a hillside. In the bituminous fields outside of Pittsburgh, for example, coal seams were exposed along the banks of the Monongahela and colliers could simply extract the coal with a pickax or shovel and roll it down the riverbank via a handcart into a waiting barge. Once the coal left the mouth of the mine, however, the size of the business handling it varied. Proprietary colliers usually worked on land that was leased for five to fifteen years — often from a large landowner or corporation. The coal was often shipped to market via a large railroad or canal corporation such as the Baltimore and Ohio Railroad, or the Delaware and Hudson Canal. Competition between mining firms and increases in production kept prices and profit margins relatively low, and many colliers slipped in and out of bankruptcy. These small mining firms were typical of the “easy entry, easy exit” nature of American business competition in the antebellum period.

Labor Relations

Since most antebellum coal mining operations were limited to a few skilled miners aided by lesser-skilled laborers, labor relations in American coal mining regions saw little extended conflict. Early coal miners also worked close to the surface, often in horizontal drift mines, so the work was less dangerous than in the later era of deep shaft mining. Most mining operations were far-flung enterprises away from urban centers, which frustrated attempts to organize miners into a “critical mass” of collective power — even in the nation’s most developed anthracite fields. These factors, coupled with mine operators’ belief that individual enterprise in the anthracite regions ensured a harmonious system of independent producers, inhibited the development of strong labor organizations in Pennsylvania’s antebellum mining industry. In less developed regions, proprietors often worked in the mines themselves, so the lines between ownership, management, and labor were often blurred.

Early Unions

Most disputes, when they did occur, were temporary affairs that focused upon the low wages spurred by the intense competition among colliers. The first such action in the anthracite industry occurred in July of 1842 when workers from Minersville in Schuylkill County marched on Pottsville to protest low wages. This short-lived strike was broken up by the Orwigsburgh Blues, a local militia company. In 1848 John Bates enrolled 5,000 miners in a union, which struck for higher pay in the summer of 1849. But members of the “Bates Union” found themselves locked out of work and the movement quickly dissipated. In 1853, the Delaware and Hudson Canal Company’s miners struck for a 2½ cent per ton increase in their piece rate. This strike was successful, but failed to produce any lasting union presence in the D&H’s operations. Reports of disturbances in the bituminous fields of western Pennsylvania and Ohio follow the same pattern, as antebellum strikes tended to be localized and short-lived. Production levels thus remained high, and consumers of mineral fuel could count upon a steady supply reaching market.

Use of Anthracite in the Iron Industry

The most important technological development in the antebellum American coal industry was the successful adoption of anthracite coal to iron making techniques. Since the 1780s, bituminous coal or coke — which is bituminous coal with the impurities burned away — had been the preferred fuel for British iron makers. Once anthracite had successfully entered American hearths, there seemed to be no reason why stone coal could not be used to make iron. As with its domestic use, however, the industrial potential of anthracite coal faced major technological barriers. In British and American iron furnaces of the early nineteenth century, the high heat needed to smelt iron ore required a blast of excess air to aid the combustion of the fuel, whether it was coal, wood, or charcoal. While British iron makers in the 1820s attempted to increase the efficiency of the process by using superheated air, known commonly as a “hot blast,” American iron makers still used a “cold blast” to stoke their furnaces. The density of anthracite coal resisted attempts to ignite it through the cold blast, and it therefore appeared to be an inappropriate fuel for most American iron furnaces.

Anthracite iron first appeared in Pennsylvania in 1840, when David Thomas brought Welsh hot blast technology into practice at the Lehigh Crane Iron Company. The firm had been chartered in 1839 under the general incorporation act. The Allentown firm’s innovation created a stir in iron making circles, and iron furnaces for smelting ore with anthracite began to appear across eastern and central Pennsylvania. In 1841, only a year after the Lehigh Crane Iron Company’s success, Walter Johnson found no less than eleven anthracite iron furnaces in operation. That same year, an American correspondent of London bankers cited savings on iron making of up to twenty-five percent after the conversion to anthracite and noted that “wherever the coal can be procured the proprietors are changing to the new plan; and it is generally believed that the quality of the iron is much improved where the entire process is affected with anthracite coal.” Pennsylvania’s investment in anthracite iron paid dividends for the industrial economy of the state and proved that coal could be adapted to a number of industrial pursuits. By 1854, forty-six percent of all American pig iron had been smelted with anthracite coal as a fuel, and by 1860 anthracite’s share of pig iron was more than fifty-six percent.

Rising Levels of Coal Output and Falling Prices

The antebellum decades saw the coal industry emerge as a critical component of America’s industrial revolution. Anthracite coal became a fixture in seaboard cities up and down the east coast of North America — as cities grew, so did the demand for coal. To the west, Pittsburgh and Ohio colliers shipped their coal as far as Louisville, Cincinnati, or New Orleans. As wood, animal, and waterpower became scarcer, mineral fuel usually took their place in domestic consumption and small-scale manufacturing. The structure of the industry, many small-scale firms working on short-term leases, meant that production levels remained high throughout the antebellum period, even in the face of falling prices. In 1840, American miners raised 2.5 million tons of coal to serve these growing markets and by 1850 increased annual production to 8.4 million tons. Although prices tended to fluctuate with the season, in the long run, they fell throughout the antebellum period. For example, in 1830 anthracite coal sold for about $11 per ton. Ten years later, the price had dropped to $7 per ton and by 1860 anthracite sold for about $5.50 a ton in New York City. Annual production in 1860 also passed twenty million tons for the first time in history. Increasing production, intense competition, low prices, and quiet labor relations all were characteristics of the antebellum coal trade in the United States, but developments during and after the Civil War would dramatically alter the structure and character of this critical industrial pursuit.

Coal and the Civil War

The most dramatic expansion of the American coal industry occurred in the late antebellum decades but the outbreak of the Civil War led to some major changes. The fuel needs of the federal army and navy, along with their military suppliers, promised a significant increase in the demand for coal. Mine operators planned for rising, or at least stable, coal prices for the duration of the war. Their expectations proved accurate. Even when prices are adjusted for wartime inflation, they increased substantially over the course of the conflict. Over the years 1860 to 1863, the real (i.e., inflation-adjusted) price of a ton of anthracite rose by over thirty percent, and in 1864 the real price had increased to forty-five percent above its 1860 level. In response, the production of coal increased to over twelve million tons of anthracite and over twenty-four million tons nationwide by 1865.
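The inflation adjustment behind these figures divides the nominal price by a price deflator. In the sketch below, the $5.50 base price is the 1860 New York anthracite figure cited earlier, but the 1863 nominal price and the wartime deflator are hypothetical round numbers used only to illustrate the calculation:

```python
# Illustrative real-price calculation. The 1860 base price ($5.50/ton in
# New York) is from the text above; the 1863 nominal price and deflator
# are HYPOTHETICAL values chosen only to show the arithmetic.
def to_real(nominal, deflator, base_deflator=100.0):
    """Convert a nominal price to base-year dollars."""
    return nominal * base_deflator / deflator

p_1860 = 5.50                    # $/ton in 1860, deflator normalized to 100
p_1863, d_1863 = 10.50, 145.0    # hypothetical nominal price and deflator

real_1863 = to_real(p_1863, d_1863)
rise = (real_1863 / p_1860 - 1) * 100
print(f"Real price rose about {rise:.0f}% from 1860 to 1863")
```

With these inputs the nominal price nearly doubles while the real price rises only about 32 percent, showing how wartime inflation can mask the underlying change.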

The demand for mineral fuel in the Confederacy led to changes in southern coalfields as well. In 1862, the Confederate Congress organized the Niter and Mining Bureau within the War Department to supervise the collection of niter (also known as saltpeter) for the manufacture of gunpowder and the mining of copper, lead, iron, coal, and zinc. In addition to aiding the Richmond Basin’s production, the Niter and Mining Bureau opened new coalfields in North Carolina and Alabama and coordinated the flow of mineral fuel to Confederate naval stations along the coast. Although the Confederacy was not awash in coal during the conflict, the work of the Niter and Mining Bureau established the groundwork for the expansion of mining in the postbellum South.

In addition to increases in production, the Civil War years accelerated some qualitative changes in the structure of the industry. In the late 1850s, new railroads stretched to new bituminous coalfields in states like Maryland, Ohio, and Illinois. In the established anthracite coal regions of Pennsylvania, railroad companies profited immensely from the increased traffic spurred by the war effort. For example, the Philadelphia & Reading Railroad’s margin of profit increased from $0.88 per ton of coal in 1861 to $1.72 per ton in 1865. Railroad companies emerged from the Civil War as the most important actors in the nation’s coal trade.

The American Coal Trade after the Civil War

Railroads and the Expansion of the Coal Trade

In the years immediately following the Civil War, the expansion of the coal trade accelerated as railroads assumed the burden of carrying coal to market and opening up previously inaccessible fields. They did this by purchasing coal tracts directly and leasing them to subsidiary firms or by opening their own mines. In 1878, the Baltimore and Ohio Railroad shipped three million tons of bituminous coal from mines in Maryland and from the northern coalfields of the new state of West Virginia. When the Chesapeake and Ohio Railroad linked Huntington, West Virginia with Richmond, Virginia in 1873, the rich bituminous coal fields of southern West Virginia were open for development. The Norfolk and Western developed the coalfields of southwestern Virginia by completing their railroad from tidewater to remote Tazewell County in 1883. A network of smaller lines linking individual collieries to these large trunk lines facilitated the rapid development of Appalachian coal.

Railroads also helped open up the massive coal reserves west of the Mississippi. Small coal mines in Missouri and Illinois existed in the antebellum years, but were limited to the steamboat trade down the Mississippi River. As the nation’s web of railroad construction expanded across the Great Plains, coalfields in Colorado, New Mexico, and Wyoming witnessed significant development. Coal had truly become a national endeavor in the United States.

Technological Innovations

As the coal industry expanded, it also incorporated new mining methods. Early slope or drift mines intersected coal seams relatively close to the surface and needed only small capital investments to prepare. Most miners still used picks and shovels to extract the coal, but some miners used black powder to blast holes in the coal seams, then loaded the broken coal onto wagons by hand. But as miners sought to remove more coal, shafts were dug deeper below the water line. As a result, coal mining needed larger amounts of capital as new systems of pumping, ventilation, and extraction required the implementation of steam power in mines. By the 1890s, electric cutting machines replaced the blasting method of loosening the coal in some mines, and by 1900 a quarter of American coal was mined using these methods. As the century progressed, miners raised more and more coal by using new technology. Along with this productivity came the erosion of many traditional skills cherished by experienced miners.

The Coke Industry

Consumption patterns also changed. The late nineteenth century saw the emergence of coke — a form of processed bituminous coal in which impurities are “baked” out under high temperatures — as a powerful fuel in the iron and steel industry. The discovery of excellent coking coal in the Connellsville region of southwestern Pennsylvania spurred the aggressive growth of coke furnaces there. By 1880, the Connellsville region contained more than 4,200 coke ovens and the national production of coke in the United States stood at three million tons. Two decades later, the United States consumed over twenty million tons of coke fuel.

Competition and Profits

The successful incorporation of new mining methods and the emergence of coke as a major fuel source served as both a blessing and a curse to mining firms. With the new technology they raised more coal, but as more coalfields opened up and national production neared eighty million tons by 1880, coal prices remained relatively low. Cheap coal undoubtedly helped America’s rapidly industrializing economy, but it also created an industry structure characterized by boom and bust periods, low profit margins, and cutthroat competition among firms. But however it was raised, the United States became more and more dependent upon coal as the nineteenth century progressed, as demonstrated by Figure 2.

Figure 2: Coal as a Percentage of American Energy Consumption, 1850-1900

Source: Sam H. Schurr and Bruce C. Netschert, Energy in the American Economy, 1850-1975 (Baltimore: Johns Hopkins Press, 1960), 36-37.

The Rise of Labor Unions

As coal mines became more capital intensive over the course of the nineteenth century, the role of miners changed dramatically. Proprietary mines usually employed skilled miners as subcontractors in the years prior to the Civil War; by doing so they abdicated a great deal of control over the pace of mining. Corporate reorganization and the introduction of expensive machinery eroded the traditional authority of the skilled miner. By the 1870s, many mining firms employed managers to supervise the pace of work, but kept the old system of paying mine laborers per ton rather than an hourly wage. Falling piece rates quickly became a source of discontent in coal mining regions.

Miners responded to falling wages and the restructuring of mine labor by organizing into craft unions. The Workingmen’s Benevolent Association, founded in Pennsylvania in 1868, united English, Irish, Scottish, and Welsh anthracite miners. The WBA won some concessions from coal companies until Franklin Gowen, acting president of the Philadelphia and Reading Railroad, led a concerted effort to break the union in the winter of 1874-75. When sporadic violence plagued the anthracite fields, Gowen led the charge against the “Molly Maguires,” a clandestine organization supposedly led by Irish miners. After the breaking of the WBA, most coal mining unions served to organize skilled workers in specific regions. In 1890, a national mining union appeared when delegates from across the United States formed the United Mine Workers of America. The UMWA struggled to gain widespread acceptance until 1897, when widespread strikes pushed many workers into union membership. By 1903, the UMWA listed about a quarter of a million members, raised a treasury worth over one million dollars, and played a major role in industrial relations of the nation’s coal industry.

Coal at the Turn of the Century

By 1900, the American coal industry was truly a national endeavor that raised fifty-seven million tons of anthracite and 212 million tons of bituminous coal. (See Tables 1 and 2 for additional trends.) Some coal firms grew to immense proportions by nineteenth-century standards. The U.S. Coal and Oil Company, for example, was capitalized at six million dollars and owned the rights to 30,000 acres of coal-bearing land. But small mining concerns with one or two employees also persisted through the turn of the century. New developments in mine technology continued to revolutionize the trade as more and more coal fields across the United States became integrated into the national system of railroads. Industrial relations also assumed nationwide dimensions. John Mitchell, the leader of the UMWA, and L.M. Bowers of the Colorado Fuel and Iron Company, symbolized a new coal industry in which hard-line positions developed in both labor and capital’s respective camps. Since the bituminous coal industry alone employed over 300,000 workers by 1900, many Americans kept a close eye on labor relations in this critical trade. Although “King Coal” stood unchallenged as the nation’s leading supplier of domestic and industrial fuel, tension between managers and workers threatened the stability of the coal industry in the twentieth century.

Table 1: Coal Production in the United States, 1829-1899

Year   Anthracite   Bituminous   Percent Increase   Tons per Capita
       (thousands of tons)       over Decade
1829          138          102                             0.02
1839        1,008          552          550                0.09
1849        3,995        2,453          313                0.28
1859        9,620        6,013          142                0.50
1869       17,083       15,821          110                0.85
1879       30,208       37,898          107                1.36
1889       45,547       95,683          107                2.24
1899       60,418      193,323           80                3.34

Source: Fourteenth Census of the United States, Vol. XI, Mines and Quarries, 1922, Tables 8 and 9, pp. 258 and 260.
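The “Percent Increase over Decade” column refers to combined anthracite and bituminous output, and it can be verified directly from the production columns:

```python
# Check Table 1's decade-over-decade growth rates from combined
# anthracite + bituminous production (thousands of tons).
totals = {
    1829: 138 + 102,
    1839: 1008 + 552,
    1849: 3995 + 2453,
    1859: 9620 + 6013,
    1869: 17083 + 15821,
    1879: 30208 + 37898,
    1889: 45547 + 95683,
    1899: 60418 + 193323,
}

years = sorted(totals)
for prev, curr in zip(years, years[1:]):
    growth = 100 * (totals[curr] - totals[prev]) / totals[prev]
    print(f"{curr}: {growth:.0f}% increase over {prev}")
```

Each computed rate (550, 313, 142, 110, 107, 107, and 80 percent) matches the table, confirming that the column tracks total rather than anthracite-only production.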

Table 2: Leading Coal Producing States, 1889

State Coal Production (thousands of tons)
Pennsylvania 81,719
Illinois 12,104
Ohio 9,977
West Virginia 6,232
Iowa 4,095
Alabama 3,573
Indiana 2,845
Colorado 2,544
Kentucky 2,400
Kansas 2,221
Tennessee 1,926

Source: Thirteenth Census of the United States, Vol. XI, Mines and Quarries, 1913, Table 4, p. 187

Suggestions for Further Reading

Adams, Sean Patrick. “Different Charters, Different Paths: Corporations and Coal in Antebellum Pennsylvania and Virginia,” Business and Economic History 27 (Fall 1998): 78-90.

Adams, Sean Patrick. Old Dominion, Industrial Commonwealth: Coal, Politics, and Economy in Antebellum America. Baltimore: Johns Hopkins University Press, 2004.

Binder, Frederick Moore. Coal Age Empire: Pennsylvania Coal and Its Utilization to 1860. Harrisburg: Pennsylvania Historical and Museum Commission, 1974.

Blatz, Perry. Democratic Miners: Work and Labor Relations in the Anthracite Coal Industry, 1875-1925. Albany: SUNY Press, 1994.

Broehl, Wayne G. The Molly Maguires. Cambridge, MA: Harvard University Press, 1964.

Bruce, Kathleen. Virginia Iron Manufacture in the Slave Era. New York: The Century Company, 1931.

Chandler, Alfred. “Anthracite Coal and the Beginnings of the ‘Industrial Revolution’ in the United States,” Business History Review 46 (1972): 141-181.

DiCiccio, Carmen. Coal and Coke in Pennsylvania. Harrisburg: Pennsylvania Historical and Museum Commission, 1996.

Eavenson, Howard. The First Century and a Quarter of the American Coal Industry. Pittsburgh: Privately Printed, 1942.

Eller, Ronald. Miners, Millhands, and Mountaineers: Industrialization of the Appalachian South, 1880-1930. Knoxville: University of Tennessee Press, 1982.

Harvey, Katherine. The Best Dressed Miners: Life and Labor in the Maryland Coal Region, 1835-1910. Ithaca, NY: Cornell University Press, 1993.

Hoffman, John. “Anthracite in the Lehigh Valley of Pennsylvania, 1820-1845,” United States National Museum Bulletin 252 (1968): 91-141.

Laing, James T. “The Early Development of the Coal Industry in the Western Counties of Virginia,” West Virginia History 27 (January 1966): 144-155.

Laslett, John H.M. editor. The United Mine Workers: A Model of Industrial Solidarity? University Park: Penn State University Press, 1996.

Letwin, Daniel. The Challenge of Interracial Unionism: Alabama Coal Miners, 1878-1921 Chapel Hill: University of North Carolina Press, 1998.

Lewis, Ronald. Coal, Iron, and Slaves. Industrial Slavery in Maryland and Virginia, 1715-1865. Westport, Connecticut: Greenwood Press, 1979.

Long, Priscilla. Where the Sun Never Shines: A History of America’s Bloody Coal Industry. New York: Paragon, 1989.

Nye, David E. Consuming Power: A Social History of American Energies. Cambridge: Massachusetts Institute of Technology Press, 1998.

Palladino, Grace. Another Civil War: Labor, Capital, and the State in the Anthracite Regions of Pennsylvania, 1840-1868. Urbana: University of Illinois Press, 1990.

Powell, H. Benjamin. Philadelphia’s First Fuel Crisis. Jacob Cist and the Developing Market for Pennsylvania Anthracite. University Park: The Pennsylvania State University Press, 1978.

Schurr, Sam H. and Bruce C. Netschert. Energy in the American Economy, 1850-1975: An Economic Study of Its History and Prospects. Baltimore: Johns Hopkins Press, 1960.

Stapleton, Darwin. The Transfer of Early Industrial Technologies to America. Philadelphia: American Philosophical Society, 1987.

Stealey, John E. The Antebellum Kanawha Salt Business and Western Markets. Lexington: The University Press of Kentucky, 1993.

Wallace, Anthony F.C. St. Clair. A Nineteenth-Century Coal Town’s Experience with a Disaster-Prone Industry. New York: Alfred A. Knopf, 1981.

Warren, Kenneth. Triumphant Capitalism: Henry Clay Frick and the Industrial Transformation of America. Pittsburgh: University of Pittsburgh Press, 1996.

Woodworth, J. B.. “The History and Conditions of Mining in the Richmond Coal-Basin, Virginia.” Transactions of the American Institute of Mining Engineers 31 (1902): 477-484.

Yearley, Clifton K.. Enterprise and Anthracite: Economics and Democracy in Schuylkill County, 1820-1875. Baltimore: The

Citation: Adams, Sean. “US Coal Industry in the Nineteenth Century”. EH.Net Encyclopedia, edited by Robert Whaples. January 23, 2003. URL http://eh.net/encyclopedia/the-us-coal-industry-in-the-nineteenth-century/

The Economics of the Civil War

Roger L. Ransom, University of California, Riverside

The Civil War has been something of an enigma for scholars studying American history. During the first half of the twentieth century, historians viewed the war as a major turning point in American economic history. Charles Beard labeled it a “Second American Revolution,” claiming that “at bottom the so-called Civil War … was a social war, ending in the unquestioned establishment of a new power in the government, making vast changes … in the course of industrial development, and in the constitution inherited from the Fathers” (Beard and Beard 1927: 53). By the time of the Second World War, Louis Hacker could sum up Beard’s position by simply stating that the war’s “striking achievement was the triumph of industrial capitalism” (Hacker 1940: 373). The “Beard-Hacker Thesis” had become the most widely accepted interpretation of the economic impact of the Civil War. Harold Faulkner devoted two chapters to a discussion of the causes and consequences of the war in his 1943 textbook American Economic History (then in its fifth edition), claiming that “its effects upon our industrial, financial, and commercial history were profound” (1943: 340).

In the years after World War II, a new group of economic historians — many of them trained in economics departments — focused their energies on the explanation of economic growth and development in the United States. As they looked for the keys to American growth in the nineteenth century, these economic historians questioned whether the Civil War — with its enormous destruction and disruption of society — could have been a stimulus to industrialization. In his 1955 textbook on American economic history, Ross Robertson mirrored a new view of the Civil War and economic growth when he argued that “persistent, fundamental forces were at work to forge the economic system and not even the catastrophe of internecine strife could greatly affect the outcome” (1955: 249). “Except for those with a particular interest in the economics of war,” claimed Robertson, “the four year period of conflict [1861-65] has had little attraction for economic historians” (1955: 247). Over the next two decades, this became the dominant view of the Civil War’s role in the industrialization of the United States.

Historical research has a way of returning to the same problems over and over. The efforts to explain regional patterns of economic growth and the timing of the United States’ “take-off” into industrialization, together with extensive research into the “economics” of the slave system of the South and the impact of emancipation, brought economic historians back to questions dealing with the Civil War. By the 1990s a new generation of economic history textbooks once again examined the “economics” of the Civil War (Atack and Passell 1994; Hughes and Cain 1998; Walton and Rockoff 1998). This reconsideration of the Civil War by economic historians can be loosely grouped into four broad issues: the “economic” causes of the war; the “costs” of the war; the problem of financing the War; and a re-examination of the Hacker-Beard thesis that the War was a turning point in American economic history.

Economic Causes of the War

No one seriously doubts that the enormous economic stake the South had in its slave labor force was a major factor in the sectional disputes that erupted in the middle of the nineteenth century. Figure 1 plots the total value of all slaves in the United States from 1805 to 1860. In 1805 there were just over one million slaves worth about $300 million; fifty-five years later there were four million slaves worth close to $3 billion. In the 11 states that eventually formed the Confederacy, four out of ten people were slaves in 1860, and these people accounted for more than half the agricultural labor in those states. In the cotton regions the importance of slave labor was even greater. The value of capital invested in slaves roughly equaled the total value of all farmland and farm buildings in the South. Though the value of slaves fluctuated from year to year, there was no prolonged period during which the value of the slaves owned in the United States did not increase markedly. Looking at Figure 1, it is hardly surprising that Southern slaveowners in 1860 were optimistic about the economic future of their region. They were, after all, in the midst of an unparalleled rise in the value of their slave assets.

A major finding of the research into the economic dynamics of the slave system was to demonstrate that the rise in the value of slaves was not based upon unfounded speculation. Slave labor was the foundation of a prosperous economic system in the South. To illustrate just how important slaves were to that prosperity, Gerald Gunderson (1974) estimated what fraction of the income of a white person living in the South of 1860 was derived from the earnings of slaves. Table 1 presents Gunderson’s estimates. In the seven states where most of the cotton was grown, almost one-half the population were slaves, and they accounted for 31 percent of white people’s income; for all 11 Confederate States, slaves represented 38 percent of the population and contributed 26 percent of whites’ income. Small wonder that Southerners — even those who did not own slaves — viewed any attempt by the federal government to limit the rights of slaveowners over their property as a potentially catastrophic threat to their entire economic system. By itself, the South’s economic investment in slavery could easily explain the willingness of Southerners to risk war when faced with what they viewed as a serious threat to their “peculiar institution” after the electoral victories of the Republican Party and President Abraham Lincoln in the fall of 1860.

Table 1

The Fraction of Whites’ Incomes from Slavery

State Percent of the Population That Were Slaves Per Capita Earnings of Free Whites (in dollars) Slave Earnings per Free White (in dollars) Fraction of Earnings Due to Slavery
Alabama 45 120 50 41.7
South Carolina 57 159 57 35.8
Florida 44 143 48 33.6
Georgia 44 136 40 29.4
Mississippi 55 253 74 29.2
Louisiana 47 229 54 23.6
Texas 30 134 26 19.4
Seven Cotton States 46 163 50 30.6
North Carolina 33 108 21 19.4
Tennessee 25 93 17 18.3
Arkansas 26 121 21 17.4
Virginia 32 121 21 17.4
All 11 States 38 135 35 25.9
Source: Computed from data in Gerald Gunderson (1974: 922, Table 1)
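The last column of Table 1 is simply the ratio of the two earnings columns: slave earnings per free white divided by per capita white earnings. A quick sketch (figures transcribed from the table above; not Gunderson's actual estimation procedure) confirms the arithmetic:

```python
# Verify that "Fraction of Earnings Due to Slavery" equals
# slave earnings per free white / per capita white earnings.
# Figures are transcribed from Table 1 above.
rows = {
    "Alabama":        (120, 50),   # (white earnings, slave earnings), dollars
    "South Carolina": (159, 57),
    "Mississippi":    (253, 74),
    "All 11 States":  (135, 35),
}
for state, (white_earnings, slave_earnings) in rows.items():
    fraction = 100 * slave_earnings / white_earnings
    print(f"{state}: {fraction:.1f}%")   # matches 41.7, 35.8, 29.2, 25.9
```

The same ratio reproduces every row of the table to within rounding.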

The Northern states also had a huge economic stake in slavery and the cotton trade. The first half of the nineteenth century witnessed an enormous increase in the production of short-staple cotton in the South, and most of that cotton was exported to Great Britain and Europe. Figure 2 charts the growth of cotton exports from 1815 to 1860. By the mid 1830s, cotton shipments accounted for more than half the value of all exports from the United States. Note that there is a marked similarity between the trends in the export of cotton and the rising value of the slave population depicted in Figure 1. There could be little doubt that the prosperity of the slave economy rested on its ability to produce cotton more efficiently than any other region of the world.

The income generated by this “export sector” was a major impetus for growth not only in the South, but in the rest of the economy as well. Douglass North, in his pioneering study of the antebellum U.S. economy, examined the flows of trade within the United States to demonstrate how all regions benefited from the South’s concentration on cotton production (North 1961). Northern merchants gained from Southern demands for shipping cotton to markets abroad, and from the demand by Southerners for Northern and imported consumption goods. The low price of raw cotton produced by slave labor in the American South enabled textile manufacturers — both in the United States and in Britain — to expand production and provide benefits to consumers through a declining cost of textile products. As manufacturing of all kinds expanded at home and abroad, the need for food in cities created markets for foodstuffs that could be produced in the areas north of the Ohio River. And the primary force at work was the economic stimulus from the export of Southern cotton. When James Hammond exclaimed in 1858 that “Cotton is King!” no one rose to dispute the point.

With so much to lose on both sides of the Mason-Dixon Line, economic logic suggests that a peaceful solution to the slave issue would have made far more sense than a bloody war. Yet no solution emerged. One “economic” solution to the slave problem would have been for those who objected to slavery to “buy out” the economic interest of Southern slaveholders. Under such a scheme, the federal government would purchase slaves. A major problem here was that the costs of such a scheme would have been enormous. Claudia Goldin estimates that the cost of having the government buy all the slaves in the United States in 1860 would have been about $2.7 billion (1973: 85, Table 1). Obviously, such a large sum could not be paid all at once. Yet even if the payments were spread over 25 years, the annual costs of such a scheme would have involved a tripling of federal government outlays (Ransom and Sutch 1990: 39-42)! The costs could have been reduced substantially if, instead of freeing all the slaves at once, children were left in bondage until the age of 18 or 21 (Goldin 1973: 85). Yet there would remain the problem of how even those reduced costs could be distributed among various groups in the population. The cost of any “compensated” emancipation scheme was so high that even those who wished to eliminate slavery were unwilling to pay for a “buyout” of those who owned slaves.
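A back-of-the-envelope sketch shows why spreading the payments over 25 years would still have roughly tripled federal outlays. The $63 million baseline for 1860 federal spending is an assumption added here for illustration (it is not a figure from the text), and interest on the unpaid balance is ignored:

```python
# Rough check of the compensated-emancipation arithmetic.
total_cost = 2.7e9            # Goldin's estimate of buying all slaves in 1860
years = 25                    # payment period discussed by Ransom and Sutch
annual_payment = total_cost / years          # ignores interest, for simplicity

federal_outlays_1860 = 63e6   # ASSUMED baseline for 1860 federal spending
multiple = (federal_outlays_1860 + annual_payment) / federal_outlays_1860

print(f"Annual payment: ${annual_payment / 1e6:.0f} million")       # $108 million
print(f"Outlays rise to {multiple:.1f}x their 1860 level")          # 2.7x
```

With any positive interest rate on the deferred payments, the required annual outlay rises further, which is consistent with the "tripling" the text describes.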

The high cost of emancipation was not the only way in which economic forces produced strong regional tensions in the United States before 1860. The regional economic specialization, previously noted as an important cause of the economic expansion of the antebellum period, also generated very strong regional divisions on economic issues. Recent research by economic, social and political historians has reopened some of the arguments first put forward by Beard and Hacker that economic changes in the Northern states were a major factor leading to the political collapse of the 1850s. Beard and Hacker focused on the narrow economic aspects of these changes, interpreting them as the efforts of an emerging class of industrial capitalists to gain control of economic policy. More recently, historians have taken a broader view of the situation, arguing that the sectional splits on these economic issues reflected sweeping economic and social changes in the Northern and Western states that were not experienced by people in the South. The term most historians have used to describe these changes is a “market revolution.”

Map 1. Source: United States Population Census, 1860.

Perhaps the best single indicator of how pervasive the “market revolution” was in the Northern and Western states is the rise of urban areas where markets had become important. Map 1 plots the 292 counties that reported an “urban population” in 1860. (The 1860 Census Office defined an “urban place” as a town or city having a population of at least 2,500 people.) Table 2 presents some additional statistics on urbanization by region. In 1860, 6.1 million people — roughly one out of five persons in the United States — lived in an urban county. A glance at either the map or Table 2 reveals the enormous difference in urban development in the South compared to the Northern states. More than two-thirds of all urban counties were in the Northeast and West; those two regions accounted for nearly 80 percent of the urban population of the country. By contrast, less than 7 percent of people in the 11 Southern states of Table 2 lived in urban counties.

Table 2

Urban Population of the United States in 1860a

Region Counties with Urban Populations Total Urban Population in the Region Percent of Region’s Population Living in Urban Counties Region’s Urban Population as Percent of U.S. Urban Population
Northeastb 103 3,787,337 35.75 61.66
Westc 108 1,059,755 13.45 17.25
Borderd 23 578,669 18.45 9.42
Southe 51 621,757 6.83 10.12
Far Westf 7 99,145 15.19 1.54
Totalg 292 6,141,914 19.77 100.00
Notes:

a Urban population is people living in a city or town of at least 2,500

b Includes: Connecticut, Maine, Massachusetts, New Hampshire, New Jersey, New York, Pennsylvania, Rhode Island, and Vermont.

c Includes: Illinois, Indiana, Iowa, Kansas, Minnesota, Nebraska, Ohio, and Wisconsin.

d Includes: Delaware, Kentucky, Maryland, and Missouri.

e Includes: Alabama, Arkansas, Florida, Georgia, Louisiana, Mississippi, North Carolina, South Carolina, Tennessee, Texas, and Virginia.

f Includes: Colorado, California, Dakotas, Nevada, New Mexico, Oregon, Utah and Washington

g includes District of Columbia

Source: U.S Census of Population, 1860.

The region along the north Atlantic Coast, with its extensive development of commerce and industry, had the largest concentration of urban population in the United States; roughly one-third of the population of the nine states defined as the Northeast in Table 2 lived in urban counties. In the South, the picture was very different. Cotton cultivation with slave labor did not require local financial services or nearby manufacturing activities that might generate urban activities. The 11 states of the Confederacy had only 51 urban counties and they were widely scattered throughout the region. Western agriculture with its emphasis on foodstuffs encouraged urban activity near the source of production. These centers were not necessarily large; indeed, the West had roughly the same number of large and mid-sized cities as the South. However, there were far more small towns scattered throughout settled regions of Ohio, Indiana, Illinois, Wisconsin and Michigan than in the Southern landscape.

Economic policy had played a prominent role in American politics since the birth of the republic in 1790. With the formation of the Whig Party in the 1830s, a number of key economic issues emerged at the national level. To illustrate the extent to which the rise of urban centers and increased market activity in the North led to a growing crisis in economic policy, historians have re-examined four specific areas of legislative action singled out by Beard and Hacker as evidence of a Congressional stalemate in 1860 (Egnal 2001; Ransom and Sutch 2001; 1989; Bensel 1990; McPherson 1988).

Land Policy

1. Land Policy. Settlement of western lands had always been a major bone of contention between slave and free-labor farming interests. The manner in which the federal government distributed land to people could have a major impact on the nature of farming in a region. Northerners wanted to encourage the settlement of farms which would depend primarily on family labor by offering cheap land in small parcels. Southerners feared that such a policy would make it more difficult to keep areas open for settlement by slaveholders who wanted to establish large plantations. This all came to a head with the “Homestead Act” of 1860, which would have provided 160 acres of free land for anyone willing to settle and farm it. Northern and western congressmen strongly favored the bill in the House of Representatives but the measure received only a single vote from slave states’ representatives. The bill passed, but President Buchanan vetoed it. (Bensel 1990: 69-72)

Transportation Improvements

2. Transportation Improvements. Following the opening of the Erie Canal in 1825, there was growing support in the North and the Northwest for government support of improvement in transportation facilities — what were termed in those days “internal improvements”. The need for government-sponsored improvements was particularly urgent in the Great Lakes region (Egnal 2001: 45-50). The appearance of the railroad in the 1840s gave added support to those advocating government subsidies to promote transportation. Southerners required far fewer internal improvements than people in the Northwest, and they tended to view federal subsidies for such projects as part of a “deal” between western and eastern interests that held no obvious gains for the South. The bill that best illustrates the regional disputes on transportation was the Pacific Railway Bill of 1860, which proposed a transcontinental railway link to the West Coast. The bill failed to pass the House, receiving no votes from congressmen representing districts of the South where there was a significant slave population (Bensel 1990: 70-71).

The Tariff

3. The Tariff. Southerners, with their emphasis on staple agriculture and need to buy goods produced outside the South, strongly objected to the imposition of duties on imported goods. Manufacturers in the Northeast, on the other hand, supported a high tariff as protection against cheap British imports. People in the West were caught in the middle of this controversy. Like the agricultural South they disliked the idea of a high “protective” tariff that raised the cost of imports. However the tariff was also the main source of federal revenue at this time, and Westerners needed government funds for the transportation improvements they supported in Congress. As a result, a compromise reached by western and eastern interests during the tariff debates of 1857 was to support a “moderate” tariff, with duties set high enough to generate revenue and offer some protection to Northern manufacturers while not putting too much of a burden on Western and Eastern consumers. Southerners complained that even this level of protection was excessive and that it was one more example of the willingness of the West and the North to make economic bargains at the expense of the South (Ransom and Sutch 2001; Egnal 2001: 50-52).

Banking

4. Banking. The federal government’s role in the chartering and regulation of banks was a volatile political issue throughout the antebellum period. In 1832 President Andrew Jackson created a major furor when he vetoed a bill to recharter the Second Bank of the United States. Jackson’s veto ushered in a period that was termed “free banking” in the United States, where the chartering and regulation of banks was left entirely in the hands of state governments. Banks were a relatively new economic institution at this point in time, and opinions were sharply divided over the degree to which the federal government should regulate banks. In the Northeast, where over 60 percent of all banks were located, there was strong support by 1860 for the creation of a system of banks that would be chartered and regulated by the federal government. But in the South, which had little need for local banking services, there was little enthusiasm for such a proposal. Here again, the western states were caught in the middle. While western farmers worried that a system of “national” banks would be controlled by the already dominant eastern banking establishment, they found themselves in need of local banking services for financing their crops. By 1860 many were inclined to support the Republican proposal for a National Banking System; however, Southern opposition killed the National Bank Bill in 1860 (Ransom and Sutch 2001; Bensel 1990).

The growth of an urbanized market society in the North produced more than just a legislative program of political economy that Southerners strongly resisted. Several historians have taken a much broader view of the market revolution and industrialization in the North. They see the economic conflict of North and South, in the words of Richard Brown, as “the conflict of a modernizing society” (1976: 161). A leading historian of the Civil War, James McPherson, argues that Southerners were correct when they claimed that the revolutionary program sweeping through the North threatened their way of life (1983; 1988). James Huston (1999) carries the argument one step further by arguing that Southerners were correct in their fears that the triumph of this coalition would eventually lead to an assault by Northern politicians on slave property rights.

All this provided ample argument for those clamoring for the South to leave the Union in 1861. But why did the North fight a war rather than simply letting the unhappy Southerners go in peace? It seems unlikely that anyone will ever be able to show that the “gains” from the war outweighed the “costs” in economic terms. Still, war is always a gamble, and with neither the costs nor the benefits easily calculated before the fact, leaders are often tempted to take the risk. The evidence above certainly lent strong support to those arguing that it made sense for the South to fight if a belligerent North threatened the institution of slavery. An economic case for the North is more problematic. Most writers argue that the decision for war on Lincoln’s part was not based primarily on economic grounds. However, Gerald Gunderson points out that if, as many historians argue, Northern Republicans were intent on controlling the spread of slavery, then a war to keep the South in the Union might have made sense. Gunderson compares the “costs” of the war (which we discuss below) with the cost of “compensated” emancipation and notes that the two are roughly the same order of magnitude — 2.5 to 3.7 billion dollars (1974: 940-42). Thus, going to war made as much “economic sense” as buying out the slaveholders. Gunderson makes the further point, which has been echoed by other writers, that the only way that the North could ensure that their program to contain slavery could be “enforced” would be if the South were kept in the Union. Allowing the South to leave the Union would mean that the North could no longer control the expansion of slavery anywhere in the Western Hemisphere (Ransom 1989; Ransom and Sutch 2001; Weingast 1998; Weingast 1995; Wolfson 1995).
What is novel about these interpretations of the war is the argument that the economic pressures of “modernization” in the North, rather than the South’s attack on Fort Sumter alone, made Northern policy toward secession in 1861 far more aggressive than the traditional story of a North forced into military action allows.

That is not to say that either side wanted war — for economic or any other reason. Abraham Lincoln probably summarized the situation as well as anyone when he observed in his second inaugural address that: “Both parties deprecated war, but one of them would make war rather than let the nation survive, and the other would accept war rather than let it perish, and the war came.”

The “Costs” of the War

The Civil War has often been called the first “modern” war. In part this reflects the enormous effort expended by both sides to conduct the war. What was the cost of this conflict? The most comprehensive effort to answer this question is the work of Claudia Goldin and Frank Lewis (1978; 1975). The Goldin and Lewis estimates of the costs of the war are presented in Table 3. The costs are divided into two groups: the direct costs, which include the expenditures of the national, state, and local governments plus the loss from destruction of property and the loss of human capital from the casualties; and what Goldin and Lewis term the indirect costs of the war, which include the subsequent implications of the war after 1865. Goldin and Lewis estimate that the combined outlays of both governments — in 1860 dollars — totaled $3.3 billion. To this they add $1.8 billion to account for the discounted economic value of casualties in the war, and they add $1.5 billion to account for the destruction of the war in the South. This gives a total of $6.6 billion in direct costs — with each region incurring roughly half the total.

Table 3

The Costs of the Civil War

(Millions of 1860 Dollars)

 South North Total
Direct Costs:
Government Expenditures 1,032 2,302 3,334
Physical Destruction 1,487 - 1,487
Loss of Human Capital 767 1,064 1,831
Total Direct Costs of the War 3,286 3,366 6,652
Per capita 376 148 212
Indirect Costs:
Total Decline in Consumption 6,190 1,149 7,339
Less: Effect of Emancipation 1,960 - 1,960
Less: Effect of Cotton Prices 1,670 - 1,670
Total Indirect Costs of the War 2,560 1,149 3,709
Per capita 293 51 118
Total Costs of the War 5,846 4,515 10,361
Per capita 670 199 330
Population in 1860 (Millions) 8.73 22.70 31.43

Source: Ransom (1998: 51, Table 3-1); Goldin and Lewis (1975; 1978)

While these figures are only a very rough estimate of the actual costs, they provide an educated guess as to the order of magnitude of the economic effort required to wage the war, and it seems likely that if there is a bias, it is to understate the total. (Thus, for example, the estimated “economic” losses from casualties ignore the emotional cost of 625,000 deaths, and the estimates of property destruction were quite conservative.) Even so, the direct cost of the war as calculated by Goldin and Lewis was 1.5 times the total gross national product of the United States for 1860 — an enormous sum in comparison with any military effort by the United States up to that point. What stands out in addition to the enormity of the bill is the disparity in the burden these costs represented to the people in the North and the South. On a per capita basis, the costs to the Northern population were about $150 — or roughly equal to one year’s income. The Southern burden was two and a half times that amount — $376 per man, woman and child.

Staggering though these numbers are, they represent only a fraction of the full costs of the war, which lingered long after the fighting had stopped. One way to measure the full “costs” and “benefits” of the war, Goldin and Lewis argue, is to estimate the value of the observed postwar stream of consumption in each region and compare that figure to the estimated hypothetical stream of consumption had there been no war (1975: 309-10). (All the figures for the costs in Table 3 have been adjusted to reflect their discounted value in 1860.) The Goldin and Lewis estimate for the discounted value of lost consumption for the South was $6.2 billion; for the North the estimate was $1.15 billion. Ingenious though this methodology is, it suffers from the serious drawback that consumption lost for any reason — not just the war — is included in the figure. Particularly for the South, not all the decline in output after 1860 could be directly attributed to the war; the growth in the demand for cotton that fueled the antebellum economy did not continue, and there was a dramatic change in the supply of labor due to emancipation. Consequently, Goldin and Lewis subsequently adjusted their estimate of lost consumption due to the war down to $2.56 billion for the South in order to exclude the effects of emancipation and the collapse of the cotton market. The magnitudes of the indirect effects are detailed in Table 3. After the adjustments, the estimated costs for the war totaled more than $10 billion. Allocating the costs to each region produces a per capita burden of $670 in the South and $199 in the North. What Table 3 does not show is the extent to which these expenses were spread out over a long period of time. In the North, consumption had regained its prewar level by 1873; however, in the South consumption remained below its 1860 level to the end of the century. We shall return to this issue below.
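The per capita burdens follow directly from the regional totals and 1860 populations. A short sketch, taking the Northern population as the 1860 total (31.43 million) less the 11 Confederate states (8.73 million), reproduces the figures in the text:

```python
# Per capita burdens implied by the Goldin-Lewis totals.
# Costs are total war costs in millions of 1860 dollars; population in millions.
costs = {
    "South": (5846, 8.73),
    "North": (4515, 31.43 - 8.73),   # 22.70 million outside the Confederacy
    "Total": (10361, 31.43),
}
for region, (cost, pop) in costs.items():
    print(f"{region}: ${cost / pop:.0f} per capita")   # $670, $199, $330
```

The Southern burden of $670 per person is more than three times the Northern figure, even though the absolute totals for the two regions are of similar size.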

Financing the War

No war in American history strained the economy’s resources as the Civil War did. Governments on both sides were forced to resort to borrowing on an unprecedented scale to meet the financial obligations for the war. With more developed markets and an industrial base that could ultimately produce the goods needed for the war, the Union was clearly in a better position to meet this challenge. The South, on the other hand, had always relied on either Northern or foreign capital markets for its financial needs, and it had virtually no manufacturing establishments to produce military supplies. From the outset, the Confederates relied heavily on funds borrowed outside the South to purchase supplies abroad.

Figure 3 shows the sources of revenue collected by the Union government during the war. In 1862 and 1863 the government covered less than 15 percent of its total expenditures through taxes. With the imposition of a higher tariff, excise taxes, and the introduction of the first income tax in American history, this situation improved somewhat, and by the war’s end 25 percent of the federal government revenues had been collected in taxes. But what of the other 75 percent? In 1862 Congress authorized the U.S. Treasury to issue currency notes that were not backed by gold. By the end of the war, the treasury had printed more than $250 million worth of these “Greenbacks” and, together with the issue of gold-backed notes, the printing of money accounted for 18 percent of all government revenues. This still left a huge shortfall in revenue that was not covered by either taxes or the printing of money. The remaining revenues were obtained by borrowing funds from the public. Between 1861 and 1865 the debt obligation of the Federal government increased from $65 million to $2.7 billion (including the increased issuance of notes by the Treasury). The financial markets of the North were strained by these demands, but they proved equal to the task. In all, Northerners bought almost $2 billion worth of treasury notes and absorbed $700 million of new currency. Consequently, the Northern economy was able to finance the war without a significant reduction in private consumption. While the increase in the national debt seemed enormous at the time, events were to prove that the economy was more than able to deal with it. Indeed, several economic historians have claimed that the creation and subsequent retirement of the Civil War debt ultimately proved to be a significant impetus to post-war growth (Williamson 1974; James 1984). Wartime finance also prompted a significant change in the banking system of the United States. 
In 1863 Congress finally passed legislation creating the National Banking System. Congress’s motive was not only to institute the program of banking reform pressed for many years by the Whigs and the Republicans; the newly-chartered federal banks were also required to purchase large blocs of federal bonds to hold as security against the issuance of their national bank notes.
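The composition of Union war finance quoted above can be summarized in a few lines (a sketch: the borrowing share is simply the residual after taxes and money creation, and the debt figures are those given in the text):

```python
# Shares of Union war finance, from the figures quoted in the text.
tax_share = 0.25          # share of revenues from taxes by war's end
printing_share = 0.18     # share from issuing currency (Greenbacks etc.)
borrowing_share = 1 - tax_share - printing_share
print(f"Borrowed: {borrowing_share:.0%}")              # 57%

# Growth of the federal debt, 1861-1865
debt_1861, debt_1865 = 65e6, 2.7e9
print(f"Debt multiplied {debt_1865 / debt_1861:.0f}-fold")
```

Borrowing thus covered well over half of wartime spending, which is why the postwar management and retirement of the debt loomed so large in later decades.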

The efforts of the Confederate government to pay for their war effort were far more chaotic than in the North, and reliable expenditure and revenue data are not available. Figure 4 presents the best revenue estimates we have for the Richmond government from 1861 through November 1864 (Burdekin and Langdana 1993). Several features of Confederate finance immediately stand out in comparison to the Union effort. First is the failure of the Richmond government to finance their war expenditures through taxation. Over the course of the war, tax revenues accounted for only 11 percent of all revenues. Another contrast was the much higher fraction of revenues accounted for by the issuance of currency on the part of the Richmond government. Over a third of the Confederate government’s revenue came from the printing press. The remainder came in the form of bonds, many of which were sold abroad in either London or Amsterdam. The reliance on borrowed funds proved to be a growing problem for the Confederate treasury. By mid-1864 the costs of paying interest on outstanding government bonds absorbed more than half of all government expenditures. The difficulties of collecting taxes and floating new bond issues had become so severe that in the final year of the war the total revenues collected by the Confederate Government actually declined.

The printing of money and borrowing on such a huge scale had a dramatic effect on the economic stability of the Confederacy. The best measure of this instability and eventual collapse can be seen in the behavior of prices. An index of consumer prices is plotted together with the stock of money from early 1861 to April 1865 in Figure 5. By the beginning of 1862 prices had already doubled; by the middle of 1863 they had increased by a factor of 13. Up to this point, the inflation could be largely attributed to the money placed in the hands of consumers by the huge deficits of the government. Prices and the stock of money had risen at roughly the same rate. This represented a classic case of what economists call demand-pull inflation: too much money chasing too few goods. However, from the middle of 1863 on, the behavior of prices no longer mirrored the money supply. Several economic historians have suggested that from this point prices reflected people’s confidence in the future of the Confederacy as a viable state (Burdekin and Langdana 1993; Weidenmier 2000). Figure 5 identifies three major military “turning points” between 1863 and 1865. In late 1863 and early 1864, following the Confederate defeats at Gettysburg and Vicksburg, prices rose very sharply despite a marked decrease in the growth of the money supply. When the Union offensives in Georgia and Virginia stalled in the summer of 1864, prices stabilized for a few months, only to resume their upward spiral after the fall of Atlanta in September 1864. By that time, of course, the Confederate cause was clearly doomed. By the end of the war, inflation had reached a point where the value of the Confederate currency was virtually zero. People had taken to engaging in barter or using Union dollars (if they could be found) to conduct their transactions. The collapse of the Confederate monetary system was a reflection of the overall collapse of the economy’s efforts to sustain the war effort.

The Union also experienced inflation as a result of deficit finance during the war; the consumer price index rose from 100 at the outset of the war to 175 by the end of 1865. While this is nowhere near the degree of economic disruption caused by the increase in prices experienced by the Confederacy, a near doubling of prices still affected how the burden of the war’s costs was distributed among various groups in each economy. Inflation is a tax, and it tends to fall on those who are least able to afford it. One group that tends to be vulnerable to a sudden rise in prices is wage earners. Table 4 presents data on prices and wages in the United States and the Confederacy. The series for wages has been adjusted to reflect the decline in purchasing power due to inflation. Not surprisingly, wage earners in the South saw the real value of their wages practically disappear by the end of the war. In the North the situation was not as severe, but wages certainly did not keep pace with prices; the real value of wages fell by about 20 percent. It is not obvious why this happened. The need for manpower in the army and the demand for war production should have created a labor shortage that would drive wages higher. While the economic situation of laborers deteriorated during the war, one must remember that wage earners in 1860 were still a relatively small share of the total labor force. Agriculture, not industry, was the largest economic sector in the North, and farmers fared much better in terms of their income during the war than did wage earners in the manufacturing sector (Ransom 1998: 255-64; Atack and Passell 1994: 368-70).

Table 4:

Indices of Prices and Real Wages During the Civil War

(1860=100)

Year   Union Prices   Union Real Wages   Confederate Prices   Confederate Real Wages
1860 100 100 100 100
1861 101 100 121 86
1862 113 93 388 35
1863 139 84 1,452 19
1864 176 77 3,992 11
1865 175 82
Source: Union: (Atack and Passell 1994: 367, Table 13.5)

Confederate: (Lerner 1954)
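The adjustment behind Table 4 is simple index-number arithmetic: the real wage index is the nominal wage index divided by the price index. A minimal sketch in Python, where the nominal wage series is a hypothetical one backed out from the table’s Union figures, not a published series:

```python
# Deflate a nominal wage index by a price index (both 1860 = 100).
def real_wage_index(nominal_wages, prices):
    """Real wage index = nominal wage index / price index * 100."""
    return [round(100.0 * w / p) for w, p in zip(nominal_wages, prices)]

union_prices = [100, 101, 113, 139, 176, 175]     # Table 4, Union, 1860-1865
implied_nominal = [100, 101, 105, 117, 136, 144]  # hypothetical money-wage series

print(real_wage_index(implied_nominal, union_prices))
```

With these assumed money wages, the function reproduces the Union real-wage column: money wages rising roughly 44 percent while prices rose 75 percent leaves real wages about 18 percent below their 1860 level.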

Overall, it is clear that the North did a far better job of mobilizing the economic resources needed to carry on the war. The greater sophistication and size of Northern markets meant that the Union government could call upon institutional arrangements that allowed for a more efficient system of redirecting resources into wartime production than was possible in the South. The Confederates depended far more upon outside resources and direct intervention in the production of goods and services for their war effort, and in the end the domestic economy could not bear up under the strain. It is worth noting in this regard that the Union blockade, which by 1863 had largely closed down not only the external trade of the South with Europe but also the coastal trade that had been an important element in the antebellum transportation system, may have played a more crucial part in bringing about the eventual collapse of the Southern war effort than is often recognized (Ransom 2002).

The Civil War as a Watershed in American Economic History

It is easy to see why contemporaries believed that the Civil War was a watershed event in American history. At a cost of billions of dollars and 625,000 men killed, slavery had been abolished and the Union preserved. Economic historians viewing the event fifty years later could note that the half-century following the Civil War had been a period of extraordinary growth and expansion of the American economy. But was the war really the “Second American Revolution” as the Beards (1927) and Louis Hacker (1940) claimed? That was certainly the prevailing view as late as the early 1960s, when Thomas Cochran (1961) published an article titled “Did the Civil War Retard Industrialization?” Cochran pointed out that, until the 1950s, there was no quantitative evidence to prove or disprove the Beard-Hacker thesis. Recent quantitative research, he argued, showed that the war had actually slowed the rate of industrial growth. Stanley Engerman expanded Cochran’s argument by attacking the Beard-Hacker claim that political changes — particularly the passage in 1862 of the Republican program of political economy that had been bottled up in Congress by Southern opposition — were instrumental in accelerating economic growth (Engerman 1966). The major thrust of these arguments was that neither the war nor the legislation was necessary for industrialization — which was already well underway by 1860. “Aside from commercial banking,” noted one commentator, “the Civil War appears not to have started or created any new patterns of economic institutional change” (Gilchrist and Lewis 1965: 174). Had there been no war, these critics argued, the trajectory of economic growth that emerged after 1870 would have done so anyway.

Despite this criticism, the notion of a “second” American Revolution lives on. Clearly the Beards and Hacker were in error in their claim that industrial growth accelerated during the war. The Civil War, like most modern wars, involved a huge effort to mobilize resources to carry on the fight. This had the effect of making it appear that the economy was expanding due to the production of military goods. However, Beard and Hacker — and a good many other historians — mistook this increased wartime activity for a net increase in output when in fact resources were simply shifted away from consumer products toward wartime production (Ransom 1989: Chapter 7). But what of the larger question of political change resulting from the war? Critics of Beard and Hacker claimed that the Republican program would have eventually been enacted even if there had been no war; hence the war was not a crucial turning point in economic development. The problem with this line of argument is that it completely misses the point of the Beard-Hacker argument. They would readily agree that in the absence of a war the Republican program of political economy would triumph — and that is why there was a war! Historians who argue that economic forces were an underlying cause of sectional conflicts go on to point out that war was probably the only way to settle those conflicts. In this view, the war was a watershed event in the economic development of the United States because the Union military victory ensured that the “market revolution” would not be stymied by the South’s attempt to break up the Union (Ransom 1999).

Whatever the effects of the war on industrial growth, economic historians agree that the war had a profound effect on the South. The destruction of slavery meant that the entire Southern economy had to be rebuilt. This turned out to be a monumental task; far larger than anyone at the time imagined. As noted above in the discussion of the indirect costs of the war, Southerners bore a disproportionate share of those costs and the burden persisted long after the war had ended. The failure of the postbellum Southern economy to recover has spawned a huge literature that goes well beyond the effects of the war.

Economic historians who have examined the immediate effects of the war have reached a few important conclusions. First, the idea that the South was physically destroyed by the fighting has been largely discarded. Most writers have accepted the argument of Ransom and Sutch (2001) that the major “damage” to the South from the war was the depreciation and neglect of property on farms as a significant portion of the male workforce went off to war for several years. Second was the impact of emancipation. Slaveholders lost their enormous investment in slaves as a result of emancipation. Planters were consequently strapped for capital in the years immediately after the war, and this affected their options with regard to labor contracts with the freedmen and in their dealings with capital markets to obtain credit for the planting season. The freedmen and their families responded to emancipation by withdrawing up to a third of their labor from the market. While this was a perfectly reasonable response, it had the effect of creating an apparent labor “shortage,” and it convinced white landlords that a free labor system could never work with the ex-slaves, further complicating an already unsettled labor market. In the longer run, as Gavin Wright (1986) put it, emancipation transformed the white landowners from “laborlords” to “landlords.” This was not a simple transition. While they were able, for the most part, to cling to their landholdings, the ex-slaveholders were ultimately forced to break up the great plantations that had been the cornerstone of the antebellum Southern economy and rent small parcels of land to the freedmen under a new form of rental contract — sharecropping. From a situation where tenancy was extremely rare, the South suddenly became an agricultural economy characterized by tenant farms.

The result was an economy that remained heavily committed not only to agriculture, but to the staple crop of cotton. Crop output in the South fell dramatically at the end of the war and had not yet recovered its antebellum level by 1879. The loss of income was particularly hard on white Southerners; per capita income of whites in 1857 had been $125; in 1879 it was just over $80 (Ransom and Sutch 1979). Table 5 compares the growth of GNP in the United States with the gross crop output of the Southern states from 1874 to 1904. Over the last quarter of the nineteenth century, gross crop output in the South rose by about one percent per year at a time when the GNP of the United States (including the South) was rising at twice that rate. By the end of the century, Southern per capita income had fallen to roughly two-thirds the national level, and the South was locked in a cycle of poverty that lasted well into the twentieth century. How much of this failure was due solely to the war remains open to debate. What is clear is that neither the dreams of those who fought for an independent South in 1861 nor the hopes that a “New South” might emerge from the destruction of war after 1865 were realized.

Table 5: Annual Rates of Growth of Gross National Product of the U.S. and Gross Southern Crop Output, 1874 to 1904
Annual Percentage Rate of Growth
Interval Gross National Product of the U.S. Gross Southern Crop Output
1874 to 1884 2.79 1.57
1879 to 1889 1.91 1.14
1884 to 1894 0.96 1.51
1889 to 1899 1.15 0.97
1894 to 1904 2.30 0.21
1874 to 1904 2.01 1.10
Source: (Ransom and Sutch 1979: 140, Table 7.3)
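The rates in Table 5 are compound annual averages over each interval. A minimal sketch of the computation, where the level figures are illustrative rather than the underlying Ransom and Sutch data:

```python
# Compound annual growth rate between two index levels over a span of years.
def cagr(start_level, end_level, years):
    """Average annual percentage growth rate."""
    return ((end_level / start_level) ** (1.0 / years) - 1.0) * 100.0

# An index rising from 100 to 182 over thirty years grows at about
# 2 percent per year, roughly the GNP figure for 1874-1904 in Table 5.
print(round(cagr(100, 182, 30), 2))
```

A one-percentage-point gap in such rates compounds sharply: at 2 percent a year an index nearly doubles in thirty years, while at 1 percent it grows by only about a third, which is the divergence the table records between the national economy and Southern crop output.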

References

Atack, Jeremy, and Peter Passell. A New Economic View of American History from Colonial Times to 1940. Second edition. New York: W.W. Norton, 1994.

Beard, Charles, and Mary Beard. The Rise of American Civilization. Two volumes. New York: Macmillan, 1927.

Bensel, Richard F. Yankee Leviathan: The Origins of Central State Authority in America, 1859-1877. New York: Cambridge University Press, 1990.

Brown, Richard D. Modernization: The Transformation of American Life, 1600-1865. New York: Hill and Wang, 1976.

Burdekin, Richard C.K., and Farrokh K. Langdana. “War Finance in the Southern Confederacy.” Explorations in Economic History 30 (1993): 352-377.

Cochran, Thomas C. “Did the Civil War Retard Industrialization?” Mississippi Valley Historical Review 48 (September 1961): 197-210.

Egnal, Marc. “The Beards Were Right: Parties in the North, 1840-1860.” Civil War History 47 (2001): 30-56.

Engerman, Stanley L. “The Economic Impact of the Civil War.” Explorations in Entrepreneurial History, second series 3 (1966): 176-199.

Faulkner, Harold Underwood. American Economic History. Fifth edition. New York: Harper & Brothers, 1943.

Gilchrist, David T., and W. David Lewis, editors. Economic Change in the Civil War Era. Greenville, DE: Eleutherian Mills-Hagley Foundation, 1965.

Goldin, Claudia Dale. “The Economics of Emancipation.” Journal of Economic History 33 (1973): 66-85.

Goldin, Claudia, and Frank Lewis. “The Economic Costs of the American Civil War: Estimates and Implications.” Journal of Economic History 35 (1975): 299-326.

Goldin, Claudia, and Frank Lewis. “The Post-Bellum Recovery of the South and the Cost of the Civil War: Comment.” Journal of Economic History 38 (1978): 487-492.

Gunderson, Gerald. “The Origin of the American Civil War.” Journal of Economic History 34 (1974): 915-950.

Hacker, Louis. The Triumph of American Capitalism: The Development of Forces in American History to the End of the Nineteenth Century. New York: Columbia University Press, 1940.

Hughes, J.R.T., and Louis P. Cain. American Economic History. Fifth edition. New York: Addison Wesley, 1998.

Huston, James L. “Property Rights in Slavery and the Coming of the Civil War.” Journal of Southern History 65 (1999): 249-286.

James, John. “Public Debt Management and Nineteenth-Century American Economic Growth.” Explorations in Economic History 21 (1984): 192-217.

Lerner, Eugene. “Money, Prices and Wages in the Confederacy, 1861-65.” Ph.D. dissertation, University of Chicago, Chicago, 1954.

McPherson, James M. “Antebellum Southern Exceptionalism: A New Look at an Old Question.” Civil War History 29 (1983): 230-244.

McPherson, James M. Battle Cry of Freedom: The Civil War Era. New York: Oxford University Press, 1988.

North, Douglass C. The Economic Growth of the United States, 1790-1860. Englewood Cliffs: Prentice Hall, 1961.

Ransom, Roger L. Conflict and Compromise: The Political Economy of Slavery, Emancipation, and the American Civil War. New York: Cambridge University Press, 1989.

Ransom, Roger L. “The Economic Consequences of the American Civil War.” In The Political Economy of War and Peace, edited by M. Wolfson. Norwell, MA: Kluwer Academic Publishers, 1998.

Ransom, Roger L. “Fact and Counterfact: The ‘Second American Revolution’ Revisited.” Civil War History 45 (1999): 28-60.

Ransom, Roger L. “The Historical Statistics of the Confederacy.” In The Historical Statistics of the United States, Millennial Edition, edited by Susan Carter and Richard Sutch. New York: Cambridge University Press, 2002.

Ransom, Roger L., and Richard Sutch. “Growth and Welfare in the American South in the Nineteenth Century.” Explorations in Economic History 16 (1979): 207-235.

Ransom, Roger L., and Richard Sutch. “Who Pays for Slavery?” In The Wealth of Races: The Present Value of Benefits from Past Injustices, edited by Richard F. America, 31-54. Westport, CT: Greenwood Press, 1990.

Ransom, Roger L., and Richard Sutch. “Conflicting Visions: The American Civil War as a Revolutionary Conflict.” Research in Economic History 20 (2001).

Ransom, Roger L., and Richard Sutch. One Kind of Freedom: The Economic Consequences of Emancipation. Second edition. New York: Cambridge University Press, 2001.

Robertson, Ross M. History of the American Economy. Second edition. New York: Harcourt Brace and World, 1955.

United States, Bureau of the Census. Historical Statistics of the United States, Colonial Times to 1970. Two volumes. Washington: U.S. Government Printing Office, 1975.

Walton, Gary M., and Hugh Rockoff. History of the American Economy. Eighth edition. New York: Dryden, 1998.

Weidenmier, Marc. “The Market for Confederate Bonds.” Explorations in Economic History 37 (2000): 76-97.

Weingast, Barry. “The Economic Role of Political Institutions: Market Preserving Federalism and Economic Development.” Journal of Law, Economics and Organization 11 (1995): 1-31.

Weingast, Barry R. “Political Stability and Civil War: Institutions, Commitment, and American Democracy.” In Analytic Narratives, edited by Robert Bates et al. Princeton: Princeton University Press, 1998.

Williamson, Jeffrey. “Watersheds and Turning Points: Conjectures on the Long-Term Impact of Civil War Financing.” Journal of Economic History 34 (1974): 636-661.

Wolfson, Murray. “A House Divided against Itself Cannot Stand.” Conflict Management and Peace Science 14 (1995): 115-141.

Wright, Gavin. Old South, New South: Revolutions in the Southern Economy since the Civil War. New York: Basic Books, 1986.

Citation: Ransom, Roger. “Economics of the Civil War”. EH.Net Encyclopedia, edited by Robert Whaples. August 24, 2001. URL http://eh.net/encyclopedia/the-economics-of-the-civil-war/

A Concise History of America’s Brewing Industry

Martin H. Stack, Rockhurst University

1650 to 1800: The Early Days of Brewing in America

Brewing in America dates to the first communities established by English and Dutch settlers in the early to mid seventeenth century. Dutch immigrants quickly recognized that the climate and terrain of present-day New York were particularly well suited to brewing beer and growing malt and hops, two of beer’s essential ingredients. A 1660 map of New Amsterdam details twenty-six breweries and taverns, a clear indication that producing and selling beer were popular and profitable trades in the American colonies (Baron, Chapter Three). Despite the early popularity of beer, other alcoholic beverages steadily grew in importance and by the early eighteenth century several of them had eclipsed beer commercially.

Between 1650 and the Civil War, the market for beer did not change a great deal: both production and consumption remained essentially local affairs. Bottling was expensive, and beer did not travel well. Nearly all beer was stored in, and then served from, wooden kegs. While there were many small breweries, it was not uncommon for households to brew their own beer. In fact, several of America’s founding fathers brewed their own beer, including George Washington and Thomas Jefferson (Baron, Chapters 13 and 16).

1800-1865: Brewing Begins to Expand

National production statistics are unavailable before 1810, an omission which reflects the rather limited importance of the early brewing industry. In 1810, America’s 140 commercial breweries collectively produced just over 180,000 barrels of beer.[1] During the next fifty years, total beer output continued to increase, but production remained small scale and local. This is not to suggest, however, that brewing could not prove profitable. In 1797, James Vassar founded a brewery in Poughkeepsie, New York whose successes echoed far beyond the brewing industry. After several booming years Vassar ceded control of the brewery to his two sons, Matthew and John. Following the death of his brother in an accident and a fire that destroyed the plant, Matthew Vassar rebuilt the brewery in 1811. Demand for his beer grew rapidly, and by the early 1840s, the Vassar brewery produced nearly 15,000 barrels of ale and porter annually, a significant amount for this period. Continued investment in the firm facilitated even greater production levels, and by 1860 its fifty employees turned out 30,000 barrels of beer, placing it amongst the nation’s largest breweries. Today, the Vassar name is better known for the college Matthew Vassar endowed in 1860 with earnings from the brewery (Baron, Chapter 17).

1865-1920: Brewing Emerges as a Significant Industry

While there were several hundred small-scale, local breweries in the 1840s and 1850s, beer did not become a mass-produced, mass-consumed beverage until the decades following the Civil War. Several factors contributed to beer’s emergence as the nation’s dominant alcoholic drink. First, widespread immigration from strong beer-drinking countries such as Britain, Ireland, and Germany contributed to the creation of a beer culture in the U.S. Second, America was becoming increasingly industrialized and urbanized during these years, and many workers in the manufacturing and mining sectors drank beer during work and after. Third, many workers began to receive higher wages and salaries during these years, enabling them to buy more beer. Fourth, beer benefited from members of the temperance movement who advocated lower alcohol beer over higher alcohol spirits such as rum or whiskey.[2] Fifth, a series of technological and scientific developments fostered greater beer production and the brewing of new styles of beer. For example, artificial refrigeration enabled brewers to brew during warm American summers, and pasteurization, the eponymous procedure developed by Louis Pasteur, helped extend packaged beer’s shelf life, making storage and transportation more reliable (Stack, 2000). Finally, American brewers began brewing lager beer, a style that had long been popular in Germany and other continental European countries. Traditionally, beer in America meant British-style ale. Ales are brewed with top fermenting yeasts, and this category ranges from light pale ales to chocolate-colored stouts and porters. During the 1840s, American brewers began making German-style lager beers. In addition to requiring a longer maturation period than ales, lager beers use a bottom fermenting yeast and are much more temperature sensitive.
Lagers require a great deal of care and attention from brewers, but to the increasing numbers of nineteenth century German immigrants, lager was synonymous with beer. As the nineteenth century wore on, lager production soared, and by 1900, lager outsold ale by a significant margin.

Together, these factors helped transform the market for beer. Total beer production increased from 3.7 million barrels in 1865 to over 66 million barrels in 1914. By 1910, brewing had grown into one of the leading manufacturing industries in America. Yet, this increase in output did not simply reflect America’s growing population. While the number of beer drinkers certainly did rise during these years, perhaps just as importantly, per capita consumption also rose dramatically, from under four gallons in 1865 to 21 gallons in the early 1910s.

Table 1: Industry Production and per Capita Consumption, 1865-1915

Year National Production (millions of barrels) Per Capita Consumption (gallons)
1865 3.7 3.4
1870 6.6 5.3
1875 9.5 6.6
1880 13.3 8.2
1885 19.2 10.5
1890 27.6 13.6
1895 33.6 15.0
1900 39.5 16.0
1905 49.5 18.3
1910 59.6 20.0
1915 59.8 18.7

Source: United States Brewers Association, 1979 Brewers Almanac, Washington, DC: 12-13.

An equally impressive transformation was underway at the level of the firm. Until the 1870s and 1880s, American breweries had been essentially small scale, local operations. By the late nineteenth century, several companies began to increase their scale of production and scope of distribution. Pabst Brewing Company in Milwaukee and Anheuser-Busch in St. Louis became two of the nation’s first nationally-oriented breweries, and the first to surpass annual production levels of one million barrels. By utilizing the growing railroad system to distribute significant amounts of their beer into distant beer markets, Pabst, Anheuser-Busch and a handful of other enterprises came to be called “shipping” breweries. Though these firms became very powerful, they did not control the pre-Prohibition market for beer. Rather, an equilibrium emerged that pitted large and regional shipping breweries that incorporated the latest innovations in pasteurizing, bottling, and transporting beer against a great number of locally-oriented breweries that mainly supplied draught beer in wooden kegs to their immediate markets (Stack, 2000).

Table 2: Industry Production, the Number of Breweries, and Average Brewery Size

1865-1915

Year National Production (millions of barrels) Number of Breweries Average Brewery Size (barrels)
1865 3.7 2,252 1,643
1870 6.6 3,286 2,009
1875 9.5 2,783 3,414
1880 13.3 2,741 4,852
1885 19.2 2,230 8,610
1890 27.6 2,156 12,801
1895 33.6 1,771 18,972
1900 39.5 1,816 21,751
1905 49.5 1,847 26,800
1910 59.6 1,568 38,010
1915 59.8 1,345 44,461

Source: United States Brewers Association, 1979 Brewers Almanac, Washington DC: 12-13.
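The last column of Table 2 is simply national production divided by the number of breweries. A quick check in Python, using the 1865 and 1915 rows:

```python
# Average output per brewery, in barrels, from total production and firm count.
def avg_brewery_size(production_millions, num_breweries):
    return round(production_millions * 1_000_000 / num_breweries)

print(avg_brewery_size(3.7, 2252))    # 1865
print(avg_brewery_size(59.8, 1345))   # 1915
```

The results, 1,643 and 44,461 barrels, match the table and confirm that the column is denominated in barrels: average firm size grew roughly 27-fold over the half century.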

Between the Civil War and national prohibition, the production and consumption of beer greatly outpaced those of spirits. Though consumption levels of absolute alcohol had peaked in the early 1800s, temperance and prohibition forces grew increasingly vocal and active as the century wore on, and by the late 1800s they constituted one of the best-organized political pressure groups of the day (Kerr, Chapter 5, 1985). Their efforts culminated in the ratification of the Eighteenth Amendment on January 29, 1919, which, along with the Volstead Act, made the production and distribution of any beverages with more than one-half of one percent alcohol illegal. While estimates of alcohol activity during Prohibition’s thirteen-year reign — from 1920 to 1933 — are imprecise, beer consumption almost certainly fell, though spirit consumption may have remained constant or actually even increased slightly (Rorbaugh, Appendices).

1920-1933: The Dark Years, Prohibition

The most important decision all breweries had to make after 1920 was what to do with their plants and equipment. As they grappled with this question, they made implicit bets as to whether Prohibition would prove to be merely a temporary irritant. Pessimists immediately divested themselves of all their brewing equipment, often at substantial losses. Other firms decided to carry on with related products, and so stay prepared for any modifications to the Volstead Act which would allow for beer. Schlitz, Blatz, Pabst, and Anheuser-Busch, the leading pre-Prohibition shippers, began producing near beer, a malt beverage with under one-half of one percent alcohol. While it was not a commercial success, its production allowed these firms to keep current their beer-making skills. Anheuser-Busch called its near beer “Budweiser” which was “simply the old Budweiser lager beer, brewed according to the traditional method, and then de-alcoholized. … August Busch took the same care in purchasing the costly materials as he had done during pre-prohibition days” (Krebs and Orthwein, 1953, 165). Anheuser-Busch and some of the other leading breweries were granted special licenses by the federal government for brewing alcohol greater than one half of one percent for “medicinal purposes” (Plavchan, 1969, 168). Receiving these licenses gave these breweries a competitive advantage, as they were able to keep their brewing staff active in beer-making.

The shippers, and some local breweries, also made malt syrup. While they officially advertised it as an ingredient for baking cookies, and while its production was left alone by the government, it was readily apparent to all that its primary use was for homemade beer.

Of perhaps equal importance to the day-to-day business activities of the breweries were their investment decisions. Here, as in so many other places, the shippers exhibited true entrepreneurial insight. Blatz, Pabst, and Anheuser-Busch all expanded their inventories of automobiles and trucks, which became key assets after repeal. In the 1910s, Anheuser-Busch invested in motorized vehicles to deliver beer; by the 1920s, it was building its own trucks in great numbers. While it never sought to become a major producer of delivery vehicles, its forward expansion in this area reflected its appreciation of the growing importance of motorized delivery, an insight on which it built after repeal.

The leading shippers also furthered their investments in bottling equipment and machinery, which was used in the production of near beer, root beer, ginger ale, and soft drinks. These products were not the commercial successes beer had been, but they gave breweries important experience in bottling. While 85 percent of pre-Prohibition beer was kegged, during Prohibition over 80 percent of near beer and a smaller, though growing, percentage of soft drinks was sold in bottles.

This remarkable increase in packaged product impelled breweries to refine their packaging skills and modify their retailing practice. As they sold near beer and soft drinks to drugstores and drink stands, they encountered new marketing problems (Cochran, 1948, 340). Experience gained during these years helped the shippers meet radically different distribution requirements of the post-repeal beer market.

They were learning about canning as well as bottling. In 1925, Blatz’s canned malt syrup sales were more than $1.3 million, significantly greater than its bulk sales. In the early 1920s, Anheuser-Busch used cans for its malt syrup from the American Can Company, a firm which would gain national prominence in 1935 for helping to pioneer the beer can. Thus, the canning of malt syrup helped create the first contacts between the leading shipping brewers and American Can Company (Plavchan, 1969, 178; Conny, 1990, 35-36; and American Can Company, 1969, 7-9).

These expensive investments in automobiles and bottling equipment were paid for in part by selling off branch properties, namely saloons (see Cochran, 1948; Plavchan, 1969; Krebs and Orthwein, 1953). Some breweries had equipped their saloons with furniture and bar fixtures, but as Prohibition wore on, they progressively divested themselves of these assets.

1933-1945: The Industry Reawakens after the Repeal of Prohibition

In April 1933 Congress amended the Volstead Act to allow for 3.2 percent beer. Eight months later, in December, Congress and the states ratified the Twenty-first Amendment, officially repealing Prohibition. From repeal until World War II, the brewing industry struggled to regain its pre-Prohibition fortunes. Prior to prohibition, breweries owned or controlled many saloons, which were the dominant retail outlets for alcohol. To prevent the excesses that had been attributed to saloons from reoccurring, post-repeal legislation forbade alcohol manufacturers from owning bars or saloons, requiring them instead to sell their beer to wholesalers that in turn would distribute their beverages to retailers.

Prohibition meant the end of many small breweries that had been profitable, and that, taken together, had posed a formidable challenge to the large shipping breweries. The shippers, who had much greater investments, were not as inclined to walk away from brewing.[3] After repeal, therefore, they reopened for business in a radically new environment, one in which their former rivals were absent or disadvantaged. From this favorable starting point, they continued to consolidate their position. Several hundred locally oriented breweries did reopen, but were unable to regain their pre-Prohibition competitive edge, and they quickly exited the market. From 1935 to 1940, the number of breweries fell by ten percent.

Table 3: U.S. Brewing Industry Data, 1910-1940

Year Number of Breweries Number of Barrels Produced (millions) Average Barrelage per Brewery Largest Firm Production (millions of barrels) Per Capita Consumption (gallons)
1910 1,568 59.5 37,946 1.5 20.1
1915 1,345 59.8 44,461 1.1 18.7
1934 756 37.7 49,867 1.1 7.9
1935 766 45.2 59,008 1.1 10.3
1936 739 51.8 70,095 1.3 11.8
1937 754 58.7 77,851 1.8 13.3
1938 700 56.3 80,429 2.1 12.9
1939 672 53.8 80,059 2.3 12.3
1940 684 54.9 80,263 2.5 12.5

Source: Cochran, 1948; Krebs and Orthwein, 1953; and United States Brewers Almanac, 1956.

Annual industry output, after struggling in 1934 and 1935, began to approach the levels reached in the 1910s. Yet these gains in total output are somewhat misleading, as the population of the U.S. had risen from roughly 92-98 million in the 1910s to 125-130 million in the 1930s (Brewers Almanac, 1956, 10). This translated directly into the lower per capita consumption levels reported in Table 3.
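The per capita and average-barrelage figures in Table 3 follow from simple arithmetic: output in barrels times 31 gallons per barrel (see footnote [1]), divided by population; and output divided by the number of breweries. A minimal Python sketch, assuming a 1910 population of roughly 92 million (the function names are illustrative, not from the source):

```python
BARREL_GALLONS = 31  # a barrel of beer is 31 gallons (footnote [1])

def per_capita_gallons(barrels, population):
    """Gallons of beer per person: barrels converted to gallons, divided by population."""
    return barrels * BARREL_GALLONS / population

def average_barrelage(barrels, breweries):
    """Mean annual output per brewery, in barrels."""
    return barrels / breweries

# 1910 figures from Table 3: 59.5 million barrels, 1,568 breweries.
print(round(per_capita_gallons(59_500_000, 92_000_000), 1))  # ≈ 20 gallons, close to Table 3's 20.1
print(round(average_barrelage(59_500_000, 1_568)))           # 37946, matching Table 3
```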

The largest firms grew even larger in the years following repeal, quickly surpassing their pre-Prohibition annual production levels. The post-repeal industry leaders, Anheuser-Busch and Pabst, doubled their annual production levels from 1935 to 1940.

The growing importance of the leading shippers during this period should not be taken for granted: it marked a momentous reversal of pre-Prohibition trends. While medium-sized breweries dominated industry output in the years leading up to Prohibition, in the 1930s the shippers regained the dynamism they had shown from the 1870s to the 1890s. Table 4 compares the fortunes of the shippers with those of the industry as a whole. From 1877 to 1895, Anheuser-Busch and Pabst, the two most prominent shippers, grew much faster than the industry, and their successes helped pull the industry along. This picture changed during the years 1895 to 1915, when the industry grew much faster than the shippers (Stack, 2000). With the repeal of Prohibition, the tide turned again: from 1934 to 1940, the brewing industry grew very slowly, while Anheuser-Busch and Pabst enjoyed tremendous increases in their annual sales.

Table 4: Percentage Change in Output among Shipping Breweries, 1877-1940

Period Anheuser-Busch Pabst Industry
1877-1895 1,106% 685% 248%
1895-1914 58% -23% 78%
1934-1940 173% 87% 26%

Source: Cochran, 1948; Krebs and Orthwein, 1953; and Brewers Almanac, 1956.
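The growth rates in Table 4 are ordinary percentage changes, 100 * (end - start) / start. A quick sketch, illustrated here with Anheuser-Busch's 1938 and 1940 outputs from Table 5 rather than the table's own endpoints:

```python
def percent_change(start, end):
    """Percentage change in output between two periods."""
    return 100 * (end - start) / start

# Anheuser-Busch output, from Table 5: 2,087,000 barrels (1938) to 2,468,000 barrels (1940).
print(round(percent_change(2_087_000, 2_468_000), 1))  # ≈ 18.3 percent
```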

National and regional shippers increasingly dominated the market. Breweries such as Anheuser-Busch, Pabst and Schlitz came to exemplify the modern business enterprise, as described by Alfred Chandler (Chandler, 1977), which adeptly integrated mass production and mass distribution.

Table 5: Leading Brewery Output Levels, 1938-1940

Brewery Plant Location(s) 1938 (bls) 1939 (bls) 1940 (bls)
Anheuser-Busch St. Louis, MO 2,087,000 2,306,000 2,468,000
Pabst Brewing Milwaukee, WI; Peoria Heights, IL 1,640,000 1,650,000 1,730,000
Jos. Schlitz Milwaukee, WI 1,620,000 1,651,083 1,570,000
F & M Schafer Brooklyn, NY 1,025,000 1,305,000 1,390,200
P. Ballantine Newark, NJ 1,120,000 1,289,425 1,322,346
Jacob Ruppert New York, NY 1,417,000 1,325,350 1,228,400
Falstaff Brewing St. Louis, MO; New Orleans, LA; Omaha, NE 622,000 622,004 684,537
Duquesne Brewing Pittsburgh, PA; Carnegie, PA; McKees Rock, PA 625,000 680,000 690,000
Theo. Hamm Brewing St. Paul, MN 750,000 780,000 694,200
Liebman Breweries Brooklyn, NY 625,000 632,558 670,198

Source: Fein, 1942, 35.

World War One had presented a direct threat to the brewing industry. Government officials used war-time emergencies to impose grain rationing, a step that led to the lowering of the alcohol level of beer to 2.75 percent. World War Two had a completely different effect on the industry: far from falling, beer production rose from 1941 to 1945.

Table 6: Production and Per Capita Consumption, 1940-1945


Year Number of Breweries Number of barrels withdrawn (millions) Per Capita Consumption (gallons)
1940 684 54.9 12.5
1941 574 55.2 12.3
1942 530 63.7 14.1
1943 491 71.0 15.8
1944 469 81.7 18.0
1945 468 86.6 18.6

Source: 1979 USBA, 12-14.

During the war, the industry mirrored the nation at large by casting off its sluggish depression-era growth. As the war economy boomed, consumers, both troops and civilians, used some of their wages for beer, and per capita consumption grew by 50 percent between 1940 and 1945.

1945-1980: Following World War II, the Industry Continues to Grow and to Consolidate

Yet the take-off registered during World War II was not sustained in the ensuing decades. Total production continued to grow, but at a slower rate than the overall population.

Table 7: Production and per Capita Consumption, 1945-1980

Year Number of Breweries Number of barrels withdrawn (millions) Per Capita Consumption (gallons)
1945 468 86.6 18.6
1950 407 88.8 17.2
1955 292 89.8 15.9
1960 229 94.5 15.4
1965 197 108.0 16.0
1970 154 134.7 18.7
1975 117 157.9 21.1
1980 101 188.4 23.1

Source: 1993 USBA, 7-8.

The period following World War II was characterized by great industry consolidation. Total output continued to grow, though per capita consumption fell into the 1960s before rebounding to levels above 21 gallons per capita in the 1970s, the highest rates in the nation’s history. Not since the 1910s had consumption levels topped 21 gallons a year, but there was a significant difference: prior to Prohibition, most consumers bought their beer from local or regional firms, and over 85 percent of it was served from casks in saloons. Following World War II, two significant changes radically altered the market for beer. First, the total number of breweries operating fell dramatically, signaling the growing importance of the large national breweries. While many of these firms — Anheuser-Busch, Pabst, Schlitz, and Blatz — had grown into prominence in the late nineteenth century, the scale of their operations grew tremendously in the years after the repeal of Prohibition. From the mid-1940s to 1980, the five largest breweries saw their share of the national market grow from 19 to 75 percent (Adams, 125).

Table 8: Concentration of the Brewing Industry, 1947-1981

Year Five Largest (%) Ten Largest (%) Herfindahl Index[4]
1947 19.0 28.2 140
1954 24.9 38.3 240
1958 28.5 45.2 310
1964 39.0 58.2 440
1968 47.6 63.2 690
1974 64.0 80.8 1080
1978 74.3 92.3 1292
1981 75.9 93.9 1614

Source: Adams, 1995, 125.
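The Herfindahl figures in the last column are computed from firm-level market shares; as footnote [4] notes, the index sums the squared shares of the fifty largest firms. A small sketch with hypothetical shares (not the source's firm-level data, which Table 8 does not report):

```python
def herfindahl(shares_percent):
    """Herfindahl Index: the sum of squared market shares, in percentage points."""
    return sum(share ** 2 for share in shares_percent)

# Hypothetical four-firm market: shares of 30, 25, 25, and 20 percent.
print(herfindahl([30, 25, 25, 20]))  # 2550; a pure monopoly would score 100**2 = 10000
```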

The other important change concerned how beer was sold. Prior to Prohibition, nearly all beer was sold on-tap in bars or saloons; while approximately 10-15 percent of the beer was bottled, it was much more expensive than draught beer. In 1935, a few years after repeal, the American Can Company successfully canned beer for the first time. The spread of home refrigeration helped spur consumer demand for canned and bottled beer, and from 1935 onward, draught beer’s share of sales fell markedly.

Table 9: Packaged vs. Draught Sales, 1935-1980

Year Packaged sales (bottled and canned) as a percentage of total sales Draught sales as a percentage of total sales
1935 30 70
1940 52 48
1945 64 36
1950 72 28
1955 78 22
1960 81 19
1965 82 18
1970 86 14
1975 88 12
1980 88 12

Source: 1979 USBA, 20; 1993 USBA, 14.

The rise of packaged beer contributed to the growing industry consolidation detailed in Table 8.

1980-2000: Continued Growth, the Microbrewery Movement, and International Dimensions of the Brewing Industry

From 1980 to 2000, beer production continued to rise, reaching nearly 200 million barrels in 2000. Per capita consumption hit its highest recorded level in 1981 with 23.8 gallons. Since then, though, consumption levels have dropped a bit, and during the 1990s, consumption was typically in the 21-22 gallon range.

Table 10: Production and Per Capita Consumption, 1980-1990

Year Number of Breweries Number of barrels withdrawn (millions) Per Capita Consumption (gallons)
1980 101 188.4 23.1
1985 105 193.8 22.7
1990 286 201.7 22.6

Source: 1993 USBA, 7-8.

Beginning around 1980, the long decline in the number of breweries slowed and then was reversed. Judging solely by the number of breweries in operation, it appeared that a significant change had occurred: the number of firms began to increase, and by the late 1990s, hundreds of new breweries were operating in the U.S. However, this number is rather misleading: the overall industry remained very concentrated, with a three-firm concentration ratio in 2000 of 81 percent.

Table 11: Production Levels of the Leading Breweries, 2000

Production (millions of barrels)
Anheuser-Busch 99.2
Miller 39.8
Coors 22.7
Total Domestic Sales 199.4

Source: Beverage Industry, May 2003, 19.
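The 81 percent three-firm concentration ratio is simply the leaders' combined share of total domestic sales, which can be checked against Table 11 (a sketch; the function name is illustrative):

```python
def concentration_ratio(firm_outputs, total):
    """Combined market share of the listed firms, as a percentage of total sales."""
    return 100 * sum(firm_outputs) / total

# Table 11, millions of barrels: Anheuser-Busch 99.2, Miller 39.8, Coors 22.7; total 199.4.
cr3 = concentration_ratio([99.2, 39.8, 22.7], 199.4)
print(round(cr3, 1))  # ≈ 81.1 percent
```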

Although entrepreneurs and beer enthusiasts began hundreds of new breweries during this period, most were very small, with annual production levels of between 5,000 and 100,000 barrels. Reflecting their small size, these new firms were nicknamed microbreweries. Collectively, microbreweries have grown to account for approximately 5-7 percent of the total beer market.

Microbreweries represented a new strategy in the brewing industry: rather than competing on the basis of price or advertising, they attempted to compete on the basis of inherent product characteristics. They emphasized the freshness of locally produced beer; they experimented with much stronger malt and hop flavors; they tried new and long-discarded brewing recipes, often reintroducing styles that had been popular in America decades earlier. Together, these breweries have had an influence much greater than their market share would suggest. The big three breweries, Anheuser-Busch, Miller, and Coors, have all tried to incorporate ideas from the microbrewery movement. They have introduced new marquee brands intended to compete for some of this market, and when this failed, they bought shares in or outright control of some microbreweries.

A final dimension of the brewing industry that has been changing concerns the emerging global market for beer. Until very recently, America was the biggest beer market in the world; as a result, American breweries have not historically looked abroad for additional sales, preferring to expand their share of the domestic market.[5] In the 1980s, Anheuser-Busch began to systematically evaluate its market position. While it had done very well in the U.S., it had not tapped markets overseas; as a result, it began a series of international business dealings. It gradually moved from exporting small amounts of its flagship brand Budweiser to entering into licensing accords whereby breweries in a range of countries such as Ireland, Japan, and Argentina began to brew Budweiser for sale in their domestic markets. In 1995, it established its first breweries outside of the U.S., one in England for the European market and the other in China, to service the growing markets in China and East Asia.[6]

While U.S. breweries such as Anheuser-Busch have only recently begun to explore the opportunities abroad, foreign firms have long appreciated the significance of the American market. Beginning in the late 1990s, imports began to increase their market share and by the early 2000s, they accounted for approximately 12 percent of the large U.S. market. Imports and microbrews typically cost more than the big three’s beers and they provide a wider range of flavors and tastes. One of the most interesting developments in the international market for beer occurred in 2002 when South African Breweries (SAB), the dominant brewery in South Africa, and an active firm in Europe, acquired Miller, the second largest brewery in the U.S. Though not widely discussed in the U.S., this may portend a general move towards increased global integration in the world market for beer.

Annotated Bibliography

Adams, Walter and James Brock, editors. The Structure of American Industry, ninth edition. Englewood Cliffs, New Jersey: Prentice Hall, 1995.

Apps, Jerry. Breweries of Wisconsin. Madison, WI: University of Wisconsin Press, 1992. Detailed examination of the history of breweries and brewing in Wisconsin.

Baron, Stanley. Brewed In America: A History of Beer and Ale in the United States. Boston: Little, Brown, and Co, 1962: Very good historical overview of brewing in America, from the Pilgrims through the post-World War II era.

Baum, Dan. Citizen Coors: A Grand Family Saga of Business, Politics, and Beer. New York: Harper Collins, 2000. Very entertaining story of the Coors family and the brewery they made famous.

Beverage Industry (May 2003): 19-20.

Blum, Peter. Brewed In Detroit: Breweries and Beers since 1830. Detroit: Wayne State University Press, 1999. Very good discussion of Detroit’s major breweries and how they evolved. Particularly strong on the Stroh brewery.

Cochran, Thomas. Pabst Brewing Company: The History of an American Business. New York: New York University Press, 1948: A very insightful, well-researched, and well-written history of one of America’s most important breweries. It is strongest on the years leading up to Prohibition.

Downard, William. The Cincinnati Brewing Industry: A Social and Economic History. Ohio University Press, 1973: A good history of brewing in Cincinnati; particularly strong in the years prior to Prohibition.

Downard, William. Dictionary of the History of the American Brewing and Distilling Industries. Westport, CT: Greenwood Press, 1980: Part dictionary and part encyclopedia, a useful compendium of terms, people, and events relating to the brewing and distilling industries.

Duis, Perry. The Saloon: Public Drinking in Chicago and Boston, 1880-1920. Urbana: University of Illinois Press, 1983: An excellent overview of the institution of the saloon in pre-Prohibition America.

Eckhardt, Fred. The Essentials of Beer Style. Portland, OR: Fred Eckhardt Communications, 1995: A helpful introduction into the basics of how beer is made and how beer styles differ.

Ehret, George. Twenty-Five Years of Brewing. New York: Gast Lithograph and Engraving, 1891: An interesting snapshot of an important late nineteenth century New York City brewery.

Elzinga, Kenneth. “The Beer Industry.” In The Structure of American Industry, ninth edition, edited by W. Adams and J. Brock. Englewood Cliffs, New Jersey: Prentice Hall, 1995: A good overview summary of the history, structure, conduct, and performance of America’s brewing industry.

Fein, Edward. “The 25 Leading Brewers in the United States Produce 41.5% of the Nation’s Total Beer Output.” Brewers Digest 17 (October 1942): 35.

Greer, Douglas. “Product Differentiation and Concentration in the Brewing Industry,” Journal of Industrial Economics 29 (1971): 201-19.

Greer, Douglas. “The Causes of Concentration in the Brewing Industry,” Quarterly Review of Economics and Business 21 (1981): 87-106.

Greer, Douglas. “Beer: Causes of Structural Change.” In Industry Studies, second edition, edited by Larry Duetsch, Armonk, New York: M.E. Sharpe, 1998.

Hernon, Peter and Terry Ganey. Under the Influence: The Unauthorized Story of the Anheuser-Busch Dynasty. New York: Simon and Schuster, 1991: Somewhat sensationalistic history of the family that has controlled America’s largest brewery, but some interesting pieces on the brewery are included.

Horowitz, Ira and Ann Horowitz. “Firms in a Declining Market: The Brewing Case.” Journal of Industrial Economics 13 (1965): 129-153.

Jackson, Michael. The New World Guide To Beer. Philadelphia: Running Press, 1988: Good overview of the international world of beer and of America’s place in the international beer market.

Keithan, Charles. The Brewing Industry. Washington D.C: Federal Trade Commission, 1978.

Kerr, K. Austin. Organized for Prohibition. New Haven: Yale Press, 1985: Excellent study of the rise of the Anti-Saloon League in the United States.

Kostka, William. The Pre-prohibition History of Adolph Coors Company: 1873-1933. Golden, CO: self-published book, Adolph Coors Company, 1973: A self-published book by the Coors company that provides some interesting insights into the origins of the Colorado brewery.

Krebs, Roland and Orthwein, Percy. Making Friends Is Our Business: 100 Years of Anheuser-Busch. St. Louis, MO: self-published book, Anheuser-Busch, 1953: A self-published book by the Anheuser-Busch brewery that has some nice illustrations and data on firm output levels. The story is nicely told but rather self-congratulatory.

“Large Brewers Boost Share of U.S. Beer Business,” Brewers Digest, 15 (July 1940): 55-57.

Leisley, Bruce. A History of Leisley Brewing. North Newton Kansas: Mennonite Press, 1975: A short but useful history of the Leisley Brewing Company. This was the author’s undergraduate thesis.

Lender, Mark and James Martin. Drinking in America. New York: The Free Press, 1987: Good overview of the social history of drinking in America.

McGahan, Ann. “The Emergence of the National Brewing Oligopoly: Competition in the American Market, 1933-58.” Business History Review 65 (1991): 229-284: Excellent historical analysis of the origins of the brewing oligopoly following the repeal of Prohibition.

McGahan, Ann. “Cooperation in Prices and Capacities: Trade Associations in Brewing after Repeal.” Journal of Law and Economics 38 (1995): 521-559.

Meier, Gary and Meier, Gloria. Brewed in the Pacific Northwest: A History of Beer Making in Oregon and Washington. Seattle: Fjord Press, 1991: A survey of the history of brewing in the Pacific Northwest.

Miller, Carl. Breweries of Cleveland. Cleveland, OH: Schnitzelbank Press, 1998: Good historical overview of the brewing industry in Cleveland.

Norman, Donald. Structural Change and Performance in the U.S. Brewing Industry. Ph.D. dissertation, UCLA, 1975.

One Hundred Years of Brewing. Chicago and New York: Arno Press Reprint, 1903 (Reprint 1974): A very important work. Very detailed historical discussion of the American brewing industry through the end of the nineteenth century.

Persons, Warren. Beer and Brewing In America: An Economic Study. New York: United Brewers Industrial Foundation, 1940.

Plavchan, Ronald. A History of Anheuser-Busch, 1852-1933. Ph.D. dissertation, St. Louis University, 1969: Apart from Cochran’s analysis of Pabst, one of a very few detailed business histories of a major American brewery.

Research Company of America. A National Survey of the Brewing Industry. Self-published, 1941: A well-researched industry analysis with a wealth of information and data.

Rorbaugh, William. The Alcoholic Republic: An American Tradition. New York: Oxford University Press, 1979: Excellent scholarly overview of drinking habits in America.

Rubin, Jay. “The Wet War: American Liquor, 1941-1945.” In Alcohol, Reform, and Society, edited by J. Blocker. Westport, CT: Greenwood Press, 1979: Interesting discussion of American drinking during World War II.

Salem, Frederick. Beer: Its History and Its Economic Value as a National Beverage. New York: Arno Press, 1880 (Reprint 1972): Early but valuable discussion of the American brewing industry.

Scherer, F.M. Industry Structure, Strategy, and Public Policy. New York: Harper Collins, 1996: A very good essay on the brewing industry.

Shih, Ko Ching and C. Ying Shih. American Brewing Industry and the Beer Market. Brookfield, WI, 1958: Good overview of the industry with some excellent data tables.

Skilnik, Bob. The History of Beer and Brewing in Chicago: 1833-1978. Pogo Press, 1999: Good overview of the history of brewing in Chicago.

Smith, Greg. Beer in America: The Early Years, 1587 to 1840. Boulder, CO: Brewers Publications, 1998: Well written account of beer’s development in America, from the Pilgrims to mid-nineteenth century.

Stack, Martin. “Local and Regional Breweries in America’s Brewing Industry, 1865-1920.” Business History Review 74 (Autumn 2000): 435-63.

Thomann, Gallus. American Beer: Glimpses of Its History and Description of Its Manufacture. New York: United States Brewing Association, 1909: Interesting account of the state of the brewing industry at the turn of the twentieth century.

United States Brewers Association. Annual Year Book, 1909-1921. Very important primary source document published by the leading brewing trade association.

United States Brewers Foundation. Brewers Almanac, published annually, 1941-present: Very important primary source document published by the leading brewing trade association.

Van Wieren, Dale. American Breweries II. West Point, PA: Eastern Coast Brewiana Association, 1995. Comprehensive historical listing of every brewery in every state, arranged by city within each state.


[1] A barrel of beer is 31 gallons. One Hundred Years of Brewing, Chicago and New York: Arno Press Reprint, 1974: 252.

[2] During the nineteenth century, there were often distinctions between temperance advocates, who differentiated between spirits and beer, and prohibition supporters, who campaigned on the need to eliminate all alcohol.

[3] The major shippers may have been taken aback by the loss suffered by Lemp, one of the leading pre-Prohibition shipping breweries. Lemp was sold at auction in 1922 at a loss of 90 percent on the investment (Baron, 1962, 315).

[4] The Herfindahl Index sums the squared market shares of the fifty largest firms.

[5] China overtook the United States as the world’s largest beer market in 2002.

[6] http://www.anheuser-busch.com/over/international.html


Martin H. Stack, Rockhurst University

1650 to 1800: The Early Days of Brewing in America

Brewing in America dates to the first communities established by English and Dutch settlers in the early to mid seventeenth century. Dutch immigrants quickly recognized that the climate and terrain of present-day New York were particularly well suited to brewing beer and growing malt and hops, two of beer’s essential ingredients. A 1660 map of New Amsterdam details twenty-six breweries and taverns, a clear indication that producing and selling beer were popular and profitable trades in the American colonies (Baron, Chapter Three). Despite the early popularity of beer, other alcoholic beverages steadily grew in importance and by the early eighteenth century several of them had eclipsed beer commercially.

Between 1650 and the Civil War, the market for beer did not change a great deal: both production and consumption remained essentially local affairs. Bottling was expensive, and beer did not travel well. Nearly all beer was stored in, and then served from, wooden kegs. While there were many small breweries, it was not uncommon for households to brew their own beer. In fact, several of America’s founding fathers brewed their own beer, including George Washington and Thomas Jefferson (Baron, Chapters 13 and 16).

1800-1865: Brewing Begins to Expand

National production statistics are unavailable before 1810, an omission which reflects the rather limited importance of the early brewing industry. In 1810, America’s 140 commercial breweries collectively produced just over 180,000 barrels of beer.[1] During the next fifty years, total beer output continued to increase, but production remained small scale and local. This is not to suggest, however, that brewing could not prove profitable. In 1797, James Vassar founded a brewery in Poughkeepsie, New York whose successes echoed far beyond the brewing industry. After several booming years Vassar ceded control of the brewery to his two sons, Matthew and John. Following the death of his brother in an accident and a fire that destroyed the plant, Matthew Vassar rebuilt the brewery in 1811. Demand for his beer grew rapidly, and by the early 1840s, the Vassar brewery produced nearly 15,000 barrels of ale and porter annually, a significant amount for this period. Continued investment in the firm facilitated even greater production levels, and by 1860 its fifty employees turned out 30,000 barrels of beer, placing it amongst the nation’s largest breweries. Today, the Vassar name is better known for the college Matthew Vassar endowed in 1860 with earnings from the brewery (Baron, Chapter 17).

1865-1920: Brewing Emerges as a Significant Industry

While there were several hundred small scale, local breweries in the 1840s and 1850s, beer did not become a mass-produced, mass-consumed beverage until the decades following the Civil War. Several factors contributed to beer’s emergence as the nation’s dominant alcoholic drink. First, widespread immigration from strong beer drinking countries such as Britain, Ireland, and Germany contributed to the creation of a beer culture in the U.S. Second, America was becoming increasingly industrialized and urbanized during these years, and many workers in the manufacturing and mining sectors drank beer during work and after. Third, many workers began to receive higher wages and salaries during these years, enabling them to buy more beer. Fourth, beer benefited from members of the temperance movement who advocated lower alcohol beer over higher alcohol spirits such as rum or whiskey.[2] Fifth, a series of technological and scientific developments fostered greater beer production and the brewing of new styles of beer. For example, artificial refrigeration enabled brewers to brew during warm American summers, and pasteurization, the eponymous procedure developed by Louis Pasteur, helped extend packaged beer’s shelf life, making storage and transportation more reliable (Stack, 2000). Finally, American brewers began brewing lager beer, a style that had long been popular in Germany and other continental European countries. Traditionally, beer in America meant British-style ale. Ales are brewed with top fermenting yeasts, and this category ranges from light pale ales to chocolate-colored stouts and porters. During the 1840s, American brewers began making German-style lager beers. In addition to requiring a longer maturation period than ales, lager beers use a bottom fermenting yeast and are much more temperature sensitive. Lagers require a great deal of care and attention from brewers, but to the increasing numbers of nineteenth century German immigrants, lager was synonymous with beer. As the nineteenth century wore on, lager production soared, and by 1900, lager outsold ale by a significant margin.

Together, these factors helped transform the market for beer. Total beer production increased from 3.6 million barrels in 1865 to over 66 million barrels in 1914. By 1910, brewing had grown into one of the leading manufacturing industries in America. Yet, this increase in output did not simply reflect America’s growing population. While the number of beer drinkers certainly did rise during these years, perhaps just as importantly, per capita consumption also rose dramatically, from under four gallons in 1865 to 21 gallons in the early 1910s.

Table 1: Industry Production and per Capita Consumption, 1865-1915


Year National Production (millions of barrels) Per Capita Consumption (gallons)
1865 3.7 3.4
1870 6.6 5.3
1875 9.5 6.6
1880 13.3 8.2
1885 19.2 10.5
1890 27.6 13.6
1895 33.6 15.0
1900 39.5 16.0
1905 49.5 18.3
1910 59.6 20.0
1915 59.8 18.7

Source: United States Brewers Association, 1979 Brewers Almanac, Washington, DC: 12-13.

An equally impressive transformation was underway at the level of the firm. Until the 1870s and 1880s, American breweries had been essentially small scale, local operations. By the late nineteenth century, several companies began to increase their scale of production and scope of distribution. Pabst Brewing Company in Milwaukee and Anheuser-Busch in St. Louis became two of the nation’s first nationally-oriented breweries, and the first to surpass annual production levels of one million barrels. By utilizing the growing railroad system to distribute significant amounts of their beer into distant beer markets, Pabst, Anheuser-Busch and a handful of other enterprises came to be called “shipping” breweries. Though these firms became very powerful, they did not control the pre-Prohibition market for beer. Rather, an equilibrium emerged that pitted large and regional shipping breweries that incorporated the latest innovations in pasteurizing, bottling, and transporting beer against a great number of locally-oriented breweries that mainly supplied draught beer in wooden kegs to their immediate markets (Stack, 2000).

Table 2: Industry Production, the Number of Breweries, and Average Brewery Size

1865-1915


Year National Production (millions of barrels) Number of Breweries Average Brewery Size (barrels)
1865 3.7 2,252 1,643
1870 6.6 3,286 2,009
1875 9.5 2,783 3,414
1880 13.3 2,741 4,852
1885 19.2 2,230 8,610
1890 27.6 2,156 12,801
1895 33.6 1,771 18,972
1900 39.5 1,816 21,751
1905 49.5 1,847 26,800
1910 59.6 1,568 38,010
1915 59.8 1,345 44,461

Source: United States Brewers Association, 1979 Brewers Almanac, Washington DC: 12-13.

Between the Civil War and national prohibition, the production and consumption of beer greatly outpaced spirits. Though consumption levels of absolute alcohol had peaked in the early 1800s, temperance and prohibition forces grew increasingly vocal and active as the century wore on, and by the late 1800s, they constituted one of the best-organized political pressure groups of the day (Kerr, Chapter 5, 1985). Their efforts culminated in the ratification of the Eighteenth Amendment on January 29, 1919 that, along with the Volstead Act, made the production and distribution of any beverages with more than one-half of one percent alcohol illegal. While estimates of alcohol activity during Prohibition’s thirteen year reign — from 1920 to 1933 — are imprecise, beer consumption almost certainly fell, though spirit consumption may have remained constant or actually even increased slightly (Rorbaugh, Appendices).

1920-1933: The Dark Years, Prohibition

The most important decision all breweries had to make after 1920 was what to do with their plants and equipment. As they grappled with this question, they made implicit bets as to whether Prohibition would prove to be merely a temporary irritant. Pessimists immediately divested themselves of all their brewing equipment, often at substantial losses. Other firms decided to carry on with related products, and so stay prepared for any modifications to the Volstead Act which would allow for beer. Schlitz, Blatz, Pabst, and Anheuser-Busch, the leading pre-Prohibition shippers, began producing near beer, a malt beverage with under one-half of one percent alcohol. While it was not a commercial success, its production allowed these firms to keep current their beer-making skills. Anheuser-Busch called its near beer “Budweiser” which was “simply the old Budweiser lager beer, brewed according to the traditional method, and then de-alcoholized. … August Busch took the same care in purchasing the costly materials as he had done during pre-prohibition days” (Krebs and Orthwein, 1953, 165). Anheuser-Busch and some of the other leading breweries were granted special licenses by the federal government for brewing alcohol greater than one half of one percent for “medicinal purposes” (Plavchan, 1969, 168). Receiving these licenses gave these breweries a competitive advantage, as they were able to keep their brewing staff active in beer-making.

The shippers, and some local breweries, also made malt syrup. While they officially advertised it as an ingredient for baking cookies, and while its production was left alone by the government, it was readily apparent to all that its primary use was for homemade beer.

Of perhaps equal importance to the day-to-day business activities of the breweries were their investment decisions. Here, as in so many other places, the shippers exhibited true entrepreneurial insight. Blatz, Pabst, and Anheuser-Busch all expanded their inventories of automobiles and trucks, which became key assets after repeal. In the 1910s, Anheuser-Busch invested in motorized vehicles to deliver beer; by the 1920s, it was building its own trucks in great numbers. While it never sought to become a major producer of delivery vehicles, its forward expansion in this area reflected its appreciation of the growing importance of motorized delivery, an insight on which it built after repeal.

The leading shippers also furthered their investments in bottling equipment and machinery, which was used in the production of near beer, root beer, ginger ale, and soft drinks. These products were not the commercial successes beer had been, but they gave breweries important experience in bottling. While 85 percent of pre-Prohibition beer was kegged, during Prohibition over 80 percent of near beer and a smaller, though growing, percentage of soft drinks was sold in bottles.

This remarkable increase in packaged product impelled breweries to refine their packaging skills and modify their retailing practices. As they sold near beer and soft drinks to drugstores and drink stands, they encountered new marketing problems (Cochran, 1948, 340). Experience gained during these years helped the shippers meet the radically different distribution requirements of the post-repeal beer market.

They were learning about canning as well as bottling. In 1925, Blatz’s canned malt syrup sales were more than $1.3 million, significantly greater than its bulk sales. In the early 1920s, Anheuser-Busch bought cans for its malt syrup from the American Can Company, a firm that would gain national prominence in 1935 for helping to pioneer the beer can. Thus, the canning of malt syrup helped create the first contacts between the leading shipping brewers and the American Can Company (Plavchan, 1969, 178; Conny, 1990, 35-36; and American Can Company, 1969, 7-9).

These expensive investments in automobiles and bottling equipment were paid for in part by selling off branch properties, namely saloons (See Cochran, 1948; Plavchan, 1969; Krebs and Orthwein, 1953). Some had equipped their saloons with furniture and bar fixtures, but as Prohibition wore on, they progressively divested themselves of these assets.

1933-1945: The Industry Reawakens after the Repeal of Prohibition

In April 1933 Congress amended the Volstead Act to allow for 3.2 percent beer. Eight months later, in December, the states ratified the Twenty-first Amendment, officially repealing Prohibition. From repeal until World War II, the brewing industry struggled to regain its pre-Prohibition fortunes. Prior to Prohibition, breweries owned or controlled many saloons, which were the dominant retail outlets for alcohol. To prevent the excesses that had been attributed to saloons from recurring, post-repeal legislation forbade alcohol manufacturers from owning bars or saloons, requiring them instead to sell their beer to wholesalers that in turn would distribute their beverages to retailers.

Prohibition meant the end of many small breweries that had been profitable, and that, taken together, had posed a formidable challenge to the large shipping breweries. The shippers, who had much greater investments, were not as inclined to walk away from brewing.[3] After repeal, therefore, they reopened for business in a radically new environment, one in which their former rivals were absent or disadvantaged. From this favorable starting point, they continued to consolidate their position. Several hundred locally oriented breweries did reopen, but were unable to regain their pre-Prohibition competitive edge, and they quickly exited the market. From 1935 to 1940, the number of breweries fell by ten percent.

Table 3: U.S. Brewing Industry Data, 1910-1940

Year Number of Breweries Number of Barrels Produced (millions) Average Barrelage per Brewery Largest Firm Production (millions of barrels) Per Capita Consumption (gallons)
1910 1,568 59.5 37,946 1.5 20.1
1915 1,345 59.8 44,461 1.1 18.7
1934 756 37.7 49,867 1.1 7.9
1935 766 45.2 59,008 1.1 10.3
1936 739 51.8 70,095 1.3 11.8
1937 754 58.7 77,851 1.8 13.3
1938 700 56.3 80,429 2.1 12.9
1939 672 53.8 80,059 2.3 12.3
1940 684 54.9 80,263 2.5 12.5

Source: Cochran, 1948; Krebs and Orthwein, 1953; and United States Brewers Almanac, 1956.

Annual industry output, after struggling in 1934 and 1935, began to approach the levels reached in the 1910s. Yet these increases in total output are somewhat misleading, as the population of the U.S. had risen from 92-98 million in the 1910s to 125-130 million in the 1930s (Brewers Almanac, 1956, 10). This translated directly into the lower per capita consumption levels reported in Table 3.
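
As a rough check on the per capita figures in Table 3, the conversion from barrels to gallons per person can be sketched as follows; the population totals used are approximations assumed for illustration, not figures from the article:

```python
# Rough consistency check on Table 3's per capita column. A barrel
# of beer holds 31 gallons (see footnote [1]); the population figures
# below are approximate U.S. totals assumed only for illustration.

GALLONS_PER_BARREL = 31

def per_capita_gallons(barrels_millions: float, population_millions: float) -> float:
    """Convert total barrelage into gallons per person."""
    return barrels_millions * GALLONS_PER_BARREL / population_millions

# 1915: 59.8 million barrels over roughly 100 million people
print(round(per_capita_gallons(59.8, 100.0), 1))   # about 18.5, near Table 3's 18.7
# 1940: 54.9 million barrels over roughly 132 million people
print(round(per_capita_gallons(54.9, 132.0), 1))   # about 12.9, near Table 3's 12.5
```

The small residual gaps are expected, since barrels withdrawn are not identical to barrels consumed domestically.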

The largest firms grew even larger in the years following repeal, quickly surpassing their pre-Prohibition annual production levels. The post-repeal industry leaders, Anheuser-Busch and Pabst, doubled their annual production levels from 1935 to 1940.

The growing importance of the leading shippers during this period should not be taken for granted: it marked a momentous reversal of pre-Prohibition trends. While medium-sized breweries dominated industry output in the years leading up to Prohibition, in the 1930s the shippers regained the dynamism they had manifested from the 1870s to the 1890s. Table 4 compares the fortunes of the shippers with those of the industry as a whole. From 1877 to 1895, Anheuser-Busch and Pabst, the two most prominent shippers, grew much faster than the industry, and their successes helped pull the industry along. The picture changed during the years 1895 to 1914, when the industry grew much faster than the shippers (Stack, 2000). With the repeal of Prohibition, the tide turned again: from 1934 to 1940, the brewing industry grew very slowly, while Anheuser-Busch and Pabst enjoyed tremendous increases in their annual sales.

Table 4: Percentage Change in Output among Shipping Breweries, 1877-1940

Period Anheuser-Busch Pabst Industry
1877-1895 1,106% 685% 248%
1895-1914 58% -23% 78%
1934-1940 173% 87% 26%

Source: Cochran, 1948; Krebs and Orthwein, 1953; and Brewers Almanac, 1956.
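
The growth figures in Table 4 are ordinary percentage changes between the endpoint years. A minimal sketch, using hypothetical round barrelage numbers rather than either brewery's actual output:

```python
# Percentage change in output between two endpoint years, as in
# Table 4. The barrelage figures below are hypothetical round
# numbers chosen only to illustrate the calculation.

def pct_change(start_barrels: float, end_barrels: float) -> float:
    """Percentage change from start_barrels to end_barrels."""
    return (end_barrels - start_barrels) / start_barrels * 100

# A brewery growing from 100,000 to 1,200,000 barrels over a period
# registers growth on the scale of Anheuser-Busch's 1,106% (1877-1895).
print(round(pct_change(100_000, 1_200_000)))  # 1100
```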

National and regional shippers increasingly dominated the market. Breweries such as Anheuser-Busch, Pabst and Schlitz came to exemplify the modern business enterprise, as described by Alfred Chandler (Chandler, 1977), which adeptly integrated mass production and mass distribution.

Table 5: Leading Brewery Output Levels, 1938-1940

Brewery Plant Location(s) 1938 (bls) 1939 (bls) 1940 (bls)
Anheuser-Busch St. Louis, MO 2,087,000 2,306,000 2,468,000
Pabst Brewing Milwaukee, WI; Peoria Heights, IL 1,640,000 1,650,000 1,730,000
Jos. Schlitz Milwaukee, WI 1,620,000 1,651,083 1,570,000
F & M Schafer Brooklyn, NY 1,025,000 1,305,000 1,390,200
P. Ballantine Newark, NJ 1,120,000 1,289,425 1,322,346
Jacob Ruppert New York, NY 1,417,000 1,325,350 1,228,400
Falstaff Brewing St. Louis, MO; New Orleans, LA; Omaha, NE 622,000 622,004 684,537
Duquesne Brewing Pittsburgh, PA; Carnegie, PA; McKees Rock, PA 625,000 680,000 690,000
Theo. Hamm Brewing St. Paul, MN 750,000 780,000 694,200
Liebman Breweries Brooklyn, NY 625,000 632,558 670,198

Source: Fein, 1942, 35.

World War I had presented a direct threat to the brewing industry. Government officials used wartime emergencies to impose grain rationing, a step that led to the lowering of the alcohol level of beer to 2.75 percent. World War II had a completely different effect on the industry: rather than falling, beer production rose from 1941 to 1945.

Table 6: Production and Per Capita Consumption, 1940-1945

Year Number of Breweries Number of barrels withdrawn (millions) Per Capita Consumption (gallons)
1940 684 54.9 12.5
1941 574 55.2 12.3
1942 530 63.7 14.1
1943 491 71.0 15.8
1944 469 81.7 18.0
1945 468 86.6 18.6

Source: 1979 USBA, 12-14.

During the war, the industry mirrored the nation at large by casting off its sluggish depression-era growth. As the war economy boomed, consumers, both troops and civilians, used some of their wages for beer, and per capita consumption grew by 50 percent between 1940 and 1945.

1945-1980: Following World War II, the Industry Continues to Grow and to Consolidate

Yet the take-off registered during World War II was not sustained during the ensuing decades. Total production continued to grow, but at a slower rate than the overall population.

Table 7: Production and per Capita Consumption, 1945-1980

Year Number of Breweries Number of barrels withdrawn (millions) Per Capita Consumption (gallons)
1945 468 86.6 18.6
1950 407 88.8 17.2
1955 292 89.8 15.9
1960 229 94.5 15.4
1965 197 108.0 16.0
1970 154 134.7 18.7
1975 117 157.9 21.1
1980 101 188.4 23.1

Source: 1993 USBA, 7-8.

The period following World War II was characterized by great industry consolidation. Total output continued to grow, though per capita consumption fell into the 1960s before rebounding to levels above 21 gallons per capita in the 1970s, the highest rates in the nation’s history. Not since the 1910s had consumption levels topped 21 gallons a year; however, there was a significant difference. Prior to Prohibition most consumers bought their beer from local or regional firms, and over 85 percent of the beer was served from casks in saloons. Following World War II, two significant changes radically altered the market for beer. First, the total number of breweries operating fell dramatically. This signaled the growing importance of the large national breweries. While many of these firms (Anheuser-Busch, Pabst, Schlitz, and Blatz) had grown into prominence in the late nineteenth century, the scale of their operations grew tremendously in the years after the repeal of Prohibition. From the mid-1940s to 1980, the five largest breweries saw their share of the national market grow from 19 to 75 percent (Adams, 1995, 125).

Table 8: Concentration of the Brewing Industry, 1947-1981

Year Five Largest (%) Ten Largest (%) Herfindahl Index[4]
1947 19.0 28.2 140
1954 24.9 38.3 240
1958 28.5 45.2 310
1964 39.0 58.2 440
1968 47.6 63.2 690
1974 64.0 80.8 1080
1978 74.3 92.3 1292
1981 75.9 93.9 1614

Source: Adams, 1995, 125.
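
Both measures in Table 8 can be sketched as follows. The market shares used here are hypothetical, since the article does not report firm-level shares:

```python
# Concentration ratio and Herfindahl index, the two measures
# reported in Table 8. Shares are in percent; the list below is
# hypothetical and does not correspond to any actual year.

def concentration_ratio(shares, n):
    """Combined market share of the n largest firms."""
    return sum(sorted(shares, reverse=True)[:n])

def herfindahl(shares):
    """Sum of squared market shares of the largest firms
    (footnote [4]; the published index uses the fifty largest)."""
    return sum(s ** 2 for s in sorted(shares, reverse=True)[:50])

shares = [28, 21, 14, 8, 5, 4, 3, 3, 2, 2]  # hypothetical shares (%)
print(concentration_ratio(shares, 5))  # 76, comparable to 1981's 75.9
print(herfindahl(shares))              # 1552, comparable to 1981's 1614
```

Note how the Herfindahl index rises much faster than the simple concentration ratios when share shifts toward the very largest firms, which is why Table 8's index more than decuples while the five-firm ratio only quadruples.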

The other important change concerned how beer was sold. Prior to Prohibition, nearly all beer was sold on-tap in bars or saloons; while approximately 10-15 percent of the beer was bottled, it was much more expensive than draught beer. In 1935, a few years after repeal, the American Can Company successfully canned beer for the first time. The spread of home refrigeration helped spur consumer demand for canned and bottled beer, and from 1935 onwards, draught beer’s share of sales fell markedly.

Table 9: Packaged vs. Draught Sales, 1935-1980

Year Packaged sales as a percentage of total sales (bottled and canned) Draught sales as a percentage of total sales
1935 30 70
1940 52 48
1945 64 36
1950 72 28
1955 78 22
1960 81 19
1965 82 18
1970 86 14
1975 88 12
1980 88 12

Source: 1979 USBA, 20; 1993 USBA, 14.

The rise of packaged beer contributed to the growing industry consolidation detailed in Table 8.

1980-2000: Continued Growth, the Microbrewery Movement, and International Dimensions of the Brewing Industry

From 1980 to 2000, beer production continued to rise, reaching nearly 200 million barrels in 2000. Per capita consumption hit its highest recorded level in 1981 with 23.8 gallons. Since then, though, consumption levels have dropped a bit, and during the 1990s, consumption was typically in the 21-22 gallon range.

Table 10: Production and Per Capita Consumption, 1980-1990

Year Number of Breweries Number of barrels withdrawn (millions) Per Capita Consumption (gallons)
1980 101 188.4 23.1
1985 105 193.8 22.7
1990 286 201.7 22.6

Source: 1993 USBA, 7-8.

Beginning around 1980, the long decline in the number of breweries slowed and then reversed. Judging solely by the number of breweries in operation, it appeared that a significant change had occurred: the number of firms began to increase, and by the late 1990s, hundreds of new breweries were operating in the U.S. However, this number is rather misleading: the overall industry remained very concentrated, with a three-firm concentration ratio in 2000 of 81 percent.

Table 11: Production Levels of the Leading Breweries, 2000

Brewery Production (millions of barrels)
Anheuser-Busch 99.2
Miller 39.8
Coors 22.7
Total Domestic Sales 199.4

Source: Beverage Industry, May 2003, 19.
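
The 81 percent three-firm concentration ratio quoted above can be checked directly against Table 11's figures:

```python
# Three-firm concentration ratio for 2000, computed from Table 11.

big_three = {"Anheuser-Busch": 99.2, "Miller": 39.8, "Coors": 22.7}
total_domestic = 199.4  # total domestic sales, millions of barrels

cr3 = sum(big_three.values()) / total_domestic * 100
print(round(cr3))  # 81
```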

Although entrepreneurs and beer enthusiasts began hundreds of new breweries during this period, most of them were very small, with production levels of between 5,000 and 100,000 barrels annually. Reflecting their small size, these new firms were nicknamed microbreweries. Collectively, microbreweries have grown to account for approximately 5-7 percent of the total beer market.

Microbreweries represented a new strategy in the brewing industry: rather than competing on the basis of price or advertising, they attempted to compete on the basis of inherent product characteristics. They emphasized the freshness of locally produced beer; they experimented with much stronger malt and hop flavors; they tried new and long-discarded brewing recipes, often reintroducing styles that had been popular in America decades earlier. Together, these breweries have had an influence much greater than their market share would suggest. The big three breweries, Anheuser-Busch, Miller, and Coors, have all tried to incorporate ideas from the microbrewery movement. They introduced new marquee brands intended to compete for some of this market, and when this failed, they bought shares in, or outright control of, some microbreweries.

A final dimension of the brewing industry that has been changing concerns the emerging global market for beer. Until very recently, America was the biggest beer market in the world; as a result, American breweries have not historically looked abroad for additional sales, preferring to expand their share of the domestic market.[5] In the 1980s, Anheuser-Busch began to systematically evaluate its market position. While it had done very well in the U.S., it had not tapped markets overseas; as a result, it began a series of international business dealings. It gradually moved from exporting small amounts of its flagship brand Budweiser to entering into licensing accords whereby breweries in a range of countries such as Ireland, Japan, and Argentina began to brew Budweiser for sale in their domestic markets. In 1995, it established its first breweries outside of the U.S., one in England for the European market and the other in China, to serve the growing markets in China and East Asia.[6]

While U.S. breweries such as Anheuser-Busch have only recently begun to explore opportunities abroad, foreign firms have long appreciated the significance of the American market. In the late 1990s, imports began to increase their market share, and by the early 2000s they accounted for approximately 12 percent of the large U.S. market. Imports and microbrews typically cost more than the big three’s beers, and they provide a wider range of flavors and tastes. One of the most interesting developments in the international market for beer occurred in 2002, when South African Breweries (SAB), the dominant brewery in South Africa and an active firm in Europe, acquired Miller, the second largest brewery in the U.S. Though not widely discussed in the U.S., this may portend a general move towards increased global integration in the world market for beer.

Annotated Bibliography

Adams, Walter and James Brock, editors. The Structure of American Industry, ninth edition. Englewood Cliffs, New Jersey: Prentice Hall, 1995.

Apps, Jerry. Breweries of Wisconsin. Madison, WI: University of Wisconsin Press, 1992. Detailed examination of the history of breweries and brewing in Wisconsin.

Baron, Stanley. Brewed In America: A History of Beer and Ale in the United States. Boston: Little, Brown, and Co., 1962: Very good historical overview of brewing in America, from the Pilgrims through the post-World War II era.

Baum, Dan. Citizen Coors: A Grand Family Saga of Business, Politics, and Beer. New York: Harper Collins, 2000. Very entertaining story of the Coors family and the brewery they made famous.

Beverage Industry (May 2003): 19-20.

Blum, Peter. Brewed In Detroit: Breweries and Beers since 1830. Detroit: Wayne State University Press, 1999. Very good discussion of Detroit’s major breweries and how they evolved. Particularly strong on the Stroh brewery.

Cochran, Thomas. Pabst Brewing Company: The History of an American Business. New York: New York University Press, 1948: A very insightful, well-researched, and well- written history of one of America’s most important breweries. It is strongest on the years leading up to Prohibition.

Downard, William. The Cincinnati Brewing Industry: A Social and Economic History. Ohio University Press, 1973: A good history of brewing in Cincinnati; particularly strong in the years prior to Prohibition.

Downard, William. Dictionary of the History of the American Brewing and Distilling Industries. Westport, CT: Greenwood Press, 1980: Part dictionary and part encyclopedia, a useful compendium of terms, people, and events relating to the brewing and distilling industries.

Duis, Perry. The Saloon: Public Drinking in Chicago and Boston, 1880-1920. Urbana: University of Illinois Press, 1983: An excellent overview of the institution of the saloon in pre-Prohibition America.

Eckhardt, Fred. The Essentials of Beer Style. Portland, OR: Fred Eckhardt Communications, 1995: A helpful introduction into the basics of how beer is made and how beer styles differ.

Ehret, George. Twenty-Five Years of Brewing. New York: Gast Lithograph and Engraving, 1891: An interesting snapshot of an important late nineteenth century New York City brewery.

Elzinga, Kenneth. “The Beer Industry.” In The Structure of American Industry, ninth edition, edited by W. Adams and J. Brock. Englewood Cliffs, New Jersey: Prentice Hall, 1995: A good overview summary of the history, structure, conduct, and performance of America’s brewing industry.

Fein, Edward. “The 25 Leading Brewers in the United States Produce 41.5% of the Nation’s Total Beer Output.” Brewers Digest 17 (October 1942): 35.

Greer, Douglas. “Product Differentiation and Concentration in the Brewing Industry,” Journal of Industrial Economics 29 (1971): 201-19.

Greer, Douglas. “The Causes of Concentration in the Brewing Industry,” Quarterly Review of Economics and Business 21 (1981): 87-106.

Greer, Douglas. “Beer: Causes of Structural Change.” In Industry Studies, second edition, edited by Larry Duetsch, Armonk, New York: M.E. Sharpe, 1998.

Hernon, Peter and Terry Ganey. Under the Influence: The Unauthorized Story of the Anheuser-Busch Dynasty. New York: Simon and Schuster, 1991: Somewhat sensationalistic history of the family that has controlled America’s largest brewery, but some interesting pieces on the brewery are included.

Horowitz, Ira and Ann Horowitz. “Firms in a Declining Market: The Brewing Case.” Journal of Industrial Economics 13 (1965): 129-153.

Jackson, Michael. The New World Guide To Beer. Philadelphia: Running Press, 1988: Good overview of the international world of beer and of America’s place in the international beer market.

Keithan, Charles. The Brewing Industry. Washington D.C: Federal Trade Commission, 1978.

Kerr, K. Austin. Organized for Prohibition. New Haven: Yale Press, 1985: Excellent study of the rise of the Anti-Saloon League in the United States.

Kostka, William. The Pre-prohibition History of Adolph Coors Company: 1873-1933. Golden, CO: self-published book, Adolph Coors Company, 1973: A self-published book by the Coors company that provides some interesting insights into the origins of the Colorado brewery.

Krebs, Roland and Orthwein, Percy. Making Friends Is Our Business: 100 Years of Anheuser-Busch. St. Louis, MO: self-published book, Anheuser-Busch, 1953: A self-published book by the Anheuser-Busch brewery that has some nice illustrations and data on firm output levels. The story is nicely told but rather self-congratulatory.

“Large Brewers Boost Share of U.S. Beer Business,” Brewers Digest, 15 (July 1940): 55-57.

Leisley, Bruce. A History of Leisley Brewing. North Newton, Kansas: Mennonite Press, 1975: A short but useful history of the Leisley Brewing Company. This was the author’s undergraduate thesis.

Lender, Mark and James Martin. Drinking in America. New York: The Free Press, 1987: Good overview of the social history of drinking in America.

McGahan, Ann. “The Emergence of the National Brewing Oligopoly: Competition in the American Market, 1933-58.” Business History Review 65 (1991): 229-284: Excellent historical analysis of the origins of the brewing oligopoly following the repeal of Prohibition.

McGahan, Ann. “Cooperation in Prices and Capacities: Trade Associations in Brewing after Repeal.” Journal of Law and Economics 38 (1995): 521-559.

Meier, Gary and Meier, Gloria. Brewed in the Pacific Northwest: A History of Beer Making in Oregon and Washington. Seattle: Fjord Press, 1991: A survey of the history of brewing in the Pacific Northwest.

Miller, Carl. Breweries of Cleveland. Cleveland, OH: Schnitzelbank Press, 1998: Good historical overview of the brewing industry in Cleveland.

Norman, Donald. Structural Change and Performance in the U.S. Brewing Industry. Ph.D. dissertation, UCLA, 1975.

One Hundred Years of Brewing. Chicago and New York: Arno Press Reprint, 1903 (Reprint 1974): A very important work. Very detailed historical discussion of the American brewing industry through the end of the nineteenth century.

Persons, Warren. Beer and Brewing In America: An Economic Study. New York: United Brewers Industrial Foundation, 1940.

Plavchan, Ronald. A History of Anheuser-Busch, 1852-1933. Ph.D. dissertation, St. Louis University, 1969: Apart from Cochran’s analysis of Pabst, one of a very few detailed business histories of a major American brewery.

Research Company of America. A National Survey of the Brewing Industry. Self-published, 1941: A well-researched industry analysis with a wealth of information and data.

Rorbaugh, William. The Alcoholic Republic: An American Tradition. New York: Oxford University Press, 1979: Excellent scholarly overview of drinking habits in America.

Rubin, Jay. “The Wet War: American Liquor, 1941-1945.” In Alcohol, Reform, and Society, edited by J. Blocker. Westport, CT: Greenwood Press, 1979: Interesting discussion of American drinking during World War II.

Salem, Frederick. 1880. Beer: Its History and Its Economic Value as a National Beverage. New York: Arno Press, 1880 (Reprint 1972): Early but valuable discussion of American brewing industry.

Scherer, F.M. Industry Structure, Strategy, and Public Policy. New York: Harper Collins, 1996: A very good essay on the brewing industry.

Shih, Ko Ching and C. Ying Shih. American Brewing Industry and the Beer Market. Brookfield, WI, 1958: Good overview of the industry with some excellent data tables.

Skilnik, Bob. The History of Beer and Brewing in Chicago: 1833-1978. Pogo Press, 1999: Good overview of the history of brewing in Chicago.

Smith, Greg. Beer in America: The Early Years, 1587 to 1840. Boulder, CO: Brewers Publications, 1998: Well written account of beer’s development in America, from the Pilgrims to mid-nineteenth century.

Stack, Martin. “Local and Regional Breweries in America’s Brewing Industry, 1865-1920.” Business History Review 74 (Autumn 2000): 435-63.

Thomann, Gallus. American Beer: Glimpses of Its History and Description of Its Manufacture. New York: United States Brewing Association, 1909: Interesting account of the state of the brewing industry at the turn of the twentieth century.

United States Brewers Association. Annual Year Book, 1909-1921. Very important primary source document published by the leading brewing trade association.

United States Brewers Foundation. Brewers Almanac, published annually, 1941-present: Very important primary source document published by the leading brewing trade association.

Van Wieren, Dale. American Breweries II. West Point, PA: Eastern Coast Brewiana Association, 1995. Comprehensive historical listing of every brewery in every state, arranged by city within each state.


[1] A barrel of beer is 31 gallons. One Hundred Years of Brewing, Chicago and New York: Arno Press Reprint, 1974: 252.

[2] During the nineteenth century, there were often distinctions between temperance advocates, who differentiated between spirits and beer, and prohibition supporters, who campaigned on the need to eliminate all alcohol.

[3] The major shippers may have been taken aback by the loss suffered by Lemp, one of the leading pre-Prohibition shipping breweries. Lemp was sold at auction in 1922 at a loss of 90 percent on the investment (Baron, 1962, 315).

[4] The Herfindahl Index sums the squared market shares of the fifty largest firms.

[5] China overtook the United States as the world’s largest beer market in 2002.

[6] http://www.anheuser-busch.com/over/international.html

1650 to 1800: The Early Days of Brewing in America

Brewing in America dates to the first communities established by English and Dutch settlers in the early to mid seventeenth century. Dutch immigrants quickly recognized that the climate and terrain of present-day New York were particularly well suited to brewing beer and to growing barley (the grain malted for brewing) and hops, two of beer’s essential ingredients. A 1660 map of New Amsterdam details twenty-six breweries and taverns, a clear indication that producing and selling beer were popular and profitable trades in the American colonies (Baron, Chapter Three). Despite the early popularity of beer, other alcoholic beverages steadily grew in importance, and by the early eighteenth century several of them had eclipsed beer commercially.

Between 1650 and the Civil War, the market for beer did not change a great deal: both production and consumption remained essentially local affairs. Bottling was expensive, and beer did not travel well. Nearly all beer was stored in, and then served from, wooden kegs. While there were many small breweries, it was not uncommon for households to brew their own beer. In fact, several of America’s founding fathers brewed their own beer, including George Washington and Thomas Jefferson (Baron, Chapters 13 and 16).

1800-1865: Brewing Begins to Expand

National production statistics are unavailable before 1810, an omission which reflects the rather limited importance of the early brewing industry. In 1810, America’s 140 commercial breweries collectively produced just over 180,000 barrels of beer.[1] During the next fifty years, total beer output continued to increase, but production remained small scale and local. This is not to suggest, however, that brewing could not prove profitable. In 1797, James Vassar founded a brewery in Poughkeepsie, New York whose successes echoed far beyond the brewing industry. After several booming years Vassar ceded control of the brewery to his two sons, Matthew and John. Following the death of his brother in an accident and a fire that destroyed the plant, Matthew Vassar rebuilt the brewery in 1811. Demand for his beer grew rapidly, and by the early 1840s, the Vassar brewery produced nearly 15,000 barrels of ale and porter annually, a significant amount for this period. Continued investment in the firm facilitated even greater production levels, and by 1860 its fifty employees turned out 30,000 barrels of beer, placing it amongst the nation’s largest breweries. Today, the Vassar name is better known for the college Matthew Vassar endowed in 1860 with earnings from the brewery (Baron, Chapter 17).

1865-1920: Brewing Emerges as a Significant Industry

While there were several hundred small-scale, local breweries in the 1840s and 1850s, beer did not become a mass-produced, mass-consumed beverage until the decades following the Civil War. Several factors contributed to beer’s emergence as the nation’s dominant alcoholic drink. First, widespread immigration from strong beer-drinking countries such as Britain, Ireland, and Germany contributed to the creation of a beer culture in the U.S. Second, America was becoming increasingly industrialized and urbanized during these years, and many workers in the manufacturing and mining sectors drank beer during work and after. Third, many workers began to receive higher wages and salaries during these years, enabling them to buy more beer. Fourth, beer benefited from members of the temperance movement who advocated lower-alcohol beer over higher-alcohol spirits such as rum or whiskey.[2] Fifth, a series of technological and scientific developments fostered greater beer production and the brewing of new styles of beer. For example, artificial refrigeration enabled brewers to brew during warm American summers, and pasteurization, the eponymous procedure developed by Louis Pasteur, helped extend packaged beer’s shelf life, making storage and transportation more reliable (Stack, 2000). Finally, American brewers began brewing lager beer, a style that had long been popular in Germany and other continental European countries. Traditionally, beer in America meant British-style ale. Ales are brewed with top-fermenting yeasts, and this category ranges from light pale ales to chocolate-colored stouts and porters. During the 1840s, American brewers began making German-style lager beers. In addition to requiring a longer maturation period than ales, lager beers use a bottom-fermenting yeast and are much more temperature sensitive. Lagers require a great deal of care and attention from brewers, but to the increasing numbers of nineteenth century German immigrants, lager was synonymous with beer. As the nineteenth century wore on, lager production soared, and by 1900, lager outsold ale by a significant margin.

Together, these factors helped transform the market for beer. Total beer production increased from 3.6 million barrels in 1865 to over 66 million barrels in 1914. By 1910, brewing had grown into one of the leading manufacturing industries in America. Yet, this increase in output did not simply reflect America’s growing population. While the number of beer drinkers certainly did rise during these years, perhaps just as importantly, per capita consumption also rose dramatically, from under four gallons in 1865 to 21 gallons in the early 1910s.

Table 1: Industry Production and per Capita Consumption, 1865-1915

Year National Production (millions of barrels) Per Capita Consumption (gallons)
1865 3.7 3.4
1870 6.6 5.3
1875 9.5 6.6
1880 13.3 8.2
1885 19.2 10.5
1890 27.6 13.6
1895 33.6 15.0
1900 39.5 16.0
1905 49.5 18.3
1910 59.6 20.0
1915 59.8 18.7

Source: United States Brewers Association, 1979 Brewers Almanac, Washington, DC: 12-13.

An equally impressive transformation was underway at the level of the firm. Until the 1870s and 1880s, American breweries had been essentially small scale, local operations. By the late nineteenth century, several companies began to increase their scale of production and scope of distribution. Pabst Brewing Company in Milwaukee and Anheuser-Busch in St. Louis became two of the nation’s first nationally-oriented breweries, and the first to surpass annual production levels of one million barrels. By utilizing the growing railroad system to distribute significant amounts of their beer into distant beer markets, Pabst, Anheuser-Busch and a handful of other enterprises came to be called “shipping” breweries. Though these firms became very powerful, they did not control the pre-Prohibition market for beer. Rather, an equilibrium emerged that pitted large and regional shipping breweries that incorporated the latest innovations in pasteurizing, bottling, and transporting beer against a great number of locally-oriented breweries that mainly supplied draught beer in wooden kegs to their immediate markets (Stack, 2000).

Table 2: Industry Production, the Number of Breweries, and Average Brewery Size

1865-1915

Year National Production (millions of barrels) Number of Breweries Average Brewery Size (barrels)
1865 3.7 2,252 1,643
1870 6.6 3,286 2,009
1875 9.5 2,783 3,414
1880 13.3 2,741 4,852
1885 19.2 2,230 8,610
1890 27.6 2,156 12,801
1895 33.6 1,771 18,972
1900 39.5 1,816 21,751
1905 49.5 1,847 26,800
1910 59.6 1,568 38,010
1915 59.8 1,345 44,461

Source: United States Brewers Association, 1979 Brewers Almanac, Washington DC: 12-13.
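The average-size column in Table 2 is simply national production divided by the number of breweries. A minimal sketch of the arithmetic, using the table's own endpoint figures:

```python
def average_brewery_size(production_barrels, num_breweries):
    """Average output per brewery, in barrels: total production / firm count."""
    return production_barrels / num_breweries

# Table 2 endpoints: 1865 (3.7 million barrels, 2,252 breweries)
# and 1915 (59.8 million barrels, 1,345 breweries).
print(round(average_brewery_size(3_700_000, 2_252)))   # 1643
print(round(average_brewery_size(59_800_000, 1_345)))  # 44461
```

The result matches the table and shows that the size column is measured in barrels: the average brewery grew roughly twenty-seven-fold over the half-century.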

Between the Civil War and national Prohibition, the production and consumption of beer greatly outpaced those of spirits. Though consumption levels of absolute alcohol had peaked in the early 1800s, temperance and prohibition forces grew increasingly vocal and active as the century wore on, and by the late 1800s, they constituted one of the best-organized political pressure groups of the day (Kerr, Chapter 5, 1985). Their efforts culminated in the ratification of the Eighteenth Amendment on January 16, 1919, which, along with the Volstead Act, made the production and distribution of any beverage with more than one-half of one percent alcohol illegal. While estimates of alcohol activity during Prohibition’s thirteen-year reign — from 1920 to 1933 — are imprecise, beer consumption almost certainly fell, though spirit consumption may have remained constant or actually even increased slightly (Rorbaugh, Appendices).

1920-1933: The Dark Years, Prohibition

The most important decision all breweries had to make after 1920 was what to do with their plants and equipment. As they grappled with this question, they made implicit bets as to whether Prohibition would prove to be merely a temporary irritant. Pessimists immediately divested themselves of all their brewing equipment, often at substantial losses. Other firms decided to carry on with related products, and so stay prepared for any modification of the Volstead Act that would again allow beer. Schlitz, Blatz, Pabst, and Anheuser-Busch, the leading pre-Prohibition shippers, began producing near beer, a malt beverage with under one-half of one percent alcohol. While it was not a commercial success, its production allowed these firms to keep their beer-making skills current. Anheuser-Busch called its near beer “Budweiser,” which was “simply the old Budweiser lager beer, brewed according to the traditional method, and then de-alcoholized. … August Busch took the same care in purchasing the costly materials as he had done during pre-prohibition days” (Krebs and Orthwein, 1953, 165). Anheuser-Busch and some of the other leading breweries were granted special licenses by the federal government to brew beverages with more than one-half of one percent alcohol for “medicinal purposes” (Plavchan, 1969, 168). Receiving these licenses gave these breweries a competitive advantage, as they were able to keep their brewing staffs active in beer-making.

The shippers, and some local breweries, also made malt syrup. While they officially advertised it as an ingredient for baking cookies, and while its production was left alone by the government, it was readily apparent to all that its primary use was for homemade beer.

Of perhaps equal importance to the day-to-day business activities of the breweries were their investment decisions. Here, as in so many other places, the shippers exhibited true entrepreneurial insight. Blatz, Pabst, and Anheuser-Busch all expanded their inventories of automobiles and trucks, which became key assets after repeal. In the 1910s, Anheuser-Busch invested in motorized vehicles to deliver beer; by the 1920s, it was building its own trucks in great numbers. While it never sought to become a major producer of delivery vehicles, its forward expansion in this area reflected its appreciation of the growing importance of motorized delivery, an insight on which it built after repeal.

The leading shippers also furthered their investments in bottling equipment and machinery, which was used in the production of near beer, root beer, ginger ale, and soft drinks. These products were not the commercial successes beer had been, but they gave breweries important experience in bottling. While 85 percent of pre-Prohibition beer was kegged, during Prohibition over 80 percent of near beer and a smaller, though growing, percentage of soft drinks was sold in bottles.

This remarkable increase in packaged product impelled breweries to refine their packaging skills and modify their retailing practice. As they sold near beer and soft drinks to drugstores and drink stands, they encountered new marketing problems (Cochran, 1948, 340). Experience gained during these years helped the shippers meet radically different distribution requirements of the post-repeal beer market.

They were learning about canning as well as bottling. In 1925, Blatz’s canned malt syrup sales were more than $1.3 million, significantly greater than its bulk sales. In the early 1920s, Anheuser-Busch bought cans for its malt syrup from the American Can Company, a firm that would gain national prominence in 1935 for helping to pioneer the beer can. Thus, the canning of malt syrup helped create the first contacts between the leading shipping brewers and the American Can Company (Plavchan, 1969, 178; Conny, 1990, 35-36; and American Can Company, 1969, 7-9).

These expensive investments in automobiles and bottling equipment were paid for in part by selling off branch properties, namely saloons (See Cochran, 1948; Plavchan, 1969; Krebs and Orthwein, 1953). Some had equipped their saloons with furniture and bar fixtures, but as Prohibition wore on, they progressively divested themselves of these assets.

1933-1945: The Industry Reawakens after the Repeal of Prohibition

In April 1933 Congress amended the Volstead Act to allow for 3.2 percent beer. Eight months later, in December, Congress and the states ratified the Twenty-first Amendment, officially repealing Prohibition. From repeal until World War II, the brewing industry struggled to regain its pre-Prohibition fortunes. Prior to Prohibition, breweries owned or controlled many saloons, which were the dominant retail outlets for alcohol. To prevent the excesses that had been attributed to saloons from recurring, post-repeal legislation forbade alcohol manufacturers from owning bars or saloons, requiring them instead to sell their beer to wholesalers that in turn would distribute their beverages to retailers.

Prohibition meant the end of many small breweries that had been profitable, and that, taken together, had posed a formidable challenge to the large shipping breweries. The shippers, who had much greater investments, were not as inclined to walk away from brewing.[3] After repeal, therefore, they reopened for business in a radically new environment, one in which their former rivals were absent or disadvantaged. From this favorable starting point, they continued to consolidate their position. Several hundred locally oriented breweries did reopen, but were unable to regain their pre-Prohibition competitive edge, and they quickly exited the market. From 1935 to 1940, the number of breweries fell by ten percent.

Table 3: U.S. Brewing Industry Data, 1910-1940

Year Number of Breweries Number of Barrels Produced (millions) Average Barrelage per Brewery Largest Firm Production (millions of barrels) Per Capita Consumption (gallons)
1910 1,568 59.5 37,946 1.5 20.1
1915 1,345 59.8 44,461 1.1 18.7
1934 756 37.7 49,867 1.1 7.9
1935 766 45.2 59,008 1.1 10.3
1936 739 51.8 70,095 1.3 11.8
1937 754 58.7 77,851 1.8 13.3
1938 700 56.3 80,429 2.1 12.9
1939 672 53.8 80,059 2.3 12.3
1940 684 54.9 80,263 2.5 12.5

Source: Cochran, 1948; Krebs and Orthwein, 1953; and United States Brewers Almanac, 1956.

Annual industry output, after struggling in 1934 and 1935, began to approach the levels reached in the 1910s. Yet these increases in total output are somewhat misleading: the population of the U.S. had risen from between 92 and 98 million in the 1910s to between 125 and 130 million in the 1930s (Brewers Almanac, 1956, 10). This translated directly into the lower per capita consumption levels reported in Table 3.
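The per capita figures in these tables can be approximated from output and population, using the 31-gallon barrel noted in footnote 1. A rough sketch of the arithmetic (the published consumption figures run slightly below this production-based estimate, presumably because they rest on taxable withdrawals rather than gross output, so treat it as an approximation):

```python
GALLONS_PER_BARREL = 31  # footnote 1

def per_capita_gallons(barrels, population):
    """Approximate per capita consumption: total gallons / population."""
    return barrels * GALLONS_PER_BARREL / population

# 1940: 54.9 million barrels, U.S. population roughly 132 million.
# Yields about 12.9 gallons, close to the 12.5 reported in Table 3.
print(round(per_capita_gallons(54.9e6, 132e6), 1))
```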

The largest firms grew even larger in the years following repeal, quickly surpassing their pre-Prohibition annual production levels. The post-repeal industry leaders, Anheuser-Busch and Pabst, doubled their annual production levels from 1935 to 1940.

The growing importance of the leading shippers during this period should not be taken for granted, for it marked a momentous reversal of pre-Prohibition trends. While medium-sized breweries dominated industry output in the years leading up to Prohibition, the shippers regained in the 1930s the dynamism they had manifested from the 1870s to the 1890s. Table 4 compares the fortunes of the shippers in relation to the industry as a whole. From 1877 to 1895, Anheuser-Busch and Pabst, the two most prominent shippers, grew much faster than the industry, and their successes helped pull the industry along. This picture changed during the years 1895 to 1915, when the industry grew much faster than the shippers (Stack, 2000). With the repeal of Prohibition, the tide turned again: from 1934 to 1940, the brewing industry grew very slowly, while Anheuser-Busch and Pabst enjoyed tremendous increases in their annual sales.

Table 4: Percentage Change in Output among Shipping Breweries, 1877-1940

Period Anheuser-Busch Pabst Industry
1877-1895 1,106% 685% 248%
1895-1914 58% -23% 78%
1934-1940 173% 87% 26%

Source: Cochran, 1948; Krebs and Orthwein, 1953; and Brewers Almanac, 1956.

National and regional shippers increasingly dominated the market. Breweries such as Anheuser-Busch, Pabst and Schlitz came to exemplify the modern business enterprise, as described by Alfred Chandler (Chandler, 1977), which adeptly integrated mass production and mass distribution.

Table 5: Leading Brewery Output Levels, 1938-1940

Brewery Plant Location 1938 (bls) 1939 (bls) 1940 (bls)
Anheuser-Busch St. Louis, MO 2,087,000 2,306,000 2,468,000
Pabst Brewing Milwaukee, WI; Peoria Heights, IL 1,640,000 1,650,000 1,730,000
Jos. Schlitz Milwaukee, WI 1,620,000 1,651,083 1,570,000
F & M Schafer Brooklyn, NY 1,025,000 1,305,000 1,390,200
P. Ballantine Newark, NJ 1,120,000 1,289,425 1,322,346
Jacob Ruppert New York, NY 1,417,000 1,325,350 1,228,400
Falstaff Brewing St. Louis, MO; New Orleans, LA; Omaha, NE 622,000 622,004 684,537
Duquesne Brewing Pittsburgh, PA; Carnegie, PA; McKees Rock, PA 625,000 680,000 690,000
Theo. Hamm Brewing St. Paul, MN 750,000 780,000 694,200
Liebman Breweries Brooklyn, NY 625,000 632,558 670,198

Source: Fein, 1942, 35.

World War One had presented a direct threat to the brewing industry. Government officials used wartime emergencies to impose grain rationing, a step that led to the lowering of the alcohol level of beer to 2.75 percent. World War Two had a completely different effect on the industry: rather than output falling, beer production rose from 1941 to 1945.

Table 6: Production and Per Capita Consumption, 1940-1945

Year Number of Breweries Number of barrels withdrawn (millions) Per Capita Consumption (gallons)
1940 684 54.9 12.5
1941 574 55.2 12.3
1942 530 63.7 14.1
1943 491 71.0 15.8
1944 469 81.7 18.0
1945 468 86.6 18.6

Source: 1979 USBA, 12-14.

During the war, the industry mirrored the nation at large by casting off its sluggish depression-era growth. As the war economy boomed, consumers, both troops and civilians, used some of their wages for beer, and per capita consumption grew by 50 percent between 1940 and 1945.

1945-1980: Following World War II, the Industry Continues to Grow and to Consolidate

Yet, the take-off registered during World War II was not sustained during the ensuing decades. Total production continued to grow, but at a slower rate than overall population.

Table 7: Production and per Capita Consumption, 1945-1980

Year Number of Breweries Number of barrels withdrawn (millions) Per Capita Consumption (gallons)
1945 468 86.6 18.6
1950 407 88.8 17.2
1955 292 89.8 15.9
1960 229 94.5 15.4
1965 197 108.0 16.0
1970 154 134.7 18.7
1975 117 157.9 21.1
1980 101 188.4 23.1

Source: 1993 USBA, 7-8.

The period following WWII was characterized by great industry consolidation. Total output continued to grow, though per capita consumption fell into the 1960s before rebounding to levels above 21 gallons per capita in the 1970s, the highest rates in the nation’s history. Not since the 1910s had consumption levels topped 21 gallons a year; however, there was a significant difference. Prior to Prohibition most consumers bought their beer from local or regional firms, and over 85 percent of the beer was served from casks in saloons. Following World War II, two significant changes radically altered the market for beer. First, the total number of breweries operating fell dramatically. This signaled the growing importance of the large national breweries. While many of these firms — Anheuser-Busch, Pabst, Schlitz, and Blatz — had grown into prominence in the late nineteenth century, the scale of their operations grew tremendously in the years after the repeal of Prohibition. From the mid 1940s to 1980, the five largest breweries saw their share of the national market grow from 19 to 75 percent (Adams, 125).

Table 8: Concentration of the Brewing Industry, 1947-1981

Year Five Largest (%) Ten Largest (%) Herfindahl Index[4]
1947 19.0 28.2 140
1954 24.9 38.3 240
1958 28.5 45.2 310
1964 39.0 58.2 440
1968 47.6 63.2 690
1974 64.0 80.8 1080
1978 74.3 92.3 1292
1981 75.9 93.9 1614

Source: Adams, 1995, 125.
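The Herfindahl index reported in Table 8 sums the squared market shares of the largest firms (see footnote 4). A minimal sketch of the calculation; the shares below are illustrative only, not the actual firm-level data behind the table:

```python
def herfindahl(shares_pct):
    """Herfindahl index: sum of squared market shares,
    with shares expressed in percentage points (0-100)."""
    return sum(s ** 2 for s in shares_pct)

# Illustrative only: a market split 50/30/20 among three firms.
print(herfindahl([50, 30, 20]))  # 3800
```

On this scale a pure monopoly scores 10,000, so the rise from 140 in 1947 to 1,614 in 1981 traces the industry's march from a highly fragmented to a concentrated structure.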

The other important change concerned how beer was sold. Prior to Prohibition, nearly all beer was sold on-tap in bars or saloons; the approximately 10-15 percent of beer that was bottled was much more expensive than draught beer. In 1935, a few years after repeal, the American Can Company successfully canned beer for the first time. The spread of home refrigeration helped spur consumer demand for canned and bottled beer, and from 1935 onwards draught beer sales fell markedly.

Table 9: Packaged vs. Draught Sales, 1935-1980

Year Packaged sales as a percentage of total sales (bottled and canned) Draught sales as a percentage of total sales
1935 30 70
1940 52 48
1945 64 36
1950 72 28
1955 78 22
1960 81 19
1965 82 18
1970 86 14
1975 88 12
1980 88 12

Source: 1979 USBA, 20; 1993 USBA, 14.

The rise of packaged beer contributed to the growing industry consolidation detailed in Table 8.

1980-2000: Continued Growth, the Microbrewery Movement, and International Dimensions of the Brewing Industry

From 1980 to 2000, beer production continued to rise, reaching nearly 200 million barrels in 2000. Per capita consumption hit its highest recorded level in 1981 with 23.8 gallons. Since then, though, consumption levels have dropped a bit, and during the 1990s, consumption was typically in the 21-22 gallon range.

Table 10: Production and Per Capita Consumption, 1980-1990

Year Number of Breweries Number of barrels withdrawn (millions) Per Capita Consumption (gallons)
1980 101 188.4 23.1
1985 105 193.8 22.7
1990 286 201.7 22.6

Source: 1993 USBA, 7-8.

Beginning around 1980, the long decline in the number of breweries slowed and then was reversed. Judging solely by the number of breweries in operation, it appeared that a significant change had occurred: the number of firms began to increase, and by the late 1990s, hundreds of new breweries were operating in the U.S. However, this number is rather misleading: the overall industry remained very concentrated, with a three-firm concentration ratio in 2000 of 81 percent.

Table 11: Production Levels of the Leading Breweries, 2000

Brewery Production (millions of barrels)
Anheuser-Busch 99.2
Miller 39.8
Coors 22.7
Total Domestic Sales 199.4

Source: Beverage Industry, May 2003, 19.
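The 81 percent three-firm concentration ratio cited above follows directly from Table 11; a quick check of the arithmetic:

```python
def concentration_ratio(top_firms, total):
    """Share of total sales accounted for by the listed firms, in percent."""
    return 100 * sum(top_firms) / total

# Table 11 figures, millions of barrels: Anheuser-Busch, Miller, Coors
# against total domestic sales of 199.4.
cr3 = concentration_ratio([99.2, 39.8, 22.7], 199.4)
print(round(cr3))  # 81
```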

Although entrepreneurs and beer enthusiasts began hundreds of new breweries during this period, most of them were very small, with annual production levels between 5,000 and 100,000 barrels. Reflecting their small size, these new firms were nicknamed microbreweries. Collectively, microbreweries have grown to account for approximately 5-7 percent of the total beer market.

Microbreweries represented a new strategy in the brewing industry: rather than competing on the basis of price or advertising, they attempted to compete on the basis of inherent product characteristics. They emphasized the freshness of locally produced beer; they experimented with much stronger malt and hop flavors; they tried new and long-discarded brewing recipes, often reintroducing styles that had been popular in America decades earlier. Together, these breweries have had an influence much greater than their market share would suggest. The big three breweries, Anheuser-Busch, Miller, and Coors, have all tried to incorporate ideas from the microbrewery movement. They have introduced new marquee brands intended to compete for some of this market, and when these failed, they have bought shares in or outright control of some microbreweries.

A final dimension of the brewing industry that has been changing concerns the emerging global market for beer. Until very recently, America was the biggest beer market in the world; as a result, American breweries have not historically looked abroad for additional sales, preferring to expand their share of the domestic market.[5] In the 1980s, Anheuser-Busch began to systematically evaluate its market position. While it had done very well in the U.S., it had not tapped markets overseas; as a result, it began a series of international business dealings. It gradually moved from exporting small amounts of its flagship brand Budweiser to entering into licensing accords whereby breweries in a range of countries such as Ireland, Japan, and Argentina began to brew Budweiser for sale in their domestic markets. In 1995, it established its first breweries outside of the U.S., one in England for the European market and the other in China, to service the growing markets in China and East Asia.[6]

While U.S. breweries such as Anheuser-Busch have only recently begun to explore the opportunities abroad, foreign firms have long appreciated the significance of the American market. In the late 1990s, imports began to increase their market share, and by the early 2000s they accounted for approximately 12 percent of the large U.S. market. Imports and microbrews typically cost more than the big three’s beers, but they provide a wider range of flavors and tastes. One of the most interesting developments in the international market for beer occurred in 2002, when South African Breweries (SAB), the dominant brewery in South Africa and an active firm in Europe, acquired Miller, the second largest brewery in the U.S. Though not widely discussed in the U.S., this acquisition may portend a general move toward increased global integration in the world market for beer.

Annotated Bibliography

Adams, Walter and James Brock, editors. The Structure of American Industry, ninth edition. Englewood Cliffs, New Jersey: Prentice Hall, 1995.

Apps, Jerry. Breweries of Wisconsin. Madison, WI: University of Wisconsin Press, 1992. Detailed examination of the history of breweries and brewing in Wisconsin.

Baron, Stanley. Brewed In America: A History of Beer and Ale in the United States.

Boston: Little, Brown, and Co, 1962: Very good historical overview of brewing in America, from the Pilgrims through the post-World War II era.

Baum, Dan. Citizen Coors: A Grand Family Saga of Business, Politics, and Beer. New York: Harper Collins, 2000. Very entertaining story of the Coors family and the brewery they made famous.

Beverage Industry (May 2003): 19-20.

Blum, Peter. Brewed In Detroit: Breweries and Beers since 1830. Detroit: Wayne State University Press, 1999. Very good discussion of Detroit’s major breweries and how they evolved. Particularly strong on the Stroh brewery.

Cochran, Thomas. Pabst Brewing Company: The History of an American Business. New York: New York University Press, 1948: A very insightful, well-researched, and well-written history of one of America’s most important breweries. It is strongest on the years leading up to Prohibition.

Downard, William. The Cincinnati Brewing Industry: A Social and Economic History. Ohio University Press, 1973: A good history of brewing in Cincinnati; particularly strong in the years prior to Prohibition.

Downard, William. Dictionary of the History of the American Brewing and Distilling Industries. Westport, CT: Greenwood Press, 1980: Part dictionary and part encyclopedia, a useful compendium of terms, people, and events relating to the brewing and distilling industries.

Duis, Perry. The Saloon: Public Drinking in Chicago and Boston, 1880-1920. Urbana: University of Illinois Press, 1983: An excellent overview of the institution of the saloon in pre-Prohibition America.

Eckhardt, Fred. The Essentials of Beer Style. Portland, OR: Fred Eckhardt Communications, 1995: A helpful introduction into the basics of how beer is made and how beer styles differ.

Ehret, George. Twenty-Five Years of Brewing. New York: Gast Lithograph and Engraving, 1891: An interesting snapshot of an important late nineteenth century New York City brewery.

Elzinga, Kenneth. “The Beer Industry.” In The Structure of American Industry, ninth edition, edited by W. Adams and J. Brock. Englewood Cliffs, New Jersey: Prentice Hall, 1995: A good overview summary of the history, structure, conduct, and performance of America’s brewing industry.

Fein, Edward. “The 25 Leading Brewers in the United States Produce 41.5% of the Nation’s Total Beer Output.” Brewers Digest 17 (October 1942): 35.

Greer, Douglas. “Product Differentiation and Concentration in the Brewing Industry,” Journal of Industrial Economics 29 (1971): 201-19.

Greer, Douglas. “The Causes of Concentration in the Brewing Industry,” Quarterly Review of Economics and Business 21 (1981): 87-106.

Greer, Douglas. “Beer: Causes of Structural Change.” In Industry Studies, second edition, edited by Larry Duetsch, Armonk, New York: M.E. Sharpe, 1998.

Hernon, Peter and Terry Ganey. Under the Influence: The Unauthorized Story of the Anheuser-Busch Dynasty. New York: Simon and Schuster, 1991: Somewhat sensationalistic history of the family that has controlled America’s largest brewery, but some interesting pieces on the brewery are included.

Horowitz, Ira and Ann Horowitz. “Firms in a Declining Market: The Brewing Case.” Journal of Industrial Economics 13 (1965): 129-153.

Jackson, Michael. The New World Guide To Beer. Philadelphia: Running Press, 1988: Good overview of the international world of beer and of America’s place in the international beer market.

Keithan, Charles. The Brewing Industry. Washington D.C: Federal Trade Commission, 1978.

Kerr, K. Austin. Organized for Prohibition. New Haven: Yale Press, 1985: Excellent study of the rise of the Anti-Saloon League in the United States.

Kostka, William. The Pre-prohibition History of Adolph Coors Company: 1873-1933. Golden, CO: self-published book, Adolph Coors Company, 1973: A self-published book by the Coors company that provides some interesting insights into the origins of the Colorado brewery.

Krebs, Roland and Orthwein, Percy. Making Friends Is Our Business: 100 Years of Anheuser-Busch. St. Louis, MO: self-published book, Anheuser-Busch, 1953: A self-published book by the Anheuser-Busch brewery that has some nice illustrations and data on firm output levels. The story is nicely told but rather self-congratulatory.

“Large Brewers Boost Share of U.S. Beer Business,” Brewers Digest, 15 (July 1940): 55-57.

Leisley, Bruce. A History of Leisley Brewing. North Newton Kansas: Mennonite Press, 1975: A short but useful history of the Leisley Brewing Company. This was the author’s undergraduate thesis.

Lender, Mark and James Martin. Drinking in America. New York: The Free Press, 1987: Good overview of the social history of drinking in America.

McGahan, Ann. “The Emergence of the National Brewing Oligopoly: Competition in the American Market, 1933-58.” Business History Review 65 (1991): 229-284: Excellent historical analysis of the origins of the brewing oligopoly following the repeal of Prohibition.

McGahan, Ann. “Cooperation in Prices and Capacities: Trade Associations in Brewing after Repeal.” Journal of Law and Economics 38 (1995): 521-559.

Meier, Gary and Meier, Gloria. Brewed in the Pacific Northwest: A History of Beer Making in Oregon and Washington. Seattle: Fjord Press, 1991: A survey of the history of brewing in the Pacific Northwest.

Miller, Carl. Breweries of Cleveland. Cleveland, OH: Schnitzelbank Press, 1998: Good historical overview of the brewing industry in Cleveland.

Norman, Donald. Structural Change and Performance in the U.S. Brewing Industry. Ph.D. dissertation, UCLA, 1975.

One Hundred Years of Brewing. Chicago and New York: Arno Press Reprint, 1903 (Reprint 1974): A very important work. Very detailed historical discussion of the American brewing industry through the end of the nineteenth century.

Persons, Warren. Beer and Brewing In America: An Economic Study. New York: United Brewers Industrial Foundation, 1940.

Plavchan, Ronald. A History of Anheuser-Busch, 1852-1933. Ph.D. dissertation, St. Louis University, 1969: Apart from Cochran’s analysis of Pabst, one of a very few detailed business histories of a major American brewery.

Research Company of America. A National Survey of the Brewing Industry. Self-published, 1941: A well-researched industry analysis with a wealth of information and data.

Rorbaugh, William. The Alcoholic Republic: An American Tradition. New York: Oxford University Press, 1979: Excellent scholarly overview of drinking habits in America.

Rubin, Jay. “The Wet War: American Liquor, 1941-1945.” In Alcohol, Reform, and Society, edited by J. Blocker. Westport, CT: Greenwood Press, 1979: Interesting discussion of American drinking during World War II.

Salem, Frederick. 1880. Beer: Its History and Its Economic Value as a National Beverage. New York: Arno Press, 1880 (Reprint 1972): Early but valuable discussion of American brewing industry.

Scherer, F.M. Industry Structure, Strategy, and Public Policy. New York: Harper Collins, 1996: A very good essay on the brewing industry.

Shih, Ko Ching and C. Ying Shih. American Brewing Industry and the Beer Market. Brookfield, WI, 1958: Good overview of the industry with some excellent data tables.

Skilnik, Bob. The History of Beer and Brewing in Chicago: 1833-1978. Pogo Press, 1999: Good overview of the history of brewing in Chicago.

Smith, Greg. Beer in America: The Early Years, 1587 to 1840. Boulder, CO: Brewers Publications, 1998: Well written account of beer’s development in America, from the Pilgrims to mid-nineteenth century.

Stack, Martin. “Local and Regional Breweries in America’s Brewing Industry, 1865-1920.” Business History Review 74 (Autumn 2000): 435-63.

Thomann, Gallus. American Beer: Glimpses of Its History and Description of Its Manufacture. New York: United States Brewing Association, 1909: Interesting account of the state of the brewing industry at the turn of the twentieth century.

United States Brewers Association. Annual Year Book, 1909-1921. Very important primary source document published by the leading brewing trade association.

United States Brewers Foundation. Brewers Almanac, published annually, 1941-present: Very important primary source document published by the leading brewing trade association.

Van Wieren, Dale. American Breweries II. West Point, PA: Eastern Coast Brewiana Association, 1995. Comprehensive historical listing of every brewery in every state, arranged by city within each state.


[1] A barrel of beer is 31 gallons. One Hundred Years of Brewing, Chicago and New York: Arno Press Reprint, 1974: 252.

[2] During the nineteenth century, there were often distinctions between temperance advocates, who differentiated between spirits and beer, and prohibition supporters, who campaigned on the need to eliminate all alcohol.

[3] The major shippers may have been taken aback by the loss suffered by Lemp, one of the leading pre-Prohibition shipping breweries. Lemp was sold at auction in 1922 at a loss of 90 percent on the investment (Baron, 1962, 315).

[4] The Herfindahl index sums the squared market shares of the fifty largest firms.

[5] China overtook the United States as the world’s largest beer market in 2002.

[6] http://www.anheuser-busch.com/over/international.html

Citation: Stack, Martin. “A Concise History of America’s Brewing Industry”. EH.Net Encyclopedia, edited by Robert Whaples. July 4, 2003. URL http://eh.net/encyclopedia/a-concise-history-of-americas-brewing-industry/

The Economic History of Australia from 1788: An Introduction

Bernard Attard, University of Leicester

Introduction

The economic benefits of establishing a British colony in Australia in 1788 were not immediately obvious. The Government’s motives have been debated, but the settlement’s early character and prospects were dominated by its original function as a jail. Colonization nevertheless began a radical change in the pattern of human activity and resource use in that part of the world, and by the 1890s a highly successful settler economy had been established on the basis of a favorable climate in large parts of the southeast (including Tasmania) and the southwest corner; the suitability of land for European pastoralism and agriculture; an abundance of mineral wealth; and the ease with which these resources were appropriated from the indigenous population. This article will focus on the creation of a colonial economy from 1788 and its structural change during the twentieth century. To simplify, it will divide Australian economic history into four periods, two of which overlap. These are defined by the foundation of the ‘bridgehead economy’ before 1820; the growth of a colonial economy between 1820 and 1930; the rise of manufacturing and the protectionist state between 1891 and 1973; and the experience of liberalization and structural change since 1973. The article will conclude by suggesting briefly some of the similarities between Australia and other comparable settler economies, as well as the ways in which it has differed from them.

The Bridgehead Economy, 1788-1820

The description ‘bridgehead economy’ was used by one of Australia’s foremost economic historians, N. G. Butlin, to refer to the earliest decades of British occupation, when the colony was essentially a penal institution. The main settlements were at Port Jackson (modern Sydney, 1788) in New South Wales and Hobart (1804) in what was then Van Diemen’s Land (modern Tasmania). The colony barely survived its first years and was largely neglected for much of the following quarter-century while the British government was preoccupied by the war with France. An important beginning was nevertheless made in the creation of a private economy to support the penal regime. Above all, agriculture was established on the basis of land grants to senior officials and emancipated convicts, and limited freedoms were allowed to convicts to supply a range of goods and services. Although economic life depended heavily on the government Commissariat as a supplier of goods, money and foreign exchange, individual rights in property and labor were recognized, and private markets for both started to function. In 1808, the recall of the New South Wales Corps, whose officers had benefited most from access to land and imported goods (thus hopelessly entangling public and private interests), coupled with the appointment of a new governor, Lachlan Macquarie, in the following year, brought about a greater separation of the private economy from the activities and interests of the colonial government. With a significant increase in the numbers transported after 1810, New South Wales’ future became more secure. As laborers, craftsmen, clerks and tradesmen, many convicts possessed the skills required in the new settlements. As their terms expired, they also added permanently to the free population. Over time, this would inevitably change the colony’s character.

Natural Resources and the Colonial Economy, 1820-1930

Pastoral and Rural Expansion

For Butlin, the developments around 1810 were a turning point in the creation of a ‘colonial’ economy. Many historians have preferred to view those during the 1820s as more significant. From that decade, economic growth was based increasingly upon the production of fine wool and other rural commodities for markets in Britain and the industrializing economies of northwestern Europe. This growth was interrupted by two major depressions during the 1840s and 1890s and stimulated in complex ways by the rich gold discoveries in Victoria in 1851, but the underlying dynamics were essentially unchanged. At different times, the extraction of natural resources, whether maritime before the 1840s or later gold and other minerals, was also important. Agriculture, local manufacturing and construction industries expanded to meet the immediate needs of growing populations, which concentrated increasingly in the main urban centers. The colonial economy’s structure, growth of population and significance of urbanization are illustrated in tables 1 and 2. The opportunities for large profits in pastoralism and mining attracted considerable amounts of British capital, while expansion generally was supported by enormous government outlays for transport, communication and urban infrastructures, which also depended heavily on British finance. As the economy expanded, large-scale immigration became necessary to satisfy the growing demand for workers, especially after the end of convict transportation to the eastern mainland in 1840. The costs of immigration were subsidized by colonial governments, with settlers coming predominantly from the United Kingdom and bringing skills that contributed enormously to the economy’s growth. All this provided the foundation for the establishment of free colonial societies. In turn, the institutions associated with these — including the rule of law, secure property rights, and stable and democratic political systems — created conditions that, on balance, fostered growth. In addition to New South Wales, four other British colonies were established on the mainland: Western Australia (1829), South Australia (1836), Victoria (1851) and Queensland (1859). Van Diemen’s Land (Tasmania after 1856) became a separate colony in 1825. From the 1850s, these colonies acquired responsible government. In 1901, they federated, creating the Commonwealth of Australia.

Table 1
The Colonial Economy: Percentage Shares of GDP, 1891 Prices, 1861-1911

Pastoral Other rural Mining Manuf. Building Services Rent
1861 9.3 13.0 17.5 14.2 8.4 28.8 8.6
1891 16.1 12.4 6.7 16.6 8.5 29.2 10.3
1911 14.8 16.7 9.0 17.1 5.3 28.7 8.3

Source: Haig (2001), Table A1. Totals do not sum to 100 because of rounding.

Table 2
Colonial Populations (thousands), 1851-1911

Australia Colonies Cities
NSW Victoria Sydney Melbourne
1851 257 100 46 54 29
1861 669 198 328 96 125
1891 1,704 608 598 400 473
1911 2,313 858 656 648 593

Source: McCarty (1974), p. 21; Vamplew (1987), POP 26-34.

The process of colonial growth began with two related developments. First, in 1820, Macquarie responded to land pressure in the districts immediately surrounding Sydney by relaxing restrictions on settlement. Soon the outward movement of herdsmen seeking new pastures became uncontrollable. From the 1820s, the British authorities also encouraged private enterprise by the wholesale assignment of convicts to private employers and easy access to land. In 1831, the principles of systematic colonization popularized by Edward Gibbon Wakefield (1796-1862) were put into practice in New South Wales with the substitution of land sales for grants in order to finance immigration. This, however, did not affect the continued outward movement of pastoralists, who simply occupied land where they could find it beyond the official limits of settlement. By 1840, they had claimed a vast swathe of territory two hundred miles in depth running from Moreton Bay in the north (the site of modern Brisbane) through the Port Phillip District (the future colony of Victoria, whose capital Melbourne was marked out in 1837) to Adelaide in South Australia. The absence of any legal title meant that these intruders became known as ‘squatters’, and the terms of their tenure were not finally settled until 1846, after a prolonged political struggle with the Governor of New South Wales, Sir George Gipps.

The impact of the original penal settlements on the indigenous population had been enormous. The consequences of squatting after 1820 were equally devastating as the land and natural resources upon which indigenous hunter-gathering activities and environmental management depended were appropriated on a massive scale. Aboriginal populations collapsed in the face of disease, violence and forced removal until they survived only on the margins of the new pastoral economy, on government reserves, or in the arid parts of the continent least touched by white settlement. The process would be repeated again in northern Australia during the second half of the century.

For the colonists this could happen because Australia was considered terra nullius, vacant land freely available for occupation and exploitation. The encouragement of private enterprise, the reception of Wakefieldian ideas, and the wholesale spread of white settlement were all part of a profound transformation in official and private perceptions of Australia’s prospects and economic value as a British colony. Millennia of fire-stick management to assist hunter-gathering had created inland grasslands in the southeast that were ideally suited to the production of fine wool. Both the physical environment and the official incentives just described raised expectations of considerable profits to be made in pastoral enterprise and attracted a growing stream of British capital in the form of organizations like the Australian Agricultural Company (1824); new corporate settlements in Western Australia (1829) and South Australia (1836); and, from the 1830s, British banks and mortgage companies formed to operate in the colonies. By the 1830s, wool had overtaken whale oil as the colony’s most important export, and by 1850 New South Wales had displaced Germany as the main overseas supplier to British industry (see table 3). Allowing for the colonial economy’s growing complexity, the cycle of growth based upon land settlement, exports and British capital would be repeated twice. The first pastoral boom ended in a depression which was at its worst during 1842-43. Although output continued to grow during the 1840s, the best land had been occupied in the absence of substantial investment in fencing and water supplies. Without further geographical expansion, opportunities for high profits were reduced and the flow of British capital dried up, contributing to a wider downturn caused by drought and mercantile failure.

Table 3
Imports of Wool into Britain (thousands of bales), 1830-50

German Australian
1830 74.5 8.0
1840 63.3 41.0
1850 30.5 137.2

Source: Sinclair (1976), p. 46

When pastoral growth revived during the 1860s, borrowed funds were used to fence properties and secure access to water. This in turn allowed a further extension of pastoral production into the more environmentally fragile semi-arid interior districts of New South Wales, particularly during the 1880s. As the mobs of sheep moved further inland, colonial governments increased the scale of their railway construction programs, some competing to capture the freight to ports. Technical innovation and government sponsorship of land settlement brought greater diversity to the rural economy (see table 4). Exports of South Australian wheat started in the 1870s. The development of drought resistant grain varieties from the turn of the century led to an enormous expansion of sown acreage in both the southeast and southwest. From the 1880s, sugar production increased in Queensland, although mainly for the domestic market. From the 1890s, refrigeration made it possible to export meat, dairy products and fruit.

Table 4
Australian Exports (percentages of total value of exports), 1881-1928/29

Wool Minerals Wheat, flour Butter Meat Fruit
1881-90 54.1 27.2 5.3 0.1 1.2 0.2
1891-1900 43.5 33.1 2.9 2.4 4.1 0.3
1901-13 34.3 35.4 9.7 4.1 5.1 0.5
1920/21-1928/29 42.9 8.8 20.5 5.6 4.6 2.2

Source: Sinclair (1976), p. 166

Gold and Its Consequences

Alongside rural growth and diversification, the remarkable gold discoveries in central Victoria in 1851 brought increased complexity to the process of economic development. The news sparked an immediate surge of gold seekers into the colony, which was soon reinforced by a flood of overseas migrants. Until the 1870s, gold displaced wool as Australia’s most valuable export. Rural industries either expanded output (wheat in South Australia) or, in the case of pastoralists, switched production to meat and tallow, to supply a much larger domestic market. Minerals had been extracted since earliest settlement and, while yields on the Victorian gold fields soon declined, rich mineral deposits continued to be found. During the 1880s alone these included silver, lead and zinc at Broken Hill in New South Wales; copper at Mount Lyell in Tasmania; and gold at Charters Towers and Mount Morgan in Queensland. From 1893, what eventually became the richest goldfields in Australia were discovered at Coolgardie in Western Australia. The mining industry’s overall contribution to output and exports is illustrated in tables 1 and 4.

In Victoria, the deposits of easily extracted alluvial gold were soon exhausted and mining was taken over by companies that could command the financial and organizational resources needed to work the deep lodes. But the enormous permanent addition to the colonial population caused by the gold rush had profound effects throughout eastern Australia, dramatically accelerating the growth of the local market and workforce, and deeply disturbing the social balance that had emerged during the decade before. Between 1851 and 1861, the Australian population more than doubled. In Victoria it increased sevenfold; Melbourne outgrew Sydney, Chicago and San Francisco (see table 2). Significantly enlarged populations required social infrastructure, political representation, employment and land; and the new colonial legislatures were compelled to respond. The way this was played out varied between colonies but the common outcomes were the introduction of manhood suffrage, access to land through ‘free selection’ of small holdings, and, in the Victorian case, the introduction of a protectionist tariff in 1865. The particular age structure of the migrants of the 1850s also had long-term effects on the building cycle, notably in Victoria. The demand for housing accelerated during the 1880s, as the children of the gold generation matured and established their own households. With pastoral expansion and public investment also nearing their peaks, the colony experienced a speculative boom which added to the imbalances already being caused by falling export prices and rising overseas debt. The boom ended with the wholesale collapse of building companies, mortgage banks and other financial institutions during 1891-92 and the stoppage of much of the banking system during 1893.

The depression of the 1890s was worst in Victoria. Its impact on employment was softened by the Western Australian gold discoveries, which drew population away, but the colonial economy had grown to such an extent since the 1850s that the stimulus provided by the earlier gold finds could not be repeated. Severe drought in eastern Australia from the mid-1890s until 1903 caused the pastoral industry to contract. Yet, as we have seen, technological innovation also created opportunities for other rural producers, who were now heavily supported by government with little direct involvement by foreign investors. The final phase of rural expansion, with its associated public investment in rural (and increasingly urban) infrastructure, continued until the end of the 1920s. Yields declined, however, as farmers moved onto the most marginal land. The terms of trade also deteriorated with the oversupply of several commodities in world markets after the First World War. As a result, the burden of servicing foreign debt rose once again. Australia’s position as a capital importer and exporter of natural resources meant that the Great Depression arrived early. From late 1929, the closure of overseas capital markets and the collapse of export prices forced the Federal Government to take drastic measures to protect the balance of payments. The falls in investment and income transmitted the contraction to the rest of the economy. By 1932, average monthly unemployment amongst trade union members was over 22 percent. Although natural resource industries continued to have enduring importance as earners of foreign exchange, the Depression finally ended the long period in which land settlement and technical innovation had together provided a secure foundation for economic growth.

Manufacturing and the Protected Economy, 1891-1973

The ‘Australian Settlement’

This section overlaps considerably in chronology with the previous one, which surveyed the growth of a colonial economy during the nineteenth century based on the exploitation of natural resources. The overlap is a convenient way of approaching the two most important developments in Australian economic history between Federation and the 1970s: the enormous increase in government regulation after 1901 and, closely linked to this, the expansion of domestic manufacturing, which from the Second World War became the most dynamic part of the Australian economy.

The creation of the Commonwealth of Australia on 1 January 1901 broadened the opportunities for public intervention in private markets. The new Federal Government was given clearly-defined but limited powers over obviously ‘national’ matters like customs duties. The rest, including many affecting economic development and social welfare, remained with the states. The most immediate economic consequence was the abolition of inter-colonial tariffs and the establishment of a single Australian market. But the Commonwealth also soon set about transferring to the national level several institutions that the different colonies had experimented with during the 1890s. These included arrangements for the compulsory arbitration of industrial disputes by government tribunals, which also had the power to fix wages, and a discriminatory ‘white Australia’ immigration policy designed to exclude non-Europeans from the labor market. Both were partly responses to organized labor’s electoral success during the 1890s. Urban business and professional interests had always been represented in colonial legislatures; during the 1910s, rural producers also formed their own political parties. Subsequently, state and federal governments were typically formed by either the Australian Labor Party or coalitions of urban conservatives and the Country Party. The constituencies they each represented were thus able to influence the regulatory structure to protect themselves against the full impact of market outcomes, whether in the form of import competition, volatile commodity prices or uncertain employment conditions. The institutional arrangements they created have been described as the ‘Australian settlement’ because they balanced competing producer interests and arguably provided a stable framework for economic development until the 1970s, despite the inevitable costs.

The Growth of Manufacturing

An important part of the ‘Australian settlement’ was the imposition of a uniform federal tariff and its eventual elaboration into a system of ‘protection all round’. The original intended beneficiaries were manufacturers and their employees; indeed, when the first protectionist tariff was introduced in 1907, its operation was linked to the requirement that employers pay their workers ‘fair and reasonable wages’. Manufacturing’s actual contribution to economic growth before Federation has been controversial. The population influx of the 1850s widened opportunities for import-substitution but the best evidence suggests that manufacturing grew slowly as the industrial workforce increased (see table 1). Production was small-scale and confined largely to the processing of rural products and raw materials; assembly and repair-work; or the manufacture of goods for immediate consumption (e.g. soap and candle-making, brewing and distilling). Clothing and textile output was limited to a few lines. For all manufacturing, growth was restrained by the market’s small size and the limited opportunities for technical change it afforded.

After Federation, production was stimulated by several factors: rural expansion, the increasing use of agricultural machinery and refrigeration equipment, and the growing propensity of farm incomes to be spent locally. The removal of inter-colonial tariffs may also have helped. The statistical evidence indicates that between 1901 and the outbreak of the First World War manufacturing grew faster than the economy as a whole, while output per worker increased. But manufacturers also aspired mainly to supply the domestic market and expended increasing energy on retaining privileged access. Tariffs rose considerably between the two world wars. Some sectors became more capital intensive, particularly with the establishment of a local steel industry, the beginnings of automobile manufacture, and the greater use of electricity. But, except during the first half of the 1920s, there was little increase in labor productivity, and the inter-war expansion of textile manufacturing reflected the heavy bias towards import substitution. Not until the Second World War and after did manufacturing growth accelerate and extend to those sectors most characteristic of an advanced industrial economy (table 5). Amongst these were automobiles, chemicals, electrical and electronic equipment, and iron-and-steel. Growth was sustained during the 1950s by similar factors to those operating in other countries during the ‘long boom’, including a growing stream of American direct investment, access to new and better technology, and stable conditions of full employment.

Table 5
Manufacturing and the Australian Economy, 1913-1949

1938-39 prices
Manufacturing share of GDP % Manufacturing annual rate of growth % GDP, annual rate of growth %
1913/14 21.9
1928/29 23.6 2.6 2.1
1948/49 29.8 3.4 2.2

Calculated from Haig (2001), Table A2. Rates of change are average annual rates of growth since the preceding benchmark year shown in the first column.
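The growth figures in Table 5 are compound average annual rates between benchmark years. A minimal sketch of that calculation (the index values below are purely illustrative, not Haig’s underlying series):

```python
def avg_annual_growth(start_value, end_value, years):
    """Compound average annual growth rate between two benchmark observations."""
    return (end_value / start_value) ** (1 / years) - 1

# Illustrative figures only: an output index rising from 100 to 147 over
# the 15 years from 1913/14 to 1928/29 implies growth of roughly 2.6% per
# year, the same order as the manufacturing rate reported in Table 5.
rate = avg_annual_growth(100, 147, 15)
print(f"{rate:.1%}")  # → 2.6%
```

The same formula underlies the per capita GDP growth rates in Table 7, panel B.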

Manufacturing peaked in the mid-1960s at about 28 percent of national output (measured in 1968-69 prices), but natural resource industries remained the most important suppliers of exports. Since the 1920s, over-supply in world markets and the need to compensate farmers for manufacturing protection had meant that virtually all rural industries, with the exception of wool, had been drawn into a complicated system of subsidies, price controls and market interventions at both federal and state levels. The post-war boom in the world economy increased demand for commodities, benefiting rural producers but also creating new opportunities for Australian miners. Most important of all, the first surge of breakneck growth in East Asia opened a vast new market for iron ore, coal and other mining products. Britain’s significance as a trading partner had declined markedly since the 1950s. By the end of the 1960s, Japan had overtaken it as Australia’s largest customer, while the United States was now the main provider of imports.

The mining bonanza contributed to the boom conditions experienced generally after 1950. The Federal Government played its part by using the full range of macroeconomic policies that were also increasingly familiar in other western countries to secure stability and full employment. It encouraged high immigration, relaxing the entry criteria to allow in large numbers of southern Europeans, who added directly to the workforce but also brought knowledge and experience. With state governments, the Commonwealth increased expenditure on education significantly, effectively entering the field for the first time after 1945. Access to secondary education was widened with the abandonment of fees in government schools, and federal finance secured an enormous expansion of university places, especially after 1960. Some weaknesses remained. Enrolment rates after primary school were below those in many industrial countries, and funding for technical education was poor. Despite this, the Australian population’s rising levels of education and skill continued to be important additional sources of growth. Finally, although government advisers expressed misgivings, industry policy remained determinedly interventionist. While state governments competed to attract manufacturing investment with tax and other incentives, by the 1960s protection had reached its highest level, with Australia playing virtually no part in the General Agreement on Tariffs and Trade (GATT), despite being an original signatory. The effects of rising tariffs since 1900 were evident in the considerable decline in Australia’s openness to trade (table 6). Yet, as the post-war boom approached its end, the country still relied upon commodity exports and foreign investment to purchase the manufactures it was unable to produce itself. The impossibility of sustaining growth in this way was already becoming clear, even though the full implications would only be felt during the decades to come.

Table 6
Trade (Exports Plus Imports)
as a Share of GDP, Current Prices, %

1900/1 44.9
1928/29 36.9
1938/39 32.7
1964/65 33.3
1972/73 29.5

Calculated from Vamplew (1987), ANA 119-129.

Liberalization and Structural Change, 1973-2005

From the beginning of the 1970s, instability in the world economy and weakness at home ended Australia’s experience of the post-war boom. During the following decades, manufacturing’s share in output (table 7) and employment fell, while the long-term relative decline of commodity prices meant that natural resources could no longer be relied on to cover the cost of imports, let alone the long-standing deficits in payments for services, migrant remittances and interest on foreign debt. Until the early 1990s, Australia also suffered from persistent inflation and rising unemployment, which remained permanently higher (see figure 1). As a consequence, per capita incomes fluctuated during the 1970s, and the economy contracted in absolute terms during 1982-83 and 1990-91.

Even before the 1970s, new sources of growth and rising living standards had been needed, but the opportunities for economic change were restricted by the elaborate regulatory structure that had evolved since Federation. During that decade itself, policy and outlook were essentially defensive and backward looking, despite calls for reform and some willingness to alter the tariff. Governments sought to protect employment in established industries, while dependence on mineral exports actually increased as a result of the commodity booms at the decade’s beginning and end. By the 1980s, however, it was clear that the country’s existing institutions were failing and fundamental reform was required.

Table 7
The Australian Economy, 1974-2004

A. Percentage shares of value-added, constant prices

1974 1984 1994 2002
Agriculture 4.4 4.3 3.0 2.7
Manufacturing 18.1 15.2 13.3 11.8
Other industry, inc. mining 14.2 14.0 14.6 14.4
Services 63.4 66.4 69.1 71.1

B. Per capita GDP, annual average rate of growth %, constant prices

1973-84 1.2
1984-94 1.7
1994-2004 2.5

Calculated from World Bank, World Development Indicators (Sept. 2005).

Figure 1
Unemployment, 1971-2005, percent

Source: Reserve Bank of Australia (1988); Reserve Bank of Australia, G07Hist.xls. Survey data at August. The method of data collection changed in 1978.

The catalyst was the resumption of the relative fall of commodity prices since the Second World War, which meant that the cost of purchasing manufactured goods inexorably rose for primary producers. The decline had been temporarily reversed by the oil shocks of the 1970s but, from the 1980/81 financial year until the decade’s end, the value of Australia’s merchandise imports exceeded that of merchandise exports in every year but two. The overall deficit on the current account, measured as a proportion of GDP, also became permanently higher, averaging around 4.7 percent. During the 1930s, deflation had been followed by the further closing of the Australian economy. There was no longer much scope for this. Manufacturing had stagnated since the 1960s, suffering especially from the inflation of wage and other costs during the 1970s. It was particularly badly affected by the recession of 1982-83, when unemployment rose to almost ten percent, its highest level since the Great Depression. In 1983, a new federal Labor Government led by Bob Hawke sought to engineer a recovery through an ‘Accord’ with the trade union movement which aimed at creating employment by holding down real wages. But under Hawke and his Treasurer, Paul Keating — who warned colorfully that otherwise the country risked becoming a ‘banana republic’ — Labor also started to introduce broader reforms to increase the efficiency of Australian firms by improving their access to foreign finance and exposing them to greater competition. Costs would fall and exports of more profitable manufactures increase, reducing the economy’s dependence on commodities. During the 1980s and 1990s, the reforms deepened and widened, extending to state governments and continuing with the election of a conservative Liberal-National Party government under John Howard in 1996, as each act of deregulation invited further measures to consolidate them and increase their effectiveness.
Key reforms included the floating of the Australian dollar and the deregulation of the financial system; the progressive removal of protection of most manufacturing and agriculture; the dismantling of the centralized system of wage-fixing; taxation reform; and the promotion of greater competition and better resource use through privatization and the restructuring of publicly-owned corporations, the elimination of government monopolies, and the deregulation of sectors like transport and telecommunications. In contrast with the 1930s, the prospects of further domestic reform were improved by an increasingly favorable international climate. Australia contributed by joining other nations in the Cairns Group to negotiate reductions of agricultural protection during the Uruguay round of GATT negotiations and by promoting regional liberalization through the Asia Pacific Economic Cooperation (APEC) forum.

Table 8
Exports and Openness, 1983-2004

Shares of total exports, %: goods (Rural, Resource, Manuf., Other) and Services.
Trade/GDP: exports plus imports as a share of GDP, %.

Rural Resource Manuf. Other Services Trade/GDP
1983 30 34 9 3 24 26
1989 23 37 11 5 24 27
1999 20 34 17 4 24 37
2004 18 33 19 6 23 39

Calculated from: Reserve Bank of Australia, G10Hist.xls and H03Hist.xls; World Bank, World Development Indicators (Sept. 2005). Chain volume measures, except shares of GDP, 1983, which are at current prices.

The extent to which institutional reform had successfully brought about long-term structural change was still not clear at the end of the century. Recovery from the 1982-83 recession was based upon a strong revival of employment. By contrast, the uninterrupted growth experienced since 1992 arose from increases in the combined productivity of workers and capital. If this persisted, it marked a historic change in the sources of growth, from reliance on the accumulation of capital and the increase of the workforce to improvements in the efficiency of both. From the 1990s, the Australian economy also became more open (table 8). Manufactured goods increased their share of exports, while rural products continued to decline. Yet, although growth was more broadly-based, rapid and sustained (table 7), the country continued to experience large trade and current account deficits, which were augmented by the considerable increase of foreign debt after financial deregulation during the 1980s. Unemployment also failed to return to its pre-1974 level of around 2 percent, although much of the permanent rise occurred during the mid to late 1970s. In 2005, it remained at 5 percent (figure 1). Institutional reform clearly contributed to these changes in economic structure and performance, but they were also influenced by other factors, including falling transport costs, the communications and information revolutions, the greater openness of the international economy, and the remarkable burst of economic growth during the century’s final decades in southeast and east Asia, above all China. Reform was also complemented by policies to provide the skills needed in a technologically-sophisticated, increasingly service-oriented economy. Retention rates in the last years of secondary education doubled during the 1980s, followed by a sharp increase of enrolments in technical colleges and universities.
By 2002, total expenditure on education as a proportion of national income had caught up with the average of member countries of the OECD (Table 9). Shortages were nevertheless beginning to be experienced in the engineering and other skilled trades, raising questions about some priorities and the diminishing relative financial contribution of government to tertiary education.

Table 9
Tertiary Enrolments and Education Expenditure, 2002

Tertiary enrolments, gross percent Education expenditure as a proportion of GDP, percent
Australia 63.22 6.0
OECD 61.68 5.8
United States 70.67 7.2

Source: World Bank, World Development Indicators (Sept. 2005); OECD (2005). Gross enrolments are total enrolments, regardless of age, as a proportion of the population in the relevant official age group. OECD enrolments are for fifteen high-income members only.

Summing Up: The Australian Economy in a Wider Context

Virtually since the beginning of European occupation, the Australian economy had provided the original British colonizers, generations of migrants, and the descendants of both with a remarkably high standard of living. Towards the end of the nineteenth century, this was by all measures the highest in the world (see table 10). After 1900, national income per member of the population slipped behind that of several countries, but continued to compare favorably with most. In 2004, Australia was ranked behind only Norway and Sweden in the United Nations’ Human Development Index. Economic historians have differed over the sources of growth that made this possible. Butlin emphasized the significance of local factors like the unusually high rate of urbanization and the expansion of domestic manufacturing. In important respects, however, Australia was subject to the same forces as other European settler societies in New Zealand and Latin America, and its development bore striking similarities to theirs. From the 1820s, its economy grew as one frontier of an expanding western capitalism. With its close institutional ties to, and complementarities with, the most dynamic parts of the world economy, it drew capital and migrants from them, supplied them with commodities, and shared the benefits of their growth. Like other settler societies, it sought population growth as an end in itself and, from the turn of the twentieth century, aspired to the creation of a national manufacturing base. Finally, when openness to the world economy appeared to threaten growth and living standards, governments intervened to regulate and protect with broader social objectives in mind. But there were also striking contrasts with other settler economies, notably those in Latin America like Argentina, with which it has been frequently compared.
In particular, Australia responded to successive challenges to growth by finding new opportunities for wealth creation with a minimum of political disturbance, social conflict or economic instability, while sharing a rising national income as widely as possible.

Table 10
Per capita GDP in Australia, United States and Argentina
(1990 international dollars)

Australia United States Argentina
1870 3,641 2,457 1,311
1890 4,433 3,396 2,152
1950 7,493 9,561 4,987
1998 20,390 27,331 9,219

Sources: Australia: GDP, Haig (2001) as converted in Maddison (2003); all other data Maddison (1995) and (2001)

From the mid-twentieth century, Australia’s experience also resembled that of many advanced western countries. This included the post-war willingness to use macroeconomic policy to maintain growth and full employment; and, after the 1970s, the abandonment of much government intervention in private markets while at the same time retaining strong social services and seeking to improve education and training. Australia also experienced a similar relative decline of manufacturing, permanent rise of unemployment, and transition to a more service-based economy typical of high income countries. By the beginning of the new millennium, services accounted for over 70 percent of national income (table 7). Australia remained vulnerable as an exporter of commodities and importer of capital but its endowment of natural resources and the skills of its population were also creating opportunities. The country was again favorably positioned to take advantage of growth in the most dynamic parts of the world economy, particularly China. With the final abandonment of the White Australia policy during the 1970s, it had also started to integrate more closely with its region. This was further evidence of the capacity to change that allowed Australians to face the future with confidence.

References:

Anderson, Kym. “Australia in the International Economy.” In Reshaping Australia’s Economy: Growth with Equity and Sustainability, edited by John Nieuwenhuysen, Peter Lloyd and Margaret Mead, 33-49. Cambridge: Cambridge University Press, 2001.

Blainey, Geoffrey. The Rush that Never Ended: A History of Australian Mining, fourth edition. Melbourne: Melbourne University Press, 1993.

Borland, Jeff. “Unemployment.” In Reshaping Australia’s Economy: Growth with Equity and Sustainability, edited by John Nieuwenhuysen, Peter Lloyd and Margaret Mead, 207-228. Cambridge: Cambridge University Press, 2001.

Butlin, N. G. Australian Domestic Product, Investment and Foreign Borrowing 1861-1938/39. Cambridge: Cambridge University Press, 1962.

Butlin, N.G. Economics and the Dreamtime, A Hypothetical History. Cambridge: Cambridge University Press, 1993.

Butlin, N.G. Forming a Colonial Economy: Australia, 1810-1850. Cambridge: Cambridge University Press, 1994.

Butlin, N.G. Investment in Australian Economic Development, 1861-1900. Cambridge: Cambridge University Press, 1964.

Butlin, N. G., A. Barnard and J. J. Pincus. Government and Capitalism: Public and Private Choice in Twentieth Century Australia. Sydney: George Allen and Unwin, 1982.

Butlin, S. J. Foundations of the Australian Monetary System, 1788-1851. Sydney: Sydney University Press, 1968.

Chapman, Bruce, and Glenn Withers. “Human Capital Accumulation: Education and Immigration.” In Reshaping Australia’s Economy: Growth with Equity and Sustainability, edited by John Nieuwenhuysen, Peter Lloyd and Margaret Mead, 242-267. Cambridge: Cambridge University Press, 2001.

Dowrick, Steve. “Productivity Boom: Miracle or Mirage?” In Reshaping Australia’s Economy: Growth with Equity and Sustainability, edited by John Nieuwenhuysen, Peter Lloyd and Margaret Mead, 19-32. Cambridge: Cambridge University Press, 2001.

Economist. “Has he got the ticker? A survey of Australia.” 7 May 2005.

Haig, B. D. “Australian Economic Growth and Structural Change in the 1950s: An International Comparison.” Australian Economic History Review 18, no. 1 (1978): 29-45.

Haig, B.D. “Manufacturing Output and Productivity 1910 to 1948/49.” Australian Economic History Review 15, no. 2 (1975): 136-61.

Haig, B.D. “New Estimates of Australian GDP: 1861-1948/49.” Australian Economic History Review 41, no. 1 (2001): 1-34.

Haig, B. D., and N. G. Cain. “Industrialization and Productivity: Australian Manufacturing in the 1920s and 1950s.” Explorations in Economic History 20, no. 2 (1983): 183-98.

Jackson, R. V. Australian Economic Development in the Nineteenth Century. Canberra: Australian National University Press, 1977.

Jackson, R.V. “The Colonial Economies: An Introduction.” Australian Economic History Review 38, no. 1 (1998): 1-15.

Kelly, Paul. The End of Certainty: The Story of the 1980s. Sydney: Allen and Unwin, 1992.

Macintyre, Stuart. A Concise History of Australia. Cambridge: Cambridge University Press, 1999.

McCarthy, J. W. “Australian Capital Cities in the Nineteenth Century.” In Urbanization in Australia; The Nineteenth Century, edited by J. W. McCarthy and C. B. Schedvin, 9-39. Sydney: Sydney University Press, 1974.

McLean, I.W. “Australian Economic Growth in Historical Perspective.” The Economic Record 80, no. 250 (2004): 330-45.

Maddison, Angus. Monitoring the World Economy 1820-1992. Paris: OECD, 1995.

Maddison, Angus. The World Economy: A Millennial Perspective. Paris: OECD, 2001.

Maddison, Angus. The World Economy: Historical Statistics. Paris: OECD, 2003.

Meredith, David, and Barrie Dyster. Australia in the Global Economy: Continuity and Change. Cambridge: Cambridge University Press, 1999.

Nicholas, Stephen, editor. Convict Workers: Reinterpreting Australia’s Past. Cambridge: Cambridge University Press, 1988.

OECD. Education at a Glance 2005 – Tables OECD, 2005 [cited 9 February 2006]. Available from http://www.oecd.org/document/11/0,2340,en_2825_495609_35321099_1_1_1_1,00.html.

Pope, David, and Glenn Withers. “The Role of Human Capital in Australia’s Long-Term Economic Growth.” Paper presented to 24th Conference of Economists, Adelaide, 1995.

Reserve Bank of Australia. “Australian Economic Statistics: 1949-50 to 1986-7: I Tables.” Occasional Paper No. 8A (1988).

Reserve Bank of Australia. Current Account – Balance of Payments – H1 [cited 29 November 2005]. Available from http://www.rba.gov.au/Statistics/Bulletin/H01bhist.xls.

Reserve Bank of Australia. Gross Domestic Product – G10 [cited 29 November 2005]. Available from http://www.rba.gov.au/Statistics/Bulletin/G10hist.xls.

Reserve Bank of Australia. Unemployment – Labour Force – G1 [cited 2 February 2006]. Available from http://www.rba.gov.au/Statistics/Bulletin/G07hist.xls.

Schedvin, C. B. Australia and the Great Depression: A Study of Economic Development and Policy in the 1920s and 1930s. Sydney: Sydney University Press, 1970.

Schedvin, C.B. “Midas and the Merino: A Perspective on Australian Economic History.” Economic History Review 32, no. 4 (1979): 542-56.

Sinclair, W. A. The Process of Economic Development in Australia. Melbourne: Longman Cheshire, 1976.

United Nations Development Programme. Human Development Index [cited 29 November 2005]. Available from http://hdr.undp.org/statistics/data/indicators.cfm?x=1&y=1&z=1.

Vamplew, Wray, editor. Australians: Historical Statistics. Part of Australians: A Historical Library, edited by Alan D. Gilbert and K. S. Inglis. Sydney: Fairfax, Syme and Weldon Associates, 1987.

White, Colin. Mastering Risk: Environment, Markets and Politics in Australian Economic History. Melbourne: Oxford University Press, 1992.

World Bank. World Development Indicators ESDS International, University of Manchester, September 2005 [cited 29 November 2005]. Available from http://www.esds.ac.uk/International/Introduction.asp.

Citation: Attard, Bernard. “The Economic History of Australia from 1788: An Introduction”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/the-economic-history-of-australia-from-1788-an-introduction/

Apprenticeship in the United States

Daniel Jacoby, University of Washington, Bothell

Once the principal means by which craft workers learned their trades, apprenticeship plays a relatively small part in American life today. The essence of this institution has always involved an exchange of labor for training, yet apprenticeship has been far from constant over time as its survival in the United States has required nearly continual adaptation to new challenges.

Four distinct challenges define the periods of major apprenticeship changes. The colonial period required the adaptation of Old World practices to New World contexts. In the era of the new republic, apprenticeship was challenged by the clash between traditional authority and the logic of expanding markets and contracts. The main concern after the Civil War was to find a training contract that could resolve the heightening tensions between organized labor and capital. Finally, in the modern era following World War I, industrialization’s skill-leveling effects posed a challenge that apprenticeship largely failed to meet. Apprenticeship lost ground as young people increasingly turned to schooling as the vehicle for upward social mobility that offset industrialization’s leveling effects. After reviewing these episodes, this essay concludes by asking whether we are now in a new era of challenges that will reshape apprenticeship.

Apprenticeship came to American soil by way of England, where it was the first step on the road to economic independence. In England, master craftsmen hired apprentices in an exchange of training for service. Once their term of apprenticeship was completed, former apprentices traveled from employer to employer earning wages as journeymen. When, or if, they accumulated enough capital, journeymen set up shop as independent masters and became members of their craft guilds. These institutions had the power to bestow and withdraw rights and privileges upon their members, and thereby to regulate competition among themselves.

One major concern of the guilds was to prevent unrestricted trade entry, and thus apprenticeship became the object of much regulation. Epstein (1998), however, argues that monopoly or rent-seeking activity (the deliberate production of scarcity) was only incidental to the guilds’ primary interest in supplying skilled workmen. To the extent that guilds successfully regulated apprenticeship in Britain, that pattern was less readily replicated in the Americas, whose colonists came to exploit the bounty of natural resources under mercantilist proscriptions that forbade most forms of manufacturing. The result was an agrarian society practically devoid of large towns and guilds. Absent these entities, the regulation of apprenticeship relied upon government actions that appear to have become more pronounced towards the mid-eighteenth century. The passage of Britain’s 1563 Statute of Artificers involved government regulation in the Old World as well. However, as Davies (1956) shows, English apprenticeship was different in that craft guilds and their attendant traditions were more significant.

The Colonial Period

During the colonial period, the U.S. was predominantly an agrarian society. As late as 1790 no city possessed a population in excess of 50,000. In 1740, the largest colonial city, Philadelphia, possessed 13,000 inhabitants. Even so, the colonies could not operate successfully without some skilled tradesmen in fields like carpentry, cordwaining (shoemaking), and coopering (barrel making). Neither the training of slaves nor the immigration of skilled European workmen was sufficient to prevent the labor-short colonies from developing their own apprenticeship systems. No uniform system of apprenticeship developed because municipalities, and even states, lacked the authority either to enforce their rules outside their own jurisdictions or to restore distant runaways to their masters. Accordingly, apprenticeship remained a local institution.

Records from the colonial period are sparse, but both Philadelphia and Boston have preserved important evidence. In Philadelphia, Quimby (1963) traced official apprenticeship back, at least, to 1716. By 1745 the city had recorded 149 indentures in 33 crafts. The stock of apprentices grew more rapidly than did population and after an additional 25 years it had reached 537.

Quimby’s Colonial Philadelphia data indicate that apprenticeship typically consigned boys, aged 14 to 17, to serve their masters until their twenty-first birthdays. Girls, too, were apprenticed, but females comprised less than one-fifth of recorded indentures, and most were bound to learn housewifery. One significant variation on the standard indenture involved the binding of parish orphans. Such paupers were usually indentured to less remunerative trades, typically farming. Yet another variation involved the coveted apprenticeships with merchants, lawyers, and other professions. In these instances, parents usually paid masters beforehand to take their children.

Apprenticeship’s distinguishing feature was its contract of indenture, which elaborated the terms of the arrangement. This contract differed in two major ways from the contracts of indenture that bound immigrants. First, the apprenticeship contract involved young people and, as such, required the signature of their parents or guardians. Second, indentured servitude, which Galenson (1981) argues was adapted from apprenticeship, substituted Atlantic transportation for trade instruction in the exchange of a servant’s labor. Both forms of labor involved some degree of mutuality or voluntary agreement. In apprenticeship, however, legal or natural parents transferred legal authority over their child to another, the apprentice’s master, for a substantial portion of his or her youth. In exchange for rights to their child’s labor, parents were also relieved of direct responsibility for child rearing and occupational training. Thus the child’s consent could be of less consequence than that of the parents.

The articles of indenture typically required apprentices to serve their terms faithfully and obediently. Indentures commonly included clauses prohibiting specific behaviors, such as playing dice or fornication. Masters generally pledged themselves to raise, feed, lodge, educate, and train apprentices and then to provide “freedom dues” consisting of clothes, tools, or money once they completed the terms of their indentures. Parents or guardians were co-signatories of the agreements. Although practice in the American colonies is incompletely documented, we know that in Canada parents were held financially responsible to apprentice masters when their children ran away.

To enforce their contracts parties to the agreement could appeal to local magistrates. Problems arose for many reasons, but the long duration of the contract inevitably involved unforeseen contingencies giving rise to dissatisfactions with the arrangements. Unlike other simple exchanges of goods, the complications of child rearing inevitably made apprenticeship a messy concern.

The Early Republic

William Rorabaugh (1986) argues that the revolutionary era increased the complications inherent in apprenticeship. The rhetoric of independence could not be contained within the formal political realm involving relations between nations, but instead spilled into the interpersonal realm, wherein the independence to govern one’s self challenged traditions of deference based upon social status. Freedom was increasingly equated with contractual relations and consent. However, exchange based on contract undermined the authority of masters. And so it was with servants and apprentices who, empowered by Republican ideology, began to challenge their masters, conceiving themselves not as willful children but as free and independent citizens of the Revolution.

The revolutionary logic of contract ate away at the edges of the long-term apprenticeship relationship, and such indentures became substantially less common in the first half of the nineteenth century. Gillian Hamilton (2000) has tested whether the decline in apprenticeship stemmed from problems in enforcing long-term contracts, or whether it was the result of a shift by employers to hire unskilled workers for factory work. While neither theory alone explains the decline in the stock of apprenticeship contracts, both demonstrate how emerging contractual relations undermined tradition by providing new choices. During this period she finds that masters began to pay their apprentices, that over time those payments rose more steeply with experience, and that indenture contracts were shortened, all of which suggest employers consciously patterned contracts to reduce the turnover that resulted when apprentices left for preferable situations. That employers increasingly preferred to be freed from the long-term obligations they owed their apprentices suggests that these responsibilities in loco parentis imposed burdens upon masters as well as apprentices. The payment of money wages reflected, in part, the costs of masters’ parental responsibilities, which could now more easily be avoided in urban areas by shifting them back to youths and their parents.

Hamilton’s evidence comes from Montreal, where indentures were centrally recorded. While Canadian experiences differed in several identifiable ways from those in the United States, the broader trends she describes are consistent with those observed in the United States. In Frederick County, Maryland, for example, Rorabaugh (1986) finds that the percentage of white males formally bound as apprentices fell from nearly 20% of boys aged 15 to 20 to less than 1% between 1800 and 1860. The U.S. decline, however, is more difficult to gauge because informal apprenticeship arrangements that were not officially recorded appear to have risen. In key respects, issues pertaining to the master’s authority remained an unresolved complication, preventing a uniform apprenticeship system and encouraging informal apprenticeship arrangements well into the period after slavery was abolished.

Postbellum Period

While the Thirteenth Amendment to the U.S. Constitution in 1865 formally ended involuntary servitude, the boundary line between involuntary and voluntary contracts remained problematic, especially in regard to apprenticeship. Although courts explained that labor contracts enforced under penalty of imprisonment generally created involuntary servitude, employers explored contract terms that gave them unusual authority over their apprentices. States sometimes developed statutes to protect minors by prescribing the terms of legally enforceable apprenticeship indentures. Yet doing so necessarily limited freedom of contract, making it difficult, if not impossible, to rearrange the terms of an apprenticeship agreement to fit any particular situation. Both the age of the apprentice and the length of the indenture agreement made the arrangement vulnerable to abuse. However, it proved extremely difficult for lawmakers to specify the precise circumstances warranting statutory indentures without making them unattractive. In good measure this was because representatives of labor and capital seldom agreed when it came to public policy regarding skilled employment. Yet the need for some policy increased, especially after the labor scarcities created by the Civil War.

Companies, unions and governments all sought solutions to the shortages of skills caused by the Civil War. In Boston and Chicago, for example, women were recruited to perform skilled typography work that had previously been restricted to men. The Connecticut legislature authorized a new company to recruit and contract skilled workers from abroad. Other states either wrote new apprenticeship laws or experimented with new ways of training workers. The success of craft unionism was itself an indication of the dearth of organizations capable of implementing skill standards. Virtually any new action challenged the authority of either labor or capital, leading one or the other to contest them. Jacoby (1996) argues that the most important new strategy involved the introduction of short trade school courses intended to substitute for apprenticeship. Schooling fed employers’ hope that they might sidestep organized labor’s influence in determining the supply of skilled labor.

Independent of the expansion of schooling, issues pertaining to apprenticeship contract rights gained in importance. Firms like Philadelphia’s Baldwin Locomotive held back wages until contract completion in order to keep their apprentices with them. The closer young apprentices were bound to their employers, the less viable became organized labor’s demand to consult over or to unilaterally control the expansion or contraction of training. One-sided long-term apprenticeship contracts provided employers other advantages as well. Once under contract, competitors and unions could be legally enjoined for “enticing” their workers into breaking their contracts. Although employers rarely brought suit against each other for enticement of their apprentices, their associations, like the Metal Manufacturers’ Association in Philadelphia, prevented apprentices from leaving one master for another by requiring consent and recommendation of member employers (Harris, 2000). Employer associations could, in this way, effectively blacklist union supporters and require apprentices to break strikes.

These employer actions did not occur in a vacuum. Many businessmen faulted labor for tying their hands when responding to increased demands for labor. Unions lost support among the working class when they restricted the number of apprentices an employer could hire. Such restrictions frequently involved ethnic, racial and gender preferences that locked minorities out of the well-paid crafts. Organized labor’s control was, nonetheless, less effective than it would have liked: It could not restrict non-union firms from taking on apprentices nor was it able to stem the flow of half-trained craftsmen from the small towns where apprenticeship standards were weak. Yet by fines, boycotts, and walkouts organized labor did intimidate workers and firms who disregarded their rules. Such actions failed to endear it to less skilled workers, who often regarded skilled unionists as a conservative aristocracy only slightly less onerous, if at all, than big business.

This weakness in labor’s support made it vulnerable to Colonel Richard T. Auchmuty’s New York Trade School. Auchmuty’s school, begun in 1881, became the leading institution challenging labor’s control over its own supply. The school was designed and marketed as an alternative to apprenticeship, and Auchmuty encouraged its use as a weapon in “the battle for the boys” waged by New York City plumbers in 1886-87. Those years mark the starting point for a series of skirmishes between organized capital and labor in which momentum seesawed back and forth. Those battles encouraged public officials and educators to get involved. Where the public sector took greater interest in training, schooling more frequently supplemented, rather than replaced, on-the-job apprenticeship training. Public involvement also helped formalize the structure of trade learning in ways that apprenticeship laws had failed to do.

The Modern Era

In 1917, with the benefit of prior collaborations involving the public sector, a coalition of labor, business and social services secured passage of the Smith-Hughes Law to provide federal aid for vocational education. Despite this broad support, it is not clear that the bill would have passed had it not been for America’s entry into the First World War and the attendant priority for an increase in the supply of skilled labor. Prior to this law, demands for skilled labor had been partially muted by new mass production technologies and scientific management, both of which reduced industry’s reliance upon craft workers. War changed the equation.

Not only did war spur the Wilson administration into training more workers, it also raised organized labor’s visibility in industries, like shipbuilding, where it had previously been locked out. Under Smith-Hughes, cities as distant as Seattle and New York invited unions to join formal training partnerships. In the twenties, a number of school systems provided apprentice extension classes in which prior employment was made a prerequisite, thereby limiting public apprenticeship support to workers who were already unionized. These arrangements made it easier for organized labor to control entry into the craft. This was most true in the building trades, where the unions remained well organized throughout the twenties. The fast-expanding factory sector, however, more successfully reduced union influence. The largest firms, such as the General Electric Company, had long since set up their own non-union, usually informal, apprenticeship plans. Large firms able to provide significant employment security, like those that belonged to the National Association of Corporation Schools, typically operated in a union-free environment, which enabled them to establish training arrangements that were flexible and responsive to their needs.

The depression in the early thirties stopped nearly all training. Moreover, the prior industrial transformation shifted power within organized labor from the American Federation of Labor’s bedrock craft unions to the Congress of Industrial Organizations. With this change, labor increasingly emphasized pay equality by narrowing skill differentials and accordingly de-emphasized training issues. Even so, by the late 1930s shortages of skilled workers were again felt, leading to a national apprenticeship plan. Under the Fitzgerald Act (1937), apprenticeship standards were formalized in indentures that specified the kinds and quantity of training to be provided, as well as the responsibilities of joint labor-management apprenticeship committees. Standards helped minimize incentives to abuse low-wage apprentices through inadequate training and advancement. Nationally, however, the percentage of apprentices remained very small, and young people increasingly chose formal education rather than apprenticeship to open opportunities. While the Fitzgerald Act worked to protect labor’s immediate interests, very few firms chose formal apprenticeships when less structured training relationships were possible.

This system persisted through the heyday of organized labor in the forties and fifties, but began to come undone in the late sixties and seventies, particularly when Civil Rights groups attacked the racial and gender discrimination too often used to ration scarce apprenticeship opportunities. Discrimination was sometimes passive, occurring as the result of preferential treatment extended to the sons and friends of craft workers, while in other instances it involved active and deliberate policies aimed at exclusion (Hill, 1968). Affirmative action accords and court orders have forced unions and firms to provide more apprenticeship opportunities for minorities.

Along with the declining influence of labor and civil rights organizations, work relations appear to have changed as we begin the new millennium. Forms of labor contracting that provide fewer benefits and less security are on the rise. Incomes have once again become more stratified by education and skill levels, making training a much more important issue. Gary Becker’s (1964) work on human capital theory has encouraged businessmen and educators to rethink the economics of training and apprenticeship. Conceptualizing training as an investment, the theory suggests that enforceable long-term apprenticeships enable employers to increase their investments in the skills of their workers. Binding indentures are rationalized as efficient devices to prevent youths from absconding with the capital employers have invested in them. Armed with this understanding, policy makers have increasingly permitted and encouraged arrangements that look more like older-style, employer-dominated apprenticeships. Whether this is the beginning of a new era for apprenticeship, or merely a return to the prior battles over the abuses of one-sided employer control, only time will tell.

References and further reading:

Becker, Gary. Human Capital. Chicago: University of Chicago Press, 1964.

Davies, Margaret. The Enforcement of English Apprenticeship, 1563-1642. Cambridge, MA: Harvard University Press, 1956.

Douglas, Paul. American Apprenticeship and Industrial Education. New York: Columbia University Press, 1921.

Elbaum, Bernard. “Why Apprenticeship Persisted in Britain but Not in the United States.” Journal of Economic History 49 (1989): 337-49.

Epstein, S. R. “Craft Guilds, Apprenticeship and Technological Change in Pre-industrial Europe.” Journal of Economic History 58, no. 3 (1998): 684-713.

Galenson, David. White Servitude in Colonial America: An Economic Analysis. New York: Cambridge University Press, 1981.

Hamilton, Gillian. “The Decline of Apprenticeship in North America: Evidence from Montreal.” Journal of Economic History 60, no. 3, (2000): 627-664.

Harris, Howell John. Bloodless Victories: The Rise and Decline of the Open Shop Movement in Philadelphia, 1890-1940. New York: Cambridge University Press, 2000.

Hill, Herbert. “The Racial Practices of Organized Labor: The Contemporary Record.” In The Negro and The American Labor Movement, edited by Julius Jacobson. Garden City, New York: Doubleday Press, 1968.

Jacoby, Daniel. “The Transformation of Industrial Apprenticeship in the United States.” Journal of Economic History 52, no. 4 (1991): 887- 910.

Jacoby, Daniel. “Plumbing the Origins of American Vocationalism.” Labor History 37, no. 2 (1996): 235-272.

Licht, Walter. Getting Work: Philadelphia, 1840-1950. Cambridge, MA: Harvard University Press, 1992.

Quimby, Ian M.G. “Apprenticeship in Colonial Philadelphia.” Ph.D. Dissertation, University of Delaware, 1963.

Rorabaugh, William. The Craft Apprentice from Franklin to the Machine Age in America. New York: Oxford University Press, 1986.

Citation: Jacoby, Daniel. “Apprenticeship in the United States”. EH.Net Encyclopedia, edited by Robert Whaples. August 29, 2004. URL http://eh.net/encyclopedia/apprenticeship-in-the-united-states/

Agricultural Tenures and Tithes

David R. Stead, University of York

The Tenurial Ladder

Agricultural land tenures, the arrangements under which farmers occupied farmland, continue to be the subject of extensive study by agricultural historians and economists. They have identified a “ladder” of tenures broadly classified by the degree of independence each type offered the farmer. Some of the key features of the main forms of tenurial agreements are briefly described below. In practice, though, the characteristics of these different tenures shaded into one another and they often had their own particular local features, ensuring that the distinctions among them were frequently blurred.

At the top of the tenurial ladder was owner occupation, where the farmer owned and farmed his property as a peasant proprietor or capitalist producer. All other types of tenure involved a separation between the ownership and the use of land. On the next rung were hereditary tenures, which gave the occupant quasi-ownership of the farm. The hereditary tenant had a lifelong right to cultivate the holding, and was allowed to bequeath it to his direct heirs. However, his freedom of action was subject to various restrictions imposed by the superior landowner, whose permission may have been needed to adopt a new course of husbandry, for example, and who might have levied a payment when the farm changed hands. Under some circumstances the landlord could also possess the right to evict the hereditary tenant, for instance if the property was not kept in a good state of maintenance.

After owner occupation and hereditary tenures was leasehold, where the tenant occupied under a lease either lasting until a number of persons named in the contract had died, or for some certain term of years (for example, “tacks” for nineteen years were prevalent in Scotland around the turn of the nineteenth century). In the former case the names stated were often those of the farmer, his wife and son, and thus this kind of lease approached hereditary tenure. The typical leaseholder for years was charged a fixed cash rent per annum which was equal or close to the yearly economic value of the land (a rack rent). In contrast, the typical leaseholder for lives paid a small annual rent that was well below the rack rent, together with a much larger “fine” levied when the landlord granted a new lease or when the sitting tenant wished to add another name to the contract after an existing life had ended.
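The economics of the lease for lives can be illustrated with a present-value comparison. The figures below (discount rate, term, rents, fine) are purely hypothetical; the point is only that a low annual rent plus a large entry fine could be worth roughly the same to the landlord as a rack rent.

```python
# Illustrative sketch (assumed figures): a rack rent lease versus a
# lease for lives charging a small annual rent plus a large upfront fine.
def present_value(payment, rate, years):
    """Present value of a level annual payment (ordinary annuity)."""
    return payment * (1 - (1 + rate) ** -years) / rate

rate, years = 0.05, 21     # assumed discount rate and expected term
rack_rent = 100            # assumed full yearly value of the land

pv_rack = present_value(rack_rent, rate, years)

low_rent, fine = 20, 1000  # lease for lives: small rent, large entry fine
pv_lives = fine + present_value(low_rent, rate, years)

print(round(pv_rack), round(pv_lives))  # the two come out close
```

Under these assumptions the two arrangements have nearly equal present values, which is consistent with both forms coexisting: the lease for lives simply shifted much of the payment to the start of the term.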

On a lower rung of the tenurial ladder was sharecropping (the modern preference is for “cropsharing”). Here, the landlord took the rent in kind, instead of in cash, as some share of the farm’s annual produce (predominantly one half). The sharecropping landlord tended to be closely involved in the management of farming operations, and met part of the production costs. Tenancy-at-will was the next broad category of tenure. The farmer did not have a written lease but instead held from year to year at the will of the landowner, who in theory could evict the occupant at short notice for no reason. In practice, however, many landlords tended to leave tenants-at-will undisturbed so long as their husbandry was satisfactory. Changing tenants was costly for the landowner if only because the incumbent occupier possessed specialist knowledge of the idiosyncrasies of the farm’s soil, which would take a newcomer time to learn.

Serfdom and slavery were on the lowest rungs of the tenurial ladder because under these tenures the farmer was compelled to till the soil and often received little of the returns from his labors. In the feudal system in medieval Europe, even servile peasants were not the property of their manorial lord (unlike slaves), but they were – to varying degrees – bound to the land because they usually could not move (or marry) without their lord’s permission. Feudal tenants were generally required to pay some form of rent and also render personal labor services to their lord, most commonly working on his land a few days a week. Over time, these labor services were gradually commuted to a money payment. Finally, it is probably not unreasonable to include most communal forms of land tenure near the bottom of the tenurial ladder. Where land was owned or used by multiple persons, as on the village common and under Soviet collectivization, the communal nature of decision-making must have curbed the freedom of action of the enterprising farmer.

Tenurial Choice

It is possible to identify, as a very rough worldwide generalization, at least three main changes over the centuries in the types of tenure employed. First, with the gradual decline or abolition of communism, feudalism and slavery, there has been a shift towards tenurial systems based on market relations rather than collectivism or coercion. The second change has been the progressive replacement of leases for lives by leases for years, and the third has been a move towards owner occupation and fixed rent leasing at the expense of sharecropping. These shifts have occurred at different rates in different regions, and the progression has not always been linear but has sometimes been characterized by reversals. This has produced enormous variation in the popularity of the various tenures. For example, in the eighteenth century sharecropping was common throughout much of the European Continent but was almost unknown in England and Ireland. Indeed, it was not uncommon for multiple forms of tenure to co-exist in the same village at the same time. After the emancipation of slaves in the American South, for instance, a diverse mix of tenures was employed: the traditional assertion that sharecropping replaced slavery in the postbellum countryside is an oversimplification.

Of the tenures listed above, it is the choice of sharecropping that has most fascinated agricultural economists. Its popularity appeared puzzling because many eighteenth- and nineteenth-century writers argued that the arrangement acted as a check on agrarian improvement: the farmer did not receive the full amount of any increase in farm output. More recently, however, the benefits of share tenancy have been recognized. For example, by dividing the crop, the sharecropping landlord shared the risks of a bad harvest with the tenant, thereby providing partial insurance to farmers who disliked being exposed to risk while still preserving some incentive for the occupier to undertake improvements. (By contrast, the fixed rent tenant contracted to pay the same amount irrespective of whether the harvest was profitable or poor.) Another sharecropping puzzle was why the output split was predominantly 50/50 – indeed the French and Italian words for share tenancy, metayage and mezzadria respectively, mean splitting in half – when it might be expected that the landlord’s cut would have varied far more from farm to farm. This “easy” and “fair” fraction appears to have been a natural focal point that landlords and tenants were drawn to, thereby avoiding potentially protracted haggling that might have scarred their subsequent relationship.
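The risk-sharing and incentive trade-off described above can be illustrated with a small numerical sketch. All figures here are invented for illustration; they are not drawn from the historical record.

```python
# Illustrative comparison (hypothetical numbers) of a fixed-rent tenant's and
# a sharecropper's income in good and bad harvest years, showing how a 50/50
# split shares harvest risk but also halves the reward for improvement.

good, bad = 200, 80          # hypothetical harvest value in the two states
fixed_rent = 70              # cash rent owed regardless of the harvest
share = 0.5                  # landlord's customary half share of the crop

# Tenant income in each state under the two contracts
fixed_tenant = [good - fixed_rent, bad - fixed_rent]    # fixed rent: [130, 10]
share_tenant = [good * (1 - share), bad * (1 - share)]  # half share: [100.0, 40.0]

# The swing between good and bad years is halved for the sharecropper:
# partial insurance against a poor harvest.
fixed_spread = fixed_tenant[0] - fixed_tenant[1]   # 120
share_spread = share_tenant[0] - share_tenant[1]   # 60.0

# But so is the reward for raising output: of an extra 10 units of produce,
# the sharecropper keeps only 5, while the fixed-rent tenant keeps all 10.
marginal_kept_share = (1 - share) * 10   # 5.0
marginal_kept_fixed = 10

print(fixed_spread, share_spread, marginal_kept_share, marginal_kept_fixed)
```

The numbers make the two classic arguments concrete: the eighteenth-century writers' complaint (the sharecropper keeps only half of any improvement) and the modern insurance rationale (the sharecropper's income varies only half as much between good and bad years).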

Tenurial choice over the past century or so can be described using the (albeit imperfect) available body of statistics. Table 1 provides benchmarks of the percentage of agricultural land leased by farmers in several western European countries since the late nineteenth century (land not leased was owner occupied). Almost all farmland in England and Wales and in Ireland at the beginning of the period covered by the table was owned by large landowners, who divided their estates into farm-sized pieces that were rented out. The farm tenancy sectors of the three Continental countries in 1880 were noticeably smaller. Among the various possible explanations, one common factor was the 1804 Napoleonic Code, introduced in France and the then French empire, which included Belgium and the Netherlands. The Code created inheritance laws that split the deceased’s landholdings equally among all heirs rather than, as elsewhere, the eldest son inheriting the whole property. This legal pressure for the fragmentation of landownership helped to produce a sizeable class of small owner-occupying farmers on the Continent.

Table 1

Share of Land Leased by Tenant Farmers
in Selected Western European Countries, 1880-1997
(% of total agricultural land)

Year    Belgium   England & Wales   France   Ireland   Netherlands
1880      64            85a           40       96b         40
1910      72            89           n/a       42          53
1930      62            63            40        6          49
1950      67            62            44        5          56
1980      71            47            51        8          41
1997      68            33            58       13          34

Source: Swinnen (2002), table 2.
Notes: a figure for 1885; b figure for 1870. Land not leased was owner occupied.

The most striking change since the late nineteenth century has been the rapid shrinkage of the English and Welsh, and especially Irish, tenancy sectors by 1930. In England and Wales, higher taxation (including increased death duties) combined with the legacy of an agricultural depression and the deaths of many landlords or their heirs in World War One to produce a situation where numerous owners were forced to sell to tenant farmers who had profited during the wartime agricultural boom. The even more dramatic decline of tenancy in Ireland was chiefly due to a series of Land Acts beginning in 1870 that provided subsidized government loans – made on increasingly favorable terms – to help tenants purchase their holdings: the 1923 Land Act made such sales compulsory. Since the Second World War, most of the countries covered by Table 1 have enacted legal changes increasing rent controls and especially the security of leases. These restrictions have made tenancy more attractive for tenants but, more importantly, less so for landowners, which helps explain the post-war shrinkage of the tenancy sectors in England and Wales and the Netherlands. By contrast, in France the proportion of land leased has risen in recent years partly in response to government policies encouraging leasing, such as lower taxes on land rents.

The general prevalence of owner occupation in the second half of the twentieth century suggested by Table 1 is supported by Figure 1, which gives a snapshot of the global situation in 1970 using data from the world census of agriculture. The first of each pair of columns shows the percentage of all farmland in each region held under owner occupation. Usually the majority of land was cultivated by its owner, although in Africa communal tenures were more widespread. The second of each pair of columns shows the proportion of land in just the tenancy sector of each region that was let under a sharecropping contract. Despite its traditional association with poverty, sharecropping remains persistently popular even in modern advanced farming sectors, notably in North America where nearly a third of tenanted land in 1970 was occupied by sharecroppers.

Figure 1
Percentage of Total Farmland Held under Owner Cultivation, and the Percentage of Tenanted Land Held under Sharecropping, Various Regions, 1970

Source: Otsuka et al. (1992), table 1

The Historical Role of the Lease

A number of contemporaries and historians have suggested that the lease played an important role in influencing farming practices. Short leases, especially tenancies at will, were loudly criticized by the eighteenth-century English writers Arthur Young and William Marshall on the grounds that these contracts did not provide the tenant with sufficient security to make long-term investments in the farm, such as draining the land. If the benefits from these types of expenditures were not fully realized until after the original lease expired, then there was a danger that the tenant would lose part of his investment returns if the landlord acted opportunistically by evicting him, or by renewing his lease but at a higher rent. Tenants may therefore have been wary of making large expenditures for fear of the later consequences, inhibiting agricultural improvement. How serious a problem the potential insecurity of short leases was in practice is a moot point. Landlords, not lessees, undertook much of the long-term investment, and for those expenditures that were made by tenants, legal or customary rights existing outside the tenancy agreement might have provided at least some security. Outgoing farmers, for instance, could be due compensation for their “unexhausted improvements,” as under Ulster (Ireland) and English tenant right, and some landlords might have been able to establish a reputation for treating their tenantry fairly. Furthermore, when the economic conditions faced by farmers were depressed or uncertain, many tenants actually preferred a short lease because this ensured that they were not tied to the holding if it turned out to be unprofitable.

Leases could have promoted innovative, or at least best practice, farming if the landowner had used these documents to insist on the tenantry adopting certain types of crops and crop rotations. Evidence from England during the long eighteenth century, however, indicates that the husbandry clauses written into leases were primarily designed to restrict tenants from engaging in a course of farming that would be deleterious to long-term soil fertility, rather than stipulating that the latest agricultural methods be employed. Thus instead of demanding that (say) turnips be cultivated, popular covenants in English leases included those prohibiting the growing of more than two successive cereal crops on the same field or the plowing of pasture land without the landowner’s prior written consent.

Tithes

Landlords and tenants were not the only parties with a close interest in the produce of the soil. Farmers frequently had to pay tithe, a tax payable for the support of the church. Probably originating as a voluntary payment in early Christian communities, tithes became a legally enforced obligation in many countries – particularly in western Europe – during the Middle Ages. The tithe was supposedly levied at one tenth of the gross value of the farm’s annual produce and was traditionally paid in kind, whereby the clergyman would claim every tenth sheaf of corn (etc.). In practice, a complex combination of case law and custom exempted various types of land and products. Moreover, frequently the tithe owner was not actually a member of the clergy, often because a layperson had purchased church-owned land that had tithing rights attached to it. Many contemporary agricultural writers, not without some justification, criticized tithes in kind on the grounds that they acted as a disincentive to agrarian improvement because, as with a sharecropping agreement, the farmer did not receive the full amount of any rise in farm output. Payment of tithes in kind also offered substantial scope for friction between tithe payers and collectors, for example over whether new crops, such as potatoes, were titheable. To thwart those farmers who sought to under-report their produce, or give poor quality products as tithe (one milkmaid urinated in the tithe milk), the tithing man typically collected his due from the fields rather than allowing the payer to deliver it to the tithe barn.
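The disincentive argument against tithes in kind parallels the sharecropping case but is milder, since the church took a tenth rather than a half. A minimal sketch with invented figures makes the comparison explicit:

```python
# Sketch with hypothetical figures: the tithe, nominally one tenth of gross
# produce, acts like a 10% marginal tax on improvement -- the same kind of
# disincentive as a 50/50 sharecropping split, but much smaller.

extra_output = 50                    # hypothetical value of produce added by an improvement

tithe_taken = extra_output // 10     # the tithe owner's tenth
kept_after_tithe = extra_output - tithe_taken   # farmer keeps nine tenths
kept_under_half_share = extra_output // 2       # a sharecropper keeps only half

print(tithe_taken, kept_after_tithe, kept_under_half_share)
```

On these figures the tithe payer retains 45 of the extra 50 units, the sharecropper only 25, which is why contemporary critics grouped the two together while reserving their sharpest complaints for share tenancy.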

On account of these disputes and inconveniences, tithes in kind were often commuted to a fixed or variable annual cash payment. Alternatively an allotment of land or a lump sum might be given in return for the church extinguishing tithes. Many of these substitutions were achieved under government legislation, such as the 1836 and 1936 Tithe Acts in England. Yet cash payment was far from free of conflict, which arose, for instance, when the church attempted to annul a fixed money charge that, owing to inflation, had fallen to a trifling amount. The underlying friction peaked in so-called tithe wars, characterized by demonstrations by payers and varying degrees of violent clashes with collectors; examples include Ireland in the 1830s and England and Wales in the 1930s. In short, the multiplicity of tithing customs and the seemingly endless disputes over payment suggest that some tithe owners at some times got closer to obtaining their tenth than others.

Bibliography

Alston, Lee J. and Robert Higgs. “Contractual Mix in Southern Agriculture since the Civil War: Facts, Hypotheses, and Tests.” Journal of Economic History 42 (1982): 327-53.

Blum, Jerome. The End of the Old Order in Rural Europe. Princeton: Princeton University Press, 1978.

Brinkman, Carl, Heinrich Cunow, Fritz Heichelheim, Robert H. Lowie, George McCutchen McBride, David Mitrany, Radha Kamal Mukerjee, Peter Struve and Yosaburo Takekoshi. “Land Tenure.” In The Encyclopaedia of the Social Sciences, Volume 9, 73-127. London: Macmillan, 1933.

Cameron, Rondo and Larry Neal. A Concise Economic History of the World: From Paleolithic Times to the Present. Oxford: Oxford University Press, fourth edition, 2003.

Carmona, Juan and James Simpson. “The ‘Rabassa Morta’ in Catalan Viticulture: The Rise and Decline of a Long-Term Sharecropping Contract, 1670s-1920s.” Journal of Economic History 59 (1999): 290-315.

Evans, Eric J. The Contentious Tithe: The Tithe Problem and English Agriculture, 1750-1850. London: Routledge and Kegan Paul, 1976.

Harvey, Barbara. “The Leasing of the Abbot of Westminster’s Demesnes in the Later Middle Ages.” Economic History Review 22 (1969): 17-27.

Le Roy Ladurie, Emmanuel and Joseph Goy. Tithe and Agrarian History from the Fourteenth to the Nineteenth Centuries. Cambridge: Cambridge University Press, 1982.

Ó Gráda, Cormac. Ireland: A New Economic History, 1780-1939. Oxford: Oxford University Press, 1994.

Otsuka, Keijiro, Hiroyuki Chuma and Yujiro Hayami. “Land and Labor Contracts in Agrarian Economies: Theories and Facts.” Journal of Economic Literature 30 (1992): 1965-2018.

Overton, Mark. Agricultural Revolution in England: The Transformation of the Agrarian Economy, 1500-1850. Cambridge: Cambridge University Press, 1996.

Stead, David R. “Crops and Contracts: Land Tenure in England, c. 1700-1850.” D.Phil. thesis, University of Oxford, 2002.

Swinnen, Johan F. M. “Political Reforms, Rural Crises and Land Tenure in Western Europe.” Food Policy 27 (2002): 371-94.

Wade Martins, Susanna and Tom Williamson. “The Development of the Lease and Its Role in Agricultural Improvement in East Anglia, 1660-1870.” Agricultural History Review 46 (1998): 127-41.

Whyte, Ian D. “Written Leases and Their Impact on Scottish Agriculture in the Seventeenth Century.” Agricultural History Review 27 (1979): 1-9.

Young, H. Peyton and Mary A. Burke. “Competition and Custom in Economic Contracts: A Case Study of Illinois Agriculture.” American Economic Review 91 (2001): 559-73.

Citation: Stead, David. “Agricultural Tenures and Tithes”. EH.Net Encyclopedia, edited by Robert Whaples. January 25, 2004. URL http://eh.net/encyclopedia/agricultural-tenures-and-tithes/

The Charleston Orphan House: Children's Lives in the First Public Orphan House in America

Author(s):Murray, John E.
Reviewer(s):Rothenberg, Winifred B.

Published by EH.Net (July 2013)
John E. Murray, The Charleston Orphan House: Children's Lives in the First Public Orphan House in America. Chicago: University of Chicago Press, 2013. xx + 268 pp., $30 (hardcover), ISBN: 978-0-226-92409-0.

Reviewed for EH.Net by Winifred B. Rothenberg, Department of Economics, Tufts University.

The first public orphanage in America was founded not in Boston, citadel of civic virtue, but in Charleston, South Carolina. Because it was the first, it is not unreasonable to assume that it became the blueprint after which all other municipal orphanages were modeled, which is to say that it set the dimensions of the "great confinement" within which forsaken children would live for generations to come. Sufficient reason, then, for the Charleston Orphan House to have attracted the attention of John E. Murray, whose previous publications on orphans, paupers, child labor, charity, literacy, epidemic disease, a Shaker community, and the history of health insurance in America testify to a tender and enduring concern for "the least of these." Scholars less tender-hearted than Murray may wonder why a book on one southern orphanage should be of interest to economic historians, to which Murray can reply that charity (or, more accurately, altruism) has engaged the likes of Arrow, Debreu, Sen, Kahneman, and innumerable others in arcane conversations around rational expectations, decision theory, social welfare functions, intergenerational wealth transfers, the theory of the firm, and the specification of a Happiness GNP measure.

The narrative density of Murray's book comes from his exhaustive research in the Orphan House archives. He has managed to link at least 500 children by name to their life-cycle events, allowing him to track at least a quarter of the 2,000 children who passed through the orphanage. Beyond that, it appears that he has found every donor, every Commissioners' report, every repair bill, contract, bill of sale, loan application, housekeeping account, public health inspection, doctor's order, teacher's diary, minister's sermon, church attendance record, and the testimony of every impoverished and widowed parent on behalf of his child at risk. Murray calls this archive "the single greatest collection of first-person reports on work and family lives of the [white, that's important] poor anywhere in the United States" (p. 4).

First in the course of his ten chapters are the conditions in the House. They are Dickensian. Visitors found it "miserable," "extremely comfortless," "appalling," "swathed in darkness," "beds drenched with water when it rained," without light, without sheets, without beds or bedsteads, waste water leaking into the drinking wells, one toilet for 100 children, privies in the vegetable garden. "Yet many children hoped to enter the institution" (p. 66). It improved over time, and Murray moves on to devote a chapter each to the financing, management, diet, discipline, education, training, and medical care offered to the children. In chapters 8 and 9, where Murray follows the young people into apprenticeships and beyond, he opens the orphanage up and out to the urban, industrial, and export-driven economy of the Charleston that will have to absorb them. The book ends with an Epilogue, and it is there, as I read him, that Murray relaxes the courtesies that have constrained him thus far, and "tells it like it is." It is there that he undertakes to answer the question: what really motivated the Commissioners to fund a public orphanage in Charleston? But more about that later.
For this reviewer, the gold standard for a project like Murray's is Civic Charity in a Golden Age by Anne E. C. McCants (1997), a magisterial study of the Municipal Orphanage of Amsterdam from its founding in 1578 to its demise in 1815. I have adapted from that book and applied to Murray's a list of six large questions which project these two institutions onto a wide and consequential canvas. I want to use these questions as a heuristic device upon which to hang the balance of this review.

1. What impulse motivated the founding of the public orphanage?
2. In what sense was the public orphanage "public"?
3. What role did state, county, and city government play?
4. Why did the charitable impulse take institutional form? In the case of abandoned children, was there no other solution?
5. Or was the choice of an institutional solution dictated in some way by the consilience (E. O. Wilson's term: "accordance of two or more lines of induction drawn from different sets of phenomena") of capitalism, urbanization, secularism, and the nuclear family that emerged in America at the end of the eighteenth century?
6. Did the orphanage effect genuine redistribution, or was it rather "an elaborate ploy" to perpetuate the inequality that had motivated it? This last will not be discussed in this review, which is already too long, but will remain as a question, if only to tease the righteous.

McCants's book does not appear in Murray's bibliography, but these questions are the nuts and bolts, the What, When, Who, Where, and Why of his story no less than of hers. And while some are dealt with implicitly in his text, until the Epilogue none of them is discussed explicitly, and I wish they had been.

When the orphanage was founded in 1790, there were 8,089 white persons in Charleston and 8,270 black persons; of the blacks, 7,684 were slaves and 586 were freed blacks. Complicating things was the revolution in Haiti the following year. The uneasy equilibrium in Charleston was overwhelmed by a wave of Haitian émigrés, of the white elite, yes, but mostly by a new population of slaves, free blacks, and mulatto refugees. Complicating things further was that as the number of freed blacks in the city increased, so did the share that were mulatto. White anxiety about mulattoes would reach such a level by 1848 that Charleston would require by law that all freed people wear a tag identifying them as black, and carry proof of manumission, at risk of being re-enslaved.

In this climate it will come as no surprise to learn that the Charleston Orphan House and the Free School associated with it admitted only white children; not just white, but children who, while certifiably poor, were not very poor, whose homes in fact were decent enough to pass an inspection. Thus defended, the orphanage played an important part in forging a race-based "alliance of whites" against blacks that cut across, was orthogonal to, and subversive of the class-based alliance that a new industrial working class was trying to build against capital. "It is this link between civic society and racial unity that helps explain the puzzling question, why the first (and for many years the only) large-scale public orphanage in America should have been built in Charleston" (p. 199). "Charleston was unique in the early republic in creating the charitable orphan house because in no other city did the elite need to make common cause with the white poor and working class against the potential common black enemy" (p. 201). "Webs of white cooperation reached across class lines, as if the other half of Charleston's population weren't there at all" (p. 204).

Amsterdam's public orphanage was also restricted: open only to citizens of Amsterdam, tax payers, members of the Dutch Reformed (Calvinist) church, wealth-holders, of the "middling classes." If the Charleston orphanage was an oasis of white unity, and the Amsterdam orphanage was an oasis of middling unity, then in what sense were they "public"?

To answer that, begin with the meaning of "private": how do we understand "private"? Sir William Blackstone, the great eighteenth-century jurist, defined private property for the ages. It is, he wrote, "that despotic dominion which one man claims and exercises over the things of this world in total exclusion of the rights of any other individual in the universe."

If "private" is the right to exclude, then is "public" the obligation to include? It doesn't appear so. Public swimming pools, public housing, public schools, public water fountains, public transportation, public lands, public access to the Internet ... all of these masquerade as forms of Commons but they have all, at one time or another, been "restricted" against some portion of the public: against unmarried couples, single women, families with children, families with pets, against smokers, blacks, Asians, Jews, Latinos, and on and on; that sorry history is too well known. We are no closer to discovering the meaning of "public."

The Oxford English Dictionary makes short shrift of a definition: "of or provided by the state rather than an independent, commercial company"; "ordinary people in general"; "done, perceived or existing in open view." Of these the only relevant definition for our purposes is the first: "of or provided by the State." The Charleston orphanage, even if not of the State, was provided by the State. Then how can it assert a privacy right to exclude?

There were three sources of funds for the orphanage: donations, income from the institution's own assets, and support from all levels of government. Accounting for (in the sense of keeping account of) the donations will always be problematical to the extent that it is a non-market transaction. Gift-giving is driven not by reciprocity but "by the pursuit of 'regard': the approbation of others" (Avner Offer, "Between the Gift and the Market," Economic History Review, 1997). To keep account of a gift is a small desecration of a private benevolence. But inevitably the charitable impulse would have waned as the increasing pace of commercial development both of the port and of the city would have lured private wealth into emerging capital markets and land speculation.

Market sources of income, however, were built into the endowment of the institution by design. The orphanage earned income on its holdings of B.U.S. bonds; and by law the value of all escheated estates in South Carolina (the estates of those who died intestate and without heirs) automatically reverted to the orphanage, along with "small bits of wealth belonging to the children" (p. 24).

But eventually the institution needed to depend "heavily" for its ongoing expenses on contributions from what we now call the public sector. To the extent that the "public" orphanage was supported by the public, where did the city, county, or state get the money? Were these pay-outs opportunistic, or were they funded? And if funded, were they supported by taxes or bond issues? If taxes, what kind: property taxes? A poor tax? Port duties? Excise taxes? If so, on what? I found this discussion to be the thinnest in the book, but on the answers to these sorts of questions depends the question we asked above: by what right does a public institution assert a privacy right to exclude?

What was left unsaid about the sources of government funding in the Charleston book is sharpened by the contrast with how much it is possible to say about it in the Amsterdam book. Unlike every level of government in the U.S., the city of Amsterdam appears to have faced no inhibitions on its power to tax income and spending directly. Every "foreigner" applying for citizenship of the city was obliged to pay a fee in support of the orphanage. Additional support came from taxes levied on burials and marriages; real property was taxed; taxes were levied on all who worked for wages; and excise taxes were levied on all consumption. In addition, graduates of the orphanage were expected to "give back" to support its upkeep; revenue was earned from the sale of the girls' needlework. Most significant were the assets bequeathed to the children and held in fiduciary trust for them until their maturity, which assets were prudently invested by the orphanage in real estate, commercial property, commercial paper, and annuities, such that by 1790, private donations accounted for only 8% of the income of the Amsterdam Burgerweeshuis.

Institutionalizing orphaned children is so bad an idea that one wonders if some other solution could not have been found. Why did institutional care prevail over alternatives like foster care, adoption, and government support to extended families?
a) Was institutionalization motivated by a rational calculation of its relative efficiency? Were there in fact economies of scale in warehousing children as there are in warehousing, say, Amazon's inventory of CDs?
b) Or should we look to a moment in time, say 1780-1810 (the consilience of the Four Modernizations: capitalism, urbanization, secularism, and the nuclear family) to provide the clue? There are American historians (I among them) who see the decade of the 1780s as an "Axial Moment" in American history ("the most critical moment in the entire history of America," wrote Gordon Wood in The New York Review of Books, 1994) in which, in the midst of "Deep Change" in almost everything else, family responsibility for the intimate care of the aged, the young, the crippled, the alcoholic, the violent, the developmentally challenged, the homeless, the (oops!) pregnant, and the insane was professionalized and transferred to institutions.
c) Or was institutionalization motivated by the nature of institutions themselves which, in the language of the New Institutional Economics, "provide incentives to agents to work through formal and informal rules and their enforcement" (John Nye, 2003)? In the case of the Orphan House, "a white island in a sea of blacks" (p. 199), what Nye calls "the institutions-rules nexus" must have provided a measure of security to the increasingly anxious people of Charleston in whom, says Murray, was lurking always the fear of a slave rebellion in the city at large. An "institutions-rules nexus" to suppress any disorder in the orphan house would have been projected outward to repress any disorder in the society at large.

"The Orphan House was an integral part of the city's collection of institutions that maintained the prevailing social order, the foundation of which was white unity... [It] was at once an integral part of the most repressive social order in America and the most humane and progressive child-care institution in America, and it remained both for decades" (p. 12).
John Murray's book has turned out to be provocative and utterly absorbing.

Winifred B. Rothenberg's publications include From Market Places to a Market Economy: The Transformation of Rural Massachusetts, 1750-1850 (University of Chicago Press, 1992).
Copyright (c) 2013 by EH.Net. All rights reserved. This work may be copied for non-profit educational uses if proper credit is given to the author and the list. For other permission, please contact the EH.Net Administrator (administrator@eh.net). Published by EH.Net (July 2013). All EH.Net reviews are archived at http://www.eh.net/BookReview

Subject(s):Social and Cultural History, including Race, Ethnicity and Gender
Geographic Area(s):North America
Time Period(s):18th Century
19th Century

Working Knowledge: Employee Innovation and the Rise of Corporate Intellectual Property, 1800-1930

Author(s):Fisk, Catherine L.
Reviewer(s):Khan, B. Zorina

Published by EH.Net (June 2012)

Catherine L. Fisk, Working Knowledge: Employee Innovation and the Rise of Corporate Intellectual Property, 1800-1930. Chapel Hill: University of North Carolina Press, 2009. x + 376 pp., $45 (hardcover), ISBN: 978-0-8078-3302-5.

Reviewed for EH.Net by B. Zorina Khan, Department of Economics, Bowdoin College

Economists promote the notion that specialization and the division of labor benefit society, but the marketplace of ideas places greater value on scholarship that transcends narrow disciplinary boundaries. Catherine Fisk, Chancellor's Professor of Law at the University of California, Irvine, has produced a richly detailed and comprehensive study of property rights and creativity in the workplace that encompasses legal, historical, technological, business and economic issues. The analysis incorporates an extensive reading of legal decisions and treatises, as well as archival material from such firms as Rand McNally, Eastman Kodak and E.I. Du Pont de Nemours. The focus is primarily on patent rights and (to a lesser extent) copyrights, but the discussion touches on the entire range of intellectual property. The book is organized in chronological order, with three parts that roughly correspond to the antebellum period, the second half of the nineteenth century, and the first three decades of the twentieth century.

Sir Henry Sumner Maine proposed that all "progressive societies" graduate from relationships ruled by status to objective interactions based on contract. For Fisk, this is the false perspective of an apologist for unbridled liberalism. Instead, the change from status to contract signaled a regressive move from entrepreneurial independence to corporate employment that narrowed the rights of innovative workers. Her central thesis is that law courts were initially responsive to the rights of employee-entrepreneurs, because the judiciary wished to protect skilled workers from downward mobility. However, by the early twentieth century courts regarded such efforts as unnecessary because corporate employment had become an emblem of respectable middle-class standing. Hence, the legal rights of innovative employees were eroded because "workers' freedom would be ensured by a legal and economic regime that provided stable and sustaining corporate employment, not by protecting the right of workers to become entrepreneurs" (p. 10). An economist might question this interpretation, but such misgivings do not detract from the substantive research findings that make this work an invaluable addition to labor history and to the literature on the evolution of intellectual property rights in the United States.

The first section outlines early intellectual property doctrines, which regarded patent property as the just reward for individual genius rather than as unwarranted monopolies that created barriers to entry. Property rights were held to be sacred or, as Fisk puts it (p. 34), property comprised "the enabler of republican democracy." Early legal decisions were directed to the determination of the first and true inventor of the rights under debate, rather than toward the employment standing of the innovator. Employers obtained access to the patented ideas of their employees through the assignments of rights or through licensing. By way of contrast, copyright varied in many regards from patent laws. One of the most significant differences was that the 1790 statutes allowed "proprietors" as well as authors to obtain copyrights. Registrations primarily included works of low creativity that were likely to be collaborative, such as maps, charts, and dictionaries. Very few opinions are on record, but most of them comprised litigation involving individual authors, and Fisk contends that such decisions reflected the "elevated social status" of the parties to the dispute. She concludes that during the antebellum period relatively pro-employee rules prevailed in the realm of copyrights as well as patents.

The Civil War heralded a transformation that ultimately resulted in the Second Industrial Revolution, and an era in which American cultural goods became internationally competitive. Similarly, the employment relationship morphed from considerations of status toward free negotiations between the parties to a contract, which Fisk characterizes as "a contractarian regime that eliminated employer obligation while yet enforcing dependence and subservience on employees under the guise of formal equality" (p. 79). This "legal fiction" of freedom to contract was used to transfer rights from workers to firms. In the realm of intellectual property transactions, the author finds that the default rule altered "dramatically" in favor of employer ownership (p. 175). Patentees, who were initially valued and perceived to be of high social status, lost caste and ended up being relegated to mere corporate drudges. Their diminished status was both a cause and consequence of their lack of control over the fruits of their creativity. The Copyright Act of 1909 included an explicit work-for-hire provision that allowed firms the rights to the output of their employees. Economies of scale, large multidivisional firms, and a mass market ultimately resulted in the rank commodification of ingenuity and the "desanctification of the creation of art, music, and literature" (p. 176). Attribution, rather than ownership of property rights, increasingly comprised "the currency that would enable employees to advance their careers" (p. 210). The regime that was established during this period largely characterized the intellectual property system of the rest of the twentieth century.

The most intriguing part of the study assesses rights in knowledge and information outside the formal intellectual property system. Rights to exclude can be replicated through a variety of alternative mechanisms, such as bilateral or multilateral contracts, non-compete agreements, trade restraints, cartelization, and technological barriers. Firms can also prevent access to their discoveries if they treat them as trade secrets, but initially their ability to control trade secrets or know-how was limited to prohibitions against the enticement of workers from their place of employment during the term of service. Before the 1880s firms generously shared information, and did not attempt to control the knowledge that employees retained even when workers migrated to other enterprises. However, such craft knowledge was quickly displaced toward the end of the nineteenth century, and trade secrets doctrines at law applied to a larger scope of activities and information. Courts enhanced such measures as trade secrecy and restrictive covenants through decisions that attempted to prevent the misappropriation of knowledge and intangible assets that were deemed to belong to the firm. Thus, the ownership of trade secrets shifted from skilled workers to the corporation.

In short, the compass of intellectual property rights expanded enormously over the course of industrialization, and Fisk argues that this expansion came at the expense of labor and labor relations, heroic inventors and creative individuals, and also the public domain. Legal rules toward patents and copyrights were in large part responsible for the denouement in which the "happy marriage between invention and entrepreneurship" (p. 242) of the nineteenth century had given way to a compatible but dull corporate marriage of inconvenience.

Working Knowledge was twelve years in the writing; it has been well worth the wait.

B. Zorina Khan is Professor of Economics at Bowdoin College, and author of The Democratization of Invention: Patents and Copyrights in American Economic Development, 1790-1920 (Cambridge University Press, 2005). Her current projects examine the role of patents and prizes for innovations, and the contributions of great inventors to technological progress.

Copyright (c) 2012 by EH.Net. All rights reserved. This work may be copied for non-profit educational uses if proper credit is given to the author and the list. For other permission, please contact the EH.Net Administrator (administrator@eh.net). Published by EH.Net (June 2012). All EH.Net reviews are archived at http://www.eh.net/BookReview.

Subject(s):History of Technology, including Technological Change
Geographic Area(s):North America
Time Period(s):19th Century
20th Century: Pre WWII