
Antebellum Banking in the United States

Howard Bodenhorn, Lafayette College

The first legitimate commercial bank in the United States was the Bank of North America founded in 1781. Encouraged by Alexander Hamilton, Robert Morris persuaded the Continental Congress to charter the bank, which loaned to the cash-strapped Revolutionary government as well as private citizens, mostly Philadelphia merchants. The possibilities of commercial banking had been widely recognized by many colonists, but British law forbade the establishment of commercial, limited-liability banks in the colonies. Given that many of the colonists’ grievances against Parliament centered on economic and monetary issues, it is not surprising that one of the earliest acts of the Continental Congress was the establishment of a bank.

The introduction of banking to the U.S. was viewed as an important first step in forming an independent nation because banks supplied a medium of exchange (banknotes[1] and deposits) in an economy perpetually strangled by shortages of specie money and credit, because they animated industry, and because they fostered wealth creation and promoted well-being. In the last case, contemporaries typically viewed banks as an integral part of a wider system of government-sponsored commercial infrastructure. Like schools, bridges, roads, canals, river clearing and harbor improvements, the benefits of banks were expected to accrue to everyone even if dividends accrued only to shareholders.

Financial Sector Growth

By 1800 each major U.S. port city had at least one commercial bank serving the local mercantile community. As city banks proved themselves, banks spread into smaller cities and towns and expanded their clientele. Although most banks specialized in mercantile lending, others served artisans and farmers. In 1820 there were 327 commercial banks and several mutual savings banks that promoted thrift among the poor. Thus, at the onset of the antebellum period (defined here as the period between 1820 and 1860), urban residents were familiar with the intermediary function of banks and used bank-supplied currencies (deposits and banknotes) for most transactions. Table 1 reports the number of banks and the value of loans outstanding at year end between 1820 and 1860. During the era, the number of banks increased from 327 to 1,562 and total loans increased from just over $55.1 million to $691.9 million. Bank-supplied credit in the U.S. economy increased at a remarkable annual average rate of 6.3 percent. Growth in the financial sector, then, outpaced growth in aggregate economic activity; nominal gross domestic product increased at an average annual rate of about 4.3 percent over the same interval. This essay discusses how regional regulatory structures evolved as the banking sector grew and radiated out from northeastern cities to the hinterlands.

Table 1

Number of Banks and Total Loans, 1820-1860

Year Banks Loans ($ millions)
1820 327 55.1
1821 273 71.9
1822 267 56.0
1823 274 75.9
1824 300 73.8
1825 330 88.7
1826 331 104.8
1827 333 90.5
1828 355 100.3
1829 369 103.0
1830 381 115.3
1831 424 149.0
1832 464 152.5
1833 517 222.9
1834 506 324.1
1835 704 365.1
1836 713 457.5
1837 788 525.1
1838 829 485.6
1839 840 492.3
1840 901 462.9
1841 784 386.5
1842 692 324.0
1843 691 254.5
1844 696 264.9
1845 707 288.6
1846 707 312.1
1847 715 310.3
1848 751 344.5
1849 782 332.3
1850 824 364.2
1851 879 413.8
1852 913 429.8
1853 750 408.9
1854 1208 557.4
1855 1307 576.1
1856 1398 634.2
1857 1416 684.5
1858 1422 583.2
1859 1476 657.2
1860 1562 691.9

Sources: Fenstermaker (1965); U.S. Comptroller of the Currency (1931).
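
Readers who wish to check the loan growth rate cited in the text can recover it from the table's endpoints. The short sketch below (in Python, which is of course not part of the original essay) is an illustrative back-of-the-envelope calculation only; the 6.3 percent figure in the text is consistent with a continuously compounded average over 1820-1860, and the nominal GDP comparison relies on data not reproduced here.

    # Back-of-the-envelope check of the loan growth rate cited in the text,
    # using only the 1820 and 1860 endpoints from Table 1.
    from math import log

    loans_1820, loans_1860 = 55.1, 691.9   # $ millions, from Table 1
    years = 1860 - 1820

    continuous_rate = log(loans_1860 / loans_1820) / years        # about 6.3%
    compound_rate = (loans_1860 / loans_1820) ** (1 / years) - 1   # about 6.5%

    print(f"Continuously compounded average growth: {continuous_rate:.1%}")
    print(f"Annually compounded average growth:     {compound_rate:.1%}")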

Adaptability

As important as early American banks were in the process of capital accumulation, perhaps their most notable feature was their adaptability. Kuznets (1958) argues that one measure of the financial sector's value is how and to what extent it evolves with changing economic conditions: put in place to perform certain functions under one set of economic circumstances, how did it alter its behavior and serve the needs of borrowers as circumstances changed? One benefit of the federalist U.S. political system was that states were given the freedom to establish systems reflecting local needs and preferences. While the political structure deserves credit for promoting regional adaptations, North (1994) credits the adaptability of America's formal rules and informal constraints, which rewarded adventurism in the economic, as well as the noneconomic, sphere. Differences in geography, climate, crop mix, manufacturing activity, population density and a host of other variables were reflected in different state banking systems. Rhode Island's banks bore little resemblance to those in faraway Louisiana or Missouri, or even to those in neighboring Connecticut. Each state's banks took a different form, but their purpose was the same; namely, to provide the state's citizens with monetary and intermediary services and to promote the general economic welfare. This section provides a sketch of regional differences. A more detailed discussion can be found in Bodenhorn (2002).

State Banking in New England

New England’s banks most resemble the common conception of the antebellum bank. They were relatively small, unit banks; their stock was closely held; they granted loans to local farmers, merchants and artisans with whom the bank’s managers had more than a passing familiarity; and the state took little direct interest in their daily operations.

Of the banking systems put in place in the antebellum era, New England’s is typically viewed as the most stable and conservative. Friedman and Schwartz (1986) attribute their stability to an Old World concern with business reputations, familial ties, and personal legacies. New England was long settled, its society well established, and its business community mature and respected throughout the Atlantic trading network. Wealthy businessmen and bankers with strong ties to the community — like the Browns of Providence or the Bowdoins of Boston — emphasized stability not just because doing so benefited and reflected well on them, but because they realized that bad banking was bad for everyone’s business.

Besides their reputation for soundness, the two defining characteristics of New England's early banks were their insider nature and their small size. The typical New England bank was small compared to banks in other regions. Table 2 shows that in 1820 the average Massachusetts country bank was about the same size as a Pennsylvania country bank, but both were only about half the size of a Virginia bank. A Rhode Island bank was about one-third the size of a Massachusetts or Pennsylvania bank and a mere one-sixth as large as Virginia's banks. By 1850 the average Massachusetts bank had declined in relative size, operating on about two-thirds the paid-in capital of a Pennsylvania country bank. Rhode Island's banks also shrank relative to Pennsylvania's and were tiny compared to the large branch banks in the South and West.

Table 2

Average Bank Size by Capital and Lending in 1820 and 1850 Selected States and Cities

(in $ thousands)

State/City    1820 Capital    1820 Loans    1850 Capital    1850 Loans
Massachusetts $374.5 $480.4 $293.5 $494.0
except Boston 176.6 230.8 170.3 281.9
Rhode Island 95.7 103.2 186.0 246.2
except Providence 60.6 72.0 79.5 108.5
New York na na 246.8 516.3
except NYC na na 126.7 240.1
Pennsylvania 221.8 262.9 340.2 674.6
except Philadelphia 162.6 195.2 246.0 420.7
Virginia (1,2) 351.5 340.0 270.3 504.5
South Carolina (2) na na 938.5 1,471.5
Kentucky (2) na na 439.4 727.3

Notes: (1) Virginia figures for 1822. (2) Figures represent branch averages.

Source: Bodenhorn (2002).

Explanations for New England Banks’ Relatively Small Size

Several explanations have been offered for the relatively small size of New England’s banks. Contemporaries attributed it to the New England states’ propensity to tax bank capital, which was thought to work to the detriment of large banks. They argued that large banks circulated fewer banknotes per dollar of capital. The result was a progressive tax that fell disproportionately on large banks. Data compiled from Massachusetts’s bank reports suggest that large banks were not disadvantaged by the capital tax. It was a fact, as contemporaries believed, that large banks paid higher taxes per dollar of circulating banknotes, but a potentially better benchmark is the tax to loan ratio because large banks made more use of deposits than small banks. The tax to loan ratio was remarkably constant across both bank size and time, averaging just 0.6 percent between 1834 and 1855. Moreover, there is evidence of constant to modestly increasing returns to scale in New England banking. Large banks were generally at least as profitable as small banks in all years between 1834 and 1860, and slightly more so in many.

Lamoreaux (1993) offers a different explanation for the modest size of the region's banks. New England's banks, she argues, were not impersonal financial intermediaries. Rather, they acted as the financial arms of extended kinship trading networks. Throughout the antebellum era banks catered to insiders: directors, officers, shareholders, or business partners and kin of directors, officers, shareholders and business partners. Such preferences toward insiders represented the perpetuation of the eighteenth-century custom of pooling capital to finance family enterprises. In the nineteenth century the practice continued under corporate auspices. The corporate form, in fact, facilitated raising capital in greater amounts than the family unit could raise on its own. But because the banks kept their loans within a relatively small circle of business connections, it was not until the late nineteenth century that bank size increased.[2]

Once the kinship orientation of the region's banks was established, it perpetuated itself. When outsiders could not obtain loans from existing insider organizations, they formed their own insider bank. In doing so, the promoters assured themselves of a steady supply of credit and created engines of economic mobility for kinship networks formerly closed off from many sources of credit. State legislatures accommodated the practice through their liberal chartering policies. By 1860, Rhode Island had 91 banks, Maine had 68, New Hampshire 51, Vermont 44, Connecticut 74 and Massachusetts 178.

The Suffolk System

One of the most commented-upon characteristics of New England's banking system was its unique regional banknote redemption and clearing mechanism. Established by the Suffolk Bank of Boston in the early 1820s, the system became known as the Suffolk System. With so many banks in New England, each issuing its own form of currency, it was sometimes difficult for merchants, farmers, artisans, and even other bankers to discriminate between real and bogus banknotes, or between good and bad bankers. Moreover, the rural-urban terms of trade pulled most banknotes toward the region's port cities. Because country merchants and farmers were typically indebted to city merchants, country banknotes tended to flow toward the cities, Boston more so than any other. By the second decade of the nineteenth century, country banknotes had become a constant irritant for city bankers, who believed that country issues displaced Boston banknotes in local transactions. More irritating, though, was customers' constant demand that the city banks accept country banknotes on deposit, which placed the burden of interbank clearing on the city banks.[3]

In 1803 the city banks embarked on a first attempt to deal with country banknotes. They joined together, bought up a large quantity of country banknotes, and returned them to the country banks for redemption into specie. This effort to reduce country banknote circulation encountered so many obstacles that it was quickly abandoned. Several other schemes were hatched in the next two decades, but none proved any more successful than the 1803 plan.

The Suffolk Bank was chartered in 1818 and within a year embarked on a novel scheme to deal with the influx of country banknotes. The Suffolk sponsored a consortium of Boston banks in which each member appointed the Suffolk as its lone agent in the collection and redemption of country banknotes. In addition, each city bank contributed to a fund used to purchase and redeem country banknotes. When the Suffolk collected a large quantity of a country bank's notes, it presented them for immediate redemption with an ultimatum: Join in a regular and organized redemption system or be subject to further unannounced redemption calls.[4] Country banks objected to the Suffolk's proposal because it required them to keep noninterest-earning assets on deposit with the Suffolk in amounts equal to their average weekly redemptions at the city banks. Most country banks initially refused to join the redemption network, but after the Suffolk made good on a few redemption threats, the system achieved near-universal membership.

Early interpretations of the Suffolk system, like those of Redlich (1949) and Hammond (1957), portray the Suffolk as a proto-central bank, which acted as a restraining influence that exercised some control over the region’s banking system and money supply. Recent studies are less quick to pronounce the Suffolk a successful experiment in early central banking. Mullineaux (1987) argues that the Suffolk’s redemption system was actually self-defeating. Instead of making country banknotes less desirable in Boston, the fact that they became readily redeemable there made them perfect substitutes for banknotes issued by Boston’s prestigious banks. This policy made country banknotes more desirable, which made it more, not less, difficult for Boston’s banks to keep their own notes in circulation.

Fenstermaker and Filer (1986) also contest the long-held view that the Suffolk exercised control over the region's money supply (banknotes and deposits). Indeed, the Suffolk's system was self-defeating in this regard as well. Because the system increased confidence in the value of any randomly encountered banknote, people were willing to hold larger banknote issues. In an interesting twist on the traditional interpretation, a possible outcome of the Suffolk system is that New England may have grown increasingly financially backward as a direct result of the region's unique clearing system. Because banknotes were viewed as relatively safe and easily redeemed, the next big financial innovation — deposit banking — lagged far behind other regions in New England. With such wide acceptance of banknotes, there was no reason for banks to encourage the use of deposits and little reason for consumers to switch over.

Summary: New England Banks

New England’s banking system can be summarized as follows: small unit banks predominated; many banks catered to small groups of capitalists bound by personal and familial ties; banking was becoming increasingly interconnected with other lines of business, such as insurance, shipping and manufacturing; the state took little direct interest in the daily operations of the banks, and its supervisory role amounted to little more than a demand that every bank submit an unaudited balance sheet at year’s end; and the Suffolk developed an interbank clearing system that facilitated the use of banknotes throughout the region but had little effective control over the region’s money supply.

Banking in the Middle Atlantic Region

Pennsylvania

After 1810 or so, many bank charters were granted in New England, but not because of the presumption that a bank would promote the commonweal. Charters were granted for the personal gain of the promoter and the shareholders and in proportion to the personal, political and economic influence of the bank’s founders. No New England state took a significant financial stake in its banks. In both respects, New England differed markedly from states in other regions. From the beginning of state-chartered commercial banking in Pennsylvania, the state took a direct interest in the operations and profits of its banks. The Bank of North America was the obvious case: chartered to provide support to the colonial belligerents and the fledgling nation. Because the bank was popularly perceived to be dominated by Philadelphia’s Federalist merchants, who rarely loaned to outsiders, support for the bank waned.[5] After a pitched political battle in which the Bank of North America’s charter was revoked and reinstated, the legislature chartered the Bank of Pennsylvania in 1793. As its name implies, this bank became the financial arm of the state. Pennsylvania subscribed $1 million of the bank’s capital, which gave the state the right to appoint six of thirteen directors and a $500,000 line of credit. The bank benefited by becoming the state’s fiscal agent, which guaranteed a constant inflow of deposits from regular treasury operations as well as western land sales.

By 1803 the demand for loans outstripped the existing banks’ supply, and a plan for a new bank, the Philadelphia Bank, was hatched; its promoters petitioned the legislature for a charter. The existing banks lobbied against the charter and nearly sank the new bank’s chances until its promoters established a precedent that lasted throughout the antebellum era: they bribed the legislature with a payment of $135,000 in return for the charter, handed over one-sixth of the bank’s shares, and opened a line of credit for the state.

Between 1803 and 1814, the only other bank chartered in Pennsylvania was the Farmers and Mechanics Bank of Philadelphia, which established a second substantive precedent that persisted throughout the era. Existing banks followed a strict real-bills lending policy, restricting lending to merchants at very short terms of 30 to 90 days.[6] Their adherence to a real-bills philosophy left a growing community of artisans, manufacturers and farmers on the outside looking in. The Farmers and Mechanics Bank was chartered to serve these excluded groups. At least seven of its thirteen directors had to be farmers, artisans or manufacturers, and the bank was required to lend the equivalent of 10 percent of its capital to farmers on mortgage for at least one year. In later years, banks were established to provide services to even more narrowly defined groups. Within a decade or two, most substantial port cities had banks with names like Merchants Bank, Planters Bank, Farmers Bank, and Mechanics Bank. By 1860 it was common to find banks with names like Leather Manufacturers Bank, Grocers Bank, Drovers Bank, and Importers Bank. Indeed, the Emigrant Savings Bank in New York City served Irish immigrants almost exclusively. In the other instances, it is not known how much of a bank’s lending was directed toward the occupational group included in its name. Such names may have been marketing ploys as much as mission statements. Only further research will reveal the answer.

New York

State-chartered banking in New York arrived less auspiciously than it had in Philadelphia or Boston. The Bank of New York opened in 1784, but operated without a charter and in open violation of state law until 1791 when the legislature finally sanctioned it. The city’s second bank obtained its charter surreptitiously. Alexander Hamilton was one of the driving forces behind the Bank of New York, and his long-time nemesis, Aaron Burr, was determined to establish a competing bank. Unable to get a charter from a Federalist legislature, Burr and his colleagues petitioned to incorporate a company to supply fresh water to the inhabitants of Manhattan Island. Burr tucked a clause into the charter of the Manhattan Company (the predecessor to today’s Chase Manhattan Bank) granting the water company the right to employ any excess capital in financial transactions. Once chartered, the company’s directors announced that $500,000 of its capital would be invested in banking.[7] Thereafter, banking grew more quickly in New York than in Philadelphia, so that by 1812 New York had seven banks compared to the three operating in Philadelphia.

Deposit Insurance

Despite its inauspicious banking beginnings, New York introduced two innovations that influenced American banking down to the present. The Safety Fund system, introduced in 1829, was the nation’s first experiment in bank liability insurance (similar to that provided by the Federal Deposit Insurance Corporation today). The 1829 act authorized the appointment of bank regulators charged with regular inspections of member banks. An equally novel aspect was that it established an insurance fund insuring holders of banknotes and deposits against loss from bank failure. Ultimately, the insurance fund proved insufficient to protect all bank creditors from loss: during the panic of 1837, eleven failures in rapid succession all but bankrupted the fund, which delayed noteholder and depositor recoveries for months, even years. Even though the Safety Fund failed to provide its promised protections, it was an important episode in the subsequent evolution of American banking. Several Midwestern states instituted deposit insurance in the early twentieth century, and the federal government adopted it after the banking panics in the 1930s resulted in the failure of thousands of banks in which millions of depositors lost money.

“Free Banking”

Although the Safety Fund was nearly bankrupted in the late 1830s, it continued to insure a number of banks up to the mid 1860s when it was finally closed. No new banks joined the Safety Fund system after 1838 with the introduction of free banking — New York’s second significant banking innovation. Free banking represented a compromise between those most concerned with the underlying safety and stability of the currency and those most concerned with competition and freeing the country’s entrepreneurs from unduly harsh and anticompetitive restraints. Under free banking, a prospective banker could start a bank anywhere he saw fit, provided he met a few regulatory requirements. Each free bank’s capital was invested in state or federal bonds that were turned over to the state’s treasurer. If a bank failed to redeem even a single note into specie, the treasurer initiated bankruptcy proceedings and banknote holders were reimbursed from the sale of the bonds.

Michigan actually preempted New York’s claim to be the first free-banking state, but Michigan’s 1837 law was modeled closely after a bill then under debate in New York’s legislature. Ultimately, New York’s influence was profound here as well, because free banking became one of the century’s most widely copied financial innovations. By 1860 eighteen states had adopted free banking laws closely resembling New York’s, and three other states had introduced watered-down variants. Eventually, the post-Civil War system of national banking adopted many of the substantive provisions of New York’s 1838 act.

Both the Safety Fund system and free banking were attempts to protect society from losses resulting from bank failures and to entice people to hold financial assets. Banks and bank-supplied currency were novel developments in the hinterlands in the early nineteenth century, and many rural inhabitants were skeptical about the value of small pieces of paper. They were more familiar with gold and silver. Getting them to exchange one for the other was a slow process, and one that relied heavily on trust. But trust was built slowly and destroyed quickly. The failure of a single bank could, in a week, destroy confidence in a system built up over a decade. New York’s experiments were designed to mitigate, if not eliminate, the negative consequences of bank failures. New York’s Safety Fund, then, differed in the details, but not in intent, from New England’s Suffolk system. Bankers and legislators in each region grappled with the difficult issue of protecting a fragile but vital sector of the economy. Each region responded to the problem differently. The South and West settled on yet another solution.

Banking in the South and West

One distinguishing characteristic of southern and western banks was their extensive branch networks. Pennsylvania provided for branch banking in the early nineteenth century, and two banks opened about ten branches between them. In both instances, however, the branches became a net liability. The Philadelphia Bank opened four branches in 1809 and by 1811 was forced to pass on its semi-annual dividends because losses at the branches offset profits at the Philadelphia office. At bottom, branch losses resulted from a combination of ineffective central office oversight and unrealistic expectations about the scale and scope of hinterland lending. Philadelphia’s bank directors instructed branch managers to invest in high-grade commercial paper or real bills. The rural branches found a limited number of such lending opportunities and quickly turned to mortgage-based lending. Many of these loans fell into arrears and were ultimately written off when land sales faltered.

Branch Banking

Unlike Pennsylvania, where branch banking failed, branch banks throughout the South and West thrived. The Bank of Virginia, founded in 1804, was the first state-chartered branch bank, and up to the Civil War branch banks served the state’s financial needs. Several small, independent banks were chartered in the 1850s, but they never threatened the dominance of Virginia’s “Big Six” banks. Virginia’s branch banks, unlike Pennsylvania’s, were profitable. In 1821, for example, the net return to capital at the Farmers Bank of Virginia’s home office in Richmond was 5.4 percent. Returns at its branches ranged from a low of 3 percent at Norfolk (which was consistently the low-profit branch) to 9 percent in Winchester. In 1835, the last year the bank reported separate branch statistics, net returns to capital at the Farmers Bank’s branches ranged from 2.9 to 11.7 percent, with an average of 7.9 percent.

The low profits at the Norfolk branch represented a net subsidy from the state’s banking sector to the political system, which was not immune to the same kind of infrastructure boosterism that erupted in New York, Pennsylvania, Maryland and elsewhere. In the immediate post-Revolutionary era, the value of exports shipped from Virginia’s ports (Norfolk and Alexandria) slightly exceeded the value shipped from Baltimore. In the 1790s the numbers turned sharply in Baltimore’s favor, and Virginia entered the internal-improvements craze and the battle for western shipments. Banks represented the first phase of the state’s internal improvements plan in that many believed that Baltimore’s new-found advantage resulted from easier credit supplied by the city’s banks. If Norfolk, with one of the best natural harbors on the North American Atlantic coast, was to compete with other port cities, it needed banks, so the state required three of its Big Six branch banks to operate branches there. Despite its natural advantages, Norfolk never became an important entrepôt, and it probably had more bank capital than it required. This pattern was repeated elsewhere. Other states required their branch banks to serve markets such as Memphis, Louisville, Natchez and Mobile that might, with the proper infrastructure, grow into important ports.

State Involvement and Intervention in Banking

The second distinguishing characteristic of southern and western banking was sweeping state involvement and intervention. Virginia, for example, interjected itself into the banking system by taking significant stakes in its first chartered banks (providing an implicit subsidy) and by requiring them, once they established themselves, to subsidize the state’s continuing internal improvements programs of the 1820s and 1830s. Indiana followed such a strategy. So, too, did Kentucky, Louisiana, Mississippi, Illinois, Tennessee and Georgia to different degrees. South Carolina followed a wholly different strategy. On the one hand, it chartered several banks in which it took no financial interest. On the other, it chartered the Bank of the State of South Carolina, a bank wholly owned by the state and designed to lend to planters and farmers who complained constantly that the state’s existing banks served only the urban mercantile community. The state-owned bank eventually divided its lending between merchants, farmers and artisans and dominated South Carolina’s financial sector.

The 1820s and 1830s witnessed a deluge of new banks in the South and West, with a corresponding increase in state involvement. No state matched Louisiana’s breadth of involvement in the 1830s when it chartered three distinct types of banks: commercial banks that served merchants and manufacturers; improvement banks that financed various internal improvements projects; and property banks that extended long-term mortgage credit to planters and other property holders. Louisiana’s improvement banks included the New Orleans Canal and Banking Company, which built a canal connecting Lake Pontchartrain to the Mississippi River. The Exchange and Banking Company and the New Orleans Improvement and Banking Company were required to build and operate hotels. The New Orleans Gas Light and Banking Company constructed and operated gas streetlights in New Orleans and five other cities. Finally, the Carrollton Railroad and Banking Company and the Atchafalaya Railroad and Banking Company were rail construction companies whose bank subsidiaries subsidized railroad construction.

“Commonwealth Ideal” and Inflationary Banking

Louisiana’s 1830s banking exuberance reflected what some historians label the “commonwealth ideal” of banking; that is, the promotion of the general welfare through the promotion of banks. Legislatures in the South and West, however, never demonstrated a greater commitment to the commonwealth ideal than during the tough times of the early 1820s. With the collapse of the post-war land boom in 1819, a political coalition of debt-strapped landowners lobbied legislatures throughout the region for relief, and its focus was banking. Relief advocates lobbied for inflationary banking that would reduce the real burden of debts taken on during prior flush times.

Several western states responded to these calls and chartered state-subsidized and state-managed banks designed to reinflate their embattled economies. Chartered in 1821, the Bank of the Commonwealth of Kentucky loaned on mortgages for longer-than-customary periods, and all Kentucky landowners were eligible for $1,000 loans. The loans allowed landowners to discharge their existing debts without being forced to liquidate their property at ruinously low prices. Although the bank’s notes were not redeemable into specie, they were given currency in two ways. First, they were accepted at the state treasury in tax payments. Second, the state passed a law that forced creditors to accept the notes in payment of existing debts or agree to delay collection for two years.

The commonwealth ideal was not unique to Kentucky. During the depression of the 1820s, Tennessee chartered the State Bank of Tennessee, Illinois chartered the State Bank of Illinois and Louisiana chartered the Louisiana State Bank. Although they took slightly different forms, they all had the same intent; namely, to relieve distressed and embarrassed farmers, planters and landowners. What all these banks shared was the notion that the state should promote the general welfare and economic growth. In this instance, and again during the depression of the 1840s, state-owned banks were organized to minimize the transfer of property when economic conditions demanded wholesale liquidation. Such liquidation would have been inefficient and imposed unnecessary hardship on a large fraction of the population. To the extent that hastily chartered relief banks forestalled inefficient liquidation, they served their purpose. Although most of these banks eventually became insolvent, requiring taxpayer bailouts, we cannot label them unsuccessful. They reinflated economies and allowed for an orderly disposal of property. Determining if the net benefits were positive or negative requires more research, but for the moment we are forced to accept the possibility that the region’s state-owned banks of the 1820s and 1840s advanced the commonweal.

Conclusion: Banks and Economic Growth

Despite notable differences in the specific form and structure of each region’s banking system, they were all aimed squarely at a common goal; namely, realizing that region’s economic potential. Banks helped achieve the goal in two ways. First, banks monetized economies, which reduced the costs of transacting and helped smooth consumption and production across time. It was no longer necessary for every farm family to inventory their entire harvest. They could sell most of it, and expend the proceeds on consumption goods as the need arose until the next harvest brought a new cash infusion. Crop and livestock inventories were prone to substantial losses, and an increased use of money reduced them significantly. Second, banks provided credit, which unleashed entrepreneurial spirits and talents. A complete appreciation of early American banking recognizes the banks’ contribution to antebellum America’s economic growth.

Bibliographic Essay

Because of the large number of sources used to construct this essay, a brief bibliographic essay is provided here in place of extensive in-text citations, which keeps the essay more readable and less cluttered. A full bibliography is included at the end.

Good general histories of antebellum banking include Dewey (1910), Fenstermaker (1965), Gouge (1833), Hammond (1957), Knox (1903), Redlich (1949), and Trescott (1963). If only one book is read on antebellum banking, Hammond’s (1957) Pulitzer-Prize winning book remains the best choice.

The literature on New England banking is not particularly large, and the more important historical interpretations of state-wide systems include Chadbourne (1936), Hasse (1946, 1957), Simonton (1971), Spencer (1949), and Stokes (1902). Gras (1937) does an excellent job of placing the history of a single bank within the larger regional and national context. In a recent book and a number of articles Lamoreaux (1994 and sources therein) provides a compelling and eminently readable reinterpretation of the region’s banking structure. Nathan Appleton (1831, 1856) provides a contemporary observer’s interpretation, while Walker (1857) provides an entertaining if perverse and satirical history of a fictional New England bank. Martin (1969) provides details of bank share prices and dividend payments from the establishment of the first banks in Boston through the end of the nineteenth century. Less technical studies of the Suffolk system include Lake (1947), Trivoli (1979) and Whitney (1878); more technical interpretations include Calomiris and Kahn (1996), Mullineaux (1987), and Rolnick, Smith and Weber (1998).

The literature on Middle Atlantic banking is huge, but the better state-level histories include Bryan (1899), Daniels (1976), and Holdsworth (1928). The better studies of individual banks include Adams (1978), Lewis (1882), Nevins (1934), and Wainwright (1953). Chaddock (1910) provides a general history of the Safety Fund system. Golembe (1960) places it in the context of modern deposit insurance, while Bodenhorn (1996) and Calomiris (1989) provide modern analyses. A recent revival of interest in free banking has brought about a veritable explosion in the number of studies on the subject, but the better introductory ones remain Rockoff (1974, 1985), Rolnick and Weber (1982, 1983), and Dwyer (1996).

The literature on southern and western banking is large and of highly variable quality, but I have found the following to be the most readable and useful general sources: Caldwell (1935), Duke (1895), Esary (1912), Golembe (1978), Huntington (1915), Green (1972), Lesesne (1970), Royalty (1979), Schweikart (1987) and Starnes (1931).

References and Further Reading

Adams, Donald R., Jr. Finance and Enterprise in Early America: A Study of Stephen Girard’s Bank, 1812-1831. Philadelphia: University of Pennsylvania Press, 1978.

Alter, George, Claudia Goldin and Elyce Rotella. “The Savings of Ordinary Americans: The Philadelphia Saving Fund Society in the Mid-Nineteenth-Century.” Journal of Economic History 54, no. 4 (December 1994): 735-67.

Appleton, Nathan. A Defence of Country Banks: Being a Reply to a Pamphlet Entitled ‘An Examination of the Banking System of Massachusetts, in Reference to the Renewal of the Bank Charters.’ Boston: Stimpson & Clapp, 1831.

Appleton, Nathan. Bank Bills or Paper Currency and the Banking System of Massachusetts with Remarks on Present High Prices. Boston: Little, Brown and Company, 1856.

Berry, Thomas Senior. Revised Annual Estimates of American Gross National Product: Preliminary Estimates of Four Major Components of Demand, 1789-1889. Richmond: University of Richmond Bostwick Paper No. 3, 1978.

Bodenhorn, Howard. “Zombie Banks and the Demise of New York’s Safety Fund.” Eastern Economic Journal 22, no. 1 (1996): 21-34.

Bodenhorn, Howard. “Private Banking in Antebellum Virginia: Thomas Branch & Sons of Petersburg.” Business History Review 71, no. 4 (1997): 513-42.

Bodenhorn, Howard. A History of Banking in Antebellum America: Financial Markets and Economic Development in an Era of Nation-Building. Cambridge and New York: Cambridge University Press, 2000.

Bodenhorn, Howard. State Banking in Early America: A New Economic History. New York: Oxford University Press, 2002.

Bryan, Alfred C. A History of State Banking in Maryland. Baltimore: Johns Hopkins University Press, 1899.

Caldwell, Stephen A. A Banking History of Louisiana. Baton Rouge: Louisiana State University Press, 1935.

Calomiris, Charles W. “Deposit Insurance: Lessons from the Record.” Federal Reserve Bank of Chicago Economic Perspectives 13 (1989): 10-30.

Calomiris, Charles W., and Charles Kahn. “The Efficiency of Self-Regulated Payments Systems: Learnings from the Suffolk System.” Journal of Money, Credit, and Banking 28, no. 4 (1996): 766-97.

Chadbourne, Walter W. A History of Banking in Maine, 1799-1930. Orono: University of Maine Press, 1936.

Chaddock, Robert E. The Safety Fund Banking System in New York, 1829-1866. Washington, D.C.: Government Printing Office, 1910.

Daniels, Belden L. Pennsylvania: Birthplace of Banking in America. Harrisburg: Pennsylvania Bankers Association, 1976.

Davis, Lance, and Robert E. Gallman. “Capital Formation in the United States during the Nineteenth Century.” In Cambridge Economic History of Europe (Vol. 7, Part 2), edited by Peter Mathias and M.M. Postan, 1-69. Cambridge: Cambridge University Press, 1978.

Davis, Lance, and Robert E. Gallman. “Savings, Investment, and Economic Growth: The United States in the Nineteenth Century.” In Capitalism in Context: Essays on Economic Development and Cultural Change in Honor of R.M. Hartwell, edited by John A. James and Mark Thomas, 202-29. Chicago: University of Chicago Press, 1994.

Dewey, Davis R. State Banking before the Civil War. Washington, D.C.: Government Printing Office, 1910.

Duke, Basil W. History of the Bank of Kentucky, 1792-1895. Louisville: J.P. Morton, 1895.

Dwyer, Gerald P., Jr. “Wildcat Banking, Banking Panics, and Free Banking in the United States.” Federal Reserve Bank of Atlanta Economic Review 81, no. 3 (1996): 1-20.

Engerman, Stanley L., and Robert E. Gallman. “U.S. Economic Growth, 1783-1860.” Research in Economic History 8 (1983): 1-46.

Esary, Logan. State Banking in Indiana, 1814-1873. Indiana University Studies No. 15. Bloomington: Indiana University Press, 1912.

Fenstermaker, J. Van. The Development of American Commercial Banking, 1782-1837. Kent, Ohio: Kent State University, 1965.

Fenstermaker, J. Van, and John E. Filer. “Impact of the First and Second Banks of the United States and the Suffolk System on New England Bank Money, 1791-1837.” Journal of Money, Credit, and Banking 18, no. 1 (1986): 28-40.

Friedman, Milton, and Anna J. Schwartz. “Has the Government Any Role in Money?” Journal of Monetary Economics 17, no. 1 (1986): 37-62.

Gallman, Robert E. “American Economic Growth before the Civil War: The Testimony of the Capital Stock Estimates.” In American Economic Growth and Standards of Living before the Civil War, edited by Robert E. Gallman and John Joseph Wallis, 79-115. Chicago: University of Chicago Press, 1992.

Goldsmith, Raymond. Financial Structure and Development. New Haven: Yale University Press, 1969.

Golembe, Carter H. “The Deposit Insurance Legislation of 1933: An Examination of its Antecedents and Purposes.” Political Science Quarterly 76, no. 2 (1960): 181-200.

Golembe, Carter H. State Banks and the Economic Development of the West. New York: Arno Press, 1978.

Gouge, William M. A Short History of Paper Money and Banking in the United States. Philadelphia: T.W. Ustick, 1833.

Gras, N.S.B. The Massachusetts First National Bank of Boston, 1784-1934. Cambridge, MA: Harvard University Press, 1937.

Green, George D. Finance and Economic Development in the Old South: Louisiana Banking, 1804-1861. Stanford: Stanford University Press, 1972.

Hammond, Bray. Banks and Politics in America from the Revolution to the Civil War. Princeton: Princeton University Press, 1957.

Hasse, William F., Jr. A History of Banking in New Haven, Connecticut. New Haven: privately printed, 1946.

Hasse, William F., Jr. A History of Money and Banking in Connecticut. New Haven: privately printed, 1957.

Holdsworth, John Thom. Financing an Empire: History of Banking in Pennsylvania. Chicago: S.J. Clarke Publishing Company, 1928.

Huntington, Charles Clifford. A History of Banking and Currency in Ohio before the Civil War. Columbus: F. J. Herr Printing Company, 1915.

Knox, John Jay. A History of Banking in the United States. New York: Bradford Rhodes & Company, 1903.

Kuznets, Simon. “Foreword.” In Financial Intermediaries in the American Economy, by Raymond W. Goldsmith. Princeton: Princeton University Press, 1958.

Lake, Wilfred. “The End of the Suffolk System.” Journal of Economic History 7, no. 4 (1947): 183-207.

Lamoreaux, Naomi R. Insider Lending: Banks, Personal Connections, and Economic Development in Industrial New England. Cambridge: Cambridge University Press, 1994.

Lesesne, J. Mauldin. The Bank of the State of South Carolina. Columbia: University of South Carolina Press, 1970.

Lewis, Lawrence, Jr. A History of the Bank of North America: The First Bank Chartered in the United States. Philadelphia: J.B. Lippincott & Company, 1882.

Lockard, Paul A. Banks, Insider Lending and Industries of the Connecticut River Valley of Massachusetts, 1813-1860. Unpublished Ph.D. thesis, University of Massachusetts, 2000.

Martin, Joseph G. A Century of Finance. New York: Greenwood Press, 1969.

Moulton, H.G. “Commercial Banking and Capital Formation.” Journal of Political Economy 26 (1918): 484-508, 638-63, 705-31, 849-81.

Mullineaux, Donald J. “Competitive Monies and the Suffolk Banking System: A Contractual Perspective.” Southern Economic Journal 53 (1987): 884-98.

Nevins, Allan. History of the Bank of New York and Trust Company, 1784 to 1934. New York: privately printed, 1934.

New York. Bank Commissioners. “Annual Report of the Bank Commissioners.” New York General Assembly Document No. 74. Albany, 1835.

North, Douglass. “Institutional Change in American Economic History.” In American Economic Development in Historical Perspective, edited by Thomas Weiss and Donald Schaefer, 87-98. Stanford: Stanford University Press, 1994.

Rappaport, George David. Stability and Change in Revolutionary Pennsylvania: Banking, Politics, and Social Structure. University Park, PA: The Pennsylvania State University Press, 1996.

Redlich, Fritz. The Molding of American Banking: Men and Ideas. New York: Hafner Publishing Company, 1947.

Rockoff, Hugh. “The Free Banking Era: A Reexamination.” Journal of Money, Credit, and Banking 6, no. 2 (1974): 141-67.

Rockoff, Hugh. “New Evidence on the Free Banking Era in the United States.” American Economic Review 75, no. 4 (1985): 886-89.

Rolnick, Arthur J., and Warren E. Weber. “Free Banking, Wildcat Banking, and Shinplasters.” Federal Reserve Bank of Minneapolis Quarterly Review 6 (1982): 10-19.

Rolnick, Arthur J., and Warren E. Weber. “New Evidence on the Free Banking Era.” American Economic Review 73, no. 5 (1983): 1080-91.

Rolnick, Arthur J., Bruce D. Smith, and Warren E. Weber. “Lessons from a Laissez-Faire Payments System: The Suffolk Banking System (1825-58).” Federal Reserve Bank of Minneapolis Quarterly Review 22, no. 3 (1998): 11-21.

Royalty, Dale. “Banking and the Commonwealth Ideal in Kentucky, 1806-1822.” Register of the Kentucky Historical Society 77 (1979): 91-107.

Schumpeter, Joseph A. The Theory of Economic Development: An Inquiry into Profit, Capital, Credit, Interest, and the Business Cycle. Cambridge, MA: Harvard University Press, 1934.

Schweikart, Larry. Banking in the American South from the Age of Jackson to Reconstruction. Baton Rouge: Louisiana State University Press, 1987.

Simonton, William G. Maine and the Panic of 1837. Unpublished master’s thesis, University of Maine, 1971.

Sokoloff, Kenneth L. “Productivity Growth in Manufacturing during Early Industrialization.” In Long-Term Factors in American Economic Growth, edited by Stanley L. Engerman and Robert E. Gallman. Chicago: University of Chicago Press, 1986.

Sokoloff, Kenneth L. “Invention, Innovation, and Manufacturing Productivity Growth in the Antebellum Northeast.” In American Economic Growth and Standards of Living before the Civil War, edited by Robert E. Gallman and John Joseph Wallis, 345-78. Chicago: University of Chicago Press, 1992.

Spencer, Charles, Jr. The First Bank of Boston, 1784-1949. New York: Newcomen Society, 1949.

Starnes, George T. Sixty Years of Branch Banking in Virginia. New York: Macmillan Company, 1931.

Stokes, Howard Kemble. Chartered Banking in Rhode Island, 1791-1900. Providence: Preston & Rounds Company, 1902.

Sylla, Richard. “Forgotten Men of Money: Private Bankers in Early U.S. History.” Journal of Economic History 36, no. 2 (1976).

Temin, Peter. The Jacksonian Economy. New York: W. W. Norton & Company, 1969.

Trescott, Paul B. Financing American Enterprise: The Story of Commercial Banking. New York: Harper & Row, 1963.

Trivoli, George. The Suffolk Bank: A Study of a Free-Enterprise Clearing System. London: The Adam Smith Institute, 1979.

U.S. Comptroller of the Currency. Annual Report of the Comptroller of the Currency. Washington, D.C.: Government Printing Office, 1931.

Wainwright, Nicholas B. History of the Philadelphia National Bank. Philadelphia: William F. Fell Company, 1953.

Walker, Amasa. History of the Wickaboag Bank. Boston: Crosby, Nichols & Company, 1857.

Wallis, John Joseph. “What Caused the Panic of 1839?” Unpublished working paper, University of Maryland, October 2000.

Weiss, Thomas. “U.S. Labor Force Estimates and Economic Growth, 1800-1860.” In American Economic Growth and Standards of Living before the Civil War, edited by Robert E. Gallman and John Joseph Wallis, 19-75. Chicago: University of Chicago Press, 1992.

Whitney, David R. The Suffolk Bank. Cambridge, MA: Riverside Press, 1878.

Wright, Robert E. “Artisans, Banks, Credit, and the Election of 1800.” The Pennsylvania Magazine of History and Biography 122, no. 3 (July 1998), 211-239.

Wright, Robert E. “Bank Ownership and Lending Patterns in New York and Pennsylvania, 1781-1831.” Business History Review 73, no. 1 (Spring 1999), 40-60.

[1] Banknotes were small-denomination IOUs printed by banks and circulated as currency. Modern U.S. money is simply banknotes issued by the Federal Reserve, which has a monopoly privilege in the issue of legal tender currency. In antebellum America, when a bank made a loan, the borrower was typically handed banknotes with a face value equal to the dollar value of the loan. The borrower then spent these banknotes in purchasing goods and services, putting them into circulation. Contemporary law held that banks were required to redeem banknotes into gold and silver legal tender on demand. Banks found it profitable to issue notes because they typically held only about 30 percent of the total value of banknotes in circulation as reserves. Thus, banks were able to leverage $30 in gold and silver into $100 in loans that returned about 7 percent interest on average.
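
To make the leverage arithmetic in this note concrete, the short sketch below (in Python, added here purely for illustration) works through the numbers. It is a hypothetical example using only the 30 percent reserve ratio and 7 percent average loan rate mentioned above; it ignores operating costs, note redemptions, and loan losses.

    # Hypothetical illustration of the fractional-reserve arithmetic in note [1].
    reserve_ratio = 0.30   # specie held against banknotes in circulation (from the note)
    loan_rate = 0.07       # average interest earned on loans (from the note)

    specie = 30.0                           # dollars of gold and silver held as reserves
    notes_lent = specie / reserve_ratio     # $100 of banknotes lent out against $30 of specie
    interest_income = notes_lent * loan_rate

    print(f"Specie reserves:        ${specie:.2f}")
    print(f"Notes lent out:         ${notes_lent:.2f}")
    print(f"Annual interest:        ${interest_income:.2f}")
    print(f"Gross return on specie: {interest_income / specie:.1%}")   # roughly 23 percent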

[2] Paul Lockard (2000) challenges Lamoreaux’s interpretation. In a study of four banks in the Connecticut River valley, Lockard finds that insiders did not dominate these banks’ resources. As provocative as Lockard’s findings are, he draws conclusions from a small and unrepresentative sample. Two of his four sample banks were savings banks, quasi-charitable organizations designed to encourage saving by the working classes and to provide small loans. Thus, Lockard’s sample is effectively reduced to two banks. At these two banks, he identifies about 10 percent of loans as insider loans, but readily admits that he cannot always distinguish between insiders and outsiders. For a recent study of how early Americans used savings banks, see Alter, Goldin and Rotella (1994). The literature on savings banks is so large that it cannot be given its due here.

[3] Interbank clearing involves the settling of balances between banks. Modern banks cash checks drawn on other banks and credit the funds to the depositor. The Federal Reserve System provides clearing services between banks: the accepting bank sends the checks to the Federal Reserve, which credits the sending bank’s account and sends the checks back to the banks on which they were drawn for reimbursement. In the antebellum era, interbank clearing involved sending banknotes back to the issuing banks. Because New England had so many small and scattered banks, the costs of returning banknotes to their issuers were large and were sometimes avoided by recirculating notes of distant banks rather than returning them. Regular clearings and redemptions served an important purpose, however, because they kept banks in touch with current market conditions. A massive redemption of notes indicated a declining demand for money and credit, and because a bank’s reserves were drawn down with the redemptions, it was forced to reduce its volume of loans in accord with changing demand conditions.

[4] The law held that banknotes were redeemable on demand into gold or silver coin or bullion. If a bank refused to redeem even a single $1 banknote, the banknote holder could have the bank closed and liquidated to recover his or her claim against it.

[5] Rappaport (1996) found that the bank’s loans were about equally divided between insiders (shareholders and shareholders’ family and business associates) and outsiders, but nonshareholders received loans about 30 percent smaller than shareholders did. Whether this was an “insider” bank remains an open question and depends largely on one’s definition. Any modern bank that made half of its loans to shareholders and their families would be viewed as an “insider” bank. It is less clear where the line can be usefully drawn for antebellum banks.

[6] Real-bills lending followed from a nineteenth-century banking philosophy, which held that bank lending should be used to finance the warehousing or wholesaling of already-produced goods. Loans made on these bases were thought to be self-liquidating in that the loan was made against readily sold collateral actually in the hands of a merchant. Under the real-bills doctrine, the banks’ proper functions were to bridge the gap between production and retail sale of goods. A strict adherence to real-bills tenets excluded loans on property (mortgages), loans on goods in process (trade credit), or loans to start-up firms (venture capital). Thus, real-bills lending prescribed a limited role for banks and bank credit. Few banks were strict adherents to the doctrine, but many followed it in large part.

[7] Robert E. Wright (1998) offers a different interpretation, but notes that Burr pushed the bill through at the end of a busy legislative session so that many legislators voted on the bill without having read it thoroughly or at all.

The Protestant Ethic Thesis

Donald Frey, Wake Forest University

German sociologist Max Weber (1864-1920) developed the Protestant-ethic thesis in two journal articles published in 1904-05. The English translation appeared in book form as The Protestant Ethic and the Spirit of Capitalism in 1930. Weber argued that Reformed (i.e., Calvinist) Protestantism was the seedbed of character traits and values that undergirded modern capitalism. This article summarizes Weber’s formulation, considers criticisms of Weber’s thesis, and reviews evidence of linkages between cultural values and economic growth.

Outline of Weber’s Thesis

Weber emphasized that money making as a calling had been “contrary to the ethical feelings of whole epochs…” (Weber 1930, p.73; further Weber references by page number alone). Lacking moral support in pre-Protestant societies, business had been strictly limited to “the traditional manner of life, the traditional rate of profit, the traditional amount of work…” (67). Yet, this pattern “was suddenly destroyed, and often entirely without any essential change in the form of organization…” Calvinism, Weber argued, changed the spirit of capitalism, transforming it into a rational and unashamed pursuit of profit for its own sake.

In an era when religion dominated all of life, Martin Luther’s (1483-1546) insistence that salvation was by God’s grace through faith had placed all vocations on the same plane. Contrary to medieval belief, religious vocations were no longer considered superior to economic vocations, for only personal faith mattered with God. Nevertheless, Luther did not push this potential revolution further because he clung to a traditional, static view of economic life. John Calvin (1509-1564), or more accurately Calvinism, changed that.

Calvinism accomplished this transformation, not so much by its direct teachings, but (according to Weber) by the interaction of its core theology with human psychology. Calvin had pushed the doctrine of God’s grace to the limits of the definition: grace is a free gift, something that the Giver, by definition, must be free to bestow or withhold. Under this definition, sacraments, good deeds, contrition, virtue, assent to doctrines, etc. could not influence God (104); for, if they could, that would turn grace into God’s side of a transaction instead of its being a pure gift. Such absolute divine freedom, from mortal man’s perspective, however, seemed unfathomable and arbitrary (103). Thus, whether one was among those saved (the elect) became the urgent question for the average Reformed churchman, according to Weber.

Uncertainty about salvation, according to Weber, had the psychological effect of producing a single-minded search for certainty. Although one could never influence God’s decision to extend or withhold election, one might still attempt to ascertain his or her status. A life that “… served to increase the glory of God” presumably flowed naturally from a state of election (114). If one glorified God and conformed to what was known of God’s requirements for this life then that might provide some evidence of election. Thus upright living, which could not earn salvation, returned as evidence of salvation.

The upshot was that the Calvinist’s living was “thoroughly rationalized in this world and dominated by the aim to add to the glory of God in earth…” (118). Such a life became a systematic living out of God’s revealed will. This singleness of purpose left no room for diversion and created what Weber called an ascetic character. “Not leisure and enjoyment, but only activity serves to increase the glory of God, according to the definite manifestations of His will” (157). Only in a calling does this focus find full expression. “A man without a calling thus lacks the systematic, methodical character which is… demanded by worldly asceticism” (161). A calling represented God’s will for that person in the economy and society.

Such emphasis on a calling was but a small step from a full-fledged capitalistic spirit. In practice, according to Weber, that small step was taken, for “the most important criterion [of a calling] is … profitableness. For if God … shows one of His elect a chance of profit, he must do it with a purpose…” (162). This “providential interpretation of profit-making justified the activities of the business man,” and led to “the highest ethical appreciation of the sober, middle-class, self-made man” (163).

A sense of calling and an ascetic ethic applied to laborers as well as to entrepreneurs and businessmen. Nascent capitalism required reliable, honest, and punctual labor (23-24), which in traditional societies had not existed (59-62). That free labor would voluntarily submit to the systematic discipline of work under capitalism required an internalized value system unlike any seen before (63). Calvinism provided this value system (178-79).

Weber’s “ascetic Protestantism” was an all-encompassing value system that shaped one’s whole life, not merely ethics on the job. Life was to be controlled the better to serve God. Impulse and those activities that encouraged impulse, such as sport or dance, were to be shunned. External finery and ornaments turned attention away from inner character and purpose; so the simpler life was better. Excess consumption and idleness were resources wasted that could otherwise glorify God. In short, the Protestant ethic ordered life according to its own logic, but also according to the needs of modern capitalism as understood by Weber.

An adequate summary requires several additional points. First, Weber virtually ignored the issue of usury or interest. This contrasts with some writers who take a church’s doctrine on usury to be the major indicator of its sympathy to capitalism. Second, Weber magnified the extent of his Protestant ethic by claiming to find Calvinist economic traits in later, otherwise non-Calvinist Protestant movements. He recalled the Methodist John Wesley’s (1703-1791) “Earn all you can, save all you can, give all you can,” and ascetic practices by followers of the eighteenth-century Moravian leader Nicholas von Zinzendorf (1700-1760). Third, Weber thought that, once established, the spirit of modern capitalism could perpetuate its values without religion, citing Benjamin Franklin, whose ethic already rested on utilitarian foundations. Fourth, Weber’s book showed little sympathy for either Calvinism, which he thought encouraged a “spiritual aristocracy of the predestined saints” (121), or capitalism, which he thought irrational for valuing profit for its own sake. Finally, although Weber’s thesis could be viewed as a rejoinder to Karl Marx (1818-1883), Weber claimed it was not his goal to replace Marx’s one-sided materialism with “an equally one-sided spiritualistic causal interpretation…” of capitalism (183).

Critiques of Weber

Critiques of Weber can be put into three categories. First, Weber might have been wrong about the facts: modern capitalism might have arisen before Reformed Protestantism or in places where the Reformed influence was much smaller than Weber believed. Second, Weber might have misinterpreted Calvinism or, more narrowly, Puritanism; if Reformed teachings were not what Weber supposed, then logically they might not have supported capitalism. Third, Weber might have overstated capitalism’s need for the ascetic practices produced by Reformed teachings.

On the first count, Weber has been criticized by many. During the early twentieth century, historians studied the timing of the emergence of capitalism and Calvinism in Europe. E. Fischoff (1944, 113) reviewed the literature and concluded that the “timing will show that Calvinism emerged later than capitalism where the latter became decisively powerful,” suggesting no cause-and-effect relationship. Roland Bainton also suggests that the Reformed contributed to the development of capitalism only as a “matter of circumstance” (Bainton 1952, 254). The Netherlands “had long been the mart of Christendom, before ever the Calvinists entered the land.” Finally, Kurt Samuelsson (1957) concedes that “the Protestant countries, and especially those adhering to the Reformed church, were particularly vigorous economically” (Samuelsson, 102). However, he finds much reason to discredit a cause-and-effect relationship. Sometimes capitalism preceded Calvinism (Netherlands), and sometimes lagged by too long a period to suggest causality (Switzerland). Sometimes Catholic countries (Belgium) developed about the same time as the Protestant countries. Even in America, capitalist New England was cancelled out by the South, which Samuelsson claims also shared a Puritan outlook.

Weber himself, perhaps seeking to circumvent such evidence, created a distinction between traditional capitalism and modern capitalism. The view that traditional capitalism could have existed first, but that Calvinism in some meaningful sense created modern capitalism, depends on too fine a distinction according to critics such as Samuelsson. Nevertheless, because of the impossibility of controlled experiments to firmly resolve the question, the issue will never be completely closed.

The second type of critique is that Weber misinterpreted Calvinism or Puritanism. British scholar R. H. Tawney in Religion and the Rise of Capitalism (1926) noted that Weber treated multi-faceted Reformed Christianity as though it were equivalent to late-era English Puritanism, the period from which Weber’s most telling quotes were drawn. Tawney observed that the “iron collectivism” of Calvin’s Geneva had evolved before Calvinism became harmonious with capitalism. “[Calvinism] had begun by being the very soul of authoritarian regimentation. It ended by being the vehicle of an almost Utilitarian individualism” (Tawney 1962, 226-7). Nevertheless, Tawney affirmed Weber’s point that Puritanism “braced [capitalism’s] energies and fortified its already vigorous temper.”

Roland Bainton in his own history of the Reformation disputed Weber’s psychological claims. Despite the psychological uncertainty Weber imputed to Puritans, their activism could be “not psychological and self-centered but theological and God-centered” (Bainton 1952, 252-53). That is, God ordered all of life and society, and Puritans felt obliged to act on His will. And if some Puritans scrutinized themselves for evidence of election, “the test was emphatically not economic activity as such but upright character…” He concludes that Calvinists had no particular affinity for capitalism but that they brought “vitality and drive into every area … whether they were subduing a continent, overthrowing a monarchy, or managing a business, or reforming the evils of the very order which they helped to create” (255).

Samuelsson, in a long section (27-48), argued that Puritan leaders did not truly endorse capitalistic behavior. Rather, they were ambivalent. Given that Puritan congregations were composed of businessmen and their families (who allied with Puritan churches because both wished for less royal control of society), the preachers could hardly condemn capitalism. Instead, they clarified “the moral conditions under which a prosperous, even wealthy, businessman may, despite success and wealth, become a good Christian” (38). But this, Samuelsson makes clear, was hardly a ringing endorsement of capitalism.

Criticisms that what Weber described as Puritanism was not true Puritanism, much less Calvinism, may be correct but beside the point. Puritan leaders indeed condemned exclusive devotion to one’s business because it excluded God and the common good. Thus, the Protestant ethic as described by Weber apparently would have been a deviation from pure doctrine. However, the pastors’ very attacks suggest that such a (mistaken) spirit did exist within their flocks. But such mistaken doctrine, if widespread enough, could still have contributed to the formation of the capitalist spirit.

Furthermore, any misinterpretation of Puritan orthodoxy was not entirely the fault of Puritan laypersons. Puritan theologians and preachers could place heavier emphasis on economic success and virtuous labor than critics such as Samuelsson would admit. The American preacher John Cotton (1582-1652) made clear that God “would have his best gifts improved to the best advantage.” The respected theologian William Ames (1576-1633) spoke of “taking and using rightly opportunity.” And, speaking of the idle, Cotton Mather said, “find employment for them, set them to work, and keep them at work…” A lesser standard would hardly apply to his hearers. Although these exhortations were usually balanced with admonitions to use wealth for the common good, and not to be motivated by greed, they are nevertheless clear endorsements of vigorous economic behavior. Puritan leaders may have placed boundaries around economic activism, but they still preached activism.

Frey (1998) has argued that orthodox Puritanism exhibited an inherent tension between approval of economic activity and emphasis upon the moral boundaries that define acceptable economic activity. A calling was never meant for the service of self alone but for the service of God and the common good. That is, Puritan thinkers always viewed economic activity against the backdrop of social and moral obligation. Perhaps what orthodox Puritanism contributed to capitalism was a sense of economic calling bounded by moral responsibility. In an age when Puritan theologians were widely read, William Ames defined the essence of the business contract as “upright dealing, by which one does sincerely intend to oblige himself…” If nothing else, business would be enhanced and made more efficient by an environment of honesty and trust.

Finally, whether Weber misinterpreted Puritanism is one issue. Whether he misinterpreted capitalism by exaggerating the importance of asceticism is another. Weber’s favorite exemplar of capitalism, Benjamin Franklin, did advocate unremitting personal thrift and discipline. No doubt, certain sectors of capitalism advanced by personal thrift, sometimes carried to the point of deprivation. Samuelsson (83-87), however, raises serious questions about whether thrift could have contributed even in a minor way to the creation of the large fortunes of capitalists. Perhaps more important than personal fortunes is the finance of business. The retained earnings of successful enterprises, rather than personal savings, probably have provided a major source of funding for business ventures from the earliest days of capitalism. And successful capitalists, even in Puritan New England, have been willing to enjoy at least some of the fruits of their labors. Perhaps the spirit of capitalism was not the spirit of asceticism.

Evidence of Links between Values and Capitalism

Despite the critics, some have taken the Protestant ethic to be a contributing cause of capitalism, perhaps a necessary cause. Sociologist C. T. Jonassen (1947) understood the Protestant ethic this way. By examining a case of capitalism’s emergence in the nineteenth century, rather than in the Reformation or Puritan eras, he sought to resolve some of the uncertainties of studying earlier eras. Jonassen argued that capitalism emerged in nineteenth-century Norway only after an indigenous, Calvinist-like movement challenged the Lutheranism and Catholicism that had dominated the country. Capitalism had not “developed in Norway under centuries of Catholic and Lutheran influence,” although it appeared only “two generations after the introduction of a type of religion that produced the same behavior as Calvinism” (Jonassen, 684). Jonassen’s argument also discounted other often-cited causes of capitalism, such as the early discoveries of science, the Renaissance, or developments in post-Reformation Catholicism; these factors had existed for centuries by the nineteenth century and still had left Norway as a non-capitalist society. Only in the nineteenth century, after a Calvinist-like faith emerged, did capitalism develop.

Engerman’s (2000) review of economic historians shows that they have given little explicit attention to Weber in recent years. However, they show an interest in the impact of cultural values broadly understood on economic growth. A modified version of the Weber thesis has also found some support in empirical economic research. Granato, Inglehart and Leblang (1996, 610) incorporated cultural values in cross-country growth models on the grounds that Weber’s thesis fits the historical evidence in Europe and America. They did not focus on Protestant values, but accepted “Weber’s more general concept, that certain cultural factors influence economic growth…” Specifically they incorporated a measure of “achievement motivation” in their regressions and concluded that such motivation “is highly relevant to economic growth rates” (625). Conversely, they found that “post-materialist” (i.e., environmentalist) values are correlated with slower economic growth. Barro’s (1997, 27) modified Solow growth models also find that a “rule of law index” is associated with more rapid economic growth. This index is a proxy for such things as “effectiveness of law enforcement, sanctity of contracts and … the security of property rights.” Recalling Puritan theologian William Ames’ definition of a contract, one might conclude that a religion such as Puritanism could create precisely the cultural values that Barro finds associated with economic growth.

Conclusion

Max Weber’s thesis has attracted the attention of scholars and researchers for most of a century. Some (including Weber) deny that the Protestant ethic should be understood as a cause of capitalism, holding that it merely points to a congruence between a culture’s religion and its economic system. Yet Weber, despite his own protests, wrote as though he believed that traditional capitalism would never have turned into modern capitalism except for the Protestant ethic, implying causality of sorts. Historical evidence from the Reformation era (sixteenth century) does not provide much support for a strong (causal) interpretation of the Protestant ethic. However, the emergence of a vigorous capitalism in Puritan England and its American colonies (and the case of Norway) at least keeps the case open. More recent quantitative evidence supports the hypothesis that cultural values count in economic development. The cultural values examined in recent studies are not religious values as such. Rather, such presumably secular values as the need to achieve, intolerance for corruption, and respect for property rights are all correlated with economic growth. However, in its own time Puritanism produced a social and economic ethic known for precisely these sorts of values.

References

Bainton, Roland. The Reformation of the Sixteenth Century. Boston: Beacon Press, 1952.

Barro, Robert. Determinants of Economic Growth: A Cross-country Empirical Study. Cambridge, MA: MIT Press, 1997.

Engerman, Stanley. “Capitalism, Protestantism, and Economic Development.” EH.NET, 2000. http://www.eh.net/bookreviews/library/engerman.shtml

Fischoff, Ephraim. “The Protestant Ethic and the Spirit of Capitalism: The History of a Controversy.” Social Research (1944). Reprinted in R. W. Green (ed.), Protestantism and Capitalism: The Weber Thesis and Its Critics. Boston: D.C. Heath, 1958.

Frey, Donald E. “Individualist Economic Values and Self-Interest: The Problem in the Protestant Ethic.” Journal of Business Ethics (Oct. 1998).

Granato, Jim, R. Inglehart and D. Leblang. “The Effect of Cultural Values on Economic Development: Theory, Hypotheses and Some Empirical Tests.” American Journal of Political Science (Aug. 1996).

Green, Robert W. (ed.). Protestantism and Capitalism: The Weber Thesis and Its Critics. Boston: D.C. Heath, 1959.

Jonassen, Christen. “The Protestant Ethic and the Spirit of Capitalism in Norway.” American Sociological Review (Dec. 1947).

Samuelsson, Kurt. Religion and Economic Action. Toronto: University of Toronto Press, 1993 [orig. 1957].

Tawney, R. H. Religion and the Rise of Capitalism. Gloucester, MA: Peter Smith, 1962 [orig., 1926].

Weber, Max. The Protestant Ethic and the Spirit of Capitalism. New York: Charles Scribner’s Sons, 1958 [orig. 1930].

Citation: Frey, Donald. “Protestant Ethic Thesis”. EH.Net Encyclopedia, edited by Robert Whaples. August 14, 2001. URL http://eh.net/encyclopedia/the-protestant-ethic-thesis/

Path Dependence

Douglas Puffert, University of Warwick

Path dependence is the dependence of economic outcomes on the path of previous outcomes, rather than simply on current conditions. In a path dependent process, “history matters” — it has an enduring influence. Choices made on the basis of transitory conditions can persist long after those conditions change. Thus, explanations of the outcomes of path-dependent processes require looking at history, rather than simply at current conditions of technology, preferences, and other factors that determine outcomes.

Path-dependent features of the economy range from small-scale technical standards to large-scale institutions and patterns of economic development. Several of the most prominent path-dependent features of the economy are technical standards, such as the “QWERTY” standard typewriter (and computer) keyboard and the “standard gauge” of railway track — i.e., the width between the rails. The case of QWERTY has been particularly controversial, and it is discussed at some length below. The case of track gauge is useful for introducing several typical features of path-dependent processes and their outcomes.

Standard Railway Gauges and the Questions They Suggest

Four feet 8-1/2 inches (1.435 meters) is the standard gauge for railways throughout North America, in much of Europe, and altogether on over half of the world’s railway routes. Indeed, it has been the most common gauge throughout the history of modern railways, since the late 1820s. Should we conclude, as economists often do for popular products or practices, that this standard gauge has proven itself technically and economically optimal? Has it been chosen because of its superior performance or lower costs? If so, has it proven superior for every new generation of railway technology and for all changes in traffic conditions? What of the other gauges, broader or narrower, that are used as local standards in some parts of the world — are these gauges generally used because different technology or different traffic conditions in those regions favor these gauges?

The answer to all these questions is no. The consensus of engineering opinion has usually favored gauges broader than 4’8.5″, and in the late nineteenth century an important minority of engineers favored narrower gauges. Nevertheless, the gauge of 4’8.5″ has always had greater use in practice because of the history of its use. Indeed, even the earliest modern railways adopted the gauge as a result of history. The “father of railways,” British engineer George Stephenson, had experience using the gauge on an older system of primitive coal tramways serving a small group of mines near Newcastle, England. Rather than determining optimal gauge anew for a new generation of railways, he simply continued his prior practice. Thus the gauge first adopted more than two hundred years ago for horse-drawn coal carts is the gauge now used for powerful locomotives, massive tonnages of freight shipments, and passenger trains traveling at speeds as great as 300 kilometers per hour (186 mph).

We will examine the case of railway track gauge in more detail below, along with other instances of path dependence. We first take an analytical look at what conditions may give rise to path dependence — or prevent it from arising, as some critics of the importance of path dependence have argued.

What Conditions Give Rise to Path Dependence?

Durability of Capital Equipment

The most trivial — and uninteresting — form of path dependence is based simply on the durability of capital equipment. Obsolete, inferior equipment may remain in use because its fixed cost is already “sunk” or paid for, while its variable costs are lower than the total costs of replacing it with a new generation of equipment. The duration of this sort of path dependence is limited by the service life of the obsolete equipment.

Technical Interrelatedness

In railways, none of the original gauge-specific capital equipment from the early nineteenth century remains in use today. Why, then, has Stephenson’s standard gauge persisted? Part of the reason is the technical interrelatedness of railway track and the wheel sets of rolling stock. When either track or rolling stock wears out, it must be replaced with equipment of the same gauge, so that the wheels will still fit the track and the track will still fit the wheels. Railways almost never replace all their track and rolling stock at the same time. Thus a gauge readily persists beyond the life of any piece of equipment that uses it.

Increasing Returns

A further reason for the persistence, and indeed spread, of the Stephenson gauge is increasing returns to the extent of use. Different railway companies or administrations benefit from using a common gauge, because this saves costs and improves both service quality and profits on through-shipments or passenger trips that pass over each other’s track. New railways have therefore nearly always adopted the gauge of established connecting lines, even when engineers have favored different gauges. Once built, railway lines are reluctant to change their gauge unless neighboring lines do so as well. This adds coordination costs to the physical costs of any conversion.

In early articles on path dependence, Paul David (1985, 1987) listed these same three conditions for path dependence: first, the technical interrelatedness of system components; second, increasing returns to scale in the use of a common technique; and, third, “quasi-irreversibility of investment,” for example in the durability of capital equipment (or of human capital). The third condition gives rise to switching costs, while the first two conditions make gradual change impractical and rapid change costly, due to the transactions costs required to coordinate the actions of different agents. Thus together, these three conditions may lend persistence or stability to a particular path of outcomes, “locking in” a particular feature of the economy, such as a standard railway track gauge.

David’s early work on path dependence represents, in part, the culmination of an earlier economic literature on technical interrelatedness (Veblen 1915; Frankel 1955; Kindleberger 1964; David 1975). By contrast, the other co-developer of the concept of path dependence, W. Brian Arthur, based his ideas on an analogy between increasing returns in the economy, particularly when expressed in the form of positive externalities, and conditions that give rise to positive feedbacks in the natural sciences.

Dynamic Increasing Returns to Adoption

In a series of theoretical papers starting in the early 1980s, Arthur (1989, 1990, 1994) emphasized the role of “increasing returns to adoption,” especially dynamic increasing returns that develop over time. These increasing returns might arise on the supply side of a market, as a result of learning effects that lower the cost or improve the quality of a product as its cumulative production increases. Alternatively, increasing returns might arise on the demand side of a market, as a result of positive “network” externalities, which raise the value of a product or technique for each user as the total number of users increases (Katz and Shapiro 1985, 1994). In the context of railways, for example, a railway finds a particular track gauge more valuable if a greater number of connecting railways use that gauge. (Note that a track gauge is not a “product” but rather a “technology,” as Arthur puts it, or a “technique,” as I prefer to call it.)

In Arthur’s (1989) basic analytical framework, “small events,” which he treated as random, lead to early fluctuations in the market shares of competing techniques. These fluctuations are magnified by positive feedbacks, because techniques with larger market shares tend to be more valuable to new adopters. As a result, one technique grows in market share until it is “locked in” as a de facto standard. In a simple version of Arthur’s model (Table 1), different consumers or firms initially favor different products or techniques. At first, market share for each technique fluctuates randomly, depending on how many early adopters happen to prefer each technique. Eventually, however, one of the techniques will gain enough of a lead in market share that it will offer higher payoffs to everyone — including to the consumers or firms that have a preference for the minority technique. For example, if the total number of adoptions for technique A reaches 80, while the number of adoptions of B is less than 60, then technique A offers higher payoffs for everyone, and it is locked in as the de facto standard.

Table 1. Adoption Payoffs in Arthur’s Basic Model

Number of previous adoptions 0 10 20 30 40 50 60 70 80 90
“R-type agents” (who prefer technique A):
Technique A 10 11 12 13 14 15 16 17 18 19
Technique B 8 9 10 11 12 13 14 15 16 17
“S-type agents” (who prefer technique B):
Technique A 8 9 10 11 12 13 14 15 16 17
Technique B 10 11 12 13 14 15 16 17 18 19

Source: Adapted from Arthur (1989).
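A short simulation can make this lock-in dynamic concrete. The Python sketch below is a minimal illustration built on the Table 1 payoffs, not Arthur's exact specification: it assumes agents arrive one at a time in random order, that the two agent types are equally likely, and that an indifferent agent follows its own preference (the function names and parameters are simply illustrative). Re-running it with different random seeds shows that which technique becomes the standard varies with the order of early arrivals.

```python
import random

def payoff(base, prior_adoptions):
    # Payoff schedule from Table 1: the base payoff (10 for an agent's preferred
    # technique, 8 for the other) rises by 1 for every 10 previous adoptions.
    return base + prior_adoptions // 10

def simulate(n_agents=2000, seed=0):
    rng = random.Random(seed)
    adoptions = {"A": 0, "B": 0}
    for _ in range(n_agents):
        agent_type = rng.choice(["R", "S"])  # R-types prefer A, S-types prefer B
        if agent_type == "R":
            choice = "A" if payoff(10, adoptions["A"]) >= payoff(8, adoptions["B"]) else "B"
        else:
            choice = "B" if payoff(10, adoptions["B"]) >= payoff(8, adoptions["A"]) else "A"
        adoptions[choice] += 1
    return adoptions

# Each run ends with the vast majority of adopters on one technique, but which
# technique wins varies with the random order of early arrivals.
for seed in range(5):
    print(seed, simulate(seed=seed))
```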

Which of the competing techniques becomes the de facto standard is unpredictable on the basis of systematic conditions. Rather, later outcomes depend on the specific early history of the process. If early “small” events and choices are governed in part by non-systematic factors — even “historical accidents” — then these factors may have large effects on later outcomes. This is in contrast to the predictions of standard economic models, where decreasing returns and negative feedbacks diminish the impact of non-systematic factors. To cite another illustration from the history of railways, George Stephenson’s personal background was a non-systematic or “accidental” factor that, due to positive feedbacks, had a large influence on the entire subsequent history of track gauge.

Efficiency, Foresight, Remedies, and the Controversy over Path Dependence

Arthur’s (1989) basic model of a path-dependent process considered a case in which the selection of one outcome (or one path of outcomes) rather than another has no consequences for general economic efficiency — different economic agents favor different techniques, but no technique is best for all. Arthur also, however, used a variation of his modeling approach to argue that an inefficient outcome is possible. He considered a case where one technique offers higher payoffs than another for larger numbers of cumulative adoptions (technique B in Table 2), while for smaller numbers the other technique offers higher payoffs (technique A). Arthur argued that, given his model’s assumptions, each new adopter, arriving in turn, will prefer technique A and adopt only it, resulting later in lower total payoffs than would have resulted if each adopter had chosen technique B. Arthur’s assumptions were, first, that each agent’s payoff depends only on the number of previous adoptions and, second, that the competing techniques are “unsponsored,” that is, not owned and promoted by suppliers.

Table 2. Adoption Payoffs in Arthur’s Alternative Model

Number of previous adoptions 0 10 20 30 40 50 60 70 80 90
All agents:
Technique A 10 11 12 13 14 15 16 17 18 19
Technique B 4 7 10 13 16 19 22 25 28 31

Source: Arthur (1989), table 2.
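A similarly hedged sketch, using the Table 2 payoff schedules and the same one-at-a-time, myopic adopters, illustrates the inefficiency Arthur described: every arriving agent finds technique A more rewarding at the moment of choice, so technique B never gets started, yet the cumulative payoff falls well short of what universal adoption of B would have produced. The function names are illustrative only.

```python
def payoff_a(n):
    # Table 2: technique A pays 10 with no prior adoptions, +1 per 10 adoptions.
    return 10 + n // 10

def payoff_b(n):
    # Table 2: technique B pays 4 with no prior adoptions, +3 per 10 adoptions.
    return 4 + 3 * (n // 10)

def myopic_path(n_agents=100):
    """Each arriving agent picks whichever technique pays more at that moment."""
    count = {"A": 0, "B": 0}
    total_payoff = 0
    for _ in range(n_agents):
        if payoff_a(count["A"]) >= payoff_b(count["B"]):
            total_payoff += payoff_a(count["A"])
            count["A"] += 1
        else:
            total_payoff += payoff_b(count["B"])
            count["B"] += 1
    return count, total_payoff

def all_b_payoff(n_agents=100):
    """Counterfactual: every agent adopts technique B from the start."""
    return sum(payoff_b(n) for n in range(n_agents))

print(myopic_path())   # ({'A': 100, 'B': 0}, 1450): technique A is locked in
print(all_b_payoff())  # 1750: the forgone, higher-payoff all-B path
```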

Liebowitz and Margolis’s Critique of Arthur’s Model

Arthur’s discussion of efficiency provided the starting point for a theoretical critique of path dependence offered by Stan Liebowitz and Stephen E. Margolis (1995). Liebowitz and Margolis argued that two conditions, when present, prevent path-dependent processes from resulting in inefficient outcomes: first, foresight into the effects of choices and, second, opportunities to coordinate people’s choices, using direct communication, market interactions, and active product promotion. Using Arthur’s payoff table (Table 2), Liebowitz and Margolis argued that the purposeful, rational behavior of forward-looking, profit-seeking economic agents can override the effects of events in the past. In particular, if agents can foresee that some potential outcomes will be more efficient than others, then they have incentives to avoid the suboptimal ones. Agents who already own — or else find ways to appropriate — products or techniques that offer superior outcomes can often earn substantial profits by steering the process to favor those products or techniques. For the situation in Table 2, for example, the supplier of product or technique B could draw early adopters to that technique by temporarily setting a price below cost, making a profit by raising price above cost later.
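As a purely hypothetical numerical illustration of this remedy (again built on the Table 2 payoffs, and assuming that a subsidized adopter who is indifferent chooses the sponsored technique, with adopter payoffs standing in for gains the sponsor could later recapture through above-cost pricing), a sponsor of technique B would need to cover the payoff gap only for roughly the first thirty adopters, an outlay far smaller than the extra surplus the B path eventually generates.

```python
def payoff_a(n): return 10 + n // 10       # Table 2 payoffs for technique A
def payoff_b(n): return 4 + 3 * (n // 10)  # Table 2 payoffs for technique B

def subsidy_to_steer_all_to_b(n_agents=100):
    """Total early subsidy needed so each arriving adopter weakly prefers B,
    given that the rival technique A then never accumulates adoptions."""
    return sum(max(0, payoff_a(0) - payoff_b(n)) for n in range(n_agents))

# Extra adopter surplus generated by the all-B path relative to the all-A path.
extra_surplus = sum(payoff_b(n) - payoff_a(n) for n in range(100))

print(subsidy_to_steer_all_to_b())  # 90: covers the payoff gap for the first 30 adopters
print(extra_surplus)                # 300: cumulative gain from the B path
```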

Thus, in Liebowitz and Margolis’s analysis, the sort of inefficient or inferior outcomes that can arise in Arthur’s model are often not true equilibrium outcomes that market processes would lead to in the real world. Rather, they argued, purposeful behavior is likely to remedy any inferior outcome — except where the costs of a remedy, including transactions costs, are greater than the potential benefits. In that case, they argued, an apparently “inferior” outcome is actually the most efficient one available, once all costs are taken into account. “Remediable” inefficiency, they argued in contrast, is highly unlikely to persist.

Liebowitz and Margolis’s analysis gave rise to a substantial controversy over the meaning and implications of path dependence. In the view of Liebowitz and Margolis, the major claims of the economists who promote the concept of path dependence have amounted to assertions of remediable inefficiency. Liebowitz and Margolis coined the term “third-degree” path dependence to refer to such cases. They contrasted this category both to “first-degree” path dependence, which has no implications for efficiency, and to “second-degree” path dependence, where transactions costs and/or the impossibility of foresight lead to outcomes that offer lower payoffs than some hypothetical — but unattainable — alternative. In Liebowitz and Margolis’s view, only “third-degree” path dependence offers scope for optimizing behavior, and thus only this type stands in conflict with what they call “the neoclassical model of relentlessly rational behavior leading to efficient, and therefore predictable, outcomes” (1995). Only this category of path dependence, they argue, would constitute market failure. They cast strong doubt on the likelihood of its occurrence, and they asserted that no empirical examples have been demonstrated.

Responses to Liebowitz and Margolis’s Critique

Proponents of the importance of path dependence have responded, in large part, by asserting that the interesting features of path dependence have little to do with the question of remediability. David (1997, 2000) argued that the concept of third-degree path dependence proves incoherent upon close examination and that Liebowitz and Margolis had misconstrued the issues at stake. The present author asserted that one can usefully incorporate several of Liebowitz and Margolis’s ideas on foresight and forward-looking behavior into the theory of path dependence while still affirming the claims made by proponents (Puffert 2000, 2002, 2003).

Imperfect Foresight and Inefficiency

One point that I have emphasized is that the cases of path dependence cited by proponents typically involve imperfect foresight, and sometimes other features, that make remediation impossible. Indeed, proponents of the importance of path dependence partly recognized this point prior to the work of Liebowitz and Margolis. Nobel Prize-winner Kenneth Arrow argued in his foreword to Arthur’s collected articles that Arthur’s modeling approach applies specifically to cases where foresight is imperfect, or “expectations are based on limited information” (Arthur 1994). Thus, economic agents cannot foresee future payoffs, and they cannot know how best to direct the process to the outcomes they would prefer. In terms of the payoffs in Table 2, technique A might become locked-in because adopters as well as suppliers initially think, mistakenly, that technique A will continue to offer the higher payoffs. Similarly, David (1987) had argued still earlier that path dependence is sometimes of interest precisely because lock-in might happen too quickly, before the payoffs of different paths are known. Lock-in, as David and Arthur use the term, applies to a stable equilibrium — i.e., to an outcome that, if inefficient, is not remediable. (Liebowitz and Margolis introduce a different definition of lock-in.)

Imperfect foresight is, of course, a common condition — and especially common for new, unproven products (or techniques) in untested markets. Part of the difference between path-dependent and “path-independent” processes is that foresight doesn’t matter for path-independent processes. No matter what the path of events, path-independent processes still end up at unique outcomes that are predictable on the basis of fundamental conditions. Generally, these predictable outcomes are those that are most efficient and that offer the highest payoffs. By contrast, path-dependent processes have multiple potential outcomes, and the outcome selected is not necessarily the one offering the highest payoffs. This contrast to the results of standard economic analysis is part of what makes path dependence interesting.

Winners, Losers and Path Dependence

Path dependence is also interesting, however, when the issue at stake is not the overall efficiency (i.e., Pareto efficiency) of the outcome, but rather the distribution of rewards between “winners” and “losers” — for example, between firms competing to establish their products or techniques as a de facto standard, resulting in profits or economic rents to the winner only. This is something that finds no place in Liebowitz and Margolis’s taxonomy of “degrees.” In keeping with Liebowitz and Margolis’s analysis, competing firms certainly exercise forward-looking behavior in efforts to determine the outcome, but imperfect information and imperfect control over circumstances still make the outcome path dependent, as some of the case studies below illustrate.

Lack of Agreement on What the Debate Is About

Finally, market failure per se has never been the primary concern of proponents of the importance of path dependence. Even when proponents have highlighted inefficiency as one possible consequence of path dependence, this inefficiency is often the result of imperfect foresight rather than of market failure. Market failure is, however, the primary concern of Liebowitz and Margolis. This difference in perspective is one reason that the arguments of proponents and opponents have often failed to meet head on, as we shall consider in several case studies.

These contrasting analytical arguments can best be assessed through empirical cases. The case of the QWERTY keyboard is considered first, because it has generated the most controversy and it illustrates opposing arguments. Three further cases are particularly useful for the lessons they offer. Britain’s “coal wagon problem” offers a strong example of inefficiency. The worldwide history of railway track gauge, now considered at greater length, illustrates the roles of foresight (or lack thereof) and transitory circumstances, as well as the role of purposeful behavior to remedy outcomes. The case of competition in videocassette recorders illustrates how path dependence is compatible with purposeful behavior, and it shows how proponents and critics of the importance of path dependence can offer different interpretations of the same events.

The Debate over QWERTY

The most influential empirical case has been that of the “QWERTY” standard typewriter and computer keyboard, named for the first letters appearing on the top row of keys. The concept of path dependence first gained widespread attention through David’s (1985, 1986) interpretation of the emergence and persistence of the QWERTY standard. The critique of path dependence began with the alternative interpretation offered by Liebowitz and Margolis (1990).

David (1986) noted that the QWERTY keyboard was designed, in part, to reduce mechanical jamming on an early typewriter design that quickly went out of use, while other early keyboards were designed more with the intention of facilitating fast, efficient typing. In David’s account, QWERTY’s triumph over its initial rivals resulted largely from the happenstance that typing schools and manuals offered instruction in eight-finger “touch” typing first for QWERTY. The availability of trained typists encouraged office managers to buy QWERTY machines, which in turn gave further encouragement to budding typists to learn QWERTY. These positive feedbacks increased QWERTY’s market share until it was established as the de facto standard keyboard.

Furthermore, according to David, similar positive feedbacks have kept typewriter users “locked in” to QWERTY, so that new, superior keyboards could gain no more than a small foothold in the market. In particular the Dvorak Simplified Keyboard, introduced during the 1930s, has been locked out of the market despite experiments showing its superior ergonomic efficiency. David concluded that our choice of a keyboard even today is governed by history, not by what would be ergonomically and economically optimal apart from history.

Liebowitz and Margolis (1990) directed much of their counterargument to the alleged superiority of the Dvorak keyboard. They showed, indeed, that claims David cited for the dramatic superiority of the Dvorak keyboard were based on dubious experiments. The experiments that Liebowitz and Margolis prefer support the conclusion that it could never be profitable to retrain typists from QWERTY to the Dvorak keyboard. Moreover, Liebowitz and Margolis cited ergonomic studies that conclude that the Dvorak keyboard offers at most only a two to six percent efficiency advantage over QWERTY.

Liebowitz and Margolis did not address David’s proposed mechanism for the original triumph of QWERTY. Instead, they argued against the claims of some popular accounts that QWERTY owes its success largely to the demonstration effect of winning a single early typing contest. Liebowitz and Margolis showed that other, well-known typing contests were won by non-QWERTY typists, and so they cast doubt on the impact of a single historical accident. This, however, did not address the argument that David made about that one typing contest. David’s argument was that the contest’s modest impact consisted largely in vindicating the effectiveness of eight-finger touch-typing, which was being taught at the time only for QWERTY.

Although Liebowitz and Margolis never addressed David’s claims about the role of third-party typing instruction, they did argue that suppliers had opportunities to offer training in conjunction with selling typewriters to new offices, so that non-QWERTY keyboards would not have been disadvantaged. They did not, however, present evidence that suppliers actually offered such training during the early years of touch-typing, the time when QWERTY became dominant. Whether the early history of QWERTY was path dependent thus seems to depend largely on the unaddressed question of how much typing instruction was offered directly by suppliers, as Liebowitz and Margolis suggest could have happened, and how much was offered by third parties using QWERTY, as David showed did happen.

Liebowitz and Margolis showed that early typewriter manufacturers competed vigorously in the features of their machines. They inferred, therefore, that the reason that typewriter suppliers increasingly supported and promoted QWERTY must have been that it offered a competitive advantage as the most effective system available. This reasoning is plausible, but it was not supported by direct evidence. The alternative, path-dependent explanation would be that QWERTY’s competitive advantage in winning new customers consisted largely in its lead in trained typists and market share. That is, positive feedbacks would have affected the decisions of customers and, thus, also suppliers. David presented some evidence for this, although, in light of the issues raised by Liebowitz and Margolis, this evidence might now appear less than conclusive.

Liebowitz and Margolis highlighted the following lines from David’s article: “… competition in the absence of perfect futures markets drove the industry prematurely into de facto standardization on the wrong system — and that is where decentralized decision-making subsequently has sufficed to hold it” (emphasis original in David’s article). In Liebowitz and Margolis’s view, the focus here on decentralized decision-making constitutes a claim for market failure and third-degree path dependence, and they treat this as the central claim of David’s article. In the view of the present author, this interpretation is mistaken. David’s claim here plays only a minor role in his argument — indeed it is less than one sentence. Moreover, it is not clear that David’s comment about decentralized decision-making amounts to anything more than a reference to the high transactions costs that would be entailed in organizing a coordinated movement to an alternative outcome — a point that Liebowitz and Margolis themselves have argued in other (non-QWERTY) contexts. (A coordinated change would be necessary because few typists would wish to learn a non-QWERTY system unless they could be sure of conveniently finding a compatible keyboard wherever they go.) David may have wished to suggest that centralized decision-making (by government?) would have greatly reduced these transactions costs, but David made no explicit claim that such a remedy would be feasible. If David had wished to make market failure or remediable inefficiency the central focus of his claims for path dependence, then he surely could and would have done so in a more explicit and forceful manner.

Part of what remains of the case of QWERTY is modest support for David’s central claim that history has mattered, leaving us with a standard keyboard that is less efficient than alternatives available today — not as inefficient as the claims David cited, but still somewhat so. Donald Norman, one of the world’s leading authorities on ergonomics, estimates on the basis of several recent studies that QWERTY is about 10 percent less efficient than the Dvorak keyboard and other alternatives (Norman, 1990, and recent personal correspondence).

For Liebowitz and Margolis, it was most important to show that the costs of switching to an alternative keyboard would outweigh any benefits, so that there is no market failure in remaining with the QWERTY standard. This claim appears to stand. David had made no explicit claim for market failure, but Liebowitz and Margolis — as well, indeed, as some supporters of David’s account — took that as the main issue at stake in David’s argument.

Britain’s “Silly Little Bobtailed” Coal Wagons

A strong example of inefficiency in path dependence is offered by the small coal wagons that persisted in British railway traffic until the mid-twentieth century. Already in 1915, economist Thorstein Veblen cited these “silly little bobtailed carriages” as an example of how industrial modernization may be inhibited by “the restraining dead hand of … past achievement,” that is, the historical legacy of interrelated physical infrastructure: “the terminal facilities, tracks, shunting facilities, and all the ways and means of handling freight on this oldest and most complete of railway systems” (Veblen, 1915, pp. 125-8). Veblen’s analysis was the starting point for the literature on technical and institutional interrelatedness that formed the background to David’s early views on path dependence.

In recent years Van Vleck (1997, 1999) has defended the efficiency of Britain’s small coal wagons, arguing that they offered “a crude just-in-time approach to inventory” for coal users while economizing on the substantial costs of road haulage that would have been necessary for small deliveries if railway coal wagons were larger. More recently, however, Scott (1999, 2001) presented evidence that few coal users benefited from small deliveries. Rather, he showed, the wagons’ small size, widely dispersed ownership and control, antiquated braking and lubrication systems, and generally poor physical condition made them quite inefficient indeed. Replacing these cars and associated infrastructure with modern, larger wagons owned and controlled by the railways would have offered savings in railway operating costs of about 56 percent and a social rate of return of about 24 percent. Nevertheless, the small wagons were not replaced until both railways and collieries were nationalized after World War II. The reason, according to Scott, lay partly in the regulatory system that allocated certain rights to collieries and other car owners at the expense of the railways, and partly in the massive coordination problem that arose because railways would not have realized much savings in costs until a large proportion of antiquated cars were replaced. Together, these factors lowered the railways’ realizable private rate of return below profitable levels. (Van Vleck’s smaller estimates for potential efficiency gains from scrapping the small wagons were largely the result of assuming that there would be no change in the regulatory system or in the ownership and control of wagons. Scott argued that such changes added greatly to the potential cost savings.)

Scott noted that the persistence of small wagons was path dependent, because both the technology embodied in the small wagons and the institutions that supported fragmented ownership long outlasted the earlier, transitory conditions to which they were a rational response. Ownership of wagons by the collieries had been advantageous to railways as well as collieries in the mid-nineteenth century, and government regulation had assigned rights in a way designed to protect the interests of wagon owners from opportunistic behavior by the railways. By the early twentieth century, these regulatory institutions imposed a heavy burden on the railways, because they required either conveyance even of antiquated wagons for set rates or else payment of high levels of compensation to the wagon owners. The requirement for compensation helped to raise the railways’ private costs of scrapping the small wagons above the social costs of doing so.

The case shows the relevance of Paul David’s approach to path dependence, with its discussion of technical (and institutional) interrelatedness and quasi-irreversible investment, above and beyond Brian Arthur’s more narrow focus on increasing returns.

The case also supports Liebowitz and Margolis’s insight that an inferior path-dependent outcome can only persist where transactions costs (and other costs) prevent remediation, but it undercuts those authors’ skepticism toward the possibility of market failure. The high transactions costs that would have been entailed in scrapping Britain’s small wagons indeed outweighed the potential gains, but these costs were high only due to the institutions of property rights that supported fragmented ownership. When these institutions were later changed, a remedy to Britain’s coal-wagon problem followed quickly. Thus, the failure to scrap the small wagons earlier can be ascribed to institutional and market failure.

The case thus appears to satisfy Liebowitz and Margolis’s criterion for “third-degree” path dependence. This is not completely clear, however. Whether Britain’s coal-wagon problem qualifies for that status depends on whether the benefits of solving the problem would have been worth the cost of implementing the necessary institutional changes, a question that Scott did not address. Liebowitz and Margolis argue that an inferior outcome cannot be considered a result of market failure, or even meaningfully inefficient, unless this criterion of remediability is satisfied.

In the present author’s view, Liebowitz and Margolis’s criterion has some usefulness in the context of considering government policy toward inferior outcomes, which is Liebowitz and Margolis’s chief concern, but the criterion is much less useful for a more general analysis of these outcomes. If Britain’s coal-wagon problem does not qualify for “third-degree” status, then that suggests that Liebowitz and Margolis’s dismissive approach toward cases they relegate to “second-degree” status is misplaced. The case seems to show that path dependence can have substantial effects on the economy, that the outcomes of path-dependent processes can vary substantially from the predictions of standard economic models, that these outcomes can exhibit substantial inefficiency of a sort discussed by proponents of path dependence, and that all this can happen despite the exercise of foresight and forward-looking behavior.

Railway Track Gauges

The case of railway track gauge illustrates how “accidental” or “contingent” events and transitory circumstances can affect choice of technique and economic efficiency over a period now approaching two centuries (Puffert 2000, 2002). The gauge now used on over half the world’s railways, 4 feet 8.5 inches (4’8.5″, 1435 mm), comes from the primitive mining tramway where George Stephenson gained his early experience. Stephenson transferred this gauge to the Liverpool and Manchester Railway, opened in 1830, which served as the model of best practice for many of the earliest modern railways in Britain, continental Europe, and North America. Many railway engineers today view this gauge as narrower than optimal. Yet, although they would choose a broader gauge today if the choice were open, they do not view potential gains in operating efficiency as worth the costs of conversion.

A much greater source of inefficiency has been the emergence of diversity in gauge. Six gauges came into widespread use in North America by the 1870s, and Britain’s extensive Great Western Railway system maintained a variant gauge for over half a century until 1892. Even today, Australia and Argentina each have three different regional-standard gauges, while India, Chile, and several other countries each make extensive use of two gauges. Breaks of gauge also persist at the border of France and Spain and most external borders of the former Russian and Soviet empires. This diversity adds costs and impairs service in interregional and international traffic. Where diversity has been resolved, conversion costs have sometimes been substantial.

This diversity arose as a result of several contributing factors: limited foresight, the search for an improved railway technology, transitory circumstances, and contingent events or “historical accidents.” Many early railway builders sought simply to serve local or regional transportation needs, and they did not foresee the later importance of railways in interregional traffic. Beginning in the late 1830s, locomotive builders found their ability to construct more powerful, easily maintained engines constrained by the Stephenson gauge, while some civil engineers thought that a broader gauge would offer improved capacity, speed, and passenger comfort. This led to a wave of adoption of broad gauges for new regions in Europe, the Americas, South Asia, and Australia. Changes in locomotive design soon eliminated much of the advantage of broad gauges, and by the 1860s it became possible to take advantage of the ability of narrow gauges to make sharper curves, following the contours of rugged landscape and reducing the need for costly bridges, embankments, cuttings, and tunnels. This, together with the beliefs of some engineers and promoters that narrow gauges would offer savings in operating costs, led to a wave of introductions of narrow gauges to new regions.

At every point of time there was some variation in engineering opinion and practice, so that which gauge was introduced to each new region often depended on the contingent circumstances of who decided the gauge. To cite only the most fateful example, Stephenson’s rivals for the contract to build the Liverpool and Manchester Railway proposed to adopt the gauge of 5’6″ (1676 mm). If that team had been employed, or if Stephenson had gained his earlier experience on almost any other mining tramway, then the ensuing worldwide history of railway gauge would have been different — perhaps far different.

After the introduction of particular gauges to new regions, later railways nearly always adopted the gauge of established connecting lines, reinforcing early contingent choices with positive feedbacks. As different local common-gauge regions expanded, regions that happened to have the same gauge merged into one another, but breaks of gauge emerged between regions of differing gauge. The extent of diversity that emerged at the national and continental levels, and thus the relative efficiency of the outcome, thus depended on earlier contingent events.

Once these patterns of diversity had been established by a path-dependent process, they were partly rationalized by the sort of forward-looking, profit-seeking behavior proposed by Liebowitz and Margolis. In North America, for example, a continental standard emerged quickly after demand for interregional transport grew, and standardization was facilitated both by the formation of interregional railway systems and by cooperation among independent railways. Elsewhere as well, much of the most inefficient diversity was resolved relatively quickly. Nonetheless, a costly diversity has persisted in places where variant-gauge regions had grown large and costly to convert before the value of conversion became apparent. Spain’s variant gauge has become more costly in recent years as the country’s economy has been integrated into that of the European Union, but estimated costs of (U.S.) $5 billion have precluded conversion. India and Australia have only recently made substantial progress toward the resolution of their century-old diversity.

Wherever gauge diversity has been resolved, it is one of the earliest gauges that has emerged as the standard. In no significant part of the world has current practice in gauge broken free of its early history. The inefficiency that has resulted, relative to what other sequences of events might have produced, was not the result of market failure. Rather, it resulted primarily from the natural inability of railway builders to foresee how railway networks and traffic patterns would develop and how technology would evolve.

The case also illustrates the usefulness of Arthur’s (1989) modeling approach for cases of unsponsored techniques and limited foresight (Puffert 2000, 2002). These were essentially the conditions Arthur assumed in proposing his model.

Videocassette Recording Systems

Markets for technical systems exhibiting network externalities (where users benefit from using the same system as other users) often tend to give rise to de facto standards — one system used by all. Foreseeing this, suppliers sometimes join to offer a common system standard from the outset, precluding any possibility for path-dependent competition. Examples include first-generation compact discs (CDs and CD-ROMs) and second-generation DVDs.

In the case of consumer videocassette recorders (VCRs), however, Sony with its Betamax system and JVC with its VHS system were unable to agree on a common set of technical specifications. This gave rise to a celebrated battle between the systems lasting from the mid-1970s to the mid-1980s. Arthur (1990) used this competition as the basis for a thought experiment to illustrate path dependence. He explained the triumph of VHS as the result of positive feedbacks in the video film rental market, as video rental stores stocked more film titles for the system with the larger user base, while new adopters chose the system for which they could rent more videos. He also suggested tentatively that, if the common perception that Betamax offered superior picture quality is true, then “the market’s choice” was not the best possible outcome.

In a closer look at the case, Cusumano et al. (1992) showed that Arthur’s suggested positive-feedback mechanism was real, and that this mechanism explains why Sony eventually withdrew Betamax from the market rather than continuing to offer it as an alternative system. However, they also showed that the video rental market emerged only at a late stage in the competition, after VHS already had a strong lead in market share. Thus, Arthur’s mechanism does not explain how the initial symmetry in competitors’ positions was broken.

Cusumano et al. argued, nonetheless, that the earlier competition already had a path-dependent market-share dynamic. They presented evidence that suppliers and distributors of VCRs increasingly chose to support VHS rather than Betamax because they saw other market participants doing so, leading them to believe that VHS would win the competition and emerge as a de facto standard. The authors did not make clear, however, why market participants believed that a single system would become so dominant. (In a private communication, coauthor Richard Rosenbloom said that this was largely because they foresaw the later emergence of a market for prerecorded videos.)

The authors argued that three early differences in promoters’ strategies gave VHS its initial lead. First, Sony proceeded without major co-sponsors for its Betamax system, while JVC shared VHS with several major competitors. Second, the VHS consortium quickly installed a large manufacturing capacity. Third, Sony opted for a more compact videocassette, while JVC chose instead a longer playing time for VHS. In the event, a longer playing time proved more important to many consumers and distributors, at least during early years of the competition when Sony cassettes could not accommodate a full (U.S.) football game.

This interpretation shows how purposeful, forward-looking behavior interacted with positive feedbacks in producing the final outcome. The different strategies, made under conditions of limited foresight, were contingent decisions that set competition among the firms on one path rather than another (Puffert 2003). Furthermore, the early inability of Sony cassettes to accommodate a football game was a transitory circumstance that may have affected outcomes long afterward.

Liebowitz and Margolis’s (1995) initial interpretation of the case responded only to Arthur’s brief discussion. They argued that the playing-time advantage for VHS was the crucial factor in the competition, so that VHS won because its features most closely matched consumer demand — and not due to path dependence. Although their discussion covers part of the same ground as that of Cusumano et al., Liebowitz and Margolis did not respond to the earlier article’s argument that the purposeful behavior of suppliers interacted with positive feedbacks. Rather, they treated this purposeful behavior as the antithesis of the mechanistic, non-purposeful evolution of market share that they see as the ultimate basis of path dependence.

Liebowitz and Margolis also presented substantial evidence that Betamax was not, in fact, a superior system for the consumer market. The primary concern of their argument was to refute a suggested case of path-dependent lock-in to an inferior technique, and in this they succeeded. It is arguable that they overstated their case, however, in asserting that what they refuted amounted to a claim for “third-degree” path dependence. Arthur had not argued that the selection of VHS, if inferior to Betamax, would have been remediable.

Recently, Liebowitz (2002) did respond to Cusumano et al. He argued, in part, that the larger VHS tape size offered a permanent rather than transitory advantage, as this size facilitated higher tape speeds and thus better picture quality for any given total playing time.

A Brief Discussion of Further Cases

Pest Control

Cowan and Gunby (1996) showed that there is path dependence in farmers’ choices between systems of chemical pest control and integrated pest management (IPM). IPM relies in part on predatory insects to devour harmful ones, and the drift of chemical pesticides from neighboring fields often makes the use of IPM impossible. Predatory insects also drift among fields, further raising farmers’ incentives to use the same techniques as neighbors. To be practical, IPM must be used on the whole set of farms that are in proximity to each other. Where this set is large, the transactions costs of persuading all farmers to forego chemical methods often prevent adoption. In addition to these localized positive feedbacks, local learning effects also make the choice between systems path dependent. The path-dependent local lock-in of each technique has sometimes been upset by such developments as invasions by new pests and the emergence of resistance to pesticides.

Nuclear Power Reactors

Cowan (1990) argued that transitory circumstances led to the establishment of the dominant “light-water” design for civilian nuclear power reactors. This design, adapted from power plants for nuclear submarines, was rushed into use during the Cold War because the political value of demonstrating peaceful uses for nuclear technology overrode the value of finding the most efficient technique. Thereafter, according to Cowan, learning effects arising from engineering experience for the light-water design continued to make it the rational choice for new reactors. He argued, however, that there are fundamental scientific and engineering reasons for believing that an equivalent degree of development of alternative designs might have made them superior.

Information Technology

Although Shapiro and Varian (1998) did not emphasize the term path dependence, they pointed to a broad range of research documenting positive feedbacks that affect competition in contemporary information technology. Like Morris and Ferguson (1993), they showed how competing firms recognize and seek to take advantage of these positive feedbacks. Strictly speaking, not all of these cases are path dependent, because in some cases firms have been able to control the direction and outcome of the allocation processes. In other cases, however, the allocation process has had its own path-dependent dynamic, affected both by the attempts of rival firms to promote their products and by factors that are unforeseen or out of their control.

Among the cases that Shapiro and Varian discuss are some involving Microsoft. In addition, some proponents of the importance of path dependence have argued that positive feedbacks favor Microsoft’s competitive position in ways that hinder competitors from developing and introducing innovative products (see, for example, Reback et al., 1995). Liebowitz and Margolis (2000), by contrast, offered evidence of cases where superior computer software products have had no trouble winning markets. Liebowitz and Margolis also argued that the lack of demonstrated empirical examples of “third-degree” path dependence creates a strong presumption against the existence of an inferior outcome that government antitrust measures could remedy.

Path Dependence at Larger Levels

Geography and Trade

The examples thus far all treat path dependence in the selection of alternative products or techniques. Krugman (1991, 1994) and Arthur (1994) have also pointed to a role for contingent events and positive feedbacks in economic geography, including in the establishment of Silicon Valley and other concentrations of economic activity. Some of these locations, they showed, are the result not of systematic advantages but rather of accidental origins reinforced by “agglomeration” economies that lead new firms to locate in the vicinity of similar established firms. Krugman (1994) also discussed how these same effects produce path dependence in patterns of international trade. Geographic patterns of economic activity, some of which arise as a result of contingent historical events, determine the patterns of comparative advantage that in turn determine patterns of trade.

Institutional Development

Path dependence also arises in the development of institutions — a term that economists use to refer to the “rules of the game” for an economy. Eichengreen (1996) showed, for example, that the emergence of international monetary systems, such as the classical gold standard of the late nineteenth century, was path dependent. This path dependence has been based on the benefits to different countries of adopting a common monetary system. Eichengreen noted that these benefits take the form of network externalities. Puffert (2003) has argued that path dependence in institutions is likely to be similar to path dependence in technology, as both are based on the value of adopting a common practice — some technique or rule — that becomes costly to change.

Thus path dependence can affect not only individual features of the economy but also larger patterns of economic activity and development. Indeed, some teachers of economic history interpret major regional and national patterns of industrialization and growth as partly the result of contingent events reinforced by positive feedbacks — that is, as path dependent. Some suggest, as well, that the institutions responsible for economic development in some parts of the world and those responsible for backwardness in others are, at least in part, path dependent. In the coming years we may expect these ideas to be included in a growing literature on path dependence.

Conclusion

Path dependence arises, ultimately, because there are increasing returns to the adoption of some technique or other practice and because there are costs in changing from an established practice to a different one. As a result, many current features of the economy are based on what appeared optimal or profit-maximizing at some point in the past, rather than on what might be preferred on the basis of current general conditions.

The theory of path dependence is not an alternative to neoclassical economics but rather a supplement to it. The theory of path dependence assumes, generally, that people optimize on the basis of their own interests and the information at their disposal, but it highlights ways that earlier choices put constraints on later ones, channeling the sequence of economic outcomes along one possible path rather than another. This theory offers reason to believe that some — or perhaps many — economic processes have multiple possible paths of outcomes, rather than a unique equilibrium (or unique path of equilibria). Thus the selection among outcomes may depend on nonsystematic or “contingent” choices or events. Empirical case studies offer examples of how such choices or events have led to the establishment, and “lock in,” of particular techniques, institutions, and other features of the economy that we observe today — although other outcomes would have been possible. Thus, the analysis of path dependence adds to what economists know on the basis of more established forms of neoclassical analysis.

It is not possible at this time to assess the overall importance of path dependence, either in determining individual features of the economy or in determining larger patterns of economic activity. Research has only partly sorted out the concrete conditions of technology, interactions among agents, foresight, and markets and other institutions that make allocation path dependent in some cases but not in others (Puffert 2003; see also David 1997, 1999, 2000 for recent refinements on theoretical conditions for path dependence).

Addendum: Technical Notes on Definitions

Path dependence, as economists use the term, corresponds closely to what mathematicians call non-ergodicity (David 2000). A non-ergodic stochastic process is one that, as it develops, undergoes a change in the limiting distribution of future states, that is, in the probabilities of different outcomes in the distant future. This is somewhat different from what mathematicians call path dependence. In mathematics, a stochastic process is called path dependent, as opposed to state dependent, if the probabilities of transition to alternative states depend not simply on the current state of the system but, additionally, on previous states.

Furthermore, the term path dependence is applied to economic processes in which small variations in early events can lead to large or discrete variations in later outcomes, but generally not to processes in which small variations in events lead only to small and continuous variations in outcomes. That is, the term is used for cases where positive feedbacks magnify the impact of early events, not for cases where negative feedbacks diminish this impact over time.

The term path dependence can also be used for cases in which the impact of early events persists without appreciably increasing or decreasing over time. The most important examples would be instances where transitory conditions have large, persistent impacts.
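To make these definitions concrete, the following minimal sketch (an illustrative Polya-urn simulation in Python, not a model taken from the literature discussed above) shows a simple non-ergodic process: every run follows the same rule, yet early random draws permanently shift the long-run outcome, so different histories settle on different limiting shares.

import random

def polya_urn_run(steps=10000, seed=None):
    # Start with one ball of each color; each period, draw a ball at random
    # and return it along with one more of the same color. The probability of
    # drawing a color rises with past draws of that color -- a positive feedback.
    rng = random.Random(seed)
    a, b = 1, 1
    for _ in range(steps):
        if rng.random() < a / (a + b):
            a += 1
        else:
            b += 1
    return a / (a + b)  # long-run share of color A

# Different runs (histories) converge to different long-run shares, illustrating
# non-ergodicity: the limiting distribution of outcomes depends on the path taken.
for seed in range(5):
    print(f"run {seed}: long-run share of A = {polya_urn_run(seed=seed):.3f}")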

References

Arthur, W. Brian. 1989. “Competing Technologies, Increasing Returns, and Lock-in by Historical Events.” Economic Journal 99: 116‑31.

Arthur, W. Brian. 1990. “Positive Feedbacks in the Economy.” Scientific American 262 (February): 92-99.

Arthur, W. Brian. 1994. Increasing Returns and Path Dependence in the Economy. Ann Arbor: University of Michigan Press.

Cowan, Robin. 1990. “Nuclear Power Reactors: A Study in Technological Lock-in.” Journal of Economic History 50: 541-67.

Cowan, Robin, and Philip Gunby. 1996. “Sprayed to Death: Path Dependence, Lock-in and Pest Control Strategies.” Economic Journal 106: 521-42.

Cusumano, Michael A., Yiorgos Mylonadis, and Richard S. Rosenbloom. 1992. “Strategic Maneuvering and Mass-Market Dynamics: The Triumph of VHS over Beta.” Business History Review 66: 51-94.

David, Paul A. 1975. Technical Choice, Innovation and Economic Growth: Essays on American and British Experience in the Nineteenth Century. Cambridge: Cambridge University Press.

David, Paul A. 1985. “Clio and the Economics of QWERTY.” American Economic Review (Papers and Proceedings) 75: 332-37.

David, Paul A. 1986. “Understanding the Economics of QWERTY: The Necessity of History.” In W.N. Parker, ed., Economic History and the Modern Economist. Oxford: Oxford University Press.

David, Paul A. 1987. “Some New Standards for the Economics of Standardization in the Information Age.” In P. Dasgupta and P. Stoneman, eds., Economic Policy and Technological Performance. Cambridge, England: Cambridge University Press.

David, Paul A. 1997. “Path Dependence and the Quest for Historical Economics: One More Chorus of the Ballad of QWERTY.” University of Oxford Discussion Papers in Economic and Social History, Number 20. http://www.nuff.ox.ac.uk/economics/history/paper20/david3.pdf

David, Paul A. 1999. “At Last, a Remedy for Chronic QWERTY-Skepticism!” Working paper, All Souls College, Oxford University. http://www.eh.net/Clio/Publications/remedy.shtml

David, Paul A. 2000. “Path Dependence, Its Critics and the Quest for ‘Historical Economics’.” Working paper, All Souls College, Oxford University.
http://www-econ.stanford.edu/faculty/workp/swp00011.html

Eichengreen, Barry. 1996. Globalizing Capital: A History of the International Monetary System. Princeton: Princeton University Press.

Frankel, M. 1955. “Obsolescence and Technological Change in a Maturing Economy.” American Economic Review 45: 296-319.

Katz, Michael L., and Carl Shapiro. 1985. “Network Externalities, Competition, and Compatibility.” American Economic Review 75: 424-40.

Katz, Michael L., and Carl Shapiro. 1994. “Systems Competition and Network Effects.” Journal of Economic Perspectives 8: 93-115.

Kindleberger, Charles P. 1964. Economic Growth in France and Britain, 1851-1950. Cambridge, MA: Harvard University Press.

Krugman, Paul. 1991. “Increasing Returns and Economic Geography.” Journal of Political Economy 99: 483-99.

Krugman, Paul. 1994. Peddling Prosperity. New York: W.W. Norton.

Liebowitz, S.J. 2002. Rethinking the Network Economy. New York: AMACOM.

Liebowitz, S.J., and Stephen E. Margolis. 1990. “The Fable of the Keys.” Journal of Law and Economics 33: 1-25.

Liebowitz, S.J., and Stephen E. Margolis. 1995. “Path Dependence, Lock-In, and History.” Journal of Law, Economics, and Organization 11: 204-26. http://wwwpub.utdallas.edu/~liebowit/paths.html

Liebowitz, S.J., and Stephen E. Margolis. 2000. Winners, Losers, and Microsoft. Oakland: The Independent Institute.

Morris, Charles R., and Charles H. Ferguson. 1993. “How Architecture Wins Technology Wars.” Harvard Business Review (March-April): 86-96.

Norman, Donald A. 1990. The Design of Everyday Things. New York: Doubleday. (Originally published in 1988 as The Psychology of Everyday Things.)

Puffert, Douglas J. 2000. “The Standardization of Track Gauge on North American Railways, 1830-1890.” Journal of Economic History 60: 933-60.

Puffert, Douglas J. 2002. “Path Dependence in Spatial Networks: The Standardization of Railway Track Gauge.” Explorations in Economic History 39: 282-314.

Puffert, Douglas J. 2003 forthcoming. “Path Dependence, Network Form, and Technological Change.” In W. Sundstrom, T. Guinnane, and W. Whatley, eds., History Matters: Essays on Economic Growth, Technology, and Demographic Change. Stanford: Stanford University Press. http://www.vwl.uni-muenchen.de/ls_komlos/nettech1.pdf

Reback, Gary, Susan Creighton, David Killam, and Neil Nathanson. 1995. “Technological, Economic and Legal Perspectives Regarding Microsoft’s Business Strategy in Light of the Proposed Acquisition of Intuit, Inc.” (“Microsoft White Paper”). White paper, law firm of Wilson, Sonsini, Goodrich & Rosati. http://www.antitrust.org/cases/microsoft/whitep.html

Scott, Peter. 1999. “The Efficiency of Britain’s ‘Silly Little Bobtailed’ Coal Wagons: A Comment on Van Vleck.” Journal of Economic History 59: 1072-80.

Scott, Peter. 2001. “Path Dependence and Britain’s ‘Coal Wagon Problem’.” Explorations in Economic History 38: 366-85.

Shapiro, Carl, and Hal R. Varian. 1998. Information Rules. Cambridge, MA: Harvard Business School Press.

Van Vleck, Va Nee L. 1997. “Delivering Coal by Road and Rail in Britain: The Efficiency of the ‘Silly Little Bobtailed’ Coal Wagons.” Journal of Economic History 57: 139-160.

Van Vleck, Va Nee L. 1999. “In Defense (Again) of ‘Silly Little Bobtailed’ Coal Wagons: Reply to Peter Scott.” Journal of Economic History 59: 1081-84.

Veblen, Thorstein. 1915. Imperial Germany and the Industrial Revolution. London: Macmillan.

Citation: Puffert, Douglas. “Path Dependence”. EH.Net Encyclopedia, edited by Robert Whaples. February 10, 2008. URL http://eh.net/encyclopedia/path-dependence/

Labor Unions in the United States

Gerald Friedman, University of Massachusetts at Amherst

Unions and Collective Action

In capitalist labor markets, which developed in the nineteenth century in the United States and Western Europe, workers exchange their time and effort for wages. But even while laboring under the supervision of others, wage earners have never been slaves, because they have recourse against abuse. They can quit to seek better employment. Or they are free to join with others to take collective action, forming political movements or labor unions. By the end of the nineteenth century, labor unions and labor-oriented political parties had become major forces influencing wages and working conditions. This article explores the nature and development of labor unions in the United States. It reviews the growth and recent decline of the American labor movement and makes comparisons with the experience of foreign labor unions to clarify particular aspects of the American experience.

Unions and the Free-Rider Problem

Quitting, exit, is straightforward, a simple act for individuals unhappy with their employment. By contrast, collective action, such as forming a labor union, is always difficult because it requires that individuals commit themselves to produce “public goods” enjoyed by all, including those who “free ride” rather than contribute to the group effort. If the union succeeds, free riders receive the same benefits as do activists; but if it fails, the activists suffer while those who remained outside lose nothing. Because individualist logic leads workers to “free ride,” unions cannot grow by appealing to individual self-interest (Hirschman, 1970; 1982; Olson, 1966; Gamson, 1975).

Union Growth Comes in Spurts

Free riding is a problem for all collective movements, including Rotary Clubs, the Red Cross, and the Audubon Society. But unionization is especially difficult because unions must attract members against the opposition of often-hostile employers. Workers who support unions sacrifice money and risk their jobs, even their lives. Success comes only when large numbers simultaneously follow a different rationality. Unions must persuade whole groups to abandon individualism to throw themselves into the collective project. Rarely have unions grown incrementally, gradually adding members. Instead, workers have joined unions en masse in periods of great excitement, attracted by what the French sociologist Emile Durkheim labeled “collective effervescence” or the joy of participating in a common project without regard for individual interest. Growth has come in spurts, short periods of social upheaval punctuated by major demonstrations and strikes when large numbers see their fellow workers publicly demonstrating a shared commitment to the collective project. Union growth, therefore, is concentrated in short periods of dramatic social upheaval; in the thirteen countries listed in Tables 1 and 2, 67 percent of growth comes in only five years, and over 90 percent in only ten years. As Table 3 shows, in these thirteen countries, unions grew by over 10 percent a year in years with the greatest strike activity but by less than 1 percent a year in the years with the fewest strikers (Friedman, 1999; Shorter and Tilly, 1974; Zolberg, 1972).

Table 1
Union Members per 100 Nonagricultural Workers, 1880-1985: Selected Countries

Year Canada US Austria Denmark France Italy Germany Netherlands Norway Sweden UK Australia Japan
1880 n.a. 1.8 n.a. n.a. n.a. n.a. n.a. n.a. n.a. n.a. n.a. n.a. n.a.
1900 4.6 7.5 n.a. 20.8 5.0 n.a. 7.0 n.a. 3.4 4.8 12.7 n.a. n.a.
1914 8.6 10.5 n.a. 25.1 8.1 n.a. 16.9 17.0 13.6 9.9 23.0 32.8 n.a.
1928 11.6 9.9 41.7 39.7 8.0 n.a. 32.5 26.0 17.4 32.0 25.6 46.2 n.a.
1939 10.9 20.7 n.a. 51.8 22.4 n.a. n.a. 32.5 57.0 53.6 31.6 39.2 n.a.
1947 24.6 31.4 64.6 55.9 40.0 n.a. 29.1 40.4 55.1 64.6 44.5 52.9 45.3
1950 26.3 28.4 62.3 58.1 30.2 49.0 33.1 43.0 58.4 67.7 44.1 56.0 46.2
1960 28.3 30.4 63.4 64.4 20.0 29.6 37.1 41.8 61.5 73.0 44.2 54.5 32.2
1975 35.6 26.4 58.5 66.6 21.4 50.1 38.2 39.1 60.5 87.2 51.0 54.7 34.4
1985 33.7 18.9 57.8 82.2 14.5 51.0 39.3 28.6 65.3 103.0 44.2 51.5 28.9

Note: This table shows the unionization rate, the share of nonagricultural workers belonging to unions, in different countries in different years, 1880-1985. Because union membership often includes unemployed and retired union members it may exceed the number of employed workers, giving a unionization rate of greater than 100 percent.

Table 2
Union Growth in Peak and Other Years

Country Years Growth (Top 5 Years) Growth (Top 10 Years) Growth (All Years) Share of Growth, 5 Years (%) Share of Growth, 10 Years (%) Excess Growth, 5 Years (%) Excess Growth, 10 Years (%)
Australia 83 720 000 1 230 000 3 125 000 23.0 39.4 17.0 27.3
Austria 52 5 411 000 6 545 000 3 074 000 176.0 212.9 166.8 194.4
Canada 108 855 000 1 532 000 4 028 000 21.2 38.0 16.6 28.8
Denmark 85 521 000 795 000 1 883 000 27.7 42.2 21.8 30.5
France 92 6 605 000 7 557 000 2 872 000 230.0 263.1 224.5 252.3
Germany 82 10 849 000 13 543 000 9 120 000 119.0 148.5 112.9 136.3
Italy 38 3 028 000 4 671 000 3 713 000 81.6 125.8 68.4 99.5
Japan 43 4 757 000 6 692 000 8 983 000 53.0 74.5 41.3 51.2
Netherlands 71 671 000 1 009 000 1 158 000 57.9 87.1 50.9 73.0
Norway 85 304 000 525 000 1 177 000 25.8 44.6 19.9 32.8
Sweden 99 633 000 1 036 000 3 859 000 16.4 26.8 11.4 16.7
UK 96 4 929 000 8 011 000 8 662 000 56.9 92.5 51.7 82.1
US 109 10 247 000 14 796 000 22 293 000 46.0 66.4 41.4 57.2
Total 1043 49 530 000 67 942 000 73 947 000 67.0 91.9 60.7 79.4

Note: This table shows that most union growth comes in a few years. Union membership growth (net of membership losses) has been calculated for each country for each year. Years were then sorted for each country according to membership growth. This table reports growth for each country for the five and the ten years with the fastest growth and compares this with total growth over all years for which data are available. Excess growth has been calculated as the difference between the share of growth in the top five or ten years and the share that would have come in these periods if growth had been distributed evenly across all years.

Note that years of rapid growth are not necessarily contiguous. There can be more growth in years of rapid growth than over the entire period, because some of that growth is temporary: years of rapid growth are often followed by years of decline.

Sources: Bain and Price (1980): 39, Visser (1989)
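As a rough illustration of the calculation described in the note to Table 2, the short Python sketch below computes the share of total net growth occurring in the fastest-growth years and the corresponding excess growth. The annual membership changes shown are hypothetical stand-ins, not the Bain and Price (1980) or Visser (1989) series.

def growth_concentration(annual_growth, top_n=5):
    # Share of total net membership growth occurring in the top_n fastest-growth
    # years, and the excess over an even spread of growth across all years.
    total = sum(annual_growth)
    top = sum(sorted(annual_growth, reverse=True)[:top_n])
    share = 100 * top / total
    even_share = 100 * top_n / len(annual_growth)
    return share, share - even_share

# Hypothetical example: twenty years of net membership changes (thousands)
changes = [5, -2, 80, 3, 1, -4, 60, 2, 0, 1, 40, -3, 2, 1, 0, 2, -1, 3, 1, 2]
share, excess = growth_concentration(changes, top_n=5)
print(f"top-5-year share: {share:.1f}%, excess growth: {excess:.1f} percentage points")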

Table 3
Impact of Strike Activity on Union Growth
Average Union Membership Growth in Years Sorted by Proportion of Workers Striking

Country Striker Rate Quartile Change
Lowest Second Third Highest
Australia 5.1 2.5 4.5 2.7 -2.4
Austria 0.5 -1.9 0.4 2.4 1.9
Canada 1.3 1.9 2.3 15.8 14.5
Denmark 0.3 1.1 3.0 11.3 11.0
France 0.0 2.1 5.6 17.0 17.0
Germany -0.2 0.4 1.3 20.3 20.5
Italy -2.2 -0.3 2.3 5.8 8.0
Japan -0.2 5.1 3.0 4.3 4.5
Netherlands -0.9 1.2 3.5 6.3 7.2
Norway 1.9 4.3 8.6 10.3 8.4
Sweden 2.5 3.2 5.9 16.9 14.4
UK 1.7 1.7 1.9 3.4 1.7
US -0.5 0.6 2.1 19.9 20.4
Total: Average 0.72 1.68 3.42 10.49 9.78

Note: This table shows that, except in Australia, unions grew fastest in years with large numbers of strikers. The proportion of workers striking was calculated for each country for each year as the number of strikers divided by the nonagricultural labor force. Years were then sorted into quartiles, each including one-fourth of the years, according to this striker rate statistic. The average annual union membership growth rate was then calculated for each quartile as the mean of the growth rate in each year in the quartile.
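The quartile calculation described in the note can be sketched as follows; the yearly striker rates and growth rates below are hypothetical stand-ins for the series underlying Table 3.

def growth_by_striker_quartile(striker_rates, growth_rates):
    # Sort years by striker rate (strikers / nonagricultural labor force),
    # split them into quartiles, and return mean membership growth per quartile.
    order = sorted(range(len(striker_rates)), key=lambda i: striker_rates[i])
    q = len(order) // 4
    quartiles = [order[:q], order[q:2*q], order[2*q:3*q], order[3*q:]]
    return [sum(growth_rates[i] for i in qs) / len(qs) for qs in quartiles]

# Hypothetical example: eight years of striker rates (%) and membership growth (%)
striker = [0.2, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]
growth = [0.5, 1.0, 1.5, 2.0, 3.0, 5.0, 9.0, 15.0]
lowest, second, third, highest = growth_by_striker_quartile(striker, growth)
print(f"quartile means: {lowest}, {second}, {third}, {highest}; change: {highest - lowest}")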

Rapid Union Growth Provokes a Hostile Reaction

These periods of rapid union growth end because social upheaval provokes a hostile reaction. Union growth leads employers to organize, to discover their own collective interests. Emulating their workers, they join together to discharge union activists, to support each other in strikes, and to demand government action against unions. This rising opposition ends periods of rapid union growth, beginning a new phase of decline followed by longer periods of stagnant membership. The weakest unions formed during the union surge succumb to the post-boom reaction; but if enough unions survive they leave a movement larger and broader than before.

Early Labor Unions, Democrats and Socialists

Guilds

Before modern labor unions, guilds united artisans and their employees. Craftsmen did the work of early industry, “masters” working beside “journeymen” and apprentices in small workplaces. Throughout the cities and towns of medieval Europe, guilds regulated production by setting minimum prices and quality, and capping wages, employment, and output. Controlled by independent craftsmen, “masters” who employed journeymen and trained apprentices, guilds regulated industry to protect the comfort and status of the masters. Apprentices and journeymen benefited from guild restrictions only when they advanced to master status.

Guild power was gradually undermined in the early-modern period. Employing workers outside the guild system, including rural workers and semiskilled workers in large urban workplaces, merchants transformed medieval industry. By the early 1800s, few workers could expect to advance to master artisan status or to own their own establishment. Instead, facing the prospect of a lifetime of wage labor punctuated by periods of unemployment, some wage earners began to seek a collective regulation of their individual employment (Thompson, 1966; Scott, 1974; Dawley, 1976; Sewell, 1980; Wilentz, 1984; Blewett, 1988).

The labor movement within the broader movement for democracy

This new wage-labor regime led to the modern labor movement. Organizing propertyless workers who were laboring for capitalists, organized labor formed one wing of a broader democratic movement struggling for equality and for the rights of commoners (Friedman, 1998). Within the broader democratic movement for legal and political equality, labor fought the rise of a new aristocracy that controlled the machinery of modern industry just as the old aristocracy had monopolized land. Seen in this light, the fundamental idea of the labor movement, that employees should have a voice in the management of industry, is comparable to the demand that citizens should have a voice in the management of public affairs. Democratic values do not, by any means, guarantee that unions will be fair and evenhanded to all workers. In the United States, by reserving good jobs for their members, unions of white men sometimes contributed to the exploitation of women and nonwhites. Democracy only means that exploitation will be carried out at the behest of a political majority rather than at the say of an individual capitalist (Roediger, 1991; Arnesen, 2001; Foner, 1974; 1979; Milkman, 1985).

Craft unions’ strategy

Workers formed unions to voice their interests against their employers, and also against other workers. Rejecting broad alliances along class lines, alliances uniting workers on the basis of their lack of property and their common relationship with capitalists, craft unions followed a narrow strategy, uniting workers with the same skill against both the capitalists and against workers in different trades. By using their monopoly of knowledge of the work process to restrict access to the trade, craft unions could command a strong bargaining position, enhanced by alliances with other craftsmen to finance long strikes. A narrow craft strategy was followed by the first successful unions throughout Europe and America, especially in small urban shops using technologies that still depended on traditional specialized skills, including printers, furniture makers, carpenters, gold beaters and jewelry makers, iron molders, engineers, machinists, and plumbers. Craft unions’ characteristic action was the small, local strike, the concerted withdrawal of labor by a few workers critical to production. Typically, craft unions would present a set of demands to local employers on a “take-it-or-leave-it” basis; either the employer accepted their demands or fought a contest of strength to determine whether the employer could do without the skilled workers for longer than the workers could manage without their jobs.

The craft strategy offered little to the great masses of workers. Because it depended on restricting access to trades, it could be applied neither by common laborers, who were untrained, nor by semi-skilled employees in modern mass-production establishments, whose employers trained them on the job. Shunned by craft unions, most women and African-Americans in the United States were crowded into nonunion occupations. Some sought employment as strikebreakers in occupations otherwise monopolized by craft unions controlled by white, native-born males (Washington, 1913; Whatley, 1993).

Unions among unskilled workers

To form unions, the unskilled needed a strategy of the weak that would utilize their numbers rather than specialized knowledge and accumulated savings. Inclusive unions have succeeded but only when they attract allies among politicians, state officials, and the affluent public. Sponsoring unions and protecting them from employer repression, allies can allow organization among workers without specialized skills. When successful, inclusive unions can grow quickly in mass mobilization of common laborers. This happened, for example, in Germany at the beginning of the Weimar Republic, during the French Popular Front of 1936-37, and in the United States during the New Deal of the 1930s. These were times when state support rewarded inclusive unions for organizing the unskilled. The bill for mass mobilization usually came later. Each boom was followed by a reaction against the extensive promises of the inclusive labor movement when employers and conservative politicians worked to put labor’s genie back in the bottle.

Solidarity and the Trade Unions

Unionized occupations of the late 1800s

By the late nineteenth century, trade unions had gained a powerful position in several skilled occupations in the United States and elsewhere. Outside of mining, craft unions were formed among well-paid skilled craft workers — workers whom historian Eric Hobsbawm labeled the “labor aristocracy” (Hobsbawm, 1964; Geary, 1981). In 1892, for example, nearly two-thirds of British coal miners were union members, as were a third of machinists, millwrights and metal workers, cobblers and shoe makers, glass workers, printers, mule spinners, and construction workers (Bain and Price, 1980). French miners had formed relatively strong unions, as had skilled workers in the railroad operating crafts, printers, jewelry makers, cigar makers, and furniture workers (Friedman, 1998). Cigar makers, printers, furniture workers, and some construction and metal craftsmen took the lead in early German unions (Kocka, 1986). In the United States, there were about 160,000 union members in 1880, including 120,000 belonging to craft unions, among them carpenters, engineers, furniture makers, stone-cutters, iron puddlers and rollers, printers, and several railroad crafts. Another 40,000 belonged to “industrial” unions organized without regard for trade. About half of these were coal miners; most of the rest belonged to the Knights of Labor (KOL) (Friedman, 1999).

The Knights of Labor

In Europe, these craft organizations were to be the basis of larger, mass unions uniting workers without regard for trade or, in some cases, industry (Ansell, 2001). This process began in the United States in the 1880s when craft workers in the Knights of Labor reached out to organize more broadly. Formed by skilled male, native-born garment cutters in 1869, the Knights of Labor would seem an odd candidate to mobilize the mass of unskilled workers. But from a few Philadelphia craft workers, the Knights grew to become a national and even international movement. Membership reached 20,000 in 1881 and grew to 100,000 in 1885. Then, in 1886, when successful strikes on some western railroads attracted a mass of previously unorganized unskilled workers, the KOL grew to a peak membership of a million workers. For a brief time, the Knights of Labor was a general movement of the American working class (Ware, 1929; Voss, 1993).

The KOL became a mass movement with an ideology and program that united workers without regard for occupation, industry, race or gender (Hattam, 1993). Never espousing Marxist or socialist doctrines, the Knights advanced an indigenous form of popular American radicalism, a “republicanism” that would overcome social problems by extending democracy to the workplace. Valuing citizens according to their work, their productive labor, the Knights were true heirs of earlier bourgeois radicals. Open to all producers, including farmers and other employers, they excluded only those seen to be parasitic on the labor of producers — liquor dealers, gamblers, bankers, stock manipulators and lawyers. Welcoming all others without regard for race, gender, or skill, the KOL was the first American labor union to attract significant numbers of women, African-Americans, and the unskilled (Foner, 1974; 1979; Rachleff, 1984).

The KOL’s strategy

In practice, most KOL local assemblies acted like craft unions. They bargained with employers, conducted boycotts, and called members out on strike to demand higher wages and better working conditions. But unlike craft unions that depended on the bargaining leverage of a few strategically positioned workers, the KOL’s tactics reflected its inclusive and democratic vision. Without a craft union’s resources or control over labor supply, the Knights sought to win labor disputes by widening them to involve political authorities and the outside public able to pressure employers to make concessions. Activists hoped that politicizing strikes would favor the KOL because its large membership would tempt ambitious politicians while its members’ poverty drew public sympathy.

In Europe, a strategy like that of the KOL succeeded in promoting the organization of inclusive unions. But it failed in the United States. Comparing the strike strategies of trade unions and the Knights provides insight into the survival and eventual success of the trade unions and their confederation, the American Federation of Labor (AFL), in late-nineteenth-century America. Seeking to transform industrial relations, local assemblies of the KOL struck frequently with large but short strikes involving skilled and unskilled workers. The Knights’ industrial leverage depended on political and social influence. It could succeed where trade unions would not go because the KOL strategy utilized numbers, the one advantage held by common laborers. But this strategy could succeed only where political authorities and the outside public might sympathize with labor. Later industrial and regional unions tried the same strategy, conducting short but large strikes. By demonstrating sufficient numbers and commitment, French and Italian unions, for example, would win from state officials concessions they could not force from recalcitrant employers (Shorter and Tilly, 1974; Friedman, 1998). But compared with the small strikes conducted by craft unions, “solidarity” strikes must walk a fine line, aggressive enough to draw attention but not so threatening as to provoke a hostile reaction from threatened authorities. Such a reaction doomed the KOL.

The Knights’ collapse in 1886

In 1886, the Knights became embroiled in a national general strike demanding an eight-hour workday, the world’s first May Day. This led directly to the collapse of the KOL. The May Day strike wave in 1886 and the bombing at Haymarket Square in Chicago provoked a “red scare” of historic proportions, driving membership down to half a million in September 1887. Police in Chicago, for example, broke up union meetings, seized union records, and even banned the color red from advertisements. The KOL responded politically, sponsoring a wave of independent labor parties in the elections of 1886 and supporting the Populist Party in 1890 (Fink, 1983). But even relatively strong showings by these independent political movements could not halt the KOL’s decline. By 1890, its membership had fallen by half again, and it fell to under 50,000 members by 1897.

Unions and radical political movements in Europe in the late 1800s

The KOL spread outside the United States, attracting an energetic following in Canada, the United Kingdom, France, and other countries. Industrial and regional unionism fared better in these countries than in the United States. Most German unionists belonged to industrial unions allied with the Social Democratic Party. Under Marxist leadership, unions and party formed a centralized labor movement to maximize labor’s political leverage. English union membership was divided between members of a stable core of craft unions and a growing membership in industrial and regional unions based in mining, cotton textiles, and transportation. Allied with political radicals, these industrial and regional unions formed the backbone of the Labour Party, which held the balance of power in British politics after 1906.

The most radical unions were found in France. By the early 1890s, revolutionary syndicalists controlled the national union center, the Confédération générale du travail (or CGT), which they tried to use as a base for a revolutionary general strike where the workers would seize economic and political power. Consolidating craft unions into industrial and regional unions, the Bourses du travail, syndicalists conducted large strikes designed to demonstrate labor’s solidarity. Paradoxically, the syndicalists’ large strikes were effective because they provoked friendly government mediation. In the United States, state intervention was fatal for labor because government and employers usually united to crush labor radicalism. But in France, officials were more concerned to maintain a center-left coalition with organized labor against reactionary employers opposed to the Third Republic. State intervention helped French unionists to win concessions beyond any they could win with economic leverage. A radical strategy of inclusive industrial and regional unionism could succeed in France because the political leadership of the early Third Republic needed labor’s support against powerful economic and social groups who would replace the Republic with an authoritarian regime. Reminded daily of the importance of republican values and the coalition that sustained the Republic, French state officials promoted collective bargaining and labor unions. Ironically, it was the support of liberal state officials that allowed French union radicalism to succeed, and allowed French unions to grow faster than American unions and to organize the semi-skilled workers in the large establishments of France’s modern industries (Friedman, 1997; 1998).

The AFL and American Exceptionalism

By 1914, unions outside the United States had found that broad organization reduced the availability of strike breakers, advanced labor’s political goals, and could lead to state intervention on behalf of the unions. The United States was becoming exceptional, the only advanced capitalist country without a strong, united labor movement. The collapse of the Knights of Labor cleared the way for the AFL. Formed in 1881 as the Federation of Organized Trades and Labor Unions, the AFL was organized to uphold the narrow interests of craft workers against the general interests of common laborers in the KOL. In practice, AFL craft unions were little labor monopolies, able to win concessions because of their control over uncommon skills and because their narrow strategy did not frighten state officials. Many early AFL leaders, notably the AFL’s founding president Samuel Gompers and P. J. McGuire of the Carpenters, had been active in radical political movements. But after 1886, they learned to reject political involvements for fear that radicalism might antagonize state officials or employers and provoke repression.

AFL successes in the early twentieth-century

Entering the twentieth century, the AFL appeared to have a winning strategy. Union membership rose sharply in the late 1890s, doubling between 1896 and 1900 and again between 1900 and 1904. Fewer than 5 percent of nonagricultural wage earners belonged to labor unions in 1895, but this share rose to 7 percent in 1900 and 13 percent in 1904, including over 21 percent of industrial wage earners (workers outside of commerce, government, and the professions). Half of coal miners in 1904 belonged to an industrial union (the United Mine Workers of America), but otherwise, most union members belonged to craft organizations, including nearly half the printers, and a third of cigar makers, construction workers and transportation workers. As shown in Table 4, other pockets of union strength included skilled workers in the metal trades, leather, and apparel. These craft unions had demonstrated their economic power, raising wages by around 15 percent and reducing hours worked (Friedman, 1991; Mullin, 1993).

Table 4
Unionization rates by industry in the United States, 1880-2000

Industry 1880 1910 1930 1953 1974 1983 2000
Agriculture Forestry Fishing 0.0 0.1 0.4 0.6 4.0 4.8 2.1
Mining 11.2 37.7 19.8 64.7 34.7 21.1 10.9
Construction 2.8 25.2 29.8 83.8 38.0 28.0 18.3
Manufacturing 3.4 10.3 7.3 42.4 37.2 27.9 14.8
Transportation Communication Utilities 3.7 20.0 18.3 82.5 49.8 46.4 24.0
Private Services 0.1 3.3 1.8 9.5 8.6 8.7 4.8
Public Employment 0.3 4.0 9.6 11.3 38.0 31.1 37.5
All Private 1.7 8.7 7.0 31.9 22.4 18.4 10.9
All 1.7 8.5 7.1 29.6 24.8 20.4 14.1

Note: This table shows the unionization rate, the share of workers belonging to unions, in different industries in the United States, 1880-2000.

Sources: 1880 and 1910: Friedman (1999): 83; 1930: Union membership from Wolman (1936); employment from United States, Bureau of the Census (1932); 1953: Troy (1957); 1974, 1986, 2000: United States, Current Population Survey.

Limits to the craft strategy

Even at this peak, the craft strategy had clear limits. Craft unions succeeded only in a declining part of American industry among workers still performing traditional tasks where training was through apprenticeship programs controlled by the workers themselves. By contrast, there were few unions in the rapidly growing industries employing semi-skilled workers. Nor was the AFL able to overcome racial divisions and state opposition to organize in the South (Friedman, 2000; Letwin, 1998). Compared with the KOL in the early 1880s, or with France’s revolutionary syndicalist unions, American unions were weak in steel, textiles, chemicals, paper and metal fabrication using technologies without traditional craft skills. AFL strongholds included construction, printing, cigar rolling, apparel cutting and pressing, and custom metal engineering, trades that employed craft workers in relatively small establishments little changed from 25 years earlier (see Table 4).

Dependent on skilled craftsmen’s economic leverage, the AFL was poorly organized to battle large, technologically dynamic corporations. For a brief time, the revolutionary Industrial Workers of the World (IWW), formed in 1905, organized semi-skilled workers in some mass production industries. But by 1914, it too had failed. It was state support that forced powerful French employers to accept unions. Without such assistance, no union strategy could force large American employers to accept unions.

Unions in the World War I Era

The AFL and World War I

For all its limits, the AFL and its craft affiliates survived while their rivals flared and died. The AFL formed a solid union movement among skilled craftsmen that, with favorable circumstances, could form the core of a broader union movement like what developed in Europe after 1900. During World War I, the Wilson administration endorsed unionization and collective bargaining in exchange for union support for the war effort. AFL affiliates used state support to organize mass-production workers in shipbuilding, metal fabrication, meatpacking, and steel, doubling union membership between 1915 and 1919. But when federal support ended with the war, employers mobilized to crush the nascent unions. The post-war union collapse has been attributed to the AFL’s failings. The larger truth is that American unions needed state support to overcome the entrenched power of capital. The AFL did not fail because of its deficient economic strategy; it failed because it had an ineffective political strategy (Friedman, 1998; Frank, 1994; Montgomery, 1987).

International effects of World War I

War gave labor extraordinary opportunities. Combatant governments rewarded pro-war labor leaders with positions in the expanded state bureaucracy and support for collective bargaining and unions. Union growth also reflected economic conditions when wartime labor shortages strengthened the bargaining position of workers and unions. Unions grew rapidly during and immediately after the war. British unions, for example, doubled their membership between 1914 and 1920, to enroll eight million workers, almost half the nonagricultural labor force (Bain and Price, 1980; Visser, 1989). Union membership tripled in Germany and Sweden, doubled in Canada, Denmark, the Netherlands, and Norway, and almost doubled in the United States (see Table 5 and Table 1). For twelve countries, membership grew by 121 percent between 1913 and 1920, including 119 percent growth in seven combatant countries and 160 percent growth in five neutral states.

Table 5
Impact of World War I on Union Membership Growth
Membership Growth in Wartime and After

12 Countries 7 Combatants 5 Neutrals
War-Time 1913 12 498 000 11 742 000 756 000
1920 27 649 000 25 687 000 1 962 000
Growth 1913-20: 121% 119% 160%
Post-war 1920 27 649 000
1929 18 149 000
Growth 1920-29: -34%

Shift toward the revolutionary left

Even before the war, frustration with the slow pace of social reform had led to a shift towards the revolutionary socialist and syndicalist left in Germany, the United Kingdom, and the United States (Nolan, 1981; Montgomery, 1987). In Europe, frustrations with rising prices, declining real wages and working conditions, and anger at catastrophic war losses fanned the flames of discontent into a raging conflagration. Compared with pre-war levels, the number of strikers rose ten or even twenty times after the war: France had 2.5 million strikers in 1919 and 1920, up from 200,000 in 1913; Germany had 13 million, up from 300,000 in 1913; and the United States had 5 million, up from under 1 million in 1913. British Prime Minister Lloyd George warned in March 1919 that “The whole of Europe is filled with the spirit of revolution. There is a deep sense not only of discontent, but of anger and revolt among the workmen . . . The whole existing order in its political, social and economic aspects is questioned by the masses of the population from one end of Europe to the other” (quoted in Cronin, 1983: 22).

Impact of Communists

Inspired by the success of the Bolshevik revolution in Russia, revolutionary Communist Parties were organized throughout the world to promote revolution by organizing labor unions, strikes, and political protest. Communism was a mixed blessing for labor. The Communists included some of labor’s most dedicated activists and organizers who contributed greatly to union organization. But Communist help came at a high price. Secretive, domineering, intolerant of opposition, the Communists divided unions between their dwindling allies and a growing collection of outraged opponents. Moreover, they galvanized opposition, depriving labor of needed allies among state officials and the liberal bourgeoisie.

The “Lean Years”: Welfare Capitalism and the Open Shop

Aftermath of World War I

As with most great surges in union membership, the postwar boom was self-limiting. Helped by a sharp postwar economic contraction, employers and state officials ruthlessly drove back the radical threat, purging their workforces of known union activists and easily absorbing futile strikes during a period of rising unemployment. Such campaigns drove membership down by a third, from a peak of 26 million members in eleven countries in 1920 to fewer than 18 million in 1924. In Austria, France, Germany, and the United States, labor unrest contributed to the election of conservative governments; in Hungary, Italy, and Poland it led to the installation of anti-democratic dictatorships that ruthlessly crushed labor unions. Economic stagnation, state repression, and anti-union campaigns by employers prevented any union resurgence through the rest of the 1920s. By 1929, unions in these eleven countries had added only 30,000 members, one-fifth of one percent.

Injunctions and welfare capitalism

The 1920s was an especially dark period for organized labor in the United States, where weaknesses visible before World War I became critical failures. Labor’s opponents used fear of Communism to foment a post-war red scare that targeted union activists for police and vigilante violence. Hundreds of foreign-born activists were deported, and mobs led by the American Legion and the Ku Klux Klan broke up union meetings and destroyed union offices (see, for example, Frank, 1994: 104-5). Judges added law to the campaign against unions. Ignoring the intent of the Clayton Anti-Trust Act (1914), they used anti-trust law and injunctions against unions, forbidding activists from picketing or publicizing disputes, holding signs, or even enrolling new union members. Employers competed for their workers’ allegiance, offering paternalist welfare programs and systems of employee representation as substitutes for independent unions. They sought to build a nonunion industrial relations system around welfare capitalism (Cohen, 1990).

Stagnation and decline

After the promises of the war years, the defeat of postwar union drives in mass production industries like steel and meatpacking inaugurated a decade of union stagnation and decline. Membership fell by a third between 1920 and 1924. Unions survived only in the older trades where employment was usually declining. By 1924, they were almost completely eliminated from the dynamic industries of the second industrial revolution, including steel, automobiles, consumer electronics, chemicals, and rubber manufacture.

New Deals for Labor

Great Depression

The nonunion industrial relations system of the 1920s might have endured and produced a docile working class organized in company unions (Brody, 1985). But the welfare capitalism of the 1920s collapsed when the Great Depression of the 1930s exposed its weaknesses and undermined political support for the nonunion, open shop. Between 1929 and 1933, real national income in the United States fell by one third, nonagricultural employment fell by a quarter, and unemployment rose from under 2 million in 1929 to 13 million in 1933, a quarter of the civilian labor force. Economic decline was nearly as great elsewhere, raising unemployment to over 15 percent in Austria, Canada, Germany, and the United Kingdom (Maddison, 1991: 260-61). Only the Soviet Union, with its authoritarian political economy, was largely spared the scourge of unemployment and economic collapse — a point emphasized by Communists throughout the 1930s and later. Depression discredited the nonunion industrial relations system by forcing welfare capitalists to renege on promises to stabilize employment and to maintain wages. Then, by ignoring protests from members of employee representation plans, welfare capitalists further exposed the fundamental weakness of their system. Lacking any independent support, paternalist promises had no standing but depended entirely on the variable good will of employers. And sometimes that was not enough (Cohen, 1990).

Depression-era political shifts

Voters, too, lost confidence in employers. The Great Depression discredited the old political economy. Even before Franklin Roosevelt’s election as President of the United States in 1932, American states enacted legislation restricting the rights of creditors and landlords, restraining the use of the injunction in labor disputes, and providing expanded relief for the unemployed (Ely, 1998; Friedman, 2001). European voters abandoned centrist parties, embracing extremists of both left and right, Communists and Fascists. In Germany, the Nazis won, but Popular Front governments uniting Communists and socialists with bourgeois liberals assumed power in other countries, including Sweden, France and Spain. (The Spanish Popular Front was overthrown by a Fascist rebellion that installed a dictatorship led by Francisco Franco.) Throughout there was an impulse to take public control over the economy because free market capitalism and orthodox finance had led to disaster (Temin, 1990).

Economic depression lowers union membership when unemployed workers drop their membership and employers use their stronger bargaining position to defeat union drives (Bain and Elsheikh, 1976). Indeed, union membership fell with the onset of the Great Depression but, contradicting the usual pattern, membership rebounded sharply after 1932 despite high unemployment, rising by over 76 percent in ten countries by 1938 (see Table 6 and Table 1). The fastest growth came in countries with openly pro-union governments. In France, where the Socialist Léon Blum led a Popular Front government, and in the United States, during Franklin Roosevelt’s New Deal, membership rose by 160 percent between 1933 and 1938. But membership grew by 33 percent in eight other countries even without openly pro-labor governments.

Table 6
Impact of the Great Depression and World War II on Union Membership Growth

11 Countries (no Germany) 10 Countries (no Austria)
Depression 1929 12 401 000 11 508 000
1933 11 455 000 10 802 000
Growth 1929-33 -7.6% -6.1%
Popular Front Period 1933 10 802 000
1938 19 007 000
Growth 1933-38 76.0%
Second World War 1938 19 007 000
1947 35 485 000
Growth 1938-47 86.7%

French unions and the Matignon agreements

French union membership rose from under 900,000 in 1935 to over 4,500,000 in 1937. The Popular Front’s victory in the elections of June 1936 precipitated a massive strike wave and the occupation of factories and workplaces throughout France. Remembered in movie, song and legend, the factory occupations were a nearly spontaneous uprising of French workers that brought France’s economy to a halt. Contemporaries were struck by the extraordinarily cheerful feelings that prevailed, the “holiday feeling” and sense that the strikes were a new sort of non-violent revolution that would overturn hierarchy and replace capitalist authoritarianism with true social democracy (Phillippe and Dubief, 1993: 307-8). After Blum assumed office, he brokered the Matignon agreements, named after the premier’s official residence in Paris. Union leaders and heads of France’s leading employer associations agreed to end the strikes and occupations in exchange for wage increases of around 15 percent, a 40-hour workweek, annual vacations, and union recognition. Codified in statute by the Popular Front government, the agreements gave French unions new rights and protections from employer repression. Only then did workers flock into unions. In a few weeks, French unions gained four million members with the fastest growth in the new industries of the second industrial revolution. Unions in metal fabrication and chemicals grew by 1,450 percent and 4,000 percent respectively (Magraw, 1992: 2, 287-88).

French union leader Léon Jouhaux hailed the Matignon agreements as “the greatest victory of the workers’ movement.” They included lasting gains, among them annual vacations and shorter workweeks. But Simone Weil described the strikers of May 1936 as “soldiers on leave,” and they soon returned to work. Regrouping, employers discharged union activists and attacked the precarious unity of the Popular Front government. Fighting an uphill battle against renewed employer resistance, the Popular Front government fell before it could build a new system of cooperative industrial relations. Contained, French unions were unable to maintain their momentum towards industrial democracy. Membership fell by a third between 1937 and 1939.

The National Industrial Recovery Act

A different union paradigm developed in the United States. Rather than treating unions as vehicles for a democratic revolution, the New Deal sought to integrate organized labor into a reformed capitalism that recognized capitalist hierarchy in the workplace, using unions chiefly to promote macroeconomic stabilization by raising wages and consumer spending (Brinkley, 1995). Included as part of a program for economic recovery was section 7(a) of the National Industrial Recovery Act (NIRA), giving “employees . . . the right to organize and bargain collectively through representatives of their own choosing . . . free from the interference, restraint, or coercion of employers.” AFL leader William Green pronounced this a “charter of industrial freedom,” and workers rushed into unions in a wave unmatched since the Knights of Labor in 1886. As with the KOL, the greatest increase came among the unskilled. Coal miners, southern textile workers, northern apparel workers, Ohio tire makers, Detroit automobile workers, and aluminum, lumber and sawmill workers all rushed into unions. For the first time in fifty years, American unions gained a foothold in mass production industries.

AFL’s lack of enthusiasm

Promises of state support brought common laborers into unions. But once there, the new unionists received little help from aging AFL leaders. Fearing that the new unionists’ impetuous zeal and militant radicalism would provoke repression, AFL leaders tried to scatter the new members among contending craft unions with archaic craft jurisdictions. The new unionists were swept up in the excitement of unity and collective action but a half-century of experience had taught the AFL’s leadership to fear such enthusiasms.

The AFL dampened the union boom of 1933-34 but, again, the larger problem was not the AFL’s flawed tactics but its lack of political leverage. Doing little to enforce the promises of Section 7(a), the Federal government left employers free to ignore the law. Some flatly prohibited union organization; others formally honored the law but established anemic employee representation plans while refusing to deal with independent unions (Irons, 2000). By 1935 almost as many industrial establishments had employer-dominated employee representation plans (27 percent) as had unions (30 percent). The greatest number had no labor organization at all (43 percent).

Birth of the CIO

Implacable management resistance and divided leadership killed the early New Deal union surge. It died even before the NIRA was ruled unconstitutional in 1935. Failure provoked rebellion within the AFL. Led by John L. Lewis of the United Mine Workers, eight national unions launched a campaign for industrial organization as the Committee for Industrial Organization. After Lewis punched Carpenters’ Union leader William L. Hutcheson on the floor of the AFL convention in 1935, the Committee became the independent Congress of Industrial Organizations (CIO). Including many Communist activists, CIO organizing committees fanned out to organize workers in steel, automobiles, retail trade, journalism and other industries. Building effectively on local rank-and-file militancy, including sitdown strikes in automobiles, rubber, and other industries, the CIO quickly won contracts from some of the strongest bastions of the open shop, including United States Steel and General Motors (Zieger, 1995).

The Wagner Act

Creative strategy and energetic organizing helped. But the CIO owed its lasting success to state support. After the failure of the NIRA, New Dealers sought another way to strengthen labor as a force for economic stimulus. This led to the enactment in 1935 of the National Labor Relations Act, also known as the “Wagner Act.” The Wagner Act established a National Labor Relations Board charged with enforcing employees’ “right to self-organization, to form, join, or assist labor organizations, to bargain collectively through representatives of their own choosing, and to engage in concerted activities for the purpose of collective bargaining or other mutual aid or protection.” It provided for elections to choose union representation and required employers to negotiate “in good faith” with their workers’ chosen representatives. By shifting labor conflict from strikes to elections and protecting activists from dismissal for their union work, the Act lowered the cost to individual workers of supporting collective action. It also put the Federal government’s imprimatur on union organization.

Crucial role of rank-and-file militants and state government support

Appointed by President Roosevelt, the first NLRB was openly pro-union, viewing the Act’s preamble as a mandate to promote organization. By 1945 the Board had supervised 24,000 union elections involving some 6,000,000 workers, leading to the unionization of nearly 5,000,000 workers. Still, the NLRB was not responsible for the period’s union boom. The Wagner Act played no direct role in the early CIO years because it was ignored for two years, until its constitutionality was established by the Supreme Court in National Labor Relations Board v. Jones & Laughlin Steel Corporation (1937). Furthermore, the election procedure’s gross contribution of 5,000,000 members was less than half of the period’s net union growth of 11,000,000 members. More important than the Wagner Act were crucial union victories over prominent open shop employers in cities like Akron, Ohio, and Flint, Michigan, and among Philadelphia-area metal workers. Dedicated rank-and-file militants and effective union leadership were crucial in these victories. Equally important was the support of pro-New Deal local and state governments. The Roosevelt landslides of 1934 and 1936 brought to office liberal Democratic governors and mayors who gave crucial support to the early CIO. Placing the right to collective bargaining above private property rights, liberal governors and other elected officials in Michigan, Ohio, Pennsylvania and elsewhere refused to send police to evict sit-down strikers who had seized control of factories. This state support allowed the minority of workers who actively supported unionization to use force to overcome the passivity of the majority of workers and the opposition of employers. The Open Shop of the 1920s was not abandoned; it was overwhelmed by an aggressive, government-backed labor movement (Gall, 1999; Harris, 2000).

World War II

Federal support for union organization was also crucial during World War II. Again, war helped unions both by eliminating unemployment and because state officials supported unions to win labor’s backing for the war effort. Established to minimize labor disputes that might disrupt war production, the National War Labor Board instituted a labor truce in which unions exchanged a no-strike pledge for employer recognition. During the war, employers conceded union security and “maintenance of membership” rules requiring workers to pay their union dues. Acquiescing to government demands, employers accepted the institutionalization of the American labor movement, guaranteeing unions a steady flow of dues to fund an expanded bureaucracy, new benefit programs, and even political action. After growing from 3.5 million to 10.2 million members between 1935 and 1941, unions added another 4 million members during the war. “Maintenance of membership” rules prevented free riders even more effectively than had the factory takeovers and violence of the late 1930s. With millions of members and money in the bank, labor leaders like Sidney Hillman and Philip Murray had the ear of business leaders and official Washington. Large, established, and respected, American labor had made it, becoming part of a reformed capitalism committed to both property and prosperity.

Even more than the First World War, World War II promoted unions and social change. A European civil war, it divided the continent not only between warring countries but also within countries, between those, usually on the political right, who favored fascism over liberal parliamentary government and those who defended democracy. Before the war, left and right contended over the appeasement of Nazi Germany and Fascist Italy; during the war, many businesses and conservative politicians collaborated with the German occupation against a resistance movement dominated by the left. Throughout Europe, victory over Germany was a triumph for labor that led directly to the entry of socialists and Communists into government.

Successes and Failures after World War II

Union membership exploded during and after the war, nearly doubling between 1938 and 1946. By 1947, unions had enrolled a majority of nonagricultural workers in Scandinavia, Australia, and Italy, and over 40 percent in most other European countries (see Table 1). Accumulated depression and wartime grievances sparked a post-war strike wave that included over 6 million strikers in France in 1948, 4 million in Italy in 1949 and 1950, and 5 million in the United States in 1946. In Europe, popular unrest led to a dramatic political shift to the left. The Labour Party government elected in the United Kingdom in 1945 established a new National Health Service and nationalized mining, the railroads, and the Bank of England. A center-left post-war coalition government in France expanded the national pension system and nationalized the Bank of France, Renault, and other companies associated with the wartime Vichy regime. Throughout Europe, the share of national income devoted to social services jumped dramatically, as did the share of income going to the working classes.

European unions and the state after World War II

Unions and the political left emerged from the war stronger everywhere in Europe, but in some countries labor’s position deteriorated quickly. In France, Italy, and Japan, the onset of the Cold War dissolved the popular front uniting Communists, socialists, and bourgeois liberals, and labor’s management opponents recovered state support. In these countries, union membership dropped after 1947 and unions remained on the defensive for over a decade in a largely adversarial industrial relations system. Elsewhere, notably in countries with weak Communist movements, such as Scandinavia but also Austria, Germany, and the Netherlands, labor was able to compel management and state officials to accept strong and centralized labor movements as social partners. In these countries, stable industrial relations allowed cooperation between management and labor to raise productivity and to open new markets for national companies. High union density and centralization allowed Scandinavian and German labor leaders to negotiate incomes policies with governments and employers, restraining wage inflation in exchange for stable employment, investment, and wages linked to productivity growth. Such policies could not be instituted in countries with weaker and less centralized labor movements, including France, Italy, Japan, the United Kingdom and the United States, because their unions had not been accepted as bargaining partners by management and lacked the centralized authority to enforce incomes policies and productivity bargains (Alvarez, Garrett, and Lange, 1992).

Europe since the 1960s

Even where European labor was weakest, in France or Italy in the 1950s, unions were stronger than before World War II. Working with entrenched socialist and labor political parties, European unions were able to maintain high wages, restrictions on managerial autonomy, and social security. The wave of popular unrest in the late 1960s and early 1970s carried most European unions to new heights, briefly bringing membership to over 50 percent of the labor force in the United Kingdom and in Italy, and bringing socialists into government in France, Germany, Italy, and the United Kingdom. Since 1980, union membership has declined somewhat and there has been some retrenchment in the welfare state. But the essentials of European welfare states and labor relations have remained (Western, 1997; Golden and Pontusson, 1992).

Unions begin to decline in the US

It was after World War II that American Exceptionalism became most valid, when the United States emerged as the advanced capitalist democracy with the weakest labor movement. The United States was the only advanced capitalist democracy where unions went into prolonged decline right after World War II. At 35 percent, the unionization rate in 1945 was the highest in American history, but even then it was lower than in most other advanced capitalist economies, and it has been falling since. The post-war strike wave, including three million strikers in 1945 and five million in 1946, was the largest in American history, but it did little to enhance labor’s political position or bargaining leverage. Instead, it provoked a powerful reaction among employers and others suspicious of growing union power. A concerted drive by the CIO to organize the South, “Operation Dixie,” failed dismally in 1946. Unable to overcome private repression, racial divisions, and the pro-employer stance of southern local and state governments, the CIO suffered a defeat that left the South a nonunion, low-wage enclave and a bastion of anti-union politics (Griffith, 1988). Then, in 1946, a conservative Republican majority was elected to Congress, dashing hopes for a renewed, post-war New Deal.

The Taft-Hartley Act and the CIO’s Expulsion of Communists

Quickly, labor’s wartime dreams turned into post-war nightmares. The Republican Congress amended the Wagner Act, enacting the Taft-Hartley Act in 1947 to give employers and state officials new powers against strikers and unions. The law also required union leaders to sign a non-Communist affidavit as a condition for union participation in NLRB-sponsored elections. This loyalty oath divided labor at a time of weakness. With its roots in radical politics and in an alliance of convenience between Lewis and the Communists, the CIO was torn by the new Red Scare. Hoping to appease the political right, the CIO majority in 1949 expelled ten Communist-led unions with nearly a third of the organization’s members. This marked the end of the CIO’s expansive period. Shorn of its left, the CIO lost its most dynamic and energetic organizers and leaders. Worse, the expulsions plunged the CIO into a civil war: non-Communist affiliates raided locals belonging to the “Communist-led” unions, fatally distracting both sides from the CIO’s original mission to organize the unorganized and empower the dispossessed. By breaking with the Communists, the CIO’s leadership signaled that it had accepted its place within a system of capitalist hierarchy. Little reason remained for the CIO to stay independent, and in 1955 it merged with the AFL to form the AFL-CIO.

The Golden Age of American Unions

Without the revolutionary aspirations now associated with the discredited Communists, America’s unions settled down to bargain over wages and working conditions without challenging such managerial prerogatives as decisions about prices, production, and investment. Some labor leaders, notably James Hoffa of the Teamsters but also local leaders in construction and service trades, abandoned higher aspirations altogether, using their unions for personal financial gain. Allying themselves with organized crime, they used violence to maintain their power over employers and over their own rank-and-file membership. Others, including former CIO leaders like Walter Reuther of the United Auto Workers, continued to push the envelope of legitimate bargaining topics, building challenges to capitalist authority at the workplace. But even the UAW was unable to force major managerial prerogatives onto the bargaining table.

The quarter century after 1950 formed a ‘golden age’ for American unions. Established unions found a secure place at the bargaining table with America’s leading firms in industries such as autos, steel, trucking, and chemicals. Periodically negotiated contracts exchanged good wages for cooperative workplace relations. Negotiated rules created a system of civil authority at work, with regulations for promotion and layoffs and procedures giving workers opportunities to voice grievances before neutral arbitrators. Wages rose steadily, by over 2 percent per year, and union workers earned a comfortable 20 percent more than nonunion workers of similar age, experience and education. Wages grew faster in Europe, but American wages were higher and growth was rapid enough to narrow the gap between rich and poor, and between management salaries and worker wages. Unions also won a growing list of benefits: medical and dental insurance, paid holidays and vacations, supplemental unemployment insurance, and pensions. Competition for workers forced many nonunion employers to match the benefit packages won by unions, but unionized employers provided benefits worth over 60 percent more than those given to nonunion workers (Freeman and Medoff, 1984; Hirsch and Addison, 1986).

Impact of decentralized bargaining in the US

In most of Europe, strong labor movements limited the wage and benefit advantages of union membership by forcing governments to extend union gains to all workers in an industry regardless of union status. By compelling nonunion employers to match union gains, this limited the competitive penalty borne by unionized firms. By contrast, decentralized bargaining and weaker unions in the United States created large union wage differentials that put unionized firms at a competitive disadvantage, encouraging them to seek out nonunion labor and localities. A stable and vocal workforce with more experience and training did raise unionized firms’ labor productivity by 15 percent or more above the level of nonunion firms, and some scholars have argued that unionized workers thereby earn much of their wage gain. Others, however, find little productivity gain for unionized workers once account is taken of the greater use of machinery and other nonlabor inputs by unionized firms (compare Freeman and Medoff, 1984 and Hirsch and Addison, 1986). In any case, even unionized firms with higher labor productivity were usually more conscious of the wages and benefits paid to union workers than of unionization’s productivity benefits.

Unions and the Civil Rights Movement

Post-war unions remained politically active. European unions were closely associated with political parties, Communist in France and Italy, socialist or labor elsewhere. In practice, notwithstanding revolutionary pronouncements, even the Communists’ political agenda came to resemble that of unions in the United States: liberal reform, including a commitment to full employment and the redistribution of income towards workers and the poor (Boyle, 1998). Golden-age unions were also at the forefront of campaigns to extend individual rights. The major domestic political issue of the post-war United States, civil rights, was troubling for many unions because of the racially exclusionary practices in their own ranks. Nonetheless, in the 1950s and 1960s the AFL-CIO strongly supported the civil rights movement, funded civil rights organizations, and lobbied in support of civil rights legislation. The AFL-CIO pushed unions to open their ranks to African-American workers, even at the expense of losing affiliates in states like Mississippi. Seizing the opportunity created by the civil rights movement, some unions gained members among nonwhites. The feminist movement of the 1970s created new challenges for the masculine and sometimes misogynist labor movement. But here too the search for members and a desire to remove sources of division eventually brought organized labor to the forefront. The AFL-CIO supported the Equal Rights Amendment and began to promote women to leadership positions.

Shift of unions to the public sector

In no other country have women and members of racial minorities assumed such prominent positions in the labor movement as they have in the United States. The movement of African-Americans and women into leadership positions in the late-twentieth-century labor movement was accelerated by a shift in the membership structure of the American union movement. Maintaining their strength in traditionally masculine occupations in manufacturing, construction, mining, and transportation, European unions remained predominantly male. In the United States, by contrast, union decline in these industries, combined with growth in heavily female public sector employment, led to the feminization of the labor movement. Union membership began to decline in the American private sector immediately after World War II. Between 1953 and 1983, for example, the unionization rate fell from 42 percent to 28 percent in manufacturing, by nearly half in transportation, and by over half in construction and mining (see Table 4). By contrast, after 1960 public sector workers won new opportunities to form unions. Because women and racial minorities form a disproportionate share of these public sector workers, rising union membership there has changed the American labor movement’s racial and gender composition. Women comprised only 19 percent of American union members in the mid-1950s, but their share rose to 40 percent by the late 1990s. By then, the most unionized workers were no longer the white male skilled craftsmen of old. Instead, they were nurses, parole officers, government clerks, and, most of all, school teachers.

Union Collapse and Union Avoidance in the US

Outside the United States, unions grew through the 1970s and, despite some decline since the 1980s, European and Canadian unions remain large and powerful. The United States is different. Union decline since World War II has brought the United States private-sector labor movement down to early twentieth-century levels. As a share of the nonagricultural labor force, union membership fell from its 1945 peak of 35 percent to under 30 percent in the early 1970s. From there, decline became a general rout. In the 1970s, rising unemployment, increasing international competition, and the movement of industry to the nonunion South and to rural areas undermined the bargaining position of many American unions, leaving them vulnerable to a renewed management offensive. Returning to pre-New Deal practices, some employers established new welfare and employee representation programs, hoping to lure workers away from unions (Heckscher, 1987; Jacoby, 1997). Others returned to pre-New Deal repression. By the early 1980s, union avoidance had become an industry. Anti-union consultants and lawyers openly counseled employers on how to use labor law to evade unions. Findings of employers’ unfair labor practices in violation of the Wagner Act tripled in the 1970s; by the 1980s, the NLRB was reinstating over 10,000 workers a year who had been illegally discharged for union activity, nearly one for every twenty who voted for a union in an NLRB election (Weiler, 1983). By the 1990s, the unionization rate in the United States had fallen to under 14 percent, including only 9 percent of private sector workers and 37 percent of those in the public sector. Unions now have minimal impact on wages or working conditions for most American workers.

Nowhere else have unions collapsed as they have in the United States. With a unionization rate dramatically below that of other countries, including Canada, the United States has achieved exceptional status (see Table 7). There remains great interest in unions among American workers, and where employers do not resist, unions thrive. In the public sector, and among those private employers where workers have a free choice to join a union, workers are as likely to organize as they ever were, and as likely as workers anywhere. In the past, as after 1886 and in the 1920s, when American employers broke unions, the unions revived once a government committed to workplace democracy sheltered them from employer repression. If such a government returns, we may yet see another union revival.

Table 7
Union Membership Rates for the United States and Six Other Leading Industrial Economies, 1970 to 1990

1970 1980 1990
U.S.: Unionization Rate: All industries 30.0 24.7 17.6
U.S.: Unionization Rate: Manufacturing 41.0 35.0 22.0
U.S.: Unionization Rate: Financial services 5.0 4.0 2.0
Six Countries: Unionization Rate: All industries 37.1 39.7 35.3
Six Countries: Unionization Rate: Manufacturing 38.8 44.0 35.2
Five Countries: Unionization Rate: Financial services 23.9 23.8 24.0
Ratio: U.S./Six Countries: All industries 0.808 0.622 0.499
Ratio: U.S./Six Countries: Manufacturing 1.058 0.795 0.626
Ratio: U.S./Five Countries: Financial services 0.209 0.168 0.083

Note: The unionization rate reported is the number of union members out of 100 workers in the specified industry. The ratio shown is the unionization rate for the United States divided by the unionization rate for the other countries. The six countries are Canada, France, Germany, Italy, Japan, and the United Kingdom. Data on union membership in financial services in France are not available.

Source: Visser (1991): 110.
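
As the note explains, each ratio is the U.S. unionization rate divided by the corresponding rate for the comparison countries. A minimal sketch, in Python, of that arithmetic for the all-industries rows of Table 7 (the variable names are illustrative only; small differences from the published ratios reflect rounding in the underlying rates):

    # Ratio of the U.S. unionization rate to the six-country rate, all industries,
    # using the rates reported in Table 7.
    us_rate = {1970: 30.0, 1980: 24.7, 1990: 17.6}
    six_country_rate = {1970: 37.1, 1980: 39.7, 1990: 35.3}

    for year in (1970, 1980, 1990):
        ratio = us_rate[year] / six_country_rate[year]
        print(f"{year}: {ratio:.3f}")  # close to the published 0.808, 0.622, 0.499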

References

Alvarez, R. Michael, Geoffrey Garrett and Peter Lange. “Government Partisanship, Labor Organization, and Macroeconomic Performance,” American Political Science Review 85 (1992): 539-556.

Ansell, Christopher K. Schism and Solidarity in Social Movements: The Politics of Labor in the French Third Republic. Cambridge: Cambridge University Press, 2001.

Arnesen, Eric. Brotherhoods of Color: Black Railroad Workers and the Struggle for Equality. Cambridge, MA: Harvard University Press, 2001.

Bain, George S., and Farouk Elsheikh. Union Growth and the Business Cycle: An Econometric Analysis. Oxford: Basil Blackwell, 1976.

Bain, George S. and Robert Price. Profiles of Union Growth: A Comparative Statistical Portrait of Eight Countries. Oxford: Basil Blackwell, 1980.

Bernard, Philippe and Henri Dubief. The Decline of the Third Republic, 1914-1938. Cambridge: Cambridge University Press, 1993.

Blewett, Mary H. Men, Women, and Work: Class, Gender and Protest in the New England Shoe Industry, 1780-1910. Urbana, IL: University of Illinois Press, 1988.

Boyle, Kevin, editor. Organized Labor and American Politics, 1894-1994: The Labor-Liberal Alliance. Albany, NY: State University of New York Press, 1998.

Brinkley, Alan. The End of Reform: New Deal Liberalism in Recession and War. New York: Alfred A. Knopf, 1995.

Brody, David. Workers in Industrial America: Essays on the Twentieth-Century Struggle. New York: Oxford University Press, 1985.

Cazals, Rémy. Avec les ouvriers de Mazamet dans la grève et l’action quotidienne, 1909-1914. Paris: Maspero, 1978.

Cohen, Lizabeth. Making A New Deal: Industrial Workers in Chicago, 1919-1939. Cambridge: Cambridge University Press, 1990.

Cronin, James E. Industrial Conflict in Modern Britain. London: Croom Helm, 1979.

Cronin, James E. “Labor Insurgency and Class Formation.” In Work, Community, and Power: The Experience of Labor in Europe and America, 1900-1925, edited by James E. Cronin and Carmen Sirianni. Philadelphia: Temple University Press, 1983.

Cronin, James E. and Carmen Sirianni, editors. Work, Community, and Power: The Experience of Labor in Europe and America, 1900-1925. Philadelphia: Temple University Press, 1983.

Dawley, Alan. Class and Community: The Industrial Revolution in Lynn. Cambridge, MA: Harvard University Press, 1976.

Ely, James W., Jr. The Guardian of Every Other Right: A Constitutional History of Property Rights. New York: Oxford University Press, 1998.

Fink, Leon. Workingmen’s Democracy: The Knights of Labor and American Politics. Urbana, IL: University of Illinois Press, 1983.

Fink, Leon. “The New Labor History and the Powers of Historical Pessimism: Consensus, Hegemony, and the Case of the Knights of Labor.” Journal of American History 75 (1988): 115-136.

Foner, Philip S. Organized Labor and the Black Worker, 1619-1973. New York: International Publishers, 1974.

Foner, Philip S. Women and the American Labor Movement: From Colonial Times to the Eve of World War I. New York: Free Press, 1979.

Frank, Dana. Purchasing Power: Consumer Organizing, Gender, and the Seattle Labor Movement, 1919- 1929. Cambridge: Cambridge University Press, 1994.

Freeman, Richard and James Medoff. What Do Unions Do? New York: Basic Books, 1984.

Friedman, Gerald. “Dividing Labor: Urban Politics and Big-City Construction in Late-Nineteenth Century America.” In Strategic Factors in Nineteenth-Century American Economic History, edited by Claudia Goldin and Hugh Rockoff, 447-64. Chicago: University of Chicago Press, 1991.

Friedman, Gerald. “Revolutionary Syndicalism and French Labor: The Rebels Behind the Cause.” French Historical Studies 20 (Spring 1997).

Friedman, Gerald. State-Making and Labor Movements: France and the United States 1876-1914. Ithaca, NY: Cornell University Press, 1998.

Friedman, Gerald. “New Estimates of United States Union Membership, 1880-1914.” Historical Methods 32 (Spring 1999): 75-86.

Friedman, Gerald. “The Political Economy of Early Southern Unionism: Race, Politics, and Labor in the South, 1880-1914.” Journal of Economic History 60, no. 2 (2000): 384-413.

Friedman, Gerald. “The Sanctity of Property in American Economic History” (manuscript, University of Massachusetts, July 2001).

Gall, Gilbert. Pursuing Justice: Lee Pressman, the New Deal, and the CIO. Albany, NY: State University of New York Press, 1999.

Gamson, William A. The Strategy of Social Protest. Homewood, IL: Dorsey Press, 1975.

Geary, Richard. European Labour Protest, 1848-1939. New York: St. Martin’s Press, 1981.

Golden, Miriam and Jonas Pontusson, editors. Bargaining for Change: Union Politics in North America and Europe. Ithaca, NY: Cornell University Press, 1992.

Griffith, Barbara S. The Crisis of American Labor: Operation Dixie and the Defeat of the CIO. Philadelphia: Temple University Press, 1988.

Harris, Howell John. Bloodless Victories: The Rise and Fall of the Open Shop in the Philadelphia Metal Trades, 1890-1940. Cambridge: Cambridge University Press, 2000.

Hattam, Victoria C. Labor Visions and State Power: The Origins of Business Unionism in the United States. Princeton: Princeton University Press, 1993.

Heckscher, Charles C. The New Unionism: Employee Involvement in the Changing Corporation. New York: Basic Books, 1987.

Hirsch, Barry T. and John T. Addison. The Economic Analysis of Unions: New Approaches and Evidence. Boston: Allen and Unwin, 1986.

Hirschman, Albert O. Exit, Voice and Loyalty: Responses to Decline in Firms, Organizations, and States. Cambridge, MA, Harvard University Press, 1970.

Hirschman, Albert O. Shifting Involvements: Private Interest and Public Action. Princeton: Princeton University Press, 1982.

Hobsbawm, Eric J. Labouring Men: Studies in the History of Labour. London: Weidenfeld and Nicolson, 1964.

Irons, Janet. Testing the New Deal: The General Textile Strike of 1934 in the American South. Urbana, IL: University of Illinois Press, 2000.

Jacoby, Sanford. Modern Manors: Welfare Capitalism Since the New Deal. Princeton: Princeton University Press, 1997.

Katznelson, Ira and Aristide R. Zolberg, editors. Working-Class Formation: Nineteenth-Century Patterns in Western Europe and the United States. Princeton: Princeton University Press, 1986.

Kocka, Jurgen. “Problems of Working-Class Formation in Germany: The Early Years, 1800-1875.” In Working-Class Formation: Nineteenth-Century Patterns in Western Europe and the United States, edited by Ira Katznelson and Aristide R. Zolberg, 279-351. Princeton: Princeton University Press, 1986.

Letwin, Daniel. The Challenge of Interracial Unionism: Alabama Coal Miners, 1878-1921. Chapel Hill: University of North Carolina Press, 1998.

Maddison, Angus. Dynamic Forces in Capitalist Development: A Long-Run Comparative View. Oxford: Oxford University Press, 1991.

Magraw, Roger. A History of the French Working Class, two volumes. London: Blackwell, 1992.

Milkman, Ruth. Women, Work, and Protest: A Century of United States Women’s Labor. Boston: Routledge and Kegan Paul, 1985.

Montgomery, David. The Fall of the House of Labor: The Workplace, the State, and American Labor Activism, 1865-1920. Cambridge: Cambridge University Press, 1987.

Mullin, Debbie Dudley. “The Porous Umbrella of the AFL: Evidence From Late Nineteenth-Century State Labor Bureau Reports on the Establishment of American Unions.” Ph.D. diss., University of Virginia, 1993.

Nolan, Mary. Social Democracy and Society: Working-Class Radicalism in Dusseldorf, 1890-1920. Cambridge: Cambridge University Press, 1981.

Olson, Mancur. The Logic of Collective Action: Public Goods and the Theory of Groups. Cambridge, MA: Harvard University Press, 1971.

Perlman, Selig. A Theory of the Labor Movement. New York: Macmillan, 1928.

Rachleff, Peter J. Black Labor in the South, 1865-1890. Philadelphia: Temple University Press, 1984.

Roediger, David. The Wages of Whiteness: Race and the Making of the American Working Class. London: Verso, 1991.

Scott, Joan. The Glassworkers of Carmaux: French Craftsmen in Political Action in a Nineteenth-Century City. Cambridge, MA: Harvard University Press, 1974.

Sewell, William H. Jr. Work and Revolution in France: The Language of Labor from the Old Regime to 1848. Cambridge: Cambridge University Press, 1980.

Shorter, Edward and Charles Tilly. Strikes in France, 1830-1968. Cambridge: Cambridge University Press, 1974.

Temin, Peter. Lessons from the Great Depression. Cambridge, MA: MIT Press, 1990.

Thompson, Edward P. The Making of the English Working Class. New York: Vintage, 1966.

Troy, Leo. Distribution of Union Membership among the States, 1939 and 1953. New York: National Bureau of Economic Research, 1957.

United States, Bureau of the Census. Census of Occupations, 1930. Washington, DC: Government Printing Office, 1932.

Visser, Jelle. European Trade Unions in Figures. Boston: Kluwer, 1989.

Voss, Kim. The Making of American Exceptionalism: The Knights of Labor and Class Formation in the Nineteenth Century. Ithaca, NY: Cornell University Press, 1993.

Ware, Norman. The Labor Movement in the United States, 1860-1895: A Study in Democracy. New York: Vintage, 1929.

Washington, Booker T. “The Negro and the Labor Unions.” Atlantic Monthly (June 1913).

Weiler, Paul. “Promises to Keep: Securing Workers’ Rights to Self-Organization Under the NLRA.” Harvard Law Review 96 (1983).

Western, Bruce. Between Class and Market: Postwar Unionization in the Capitalist Democracies. Princeton: Princeton University Press, 1997.

Whatley, Warren. “African-American Strikebreaking from the Civil War to the New Deal.” Social Science History 17 (1993): 525-58.

Wilentz, Robert Sean. Chants Democratic: New York City and the Rise of the American Working Class, 1788-1850. Oxford: Oxford University Press, 1984.

Wolman, Leo. Ebb and Flow in Trade Unionism. New York: National Bureau of Economic Research, 1936.

Zieger, Robert. The CIO, 1935-1955. Chapel Hill: University of North Carolina Press, 1995.

Zolberg, Aristide. “Moments of Madness.” Politics and Society 2 (Winter 1972): 183-207.

Citation: Friedman, Gerald. “Labor Unions in the United States”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/labor-unions-in-the-united-states/

The History of American Labor Market Institutions and Outcomes

Joshua Rosenbloom, University of Kansas

One of the most important implications of modern microeconomic theory is that perfectly competitive markets produce an efficient allocation of resources. Historically, however, most markets have not approached the level of organization of this theoretical ideal. Instead of the costless and instantaneous communication envisioned in theory, market participants must rely on a set of incomplete and often costly channels of communication to learn about conditions of supply and demand; and they may face significant transaction costs to act on the information that they have acquired through these channels.

The economic history of labor market institutions is concerned with identifying the mechanisms that have facilitated the allocation of labor effort in the economy at different times, tracing the historical processes by which they have responded to shifting circumstances, and understanding how these mechanisms affected the allocation of labor as well as the distribution of labor’s products in different epochs.

Labor market institutions include both formal organizations (such as union hiring halls, government labor exchanges, and third party intermediaries such as employment agents), and informal mechanisms of communication such as word-of-mouth about employment opportunities passed between family and friends. The impact of these institutions is broad ranging. It includes the geographic allocation of labor (migration and urbanization), decisions about education and training of workers (investment in human capital), inequality (relative wages), the allocation of time between paid work and other activities such as household production, education, and leisure, and fertility (the allocation of time between production and reproduction).

Because each worker possesses a unique bundle of skills and attributes and each job is different, labor market transactions require the communication of a relatively large amount of information. In other words, the transactions costs involved in the exchange of labor are relatively high. The result is that the barriers separating different labor markets have sometimes been quite high, and these markets are relatively poorly integrated with one another.

The frictions inherent in the labor market mean that even during macroeconomic expansions there may be both a significant number of unemployed workers and a large number of unfilled vacancies. When viewed from some distance and looked at in the long-run, however, what is most striking is how effective labor market institutions have been in adapting to the shifting patterns of supply and demand in the economy. Over the past two centuries American labor markets have accomplished a massive redistribution of labor out of agriculture into manufacturing, and then from manufacturing into services. At the same time they have accomplished a huge geographic reallocation of labor between the United States and other parts of the world as well as within the United States itself, both across states and regions and from rural locations to urban areas.

This essay is organized topically, beginning with a discussion of the evolution of institutions involved in the allocation of labor across space and then taking up the development of institutions that fostered the allocation of labor across industries and sectors. The third section considers issues related to labor market performance.

The Geographic Distribution of Labor

One of the dominant themes of American history is the process of European settlement (and the concomitant displacement of the native population). This movement of population is in essence a labor market phenomenon. From the beginning of European settlement in what became the United States, labor markets were characterized by the scarcity of labor in relation to abundant land and natural resources. Labor scarcity raised labor productivity and enabled ordinary Americans to enjoy a higher standard of living than comparable Europeans. Counterbalancing these inducements to migration, however, were the high costs of travel across the Atlantic and the significant risks posed by settlement in frontier regions. Over time, technological changes lowered the costs of communication and transportation. But exploiting these advantages required the parallel development of new labor market institutions.

Trans-Atlantic Migration in the Colonial Period

During the seventeenth and eighteenth centuries a variety of labor market institutions developed to facilitate the movement of labor in response to the opportunities created by American factor proportions. While some immigrants migrated on their own, the majority of immigrants were either indentured servants or African slaves.

Because of the cost of passage—which exceeded half a year’s income for a typical British immigrant and a full year’s income for a typical German immigrant—only a small portion of European migrants could afford to pay for their passage to the Americas (Grubb 1985a). They did so by signing contracts, or “indentures,” committing themselves to work for a fixed number of years in the future—their labor being their only viable asset—with British merchants, who then sold these contracts to colonists after their ship reached America. Indentured servitude was introduced by the Virginia Company in 1619 and appears to have arisen from a combination of the terms of two other types of labor contract widely used in England at the time: service in husbandry and apprenticeship (Galenson 1981). In other cases, migrants borrowed money for their passage and committed to repay merchants by pledging to sell themselves as servants in America, a practice known as “redemptioner servitude” (Grubb 1986). Redemptioners bore increased risk because they could not predict in advance what terms they might be able to negotiate for their labor, but presumably they accepted this risk because of other benefits, such as the opportunity to choose their own master and to select where they would be employed.

Although data on immigration for the colonial period are scattered and incomplete, a number of scholars have estimated that between half and three-quarters of European immigrants arriving in the colonies came as indentured or redemptioner servants. Using data for the end of the colonial period, Grubb (1985b) found that close to three-quarters of English immigrants to Pennsylvania and nearly 60 percent of German immigrants arrived as servants.

A number of scholars have examined the terms of indenture and redemptioner contracts in some detail (see, e.g., Galenson 1981; Grubb 1985a). They find that consistent with the existence of a well-functioning market, the terms of service varied in response to differences in individual productivity, employment conditions, and the balance of supply and demand in different locations.

The other major source of labor for the colonies was the forced migration of African slaves. Slavery had been introduced in the West Indies at an early date, but it was not until the late seventeenth century that significant numbers of slaves began to be imported into the mainland colonies. From 1700 to 1780 the proportion of blacks in the Chesapeake region grew from 13 percent to around 40 percent. In South Carolina and Georgia, the black share of the population climbed from 18 percent to 41 percent in the same period (McCusker and Menard, 1985, p. 222). Galenson (1984) explains the transition from indentured European to enslaved African labor as the result of shifts in supply and demand conditions in England and the trans-Atlantic slave market. Conditions in Europe improved after 1650, reducing the supply of indentured servants, while at the same time increased competition in the slave trade was lowering the price of slaves (Dunn 1984). In some sense the colonies’ early experience with indentured servants paved the way for the transition to slavery. Like slaves, indentured servants were unfree, and ownership of their labor could be freely transferred from one owner to another. Unlike slaves, however, they could look forward to eventually becoming free (Morgan 1971).

Over time a marked regional division in labor market institutions emerged in colonial America. The use of slaves was concentrated in the Chesapeake and Lower South, where the presence of staple export crops (rice, indigo and tobacco) provided economic rewards for expanding the scale of cultivation beyond the size achievable with family labor. European immigrants (primarily indentured servants) tended to concentrate in the Chesapeake and Middle Colonies, where servants could expect to find the greatest opportunities to enter agriculture once they had completed their term of service. While New England was able to support self-sufficient farmers, its climate and soil were not conducive to the expansion of commercial agriculture, with the result that it attracted relatively few slaves, indentured servants, or free immigrants. These patterns are illustrated in Table 1, which summarizes the composition and destinations of English emigrants in the years 1773 to 1776.

Table 1

English Emigration to the American Colonies, by Destination and Type, 1773-76

Total Emigration
Destination Number Percentage Percent listed as servants
New England 54 1.20 1.85
Middle Colonies 1,162 25.78 61.27
New York 303 6.72 11.55
Pennsylvania 859 19.06 78.81
Chesapeake 2,984 66.21 96.28
Maryland 2,217 49.19 98.33
Virginia 767 17.02 90.35
Lower South 307 6.81 19.54
Carolinas 106 2.35 23.58
Georgia 196 4.35 17.86
Florida 5 0.11 0.00
Total 4,507 80.90

Source: Grubb (1985b, p. 334).

International Migration in the Nineteenth and Twentieth Centuries

American independence marks a turning point in the development of labor market institutions. In 1808 Congress prohibited the importation of slaves. Meanwhile, indentured servitude as a means of financing the migration of European immigrants fell into disuse. As a result, most subsequent migration was at least nominally free migration.

The high cost of migration and the economic uncertainties of the new nation help to explain the relatively low level of immigration in the early years of the nineteenth century. But as the costs of transportation fell, the volume of immigration rose dramatically over the course of the century. Transportation costs were of course only one of the obstacles to international population movements. At least as important were problems of communication. Potential migrants might know in a general way that the United States offered greater economic opportunities than were available at home, but acting on this information required the development of labor market institutions that could effectively link job-seekers with employers.

For the most part, the labor market institutions that emerged in the nineteenth century to direct international migration were “informal” and thus difficult to document. As Rosenbloom (2002, ch. 2) describes, however, word-of-mouth played an important role in labor markets at this time. Many immigrants were following in the footsteps of friends or relatives already in the United States. Often these initial pioneers provided material assistance—helping to purchase ship and train tickets, providing housing—as well as information. The consequences of this so-called “chain migration” are readily reflected in a variety of kinds of evidence. Numerous studies of specific migration streams have documented the role of a small group of initial migrants in facilitating subsequent migration (for example, Barton 1975; Kamphoefner 1987; Gjerde 1985). At a more aggregate level, settlement patterns confirm the tendency of immigrants from different countries to concentrate in different cities (Ward 1971, p. 77; Galloway, Vedder and Shukla 1974).

Informal word-of-mouth communication was an effective labor market institution because it served both employers and job-seekers. For job-seekers the recommendations of friends and relatives were more reliable than those of third parties and often came with additional assistance. For employers the recommendations of current employees served as a kind of screening mechanism, since their employees were unlikely to encourage the immigration of unreliable workers.

While chain migration can explain a quantitatively large part of the redistribution of labor in the nineteenth century it is still necessary to explain how these chains came into existence in the first place. Chain migration always coexisted with another set of more formal labor market institutions that grew up largely to serve employers who could not rely on their existing labor force to recruit new hires (such as railroad construction companies). Labor agents, often themselves immigrants, acted as intermediaries between these employers and job-seekers, providing labor market information and frequently acting as translators for immigrants who could not speak English. Steamship companies operating between Europe and the United States also employed agents to help recruit potential migrants (Rosenbloom 2002, ch. 3).

By the 1840s networks of labor agents along with boarding houses serving immigrants and other similar support networks were well established in New York, Boston, and other major immigrant destinations. The services of these agents were well documented in published guides and most Europeans considering immigration must have known that they could turn to these commercial intermediaries if they lacked friends and family to guide them. After some time working in America these immigrants, if they were successful, would find steadier employment and begin to direct subsequent migration, thus establishing a new link in the stream of chain migration.

The economic impacts of immigration are theoretically ambiguous. Increased labor supply, by itself, would tend to lower wages—benefiting employers and hurting workers. But because immigrants are also consumers, the resulting increase in demand for goods and services will increase the demand for labor, partially offsetting the depressing effect of immigration on wages. As long as the labor to capital ratio rises, however, immigration will necessarily lower wages. But if, as was true in the late nineteenth century, foreign lending follows foreign labor, then there may be no negative impact on wages (Carter and Sutch 1999). Whatever the theoretical considerations, however, immigration became an increasingly controversial political issue during the late nineteenth and early twentieth centuries. While employers and some immigrant groups supported continued immigration, there was a growing nativist sentiment among other segments of the population. Anti-immigrant sentiments appear to have arisen out of a mix of perceived economic effects and concern about the implications of the ethnic, religious and cultural differences between immigrants and the native born.

In 1882, Congress passed the Chinese Exclusion Act. Subsequent legislative efforts to impose further restrictions on immigration passed Congress but foundered on presidential vetoes. The balance of political forces shifted, however, in the wake of World War I. In 1917 a literacy requirement was imposed for the first time, and in 1921 an Emergency Quota Act was passed (Goldin 1994).

With the passage of the Emergency Quota Act in 1921 and subsequent legislation culminating in the National Origins Act, the volume of immigration dropped sharply. Since this time international migration into the United States has been controlled to varying degrees by legal restrictions. Variations in the rules have produced variations in the volume of legal immigration. Meanwhile the persistence of large wage gaps between the United States and Mexico and other developing countries has encouraged a substantial volume of illegal immigration. It remains the case, however, that most of this migration—both legal and illegal—continues to be directed by chains of friends and relatives.

Recent trends in outsourcing and off-shoring have begun to create a new channel by which lower-wage workers outside the United States can respond to the country’s high wages without physically relocating. Workers in India, China, and elsewhere possessing technical skills can now provide services such as data entry or technical support by phone and over the internet. While the novelty of this phenomenon has attracted considerable attention, the actual volume of jobs moved off-shore remains limited, and there are important obstacles to overcome before more jobs can be carried out remotely (Edwards 2004).

Internal Migration in the Nineteenth and Twentieth Centuries

At the same time that American economic development created international imbalances between labor supply and demand it also created internal disequilibrium. Fertile land and abundant natural resources drew population toward less densely settled regions in the West. Over the course of the century, advances in transportation technologies lowered the cost of shipping goods from interior regions, vastly expanding the area available for settlement. Meanwhile transportation advances and technological innovations encouraged the growth of manufacturing and fueled increased urbanization. The movement of population and economic activity from the Eastern Seaboard into the interior of the continent and from rural to urban areas in response to these incentives is an important element of U.S. economic history in the nineteenth century.

In the pre-Civil War era, the labor market response to frontier expansion differed substantially between North and South, with profound effects on patterns of settlement and regional development. Much of the cost of migration is a result of the need to gather information about opportunities in potential destinations. In the South, plantation owners could spread these costs over a relatively large number of potential migrants—i.e., their slaves. Plantations were also relatively self-sufficient, requiring little urban or commercial infrastructure to make them economically viable. Moreover, the existence of well-established markets for slaves allowed western planters to expand their labor force by purchasing additional labor from eastern plantations.

In the North, on the other hand, migration took place through the relocation of small, family farms. Fixed costs of gathering information and the risks of migration loomed larger in these farmers’ calculations than they did for slaveholders, and northern farmers were more dependent on the presence of urban merchants to supply them with inputs and market their products. Consequently the task of mobilizing labor fell to promoters who bought up large tracts of land at low prices and then subdivided them into individual lots. To increase the value of these lands, promoters offered loans, actively encouraged the development of urban services such as blacksmith shops, grain merchants, wagon builders and general stores, and recruited settlers. With the spread of railroads, railroad construction companies also played a role in encouraging settlement along their routes to speed the development of traffic.

The differences in processes of westward migration in the North and South were reflected in the divergence of rates of urbanization, transportation infrastructure investment, manufacturing employment, and population density, all of which were higher in the North than in the South in 1860 (Wright 1986, pp. 19-29).

The Distribution of Labor among Economic Activities

Over the course of U.S. economic development technological changes and shifting consumption patterns have caused the demand for labor to increase in manufacturing and services and decline in agriculture and other extractive activities. These broad changes are illustrated in Table 2. As technological changes have increased the advantages of specialization and the division of labor, more and more economic activity has moved outside the scope of the household, and the boundaries of the labor market have been enlarged. As a result more and more women have moved into the paid labor force. On the other hand, with the increasing importance of formal education, there has been a decline in the number of children in the labor force (Whaples 2005).

Table 2

Sectoral Distribution of the Labor Force, 1800-1999

Year Total Labor Force (1000s) Agriculture Non-Agriculture: Total Manufacturing Services
(Agriculture and non-agriculture figures are percentage shares of the total labor force.)
1800 1,658 76.2 23.8
1850 8,199 53.6 46.4
1900 29,031 37.5 59.4 35.8 23.6
1950 57,860 11.9 88.1 41.0 47.1
1999 133,489 2.3 97.7 24.7 73.0

Notes and Sources: 1800 and 1850 from Weiss (1986), pp. 646-49; remaining years from Hughes and Cain (2003), 547-48. For 1900-1999 Forestry and Fishing are included in the Agricultural labor force.

As these changes have taken place, they have placed strains on existing labor market institutions and encouraged the development of new mechanisms to facilitate the distribution of labor. Over the course of the last century and a half, the tendency has been a movement away from something approximating a “spot” market characterized by short-term employment relationships in which wages are equated to the marginal product of labor, and toward a much more complex and rule-bound set of long-term transactions (Goldin 2000, p. 586). While certain segments of the labor market still involve relatively anonymous and short-lived transactions, workers and employers are much more likely today to enter into long-term employment relationships that are expected to last for many years.

The evolution of labor market institutions in response to these shifting demands has been anything but smooth. During the late nineteenth century the expansion of organized labor was accompanied by often violent labor-management conflict (Friedman 2002). Not until the New Deal did unions gain widespread acceptance and a legal right to bargain. Yet even today, union organizing efforts are often met with considerable hostility.

Conflicts over union organizing efforts inevitably involved state and federal governments because the legal environment directly affected the bargaining power of both sides, and shifting legal opinions and legislative changes played an important part in determining the outcome of these contests. State and federal governments were also drawn into labor markets as various groups sought to limit hours of work, set minimum wages, provide support for disabled workers, and respond to other perceived shortcomings of existing arrangements. It would be wrong, however, to see the growth of government regulation as simply a movement from freer to more regulated markets. The ability to exchange goods and services rests ultimately on the legal system, and to this extent there has never been an entirely unregulated market. In addition, labor market transactions are never as simple as the anonymous exchange of other goods or services. Because the identities of individual buyers and sellers matter, and because of the long-term nature of many employment relationships, adjustments can occur along other margins besides wages, and many of these dimensions involve externalities that affect all workers at a particular establishment, or possibly workers in an entire industry or sector.

Government regulations have responded in many cases to needs voiced by participants on both sides of the labor market for assistance to achieve desired ends. That has not, of course, prevented both workers and employers from seeking to use government to alter the way in which the gains from trade are distributed within the market.

The Agricultural Labor Market

At the beginning of the nineteenth century most labor was employed in agriculture, and, with the exception of large slave plantations, most agricultural labor was performed on small, family-run farms. There were markets for temporary and seasonal agricultural laborers to supplement family labor supply, but in most parts of the country outside the South, families remained the dominant institution directing the allocation of farm labor. Reliable estimates of the number of farm workers are not readily available before 1860, when the federal Census first enumerated “farm laborers.” At this time census enumerators found about 800 thousand such workers, implying an average of less than one-half farm worker per farm. Interpretation of this figure is complicated, however, and it may either overstate the amount of hired help—since farm laborers included unpaid family workers—or understate it—since it excluded those who reported their occupation simply as “laborer” and may have spent some of their time working in agriculture (Wright 1988, p. 193). A possibly more reliable indicator is provided by the percentage of gross value of farm output spent on wage labor. This figure fell from 11.4 percent in 1870 to around 8 percent by 1900, indicating that hired labor was on average becoming even less important (Wright 1988, pp. 194-95).

In the South, after the Civil War, arrangements were more complicated. Former plantation owners continued to own large tracts of land that required labor if they were to be made productive. Meanwhile former slaves needed access to land and capital if they were to support themselves. While some land owners turned to wage labor to work their land, most relied heavily on institutions like sharecropping. On the supply side, croppers viewed this form of employment as a rung on the “agricultural ladder” that would lead eventually to tenancy and possibly ownership. Because climbing the agricultural ladder meant establishing one’s credit-worthiness with local lenders, southern farm laborers tended to sort themselves into two categories: locally established (mostly older, married men) croppers and renters on the one hand, and mobile wage laborers (mostly younger and unmarried) on the other. While the labor market for each of these types of workers appears to have been relatively competitive, the barriers between the two markets remained relatively high (Wright 1987, p. 111).

While the predominant pattern in agriculture then was one of small, family-operated units, there was an important countervailing trend toward specialization that both depended on, and encouraged, the emergence of a more specialized market for farm labor. Because specialization in a single crop increased the seasonality of labor demand, farmers could not afford to employ labor year-round, but had to depend on migrant workers. The use of seasonal gangs of migrant wage laborers developed earliest in California in the 1870s and 1880s, where employers relied heavily on Chinese immigrants. Following restrictions on Chinese entry, they were replaced first by Japanese, and later by Mexican workers (Wright 1988, pp. 201-204).

The Emergence of Internal Labor Markets

Outside of agriculture, at the beginning of the nineteenth century most manufacturing took place in small establishments. Hired labor might consist of a small number of apprentices, or, as in the early New England textile mills, a few child laborers hired from nearby farms (Ware 1931). As a result labor market institutions remained small-scale and informal, and institutions for training and skill acquisition remained correspondingly limited. Workers learned on the job as apprentices or helpers; advancement came through establishing themselves as independent producers rather than through internal promotion.

With the growth of manufacturing, and the spread of factory methods of production, especially in the years after the end of the Civil War, an increasing number of people could expect to spend their working lives as employees. One reflection of this change was the emergence in the 1870s of the problem of unemployment. During the depression of 1873, for the first time, cities throughout the country had to contend with large masses of industrial workers thrown out of work and unable to support themselves through, in the language of the time, “no fault of their own” (Keyssar 1986, ch. 2).

The growth of large factories and the creation of new kinds of labor skills specific to a particular employer created returns to sustaining long-term employment relationships. As workers acquired job- and employer-specific skills, their productivity increased, giving rise to gains that were available only so long as the employment relationship persisted. Employers did little, however, to encourage long-term employment relationships. Instead, authority over hiring, promotion and retention was commonly delegated to foremen or inside contractors (Nelson 1975, pp. 34-54). In the latter case, skilled craftsmen operated in effect as their own bosses, contracting with the firm to supply components or finished products for an agreed price and taking responsibility for hiring and managing their own assistants.

These arrangements were well suited to promoting external mobility. Foremen were often drawn from the immigrant community and could easily tap into word-of-mouth channels of recruitment. But these benefits came increasingly into conflict with rising costs of hiring and training workers.

The informality of personnel policies prior to World War I seems likely to have discouraged lasting employment relationships, and it is true that rates of labor turnover at the beginning of the twentieth century were considerably higher than they were to be later (Owen 2004). Scattered evidence on the duration of employment relationships gathered by various state labor bureaus at the end of the century suggests, however, that at least some workers did establish lasting employment relationships (Carter 1988; Carter and Savoca 1990; Jacoby and Sharma 1992; James 1994).

The growing awareness of the costs of labor turnover and informal, casual labor relations led reformers to advocate the establishment of more centralized and formal processes of hiring, firing and promotion, along with the establishment of internal job ladders and deferred payment plans to help bind workers and employers. The implementation of these reforms did not make significant headway, however, until the 1920s (Slichter 1929). Why employers began to establish internal labor markets in the 1920s remains in dispute. While some scholars emphasize pressure from workers (Jacoby 1984, 1985), others have stressed that it was largely a response to the rising costs of labor turnover (Edwards 1979).

The Government and the Labor Market

The growth of large factories contributed to rising labor tensions in the late nineteenth and early twentieth centuries. Issues like hours of work, safety, and working conditions all have a significant public goods aspect. While market forces of entry and exit will force employers to adopt policies that are sufficient to attract the marginal worker (the one just indifferent between staying and leaving), less mobile workers may find that their interests are not adequately represented (Freeman and Medoff 1984). One solution is to establish mechanisms for collective bargaining, and the years after the American Civil War were characterized by significant progress in the growth of organized labor (Friedman 2002). Unionization efforts, however, met strong opposition from employers, and suffered from the obstacles created by the American legal system’s bias toward protecting property and the freedom of contract. Under prevailing legal interpretation, strikes were often found by the courts to be conspiracies in restraint of trade, with the result that the apparatus of government was often arrayed against labor.

Although efforts to win significant improvements in working conditions were rarely successful, there were still areas where there was room for mutually beneficial change. One such area involved the provision of disability insurance for workers injured on the job. Traditionally, injured workers had turned to the courts to adjudicate liability for industrial accidents. Legal proceedings were costly and their outcome unpredictable. By the early 1910s it became clear to all sides that a system of disability insurance was preferable to reliance on the courts. Resolution of this problem, however, required the intervention of state legislatures to establish mandatory state workers compensation insurance schemes and remove the issue from the courts. Once introduced workers compensation schemes spread quickly: nine states passed legislation in 1911; 13 more had joined the bandwagon by 1913, and by 1920 44 states had such legislation (Fishback 2001).

Along with workers compensation, state legislatures in the late nineteenth century also considered legislation restricting hours of work. Prevailing legal interpretations limited the effectiveness of such efforts for adult males. But rules restricting hours for women and children were found to be acceptable. The federal government passed legislation restricting the employment of children under 14 in 1916, but this law was found unconstitutional in 1918 (Goldin 2000, pp. 612-13).

The economic crisis of the 1930s triggered a new wave of government interventions in the labor market. During the 1930s the federal government granted unions the right to organize legally, established a system of unemployment, disability and old age insurance, and established minimum wage and overtime pay provisions.

In 1933 the National Industrial Recovery Act included provisions legalizing unions’ right to bargain collectively. Although the NIRA was eventually ruled to be unconstitutional, the key labor provisions of the Act were reinstated in the Wagner Act of 1935. While some of the provisions of the Wagner Act were modified in 1947 by the Taft-Hartley Act, its passage marks the beginning of the golden age of organized labor. Union membership jumped very quickly after 1935 from around 12 percent of the non-agricultural labor force to nearly 30 percent, and by the late 1940s had attained a peak of 35 percent, where it stabilized. Since the 1960s, however, union membership has declined steadily, to the point where it is now back at pre-Wagner Act levels.

The Social Security Act of 1935 introduced a federal unemployment insurance scheme that was operated in partnership with state governments and financed through a tax on employers. It also created government old age and disability insurance. In 1938, the federal Fair Labor Standards Act provided for minimum wages and for overtime pay. At first the coverage of these provisions was limited, but it has been steadily increased in subsequent years to cover most industries today.

In the post-war era, the federal government has expanded its role in managing labor markets both directly—through the establishment of occupational safety regulations and anti-discrimination laws, for example—and indirectly—through its efforts to manage the macroeconomy to ensure maximum employment.

A further expansion of federal involvement in labor markets began in 1964 with passage of the Civil Rights Act, which prohibited employment discrimination against both minorities and women. In 1967 the Age Discrimination in Employment Act was passed, prohibiting discrimination against people aged 40 to 70 in regard to hiring, firing, working conditions and pay. The Family and Medical Leave Act of 1993 allows for unpaid leave to care for infants, children and other sick relatives (Goldin 2000, p. 614).

Whether state and federal legislation has significantly affected labor market outcomes remains unclear. Most economists would argue that the majority of labor’s gains in the past century would have occurred even in the absence of government intervention. Rather than shaping market outcomes, many legislative initiatives emerged as a result of underlying changes that were making advances possible. According to Claudia Goldin (2000, p. 553) “government intervention often reinforced existing trends, as in the decline of child labor, the narrowing of the wage structure, and the decrease in hours of work.” In other cases, such as Workers Compensation and pensions, legislation helped to establish the basis for markets.

The Changing Boundaries of the Labor Market

The rise of factories and urban employment had implications that went far beyond the labor market itself. On farms, women and children had found ready employment (Craig 1993, ch. 4). But when the male household head worked for wages, employment opportunities for other family members were more limited. Late nineteenth-century convention largely dictated that married women did not work outside the home unless their husband was dead or incapacitated (Goldin 1990, pp. 119-20). Children, on the other hand, were often viewed as supplementary earners in blue-collar households at this time.

Since 1900 changes in relative earnings power related to shifts in technology have encouraged women to enter the paid labor market while purchasing more of the goods and services that were previously produced within the home. At the same time, the rising value of formal education has led to the withdrawal of child labor from the market and increased investment in formal education (Whaples 2005). During the first half of the twentieth century, high school education became nearly universal. And since World War II, there has been a rapid increase in the number of college-educated workers in the U.S. economy (Goldin 2000, pp. 609-12).

Assessing the Efficiency of Labor Market Institutions

The function of labor markets is to match workers and jobs. As this essay has described, the mechanisms by which labor markets have accomplished this task have changed considerably as the American economy has developed. A central issue for economic historians is to assess how changing labor market institutions have affected the efficiency of labor markets. This leads to three sets of questions. The first concerns the long-run efficiency of market processes in allocating labor across space and economic activities. The second involves the response of labor markets to short-run macroeconomic fluctuations. The third deals with wage determination and the distribution of income.

Long-Run Efficiency and Wage Gaps

Efforts to evaluate the efficiency of market allocation begin with what is commonly known as the “law of one price,” which states that within an efficient market the wage of similar workers doing similar work under similar circumstances should be equalized. The ideal of complete equalization is, of course, unlikely to be achieved given the high information and transactions costs that characterize labor markets. Thus, conclusions are usually couched in relative terms, comparing the efficiency of one market at one point in time with that of other markets at other points in time. A further complication in measuring wage equalization is the need to compare homogeneous workers and to control for other differences (such as cost of living and non-pecuniary amenities).

Falling transportation and communications costs have encouraged a trend toward diminishing wage gaps, but this trend has not always been consistent over time, nor has it applied to all markets in equal measure. That said, what stands out is in fact the relative strength of forces of market arbitrage that have operated in many contexts to promote wage convergence.

At the beginning of the nineteenth century, the costs of trans-Atlantic migration were still quite high and international wage gaps large. By the 1840s, however, vast improvements in shipping cut the costs of migration and gave rise to an era of dramatic international wage equalization (O’Rourke and Williamson 1999, ch. 2; Williamson 1995). Figure 1 shows the movement of real wages relative to the United States in a selection of European countries. After the beginning of mass immigration, wage differentials began to fall substantially in one country after another. International wage convergence continued up until the 1880s, when it appears that the accelerating growth of the American economy outstripped European labor supply responses and briefly reversed wage convergence. World War I and subsequent immigration restrictions caused a sharper break, and contributed to widening international wage differences during the middle portion of the twentieth century. From World War II until about 1980, European wage levels once again began to converge toward the U.S., but this convergence reflected largely internally generated improvements in European living standards rather than labor market pressures.

Figure 1

Relative Real Wages of Selected European Countries, 1830-1980 (US = 100)

Source: Williamson (1995), Tables A2.1-A2.3.

Wage convergence also took place within some parts of the United States during the nineteenth century. Figure 2 traces wages in the North Central and Southern regions of the U.S. relative to those in the Northeast across the period from 1820 to the early twentieth century. Wages in the North Central region of the country were 30 to 40 percent higher than in the East in the 1820s (Margo 2000a, ch. 5). Thereafter, wage gaps declined substantially, falling to the 10-20 percent range before the Civil War. Despite some temporary divergence during the war, wage gaps had fallen to 5 to 10 percent by the 1880s and 1890s. Much of this decline was made possible by faster and less expensive means of transportation, but it was also dependent on the development of labor market institutions linking the two regions, for while transportation improvements helped to link East and West, there was no corresponding North-South integration. While southern wages hovered near levels in the Northeast prior to the Civil War, they fell substantially below northern levels after the Civil War, as Figure 2 illustrates.

Figure 2

Relative Regional Real Wage Rates in the United States, 1825-1984

(Northeast = 100 in each year)

Notes and sources: Rosenbloom (2002, p. 133); Montgomery (1992). It is not possible to assemble entirely consistent data on regional wage variations over such an extended period. The nature of the wage data, the precise geographic coverage of the data, and the estimates of regional cost-of-living indices are all different. The earliest wage data—Margo (2000a), Sundstrom and Rosenbloom (1993), and Coelho and Shepherd (1976)—are all based on occupational wage rates from payroll records for specific occupations; Rosenbloom (1996) uses average earnings across all manufacturing workers; and Montgomery (1992) uses individual-level wage data drawn from the Current Population Survey, calculating geographic variations with a regression technique to control for individual differences in human capital and industry of employment. I used the relative real wages that Montgomery (1992) reported for workers in manufacturing, and used an unweighted average of wages across the cities in each region to arrive at relative regional real wages. Interested readers should consult the various underlying sources for further details.

Despite the large North-South wage gap, Table 3 shows that there was relatively little migration out of the South until large-scale foreign immigration came to an end. Migration from the South during World War I and the 1920s created a basis for future chain migration, but the Great Depression of the 1930s interrupted this process of adjustment. Not until the 1940s did the North-South wage gap begin to decline substantially (Wright 1986, pp. 71-80). By the 1970s the southern wage disadvantage had largely disappeared, and because of the declining fortunes of older manufacturing districts and the rise of Sunbelt cities, wages in the South now exceed those in the Northeast (Coelho and Ghali 1971; Bellante 1979; Sahling and Smith 1983; Montgomery 1992). Despite these shocks, however, the overall variation in wages appears comparable to levels attained by the end of the nineteenth century. Montgomery (1992), for example, finds that from 1974 to 1984 the standard deviation of wages across SMSAs was only about 10 percent of the average wage.

Table 3

Net Migration by Region and Race, 1870-1950

South Northeast North Central West
Period White Black White Black White Black White Black
Number (in 1,000s)
1870-80 91 -68 -374 26 26 42 257 0
1880-90 -271 -88 -240 61 -43 28 554 0
1890-1900 -30 -185 101 136 -445 49 374 0
1900-10 -69 -194 -196 109 -1,110 63 1,375 22
1910-20 -663 -555 -74 242 -145 281 880 32
1920-30 -704 -903 -177 435 -464 426 1,345 42
1930-40 -558 -480 55 273 -747 152 1,250 55
1940-50 -866 -1581 -659 599 -1,296 626 2,822 356
Rate (migrants/1,000 Population)
1870-80 11 -14 -33 55 2 124 274 0
1880-90 -26 -15 -18 107 -3 65 325 0
1890-1900 -2 -26 6 200 -23 104 141 0
1900-10 -4 -24 -11 137 -48 122 329 542
1910-20 -33 -66 -3 254 -5 421 143 491
1920-30 -30 -103 -7 328 -15 415 160 421
1930-40 -20 -52 2 157 -22 113 116 378
1940-50 -28 -167 -20 259 -35 344 195 964

Note: Net migration is calculated as the difference between the actual increase in population over each decade and the predicted increase based on age- and sex-specific mortality rates and the demographic structure of the region’s population at the beginning of the decade. If the actual increase exceeds the predicted increase, this implies net migration into the region; if the actual increase is less than predicted, this implies net migration out of the region (a brief illustrative sketch of this calculation follows the source line below). The states included in the Southern region are Oklahoma, Texas, Arkansas, Louisiana, Mississippi, Alabama, Tennessee, Kentucky, West Virginia, Virginia, North Carolina, South Carolina, Georgia, and Florida.

Source: Eldridge and Thomas (1964, pp. 90, 99).
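
To make the accounting concrete, the following sketch implements the census-survival logic described in the note. It is only an illustration: the population figures and the natural-growth factor are hypothetical, and the actual estimates rest on detailed age- and sex-specific survival rates rather than a single ratio.

```python
# Illustrative sketch of the net-migration accounting described in the note above.
# All numbers are hypothetical.

def net_migration(pop_start, pop_end, natural_growth_factor):
    """Net migration = actual end-of-decade population minus the population
    predicted from natural increase (births and deaths) alone."""
    predicted_end = pop_start * natural_growth_factor
    return pop_end - predicted_end

# A region with 2,000 (thousand) residents at the start of a decade, an expected
# natural-growth factor of 1.15 over the decade, and an observed end-of-decade
# population of 2,250 (thousand):
migrants = net_migration(pop_start=2000, pop_end=2250, natural_growth_factor=1.15)
print(migrants)                # -50.0: net out-migration of 50,000 people
print(1000 * migrants / 2000)  # -25.0: roughly -25 migrants per 1,000 population
```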

In addition to geographic wage gaps economists have considered gaps between farm and city, between black and white workers, between men and women, and between different industries. The literature on these topics is quite extensive and this essay can only touch on a few of the more general themes raised here as they relate to U.S. economic history.

Studies of farm-city wage gaps are a variant of the broader literature on geographic wage variation, related to the general movement of labor from farms to urban manufacturing and services. Here comparisons are complicated by the need to adjust for the non-wage perquisites that farm laborers typically received, which could be almost as large as cash wages. The issue of whether such gaps existed in the nineteenth century has important implications for whether the pace of industrialization was impeded by the lack of adequate labor supply responses. By the second half of the nineteenth century at least, it appears that farm-manufacturing wage gaps were small and markets were relatively integrated (Wright 1988, pp. 204-5). Margo (2000a, ch. 4) offers evidence of a high degree of equalization within local labor markets between farm and urban wages as early as 1860. Making comparisons within counties and states, he reports that farm wages were within 10 percent of urban wages in eight states. Analyzing data from the late nineteenth century through the 1930s, Hatton and Williamson (1991) find that farm and city wages were nearly equal within U.S. regions by the 1890s. It appears, however, that during the Great Depression farm wages were much more flexible than urban wages, causing a large gap to emerge at this time (Alston and Williamson 1991).

Much attention has been focused on trends in wage gaps by race and sex. The twentieth century has seen a substantial convergence in both of these differentials. Table 4 displays comparisons of earnings of black males relative to white males for full-time workers. In 1940, full-time black male workers earned only about 43 percent of what white male full-time workers did. By 1980 the racial pay ratio had risen to nearly 73 percent, but there has been little subsequent progress. Until the mid-1960s these gains can be attributed primarily to migration from the low-wage South to higher-paying areas in the North, and to increases in the quantity and quality of black education over time (Margo 1995; Smith and Welch 1989). Since then, however, most gains have been due to shifts in relative pay within regions. Although it is clear that discrimination was a key factor in limiting access to education, the role of discrimination within the labor market in contributing to these differentials has been a more controversial topic (see Wright 1986, pp. 127-34). But the episodic nature of black wage gains, especially after 1964, is compelling evidence that discrimination has played a role historically in earnings differences and that federal anti-discrimination legislation was a crucial factor in reducing its effects (Donohue and Heckman 1991).

Table 4

Black Male Wages as a Percentage of White Male Wages, 1940-2004

Date Black Relative Wage
1940 43.4
1950 55.2
1960 57.5
1970 64.4
1980 72.6
1990 70.0
2004 77.0

Notes and Sources: Data for 1940 through 1980 are based on Census data as reported in Smith and Welch (1989, Table 8). Data for 1990 are from Ehrenberg and Smith (2000, Table 12.4) and refer to earnings of full-time, full-year workers. Data for 2004 are median weekly earnings of full-time wage and salary workers derived from the Current Population Survey, accessed online from the Bureau of Labor Statistics on 13 December 2005; URL ftp://ftp.bls.gov/pub/special.requests/lf/aat37.txt.

Male-female wage gaps have also narrowed substantially over time. In the 1820s women’s earnings in manufacturing were a little less than 40 percent of those of men, but this ratio rose over time, reaching about 55 percent by the 1920s. Across all sectors women’s relative pay rose during the first half of the twentieth century, but gains in female wages stalled during the 1950s and 1960s at the time when female labor force participation began to increase rapidly. Beginning in the late 1970s or early 1980s, relative female pay began to rise again, and today women earn about 80 percent of what men do (Goldin 1990, table 3.2; Goldin 2000, pp. 606-8). Part of this remaining difference is explained by differences in the occupational distribution of men and women, with women tending to be concentrated in lower-paying jobs. Whether these differences are the result of persistent discrimination or arise because of differences in productivity or a choice by women to trade off greater flexibility in terms of labor market commitment for lower pay remains controversial.

In addition to locational, sectoral, racial and gender wage differentials, economists have also documented and analyzed differences by industry. Krueger and Summers (1987) find that there are pronounced differences in wages by industry within well-specified occupational classes, and that these differentials have remained relatively stable over several decades. One interpretation of this phenomenon is that in industries with substantial market power workers are able to extract some of the monopoly rents as higher pay. An alternative view is that workers are in fact heterogeneous, and differences in wages reflect a process of sorting in which higher paying industries attract more able workers.

The Response to Short-run Macroeconomic Fluctuations

The existence of unemployment is one of the clearest indications of the persistent frictions that characterize labor markets. As described earlier, the concept of unemployment first entered common discussion with the growth of the factory labor force in the 1870s. Unemployment was not a visible social phenomenon in an agricultural economy, although there was undoubtedly a great deal of hidden underemployment.

Although one might have expected that the shift from spot toward more contractual labor markets would have increased rigidities in the employment relationship, and with them the level of unemployment, there is in fact no evidence of any long-run increase in the level of unemployment.

Contemporaneous measurements of the rate of unemployment only began in 1940. Prior to this date, economic historians have had to estimate unemployment levels from a variety of other sources. Decennial censuses provide benchmark levels, but it is necessary to interpolate between these benchmarks based on other series. Conclusions about long-run changes in unemployment behavior depend to a large extent on the method used to interpolate between benchmark dates. Estimates prepared by Stanley Lebergott (1964) suggest that the average level of unemployment and its volatility have declined between the pre-1930 and post-World War II periods. Christina Romer (1986a, 1986b), however, has argued that there was no decline in volatility. Rather, she argues that the apparent change in behavior is the result of Lebergott’s interpolation procedure.
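
The interpolation problem can be illustrated with a minimal sketch using invented numbers. The benchmark rates, the indicator series, and the scaling factor below are all hypothetical; the point is simply that two procedures anchored to the same census benchmarks can imply very different year-to-year volatility, which is what the disagreement over Lebergott’s estimates turns on.

```python
# Hypothetical illustration: two ways of filling in annual unemployment rates
# between census benchmark years.
import numpy as np

benchmark_years = [1900, 1910]
benchmark_unemployment = [5.0, 5.9]   # invented benchmark rates, in percent

years = np.arange(1900, 1911)

# (1) Straight-line interpolation between the benchmarks smooths away annual swings.
linear = np.interp(years, benchmark_years, benchmark_unemployment)

# (2) Tying the interpolation to an annual indicator (here an invented output-gap
# series, zero in the benchmark years) transmits the indicator's volatility into
# the interpolated unemployment series.
output_gap = np.array([0.0, -1.5, 0.8, 2.0, -3.0, -1.0, 1.5, 2.5, -4.0, 0.5, 0.0])
indicator_based = linear - 0.5 * output_gap   # invented Okun-style scaling

print(np.round(linear, 2))
print(np.round(indicator_based, 2))
```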

While the aggregate behavior of unemployment has changed surprisingly little over the past century, the changing nature of employment relationships has been reflected much more clearly in changes in the distribution of the burden of unemployment (Goldin 2000, pp. 591-97). At the beginning of the twentieth century, unemployment was relatively widespread, and largely unrelated to personal characteristics. Thus many employees faced great uncertainty about the permanence of their employment relationship. Today, on the other hand, unemployment is highly concentrated: falling heavily on the least skilled, the youngest, and the non-white segments of the labor force. Thus, the movement away from spot markets has tended to create a two-tier labor market in which some workers are highly vulnerable to economic fluctuations, while others remain largely insulated from economic shocks.

Wage Determination and Distributional Issues

American economic growth has generated vast increases in the material standard of living. Real gross domestic product per capita, for example, has increased more than twenty-fold since 1820 (Steckel 2002). This growth in total output has in large part been passed on to labor in the form of higher wages. Although labor’s share of national output has fluctuated somewhat, in the long run it has remained surprisingly stable. According to Abramovitz and David (2000, p. 20), labor received 65 percent of national income in the years 1800-1855. Labor’s share dropped in the late nineteenth and early twentieth centuries, falling to a low of 54 percent of national income between 1890 and 1927, but has since risen, reaching 65 percent again in 1966-1989. Thus, over the long term, labor income has grown at the same rate as total output in the economy.

The distribution of labor’s gains across different groups in the labor force has also varied over time. I have already discussed patterns of wage variation by race and gender, but another important issue revolves around the overall level of inequality of pay, and differences in pay between groups of skilled and unskilled workers. Careful research by Piketty and Saez (2003) using individual income tax returns has documented changes in the overall distribution of income in the United States since 1913. They find that inequality has followed a U-shaped pattern over the course of the twentieth century. Inequality was relatively high at the beginning of the period they consider, fell sharply during World War II, held steady until the early 1970s and then began to increase, reaching levels comparable to those in the early twentieth century by the 1990s.

An important factor in the rising inequality of income since 1970 has been growing dispersion in wage rates. The wage differential between workers in the 90th percentile of the wage distribution and those in the 10th percentile increased by 49 percent between 1969 and 1995 (Plotnick et al. 2000, pp. 357-58). These shifts are mirrored in increased premiums earned by college graduates relative to high school graduates. Two primary explanations have been advanced for these trends. First, there is evidence that technological changes—especially those associated with the increased use of information technology—have increased the relative demand for more educated workers (Murnane, Willett and Levy 1995). Second, increased global integration has allowed low-wage manufacturing industries overseas to compete more effectively with U.S. manufacturers, thus depressing wages in what have traditionally been high-paying blue-collar jobs.
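
For readers unfamiliar with the 90-10 measure used above, the short sketch below shows how such a differential is computed from individual wage observations; the wage figures are invented purely for illustration.

```python
# Hypothetical illustration of the 90th-to-10th percentile wage differential.
import numpy as np

hourly_wages = np.array([6.5, 7.2, 8.0, 9.5, 11.0, 12.5, 14.0, 17.5, 22.0, 35.0])

p90 = np.percentile(hourly_wages, 90)  # wage at the 90th percentile
p10 = np.percentile(hourly_wages, 10)  # wage at the 10th percentile

# The 90-10 ratio; growth in this ratio over time signals widening wage dispersion.
print(round(p90 / p10, 2))
```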

Efforts to expand the scope of analysis over a longer run encounter problems with more limited data. Based on selected wage ratios of skilled and unskilled workers, Williamson and Lindert (1980) have argued that there was an increase in wage inequality over the course of the nineteenth century. But other scholars have argued that the wage series that Williamson and Lindert used are unreliable (Margo 2000b, pp. 224-28).

Conclusions

The history of labor market institutions in the United States illustrates the point that real world economies are substantially more complex than the simplest textbook models. Instead of a disinterested and omniscient auctioneer, the process of matching buyers and sellers takes place through the actions of self-interested market participants. The resulting labor market institutions do not respond immediately and precisely to shifting patterns of incentives. Rather they are subject to historical forces of increasing-returns and lock-in that cause them to change gradually and along path-dependent trajectories.

For all of these departures from the theoretically ideal market, however, the history of labor markets in the United States can also be seen as a confirmation of the remarkable power of market processes of allocation. From the beginning of European settlement in mainland North America, labor markets have done a remarkable job of responding to shifting patterns of demand and supply. Not only have they accomplished the massive geographic shifts associated with the settlement of the United States, but they have also dealt with huge structural changes induced by the sustained pace of technological change.

References

Abramovitz, Moses and Paul A. David. “American Macroeconomic Growth in the Era of Knowledge-Based Progress: The Long-Run Perspective.” In The Cambridge Economic History of the United States, Volume 3: The Twentieth Century, edited by Stanley L. Engerman and Robert Gallman. New York: Cambridge University Press, 2000.

Alston, Lee J. and Jeffery G. Williamson. “The Earnings Gap between Agricultural and Manufacturing Laborers, 1925-1941.” Journal of Economic History 51, no. 1 (1991): 83-99.

Barton, Josef J. Peasants and Strangers: Italians, Rumanians, and Slovaks in an American City, 1890-1950. Cambridge, MA: Harvard University Press, 1975.

Bellante, Don. “The North-South Differential and the Migration of Heterogeneous Labor.” American Economic Review 69, no. 1 (1979): 166-75.

Carter, Susan B. “The Changing Importance of Lifetime Jobs in the U.S. Economy, 1892-1978.” Industrial Relations 27 (1988): 287-300.

Carter, Susan B. and Elizabeth Savoca. “Labor Mobility and Lengthy Jobs in Nineteenth-Century America.” Journal of Economic History 50, no. 1 (1990): 1-16.

Carter, Susan B. and Richard Sutch. “Historical Perspectives on the Economic Consequences of Immigration into the United States.” In The Handbook of International Migration: The American Experience, edited by Charles Hirschman, Philip Kasinitz and Josh DeWind. New York: Russell Sage Foundation, 1999.

Coelho, Philip R.P. and Moheb A. Ghali. “The End of the North-South Wage Differential.” American Economic Review 61, no. 5 (1971): 932-37.

Coelho, Philip R.P. and James F. Shepherd. “Regional Differences in Real Wages: The United States in 1851-1880.” Explorations in Economic History 13 (1976): 203-30.

Craig, Lee A. To Sow One Acre More: Childbearing and Farm Productivity in the Antebellum North. Baltimore: Johns Hopkins University Press, 1993.

Donohue, John J. III and James J. Heckman. “Continuous versus Episodic Change: The Impact of Civil Rights Policy on the Economic Status of Blacks.” Journal of Economic Literature 29, no. 4 (1991): 1603-43.

Dunn, Richard S. “Servants and Slaves: The Recruitment and Employment of Labor.” In Colonial British America: Essays in the New History of the Early Modern Era, edited by Jack P. Greene and J.R. Pole. Baltimore: Johns Hopkins University Press, 1984.

Edwards, B. “A World of Work: A Survey of Outsourcing.” Economist 13 November 2004.

Edwards, Richard. Contested Terrain: The Transformation of the Workplace in the Twentieth Century. New York: Basic Books, 1979.

Ehrenberg, Ronald G. and Robert S. Smith. Modern Labor Economics: Theory and Public Policy, seventh edition. Reading, MA: Addison-Wesley, 2000.

Eldridge, Hope T. and Dorothy Swaine Thomas. Population Redistribution and Economic Growth, United States 1870-1950, vol. 3: Demographic Analyses and Interrelations. Philadelphia: American Philosophical Society, 1964.

Fishback, Price V. “Workers’ Compensation.” EH.Net Encyclopedia, edited by Robert Whaples. August 15, 2001. URL http://www.eh.net/encyclopedia/articles/fishback.workers.compensation.

Freeman, Richard and James Medoff. What Do Unions Do? New York: Basic Books, 1984.

Friedman, Gerald. “Labor Unions in the United States.” EH.Net Encyclopedia, edited by Robert Whaples. May 8, 2002. URL http://www.eh.net/encyclopedia/articles/friedman.unions.us.

Galenson, David W. White Servitude in Colonial America. New York: Cambridge University Press, 1981.

Galenson, David W. “The Rise and Fall of Indentured Servitude in the Americas: An Economic Analysis.” Journal of Economic History 44, no. 1 (1984): 1-26.

Galloway, Lowell E., Richard K. Vedder and Vishwa Shukla. “The Distribution of the Immigrant Population in the United States: An Econometric Analysis.” Explorations in Economic History 11 (1974): 213-26.

Gjerde, John. From Peasants to Farmers: Migration from Balestrand, Norway to the Upper Middle West. New York: Cambridge University Press, 1985.

Goldin, Claudia. “The Political Economy of Immigration Restriction in the United States, 1890 to 1921.” In The Regulated Economy: A Historical Approach to Political Economy, edited by Claudia Goldin and Gary Libecap. Chicago: University of Chicago Press, 1994.

Goldin, Claudia. “Labor Markets in the Twentieth Century.” In The Cambridge Economic History of the United States, Volume 3: The Twentieth Century, edited by Stanley L. Engerman and Robert Gallman. Cambridge: Cambridge University Press, 2000.

Grubb, Farley. “The Market for Indentured Immigrants: Evidence on the Efficiency of Forward Labor Contracting in Philadelphia, 1745-1773.” Journal of Economic History 45, no. 4 (1985a): 855-68.

Grubb, Farley. “The Incidence of Servitude in Trans-Atlantic Migration, 1771-1804.” Explorations in Economic History 22 (1985b): 316-39.

Grubb, Farley. “Redemptioner Immigration to Pennsylvania: Evidence on Contract Choice and Profitability.” Journal of Economic History 46, no. 2 (1986): 407-18.

Hatton, Timothy J. and Jeffrey G. Williamson. “Integrated and Segmented Labor Markets: Thinking in Two Sectors.” Journal of Economic History 51, no. 2 (1991): 413-25.

Hughes, Jonathan and Louis Cain. American Economic History, sixth edition. Boston: Addison-Wesley, 2003.

Jacoby, Sanford M. “The Development of Internal Labor Markets in American Manufacturing Firms.” In Internal Labor Markets, edited by Paul Osterman, 23-69. Cambridge, MA: MIT Press, 1984.

Jacoby, Sanford M. Employing Bureaucracy: Managers, Unions, and the Transformation of Work in American Industry, 1900-1945. New York: Columbia University Press, 1985.

Jacoby, Sanford M. and Sunil Sharma. “Employment Duration and Industrial Labor Mobility in the United States, 1880-1980.” Journal of Economic History 52, no. 1 (1992): 161-79.

James, John A. “Job Tenure in the Gilded Age.” In Labour Market Evolution: The Economic History of Market Integration, Wage Flexibility, and the Employment Relation, edited by George Grantham and Mary MacKinnon. New York: Routledge, 1994.

Kamphoefner, Walter D. The Westfalians: From Germany to Missouri. Princeton, NJ: Princeton University Press, 1987.

Keyssar, Alexander. Out of Work: The First Century of Unemployment in Massachusetts. New York: Cambridge University Press, 1986.

Krueger, Alan B. and Lawrence H. Summers. “Reflections on the Inter-Industry Wage Structure.” In Unemployment and the Structure of Labor Markets, edited by Kevin Lang and Jonathan Leonard, 17-47. Oxford: Blackwell, 1987.

Lebergott, Stanley. Manpower in Economic Growth: The American Record since 1800. New York: McGraw-Hill, 1964.

Margo, Robert. “Explaining Black-White Wage Convergence, 1940-1950: The Role of the Great Compression.” Industrial and Labor Relations Review 48 (1995): 470-81.

Margo, Robert. Wages and Labor Markets in the United States, 1820-1860. Chicago: University of Chicago Press, 2000a.

Margo, Robert. “The Labor Force in the Nineteenth Century.” In The Cambridge Economic History of the United States, Volume 2: The Long Nineteenth Century, edited by Stanley L. Engerman and Robert E. Gallman, 207-44. New York: Cambridge University Press, 2000b.

McCusker, John J. and Russell R. Menard. The Economy of British America: 1607-1789. Chapel Hill: University of North Carolina Press, 1985.

Montgomery, Edward. “Evidence on Metropolitan Wage Differences across Industries and over Time.” Journal of Urban Economics 31 (1992): 69-83.

Morgan, Edmund S. “The Labor Problem at Jamestown, 1607-18.” American Historical Review 76 (1971): 595-611.

Murnane, Richard J., John B. Willett and Frank Levy. “The Growing Importance of Cognitive Skills in Wage Determination.” Review of Economics and Statistics 77 (1995): 251-66.

Nelson, Daniel. Managers and Workers: Origins of the New Factory System in the United States, 1880-1920. Madison: University of Wisconsin Press, 1975.

O’Rourke, Kevin H. and Jeffrey G. Williamson. Globalization and History: The Evolution of a Nineteenth-Century Atlantic Economy. Cambridge, MA: MIT Press, 1999.

Owen, Laura. “History of Labor Turnover in the U.S.” EH.Net Encyclopedia, edited by Robert Whaples. April 30, 2004. URL http://www.eh.net/encyclopedia/articles/owen.turnover.

Piketty, Thomas and Emmanuel Saez. “Income Inequality in the United States, 1913-1998.” Quarterly Journal of Economics 118 (2003): 1-39.

Plotnick, Robert D. et al. “The Twentieth-Century Record of Inequality and Poverty in the United States.” In The Cambridge Economic History of the United States, Volume 3: The Twentieth Century, edited by Stanley L. Engerman and Robert Gallman. New York: Cambridge University Press, 2000.

Romer, Christina. “New Estimates of Prewar Gross National Product and Unemployment.” Journal of Economic History 46, no. 2 (1986a): 341-52.

Romer, Christina. “Spurious Volatility in Historical Unemployment Data.” Journal of Political Economy 94 (1986b): 1-37.

Rosenbloom, Joshua L. “Was There a National Labor Market at the End of the Nineteenth Century? New Evidence on Earnings in Manufacturing.” Journal of Economic History 56, no. 3 (1996): 626-56.

Rosenbloom, Joshua L. Looking for Work, Searching for Workers: American Labor Markets during Industrialization. New York: Cambridge University Press, 2002.

Sahling, Leonard G. and Sharon P. Smith. “Regional Wage Differentials: Has the South Risen Again?” Review of Economics and Statistics 65 (1983): 131-35.

Slichter, Sumner H. “The Current Labor Policies of American Industries.” Quarterly Journal of Economics 43 (1929): 393-435.

Smith, James P. and Finis R. Welch. “Black Economic Progress after Myrdal.” Journal of Economic Literature 27 (1989): 519-64.

Steckel, Richard. “A History of the Standard of Living in the United States.” EH.Net Encyclopedia, edited by Robert Whaples. July 22, 2002. URL http://eh.net/encyclopedia/article/steckel.standard.living.us.

Sundstrom, William A. and Joshua L. Rosenbloom. “Occupational Differences in the Dispersion of Wages and Working Hours: Labor Market Integration in the United States, 1890-1903.” Explorations in Economic History 30 (1993): 379-408.

Ward, David. Cities and Immigrants: A Geography of Change in Nineteenth-Century America. New York: Oxford University Press, 1971.

Ware, Caroline F. The Early New England Cotton Manufacture: A Study in Industrial Beginnings. Boston: Houghton Mifflin, 1931.

Weiss, Thomas. “Revised Estimates of the United States Workforce, 1800-1860.” In Long Term Factors in American Economic Growth, edited by Stanley L. Engerman and Robert E. Gallman, 641-78. Chicago: University of Chicago, 1986.

Whaples, Robert. “Child Labor in the United States.” EH.Net Encyclopedia, edited by Robert Whaples. October 8, 2005. URL http://eh.net/encyclopedia/article/whaples.childlabor.

Williamson, Jeffrey G. “The Evolution of Global Labor Markets since 1830: Background Evidence and Hypotheses.” Explorations in Economic History 32 (1995): 141-96.

Williamson, Jeffrey G. and Peter H. Lindert. American Inequality: A Macroeconomic History. New York: Academic Press, 1980.

Wright, Gavin. Old South, New South: Revolutions in the Southern Economy since the Civil War. New York: Basic Books, 1986.

Wright, Gavin. “Postbellum Southern Labor Markets.” In Quantity and Quiddity: Essays in U.S. Economic History, edited by Peter Kilby. Middletown, CT: Wesleyan University Press, 1987.

Wright, Gavin. “American Agriculture and the Labor Market: What Happened to Proletarianization?” Agricultural History 62 (1988): 182-209.

Citation: Rosenbloom, Joshua. “The History of American Labor Market Institutions and Outcomes”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/the-history-of-american-labor-market-institutions-and-outcomes/

Hours of Work in U.S. History

Robert Whaples, Wake Forest University

In the 1800s, many Americans worked seventy hours or more per week and the length of the workweek became an important political issue. Since then the workweek’s length has decreased considerably. This article presents estimates of the length of the historical workweek in the U.S., describes the history of the shorter-hours “movement,” and examines the forces that drove the workweek’s decline over time.

Estimates of the Length of the Workweek

Measuring the length of the workweek (or workday or workyear) is a difficult task, full of ambiguities concerning what constitutes work and who is to be considered a worker. Estimating the length of the historical workweek is even more troublesome. Before the Civil War most Americans were employed in agriculture and most of these were self-employed. Like self-employed workers in other fields, they saw no reason to record the amount of time they spent working. Often the distinction between work time and leisure time was blurry. Therefore, estimates of the length of the typical workweek before the mid-1800s are very imprecise.

The Colonial Period

Based on the amount of work performed — for example, crops raised per worker — Carr (1992) concludes that in the seventeenth-century Chesapeake region, “for at least six months of the year, an eight to ten-hour day of hard labor was necessary.” This does not account for other required tasks, which probably took about three hours per day. This workday was considerably longer than for English laborers, who at the time probably averaged closer to six hours of heavy labor each day.

The Nineteenth Century

Some observers believe that most American workers adopted the practice of working from “first light to dark” — filling all their free hours with work — throughout the colonial period and into the nineteenth century. Others are skeptical of such claims and argue that work hours increased during the nineteenth century — especially its first half. Gallman (1975) calculates “changes in implicit hours of work per agricultural worker” and estimates that hours increased 11 to 18 percent from 1800 to 1850. Fogel and Engerman (1977) argue that agricultural hours in the North increased before the Civil War due to the shift into time-intensive dairy and livestock. Weiss and Craig (1993) find evidence suggesting that agricultural workers also increased their hours of work between 1860 and 1870. Finally, Margo (2000) estimates that “on an economy-wide basis, it is probable that annual hours of work rose over the (nineteenth) century, by around 10 percent.” He credits this rise to the shift out of agriculture, a decline in the seasonality of labor demand and reductions in annual periods of nonemployment. On the other hand, it is clear that working hours declined substantially for one important group. Ransom and Sutch (1977) and Ng and Virts (1989) estimate that annual labor hours per capita fell 26 to 35 percent among African-Americans with the end of slavery.

Manufacturing Hours before 1890

Our most reliable estimates of the workweek come from manufacturing, since most employers required that manufacturing workers remain at work during precisely specified hours. The Census of Manufactures began to collect this information in 1880, but earlier estimates are available. Much of what is known about average work hours in the nineteenth century comes from two surveys of manufacturing hours taken by the federal government. The first survey, known as the Weeks Report, was prepared by Joseph Weeks as part of the Census of 1880. The second was prepared in 1893 by Commissioner of Labor Carroll D. Wright for the Senate Committee on Finance, chaired by Nelson Aldrich. It is commonly called the Aldrich Report. Both of these sources, however, have been criticized as flawed due to problems such as sample selection bias (firms whose records survived may not have been typical) and unrepresentative regional and industrial coverage. In addition, the two series differ in their estimates of the average length of the workweek by as much as four hours. These estimates are reported in Table 1. Despite the previously mentioned problems, it seems reasonable to accept two important conclusions based on these data — the length of the typical manufacturing workweek in the 1800s was very long by modern standards, and it declined significantly between 1830 and 1890.

Table 1
Estimated Average Weekly Hours Worked in Manufacturing, 1830-1890

Year Weeks Report Aldrich Report
1830 69.1
1840 67.1 68.4
1850 65.5 69.0
1860 62.0 66.0
1870 61.1 63.0
1880 60.7 61.8
1890 60.0

Sources: U.S. Department of Interior (1883), U.S. Senate (1893)
Note: Atack and Bateman (1992), using data from census manuscripts, estimate average weekly hours to be 60.1 in 1880 — very close to Weeks’ contemporary estimate. They also find that the summer workweek was about 1.5 hours longer than the winter workweek.

Hours of Work during the Twentieth Century

Because of changing definitions and data sources there does not exist a consistent series of workweek estimates covering the entire twentieth century. Table 2 presents six sets of estimates of weekly hours. Despite differences among the series, there is a fairly consistent pattern, with weekly hours falling considerably during the first third of the century and much more slowly thereafter. In particular, hours fell strongly during the years surrounding World War I, so that by 1919 the eight-hour day (with six workdays per week) had been won. Hours fell sharply at the beginning of the Great Depression, especially in manufacturing, then rebounded somewhat and peaked during World War II. After World War II, the length of the workweek stabilized around forty hours. Owen’s nonstudent-male series shows little trend after World War II, but the other series show a slow, but steady, decline in the length of the average workweek. Greis’s two series are based on the average length of the workyear and adjust for paid vacations, holidays and other time-off. The last column is based on information reported by individuals in the decennial censuses and in the Current Population Survey of 1988. It may be the most accurate and representative series, as it is based entirely on the responses of individuals rather than employers.

Table 2
Estimated Average Weekly Hours Worked, 1900-1988

Year   Census of Manufacturing   Jones (Manufacturing)   Owen (Nonstudent Males)   Greis (Manufacturing)   Greis (All Workers)   Census/CPS (All Workers)
1900 59.6* 55.0 58.5
1904 57.9 53.6 57.1
1909 56.8 (57.3) 53.1 55.7
1914 55.1 (55.5) 50.1 54.0
1919 50.8 (51.2) 46.1 50.0
1924 51.1* 48.8 48.8
1929 50.6 48.0 48.7
1934 34.4 40.6
1940 37.6 42.5 43.3
1944 44.2 46.9
1947 39.2 42.4 43.4 44.7
1950 38.7 41.1 42.7
1953 38.6 41.5 43.2 44.0
1958 37.8* 40.9 42.0 43.4
1960 41.0 40.9
1963 41.6 43.2 43.2
1968 41.7 41.2 42.0
1970 41.1 40.3
1973 40.6 41.0
1978 41.3* 39.7 39.1
1980 39.8
1988 39.2

Sources: Whaples (1990a), Jones (1963), Owen (1976, 1988), and Greis (1984). The last column is based on the author’s calculations using Coleman and Pencavel’s data from Table 4 (below).
* = these estimates are from one year earlier than the year listed.
(The figures in parentheses in the Census of Manufacturing column are unofficial estimates but are probably more precise, as they better estimate the hours of workers in industries with very long workweeks.)

Hours in Other Industrial Sectors

Table 3 compares the length of the workweek in manufacturing to that in other industries for which there is available information. (Unfortunately, data from the agricultural and service sectors are unavailable until late in this period.) The figures in Table 3 show that the length of the workweek was generally shorter in the other industries — sometimes considerably shorter. For example, in 1900 anthracite coal miners’ workweeks were about forty percent shorter than the average workweek among manufacturing workers. All of the series show an overall downward trend.

Table 3
Estimated Average Weekly Hours Worked, Other Industries

Year Manufacturing Construction Railroads Bituminous Coal Anthracite Coal
1850s about 66 about 66
1870s about 62 about 60
1890 60.0 51.3
1900 59.6 50.3 52.3 42.8 35.8
1910 57.3 45.2 51.5 38.9 43.3
1920 51.2 43.8 46.8 39.3 43.2
1930 50.6 42.9 33.3 37.0
1940 37.6 42.5 27.8 27.2
1955 38.5 37.1 32.4 31.4

Sources: Douglas (1930), Jones (1963), Licht (1983), and Tables 1 and 2.
Note: The manufacturing figures for the 1850s and 1870s are approximations based on averaging numbers from the Weeks and Aldrich reports from Table 1. The early estimates for the railroad industry are also approximations.

Recent Trends by Race and Gender

Some analysts, such as Schor (1992), have argued that the workweek increased substantially in the last half of the twentieth century. Few economists accept this conclusion, arguing that it is based on the use of faulty data (public opinion surveys) and unexplained methods of “correcting” more reliable sources. Schor’s conclusions are contradicted by numerous studies. Table 4 presents Coleman and Pencavel’s (1993a, 1993b) estimates of the average workweek of employed people — disaggregated by race and gender. For all four groups the average length of the workweek has dropped since 1950. Although median weekly hours were virtually constant for men, the upper tail of the hours distribution fell for those with little schooling and rose for the well-educated. Coleman and Pencavel also find that work hours declined for young and older men (especially black men), but changed little for white men in their prime working years. Women with relatively little schooling were working fewer hours in the 1980s than in 1940, while the reverse is true of well-educated women.

Table 4
Estimated Average Weekly Hours Worked, by Race and Gender, 1940-1988

Year White Men Black Men White Women Black Women
1940 44.1 44.5 40.6 42.2
1950 43.4 42.8 41.0 40.3
1960 43.3 40.4 36.8 34.7
1970 43.1 40.2 36.1 35.9
1980 42.9 39.6 35.9 36.5
1988 42.4 39.6 35.5 37.2

Source: Coleman and Pencavel (1993a, 1993b)

Broader Trends in Time Use, 1880 to 2040

In 1880 a typical male household head had very little leisure time — only about 1.8 hours per day over the course of a year. However, as Fogel’s (2000) estimates in Table 5 show, between 1880 and 1995 the amount of work per day fell nearly in half, allowing leisure time to more than triple. Because of the decline in the length of the workweek and the declining portion of a lifetime that is spent in paid work (due largely to lengthening periods of education and retirement) the fraction of the typical American’s lifetime devoted to work has become remarkably small. Based on these trends Fogel estimates that four decades from now less than one-fourth of our discretionary time (time not needed for sleep, meals, and hygiene) will be devoted to paid work — over three-fourths will be available for doing what we wish.

Table 5
Division of the Day for the Average Male Household Head over the Course of a Year, 1880 and 1995

Activity 1880 1995
Sleep 8 8
Meals and hygiene 2 2
Chores 2 2
Travel to and from work 1 1
Work 8.5 4.7
Illness .7 .5
Left over for leisure activities 1.8 5.8

Source: Fogel (2000)

Table 6
Estimated Trend in the Lifetime Distribution of Discretionary Time, 1880-2040

Activity 1880 1995 2040
Lifetime Discretionary Hours 225,900 298,500 321,900
Lifetime Work Hours 182,100 122,400 75,900
Lifetime Leisure Hours 43,800 176,100 246,000

Source: Fogel (2000)
Notes: Discretionary hours exclude hours used for sleep, meals and hygiene. Work hours include paid work, travel to and from work, and household chores.
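
A quick check of the arithmetic behind the “less than one-fourth” projection, using the Table 6 figures (where, per the note, work hours include commuting and household chores), gives the share of lifetime discretionary time devoted to work as

\[
\frac{182{,}100}{225{,}900} \approx 0.81 \;(1880), \qquad
\frac{122{,}400}{298{,}500} \approx 0.41 \;(1995), \qquad
\frac{75{,}900}{321{,}900} \approx 0.24 \;(2040).
\]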

Postwar International Comparisons

While hours of work have decreased slowly in the U.S. since the end of World War II, they have decreased more rapidly in Western Europe. Greis (1984) calculates that annual hours worked per employee in the U.S. fell from 1,908 hours to 1,704 hours between 1950 and 1979, a 10.7 percent decrease. This compares to a 21.8 percent decrease across a group of twelve Western European countries, where the average fell from 2,170 hours to 1,698 hours over the same period. Perhaps the most precise way of measuring work hours is to have individuals fill out diaries on their day-to-day and hour-to-hour time use. Table 7 presents an international comparison of average work hours both inside and outside of the workplace, by adult men and women — averaging those who are employed with those who are not. (Juster and Stafford (1991) caution, however, that making these comparisons requires a good deal of guesswork.) These numbers show a significant drop in total work per week in the U.S. between 1965 and 1981. They also show that total work by men and women is very similar, although it is divided differently. Total work hours in the U.S. were fairly similar to those in Japan, greater than those in Denmark, and less than those in the USSR.
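
Greis’s percentage declines follow directly from the annual-hours figures just cited:

\[
\frac{1{,}908 - 1{,}704}{1{,}908} \approx 10.7\%, \qquad
\frac{2{,}170 - 1{,}698}{2{,}170} \approx 21.8\%.
\]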

Table 7
Weekly Work Time in Four Countries, Based on Time Diaries, 1960s-1980s

Activity US USSR (Pskov)
Men Women Men Women
1965 1981 1965 1981 1965 1981 1965 1981
Total Work 63.1 57.8 60.9 54.4 64.4 65.7 75.3 66.3
Market Work 51.6 44.0 18.9 23.9 54.6 53.8 43.8 39.3
Commuting 4.8 3.5 1.6 2.0 4.9 5.2 3.7 3.4
Housework 11.5 13.8 41.8 30.5 9.8 11.9 31.5 27.0
Activity Japan Denmark
Men Women Men Women
1965 1985 1965 1985 1964 1987 1964 1987
Total Work 60.5 55.5 64.7 55.6 45.4 46.2 43.4 43.9
Market Work 57.7 52.0 33.2 24.6 41.7 33.4 13.3 20.8
Commuting 3.6 4.5 1.0 1.2 n.a. n.a. n.a. n.a.
Housework 2.8 3.5 31.5 31.0 3.7 12.8 30.1 23.1

Source: Juster and Stafford (1991)

The Shorter Hours “Movement” in the U.S.

The Colonial Period

Captain John Smith, after mapping New England’s coast, came away convinced that three days’ work per week would satisfy any settler. Far from becoming a land of leisure, however, the abundant resources of British America and the ideology of its settlers brought forth high levels of work. Many colonial Americans held the opinion that prosperity could be taken as a sign of God’s pleasure with the individual, viewed work as inherently good and saw idleness as the devil’s workshop. Rodgers (1978) argues that this work ethic spread and eventually reigned supreme in colonial America. The ethic was consistent with the American experience, since high returns to effort meant that hard work often yielded significant increases in wealth. In Virginia, authorities also transplanted the Statute of Artificers, which obliged all Englishmen (except the gentry) to engage in productive activity from sunrise to sunset. Likewise, a 1670 Massachusetts law demanded a minimum ten-hour workday, but it is unlikely that these laws had any impact on the behavior of most free workers.

The Revolutionary War Period

Roediger and Foner (1989) contend that the Revolutionary War era brought a series of changes that undermined support for sun-to-sun work. The era’s republican ideology emphasized that workers needed free time, away from work, to participate in democracy. Simultaneously, the development of merchant capitalism meant that there were, for the first time, a significant number of wageworkers. Roediger and Foner argue that reducing labor costs was crucial to the profitability of these workers’ employers, who reduced costs by squeezing more work from their employees — reducing time for meals, drink and rest and sometimes even rigging the workplace’s official clock. Incensed by their employers’ practice of paying a flat daily wage during the long summer shift and resorting to piece rates during short winter days, Philadelphia’s carpenters mounted America’s first ten-hour-day strike in May 1791. (The strike was unsuccessful.)

1820s: The Shorter Hours Movement Begins

Changes in the organization of work, with the continued rise of merchant capitalists, the transition from the artisanal shop to the early factory, and an intensified work pace had become widespread by about 1825. These changes produced the first extensive, aggressive movement among workers for shorter hours, as the ten-hour movement blossomed in New York City, Philadelphia and Boston. Rallying around the ten-hour banner, workers formed the first city-central labor union in the U.S., the first labor newspaper, and the first workingmen’s political party — all in Philadelphia — in the late 1820s.

Early Debates over Shorter Hours

Although the length of the workday is largely an economic decision arrived at by the interaction of the supply and demand for labor, advocates of shorter hours and foes of shorter hours have often argued the issue on moral grounds. In the early 1800s, advocates argued that shorter work hours improved workers’ health, allowed them time for self-improvement and relieved unemployment. Detractors countered that workers would abuse leisure time (especially in saloons) and that long, dedicated hours of work were the path to success, which should not be blocked for the great number of ambitious workers.

1840s: Early Agitation for Government Intervention

When Samuel Slater built the first textile mills in the U.S., “workers labored from sun up to sun down in summer and during the darkness of both morning and evening in the winter. These hours … only attracted attention when they exceeded the common working day of twelve hours,” according to Ware (1931). During the 1830s, an increased work pace, tighter supervision, and the addition of about fifteen minutes to the work day (partly due to the introduction of artificial lighting during winter months), plus the growth of a core of more permanent industrial workers, fueled a campaign for a shorter workweek among mill workers in Lowell, Massachusetts, whose workweek averaged about 74 hours. This agitation was led by Sarah Bagley and the New England Female Labor Reform Association, which, beginning in 1845, petitioned the state legislature to intervene in the determination of hours. The petitions were followed by America’s first-ever examination of labor conditions by a governmental investigating committee. The Massachusetts legislature proved to be very unsympathetic to the workers’ demands, but similar complaints led to the passage of laws in New Hampshire (1847) and Pennsylvania (1848), declaring ten hours to be the legal length of the working day. However, these laws also specified that a contract freely entered into by employee and employer could set any length for the workweek. Hence, these laws had little impact. Legislation passed by the federal government had a more direct, though limited, effect. On March 31, 1840, President Martin Van Buren issued an executive order mandating a ten-hour day for all federal employees engaged in manual work.

1860s: Grand Eight Hours Leagues

As the length of the workweek gradually declined, political agitation for shorter hours seems to have waned for the next two decades. However, immediately after the Civil War reductions in the length of the workweek reemerged as an important issue for organized labor. The new goal was an eight-hour day. Roediger (1986) argues that many of the new ideas about shorter hours grew out of the abolitionists’ critique of slavery — that long hours, like slavery, stunted aggregate demand in the economy. The leading proponent of this idea, Ira Steward, argued that decreasing the length of the workweek would raise the standard of living of workers by raising their desired consumption levels as their leisure expanded, and by ending unemployment. The hub of the newly launched movement was Boston, and Grand Eight Hours Leagues sprang up around the country in 1865 and 1866. The leaders of the movement called the meeting of the first national organization to unite workers of different trades, the National Labor Union, which met in Baltimore in 1866. In response to this movement, eight states adopted general eight-hour laws, but again the laws allowed employer and employee to mutually consent to workdays longer than the “legal day.” Many critics saw these laws and this agitation as a hoax, because few workers actually desired to work only eight hours per day at their original hourly pay rate. The passage of the state laws did foment action by workers — especially in Chicago where parades, a general strike, rioting and martial law ensued. In only a few places did work hours fall after the passage of these laws. Many became disillusioned with the idea of using the government to promote shorter hours and by the late 1860s, efforts to push for a universal eight-hour day had been put on the back burner.

The First Enforceable Hours Laws

Despite this lull in shorter-hours agitation, in 1874, Massachusetts passed the nation’s first enforceable ten-hour law. It covered only female workers and became fully effective by 1879. This legislation was fairly late by European standards. Britain had passed its first effective Factory Act, setting maximum hours for almost half of its very young textile workers, in 1833.

1886: Year of Dashed Hopes

In the early 1880s organized labor in the U.S. was fairly weak. In 1884, the short-lived Federation of Organized Trades and Labor Unions (FOTLU) fired a “shot in the dark.” During its final meeting, before dissolving, the Federation “ordained” May 1, 1886 as the date on which workers would cease working beyond eight hours per day. Meanwhile, the Knights of Labor, which had begun as a secret fraternal society and evolved into a labor union, began to gain strength. It appears that many nonunionized workers, especially the unskilled, came to see in the Knights a chance to obtain a better deal from their employers, perhaps even to obtain the eight-hour day. FOTLU’s call for workers to simply walk off the job after eight hours beginning on May 1, plus the activities of socialist and anarchist labor organizers and politicians, and the apparent strength of the Knights combined to attract members in record numbers. The Knights mushroomed and its new membership demanded that their local leaders support them in attaining the eight-hour day. Many smelled victory in the air — the movement to win the eight-hour day became frenzied and the goal became “almost a religious crusade” (Grob, 1961).

The Knights’ leader, Terence Powderly, thought that the push for a May 1 general strike for eight hours was “rash, short-sighted and lacking in system” and “must prove abortive” (Powderly, 1890). He offered no effective alternative plan but instead tried to block the mass action, issuing a “secret circular” condemning the use of strikes. Powderly reasoned that low incomes forced workmen to accept long hours. Workers didn’t want shorter hours unless their daily pay was maintained, but employers were unwilling and/or unable to offer this. Powderly’s rival, labor leader Samuel Gompers, agreed that “the movement of ’86 did not have the advantage of favorable conditions” (Gompers, 1925). Nelson (1986) points to divisions among workers, which probably had much to do with the failure in 1886 of the drive for the eight-hour day. Some insisted on eight hours with ten hours’ pay, but others were willing to accept eight hours with eight hours’ pay.

Haymarket Square Bombing

The eight-hour push of 1886 was, in Norman Ware’s words, “a flop” (Ware, 1929). Lack of will and organization among workers was undoubtedly important, but its collapse was aided by violence that marred strikes and political rallies in Chicago and Milwaukee. The 1886 drive for eight hours literally blew up in organized labor’s face. At Haymarket Square in Chicago an anarchist bomb killed seven policemen during an eight-hour rally, and in Milwaukee’s Bay View suburb nine strikers were killed as police tried to disperse roving pickets. The public backlash and fear of revolution damned the eight-hour organizers along with the radicals and dampened the drive toward eight hours — although it is estimated that the strikes of May 1886 shortened the workweek for about 200,000 industrial workers, especially in New York City and Cincinnati.

The AFL’s Strategy

After the demise of the Knights of Labor, the American Federation of Labor (AFL) became the strongest labor union in the U.S. It held shorter hours as a high priority. The inside cover of its Proceedings carried two slogans in large type: “Eight hours for work, eight hours for rest, eight hours for what we will” and “Whether you work by the piece or work by the day, decreasing the hours increases the pay.” (The latter slogan was coined by Ira Steward’s wife, Mary.) In the aftermath of 1886, the American Federation of Labor adopted a new strategy of selecting each year one industry in which it would attempt to win the eight-hour day, after laying solid plans, organizing, and building up a strike fund war chest by taxing nonstriking unions. The United Brotherhood of Carpenters and Joiners was selected first and May 1, 1890 was set as a day of national strikes. It is estimated that nearly 100,000 workers gained the eight-hour day as a result of these strikes in 1890. However, other unions turned down the opportunity to follow the carpenters’ example and the tactic was abandoned. Instead, the length of the workweek continued to erode during this period, sometimes as the result of a successful local strike, more often as the result of broader economic forces.

The Spread of Hours Legislation

Massachusetts’ first hours law in 1874 set sixty hours per week as the legal maximum for women; in 1892 this was cut to 58, in 1908 to 56, and in 1911 to 54. By 1900, 26 percent of states had maximum hours laws covering women, children and, in some, adult men (generally only those in hazardous industries). The percentage of states with maximum hours laws climbed to 58 percent in 1910, 76 percent in 1920, and 84 percent in 1930. Steinberg (1982) calculates that the percent of employees covered climbed from 4 percent nationally in 1900, to 7 percent in 1910, and 12 percent in 1920 and 1930. In addition, these laws became more restrictive with the average legal standard falling from a maximum of 59.3 hours per week in 1900 to 56.7 in 1920. According to her calculations, in 1900 about 16 percent of the workers covered by these laws were adult men, 49 percent were adult women and the rest were minors.

Court Rulings

The banner years for maximum hours legislation were right around 1910. This may have been partly a reaction to the Supreme Court’s ruling upholding female-hours legislation in the Muller vs. Oregon case (1908). The Court’s rulings were not always completely consistent during this period, however. In 1898 the Court upheld a maximum eight-hour day for workmen in the hazardous industries of mining and smelting in Utah in Holden vs. Hardy. In Lochner vs. New York (1905), it rejected as unconstitutional New York’s ten-hour day for bakers, which was also adopted (at least nominally) out of concerns for safety. The defendant showed that mortality rates in baking were only slightly above average, and lower than those for many unregulated occupations, arguing that this was special interest legislation, designed to favor unionized bakers. Several state courts, on the other hand, supported laws regulating the hours of men in only marginally hazardous work. By 1917, in Bunting vs. Oregon, the Supreme Court seemingly overturned the logic of the Lochner decision, supporting a state law that required overtime payment for all men working long hours. The general presumption during this period was that the courts would allow regulation of labor concerning women and children, who were thought to be incapable of bargaining on an equal footing with employers and in special need of protection. Men were allowed freedom of contract unless it could be proven that regulating their hours served a higher good for the population at large.

New Arguments about Shorter Hours

During the first decades of the twentieth century, arguments favoring shorter hours moved away from Steward’s line that shorter hours increased pay and reduced unemployment to arguments that shorter hours were good for employers because they made workers more productive. A new cadre of social scientists began to offer evidence that long hours produced health-threatening, productivity-reducing fatigue. This line of reasoning, advanced in the court brief of Louis Brandeis and Josephine Goldmark, was crucial in the Supreme Court’s decision to support state regulation of women’s hours in Muller vs. Oregon. Goldmark’s book, Fatigue and Efficiency (1912) was a landmark. In addition, data relating to hours and output among British and American war workers during World War I helped convince some that long hours could be counterproductive. Businessmen, however, frequently attacked the shorter hours movement as merely a ploy to raise wages, since workers were generally willing to work overtime at higher wage rates.

Federal Legislation in the 1910s

In 1912 the Federal Public Works Act was passed, which provided that every contract to which the U.S. government was a party must contain an eight-hour day clause. Three years later LaFollette’s Bill established maximum hours for maritime workers. These were preludes to the most important shorter-hours law enacted by Congress during this period, the Adamson Act of 1916. Passed to counter a threatened nationwide railroad strike, it set eight hours as the basic workday for rail workers and required higher overtime pay for longer hours.

World War I and Its Aftermath

Labor markets became very tight during World War I as the demand for workers soared and the unemployment rate plunged. These forces put workers in a strong bargaining position, which they used to obtain shorter work schedules. The move to shorter hours was also pushed by the federal government, which gave unprecedented support to unionization. The federal government began to intervene in labor disputes for the first time, and the National War Labor Board “almost invariably awarded the basic eight-hour day when the question of hours was at issue” in labor disputes (Cahill, 1932). At the end of the war everyone wondered if organized labor would maintain its newfound power and the crucial test case was the steel industry. Blast furnace workers generally put in 84-hour workweeks. These abnormally long hours were the subject of much denunciation and a major issue in a strike that began in September 1919. The strike failed (and organized labor’s power receded during the 1920s), but four years later US Steel reduced its workday from twelve to eight hours. The move came after much arm-twisting by President Harding but its timing may be explained by immigration restrictions and the loss of immigrant workers who were willing to accept such long hours (Shiells, 1990).

The Move to a Five-day Workweek

During the 1920s agitation for shorter workdays largely disappeared, now that the workweek had fallen to about 50 hours. However, pressure arose to grant half-holidays on Saturday or the full Saturday off — especially in industries whose workers were predominantly Jewish. By 1927 at least 262 large establishments had adopted the five-day week, while only 32 had done so by 1920. The most notable action was Henry Ford’s decision to adopt the five-day week in 1926. Ford employed more than half of the nation’s approximately 400,000 workers with five-day weeks. However, Ford’s motives were questioned by many employers who argued that productivity gains from reducing hours ceased beyond about forty-eight hours per week. Even the reformist American Labor Legislation Review greeted the call for a five-day workweek with lukewarm interest.

Changing Attitudes in the 1920s

Hunnicutt (1988) argues that during the 1920s businessmen and economists began to see shorter hours as a threat to future economic growth. With the development of advertising — the “gospel of consumption” — a new vision of progress was proposed to American workers. It replaced the goal of leisure time with a list of things to buy and business began to persuade workers that more work brought more tangible rewards. Many workers began to oppose further decreases in the length of the workweek. Hunnicutt concludes that a new work ethic arose as Americans threw off the psychology of scarcity for one of abundance.

Hours’ Reduction during the Great Depression

Then the Great Depression hit the American economy. By 1932 about half of American employers had shortened hours. Rather than slash workers’ real wages, employers opted to lay off many workers (the unemployment rate hit 25 percent) and tried to protect the ones they kept on by sharing the remaining work among them. President Hoover’s Commission for Work Sharing pushed voluntary hours reductions and estimated that they had saved three to five million jobs. Major employers like Sears, GM, and Standard Oil scaled down their workweeks and Kellogg’s and the Akron tire industry pioneered the six-hour day. Amid these developments, the AFL called for a federally-mandated thirty-hour workweek.

The Black-Connery 30-Hours Bill and the NIRA

The movement for shorter hours as a depression-fighting work-sharing measure built such a seemingly irresistible momentum that by 1933 observers were predicting that the “30-hour week was within a month of becoming federal law” (Hunnicutt, 1988). During the period after the 1932 election but before Franklin Roosevelt’s inauguration, Congressional hearings on thirty hours began, and less than one month into FDR’s first term, the Senate passed, 53 to 30, a thirty-hour bill authored by Hugo Black. The bill was sponsored in the House by William Connery. Roosevelt originally supported the Black-Connery proposals, but soon backed off, uneasy with a provision forbidding importation of goods produced by workers whose weeks were longer than thirty hours, and convinced by arguments of business that trying to legislate fewer hours might have disastrous results. Instead, FDR backed the National Industrial Recovery Act (NIRA). Hunnicutt argues that an implicit deal was struck in the NIRA. Labor leaders were persuaded by NIRA Section 7a’s provisions — which guaranteed union organization and collective bargaining — to support the NIRA rather than the Black-Connery Thirty-Hour Bill. Business, with the threat of thirty hours hanging over its head, fell raggedly into line. (Most historians cite other factors as the key to the NIRA’s passage. See Barbara Alexander’s article on the NIRA in this encyclopedia.) When specific industry codes were drawn up by the NIRA-created National Recovery Administration (NRA), shorter hours were deemphasized. Despite a plan by NRA Administrator Hugh Johnson to make blanket provisions for a thirty-five hour workweek in all industry codes, by late August 1933, the momentum toward the thirty-hour week had dissipated. About half of employees covered by NRA codes had their hours set at forty per week and nearly 40 percent had workweeks longer than forty hours.

The FLSA: Federal Overtime Law

Hunnicutt argues that the entire New Deal can be seen as an attempt to keep shorter-hours advocates at bay. After the Supreme Court struck down the NRA, Roosevelt responded to continued demands for thirty hours with the Works Progress Administration, the Wagner Act, Social Security, and, finally, the Fair Labor Standards Act, which set a federal minimum wage and decreed that overtime beyond forty hours per week would be paid at one-and-a-half times the base rate in covered industries.

The Demise of the Shorter Hours’ Movement

As the Great Depression ended, average weekly work hours slowly climbed from their low reached in 1934. During World War II hours reached a level almost as high as at the end of World War I. With the postwar return of weekly work hours to the forty-hour level the shorter hours movement effectively ended. Occasionally organized labor’s leaders announced that they would renew the push for shorter hours, but they found that most workers didn’t desire a shorter workweek.

The Case of Kellogg’s

Offsetting isolated examples of hours reductions after World War II, there were noteworthy cases of backsliding. Hunnicutt (1996) has studied the case of Kellogg’s in great detail. With the end of the war in 1946, 87% of women and 71% of men working at Kellogg’s voted to return to the six-hour day. Over the course of the next decade, however, the tide turned. By 1957 most departments had opted to switch to 8-hour shifts, so that only about one-quarter of the work force, mostly women, retained a six-hour shift. Finally, in 1985, the last department voted to adopt an 8-hour workday. Workers, especially male workers, began to favor additional money more than the extra two hours per day of free time. In interviews they explained that they needed the extra money to buy a wide range of consumer items and to keep up with the neighbors. Several men told about the friction that resulted when men spent too much time around the house: “The wives didn’t like the men underfoot all day.” “The wife always found something for me to do if I hung around.” “We got into a lot of fights.” During the 1950s, the threat of unemployment evaporated and the moral condemnation for being a “work hog” no longer made sense. In addition, the rise of quasi-fixed employment costs (such as health insurance) induced management to push workers toward a longer workday.

The Current Situation

As the twentieth century ended there was nothing resembling a shorter hours “movement.” The length of the workweek continues to fall for most groups — but at a glacial pace. Some Americans complain about a lack of free time but the vast majority seem content with an average workweek of roughly forty hours — channeling almost all of their growing wages into higher incomes rather than increased leisure time.

Causes of the Decline in the Length of the Workweek

Supply, Demand and Hours of Work

The length of the workweek, like other labor market outcomes, is determined by the interaction of the supply and demand for labor. Employers are torn by conflicting pressures. Holding everything else constant, they would like employees to work long hours because this means that they can utilize their equipment more fully and offset any fixed costs from hiring each worker (such as the cost of health insurance — common today, but not a consideration a century ago). On the other hand, longer hours can bring reduced productivity due to worker fatigue and can bring worker demands for higher hourly wages to compensate for putting in long hours. If employers set the workweek too long, workers may quit and few replacements will be willing to work for them at a competitive wage rate. Thus, workers implicitly choose among a variety of jobs — some offering shorter hours and lower earnings, others offering longer hours and higher earnings.

Economic Growth and the Long-Term Reduction of Work Hours

Historically employers and employees often agreed on very long workweeks because the economy was not very productive (by today’s standards) and people had to work long hours to earn enough money to feed, clothe and house their families. The long-term decline in the length of the workweek, in this view, has primarily been due to increased economic productivity, which has yielded higher wages for workers. Workers responded to this rise in potential income by “buying” more leisure time, as well as by buying more goods and services. In a recent survey, a sizeable majority of economic historians agreed with this view. Over eighty percent accepted the proposition that “the reduction in the length of the workweek in American manufacturing before the Great Depression was primarily due to economic growth and the increased wages it brought” (Whaples, 1995). Other broad forces probably played only a secondary role. For example, roughly two-thirds of economic historians surveyed rejected the proposition that the efforts of labor unions were the primary cause of the drop in work hours before the Great Depression.

Winning the Eight-Hour Day in the Era of World War I

The swift reduction of the workweek in the period around World War I has been extensively analyzed by Whaples (1990b). His findings support the consensus that economic growth was the key to reduced work hours. Whaples links factors such as wages, labor legislation, union power, ethnicity, city size, leisure opportunities, age structure, wealth and homeownership, health, education, alternative employment opportunities, industrial concentration, seasonality of employment, and technological considerations to changes in the average workweek in 274 cities and 118 industries. He finds that the rapid economic expansion of the World War I period, which pushed up real wages by more than 18 percent between 1914 and 1919, explains about half of the drop in the length of the workweek. The reduction of immigration during the war was important, as it deprived employers of a group of workers who were willing to put in long hours, explaining about one-fifth of the hours decline. The rapid electrification of manufacturing seems also to have played an important role in reducing the workweek. Increased unionization explains about one-seventh of the reduction, and federal and state legislation and policies that mandated reduced workweeks also had a noticeable role.
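
Read literally, the shares Whaples attributes to these factors account for the bulk of the decline (each estimate is approximate, so the sum is only indicative):

\[
\tfrac{1}{2} + \tfrac{1}{5} + \tfrac{1}{7} \approx 0.84,
\]

leaving on the order of one-sixth of the decline to electrification, legislation, and other factors.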

Cross-sectional Patterns from 1919

In 1919 the average workweek varied tremendously, emphasizing the point that not all workers desired the same workweek. The workweek exceeded 69 hours in the iron blast furnace, cottonseed oil, and sugar beet industries, but fell below 45 hours in industries such as hats and caps, fur goods, and women’s clothing. Cities’ averages also differed dramatically. In a few Midwestern steel mill towns average workweeks exceeded 60 hours. In a wide range of low-wage Southern cities they reached the high 50s, but in high-wage Western ports, like Seattle, the workweek fell below 45 hours.

Whaples (1990a) finds that among the most important city-level determinants of the workweek during this period were the availability of a pool of agricultural workers, the capital-labor ratio, horsepower per worker, and the amount of employment in large establishments. Hours rose as each of these increased. Eastern European immigrants worked significantly longer than others, as did people in industries whose output varied considerably from season to season. High unionization and strike levels reduced hours to a small degree. The average female employee worked about six and a half fewer hours per week in 1919 than did the average male employee. In city-level comparisons, state maximum hours laws appear to have had little effect on average work hours, once the influences of other factors have been taken into account. One possibility is that these laws were passed only after economic forces lowered the length of the workweek. Overall, in cities where wages were one percent higher, hours were about 0.05 to 0.13 percent lower. Again, this suggests that during the era of declining hours, workers were willing to use higher wages to “buy” shorter hours.
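
The wage figure just cited is an elasticity of hours with respect to wages of roughly -0.13 to -0.05. As an illustrative calculation (the worked numbers are not from the source), a city with wages 10 percent above average, other factors held constant, would be expected to have a workweek about 0.5 to 1.3 percent shorter; on the 1914 manufacturing average of about 55 hours (Table 2), that amounts to

\[
0.10 \times (0.05 \text{ to } 0.13) \times 55 \approx 0.3 \text{ to } 0.7 \text{ hours per week.}
\]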

Annotated Bibliography

Perhaps the most comprehensive survey of the shorter hours movement in the U.S. is David Roediger and Philip Foner’s Our Own Time: A History of American Labor and the Working Day (1989). It contends that “the length of the working day has been the central issue for the American labor movement during its most vigorous periods of activity, uniting workers along lines of craft, gender, and ethnicity.” Critics argue that its central premise is flawed because workers have often been divided about the optimal length of the workweek. It explains the point of view of organized labor and recounts numerous historically important events and arguments, but does not attempt to examine in detail the broader economic forces that determined the length of the workweek. An earlier useful comprehensive work is Marion Cahill’s Shorter Hours: A Study of the Movement since the Civil War (1932).

Benjamin Hunnicutt’s Work Without End: Abandoning Shorter Hours for the Right to Work (1988) focuses on the period from 1920 to 1940 and traces the political, intellectual, and social “dialogues” that changed the American concept of progress from dreams of more leisure to an “obsession” with the importance of work and wage-earning. This work’s detailed analysis and insights are valuable, but it draws many of its inferences from what intellectuals said about shorter hours, rather than spending time on the actual decision makers — workers and employers. Hunnicutt’s Kellogg’s Six-Hour Day (1996), is important because it does exactly this — interviewing employees and examining the motives and decisions of a prominent employer. Unfortunately, it shows that one must carefully interpret what workers say on the subject, as they are prone to reinterpret their own pasts so that their choices can be more readily rationalized. (See EH.NET’s review: http://eh.net/book_reviews/kelloggs-six-hour-day/.)

Economists have given surprisingly little attention to the determinants of the workweek. The most comprehensive treatment is Robert Whaples’ “The Shortening of the American Work Week” (1990), which surveys estimates of the length of the workweek, the shorter hours movement, and economic theories about the length of the workweek. Its core is an extensive statistical examination of the determinants of the workweek in the period around World War I.

References

Atack, Jeremy and Fred Bateman. “How Long Was the Workday in 1880?” Journal of Economic History 52, no. 1 (1992): 129-160.

Cahill, Marion Cotter. Shorter Hours: A Study of the Movement since the Civil War. New York: Columbia University Press, 1932.

Carr, Lois Green. “Emigration and the Standard of Living: The Seventeenth Century Chesapeake.” Journal of Economic History 52, no. 2 (1992): 271-291.

Coleman, Mary T. and John Pencavel. “Changes in Work Hours of Male Employees, 1940-1988.” Industrial and Labor Relations Review 46, no. 2 (1993a): 262-283.

Coleman, Mary T. and John Pencavel. “Trends in Market Work Behavior of Women since 1940.” Industrial and Labor Relations Review 46, no. 4 (1993b): 653-676.

Douglas, Paul. Real Wages in the United States, 1890-1926. Boston: Houghton, 1930.

Fogel, Robert. The Fourth Great Awakening and the Future of Egalitarianism. Chicago: University of Chicago Press, 2000.

Fogel, Robert and Stanley Engerman. Time on the Cross: The Economics of American Negro Slavery. Boston: Little, Brown, 1974.

Gallman, Robert. “The Agricultural Sector and the Pace of Economic Growth: U.S. Experience in the Nineteenth Century.” In Essays in Nineteenth-Century Economic History: The Old Northwest, edited by David Klingaman and Richard Vedder. Athens, OH: Ohio University Press, 1975.

Goldmark, Josephine. Fatigue and Efficiency. New York: Charities Publication Committee, 1912.

Gompers, Samuel. Seventy Years of Life and Labor: An Autobiography. New York: Dutton, 1925.

Greis, Theresa Diss. The Decline of Annual Hours Worked in the United States, since 1947. Manpower and Human Resources Studies, no. 10, Wharton School, University of Pennsylvania, 1984.

Grob, Gerald. Workers and Utopia: A Study of Ideological Conflict in the American Labor Movement, 1865-1900. Evanston: Northwestern University Press, 1961.

Hunnicutt, Benjamin Kline. Work Without End: Abandoning Shorter Hours for the Right to Work. Philadelphia: Temple University Press, 1988.

Hunnicutt, Benjamin Kline. Kellogg’s Six-Hour Day. Philadelphia: Temple University Press, 1996.

Jones, Ethel. “New Estimates of Hours of Work per Week and Hourly Earnings, 1900-1957.” Review of Economics and Statistics 45, no. 4 (1963): 374-385.

Juster, F. Thomas and Frank P. Stafford. “The Allocation of Time: Empirical Findings, Behavioral Models, and Problems of Measurement.” Journal of Economic Literature 29, no. 2 (1991): 471-522.

Licht, Walter. Working for the Railroad: The Organization of Work in the Nineteenth Century. Princeton: Princeton University Press, 1983.

Margo, Robert. “The Labor Force in the Nineteenth Century.” In The Cambridge Economic History of the United States, Volume II, The Long Nineteenth Century, edited by Stanley Engerman and Robert Gallman, 207-243. New York: Cambridge University Press, 2000.

Nelson, Bruce. “‘We Can’t Get Them to Do Aggressive Work': Chicago’s Anarchists and the Eight-Hour Movement.” International Labor and Working Class History 29 (1986).

Ng, Kenneth and Nancy Virts. “The Value of Freedom.” Journal of Economic History 49, no. 4 (1989): 958-965.

Owen, John. “Workweeks and Leisure: An Analysis of Trends, 1948-1975.” Monthly Labor Review 99 (1976).

Owen, John. “Work-time Reduction in the United States and Western Europe.” Monthly Labor Review 111 (1988).

Powderly, Terence. Thirty Years of Labor, 1859-1889. Columbus: Excelsior, 1890.

Ransom, Roger and Richard Sutch. One Kind of Freedom: The Economic Consequences of Emancipation. New York: Cambridge University Press, 1977.

Rodgers, Daniel. The Work Ethic in Industrial America, 1850-1920. Chicago: University of Chicago Press, 1978.

Roediger, David. “Ira Steward and the Antislavery Origins of American Eight-Hour Theory.” Labor History 27 (1986).

Roediger, David and Philip Foner. Our Own Time: A History of American Labor and the Working Day. New York: Verso, 1989.

Schor, Juliet B. The Overworked American: The Unexpected Decline in Leisure. New York: Basic Books, 1992.

Shiells, Martha Ellen. “Collective Choice of Working Conditions: Hours in British and U.S. Iron and Steel, 1890-1923.” Journal of Economic History 50, no. 2 (1990): 379-392.

Steinberg, Ronnie. Wages and Hours: Labor and Reform in Twentieth-Century America. New Brunswick, NJ: Rutgers University Press, 1982.

United States, Department of Interior, Census Office. Report on the Statistics of Wages in Manufacturing Industries, by Joseph Weeks, 1880 Census, Vol. 20. Washington: GPO, 1883.

United States Senate. Senate Report 1394, Fifty-Second Congress, Second Session. “Wholesale Prices, Wages, and Transportation.” Washington: GPO, 1893.

Ware, Caroline. The Early New England Cotton Manufacture: A Study of Industrial Beginnings. Boston: Houghton-Mifflin, 1931.

Ware, Norman. The Labor Movement in the United States, 1860-1895. New York: Appleton, 1929.

Weiss, Thomas and Lee Craig. “Agricultural Productivity Growth during the Decade of the Civil War.” Journal of Economic History 53, no. 3 (1993): 527-548.

Whaples, Robert. “The Shortening of the American Work Week: An Economic and Historical Analysis of Its Context, Causes, and Consequences.” Ph.D. dissertation, University of Pennsylvania, 1990a.

Whaples, Robert. “Winning the Eight-Hour Day, 1909-1919.” Journal of Economic History 50, no. 2 (1990b): 393-406.

Whaples, Robert. “Where Is There Consensus Among American Economic Historians? The Results of a Survey on Forty Propositions.” Journal of Economic History 55, no. 1 (1995): 139-154.

Citation: Whaples, Robert. “Hours of Work in U.S. History”. EH.Net Encyclopedia, edited by Robert Whaples. August 14, 2001. URL http://eh.net/encyclopedia/hours-of-work-in-u-s-history/

The Glorious Revolution of 1688

Stephen Quinn, Texas Christian University

In the Glorious Revolution of 1688, William of Orange took the English throne from James II. The event brought a permanent realignment of power within the English constitution. The new co-monarchy of King William III and Queen Mary II accepted more constraints from Parliament than previous monarchs had, and the new constitution created the expectation that future monarchs would also remain constrained by Parliament. The new balance of power between parliament and crown made the promises of the English government more credible, and credibility allowed the government to reorganize its finances through a collection of changes called the Financial Revolution. A more contentious argument is that the constitutional changes made property rights more secure and thus promoted economic development.

Historical Overview

Tension between king and parliament ran deep throughout the seventeenth century. In the 1640s, the dispute turned into civil war. The loser, Charles I, was beheaded in 1649; his sons, Charles and James, fled to France; and the victorious Oliver Cromwell ruled England in the 1650s. Cromwell’s death in 1658 created a political vacuum, so Parliament invited Charles I’s sons back from exile, and the English monarchy was restored with the coronation of Charles II in 1660.

Tensions after the Restoration

The Restoration, however, did not settle the fundamental questions of power between king and Parliament. Indeed, exile had exposed Charles I’s sons to the strong monarchical methods of Louis XIV. Charles and James returned to Britain with expectations of an absolute monarchy justified by the Divine Right of Kings, so tensions continued during the reigns of Charles II (1660-1685) and his brother James II (1685-88). Table 1 lists many of the tensions and the positions favored by each side. The compromise struck during the Restoration was that Charles II would control his succession, that he would control his judiciary, and that he would have the power to collect traditional taxes. In exchange, Charles II would remain Protestant and the imposition of additional taxes would require Parliament’s approval.

Table 1

Issues Separating Crown and Parliament, 1660-1688

Issue | King’s Favored Position | Parliament’s Favored Position
Constitution | Absolute Royal Power (King above Law) | Constrained Royal Power (King within Law)
Religion | Catholic | Protestant
Ally | France | Holland
Enemy | Holland | France
Inter-Branch Checks | Royal right to control succession (Parliamentary approval NOT required) | Parliament’s right to meet (Royal summons NOT required)
Judiciary | Subject to Royal Punishment | Subject to Parliamentary Impeachment
Ordinary Revenue | Royal authority sufficient to impose and collect traditional taxes | Parliamentary authority necessary to impose and collect traditional taxes
Extraordinary Revenue | Royal authority sufficient to impose and collect new taxes | Parliamentary authority necessary to impose and collect new taxes
Appropriation | Complete royal control over expenditures | Parliamentary audit or even appropriation

In practice, authority over additional taxation was how Parliament constrained Charles II. Charles brought England into war against Protestant Holland (1665-67) with the support of extra taxes authorized by Parliament. In the years following that war, however, the extra funding from Parliament ceased, but Charles II’s borrowing and spending did not. By 1671, all his income was committed to regular expenses and paying interest on his debts. Parliament would not authorize additional funds, so Charles II was fiscally shackled.

Treaty of Dover

To regain fiscal autonomy and subvert Parliament, Charles II signed the secret Treaty of Dover with Louis XIV in 1670. Charles agreed that England would join France in war against Holland and that he would publicly convert to Catholicism. In return, Charles received cash from France and the prospect of victory spoils that would solve his debt problem. The treaty, however, threatened the Anglican Church, contradicted Charles II’s stated policy of support for Protestant Holland, and provided a source of revenue independent of Parliament.

Moreover, to free the money needed to launch his scheme, Charles stopped servicing many of his debts in an act called the Stop of the Exchequer, and, in Machiavellian fashion, Charles isolated a few bankers to take the loss (Roseveare 1991). The gamble, however, was lost when the English Navy failed to defeat the Dutch in 1672. Charles then avoided a break with Parliament by retreating from Catholicism.

James II

Parliament, however, was also unable to gain the upper hand. From 1679 to 1681, Protestant nobles had Parliament pass bills excluding Charles II’s Catholic brother James from succession to the throne. The political turmoil of the Exclusion Crisis created the Whig faction favoring exclusion and the Tory counter-faction opposing exclusion. Even with a majority in Commons, however, the Whigs could not force a reworking of the constitution in their favor because Charles responded by dissolving three Parliaments without giving his consent to the bills.

As a consequence of the stalemate, Charles did not summon Parliament over the final years of his life, and James did succeed to the throne in 1685. Unlike the pragmatic Charles, James II boldly pushed for all of his goals. On the religious front, the Catholic James upset his Anglican allies by threatening the preeminence of the Anglican Church (Jones 1978, 238). He also declared that his son and heir would be raised Catholic. On the military front, James expanded the standing army and promoted Catholic officers. On the financial front, he attempted to subvert Parliament by packing it with his loyalists. With a packed Parliament, “the king and his ministers could have achieved practical and permanent independence by obtaining a larger revenue” (Jones 1978, p. 243). By 1688, Tories, worried about the Church of England, and Whigs, worried about the independence of Parliament, agreed that they needed to unite against James II.

William of Orange

The solution became Mary Stuart and her husband, William of Orange. English factions invited Mary and William to seize the throne because the couple was Protestant and Mary was the daughter of James II. The situation, however, had additional drama because William was also the military commander of the Dutch Republic, and, in 1688, the Dutch were in a difficult military position. Holland was facing war with France (the Nine Years War, 1688-97), and the possibility was growing that James II would bring England into the war on the side of France. James was nearing open war with his son-in-law William.

For William and Holland, accepting the invitation and invading England was a bold gamble, but the success could turn England from a threat to an ally. William landed in England with a Dutch army on November 5, 1688 (Israel 1991). Defections in James II’s army followed before battle was joined, and William allowed James to flee to France. Parliament took the flight of James II as abdication and the co-reign of William III and Mary II officially replaced him on February 13, 1689. Although Mary had the claim to the throne as James II’s daughter, William demanded to be made King and Mary wanted William to have that power. Authority was simplified when Mary’s death in 1694 left William the sole monarch.

New Constitution

The deal struck between Parliament and the royal couple in 1688-89 was that Parliament would support the war against France, while William and Mary would accept new constraints on their authority. The new constitution reflected the relative weakness of William’s bargaining position more than any strength in Parliament’s position. Parliament feared the return of James, but William very much needed England’s willing support in the war against France because the costs would be extraordinary and William would be focused on military command instead of political wrangling.

The initial constitutional settlement was worked out in 1689 in the English Bill of Rights, the Toleration Act, and the Mutiny Act that collectively committed the monarchs to respect Parliament and Parliament’s laws. Fiscal power was settled over the 1690s as Parliament stopped granting the monarchs the authority to collect taxes for life. Instead, Parliament began regular re-authorization of all taxes, Parliament began to specify how new revenue authorizations could be spent, Parliament began to audit how revenue was spent, and Parliament diverted some funds entirely from the king’s control (Dickson 1967: 48-73). By the end of the war in 1697, the new fiscal powers of Parliament were largely in place.

Constitutional Credibility

The financial and economic importance of the arrangement between William and Mary and Parliament was that the commitments embodied in the constitutional monarchy of the Glorious Revolution were more credible than the commitments under the Restoration constitution (North and Weingast 1989). Essential to the argument is what economists mean by the term credible. If a constitution is viewed as a deal between Parliament and the Crown, then credibility means how believable it is today that Parliament and the king will choose to honor their promises tomorrow. Credibility does not ask whether Charles II reneged on a promise; rather, credibility asks if people expected Charles to renege.

One can represent the situation by drawing a decision tree that shows the future choices determining credibility. For example, the decision tree in Figure 1 contains the elements determining the credibility of Charles II’s honoring the Restoration constitution of 1660. Going forward in time from 1660 (left to right), the critical decision is whether Charles II will honor the constitution or eventually renege. The future decision by Charles, however, will depend on his estimation of benefits of becoming an absolute monarch versus the cost of failure and the chances he assigns to each. Determining credibility in 1660 requires working backwards (right to left). If one thinks Charles II will risk civil war to become an absolute monarch, then one would expect Charles II to renege on the constitution, and therefore the constitution lacks credibility despite what Charles II may promise in 1660. In contrast, if one expects Charles II to avoid civil war, then one would expect Charles to choose to honor the constitution, so the Restoration constitution would be credible.

Figure 1. Restoration of 1660 Decision Tree
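
The backward-induction logic of Figure 1 can be summarized as a simple expected-value comparison (a schematic sketch; the notation is illustrative, not Quinn’s). Let p be the probability the king attaches to a successful bid for absolute power, V_abs the payoff of absolutism, V_war the payoff of losing a civil war, and V_const the payoff of living within the constitution. The 1660 promise is credible only if reneging does not pay in expectation:

\[
p \, V_{\text{abs}} + (1 - p) \, V_{\text{war}} \;\le\; V_{\text{const}}.
\]

In this notation, the point made below about Charles II’s escape from the 1672 crisis amounts to replacing V_war with the value of a return to the status quo, which raises the left-hand side and undermines credibility; the Glorious Revolution restored the threat of deposition and pushed the expected payoff of reneging back down.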

A difficulty with credibility is foreseeing future options. With hindsight, we know that Charles II did attempt to break the Restoration constitution in 1670-72. When his war against Holland failed, he repaired relations with Parliament and avoided civil war, so Charles managed something not portrayed in Figure 1. He replaced the outcome of civil war in the decision tree with the outcome of a return to the status quo. The consequence of removing the threat of civil war, however, was to destroy credibility in the king’s commitment to the constitution. If James II believed he inherited the options created by his brother, then James II’s 1685 commitment to the Restoration constitution lacked credibility because the worst that would happen to James was a return to the status quo.

So why would the Glorious Revolution constitution be more credible than the Restoration constitution challenged by both Charles II and James II? William was very unlikely to become Catholic or pro-French, which eliminated many tensions. Also, William very much needed Parliament’s support for his war against France; however, the change in credibility argued by North and Weingast (1989) looks past William’s reign, so it also requires confidence that William’s successors would abide by the constitution. A source of long-run confidence was that the Glorious Revolution reasserted the risk of a monarch losing his throne. William III’s decision tree in 1689 again looked like Charles II’s in 1660, and Parliament’s threat to remove an offending monarch was becoming credible. The seventeenth century had now seen Parliament remove two of the four Stuart monarchs, and the second displacement in 1688 was much easier than the wars that ended the reign of Charles I in 1649.

Another lasting change that made the new constitution more credible than the old constitution was that William and his successors were more constrained in fiscal matters. Parliament’s growing ‘power of the purse’ gave the king less room to mount a constitutional challenge. Moreover, Parliament’s fiscal control increased over time because the new constitution favored Parliament in the constitutional renegotiations that accompanied each succeeding monarch.

As a result, the Glorious Revolution constitution made credible the enduring ascendancy of Parliament. In terms of the king, the new constitution increased the credibility of the proposition that kings would not usurp Parliament.

Fiscal Credibility

The second credibility story of the Glorious Revolution was that the increased credibility of the government’s constitutional structure translated into an increased credibility for the government’s commitments. When acting together, the king and Parliament retained the power to default on debt, seize property, or change rules; so why would the credibility of the constitution create confidence in a government’s promises to the public?

A king who lives within the constitution has less desire to renege on his commitments. Recall that Charles II defaulted on his debts in an attempt to subvert the constitution, and, in contrast, Parliament after the Glorious Revolution generously financed wars for monarchs who abided by the constitution. An irony of the Glorious Revolution is that monarchs who accepted constitutional constraints gained more resources than their absolutist forebears.

Still, should a monarch want his government to renege, Parliament would not always agree, and a stable constitution assures a Parliamentary veto. The two houses of Parliament, Commons and Lords, create additional veto opportunities, and the chances of a policy change decrease with more veto opportunities when the king and the two houses have different interests (Weingast 1997).

Another aspect of Parliament is the role of political parties. For veto opportunities to block change, opponents need only to control one veto, and here the coalition aspect of parties was important. For example, the Whig coalition combined dissenting Protestants and moneyed interests, so each could rely on mutual support through the Whig party to block government action against either. Cross-issue bargaining between factions creates a cohesive coalition on multiple issues (Stasavage 2002).

An additional reason for Parliament’s credibility was reputation. As a deterrent against violating commitments today, reputation relies on penalties felt tomorrow, so it often does not deter those overly focused on the present; a desperate king is a common example. As collective bodies of indefinite life, however, Parliament and political parties have longer time horizons than an individual, so reputation has a better chance of fostering credibility.

A measure of fiscal credibility is the risk premium that the market puts on government debt. During the Nine Years War (1688-97), government debt carried a risk premium of 4 percent over private debt, but that premium disappeared and became a small discount in the years 1698 to 1705 (Quinn 2001: 610). The drop in rates on government debt marks a substantial increase in the market’s confidence in the government after the Treaty of Ryswick ended the Nine Years War in 1697 and left William III and the new constitution intact. A related measure of confidence was the market price of stock in companies like the Bank of England and the East India Company. Because those companies were created by Parliamentary authorization and held large quantities of government debt, changes in confidence were reflected in changes in their stock prices. Again, the Treaty of Ryswick greatly increased stock prices, confirming a substantial increase in the credibility of the government (Wells and Wills 2000: 434). In contrast, later Jacobite threats, such as the invasion of Scotland by James II’s son ‘the Pretender’ in 1708, had negative but largely transitory effects on share prices.

Financial Consequences

The fiscal credibility of the English government created by the Glorious Revolution unleashed a revolution in public finance. The most prominent element was the introduction of long-run borrowing by the government, because such borrowing absolutely relied on the government’s fiscal credibility. To create credible long-run debt, Parliament took responsibility for the debt, and Parliamentary-funded debt became the National Debt instead of just the king’s debt. To bolster credibility, Parliament committed future tax revenues to servicing the debts and introduced new taxes as needed (Dickson 1967, Brewer 1988). Credible government debt formed the basis of the Bank of England in 1694 and the core of the London stock market. The combination of these changes has been called the Financial Revolution and was essential for Britain’s emergence as a Great Power in the eighteenth century (Neal 2000).

While the Glorious Revolution was critical to the Financial Revolution in England, the follow-up assertion in North and Weingast (1989) that the Glorious Revolution increased the security of property rights in general, and so spurred economic growth, remains an open question. A difficulty is how to test the proposition. An increase in the credibility of property rights might cause interest rates to decrease because people become willing to save more; however, rates based on English property rentals show no effect from the Glorious Revolution, and the rates of one London banker actually increased after the Glorious Revolution (Clark 1996, Quinn 2001). In contrast, high interest rates could indicate that the Glorious Revolution increased entrepreneurship and the demand for investment. Unfortunately, high rates could also mean that the expansion of government borrowing permitted by the Financial Revolution crowded out investment. North and Weingast (1989) point to a general expansion of financial intermediation, which is supported by studies like Carlos, Key, and Dupree (1998) that find the secondary market for Royal African Company and Hudson’s Bay Company stocks became busier in the 1690s. Distinguishing between crowding out and increased demand for investment, however, relies on establishing whether the overall quantity of business investment changed, and that remains unresolved because of the difficulty of constructing such an aggregate measure. The potential linkages between the credibility created by the Glorious Revolution and economic development thus remain an open question.

References:

Brewer, John. The Sinews of Power. Cambridge: Harvard University Press, 1988.

Carlos, Ann M., Jennifer Key, and Jill L. Dupree. “Learning and the Creation of Stock-Market Institutions: Evidence from the Royal African and Hudson’s Bay Companies, 1670-1700.” Journal of Economic History 58, no. 2 (1998): 318-44.

Clark, Gregory. “The Political Foundations of Modern Economic Growth: England, 1540-1800.” Journal of Interdisciplinary History 55 (1996): 563-87.

Dickson, Peter. The Financial Revolution in England. New York: St. Martin’s, 1967.

Israel, Jonathan. “The Dutch Role in the Glorious Revolution.” In The Anglo-Dutch Moment, edited by Jonathan Israel, 103-62. Cambridge: Cambridge University Press, 1991.

Jones, James. Country and Court: England, 1658-1714. Cambridge: Harvard University Press, 1978.

Neal, Larry. “How it All Began: the Monetary and Financial Architecture of Europe during the First Global Capital Markets, 1648-1815.” Financial History Review 7 (2000): 117-40.

North, Douglass, and Barry Weingast. “Constitutions and Commitment: The Evolution of Institutions Governing Public Choice in Seventeenth-Century England.” Journal of Economic History 49, no. 4 (1989): 803-32.

Roseveare, Henry. The Financial Revolution 1660-1760. London: Longman, 1991.

Quinn, Stephen. “The Glorious Revolution’s Effect on English Private Finance: A Microhistory, 1680-1705.” Journal of Economic History 61, no. 3 (2001): 593-615.

Stasavage, David. “Credible Commitments in Early Modern Europe: North and Weingast Revisited.” Journal of Law, Economics, and Organization 18, no. 1 (2002): 155-86.

Weingast, Barry. “The Political Foundations of Limited Government: Parliament and Sovereign Debt in Seventeenth- and Eighteenth-Century England.” In The Frontiers of the New Institutional Economics, edited by John Drobak and John Nye, 213-46. San Diego: Academic Press, 1997.

Wells, John, and Douglas Wills. “Revolution, Restoration, and Debt Repudiation: The Jacobite Threat to England’s Institutions and Economic Growth.” Journal of Economic History 60, no. 2 (2000): 418-41.

Citation: Quinn, Stephen. “The Glorious Revolution of 1688.” EH.Net Encyclopedia, edited by Robert Whaples. April 17, 2003. URL http://eh.net/encyclopedia/the-glorious-revolution-of-1688/

The U.S. Economy in the 1920s

Gene Smiley, Marquette University

Introduction

The interwar period in the United States, and in the rest of the world, is a most interesting era. The decade of the 1930s marks the most severe depression in our history and ushered in sweeping changes in the role of government. Economists and historians have rightly given much attention to that decade. However, with all of this concern about the growing and developing role of government in economic activity in the 1930s, the decade of the 1920s often tends to get overlooked. This is unfortunate because the 1920s were a period of vigorous, vital economic growth. It was the first truly modern decade, and dramatic economic developments are found in those years. There was a rapid adoption of the automobile to the detriment of passenger rail travel. Though suburbs had been growing since the late nineteenth century, their growth had been tied to rail or trolley access, and this was limited to the largest cities. The flexibility of car access changed this and the growth of suburbs began to accelerate. The demands of trucks and cars led to a rapid growth in the construction of all-weather surfaced roads to facilitate their movement. The rapidly expanding electric utility networks led to new consumer appliances and new types of lighting and heating for homes and businesses. The introduction of the radio, radio stations, and commercial radio networks began to break up rural isolation, as did the expansion of local and long-distance telephone communications. Recreational activities such as traveling, going to movies, and professional sports became major businesses. The period saw major innovations in business organization and manufacturing technology. The Federal Reserve System first tested its powers, and the United States moved to a dominant position in international trade and global business. These things make the 1920s a period of considerable importance independent of what happened in the 1930s.

National Product and Income and Prices

We begin the survey of the 1920s with an examination of the overall production in the economy, GNP, the most comprehensive measure of aggregate economic activity. Real GNP growth during the 1920s was relatively rapid, 4.2 percent a year from 1920 to 1929 according to the most widely used estimates. (Historical Statistics of the United States, or HSUS, 1976) Real GNP per capita grew 2.7 percent per year between 1920 and 1929. By both nineteenth and twentieth century standards these were relatively rapid rates of real economic growth and they would be considered rapid even today.
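As a sketch of the arithmetic behind such growth rates (the figures below are illustrative round numbers, not the HSUS series itself), a compound annual growth rate can be checked as follows:

```python
# Compound annual growth rate (CAGR) between two endpoint values.
def cagr(start_value: float, end_value: float, years: int) -> float:
    return (end_value / start_value) ** (1 / years) - 1

# Growing 4.2 percent a year over the nine years 1920-1929 multiplies
# real GNP by about 1.45; the function recovers the rate from endpoints.
print((1 + 0.042) ** 9)        # about 1.448
print(cagr(100.0, 144.8, 9))   # about 0.042
```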

There were several interruptions to this growth. In mid-1920 the American economy began to contract, and the 1920-1921 depression lasted about a year, but a rapid recovery reestablished full employment by 1923. As will be discussed below, the Federal Reserve System’s monetary policy was a major factor in initiating the 1920-1921 depression. From 1923 through 1929 growth was much smoother. There was a very mild recession in 1924 and another mild recession in 1927, both of which may be related to oil price shocks (McMillin and Parker, 1994). The 1927 recession was also associated with Henry Ford’s shutdown of all his factories for six months in order to change over from the Model T to the new Model A automobile. Though the Model T’s market share was declining after 1924, in 1926 Ford’s Model T still made up nearly 40 percent of all the new cars produced and sold in the United States. The Great Depression began in the summer of 1929, possibly as early as June. The initial downturn was relatively mild, but the contraction accelerated after the crash of the stock market at the end of October. Real total GNP fell 10.2 percent from 1929 to 1930 while real GNP per capita fell 11.5 percent from 1929 to 1930.


Price changes during the 1920s are shown in Figure 2. The Consumer Price Index, CPI, is a better measure of changes in the prices of commodities and services that a typical consumer would purchase, while the Wholesale Price Index, WPI, is a better measure of changes in the cost of inputs for businesses. As the figure shows, the 1920-1921 depression was marked by extraordinarily large price decreases. Consumer prices fell 11.3 percent from 1920 to 1921 and fell another 6.6 percent from 1921 to 1922. After that consumer prices were relatively constant and actually fell slightly from 1926 to 1927 and from 1927 to 1928. Wholesale prices show greater variation. The 1920-1921 depression hit farmers very hard. Prices had been bid up with the increasing foreign demand during the First World War. As European production began to recover after the war, prices began to fall. Though the prices of agricultural products fell from 1919 to 1920, the depression brought on dramatic declines in the prices of raw agricultural produce as well as many other inputs that firms employ. In the scramble to beat price increases during 1919, firms had built up large inventories of raw materials and purchased inputs, and this temporary increase in demand led to even larger price increases. With the depression, firms began to draw down those inventories. The result was that the prices of raw materials and manufactured inputs fell rapidly along with the prices of agricultural produce—the WPI dropped 45.9 percent between 1920 and 1921. The price changes probably tend to overstate the severity of the 1920-1921 depression. Romer’s work (1988) suggests that prices changed much more easily in that depression, reducing the drop in production and employment. Wholesale prices in the rest of the 1920s were relatively stable, though they were more likely to fall than to rise.

Economic Growth in the 1920s

Despite the 1920-1921 depression and the minor interruptions in 1924 and 1927, the American economy exhibited impressive economic growth during the 1920s. Though some commentators in later years thought that the existence of some slow growing or declining sectors in the twenties suggested weaknesses that might have helped bring on the Great Depression, few now argue this. Economic growth never occurs in all sectors at the same time and at the same rate. Growth reallocates resources from declining or slower growing sectors to the more rapidly expanding sectors in accordance with new technologies, new products and services, and changing consumer tastes.

Economic growth in the 1920s was impressive. Ownership of cars, new household appliances, and housing was spread widely through the population. New products and processes of producing those products drove this growth. The widening use of electricity in production and the growing adoption of the moving assembly line in manufacturing combined to bring on a continuing rise in the productivity of labor and capital. Though the average workweek in most manufacturing remained essentially constant throughout the 1920s, in a few industries, such as railroads and coal production, it declined. (Whaples 2001) New products and services created new markets such as the markets for radios, electric iceboxes, electric irons, fans, electric lighting, vacuum cleaners, and other laborsaving household appliances. This electricity was distributed by the growing electric utilities. The stocks of those companies helped create the stock market boom of the late twenties. RCA, one of the glamour stocks of the era, paid no dividends but its value appreciated because of expectations for the new company. Like the Internet boom of the late 1990s, the electricity boom of the 1920s fed a rapid expansion in the stock market.

Fed by continuing productivity advances and new products and services and facilitated by an environment of stable prices that encouraged production and risk taking, the American economy embarked on a sustained expansion in the 1920s.

Population and Labor in the 1920s

At the same time that overall production was growing, population growth was declining. As can be seen in Figure 3, from an annual rate of increase of 1.85 and 1.93 percent in 1920 and 1921, respectively, population growth rates fell to 1.23 percent in 1928 and 1.04 percent in 1929.

These changes in the overall growth rate were linked to the birth and death rates of the resident population and a decrease in foreign immigration. Though the crude death rate changed little during the period, the crude birth rate fell sharply into the early 1930s. (Figure 4) There are several explanations for the decline in the birth rate during this period. First, there was an accelerated rural-to-urban migration. Urban families have tended to have fewer children than rural families because urban children do not augment family incomes through their work as unpaid workers as rural children do. Second, the period also saw continued improvement in women’s job opportunities and a rise in their labor force participation rates.

Immigration also fell sharply. In 1917 the federal government began to limit immigration and in 1921 an immigration act limited the number of prospective citizens of any nationality entering the United States each year to no more than 3 percent of that nationality’s resident population as of the 1910 census. A new act in 1924 lowered this to 2 percent of the resident population at the 1890 census and more firmly blocked entry for people from central, southern, and eastern European nations. The limits were relaxed slightly in 1929.

The American population also continued to move during the interwar period. Two regions experienced the largest losses in population shares, New England and the Plains. For New England this was a continuation of a long-term trend. The population share for the Plains region had been rising through the nineteenth century. In the interwar period its heavily agricultural base, combined with the continuing shift from agriculture to industry, led to a sharp decline in its share. The regions gaining population were the Southwest and, particularly, the Far West; California began its rapid growth at this time.

During the 1920s the labor force grew at a more rapid rate than population. This somewhat more rapid growth came from the declining share of the population less than 14 years old and therefore not in the labor force. In contrast, the labor force participation rate, or fraction of the population aged 14 and over that was in the labor force, declined during the twenties from 57.7 percent to 56.3 percent. This was entirely due to a fall in the male labor force participation rate from 89.6 percent to 86.8 percent, as the female labor force participation rate rose from 24.3 percent to 25.1 percent. The primary source of the fall in male labor force participation rates was a rising retirement rate. Employment rates for males who were 65 or older fell from 60.1 percent in 1920 to 58.0 percent in 1930.

With the depression of 1920-1921 the unemployment rate rose rapidly from 5.2 to 8.7 percent. The recovery reduced unemployment to an average rate of 4.8 percent in 1923. The unemployment rate rose to 5.8 percent in the recession of 1924 and to 5.0 percent with the slowdown in 1927. Otherwise unemployment remained relatively low. The onset of the Great Depression from the summer of 1929 on brought the unemployment rate from 4.6 percent in 1929 to 8.9 percent in 1930. (Figure 5)

Earnings for laborers varied during the twenties. Table 1 presents average weekly earnings for 25 manufacturing industries. For these industries, male skilled and semi-skilled laborers generally commanded a premium of 35 percent over the earnings of unskilled male laborers in the twenties. Unskilled males received on average 35 percent more than females during the twenties. Real average weekly earnings for these 25 manufacturing industries rose somewhat during the 1920s. For skilled and semi-skilled male workers, real average weekly earnings rose 5.3 percent between 1923 and 1929, while real average weekly earnings for unskilled males rose 8.7 percent over the same years. Real average weekly earnings for females rose only 1.7 percent between 1923 and 1929. Real weekly earnings for bituminous and lignite coal miners fell as the coal industry encountered difficult times in the late twenties, and the real daily wage rate for farmworkers, reflecting the ongoing difficulties in agriculture, fell after the recovery from the 1920-1921 depression.
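The “real” earnings figures rest on deflating nominal weekly earnings by a price index; a minimal sketch of that adjustment, using made-up numbers rather than the Table 1 values, would be:

```python
# Deflate nominal earnings to constant-dollar (real) earnings with a price index.
# The earnings and index values below are hypothetical, not the Table 1 data.
def real_earnings(nominal: float, price_index: float, base: float = 100.0) -> float:
    return nominal * base / price_index

earn_1923 = real_earnings(nominal=25.00, price_index=102.0)
earn_1929 = real_earnings(nominal=27.00, price_index=100.0)
print(f"Real change: {(earn_1929 / earn_1923 - 1) * 100:.1f} percent")  # about 10.2
```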

The 1920s were not kind to labor unions even though the First World War had solidified the dominance of the American Federation of Labor among labor unions in the United States. The rapid growth in union membership fostered by federal government policies during the war ended in 1919. A committee of AFL craft unions undertook a successful membership drive in the steel industry in that year. When U.S. Steel refused to bargain, the committee called a strike, the failure of which was a sharp blow to the unionization drive. (Brody, 1965) In the same year, the United Mine Workers undertook a large strike and also lost. These two lost strikes and the 1920-21 depression took the impetus out of the union movement and led to severe membership losses that continued through the twenties. (Figure 6)

Under Samuel Gompers’s leadership, the AFL’s “business unionism” had attempted to promote the union and collective bargaining as the primary answer to the workers’ concerns with wages, hours, and working conditions. The AFL officially opposed any government actions that would have diminished worker attachment to unions by providing competing benefits, such as government-sponsored unemployment insurance, minimum wage proposals, maximum hours proposals, and social security programs. As Lloyd Ulman (1961) points out, the AFL, under Gompers’s direction, judged proposed legislation on the basis of whether the statute would or would not aid collective bargaining. After Gompers’s death, William Green led the AFL in a policy change as the AFL promoted the idea of union-management cooperation to improve output and promote greater employer acceptance of unions. But Irving Bernstein (1965) concludes that, on the whole, union-management cooperation in the twenties was a failure.

To combat the appeal of unions in the twenties, firms used the “yellow-dog” contract requiring employees to swear they were not union members and would not join one; the “American Plan” promoting the open shop and contending that the closed shop was un-American; and welfare capitalism. The most common aspects of welfare capitalism included personnel management to handle employment issues and problems, the doctrine of “high wages,” company group life insurance, old-age pension plans, stock-purchase plans, and more. Some firms formed company unions to thwart independent unionization and the number of company-controlled unions grew from 145 to 432 between 1919 and 1926.

Until the late thirties the AFL was a voluntary association of independent national craft unions. Craft unions relied upon the particular skills the workers had acquired (their craft) to distinguish the workers and provide barriers to the entry of other workers. Most craft unions required a period of apprenticeship before a worker was fully accepted as a journeyman worker. The skills, and often lengthy apprenticeship, constituted the entry barrier that gave the union its bargaining power. Only a few unions were closer to today’s industrial unions, in which the required skills were much less extensive (or nonexistent), making the entry of new workers much easier. The most important of these industrial unions was the United Mine Workers, UMW.

The AFL had been created on two principles: the autonomy of the national unions and the exclusive jurisdiction of the national union. Individual union members were not, in fact, members of the AFL; rather, they were members of the local and national union, and the national was a member of the AFL. Representation in the AFL gave dominance to the national unions, and, as a result, the AFL had little effective power over them. The craft lines, however, had never been distinct and increasingly became blurred. The AFL was constantly mediating jurisdictional disputes between member national unions. Because the AFL and its individual unions were not set up to appeal to and work for the relatively less skilled industrial workers, union organizing and growth lagged in the twenties.

Agriculture

The onset of the First World War in Europe brought unprecedented prosperity to American farmers. As agricultural production in Europe declined, the demand for American agricultural exports rose, leading to rising farm product prices and incomes. In response to this, American farmers expanded production by moving onto marginal farmland, such as Wisconsin cutover property on the edge of the woods and hilly terrain in the Ozark and Appalachian regions. They also increased output by purchasing more machinery, such as tractors, plows, mowers, and threshers. The price of farmland, particularly marginal farmland, rose in response to the increased demand, and the debt of American farmers increased substantially.

This expansion of American agriculture continued past the end of the First World War as farm exports to Europe and farm prices initially remained high. However, agricultural production in Europe recovered much faster than most observers had anticipated. Even before the onset of the short depression in 1920, farm exports and farm product prices had begun to fall. During the depression, farm prices virtually collapsed. From 1920 to 1921, the consumer price index fell 11.3 percent, the wholesale price index fell 45.9 percent, and the farm products price index fell 53.3 percent. (HSUS, Series E40, E42, and E135)

Real average net income per farm fell over 72.6 percent between 1920 and 1921 and, though rising in the twenties, never recovered the relative levels of 1918 and 1919. (Figure 7) Farm mortgage foreclosures rose and stayed at historically high levels for the entire decade of the 1920s. (Figure 8) The value of farmland and buildings fell throughout the twenties and, for the first time in American history, the number of cultivated acres actually declined as farmers pulled back from the marginal farmland brought into production during the war. Rather than indicators of a general depression in agriculture in the twenties, these were the results of the financial commitments made by overoptimistic American farmers during and directly after the war. The foreclosures were generally on second mortgages rather than on first mortgages as they were in the early 1930s. (Johnson, 1973; Alston, 1983)

A Declining Sector

A major difficulty in analyzing the interwar agricultural sector lies in separating the effects of the 1920-21 and 1929-33 depressions from those that arose because agriculture was declining relative to the other sectors. Slowly growing demand for basic agricultural products, significant increases in the productivity of labor, land, and machinery in agricultural production, and much more rapid growth in the nonagricultural sectors of the economy together required a shift of resources, particularly labor, out of agriculture. (Figure 9) The market induces labor to move voluntarily from one sector to another through income differentials, suggesting that even in the absence of the effects of the depressions, farm incomes would have been lower than nonfarm incomes so as to bring about this migration.

The continuous substitution of tractor power for horse and mule power released hay and oats acreage to grow crops for human consumption. Though cotton and tobacco continued as the primary crops in the south, the relative production of cotton continued to shift to the west as production in Arkansas, Missouri, Oklahoma, Texas, New Mexico, Arizona, and California increased. As quotas reduced immigration and incomes rose, the demand for cereal grains grew slowly—more slowly than the supply—and the demand for fruits, vegetables, and dairy products grew. Refrigeration and faster freight shipments expanded the milk sheds further from metropolitan areas. Wisconsin and other North Central states began to ship cream and cheeses to the Atlantic Coast. Due to transportation improvements, specialized truck farms and the citrus industry became more important in California and Florida. (Parker, 1972; Soule, 1947)

The relative decline of the agricultural sector in this period was closely related to the low income elasticity of demand for many farm products, particularly cereal grains, pork, and cotton. As incomes grew, the demand for these staples grew much more slowly. At the same time, rising land and labor productivity were increasing the supplies of staples, causing real prices to fall.
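In elasticity notation (the symbols are added here for illustration and are not from the original), the income elasticity of demand is the ratio of the percentage change in quantity demanded to the percentage change in income, and a value well below one means demand grows more slowly than income:

```latex
\eta_I \;=\; \frac{\%\Delta Q^{d}}{\%\Delta I}
       \;=\; \frac{\partial Q^{d}}{\partial I}\cdot\frac{I}{Q^{d}},
\qquad \eta_I < 1 \ \text{(income-inelastic demand)}
```

For example, with an illustrative elasticity of 0.3, a 20 percent rise in incomes would raise demand for these staples by only about 6 percent, even as productivity gains pushed supply out faster.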

Table 3 presents selected agricultural productivity statistics for these years. Those data indicate that there were greater gains in labor productivity than in land productivity (or per acre yields). Per acre yields in wheat and hay actually decreased between 1915-19 and 1935-39. These productivity increases, which released resources from the agricultural sector, were the result of technological improvements in agriculture.

Technological Improvements In Agricultural Production

In many ways the adoption of the tractor in the interwar period symbolizes the technological changes that occurred in the agricultural sector. This changeover in the power source that farmers used had far-reaching consequences and altered the organization of the farm and the farmers’ lifestyle. The adoption of the tractor was land saving (by releasing acreage previously used to produce crops for workstock) and labor saving. At the same time it increased the risks of farming because farmers were now much more exposed to the marketplace. They could not produce their own fuel for tractors as they had for the workstock. Rather, this had to be purchased from other suppliers. Repair and replacement parts also had to be purchased, and sometimes the repairs had to be undertaken by specialized mechanics. The purchase of a tractor also commonly required the purchase of new complementary machines; therefore, the decision to purchase a tractor was not an isolated one. (White, 2001; Ankli, 1980; Ankli and Olmstead, 1981; Musoke, 1981; Whatley, 1987). These changes resulted in more and more farmers purchasing and using tractors, but the rate of adoption varied sharply across the United States.

Technological innovations in plants and animals also raised productivity. Hybrid seed corn increased yields from an average of 40 bushels per acre to 100 to 120 bushels per acre. New varieties of wheat were developed from the hardy Russian and Turkish wheat varieties which had been imported. The U.S. Department of Agriculture’s Experiment Stations took the lead in developing wheat varieties for different regions. For example, in the Columbia River Basin new varieties raised yields from an average of 19.1 bushels per acre in 1913-22 to 23.1 bushels per acre in 1933-42. (Shepherd, 1980) New hog breeds produced more meat and new methods of swine sanitation sharply increased the survival rate of piglets. An effective serum for hog cholera was developed, and the federal government led the way in the testing and eradication of bovine tuberculosis and brucellosis. Prior to the Second World War, a number of pesticides to control animal disease were developed, including cattle dips and disinfectants. By the mid-1920s a vaccine for “blackleg,” an infectious, usually fatal disease that particularly struck young cattle, was completed. The cattle tick, which carried Texas Fever, was largely controlled through inspections. (Schlebecker, 1975; Bogue, 1983; Wood, 1980)

Federal Agricultural Programs in the 1920s

Though there was substantial agricultural discontent in the period from the Civil War to the late 1890s, the period from then to the onset of the First World War was relatively free from overt farmers’ complaints. In later years farmers dubbed the 1910-14 period agriculture’s “golden years” and used the prices of farm crops and farm inputs in that period as a standard by which to judge crop and input prices in later years. The problems that arose in the agricultural sector during the twenties once again led to insistent demands by farmers for government to alleviate their distress.

Though there were increasing calls for direct federal government intervention to limit production and raise farm prices, this was not used until Roosevelt took office. Rather, there was a reliance upon the traditional method to aid injured groups—tariffs, and upon the “sanctioning and promotion of cooperative marketing associations.” In 1921 Congress attempted to control the grain exchanges and compel merchants and stockyards to charge “reasonable rates” with the Packers and Stockyards Act and the Grain Futures Act. In 1922 Congress passed the Capper-Volstead Act to promote agricultural cooperatives and the Fordney-McCumber Tariff to impose high duties on most agricultural imports. The Cooperative Marketing Act of 1924 did not bolster failing cooperatives as it was supposed to do. (Hoffman and Libecap, 1991)

Twice between 1924 and 1928 Congress passed “McNary-Haugen” bills, but President Calvin Coolidge vetoed both. The McNary-Haugen bills proposed to establish “fair” exchange values (based on the 1910-14 period) for each product and to maintain them through tariffs and a private corporation that would be chartered by the government and could buy enough of each commodity to keep its price up to the computed fair level. The revenues were to come from taxes imposed on farmers. The Hoover administration secured an Agricultural Marketing Act in 1929 and the Hawley-Smoot Tariff in 1930. The Marketing Act committed the federal government to a policy of stabilizing farm prices through several nongovernment institutions, but these failed during the depression. Federal intervention in the agricultural sector really came of age during the New Deal era of the 1930s.

Manufacturing

Agriculture was not the only sector experiencing difficulties in the twenties. Other industries, such as textiles, boots and shoes, and coal mining, also experienced trying times. However, at the same time that these industries were declining, other industries, such as electrical appliances, automobiles, and construction, were growing rapidly. The simultaneous existence of growing and declining industries has been common to all eras because economic growth and technological progress never affect all sectors in the same way. In general, in manufacturing there was a rapid rate of growth of productivity during the twenties. The rise of real wages due to immigration restrictions and the slower growth of the resident population spurred this. Transportation improvements and communications advances were also responsible. These developments brought about differential growth in the various manufacturing sectors in the United States in the 1920s.

Because of the historic pattern of economic development in the United States, the Northeast was the first area to develop a manufacturing base. By the mid-nineteenth century the East North Central region was creating a manufacturing base, and the other regions began to create manufacturing bases in the last half of the nineteenth century, resulting in a relative westward and southward shift of manufacturing activity. This trend continued in the 1920s as the New England and Middle Atlantic regions’ shares of manufacturing employment fell while all of the other regions—excluding the West North Central region—gained. There was considerable variation in the growth of the industries and shifts in their rankings during the decade. The largest broadly defined industries were, not surprisingly, food and kindred products; textile mill products; those producing and fabricating primary metals; machinery production; and chemicals. When industries are more narrowly defined, the automobile industry, which ranked third in manufacturing value added in 1919, ranked first by the mid-1920s.

Productivity Developments

Gavin Wright (1990) has argued that one of the underappreciated characteristics of American industrial history has been its reliance on mineral resources. Wright argues that the growing American strength in industrial exports and industrialization in general relied on an increasing intensity in the use of nonreproducible natural resources. The large American market was knit together as one large market without internal barriers through the development of widespread low-cost transportation. Many distinctively American developments, such as continuous-process, mass-production methods, were associated with the “high throughput” of fuel and raw materials relative to labor and capital inputs. As a result the United States became the dominant industrial force in the world in the 1920s and 1930s. According to Wright, after World War II “the process by which the United States became a unified ‘economy’ in the nineteenth century has been extended to the world as a whole. To a degree, natural resources have become commodities rather than part of the ‘factor endowment’ of individual countries.” (Wright, 1990)

In addition to this growing intensity in the use of nonreproducible natural resources as a source of productivity gains in American manufacturing, other technological changes during the twenties and thirties tended to raise the productivity of the existing capital through the replacement of critical types of capital equipment with superior equipment and through changes in management methods. (Soule, 1947; Lorant, 1967; Devine, 1983; Oshima, 1984) Some changes, such as the standardization of parts and processes and the reduction of the number of styles and designs, raised the productivity of both capital and labor. Modern management techniques, first introduced by Frederick W. Taylor, were introduced on a wider scale.

One of the important forces contributing to mass production and increased productivity was the shift to electric power. (Devine, 1983) By 1929 about 70 percent of manufacturing activity relied on electricity, compared to roughly 30 percent in 1914. Steam provided 80 percent of the mechanical drive capacity in manufacturing in 1900, but electricity provided over 50 percent by 1920 and 78 percent by 1929. An increasing number of factories were buying their power from electric utilities. In 1909, 64 percent of the electric motor capacity in manufacturing establishments used electricity generated on the factory site; by 1919, 57 percent of the electricity used in manufacturing was purchased from independent electric utilities.

The shift from coal to oil and natural gas and from raw unprocessed energy in the forms of coal and waterpower to processed energy in the form of internal combustion fuel and electricity increased thermal efficiency. After the First World War energy consumption relative to GNP fell, there was a sharp increase in the growth rate of output per labor-hour, and output per unit of capital input once again began rising. These trends can be seen in the data in Table 3. Labor productivity grew much more rapidly during the 1920s than in the previous or following decade. Capital productivity had declined in the decade previous to the 1920s, but it increased sharply during the twenties and continued to rise in the following decade. Alexander Field (2003) has argued that the 1930s were the most technologically progressive decade of the twentieth century, basing his argument on the growth of multi-factor productivity as well as the impressive array of technological developments during the thirties. However, the twenties also saw impressive increases in labor and capital productivity as, particularly, developments in energy and transportation accelerated.

 Average Annual Rates of Labor Productivity and Capital Productivity Growth.

Warren Devine, Jr. (1983) reports that in the twenties the most important result of the adoption of electricity was that it served as an indirect “lever to increase production.” There were a number of ways in which this occurred. Electricity brought about an increased flow of production by allowing new flexibility in the design of buildings and the arrangement of machines. In this way it maximized throughput. Electric cranes were an “inestimable boon” to production because with adequate headroom they could operate anywhere in a plant, something that mechanical power transmission to overhead cranes did not allow. Electricity made possible the use of portable power tools that could be taken anywhere in the factory. Electricity brought about improved illumination, ventilation, and cleanliness in the plants, dramatically improving working conditions. It improved the control of machines since there was no longer belt slippage with overhead line shafts and belt transmission, and there were fewer limitations on the operating speeds of machines. Finally, it made plant expansion much easier than when overhead shafts and belts had been relied upon for operating power.

The mechanization of American manufacturing accelerated in the 1920s, and this led to a much more rapid growth of productivity in manufacturing compared to earlier decades and to other sectors at that time. There were several forces that promoted mechanization. One was the rapidly expanding aggregate demand during the prosperous twenties. Another was the technological developments in new machines and processes, of which electrification played an important part. Finally, Harry Jerome (1934) and, later, Harry Oshima (1984) both suggest that the price of unskilled labor began to rise as immigration sharply declined with new immigration laws and falling population growth. This accelerated the mechanization of the nation’s factories.

Technological changes during this period can be documented for a number of individual industries. In bituminous coal mining, labor productivity rose when mechanical loading devices reduced the labor required by 24 to 50 percent. The burst of paved road construction in the twenties led to the development of a finishing machine to smooth the surface of cement highways, and this reduced the labor requirement by 40 to 60 percent. Mechanical pavers that spread centrally mixed materials further increased productivity in road construction. These replaced the roadside dump and wheelbarrow methods of spreading the cement. Jerome (1934) reports that the glass in electric light bulbs was made by new machines that cut the number of labor-hours required for their manufacture by nearly half. New machines to produce cigarettes and cigars, for warp-tying in textile production, and for pressing clothes in clothing shops also cut labor-hours. The Banbury mixer reduced the labor input in the production of automobile tires by half, and output per worker of inner tubes increased about four times with a new production method. However, as Daniel Nelson (1987) points out, the continuing advances were the “cumulative process resulting from a vast number of successive small changes.” Because of these continuing advances in the quality of tires and in their manufacture, between 1910 and 1930 “tire costs per thousand miles of driving fell from $9.39 to $0.65.”

John Lorant (1967) has documented other technological advances that occurred in American manufacturing during the twenties. For example, the organic chemical industry developed rapidly due to the introduction of the Weizman fermentation process. In a similar fashion, nearly half of the productivity advances in the paper industry were due to the “increasingly sophisticated applications of electric power and paper manufacturing processes,” especially the fourdrinier paper-making machines. As Avi Cohen (1984) has shown, the continuing advances in these machines were the result of evolutionary changes to the basic machine. Mechanization in many types of mass-production industries raised the productivity of labor and capital. In the glass industry, automatic feeding and other types of fully automatic production raised the efficiency of the production of glass containers, window glass, and pressed glass. Giedion (1948) reported that the production of bread was “automatized” in all stages during the 1920s.

Though not directly bringing about productivity increases in manufacturing processes, developments in the management of manufacturing firms, particularly the largest ones, also significantly affected their structure and operation. Alfred D. Chandler, Jr. (1962) has argued that the structure of a firm must follow its strategy. Until the First World War most industrial firms were centralized, single-division firms, even when they became vertically integrated. When this began to change, the management of the large industrial firms had to change accordingly.

Because of these changes in the size and structure of the firm during the First World War, E. I. du Pont de Nemours and Company was led to adopt a strategy of diversifying into the production of largely unrelated product lines. The firm found that the centralized, departmentalized structure that had served it so well was not suited to this strategy, and its poor business performance led its executives to develop between 1919 and 1921 a decentralized, multidivisional structure that boosted it to the first rank among American industrial firms.

General Motors had a somewhat different problem. By 1920 it was already decentralized into separate divisions. In fact, there was so much decentralization that those divisions essentially remained separate companies and there was little coordination between the operating divisions. A financial crisis at the end of 1920 ousted W. C. Durant and brought in the du Ponts and Alfred Sloan. Sloan, who had seen the problems at GM but had been unable to convince Durant to make changes, began reorganizing the management of the company. Over the next several years Sloan and other GM executives developed the general office for a decentralized, multidivisional firm.

Though facing related problems at nearly the same time, GM and du Pont developed their decentralized, multidivisional organizations separately. As other manufacturing firms began to diversify, GM and du Pont became the models for reorganizing the management of the firms. In many industrial firms these reorganizations were not completed until well after the Second World War.

Competition, Monopoly, and the Government

The rise of big businesses, which accelerated in the postbellum period and particularly during the first great turn-of-the-century merger wave, continued in the interwar period. Between 1925 and 1939 the share of manufacturing assets held by the 100 largest corporations rose from 34.5 to 41.9 percent. (Niemi, 1980) As a public policy, the concern with monopolies diminished in the 1920s even though firms were growing larger. But the growing size of businesses was one of the convenient scapegoats upon which to blame the Great Depression.

However, the rise of large manufacturing firms in the interwar period is not so easily interpreted as an attempt to monopolize their industries. Some of the growth came about through vertical integration by the more successful manufacturing firms. Backward integration was generally an attempt to ensure a smooth supply of raw materials where that supply was not plentiful and was dispersed, and where firms “feared that raw materials might become controlled by competitors or independent suppliers.” (Livesay and Porter, 1969) Forward integration was an offensive tactic employed when manufacturers found that the existing distribution network proved inadequate. Livesay and Porter suggested a number of reasons why firms chose to integrate forward. In some cases they had to provide the mass distribution facilities to handle their much larger outputs, especially when the product was a new one. The complexity of some new products required technical expertise that the existing distribution system could not provide. In other cases “the high unit costs of products required consumer credit which exceeded financial capabilities of independent distributors.” Forward integration into wholesaling was more common than forward integration into retailing. The producers of automobiles, petroleum, typewriters, sewing machines, and harvesters were typical of those manufacturers that integrated all the way into retailing.

In some cases, increases in industry concentration arose as a natural process of industrial maturation. In the automobile industry, Henry Ford’s invention in 1913 of the moving assembly line—a technological innovation that changed most manufacturing—lent itself to larger factories and firms. Of the several thousand companies that had produced cars prior to 1920, 120 were still doing so then, but Ford and General Motors were the clear leaders, together producing nearly 70 percent of the cars. During the twenties, several other companies, such as Durant, Willys, and Studebaker, missed their opportunity to become more important producers, and Chrysler, formed in early 1925, became the third most important producer by 1930. Many went out of business and by 1929 only 44 companies were still producing cars. The Great Depression decimated the industry. Dozens of minor firms went out of business. Ford struggled through by relying on its huge stockpile of cash accumulated prior to the mid-1920s, while Chrysler actually grew. By 1940, only eight companies still produced cars—GM, Ford, and Chrysler had about 85 percent of the market, while Willys, Studebaker, Nash, Hudson, and Packard shared the remainder. The rising concentration in this industry was not due to attempts to monopolize. As the industry matured, growing economies of scale in factory production and vertical integration, as well as the advantages of a widespread dealer network, led to a dramatic decrease in the number of viable firms. (Chandler, 1962 and 1964; Rae, 1984; Bernstein, 1987)

It was a similar story in the tire industry. The increasing concentration and growth of firms was driven by scale economies in production and retailing and by the devastating effects of the depression in the thirties. Although there were 190 firms in 1919, 5 firms dominated the industry—Goodyear, B. F. Goodrich, Firestone, U.S. Rubber, and Fisk, followed by Miller Rubber, General Tire and Rubber, and Kelly-Springfield. During the twenties, 166 firms left the industry while 66 entered. The share of the 5 largest firms rose from 50 percent in 1921 to 75 percent in 1937. During the depressed thirties, there was fierce price competition, and many firms exited the industry. By 1937 there were 30 firms, but the average employment per factory was 4.41 times as large as in 1921, and the average factory produced 6.87 times as many tires as in 1921. (French, 1986 and 1991; Nelson, 1987; Fricke, 1982)

The steel industry was already highly concentrated by 1920 as U.S. Steel had around 50 percent of the market. But U. S. Steel’s market share declined through the twenties and thirties as several smaller firms competed and grew to become known as Little Steel, the next six largest integrated producers after U. S. Steel. Jonathan Baker (1989) has argued that the evidence is consistent with “the assumption that competition was a dominant strategy for steel manufacturers” until the depression. However, the initiation of the National Recovery Administration (NRA) codes in 1933 required the firms to cooperate rather than compete, and Baker argues that this constituted a training period leading firms to cooperate in price and output policies after 1935. (McCraw and Reinhardt, 1989; Weiss, 1980; Adams, 1977)

Mergers

A number of the larger firms grew by merger during this period, and the second great merger wave in American industry occurred during the last half of the 1920s. Figure 10 shows two series on mergers during the interwar period. The FTC series included many of the smaller mergers. The series constructed by Carl Eis (1969) only includes the larger mergers and ends in 1930.

This second great merger wave coincided with the stock market boom of the twenties and has been called “merger for oligopoly” rather than merger for monopoly. (Stigler, 1950) This merger wave created many larger firms that ranked below the industry leaders. Much of the activity occurred in the banking and public utilities industries. (Markham, 1955) In manufacturing and mining, the effects on industrial structure were less striking. Eis (1969) found that while mergers took place in almost all industries, they were concentrated in a smaller number of them, particularly petroleum, primary metals, and food products.

The federal government’s antitrust policies toward business varied sharply during the interwar period. In the 1920s there was relatively little activity by the Justice Department, but after the onset of the Great Depression the New Dealers exempted much of business from the antitrust laws and encouraged industries to cartelize under government supervision.

With the passage of the FTC and Clayton Acts in 1914 to supplement the 1890 Sherman Act, the cornerstones of American antitrust law were complete. Though minor amendments were later enacted, the primary changes after that came in the enforcement of the laws and in swings in judicial decisions. The two primary areas of application were overt behavior, such as horizontal and vertical price-fixing, and market structure, such as mergers and dominant firms. Horizontal price-fixing involves firms that would normally be competitors getting together to agree on stable and higher prices for their products. As long as most of the important competitors agree on the new, higher prices, substitution between products is eliminated and the demand becomes much less elastic. Thus, increasing the price increases the revenues and the profits of the firms who are fixing prices. Vertical price-fixing involves firms setting the prices of intermediate products purchased at different stages of production. It also tends to eliminate substitutes and makes the demand less elastic.

Price-fixing continued to be considered illegal throughout the period, but there was no major judicial activity regarding it in the 1920s other than the Trenton Potteries decision in 1927. In that decision 20 individuals and 23 corporations were found guilty of conspiring to fix the prices of bathroom bowls. The evidence in the case suggested that the firms were not very successful at doing so, but the court found that they were guilty nevertheless; their success, or lack thereof, was not held to be a factor in the decision. (Scherer and Ross, 1990) Though criticized by some, the decision was precedent setting in that it prohibited explicit pricing conspiracies per se.

The Justice Department had achieved success in dismantling Standard Oil and American Tobacco in 1911 through decisions that the firms had unreasonably restrained trade. These were essentially the same points used in court decisions against the Powder Trust in 1911, the thread trust in 1913, Eastman Kodak in 1915, the glucose and cornstarch trust in 1916, and the anthracite railroads in 1920. The criterion of an unreasonable restraint of trade was used in the 1916 and 1918 decisions that found the American Can Company and the United Shoe Machinery Company innocent of violating the Sherman Act; it was also clearly enunciated in the 1920 U. S. Steel decision. This became known as the rule of reason standard in antitrust policy.

Merger policy had been defined in the 1914 Clayton Act to prohibit only the acquisition of one corporation’s stock by another corporation. Firms then shifted to the outright purchase of a competitor’s assets. A series of court decisions in the twenties and thirties further reduced the possibilities of Justice Department actions against mergers. “Only fifteen mergers were ordered dissolved through antitrust actions between 1914 and 1950, and ten of the orders were accomplished under the Sherman Act rather than Clayton Act proceedings.”

Energy

The search for energy and new ways to translate it into heat, light, and motion has been one of the unending themes in history. From whale oil to coal oil to kerosene to electricity, the search for better and less costly ways to light our lives, heat our homes, and move our machines has consumed much time and effort. The energy industries responded to those demands and the consumption of energy materials (coal, oil, gas, and fuel wood) as a percent of GNP rose from about 2 percent in the latter part of the nineteenth century to about 3 percent in the twentieth.

Changes in the energy markets that had begun in the nineteenth century continued. Processed energy in the forms of petroleum derivatives and electricity continued to become more important than “raw” energy, such as that available from coal and water. The evolution of energy sources for lighting continued; at the end of the nineteenth century, natural gas and electricity, rather than liquid fuels, began to provide more lighting for streets, businesses, and homes.

In the twentieth century the continuing shift to electricity and internal combustion fuels increased the efficiency with which the American economy used energy. These processed forms of energy resulted in a more rapid increase in the productivity of labor and capital in American manufacturing. From 1899 to 1919, output per labor-hour increased at an average annual rate of 1.2 percent, whereas from 1919 to 1937 the increase was 3.5 percent per year. The productivity of capital had fallen at an average annual rate of 1.8 percent per year in the 20 years prior to 1919, but it rose 3.1 percent a year in the 18 years after 1919. As discussed above, the adoption of electricity in American manufacturing initiated a rapid evolution in the organization of plants and rapid increases in productivity in all types of manufacturing.

The change in transportation was even more remarkable. Internal combustion engines running on gasoline or diesel fuel revolutionized transportation. Cars quickly grabbed the lion’s share of local and regional travel and began to eat into long distance passenger travel, just as the railroads had done to passenger traffic by water in the 1830s. Even before the First World War cities had begun passing laws to regulate and limit “jitney” services and to protect the investments in urban rail mass transit. Trucking began eating into the freight carried by the railroads.

These developments brought about changes in the energy industries. Coal mining became a declining industry. As Figure 11 shows, in 1925 the share of petroleum in the value of coal, gas, and petroleum output exceeded that of bituminous coal, and it continued to rise. Anthracite coal’s share was much smaller and also declined, while natural gas and LP (or liquefied petroleum) gas were relatively unimportant. These changes, especially the decline of the coal industry, were the source of considerable worry in the twenties.

Coal

One of the industries considered to be “sick” in the twenties was coal, particularly bituminous, or soft, coal. Income in the industry declined, and bankruptcies were frequent. Strikes frequently interrupted production. The majority of the miners “lived in squalid and unsanitary houses, and the incidence of accidents and diseases was high.” (Soule, 1947) The number of operating bituminous coal mines declined sharply from 1923 through 1932. Anthracite (or hard) coal output was much smaller during the twenties. Real coal prices rose from 1919 to 1922, and bituminous coal prices fell sharply from then to 1925. (Figure 12) Coal mining employment plummeted during the twenties. Annual earnings, especially in bituminous coal mining, also fell because of dwindling hourly earnings and, from 1929 on, a shrinking workweek. (Figure 13)

The sources of these changes are to be found in the increasing supply due to productivity advances in coal production and in the decreasing demand for coal. The demand fell as industries began turning from coal to electricity and because of productivity advances in the use of coal to create energy in steel, railroads, and electric utilities. (Keller, 1973) In the generation of electricity, larger steam plants employing higher temperatures and steam pressures continued to reduce coal consumption per kilowatt hour. Similar reductions were found in the production of coke from coal for iron and steel production and in the use of coal by the steam railroad engines. (Rezneck, 1951) All of these factors reduced the demand for coal.

Productivity advances in coal mining tended to be labor saving. Mechanical cutting accounted for 60.7 percent of the coal mined in 1920 and 78.4 percent in 1929. By the middle of the twenties, the mechanical loading of coal began to be introduced. Between 1929 and 1939, output per labor-hour rose nearly one third in bituminous coal mining and nearly four fifths in anthracite as more mines adopted machine mining and mechanical loading and strip mining expanded.

The increasing supply and falling demand for coal led to the closure of mines that were too costly to operate. A mine could simply cease operations, let the equipment stand idle, and lay off employees. When bankruptcies occurred, the mines generally just reopened under new ownership with lower capital charges. When demand increased or strikes reduced the supply of coal, idle mines simply resumed production. As a result, the easily expanded supply largely eliminated economic profits.

The average daily employment in coal mining dropped by over 208,000 from its peak in 1923, but the sharply falling real wages suggest that the supply of labor did not fall as rapidly as the demand for labor. Soule (1947) notes that when employment fell in coal mining, it meant fewer days of work for the same number of men. Social and cultural characteristics tended to tie many to their home region. The local alternatives were few, and ignorance of alternatives outside the Appalachian rural areas, where most bituminous coal was mined, made it very costly to transfer out.

Petroleum

In contrast to the coal industry, the petroleum industry was growing throughout the interwar period. By the thirties, crude petroleum dominated the real value of the production of energy materials. As Figure 14 shows, the production of crude petroleum increased sharply between 1920 and 1930, while real petroleum prices, though highly variable, tended to decline.

The growing demand for petroleum was driven by the growth in demand for gasoline as America became a motorized society. The production of gasoline surpassed kerosene production in 1915. Kerosene’s market continued to contract as electric lighting replaced kerosene lighting. The development of oil burners in the twenties began a switch from coal toward fuel oil for home heating, and this further increased the growing demand for petroleum. The growth in the demand for fuel oil and diesel fuel for ship engines also increased petroleum demand. But it was the growth in the demand for gasoline that drove the petroleum market.

The decline in real prices in the latter part of the twenties shows that supply was growing even faster than demand. The discovery of new fields in the early twenties increased the supply of petroleum and led to falling prices as production capacity grew. The Santa Fe Springs, California, strike in 1919 initiated a supply shock, as did the discovery of the Long Beach, California, field in 1921. New discoveries in Powell, Texas, and Smackover, Arkansas, further increased the supply of petroleum in 1921. New supply increases occurred in 1926 to 1928 with petroleum strikes in Seminole, Oklahoma, and Hendricks, Texas. The supply of oil increased sharply in 1930 to 1931 with new discoveries in Oklahoma City and East Texas. Each new discovery pushed down real oil prices and the prices of petroleum derivatives, and the growing production capacity led to a general declining trend in petroleum prices. McMillin and Parker (1994) argue that supply shocks generated by these new discoveries were a factor in the business cycles during the 1920s.

The supply of gasoline increased more than the supply of crude petroleum. In 1913 a chemist at Standard Oil of Indiana introduced the cracking process to refine crude petroleum; until that time it had been refined by distillation or unpressurized heating. In the heating process, various refined products such as kerosene, gasoline, naphtha, and lubricating oils were produced at different temperatures. It was difficult to vary the amount of the different refined products produced from a barrel of crude. The cracking process used pressurized heating to break heavier components down into lighter crude derivatives; with cracking, it was possible to increase the amount of gasoline obtained from a barrel of crude from 15 to 45 percent. In the early twenties, chemists at Standard Oil of New Jersey improved the cracking process, and by 1927 it was possible to obtain twice as much gasoline from a barrel of crude petroleum as in 1917.

The petroleum companies also developed new ways to distribute gasoline to motorists that made it more convenient to purchase gasoline. Prior to the First World War, gasoline was commonly purchased in one- or five-gallon cans, and the purchaser used a funnel to pour the gasoline from the can into the car. Then “filling stations” appeared, which specialized in filling cars’ tanks with gasoline. These spread rapidly, and by 1919 gasoline companies were beginning to introduce their own filling stations or contract with independent stations to exclusively distribute their gasoline. Increasing competition and falling profits led filling station operators to expand into other activities such as oil changes and other mechanical repairs. The general name attached to such stations gradually changed to “service stations” to reflect these new functions.

Though the petroleum firms tended to be large, they were highly competitive, trying to pump as much petroleum as possible to increase their share of the fields. This, combined with the development of new fields, led to an industry with highly volatile prices and output. Firms desperately wanted to stabilize and reduce the production of crude petroleum so as to stabilize and raise the prices of crude petroleum and refined products. Unable to obtain voluntary agreement on output limitations by the firms and producers, governments began stepping in. Led by Texas, which created the Texas Railroad Commission in 1891, oil-producing states began to intervene to regulate production. Such laws were usually termed prorationing laws and were quotas designed to limit each well’s output to some fraction of its potential. The purpose was as much to stabilize and reduce production and raise prices as anything else, although such laws were generally passed under the guise of conservation. Although the federal government supported such attempts, not until the New Deal were federal laws passed to assist these efforts.

Electricity

By the mid 1890s the debate over the method by which electricity was to be transmitted had been won by those who advocated alternating current. The reduced power losses and greater distance over which electricity could be transmitted more than offset the necessity for transforming the current back to direct current for general use. Widespread adoption of machines and appliances by industry and consumers then rested on an increase in the array of products using electricity as the source of power, heat, or light and the development of an efficient, lower cost method of generating electricity.

General Electric, Westinghouse, and other firms began producing electrical appliances for homes, and an increasing number of machines based on electricity began to appear in industry. The problem of lower cost production was solved by the introduction of centralized generating facilities that distributed the electric power through lines to many consumers and business firms.

Though initially several firms competed in generating and selling electricity to consumers and firms in a city or area, by the First World War many states and communities were awarding exclusive franchises to one firm to generate and distribute electricity to the customers in the franchise area. (Bright, 1947; Passer, 1953) The electric utility industry became an important growth industry and, as Figure 15 shows, electricity production and use grew rapidly.

The electric utilities increasingly were regulated by state commissions that were charged with setting rates so that the utilities could receive a “fair return” on their investments. Disagreements over what constituted a “fair return” and the calculation of the rate base led to a steady stream of cases before the commissions and a continuing series of court appeals. Generally these court decisions favored the reproduction cost basis. Because of the difficulty and cost in making these calculations, rates tended to be in the hands of the electric utilities, which, it has been suggested, did not lower rates adequately to reflect the rising productivity and lowered costs of production. The utilities argued that a more rapid lowering of rates would have jeopardized their profits. Whether or not this increased their monopoly power is still an open question, but it should be noted that electric utilities were hardly price-taking industries prior to regulation. (Mercer, 1973) In fact, as Figure 16 shows, the electric utilities began to systematically practice market segmentation, charging users with less elastic demands higher prices per kilowatt-hour.

Energy in the American Economy of the 1920s

The changes in the energy industries had far-reaching consequences. The coal industry faced a continuing decline in demand. Even in the growing petroleum industry, the periodic surges in the supply of petroleum caused great instability. In manufacturing, as described above, electrification contributed to a remarkable rise in productivity. The transportation revolution brought about by the rise of gasoline-powered trucks and cars changed the way businesses received their supplies and distributed their production as well as where they were located. The suburbanization of America and the beginnings of urban sprawl were largely brought about by the introduction of low-priced gasoline for cars.

Transportation

The American economy was forever altered by the dramatic changes in transportation after 1900. Following Henry Ford’s introduction of the moving assembly production line in 1914, automobile prices plummeted, and by the end of the 1920s about 60 percent of American families owned an automobile. The advent of low-cost personal transportation led to an accelerating movement of population out of the crowded cities to more spacious homes in the suburbs and the automobile set off a decline in intracity public passenger transportation that has yet to end. Massive road-building programs facilitated the intercity movement of people and goods. Trucks increasingly took over the movement of freight in competition with the railroads. New industries, such as gasoline service stations, motor hotels, and the rubber tire industry, arose to service the automobile and truck traffic. These developments were complicated by the turmoil caused by changes in the federal government’s policies toward transportation in the United States.

With the end of the First World War, a debate began as to whether the railroads, which had been taken over by the government, should be returned to private ownership or nationalized. The voices calling for a return to private ownership were much stronger, but doing so fomented great controversy. Many in Congress believed that careful planning and consolidation could restore the railroads and make them more efficient. There was continued concern about the near monopoly that the railroads had on the nation’s intercity freight and passenger transportation. The result of these deliberations was the Transportation Act of 1920, which was premised on the continued domination of the nation’s transportation by the railroads—an erroneous presumption.

The Transportation Act of 1920 presented a marked change in the Interstate Commerce Commission’s ability to control railroads. The ICC was allowed to prescribe exact rates that were to be set so as to allow the railroads to earn a fair return, defined as 5.5 percent, on the fair value of their property. The ICC was authorized to make an accounting of the fair value of each regulated railroad’s property; however, this was not completed until well into the 1930s, by which time the accounting and rate rules were out of date. To maintain fair competition between railroads in a region, all roads were to have the same rates for the same goods over the same distance. With the same rates, low-cost roads should have been able to earn higher rates of return than high-cost roads. To handle this, a recapture clause was inserted: any railroad earning a return of more than 6 percent on the fair value of its property was to turn the excess over to the ICC, which would place half of the money in a contingency fund for the railroad when it encountered financial problems and the other half in a contingency fund to provide loans to other railroads in need of assistance.
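
The arithmetic of the recapture clause can be illustrated with a small sketch. The 6 percent threshold and the equal split between the two contingency funds come from the act as described above; the railroad’s fair value, its earnings, and the function name are hypothetical.

# Illustrative sketch of the Transportation Act of 1920 recapture clause:
# earnings above a 6 percent return on the fair value of a railroad's
# property were turned over to the ICC, half held in reserve for the road
# itself and half placed in a loan fund for other railroads.
# The fair value and earnings figures below are hypothetical.

def recapture(fair_value, net_earnings, threshold=0.06):
    excess = max(0.0, net_earnings - threshold * fair_value)
    return {
        "excess_earnings": excess,
        "railroad_reserve_fund": excess / 2,   # held for the road's own future needs
        "icc_general_loan_fund": excess / 2,   # loans to railroads needing assistance
    }

example = recapture(fair_value=100_000_000, net_earnings=7_500_000)
print(example)  # excess of $1.5 million, split $750,000 / $750,000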

In order to address the problem of weak and strong railroads and to bring better coordination to the movement of rail traffic in the United States, the act directed the ICC to encourage railroad consolidation, but little came of this in the 1920s. In order to facilitate its control of the railroads, the ICC was given two additional powers. The first was the control over the issuance or purchase of securities by railroads, and the second was the power to control changes in railroad service through the control of car supply and the extension and abandonment of track. The control of the supply of rail cars was turned over to the Association of American Railroads. Few extensions of track were proposed, but as time passed, abandonment requests grew. The ICC, however, trying to mediate between the conflicting demands of shippers, communities, and railroads, generally refused to grant abandonments, and this became an extremely sensitive issue in the 1930s.

As indicated above, the premises of the Transportation Act of 1920 were wrong. Railroads experienced increasing competition during the 1920s, and both freight and passenger traffic were drawn off to competing transport forms. Passenger traffic exited from the railroads much more quickly. As the network of all weather surfaced roads increased, people quickly turned from the train to the car. Harmed even more by the move to automobile traffic were the electric interurban railways that had grown rapidly just prior to the First World War. (Hilton-Due, 1960) Not surprisingly, during the 1920s few railroads earned profits in excess of the fair rate of return.

The use of trucks to deliver freight began shortly after the turn of the century. Before the outbreak of war in Europe, White and Mack were producing trucks with as much as 7.5 tons of carrying capacity. Most of the truck freight was carried on a local basis, and it largely supplemented the longer distance freight transportation provided by the railroads. However, truck size was growing. In 1915 Trailmobile introduced the first four-wheel trailer designed to be pulled by a truck tractor unit. During the First World War, thousands of trucks were constructed for military purposes, and truck convoys showed that long distance truck travel was feasible and economical. The use of trucks to haul freight had been growing by over 18 percent per year since 1925, so that by 1929 intercity trucking accounted for more than one percent of the ton-miles of freight hauled.

The railroads argued that the trucks and buses provided “unfair” competition and believed that if they were also regulated, then the regulation could equalize the conditions under which they competed. As early as 1925, the National Association of Railroad and Utilities Commissioners issued a call for the regulation of motor carriers in general. In 1928 the ICC called for federal regulation of buses and in 1932 extended this call to federal regulation of trucks.

Most states had begun regulating buses at the beginning of the 1920s in an attempt to reduce the diversion of urban passenger traffic from the electric trolley and railway systems. However, most of the regulation did not aim to control intercity passenger traffic by buses. As the network of surfaced roads expanded during the twenties, so did the routes of the intercity buses. In 1929 a number of smaller bus companies were incorporated into the Greyhound Buslines, the carrier that has since dominated intercity bus transportation. (Walsh, 2000)

A complaint of the railroads was that interstate trucking competition was unfair because it was subsidized while railroads were not. All railroad property was privately owned and subject to property taxes, whereas truckers used the existing road system and therefore neither had to bear the costs of creating the road system nor pay taxes upon it. Beginning with the Federal Road-Aid Act of 1916, small amounts of money were provided as an incentive for states to construct rural post roads. (Dearing-Owen, 1949) However, through the First World War most of the funds for highway construction came from a combination of levies on the adjacent property owners and county and state taxes. The monies raised by the counties were commonly 60 percent of the total funds allocated, and these came primarily from property taxes. In 1919 Oregon pioneered the state gasoline tax, which then began to be adopted by more and more states. A highway system financed by property taxes and other levies can be construed as a subsidization of motor vehicles, and one study for the period up to 1920 found evidence of substantial subsidization of trucking. (Herbst-Wu, 1973) However, the use of gasoline taxes moved closer to the goal of users paying the costs of the highways. Nor did the trucks have to pay for all of the highway construction, because automobiles jointly used the highways. Highways had to be constructed in more costly ways in order to accommodate the larger and heavier trucks. Ideally the gasoline taxes collected from trucks should have covered the extra (or marginal) costs of highway construction incurred because of the truck traffic. Gasoline taxes tended to do this.
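
The cost-allocation logic here can be made concrete with a brief sketch: trucks are subsidized to the extent that the extra construction cost they impose exceeds the gasoline taxes they pay. All of the figures and names below are invented for illustration, not drawn from the historical record.

# Hypothetical sketch of the "users pay marginal cost" standard discussed above.

def truck_subsidy(cost_car_only_road, cost_truck_capable_road, truck_gas_tax_receipts):
    """Positive result = trucks subsidized; negative = trucks overpay."""
    marginal_cost_of_trucks = cost_truck_capable_road - cost_car_only_road
    return marginal_cost_of_trucks - truck_gas_tax_receipts

# A road costing $1.0 million if built for cars alone, $1.4 million if built
# to carry heavy trucks, with $0.3 million in gasoline taxes paid by trucks:
print(truck_subsidy(1_000_000, 1_400_000, 300_000))  # 100000 -> trucks subsidized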

The American economy occupies a vast geographic region. Because economic activity occurs over most of the country, falling transportation costs have been crucial to knitting American firms and consumers into a unified market. Throughout the nineteenth century the railroads played this crucial role. Because of the size of the railroad companies and their importance in the economic life of Americans, the federal government began to regulate them. But, by 1917 it appeared that the railroad system had achieved some stability, and it was generally assumed that the post-First World War era would be an extension of the era from 1900 to 1917. Nothing could have been further from the truth. Spurred by public investments in highways, cars and trucks voraciously ate into the railroads’ market, and, though the regulators failed to understand this at the time, the railroads’ monopoly on transportation quickly disappeared.

Communications

Communications had joined with transportation developments in the nineteenth century to tie the American economy together more completely. The telegraph had benefited by using the railroads’ right-of-ways, and the railroads used the telegraph to coordinate and organize their far-flung activities. As the cost of communications fell and information transfers sped, the development of firms with multiple plants at distant locations was facilitated. The interwar era saw a continuation of these developments as the telephone continued to supplant the telegraph and the new medium of radio arose to transmit news and provide a new entertainment source.

Telegraph domination of business and personal communications had given way to the telephone, as new electronic amplifiers made long distance telephone calls between the east and west coasts possible in 1915. The number of telegraph messages handled grew 60.4 percent in the twenties. The number of local telephone conversations grew 46.8 percent between 1920 and 1930, while the number of long distance conversations grew 71.8 percent over the same period. There were 5 times as many long distance telephone calls as telegraph messages handled in 1920, and 5.7 times as many in 1930.

The twenties were a prosperous period for AT&T and its 18 major operating companies. (Brooks, 1975; Temin, 1987; Garnet, 1985; Lipartito, 1989) Telephone usage rose and, as Figure 19 shows, the share of all households with a telephone rose from 35 percent to nearly 42 percent. In cities across the nation, AT&T consolidated its system, gained control of many operating companies, and virtually eliminated its competitors. It was able to do this because in 1921 Congress passed the Graham Act exempting AT&T from the Sherman Act in consolidating competing telephone companies. By 1940, the non-Bell operating companies were all small relative to the Bell operating companies.

Surprisingly there was a decline in telephone use on the farms during the twenties. (Hadwiger-Cochran, 1984; Fischer 1987) Rising telephone rates explain part of the decline in rural use. The imposition of connection fees during the First World War made it more costly for new farmers to hook up. As AT&T gained control of more and more operating systems, telephone rates were increased. AT&T also began requiring, as a condition of interconnection, that independent companies upgrade their systems to meet AT&T standards. Most of the small mutual companies that had provided service to farmers had operated on a shoestring—wires were often strung along fenceposts, and phones were inexpensive “whoop and holler” magneto units. Upgrading to AT&T’s standards raised costs, forcing these companies to raise rates.

However, it also seems likely that during the 1920s there was a general decline in the rural demand for telephone services. One important factor in this was the dramatic decline in farm incomes in the early twenties. The second reason was a change in the farmers’ environment. Prior to the First World War, the telephone eased farm isolation and provided news and weather information that was otherwise hard to obtain. After 1920 automobiles, surfaced roads, movies, and the radio loosened the isolation and the telephone was no longer as crucial.

Ottmar Mergenthaler’s development of the linotype machine in the late nineteenth century had irrevocably altered printing and publishing. This machine, which quickly created a line of soft, lead-based metal type that could be printed, melted down, and then recast as a new line of type, dramatically lowered the costs of printing. Previously, all type had to be painstakingly set by hand, with individual cast letter matrices picked out from compartments in drawers to construct words, lines, and paragraphs. After printing, each line of type on the page had to be broken down and each individual letter matrix placed back into its compartment in its drawer for use in the next printing job. Because of this laborious process, newspapers often were not published every day and did not contain many pages, and most cities supported several newspapers.

In contrast, the linotype used a keyboard upon which the operator typed the words in one of the lines in a news column. Matrices for each letter dropped down from a magazine of matrices as the operator typed each letter and were assembled into a line of type with automatic spacers to justify the line (fill out the column width). When the line was completed, the machine mechanically cast the line of matrices into a line of lead type. The line of lead type was ejected into a tray, and the letter matrices were mechanically returned to the magazine while the operator continued typing the next line in the news story. The first Mergenthaler linotype machine was installed in the New York Tribune in 1886. The linotype machine dramatically lowered the costs of printing newspapers (as well as books and magazines). Prior to the linotype a typical newspaper averaged no more than 11 pages and many were published only a few times a week. The linotype machine allowed newspapers to grow in size, and they began to be published more regularly. A process of consolidation of daily and Sunday newspapers began that continues to this day. Many have termed the Mergenthaler linotype machine the most significant printing invention since the introduction of movable type 400 years earlier.

For city families as well as farm families, radio became the new source of news and entertainment. (Barnouw, 1966; Rosen, 1980 and 1987; Chester-Garrison, 1950) It soon took over as the prime advertising medium and in the process revolutionized advertising. By 1930 more homes had radio sets than had telephones. The radio networks sent news and entertainment broadcasts all over the country. The isolation of rural life, particularly in many areas of the plains, was forever broken by the intrusion of the “black box,” as radio receivers were often called. The radio began a process of breaking down regionalism and creating a common culture in the United States.

The potential demand for radio became clear with the first regular broadcast of Westinghouse’s KDKA in Pittsburgh in the fall of 1920. Because the Department of Commerce could not deny a license application, there was an explosion of stations all broadcasting at the same frequency, and signal jamming and interference became a serious problem. By 1923 the Department of Commerce had gained control of radio from the Post Office and the Navy and began to arbitrarily disperse stations on the radio dial and deny licenses, creating the first market in commercial broadcast licenses. In 1926 a U.S. District Court decided that under the Radio Law of 1912 Herbert Hoover, the secretary of commerce, did not have this power. New stations appeared, and the logjam and interference of signals worsened. A Radio Act was passed in January of 1927 creating the Federal Radio Commission (FRC) as a temporary licensing authority. Licenses were to be issued in the public interest, convenience, and necessity. A number of broadcasting licenses were revoked; stations were assigned frequencies, dial locations, and power levels. The FRC created 24 clear-channel stations with as much as 50,000 watts of broadcasting power, of which 21 ended up being affiliated with the new national radio networks. The Communications Act of 1934 essentially repeated the 1927 act except that it created a permanent, seven-person Federal Communications Commission (FCC).

Local stations initially created and broadcast the radio programs. The expenses were modest, and stores and companies operating radio stations wrote this off as indirect, goodwill advertising. Several forces changed all this. In 1922, AT&T opened up a radio station in New York City, WEAF (later to become WNBC). AT&T envisioned this station as the center of a radio toll system where individuals could purchase time to broadcast a message transmitted to other stations in the toll network using AT&T’s long distance lines. An August 1922 broadcast by a Long Island realty company became the first conscious use of direct advertising.

Though advertising continued to be condemned, the fiscal pressures on radio stations to accept advertising began rising. In 1923 the American Society of Composers, Authors and Publishers (ASCAP) began demanding a performance fee anytime ASCAP-copyrighted music was performed on the radio, either live or on record. By 1924 the issue was settled, and most stations began paying performance fees to ASCAP. AT&T decided that all stations broadcasting with non-AT&T transmitters were violating its patent rights and began asking for annual fees from such stations based on the station’s power. By the end of 1924, most stations were paying the fees. All of this drained the coffers of the radio stations, and more and more of them began discreetly accepting advertising.

RCA became upset at AT&T’s creation of a chain of radio stations and set up its own toll network using the inferior lines of Western Union and Postal Telegraph, because AT&T, not surprisingly, did not allow any toll (or network) broadcasting on its lines except by its own stations. AT&T began to worry that its actions might threaten its federal monopoly in long distance telephone communications. In 1926 a new firm was created, the National Broadcasting Company (NBC), which took over all broadcasting activities from AT&T and RCA as AT&T left broadcasting. When NBC debuted in November of 1926, it had two networks: the Red, which was the old AT&T network, and the Blue, which was the old RCA network. Radio networks allowed advertisers to direct advertising at a national audience at a lower cost. Network programs allowed local stations to broadcast superior programs that captured a larger listening audience and in return received a share of the fees the national advertiser paid to the network. In 1927 a new network, the Columbia Broadcasting System (CBS), financed by the Paley family, began operation, and other new networks entered or tried to enter the industry in the 1930s.

Communications developments in the interwar era present something of a mixed picture. By 1920 long distance telephone service was in place, but rising rates slowed the rate of adoption in the period, and telephone use in rural areas declined sharply. Though direct dialing was first tried in the twenties, its general implementation would not come until the postwar era, when other changes, such as microwave transmission of signals and touch-tone dialing, would also appear. Though the number of newspapers declined, newspaper circulation generally held up. The number of competing newspapers in larger cities began declining, a trend that also would accelerate in the postwar American economy.

Banking and Securities Markets

In the twenties commercial banks became “department stores of finance.” Banks opened up installment (or personal) loan departments, expanded their mortgage lending, opened up trust departments, undertook securities underwriting activities, and offered safe deposit boxes. These changes were a response to growing competition from other financial intermediaries. Businesses, stung by bankers’ control and reduced lending during the 1920-21 depression, began relying more on retained earnings and stock and bond issues to raise investment and, sometimes, working capital. This reduced loan demand. The thrift institutions also experienced good growth in the twenties as they helped fuel the housing construction boom of the decade. The securities markets boomed in the twenties only to see a dramatic crash of the stock market in late 1929.

There were two broad classes of commercial banks: those that were nationally chartered and those that were chartered by the states. Only the national banks were required to be members of the Federal Reserve System. (Figure 21) Most banks were unit banks because national regulators and most state regulators prohibited branching. However, in the twenties a few states began to permit limited branching; California even allowed statewide branching. The Federal Reserve member banks held the bulk of the assets of all commercial banks, even though most banks were not members.

A high bank failure rate in the 1920s has usually been explained by “overbanking,” or too many banks located in an area, but H. Thomas Johnson (1973-74) makes a strong argument against this. (Figure 22) If there were overbanking, on average each bank would have been underutilized, resulting in intense competition for deposits, higher costs, and lower earnings; free entry of banks, so long as they met the minimum requirements then in force, could have produced such a situation. However, the twenties saw changes that led to the demise of many smaller rural banks that would likely have been profitable if these changes had not occurred. Improved transportation led to a movement of business activities, including banking, into the larger towns and cities. Rural banks that relied on loans to farmers suffered just as farmers did during the twenties, especially in the first half of the twenties. The number of bank suspensions and the suspension rate fell after 1926. The sharp rise in bank suspensions in 1930 occurred because of the first banking crisis during the Great Depression.

Prior to the twenties, the main assets of commercial banks were short-term business loans, made by creating a demand deposit or increasing an existing one for a borrowing firm. As business lending declined in the 1920s, commercial banks vigorously moved into new types of financial activities. As banks purchased more securities for their earning asset portfolios and gained expertise in the securities markets, larger ones established investment departments, and by the late twenties they were an important force in the underwriting of new securities issued by nonfinancial corporations.

The securities market exhibited perhaps the most dramatic growth of the noncommercial bank financial intermediaries during the twenties, but others also grew rapidly. (Figure 23) The assets of life insurance companies increased by 10 percent a year from 1921 to 1929; by the late twenties they were a very important source of funds for construction investment. Mutual savings banks and savings and loan associations (thrifts) operated in essentially the same types of markets. The mutual savings banks were concentrated in the northeastern United States. As incomes rose, personal savings increased, and housing construction expanded in the twenties, there was an increasing demand for the thrifts’ interest-earning time deposits and mortgage lending.

But the dramatic expansion in the financial sector came in new corporate securities issues in the twenties—especially common and preferred stock—and in the trading of existing shares of those securities. (Figure 24) The late twenties boom in the American economy was rapid, highly visible, and dramatic. Skyscrapers were being erected in most major cities; the automobile manufacturers produced over four and a half million new cars in 1929; and the stock market, like a barometer of this prosperity, was on a dizzying ride to higher and higher prices. “Playing the market” seemed to become a national pastime.

The Dow-Jones index hit its peak of 381 on September 3 and then slid to 320 on October 21. In the following week the stock market “crashed,” with a record number of shares being traded on several days. At the end of Tuesday, October 29, the index stood at 230, 96 points less than one week before. On November 13, 1929, the Dow-Jones index reached its lowest point for the year at 198, which was 183 points less than the September 3 peak.

The path of the stock market boom of the twenties can be seen in Figure 25. Sharp price breaks occurred several times during the boom, and each of these gave rise to dark predictions of the end of the bull market and speculation. Until late October of 1929, these predictions turned out to be wrong. Between those price breaks and prior to the October crash, stock prices continued to surge upward. In March of 1928, 3,875,910 shares were traded in one day, establishing a record. By late 1928, five million shares being traded in a day was a common occurrence.

New securities, from rising merger activity and the formation of holding companies, were issued to take advantage of the rising stock prices. Stock pools, which were not illegal until the Securities Exchange Act of 1934, took advantage of the boom to temporarily drive up the price of selected stocks and reap large gains for the members of the pool. In a stock pool a group of speculators would pool large amounts of their funds and then begin purchasing large amounts of shares of a stock. This increased demand led to rising prices for that stock. Frequently pool insiders would “churn” the stock by repeatedly buying and selling the same shares among themselves, but at rising prices. Outsiders, seeing the price rising, would decide to purchase the stock. At a predetermined higher price the pool members would, within a short period, sell their shares and pull out of the market for that stock. Without the additional demand from the pool, the stock’s price usually fell quickly, bringing large losses to the unsuspecting outside investors while the pool insiders reaped large gains.

Another factor commonly used to explain both the speculative boom and the October crash was the purchase of stocks on small margins. However, contrary to popular perception, margin requirements through most of the twenties were essentially the same as in previous decades. In the 1920s, as had been the case for decades before, the usual margin requirement was 10 to 15 percent of the purchase price, and apparently more often around 10 percent. Brokers, recognizing the problems with margin lending in the rapidly changing market, began raising margin requirements in late 1928; by the fall of 1928, well before the crash and at the urging of a special New York Clearinghouse committee, margin requirements had been raised to some of the highest levels in New York Stock Exchange history, and by the fall of 1929 they were the highest the exchange had seen. One brokerage house required the following of its clients: securities with a selling price below $10 could only be purchased for cash; securities with a selling price of $10 to $20 required a 50 percent margin; securities of $20 to $30, a 40 percent margin; and securities with a price above $30, a margin of 30 percent of the purchase price. In the first half of 1929 margin requirements on customers’ accounts averaged 40 percent, and some houses raised their margins to 50 percent a few months before the crash. These were, historically, very high margin requirements. (Smiley and Keehn, 1988) Even so, during the crash, when additional margin calls were issued, investors who could not provide additional margin saw their brokers sell their stock at whatever the market price was at the time, and these forced sales helped drive prices even lower.
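
The brokerage house schedule quoted above can be expressed as a simple rule. The sketch below follows that schedule; how the house treated prices exactly at the $10, $20, and $30 cutoffs is not stated in the source, so the boundary handling and the example purchase are assumptions.

# Sketch of the margin schedule quoted above for one brokerage house.
# The schedule comes from the text; boundary treatment and the example are assumed.

def required_margin(price_per_share, shares):
    """Return (margin_fraction, cash_required) under the quoted schedule."""
    if price_per_share < 10:
        fraction = 1.00           # securities under $10: cash only
    elif price_per_share < 20:
        fraction = 0.50
    elif price_per_share < 30:
        fraction = 0.40
    else:
        fraction = 0.30
    cost = price_per_share * shares
    return fraction, fraction * cost

# Buying 100 shares at $45 would require 30 percent down, i.e. $1,350,
# with the remaining $3,150 borrowed from the broker.
print(required_margin(45, 100))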

The crash began on Monday, October 21, as the index of stock prices fell 3 points on the third-largest volume in the history of the New York Stock Exchange. After a slight rally on Tuesday, prices began declining on Wednesday and fell 21 points by the end of the day, bringing on the third call for more margin in that week. On Black Thursday, October 24, prices initially fell sharply, but rallied somewhat in the afternoon so that the net loss was only 7 points; the volume of thirteen million shares, however, set a NYSE record. Friday brought a small gain that was wiped out on Saturday. On Monday, October 28, the Dow Jones index fell 38 points on a volume of nine million shares—three million in the final hour of trading. Black Tuesday, October 29, brought declines in virtually every stock price. Manufacturing firms, which had been lending large sums to brokers for margin loans, had been calling in these loans, and this accelerated on Monday and Tuesday. The big Wall Street banks increased their lending on call loans to offset some of this loss of loanable funds. The Dow Jones index fell 30 points on a record volume of nearly sixteen and a half million shares exchanged. Black Thursday and Black Tuesday wiped out entire fortunes.

Though the worst was over, prices continued to decline until November 13, 1929, as brokers cleaned up their accounts and sold off the stocks of clients who could not supply additional margin. After that, prices began to slowly rise, and by April of 1930 they had increased 96 points from the low of November 13, “only” 87 points less than the peak of September 3, 1929. From that point, stock prices resumed their depressing decline until the low point was reached in the summer of 1932.

There is a long tradition that insists that the Great Bull Market of the late twenties was an orgy of speculation that bid the prices of stocks far above any sustainable or economically justifiable level, creating a bubble in the stock market. John Kenneth Galbraith (1954) observed, “The collapse in the stock market in the autumn of 1929 was implicit in the speculation that went before.” But not everyone has agreed with this.

In 1930 Irving Fisher argued that the stock prices of 1928 and 1929 were based on fundamental expectations that future corporate earnings would be high. More recently, Murray Rothbard (1963), Gerald Gunderson (1976), and Jude Wanniski (1978) have argued that stock prices were not too high prior to the crash. Gunderson suggested that prior to 1929, stock prices were where they should have been and that when corporate profits in the summer and fall of 1929 failed to meet expectations, stock prices were written down. Wanniski argued that political events brought on the crash. The market broke each time news arrived of advances in congressional consideration of the Hawley-Smoot tariff. However, the virtually perfect foresight that Wanniski’s explanation requires is unrealistic. Charles Kindleberger (1973) and Peter Temin (1976) examined common stock yields and price-earnings ratios and found that their relative constancy did not suggest that stock prices were bid up unrealistically high in the late twenties. Gary Santoni and Gerald Dwyer (1990) also failed to find evidence of a bubble in stock prices in 1928 and 1929. Gerald Sirkin (1975) found that the implied growth rates of dividends required to justify stock prices in 1928 and 1929 were quite conservative and lower than post-Second World War dividend growth rates.

However, examination of after-the-fact common stock yields and price-earnings ratios can do no more than provide some ex post justification for suggesting that there was not excessive speculation during the Great Bull Market. Each individual investor was motivated by that person’s subjective expectations of each firm’s future earnings and dividends and of the future prices of shares of each firm’s stock. Because of this element of subjectivity, not only can we never accurately know those values, but we can also never know how they varied among individuals. The market price we observe will be the end result of all of the actions of the market participants, and the observed price may be different from the price almost all of the participants expected.

In fact, there are some indications that there were differences in 1928 and 1929. Yields on common stocks were somewhat lower in 1928 and 1929. In October of 1928, brokers generally began raising margin requirements, and by the beginning of the fall of 1929, margin requirements were, on average, the highest in the history of the New York Stock Exchange. Though the discount and commercial paper rates had moved closely with the call and time rates on brokers’ loans through 1927, the rates on brokers’ loans increased much more sharply in 1928 and 1929. This pulled in funds from corporations, private investors, and foreign banks as New York City banks sharply reduced their lending. These facts suggest that brokers and New York City bankers may have come to believe that stock prices had been bid above a sustainable level by late 1928 and early 1929. White (1990) created a quarterly index of dividends for firms in the Dow-Jones index and related this to the DJI. Through 1927 the two track closely, but in 1928 and 1929 the index of stock prices grows much more rapidly than the index of dividends.

The qualitative evidence for a bubble in the stock market in 1928 and 1929 that White assembled was strengthened by the findings of J. Bradford De Long and Andrei Shleifer (1991). They examined closed-end mutual funds, a type of fund whose shares are not redeemable, so that investors wishing to liquidate must sell their shares to other investors; because such a fund holds a portfolio of traded securities, its fundamental value can be measured exactly and compared with the price of the fund’s own shares. Using evidence from these funds, De Long and Shleifer estimated that in the summer of 1929, the Standard and Poor’s composite stock price index was overvalued about 30 percent due to excessive investor optimism. Rappoport and White (1993 and 1994) found other evidence that supported a bubble in the stock market in 1928 and 1929: a sharp divergence between the growth of stock prices and dividends; increasing premiums on call and time brokers’ loans in 1928 and 1929; rising margin requirements; and a rise in stock market volatility in the wake of the 1929 stock market crash.
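
The closed-end fund evidence rests on a simple calculation: because the fund’s portfolio is itself priced in the market, the fund’s net asset value per share is observable, and the percentage gap between the fund’s own share price and that value gauges investor optimism or pessimism. A minimal sketch follows; the prices and the function name are hypothetical.

# Sketch of the closed-end fund premium measure: the gap between a fund's
# share price and its net asset value (NAV) per share. Figures are hypothetical.

def premium(fund_share_price, net_asset_value_per_share):
    """Premium (positive) or discount (negative) relative to fundamentals."""
    return (fund_share_price - net_asset_value_per_share) / net_asset_value_per_share

# A fund trading at $65 per share while holding $50 of securities per share
# trades at a 30 percent premium, the kind of gap interpreted as excessive optimism.
print(f"{premium(65, 50):.0%}")  # 30%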

There are several reasons for the creation of such a bubble. First, the fundamental values of earnings and dividends become difficult to assess when there are major industrial changes, such as the rapid changes in the automobile industry, the new electric utilities, and the new radio industry. Eugene White (1990) suggests that “While investors had every reason to expect earnings to grow, they lacked the means to evaluate easily the future path of dividends.” As a result investors bid up prices as they were swept up in the ongoing stock market boom. Second, participation in the stock market widened noticeably in the twenties. The new investors were relatively unsophisticated, and they were more likely to be caught up in the euphoria of the boom and bid prices upward. New, inexperienced commission sales personnel were hired to sell stocks, and they promised glowing returns on stocks they knew little about.

These observations were strengthened by the experimental work of economist Vernon Smith. (Bishop, 1987) In a number of experiments over a three-year period using students and Tucson businessmen and businesswomen, bubbles developed as inexperienced investors valued stocks differently and engaged in price speculation. As these investors in the experiments began to realize that speculative profits were unsustainable and uncertain, their dividend expectations changed, the market crashed, and ultimately stocks began trading at their fundamental dividend values. These bubbles and crashes occurred repeatedly, leading Smith to conjecture that there are few regulatory steps that can be taken to prevent a crash.

Though the bubble of 1928 and 1929 made some downward adjustment in stock prices inevitable, as Barsky and De Long have shown, changes in fundamentals govern the overall movements, and the end of the long bull market was almost certainly governed by this. In late 1928 and early 1929 there was a striking rise in economic activity, but a decline began somewhere between May and July of 1929 and was clearly evident by August. By the middle of August, the rise in stock prices had slowed as better information on the contraction was received. There were repeated statements by leading figures that stocks were “overpriced,” and the Federal Reserve System sharply increased the discount rate in August 1929 as well as continuing its call for banks to reduce their margin lending. As this information was assessed, the number of speculators selling stocks increased, and the number buying decreased. With the decreased demand, stock prices began to fall, and as more accurate information on the nature and extent of the decline was received, stock prices fell further. The late October crash made the decline occur much more rapidly, and the margin purchases and consequent forced selling of many of those stocks contributed to a more severe price fall. The recovery of stock prices from November 13 into April of 1930 suggests that stock prices may have been driven somewhat too low during the crash.

There is now widespread agreement that the 1929 stock market crash did not cause the Great Depression. Instead, the initial downturn in economic activity was a primary determinant of the ending of the 1928-29 stock market bubble. The stock market crash did make the downturn become more severe beginning in November 1929. It reduced discretionary consumption spending (Romer, 1990) and created greater income uncertainty helping to bring on the contraction (Flacco and Parker, 1992). Though stock market prices reached a bottom and began to recover following November 13, 1929, the continuing decline in economic activity took its toll and by May 1930 stock prices resumed their decline and continued to fall through the summer of 1932.

Domestic Trade

In the nineteenth century, a complex array of wholesalers, jobbers, and retailers had developed, but changes in the postbellum period reduced the role of the wholesalers and jobbers and strengthened the importance of the retailers in domestic trade. (Cochran, 1977; Chandler, 1977; Marburg, 1951; Clewett, 1951) The appearance of the department store in the major cities and the rise of mail order firms in the postbellum period changed the retailing market.

Department Stores

A department store is a combination of specialty stores organized as departments within one general store. A. T. Stewart’s huge 1846 dry goods store in New York City is often referred to as the first department store. (Resseguie, 1965; Sobel-Sicilia, 1986) R. H. Macy started his dry goods store in 1858 and Wanamaker’s in Philadelphia opened in 1876. By the end of the nineteenth century, every city of any size had at least one major department store. (Appel, 1930; Benson, 1986; Hendrickson, 1979; Hower, 1946; Sobel, 1974) Until the late twenties, the department store field was dominated by independent stores, though some department stores in the largest cities had opened a few suburban branches and stores in other cities. In the interwar period department stores accounted for about 8 percent of retail sales.

The department stores relied on a “one-price” policy, which Stewart is credited with beginning. In the antebellum period and into the postbellum period, it was common not to post a specific price on an item; rather, each purchaser haggled with a sales clerk over what the price would be. Stewart posted fixed prices on the various dry goods sold, and the customer could either decide to buy or not buy at the fixed price. The policy dramatically lowered transactions costs for both the retailer and the purchaser. Prices were reduced with a smaller markup over the wholesale price, and a large sales volume and a quicker turnover of the store’s inventory generated profits.

Mail Order Firms

What changed the department store field in the twenties was the entrance of Sears Roebuck and Montgomery Ward, the two dominant mail order firms in the United States. (Emmet-Jeuck, 1950; Chandler, 1962, 1977) Both firms had begun in the late nineteenth century and by 1914 the younger Sears Roebuck had surpassed Montgomery Ward. Both located in Chicago due to its central location in the nation’s rail network and both had benefited from the advent of Rural Free Delivery in 1896 and low cost Parcel Post Service in 1912.

In 1924 Sears hired Robert E. Wood, who was able to convince Sears Roebuck to open retail stores. Wood believed that the declining rural population and the growing urban population forecast the gradual demise of the mail order business; survival of the mail order firms required a move into retail sales. By 1925 Sears Roebuck had opened 8 retail stores, and by 1929 it had 324 stores. Montgomery Ward quickly followed suit. Rather than locating these in the central business district (CBD), Wood located many on major streets closer to the residential areas. These moves of Sears Roebuck and Montgomery Ward expanded department store retailing and provided a new type of chain store.

Chain Stores

Though chain stores grew rapidly in the first two decades of the twentieth century, they date back to the 1860s when George F. Gilman and George Huntington Hartford opened a string of New York City A&P (Atlantic and Pacific) stores exclusively to sell tea. (Beckman-Nolen, 1938; Lebhar, 1963; Bullock, 1933) Stores were opened in other regions and in 1912 their first “cash-and-carry” full-range grocery was opened. Soon they were opening 50 of these stores each week and by the 1920s A&P had 14,000 stores. They then phased out the small stores to reduce the chain to 4,000 full-range, supermarket-type stores. A&P’s success led to new grocery store chains such as Kroger, Jewel Tea, and Safeway.

Prior to A&P’s cash-and-carry policy, it was common for grocery stores, produce (or green) grocers, and meat markets to provide home delivery and credit, both of which were costly. As a result, retail prices were generally marked up well above the wholesale prices. In cash-and-carry stores, items were sold only for cash; no credit was extended, and no expensive home deliveries were provided. Markups on prices could be much lower because other costs were much lower. Consumers liked the lower prices and were willing to pay cash and carry their groceries, and the policy became common by the twenties.

Chains also developed in other retail product lines. In 1879 Frank W. Woolworth developed a “5 and 10 Cent Store,” or dime store, and there were over 1,000 F. W. Woolworth stores by the mid-1920s. (Winkler, 1940) Other firms such as Kresge, Kress, and McCrory successfully imitated Woolworth’s dime store chain. J.C. Penney’s dry goods chain store began in 1901 (Beasley, 1948), Walgreen’s drug store chain began in 1909, and shoes, jewelry, cigars, and other lines of merchandise also began to be sold through chain stores.

Self-Service Policies

In 1916 Clarence Saunders, a grocer in Memphis, Tennessee, built upon the one-price policy and began offering self-service at his Piggly Wiggly store. Previously, customers handed a clerk a list or asked for the items desired, which the clerk then collected and the customer paid for. With self-service, items for sale were placed on open shelves among which the customers could walk, carrying a shopping bag or pushing a shopping cart. Each customer could then browse as he or she pleased, picking out whatever was desired. Saunders and other retailers who adopted the self-service method of retail selling found that customers often purchased more because of exposure to the array of products on the shelves; as well, self-service lowered the labor required for retail sales and therefore lowered costs.

Shopping Centers

Shopping centers, another retailing innovation of the twenties, were not destined to become a major force in retail development until after the Second World War. The ultimate cause of this innovation was the widening ownership and use of the automobile. By the 1920s, as ownership and use of the car expanded, population began to move out of the crowded central cities toward the more open suburbs. When General Robert Wood set Sears off on its development of urban stores, he located them not in the central business district but as free-standing stores on major arteries away from the CBD with sufficient space for parking.

At about the same time, a few entrepreneurs began to develop shopping centers. Yehoshua Cohen (1972) says, “The owner of such a center was responsible for maintenance of the center, its parking lot, as well as other services to consumers and retailers in the center.” Perhaps the earliest such shopping center was the Country Club Plaza built in 1922 by the J. C. Nichols Company in Kansas City, Missouri. Other early shopping centers appeared in Baltimore and Dallas. By the mid-1930s the concept of a planned shopping center was well known and was expected to be the means to capture the trade of the growing number of suburban consumers.

International Trade and Finance

In the twenties a gold exchange standard was developed to replace the gold standard of the prewar world. Under a gold standard, each country’s currency carried a fixed exchange rate with gold, and the currency had to be backed up by gold. As a result, all countries on the gold standard had fixed exchange rates with all other countries. Adjustments to balance international trade flows were made by gold flows. If a country had a deficit in its trade balance, gold would leave the country, forcing the money stock to decline and prices to fall. Falling prices made the deficit countries’ exports more attractive and imports more costly, reducing the deficit. Countries with a surplus imported gold, which increased the money stock and caused prices to rise. This made the surplus countries’ exports less attractive and imports more attractive, decreasing the surplus. Most economists who have studied the prewar gold standard contend that it did not work as the conventional textbook model says, because capital flows frequently reduced or eliminated the need for gold flows for long periods of time. However, there is no consensus on whether fortuitous circumstances, rather than the gold standard, saved the international economy from periodic convulsions or whether the gold standard as it did work was sufficient to promote stability and growth in international transactions.
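
The adjustment mechanism described above can be sketched numerically. The short Python fragment below is purely illustrative: the function simulate_adjustment, its starting values, and the deficit_sensitivity parameter are hypothetical assumptions layered on a crude quantity-theory link between the money stock and the price level, not a model of any actual country or period. It simply shows the direction of the adjustment, with a deficit draining gold, the money stock and price level falling, and the deficit shrinking.

def simulate_adjustment(money_stock=100.0, price_level=1.0,
                        trade_deficit=10.0, periods=10,
                        deficit_sensitivity=0.5):
    """Each period the deficit is settled in gold, shrinking the money stock;
    prices fall in proportion (crude quantity theory), and the lower price
    level makes exports more attractive, shrinking the deficit."""
    for t in range(periods):
        gold_outflow = trade_deficit                        # deficit settled in gold
        new_money = money_stock - gold_outflow
        new_prices = price_level * new_money / money_stock  # P moves with M
        price_drop = (price_level - new_prices) / price_level
        trade_deficit *= (1 - deficit_sensitivity * price_drop)
        money_stock, price_level = new_money, new_prices
        print(f"period {t+1:2d}: money={money_stock:6.1f} "
              f"prices={price_level:5.3f} deficit={trade_deficit:5.2f}")

simulate_adjustment()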

After the First World War it was argued that there was a “shortage” of fluid monetary gold to use for the gold standard, so some method of “economizing” on gold had to be found. To do this, two basic changes were made. First, most nations, other than the United States, stopped domestic circulation of gold. Second, the “gold exchange” system was created: most countries held their international reserves in the form of U.S. dollars or British pounds, and international transactions used dollars or pounds, so long as the United States and Great Britain stood ready to exchange their currencies for gold at fixed exchange rates. However, the overvaluation of the pound and the undervaluation of the franc threatened these arrangements. The British trade deficit led to a capital outflow, higher interest rates, and a weak economy. In the late twenties, the French trade surplus led to gold imports that France sterilized rather than allowing them to expand the money supply.

Economizing on gold by no longer allowing its domestic circulation and by using key currencies as international monetary reserves was really an attempt to place the domestic economies under the control of the nations’ politicians and make them independent of international events. Unfortunately, in doing this politicians eliminated the equilibrating mechanism of the gold standard but had nothing with which to replace it. The new international monetary arrangements of the twenties were potentially destabilizing because they were not allowed to operate as a price mechanism promoting equilibrating adjustments.

There were other problems with international economic activity in the twenties. Because of the war, the United States was abruptly transformed from a debtor to a creditor on international accounts. Though the United States did not want reparations payments from Germany, it did insist that Allied governments repay American loans. The Allied governments then insisted on war reparations from Germany. These initial reparations assessments were quite large. The Allied Reparations Commission collected the charges by supervising Germany’s foreign trade and by internal controls on the German economy, and it was authorized to increase the reparations if it was felt that Germany could pay more. The treaty allowed France to occupy the Ruhr after Germany defaulted in 1923.

Ultimately, this tangled web of debts and reparations, which was a major factor in the course of international trade, depended upon two principal actions. First, the United States had to run an import surplus or, on net, export capital out of the United States to provide a pool of dollars overseas. Germany then had either to have an export surplus or else import American capital so as to build up dollar reserves—that is, the dollars the United States was exporting. In effect, these dollars were paid by Germany to Great Britain, France, and other countries that then shipped them back to the United States as payment on their U.S. debts. If these conditions did not occur (and note that the “new” gold standard of the twenties had lost its flexibility because the price adjustment mechanism had been eliminated), disruption in international activity could easily occur and be transmitted to the domestic economies.

In the wake of the 1920-21 depression Congress passed the Emergency Tariff Act of 1921, which raised tariffs, particularly on agricultural products. (Figures 26 and 27) The Fordney-McCumber Tariff of 1922 continued the Emergency Tariff of 1921, and its protection on many items was extremely high, ranging from 60 to 100 percent ad valorem (or as a percent of the price of the item). The increases in the Fordney-McCumber tariff were as large as, and sometimes larger than, those of the more famous (or “infamous”) Smoot-Hawley tariff of 1930. As farm product prices fell at the end of the decade, presidential candidate Herbert Hoover proposed, as part of his platform, tariff increases and other changes to aid the farmers. In January 1929, after Hoover’s election but before he took office, a tariff bill was introduced into Congress. Special interests succeeded in gaining additional (or new) protection for most domestically produced commodities, and the goal of greater protection for the farmers tended to get lost in the increased protection for multitudes of American manufactured products. In spite of widespread condemnation by economists, President Hoover signed the Smoot-Hawley Tariff in June 1930, and rates rose sharply.

Following the First World War, the U.S. government actively promoted American exports, and in each of the postwar years through 1929, the United States recorded a surplus in its balance of trade. (Figure 28) However, the surplus declined in the 1930s as both exports and imports fell sharply after 1929. From the mid-1920s on finished manufactures were the most important exports, while agricultural products dominated American imports.

The majority of the funds that allowed Germany to make its reparations payments to France and Great Britain and hence allowed those countries to pay their debts to the United States came from the net flow of capital out of the United States in the form of direct investment in real assets and investments in long- and short-term foreign financial assets. After the devastating German hyperinflation of 1922 and 1923, the Dawes Plan reformed the German economy and currency and accelerated the U.S. capital outflow. American investors began to actively and aggressively pursue foreign investments, particularly loans (Lewis, 1938) and in the late twenties there was a marked deterioration in the quality of foreign bonds sold in the United States. (Mintz, 1951)

The system, then, worked well as long as there was a net outflow of American capital, but this did not continue. In the middle of 1928, the flow of short-term capital began to decline. In 1928 the flow of “other long-term” capital out of the United States was 752 million dollars, but in 1929 it was only 34 million dollars. Though arguments now exist as to whether the booming stock market in the United States was to blame for this, it had far-reaching effects on the international economic system and the various domestic economies.

The Start of the Depression

By 1920 the United States held about 40 percent of the world’s monetary gold, far more than any other country. In the latter part of the twenties, France also began accumulating gold as its share of the world’s monetary gold rose from 9 percent in 1927 to 17 percent in 1929 and 22 percent by 1931. In 1927 the Federal Reserve System had reduced discount rates (the interest rate at which they lent reserves to member commercial banks) and engaged in open market purchases (purchasing U.S. government securities on the open market to increase the reserves of the banking system) to push down interest rates and assist Great Britain in staying on the gold standard. By early 1928 the Federal Reserve System was worried about its loss of gold due to this policy as well as the ongoing boom in the stock market. It began to raise the discount rate to stop these outflows. At the same time, gold was entering the United States as foreigners obtained dollars to invest in stocks and bonds. As the United States and France accumulated more and more of the world’s monetary gold, other countries’ central banks took contractionary steps to stem the loss of gold. In country after country these deflationary strategies began contracting economic activity and by 1928 some countries in Europe, Asia, and South America had entered into a depression. More countries’ economies began to decline in 1929, including the United States, and by 1930 a depression was in force for almost all of the world’s market economies. (Temin, 1989; Eichengreen, 1992)

Monetary and Fiscal Policies in the 1920s

Fiscal Policies

As a tool to promote stability in aggregate economic activity, fiscal policy is largely a post-Second World War phenomenon. Prior to 1930 the federal government’s spending and taxing decisions were largely, but not completely, based on the perceived “need” for government-provided public goods and services.

Though the fiscal policy concept had not been developed, this does not mean that during the twenties no concept of the government’s role in stimulating economic activity existed. Herbert Stein (1990) points out that in the twenties Herbert Hoover and some of his contemporaries shared two ideas about the proper role of the federal government: that federal spending on public works could be an important force in reducing unemployment during downturns, and that such spending could be timed to offset declines in private investment. Both concepts fit the ideas held by Hoover and others of his persuasion that the U.S. economy of the twenties was not the result of laissez-faire workings but of “deliberate social engineering.”

The federal personal income tax was enacted in 1913. Though mildly progressive, its rates were low and topped out at 7 percent on taxable income in excess of $750,000. (Table 4) As the United States prepared for war in 1916, rates were increased and reached a maximum marginal rate of 12 percent. With the onset of the First World War, the rates were dramatically increased. To obtain additional revenue in 1918, marginal rates were again increased. The share of federal revenue generated by income taxes rose from 11 percent in 1914 to 69 percent in 1920. The tax rates had been extended downward so that more than 30 percent of the nation’s income recipients were subject to income taxes by 1918. However, through the purchase of tax exempt state and local securities and through steps taken by corporations to avoid the cash distribution of profits, the number of high income taxpayers and their share of total taxes paid declined as Congress kept increasing the tax rates. The normal (or base) tax rate was reduced slightly for 1919 but the surtax rates, which made the income tax highly progressive, were retained. (Smiley-Keehn, 1995)
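
A marginal-rate schedule such as the one described above taxes only the income falling within each bracket, so a filer’s average rate is always below the top statutory rate. The Python sketch below illustrates the arithmetic; the bracket schedule and the helper function tax_owed are hypothetical illustrations for this purpose, not the actual 1913 or wartime rate tables.

# Hypothetical bracket schedule for illustration only -- not the actual
# 1913 or 1920s rate tables. Each tuple is (lower bound of bracket, rate).
BRACKETS = [(0, 0.01), (20_000, 0.02), (50_000, 0.04), (750_000, 0.07)]

def tax_owed(taxable_income):
    """Apply each marginal rate only to the income falling in its bracket."""
    owed = 0.0
    for i, (lower, rate) in enumerate(BRACKETS):
        upper = BRACKETS[i + 1][0] if i + 1 < len(BRACKETS) else float("inf")
        if taxable_income > lower:
            owed += (min(taxable_income, upper) - lower) * rate
    return owed

# A filer with $1,000,000 of taxable income pays the top rate only on the
# portion above $750,000, so the average rate stays well below the top rate.
income = 1_000_000
print(tax_owed(income), tax_owed(income) / income)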

President Harding’s new Secretary of the Treasury, Andrew Mellon, proposed cutting the tax rates, arguing that the rates in the higher brackets had “passed the point of productivity” and rates in excess of 70 percent simply could not be collected. Though most agreed that the rates were too high, there was sharp disagreement on how the rates should be cut. Democrats and Progressive Republicans argued for rate cuts targeted at lower-income taxpayers while maintaining most of the steep progressivity of the tax rates. They believed that remedies could be found to change the tax laws to stop the legal avoidance of federal income taxes. Republicans argued for sharper cuts that reduced the progressivity of the rates. Mellon proposed a maximum rate of 25 percent. (Smiley and Keehn, 1995)

Though the federal income tax rates were reduced and made less progressive, it took three tax rate cuts in 1921, 1924, and 1925 before Mellon’s goal was finally achieved. The highest marginal tax rate was reduced from 73 percent to 58 percent to 46 percent and finally to 25 percent for the 1925 tax year. All of the other rates were also reduced and exemptions increased. By 1926, only about the top 10 percent of income recipients were subject to federal income taxes. As tax rates were reduced, the number of high income tax returns increased and the share of total federal personal income taxes paid rose. (Tables 5 and 6) Even with the dramatic income tax rate cuts and reductions in the number of low income taxpayers, federal personal income tax revenue continued to rise during the 1920s. Though early estimates of the distribution of personal income showed sharp increases in income inequality during the 1920s (Kuznets, 1953; Holt, 1977), more recent estimates have found that the increases in inequality were considerably less and these appear largely to be related to the sharp rise in capital gains due to the booming stock market in the late twenties. (Smiley, 1998 and 2000)

Each year in the twenties the federal government generated a surplus, in some years as much as 1 percent of GNP. The surpluses were used to reduce the federal debt, which declined by 25 percent between 1920 and 1930. Contrary to simple macroeconomic models that argue a federal government budget surplus must be contractionary and tend to stop an economy from reaching full employment, the American economy operated at full employment or close to it throughout the twenties and saw significant economic growth. In this case, the surpluses were not contractionary because the dollars were circulated back into the economy through the purchase of outstanding federal debt rather than pulled out as currency and held in a vault somewhere.

Monetary Policies

In 1913 fear of the “money trust” and its monopoly power led Congress to create 12 district central banks when it created the Federal Reserve System. The new central banks were to control money and credit and act as lenders of last resort to end banking panics. The role of the Federal Reserve Board, located in Washington, D.C., was to coordinate the policies of the 12 district banks; it was composed of five presidential appointees plus the current secretary of the treasury and comptroller of the currency. All national banks had to become members of the Federal Reserve System, the Fed, and any state bank meeting the qualifications could elect to do so.

The act specified fixed reserve requirements on demand and time deposits, all of which had to be on deposit in the district bank. Member banks were allowed to rediscount eligible commercial paper at their district bank and receive Federal Reserve currency in exchange. Initially, each district bank set its own rediscount rate. To provide additional income when there was little rediscounting, the district banks were allowed to engage in open market operations that involved the purchasing and selling of federal government securities, short-term securities of state and local governments issued in anticipation of taxes, foreign exchange, and domestic bills of exchange. The district banks were also designated to act as fiscal agents for the federal government. Finally, the Federal Reserve System provided a central check clearinghouse for the entire banking system.

When the Federal Reserve System was originally set up, it was believed that its primary role was to be a lender of last resort to prevent banking panics and become a check-clearing mechanism for the nation’s banks. Both the Federal Reserve Board and the Governors of the District Banks were bodies established to jointly exercise these activities. The division of functions was not clear, and a struggle for power ensued, mainly between the New York Federal Reserve Bank, which was led by J. P. Morgan’s protege, Benjamin Strong, through 1928, and the Federal Reserve Board. By the thirties the Federal Reserve Board had achieved dominance.

There were really two conflicting criteria upon which monetary actions were ostensibly based: the Gold Standard and the Real Bills Doctrine. The Gold Standard was supposed to be quasi-automatic, with an effective limit to the quantity of money. However, the Real Bills Doctrine (which required that all loans be made on short-term, self-liquidating commercial paper) had no effective limit on the quantity of money. The rediscounting of eligible commercial paper was supposed to lead to the required “elasticity” of the stock of money to “accommodate” the needs of industry and business. Actually the rediscounting of commercial paper, open market purchases, and gold inflows all had the same effects on the money stock.

The 1920-21 Depression

During the First World War, the Fed kept discount rates low and granted discounts on banks’ customer loans used to purchase war bonds in order to help finance the war. The final Victory Loan had not been floated when the Armistice was signed in November of 1918; in fact, it took until October of 1919 for the government to fully sell this last loan issue. The Treasury, with the secretary of the treasury sitting on the Federal Reserve Board, persuaded the Federal Reserve System to maintain low interest rates and discount the Victory bonds necessary to keep bond prices high until this last issue had been floated. As a result, during this period the money supply grew rapidly and prices rose sharply.

A shift from a federal deficit to a surplus and supply disruptions due to steel and coal strikes in 1919 and a railroad strike in early 1920 contributed to the end of the boom. But the most common view is that the Fed’s monetary policy was the main determinant of the end of the expansion and inflation and the beginning of the subsequent contraction and severe deflation. When the Fed was released from its informal agreement with the Treasury in November of 1919, it raised the discount rate from 4 to 4.75 percent. Benjamin Strong (the governor of the New York bank) was beginning to believe that the time for strong action was past and that the Federal Reserve System’s actions should be moderate. However, with Strong out of the country, the Federal Reserve Board increased the discount rate from 4.75 to 6 percent in late January of 1920 and to 7 percent on June 1, 1920. By the middle of 1920, economic activity and employment were rapidly falling, and prices had begun their downward spiral in one of the sharpest price declines in American history. The Federal Reserve System kept the discount rate at 7 percent until May 5, 1921, when it was lowered to 6.5 percent. By June of 1922, the rate had been lowered yet again to 4 percent. (Friedman and Schwartz, 1963)

The Federal Reserve System authorities received considerable criticism then and later for their actions. Milton Friedman and Anna Schwartz (1963) contend that the discount rate was raised too much too late and then kept too high for too long, causing the decline to be more severe and the price deflation to be greater. In their opinion the Fed acted in this manner due to the necessity of meeting the legal reserve requirement with a safe margin of gold reserves. Elmus Wicker (1966), however, argues that the gold reserve ratio was not the main factor determining the Federal Reserve policy in the episode. Rather, the Fed knowingly pursued a deflationary policy because it felt that the money supply was simply too large and prices too high. To return to the prewar parity for gold required lowering the price level, and there was an excessive stock of money because the additional money had been used to finance the war, not to produce consumer goods. Finally, the outstanding indebtedness was too large due to the creation of Fed credit.

Whether statutory gold reserve requirements to maintain the gold standard or domestic credit conditions were the most important determinant of Fed policy is still an open question, though both certainly had some influence. Regardless of the answer to that question, the Federal Reserve System’s first major undertaking in the years immediately following the First World War demonstrated poor policy formulation.

Federal Reserve Policies from 1922 to 1930

By 1921 the district banks began to recognize that their open market purchases had effects on interest rates, the money stock, and economic activity. For the next several years, economists in the Federal Reserve System discussed how this worked and how it could be related to discounting by member banks. A committee was created to coordinate the open market purchases of the district banks.

The recovery from the 1920-1921 depression had proceeded smoothly with moderate price increases. In early 1923 the Fed sold some securities and increased the discount rate from 4 percent because it believed the recovery was too rapid. However, by the fall of 1923 there were some signs of a business slump. McMillin and Parker (1994) argue that this contraction, as well as the 1927 contraction, was related to oil price shocks. By October of 1923 Benjamin Strong was advocating securities purchases to counter this. Between then and September 1924 the Federal Reserve System increased its securities holdings by over $500 million. Between April and August of 1924 the Fed reduced the discount rate to 3 percent in a series of three separate steps. In addition to moderating the mild business slump, the expansionary policy was also intended to reduce American interest rates relative to British interest rates. This reversed the gold flow back toward Great Britain, allowing Britain to return to the gold standard in 1925. At the time it appeared that the Fed’s monetary policy had successfully accomplished its goals.

By the summer of 1924 the business slump was over and the economy again began to grow rapidly. By the mid-1920s real estate speculation had arisen in many urban areas in the United States and especially in Southeastern Florida. Land prices were rising sharply. Stock market prices had also begun rising more rapidly. The Fed expressed some worry about these developments and in 1926 sold some securities to gently slow the real estate and stock market boom. Amid hurricanes and supply bottlenecks the Florida real estate boom collapsed but the stock market boom continued.

The American economy entered into another mild business recession in the fall of 1926 that lasted until the fall of 1927. One of the factors in this was Henry Ford’s shutdown of all of his factories to change over from the Model T to the Model A. His employees were left without a job and without income for over six months. International concerns also reappeared. France, which was preparing to return to the gold standard, had begun accumulating gold, and gold continued to flow into the United States. Some of this gold came from Great Britain, making it difficult for the British to remain on the gold standard. This occasioned a new experiment in central bank cooperation. In July 1927 Benjamin Strong arranged a conference with Governor Montagu Norman of the Bank of England, Governor Hjalmar Schacht of the Reichsbank, and Deputy Governor Charles Rist of the Bank of France in an attempt to promote cooperation among the world’s central bankers. By the time the conference began the Fed had already taken steps to counteract the business slump and reduce the gold inflow. In early 1927 the Fed reduced discount rates and made large securities purchases. One result of this was that the gold stock fell from $4.3 billion in mid-1927 to $3.8 billion in mid-1928. Some of the gold exports went to France, and France returned to the gold standard with its undervalued currency. The loss of gold from Britain eased, allowing it to maintain the gold standard.

By early 1928 the Fed was again becoming worried. Stock market prices were rising even faster and the apparent speculative bubble in the stock market was of some concern to Fed authorities. The Fed was also concerned about the loss of gold and wanted to bring that to an end. To do this they sold securities and, in three steps, raised the discount rate to 5 percent by July 1928. To this point the Federal Reserve Board had largely agreed with district Bank policy changes. However, problems began to develop.

During the stock market boom of the late 1920s the Federal Reserve Board preferred to use “moral suasion” rather than increases in discount rates to lessen member bank borrowing. The New York City bank insisted that moral suasion would not work unless backed up by literal credit rationing on a bank-by-bank basis, which it and the other district banks were unwilling to do. It insisted that discount rates had to be increased. The Federal Reserve Board countered that this general policy change would slow down economic activity in general rather than be specifically targeted at stock market speculation. The result was that little was done for a year: rates were not raised, but no open market purchases were undertaken either. Rates were finally raised to 6 percent in August of 1929. By that time the contraction had already begun. In late October the stock market crashed, and America slid into the Great Depression.

In November, following the stock market crash, the Fed reduced discount rates to 4.5 percent. In January it again decreased discount rates and began a series of discount rate decreases until the rate reached 2.5 percent at the end of 1930. No further open market operations were undertaken for the next six months. As banks reduced their discounting in 1930, the stock of money declined. There was a banking crisis in the Southeast in November and December of 1930, and in its wake the public’s holding of currency relative to deposits and banks’ reserve ratios began to rise and continued to do so through the end of the Great Depression.

Conclusion

Though some disagree, there is growing evidence that the behavior of the American economy in the 1920s did not cause the Great Depression. The depressed 1930s were not “retribution” for the exuberant growth of the 1920s. The weakness of a few economic sectors in the 1920s did not forecast the contraction from 1929 to 1933. Rather, it was the depression of the 1930s and the Second World War that interrupted the economic growth begun in the 1920s; that growth resumed after the war. Just as the construction of skyscrapers that began in the 1920s resumed in the 1950s, so did real economic growth and progress. In retrospect we can see that the introduction and expansion of new technologies and industries in the 1920s, such as autos, household electric appliances, radio, and electric utilities, are echoed in the 1990s in the effects of the expanding use and development of the personal computer and the rise of the internet. The 1920s have much to teach us about the growth and development of the American economy.

References

Adams, Walter, ed. The Structure of American Industry, 5th ed. New York: Macmillan Publishing Co., 1977.

Aldcroft, Derek H. From Versailles to Wall Street, 1919-1929. Berkeley: The University of California Press, 1977.

Allen, Frederick Lewis. Only Yesterday. New York: Harper and Sons, 1931.

Alston, Lee J. “Farm Foreclosures in the United States During the Interwar Period.” The Journal of Economic History 43, no. 4 (1983): 885-904.

Alston, Lee J., Wayne A. Grove, and David C. Wheelock. “Why Do Banks Fail? Evidence from the 1920s.” Explorations in Economic History 31 (1994): 409-431.

Ankli, Robert. “Horses vs. Tractors on the Corn Belt.” Agricultural History 54 (1980): 134-148.

Ankli, Robert and Alan L. Olmstead. “The Adoption of the Gasoline Tractor in California.” Agricultural History 55 (1981): 213-230.

Appel, Joseph H. The Business Biography of John Wanamaker, Founder and Builder. New York: The Macmillan Co., 1930.

Baker, Jonathan B. “Identifying Cartel Pricing Under Uncertainty: The U.S. Steel Industry, 1933-1939.” The Journal of Law and Economics 32 (1989): S47-S76.

Barger, E. L, et al. Tractors and Their Power Units. New York: John Wiley and Sons, 1952.

Barnouw, Erik. A Tower in Babel: A History of Broadcasting in the United States: Vol. I—to 1933. New York: Oxford University Press, 1966.

Barnouw, Erik. The Golden Web: A History of Broadcasting in the United States: Vol. II—1933 to 1953. New York: Oxford University Press, 1968.

Beasley, Norman. Main Street Merchant: The Story of the J. C. Penney Company. New York: Whittlesey House, 1948.

Beckman, Theodore N. and Herman C. Nolen. The Chain Store Problem: A Critical Analysis. New York: McGraw-Hill Book Co., 1938.

Benson, Susan Porter. Counter Cultures: Saleswomen, Managers, and Customers in American Department Stores, 1890-1940. Urbana, IL: University of Illinois Press, 1986.

Bernstein, Irving. The Lean Years: A History of the American Worker, 1920-1933. Boston: Houghton Mifflin Co., 1960.

Bernstein, Michael A. The Great Depression: Delayed Recovery and Economic Change in America, 1929-1939. New York: Cambridge University Press, 1987.

Bishop, Jerry E. “Stock Market Experiment Suggests Inevitability of Booms and Busts.” The Wall Street Journal, 17 November, 1987.

Board of Governors of the Federal Reserve System. Banking and Monetary Statistics. Washington: USGPO, 1943.

Bogue, Allan G. “Changes in Mechanical and Plant Technology: The Corn Belt, 1910-1940.” The Journal of Economic History 43 (1983): 1-26.

Breit, William and Kenneth Elzinga. The Antitrust Casebook: Milestones in Economic Regulation, 2d ed. Chicago: The Dryden Press, 1989.

Bright, Arthur A., Jr. The Electric Lamp Industry: Technological Change and Economic Development from 1800 to 1947. New York: Macmillan, 1947.

Brody, David. Labor in Crisis: The Steel Strike. Philadelphia: J. B. Lippincott Co., 1965.

Brooks, John. Telephone: The First Hundred Years. New York: Harper and Row, 1975.

Brown, D. Clayton. Electricity for Rural America: The Fight for the REA. Westport, CT: The Greenwood Press, 1980.

Brown, William A., Jr. The International Gold Standard Reinterpreted, 1914-1934, 2 vols. New York: National Bureau of Economic Research, 1940.

Brunner, Karl and Allan Meltzer. “What Did We Learn from the Monetary Experience of the United States in the Great Depression?” Canadian Journal of Economics 1 (1968): 334-48.

Bryant, Keith L., Jr., and Henry C. Dethloff. A History of American Business. Englewood Cliffs, NJ: Prentice-Hall, Inc., 1983.

Bucklin, Louis P. Competition and Evolution in the Distributive Trades. Englewood Cliffs, NJ: Prentice-Hall, 1972.

Bullock, Roy J. “The Early History of the Great Atlantic & Pacific Tea Company,” Harvard Business Review 11 (1933): 289-93.

Bullock, Roy J. “A History of the Great Atlantic & Pacific Tea Company Since 1878.” Harvard Business Review 12 (1933): 59-69.

Cecchetti, Stephen G. “Understanding the Great Depression: Lessons for Current Policy.” In The Economics of the Great Depression, Edited by Mark Wheeler. Kalamazoo, MI: W. E. Upjohn Institute for Employment Research, 1998.

Chandler, Alfred D., Jr. Strategy and Structure: Chapters in the History of the American Industrial Enterprise. Cambridge, MA: The M.I.T. Press, 1962.

Chandler, Alfred D., Jr. The Visible Hand: The Managerial Revolution in American Business. Cambridge, MA: The Belknap Press of Harvard University Press, 1977.

Chandler, Alfred D., Jr. Giant Enterprise: Ford, General Motors, and the American Automobile Industry. New York: Harcourt, Brace, and World, 1964.

Chester, Giraud, and Garnet R. Garrison. Radio and Television: An Introduction. New York: Appleton-Century Crofts, 1950.

Clewett, Richard C. “Mass Marketing of Consumers’ Goods.” In The Growth of the American Economy, 2d ed. Edited by Harold F. Williamson. Englewood Cliffs, NJ: Prentice-Hall, 1951.

Cochran, Thomas C. 200 Years of American Business. New York: Delta Books, 1977.

Cohen, Yehoshua S. Diffusion of an Innovation in an Urban System: The Spread of Planned Regional Shopping Centers in the United States, 1949-1968. Chicago: The University of Chicago, Department of Geography, Research Paper No. 140, 1972.

Daley, Robert. An American Saga: Juan Trippe and His Pan American Empire. New York: Random House, 1980.

Clarke, Sally. “New Deal Regulation and the Revolution in American Farm Productivity: A Case Study of the Diffusion of the Tractor in the Corn Belt, 1920-1940.” The Journal of Economic History 51, no. 1 (1991): 105-115.

Cohen, Avi. “Technological Change as Historical Process: The Case of the U.S. Pulp and Paper Industry, 1915-1940.” The Journal of Economic History 44 (1984): 775-79.

Davies, R. E. G. A History of the World’s Airlines. London: Oxford University Press, 1964.

Dearing, Charles L., and Wilfred Owen. National Transportation Policy. Washington: The Brookings Institution, 1949.

Degen, Robert A. The American Monetary System: A Concise Survey of Its Evolution Since 1896. Lexington, MA: Lexington Books, 1987.

De Long, J. Bradford and Andrei Shleifer. “The Stock Market Bubble of 1929: Evidence from Closed-end Mutual Funds.” The Journal of Economic History 51 (September 1991): 675-700.

Devine, Warren D., Jr. “From Shafts to Wires: Historical Perspectives on Electrification.” The Journal of Economic History 43 (1983): 347-372.

Eckert, Ross D., and George W. Hilton. “The Jitneys.” The Journal of Law and Economics 15 (October 1972): 293-326.

Eichengreen, Barry, ed. The Gold Standard in Theory and History. New York: Methuen, 1985.

Eichengreen, Barry. “The Political Economy of the Smoot-Hawley Tariff.” Research in Economic History 12 (1989): 1-43.

Eichengreen, Barry. Golden Fetters: The Gold Standard and the Great Depression, 1919-1939. New York: Oxford University Press, 1992.

Eis, Carl. “The 1919-1930 Merger Movement in American Industry.” The Journal of Law and Economics XII (1969): 267-96.

Emmet, Boris, and John E. Jeuck. Catalogues and Counters: A History of Sears Roebuck and Company. Chicago: University of Chicago Press, 1950.

Fearon, Peter. War, Prosperity, & Depression: The U.S. Economy, 1917-1945. Lawrence, KS: University of Kansas Press, 1987.

Field, Alexander J. “The Most Technologically Progressive Decade of the Century.” The American Economic Review 93 (2003): 1399-1413.

Fischer, Claude. “The Revolution in Rural Telephony, 1900-1920.” Journal of Social History 21 (1987): 221-38.

Fischer, Claude. “Technology’s Retreat: The Decline of Rural Telephony in the United States, 1920-1940.” Social Science History 11 (Fall 1987): 295-327.

Fisher, Irving. The Stock Market Crash—and After. New York: Macmillan, 1930.

French, Michael J. “Structural Change and Competition in the United States Tire Industry, 1920-1937.” Business History Review 60 (1986): 28-54.

French, Michael J. The U.S. Tire Industry. Boston: Twayne Publishers, 1991.

Fricke, Ernest B. “The New Deal and the Modernization of Small Business: The McCreary Tire and Rubber Company, 1930-1940.” Business History Review 56 (1982): 559-76.

Friedman, Milton, and Anna J. Schwartz. A Monetary History of the United States, 1867-1960. Princeton: Princeton University Press, 1963.

Galbraith, John Kenneth. The Great Crash. Boston: Houghton Mifflin, 1954.

Garnet, Robert W. The Telephone Enterprise: The Evolution of the Bell System’s Horizontal Structure, 1876-1900. Baltimore: The Johns Hopkins University Press, 1985.

Gideonse, Max. “Foreign Trade, Investments, and Commercial Policy.” In The Growth of the American Economy, 2d ed. Edited by Harold F. Williamson. Englewood Cliffs, NJ: Prentice-Hall, 1951.

Giedion, Simon. Mechanization Takes Command. New York: Oxford University Press, 1948.

Gordon, Robert Aaron. Economic Instability and Growth: The American Record. New York: Harper and Row, 1974.

Gray, Roy Burton. Development of the Agricultural Tractor in the United States, 2 vols. Washington, D. C.: USGPO, 1954.

Gunderson, Gerald. An Economic History of America. New York: McGraw-Hill, 1976.

Hadwiger, Don F., and Clay Cochran. “Rural Telephones in the United States.” Agricultural History 58 (July 1984): 221-38.

Hamilton, James D. “Monetary Factors in the Great Depression.” Journal of Monetary Economics 19 (1987): 145-169.

Hamilton, James D. “The Role of the International Gold Standard in Propagating the Great Depression.” Contemporary Policy Issues 6 (1988): 67-89.

Hayek, Friedrich A. Prices and Production. New York: Augustus M. Kelley reprint of 1931 edition.

Hayford, Marc and Carl A. Pasurka, Jr. “The Political Economy of the Fordney-McCumber and Smoot-Hawley Tariff Acts.” Explorations in Economic History 29 (1992): 30-50.

Hendrickson, Robert. The Grand Emporiums: The Illustrated History of America’s Great Department Stores. Briarcliff Manor, NY: Stein and Day, 1979.

Herbst, Anthony F., and Joseph S. K. Wu. “Some Evidence of Subsidization of the U.S. Trucking Industry, 1900-1920.” The Journal of Economic History 33 (June 1973): 417-33.

Higgs, Robert. Crisis and Leviathan: Critical Episodes in the Growth of American Government. New York: Oxford University Press, 1987.

Hilton, George W., and John Due. The Electric Interurban Railways in America. Stanford: Stanford University Press, 1960.

Hoffman, Elizabeth and Gary D. Libecap. “Institutional Choice and the Development of U.S. Agricultural Policies in the 1920s.” The Journal of Economic History 51 (1991): 397-412.

Holt, Charles F. “Who Benefited from the Prosperity of the Twenties?” Explorations in Economic History 14 (1977): 277-289.

Hower, Ralph W. History of Macy’s of New York, 1858-1919. Cambridge, MA: Harvard University Press, 1946.

Hubbard, R. Glenn, Ed. Financial Markets and Financial Crises. Chicago: University of Chicago Press, 1991.

Hunter, Louis C. “Industry in the Twentieth Century.” In The Growth of the American Economy, 2d ed., edited by Harold F. Williamson. Englewood Cliffs, NJ: Prentice-Hall, 1951.

Jerome, Harry. Mechanization in Industry. New York: National Bureau of Economic Research, 1934.

Johnson, H. Thomas. “Postwar Optimism and the Rural Financial Crisis.” Explorations in Economic History 11, no. 2 (1973-1974): 173-192.

Jones, Fred R. and William H. Aldred. Farm Power and Tractors, 5th ed. New York: McGraw-Hill, 1979.

Keller, Robert. “Factor Income Distribution in the United States During the 20’s: A Reexamination of Fact and Theory.” The Journal of Economic History 33 (1973): 252-95.

Kelly, Charles J., Jr. The Sky’s the Limit: The History of the Airlines. New York: Coward-McCann, 1963.

Kindleberger, Charles. The World in Depression, 1929-1939. Berkeley: The University of California Press, 1973.

Klebaner, Benjamin J. Commercial Banking in the United States: A History. New York: W. W. Norton and Co., 1974.

Kuznets, Simon. Shares of Upper Income Groups in Income and Savings. New York: NBER, 1953.

Lebhar, Godfrey M. Chain Stores in America, 1859-1962. New York: Chain Store Publishing Corp., 1963.

Lewis, Cleona. America’s Stake in International Investments. Washington: The Brookings Institution, 1938.

Livesay, Harold C. and Patrick G. Porter. “Vertical Integration in American Manufacturing, 1899-1948.” The Journal of Economic History 29 (1969): 494-500.

Lipartito, Kenneth. The Bell System and Regional Business: The Telephone in the South, 1877-1920. Baltimore: The Johns Hopkins University Press, 1989.

Liu, Tung, Gary J. Santoni, and Courteney C. Stone. “In Search of Stock Market Bubbles: A Comment on Rappoport and White.” The Journal of Economic History 55 (1995): 647-654.

Lorant, John. “Technological Change in American Manufacturing During the 1920s.” The Journal of Economic History 33 (1967): 243-47.

McDonald, Forrest. Insull. Chicago: University of Chicago Press, 1962.

Marburg, Theodore. “Domestic Trade and Marketing.” In The Growth of the American Economy, 2d ed. Edited by Harold F. Williamson. Englewood Cliffs, NJ: Prentice-Hall, 1951.

Markham, Jesse. “Survey of the Evidence and Findings on Mergers.” In Business Concentration and Price Policy, National Bureau of Economic Research. Princeton: Princeton University Press, 1955.

Markin, Rom J. The Supermarket: An Analysis of Growth, Development, and Change. Rev. ed. Pullman, WA: Washington State University Press, 1968.

McCraw, Thomas K. TVA and the Power Fight, 1933-1937. Philadelphia: J. B. Lippincott, 1971.

McCraw, Thomas K. and Forest Reinhardt. “Losing to Win: U.S. Steel’s Pricing, Investment Decisions, and Market Share, 1901-1938.” The Journal of Economic History 49 (1989): 592-620.

McMillin, W. Douglas and Randall E. Parker. “An Empirical Analysis of Oil Price Shocks in the Interwar Period.” Economic Inquiry 32 (1994): 486-497.

McNair, Malcolm P., and Eleanor G. May. The Evolution of Retail Institutions in the United States. Cambridge, MA: The Marketing Science Institute, 1976.

Mercer, Lloyd J. “Comment on Papers by Scheiber, Keller, and Raup.” The Journal of Economic History 33 (1973): 291-95.

Mintz, Ilse. Deterioration in the Quality of Foreign Bonds Issued in the United States, 1920-1930. New York: National Bureau of Economic Research, 1951.

Mishkin, Frederic S. “Asymmetric Information and Financial Crises: A Historical Perspective.” In Financial Markets and Financial Crises Edited by R. Glenn Hubbard. Chicago: University of Chicago Press, 1991.

Morris, Lloyd. Not So Long Ago. New York: Random House, 1949.

Mosco, Vincent. Broadcasting in the United States: Innovative Challenge and Organizational Control. Norwood, NJ: Ablex Publishing Corp., 1979.

Moulton, Harold G. et al. The American Transportation Problem. Washington: The Brookings Institution, 1933.

Mueller, John. “Lessons of the Tax-Cuts of Yesteryear.” The Wall Street Journal, March 5, 1981.

Musoke, Moses S. “Mechanizing Cotton Production in the American South: The Tractor, 1915-1960.” Explorations in Economic History 18 (1981): 347-75.

Nelson, Daniel. “Mass Production and the U.S. Tire Industry.” The Journal of Economic History 48 (1987): 329-40.

Nelson, Ralph L. Merger Movements in American Industry, 1895-1956. Princeton: Princeton University Press, 1959.

Niemi, Albert W., Jr., U.S. Economic History, 2nd ed. Chicago: Rand McNally Publishing Co., 1980.

Norton, Hugh S. Modern Transportation Economics. Columbus, OH: Charles E. Merrill Books, Inc., 1963.

Nystrom, Paul H. Economics of Retailing, vol. 1, 3rd ed. New York: The Ronald Press Co., 1930.

Oshima, Harry T. “The Growth of U.S. Factor Productivity: The Significance of New Technologies in the Early Decades of the Twentieth Century.” The Journal of Economic History 44 (1984): 161-70.

Parker, Randall and Paul Flacco. “Income Uncertainty and the Onset of the Great Depression.” Economic Inquiry 30 (1992): 154-171.

Parker, William N. “Agriculture.” In American Economic Growth: An Economist’s History of the United States, edited by Lance E. Davis, Richard A. Easterlin, William N. Parker, et. al. New York: Harper and Row, 1972.

Passer, Harold C. The Electrical Manufacturers, 1875-1900. Cambridge: Harvard University Press, 1953.

Peak, Hugh S., and Ellen F. Peak. Supermarket Merchandising and Management. Englewood Cliffs, NJ: Prentice-Hall, 1977.

Pilgrim, John. “The Upper Turning Point of 1920: A Reappraisal.” Explorations in Economic History 11 (1974): 271-98.

Rae, John B. Climb to Greatness: The American Aircraft Industry, 1920-1960. Cambridge: The M.I.T. Press, 1968.

Rae, John B. The American Automobile Industry. Boston: Twayne Publishers, 1984.

Rappoport, Peter and Eugene N. White. “Was the Crash of 1929 Expected?” American Economic Review 84 (1994): 271-281.

Rappoport, Peter and Eugene N. White. “Was There a Bubble in the 1929 Stock Market?” The Journal of Economic History 53 (1993): 549-574.

Resseguie, Harry E. “Alexander Turney Stewart and the Development of the Department Store, 1823-1876,” Business History Review 39 (1965): 301-22.

Rezneck, Samuel. “Mass Production and the Use of Energy.” In The Growth of the American Economy, 2d ed., edited by Harold F. Williamson. Englewood Cliffs, NJ: Prentice-Hall, 1951.

Rockwell, Llewellyn H., Jr., ed. The Gold Standard: An Austrian Perspective. Lexington, MA: Lexington Books, 1985.

Romer, Christina. “Spurious Volatility in Historical Unemployment Data.” The Journal of Political Economy 91 (1986): 1-37.

Romer, Christina. “New Estimates of Prewar Gross National Product and Unemployment.” Journal of Economic History 46 (1986): 341-352.

Romer, Christina. “World War I and the Postwar Depression: A Reinterpretation Based on Alternative Estimates of GNP.” Journal of Monetary Economics 22 (1988): 91-115.

Romer, Christina and Jeffrey A. Miron. “A New Monthly Index of Industrial Production, 1884-1940.” Journal of Economic History 50 (1990): 321-337.

Romer, Christina. “The Great Crash and the Onset of the Great Depression.” Quarterly Journal of Economics 105 (1990): 597-625.

Romer, Christina. “Remeasuring Business Cycles.” The Journal of Economic History 54 (1994): 573-609.

Roose, Kenneth D. “The Production Ceiling and the Turning Point of 1920.” American Economic Review 48 (1958): 348-56.

Rosen, Philip T. The Modern Stentors: Radio Broadcasters and the Federal Government, 1920-1934. Westport, CT: The Greenwood Press, 1980.

Rosen, Philip T. “Government, Business, and Technology in the 1920s: The Emergence of American Broadcasting.” In American Business History: Case Studies. Edited by Henry C. Dethloff and C. Joseph Pusateri. Arlington Heights, IL: Harlan Davidson, 1987.

Rothbard, Murray N. America’s Great Depression. Kansas City: Sheed and Ward, 1963.

Sampson, Roy J., and Martin T. Ferris. Domestic Transportation: Practice, Theory, and Policy, 4th ed. Boston: Houghton Mifflin Co., 1979.

Samuelson, Paul and Everett E. Hagen. After the War—1918-1920. Washington: National Resources Planning Board, 1943.

Santoni, Gary and Gerald P. Dwyer, Jr. “The Great Bull Markets, 1924-1929 and 1982-1987: Speculative Bubbles or Economic Fundamentals?” Federal Reserve Bank of St. Louis Review 69 (1987): 16-29.

Santoni, Gary, and Gerald P. Dwyer, Jr. “Bubbles vs. Fundamentals: New Evidence from the Great Bull Markets.” In Crises and Panics: The Lessons of History. Edited by Eugene N. White. Homewood, IL: Dow Jones/Irwin, 1990.

Scherer, Frederick M. and David Ross. Industrial Market Structure and Economic Performance, 3d ed. Boston: Houghton Mifflin, 1990.

Schlebecker, John T. Whereby We Thrive: A History of American Farming, 1607-1972. Ames, IA: The Iowa State University Press, 1975.

Shepherd, James. “The Development of New Wheat Varieties in the Pacific Northwest.” Agricultural History 54 (1980): 52-63.

Sirkin, Gerald. “The Stock Market of 1929 Revisited: A Note.” Business History Review 49 (Fall 1975): 233-41.

Smiley, Gene. The American Economy in the Twentieth Century. Cincinnati: South-Western Publishing Co., 1994.

Smiley, Gene. “New Estimates of Income Shares During the 1920s.” In Calvin Coolidge and the Coolidge Era: Essays on the History of the 1920s, edited by John Earl Haynes, 215-232. Washington, D.C.: Library of Congress, 1998.

Smiley, Gene. “A Note on New Estimates of the Distribution of Income in the 1920s.” The Journal of Economic History 60, no. 4 (2000): 1120-1128.

Smiley, Gene. Rethinking the Great Depression: A New View of Its Causes and Consequences. Chicago: Ivan R. Dee, 2002.

Smiley, Gene, and Richard H. Keehn. “Margin Purchases, Brokers’ Loans and the Bull Market of the Twenties.” Business and Economic History. 2d series. 17 (1988): 129-42.

Smiley, Gene and Richard H. Keehn. “Federal Personal Income Tax Policy in the 1920s.” The Journal of Economic History 55, no. 2 (1995): 285-303.

Sobel, Robert. The Entrepreneurs: Explorations Within the American Business Tradition. New York: Weybright and Talley, 1974.

Soule, George. Prosperity Decade: From War to Depression: 1917-1929. New York: Holt, Rinehart, and Winston, 1947.

Stein, Herbert. The Fiscal Revolution in America, revised ed. Washington, D.C.: AEI Press, 1990.

Stigler, George J. “Monopoly and Oligopoly by Merger.” American Economic Review, 40 (May 1950): 23-34.

Sumner, Scott. “The Role of the International Gold Standard in Commodity Price Deflation: Evidence from the 1929 Stock Market Crash.” Explorations in Economic History 29 (1992): 290-317.

Swanson, Joseph and Samuel Williamson. “Estimates of National Product and Income for the United States Economy, 1919-1941.” Explorations in Economic History 10, no. 1 (1972): 53-73.

Temin, Peter. “The Beginning of the Depression in Germany.” Economic History Review. 24 (May 1971): 240-48.

Temin, Peter. Did Monetary Forces Cause the Great Depression? New York: W. W. Norton, 1976.

Temin, Peter. The Fall of the Bell System. New York: Cambridge University Press, 1987.

Temin, Peter. Lessons from the Great Depression. Cambridge, MA: The M.I.T. Press, 1989.

Thomas, Gordon, and Max Morgan-Witts. The Day the Bubble Burst. Garden City, NY: Doubleday, 1979.

Ulman, Lloyd. “The Development of Trades and Labor Unions,” In American Economic History, edited by Seymour E. Harris, chapter 14. New York: McGraw-Hill Book Co., 1961.

Ulman, Lloyd. The Rise of the National Trade Union. Cambridge, MA: Harvard University Press, 1955.

U.S. Department of Commerce, Bureau of the Census. Historical Statistics of the United States: Colonial Times to 1970, 2 volumes. Washington, D.C.: USGPO, 1976.

Walsh, Margaret. Making Connections: The Long Distance Bus Industry in the U.S.A. Burlington, VT: Ashgate, 2000.

Wanniski, Jude. The Way the World Works. New York: Simon and Schuster, 1978.

Weiss, Leonard W. Case Studies in American Industry, 3d ed. New York: John Wiley & Sons, 1980.

Whaples, Robert. “Hours of Work in U.S. History.” EH.Net Encyclopedia, edited by Robert Whaples, August 15, 2001. URL http://www.eh.net/encyclopedia/contents/whaples.work.hours.us.php

Whatley, Warren. “Southern Agrarian Labor Contracts as Impediments to Cotton Mechanization.” The Journal of Economic History 47 (1987): 45-70.

Wheelock, David C. and Subal C. Kumbhakar. “The Slack Banker Dances: Deposit Insurance and Risk-Taking in the Banking Collapse of the 1920s.” Explorations in Economic History 31 (1994): 357-375.

White, Eugene N. “The Stock Market Boom and Crash of 1929 Revisited.” The Journal of Economic Perspectives. 4 (Spring 1990): 67-83.

White, Eugene N., Ed. Crises and Panics: The Lessons of History. Homewood, IL: Dow Jones/Irwin, 1990.

White, Eugene N. “When the Ticker Ran Late: The Stock Market Boom and Crash of 1929.” In Crises and Panics: The Lessons of History Edited by Eugene N. White. Homewood, IL: Dow Jones/Irwin, 1990.

White, Eugene N. “Stock Market Bubbles? A Reply.” The Journal of Economic History 55 (1995): 655-665.

White, William J. “Economic History of Tractors in the United States.” EH.Net Encyclopedia, edited by Robert Whaples, August 15, 2001. URL http://www.eh.net/encyclopedia/contents/white.tractors.history.us.php

Wicker, Elmus. “Federal Reserve Monetary Policy, 1922-1933: A Reinterpretation.” Journal of Political Economy 73 (1965): 325-43.

Wicker, Elmus. “A Reconsideration of Federal Reserve Policy During the 1920-1921 Depression.” The Journal of Economic History 26 (1966): 223-38.

Wicker, Elmus. Federal Reserve Monetary Policy, 1917-1933. New York: Random House, 1966.

Wigmore, Barrie A. The Crash and Its Aftermath. Westport, CT: Greenwood Press, 1985.

Williams, Raburn McFetridge. The Politics of Boom and Bust in Twentieth-Century America. Minneapolis/St. Paul: West Publishing Co., 1994.

Williamson, Harold F., et al. The American Petroleum Industry: The Age of Energy, 1899-1959. Evanston, IL: Northwestern University Press, 1963.

Wilson, Thomas. Fluctuations in Income and Employment, 3d ed. New York: Pitman Publishing, 1948.

Wilson, Jack W., Richard E. Sylla, and Charles P. Jones. “Financial Market Panics and Volatility in the Long Run, 1830-1988.” In Crises and Panics: The Lessons of History Edited by Eugene N. White. Homewood, IL: Dow Jones/Irwin, 1990.

Winkler, John Kennedy. Five and Ten: The Fabulous Life of F. W. Woolworth. New York: R. M. McBride and Co., 1940.

Wood, Charles. “Science and Politics in the War on Cattle Diseases: The Kansas Experience, 1900-1940.” Agricultural History 54 (1980): 82-92.

Wright, Gavin. Old South, New South: Revolutions in the Southern Economy Since the Civil War. New York: Basic Books, 1986.

Wright, Gavin. “The Origins of American Industrial Success, 1879-1940.” The American Economic Review 80 (1990): 651-668.

Citation: Smiley, Gene. “US Economy in the 1920s”. EH.Net Encyclopedia, edited by Robert Whaples. June 29, 2004. URL http://eh.net/encyclopedia/the-u-s-economy-in-the-1920s/

US Banking History, Civil War to World War II

Richard S. Grossman, Wesleyan University

The National Banking Era Begins, 1863

The National Banking Acts of 1863 and 1864

The National Banking era was ushered in by the passage of the National Currency (later renamed the National Banking) Acts of 1863 and 1864. The Acts marked a decisive change in the monetary system, confirmed a quarter-century-old trend in bank chartering arrangements, and also played a role in financing the Civil War.

Provision of a Uniform National Currency

As its original title suggests, one of the main objectives of the legislation was to provide a uniform national currency. Prior to the establishment of the national banking system, the national currency supply consisted of a confusing patchwork of bank notes issued under a variety of rules by banks chartered under different state laws. Notes of sound banks circulated side-by-side with notes of banks in financial trouble, as well as those of banks that had failed (not to mention forgeries). In fact, bank notes frequently traded at a discount, so that a one-dollar note of a smaller, less well-known bank (or, for that matter, of a bank at some distance) would likely have been valued at less than one dollar by someone receiving it in a transaction. The confusion was such as to lead to the publication of magazines that specialized in printing pictures, descriptions, and prices of various bank notes, along with information on whether or not the issuing bank was still in existence.

Under the legislation, newly created national banks were empowered to issue national bank notes backed by a deposit of US Treasury securities with their chartering agency, the Department of the Treasury’s Comptroller of the Currency. The legislation also placed a tax on notes issued by state banks, effectively driving them out of circulation. Bank notes were of uniform design and, in fact, were printed by the government. The amount of bank notes a national bank was allowed to issue depended upon the bank’s capital (which was also regulated by the act) and the amount of bonds it deposited with the Comptroller. The relationship between bank capital, bonds held, and note issue was changed by laws in 1874, 1882, and 1900 (Cagan 1963, James 1976, and Krooss 1969).
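
As a rough numerical illustration of how the bond-deposit rule constrained circulation, the Python sketch below assumes the pre-1900 version of the rule, under which a bank’s notes could not exceed 90 percent of the lesser of the par or market value of its deposited bonds, nor its paid-in capital; the function max_note_issue, the percentage, and the dollar figures are illustrative assumptions, and the statutory details changed with the laws noted above.

# Illustrative only: assumes a cap of 90 percent of the lesser of par or
# market value of deposited bonds, and a further cap at paid-in capital.
# Dollar figures are hypothetical.

def max_note_issue(capital, bonds_par, bonds_market, pct_of_bonds=0.90):
    bond_basis = min(bonds_par, bonds_market)   # lesser of par or market value
    return min(pct_of_bonds * bond_basis, capital)

# A country bank with $50,000 capital depositing $40,000 (par) of bonds
# trading slightly above par could circulate at most $36,000 in notes.
print(max_note_issue(capital=50_000, bonds_par=40_000, bonds_market=40_800))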

Federal Chartering of Banks

A second element of the Act was the introduction of bank charters issued by the federal government. From the earliest days of the Republic, banking had been considered primarily the province of state governments.[1] Originally, individuals who wished to obtain banking charters had to approach the state legislature, which then decided if the applicant was of sufficient moral standing to warrant a charter and if the region in question needed an additional bank. These decisions may well have been influenced by bribes and political pressure, both by the prospective banker and by established bankers who may have hoped to block the entry of new competitors.

An important shift in state banking practice had begun with the introduction of free banking laws in the 1830s. Beginning with laws passed in Michigan (1837) and New York (1838), free banking laws changed the way banks obtained charters. Rather than apply to the state legislature and receive a decision on a case-by-case basis, individuals could obtain a charter by filling out some paperwork and depositing a prescribed amount of specified bonds with the state authorities. By 1860, over one half of the states had enacted some type of free banking law (Rockoff 1975). By regularizing chartering decisions and removing legislative discretion from them, the National Banking Acts extended free banking to the national level.

Financing the Civil War

A third important element of the National Banking Acts was that they helped the Union government pay for the war. Adopted in the midst of the Civil War, the requirement for banks to deposit US bonds with the Comptroller maintained the demand for Union securities and helped finance the war effort.[2]

Development and Competition with State Banks

The National Banking system grew rapidly at first (Table 1). Much of the increase came at the expense of the state-chartered banking systems, which contracted over the same period, largely because they were no longer able to issue notes. The expansion of the new system did not lead to the extinction of the old: the growth of deposit-taking, combined with less stringent capital requirements, convinced many state bankers that they could do without either the ability to issue banknotes or a federal charter, and led to a resurgence of state banking in the 1880s and 1890s. Under the original acts, the minimum capital requirement for national banks was $50,000 for banks in towns with a population of 6000 or less, $100,000 for banks in cities with a population ranging from 6000 to 50,000, and $200,000 for banks in cities with populations exceeding 50,000. By contrast, the minimum capital requirement for a state bank was often as low as $10,000. The difference in capital requirements may have been an important factor in the resurgence of state banking: in 1877 only about one-fifth of state banks had a capital of less than $50,000; by 1899 the proportion was over three-fifths. Recognizing this competition, the Gold Standard Act of 1900 reduced the minimum capital necessary for national banks. It is questionable whether regulatory competition (both between states and between states and the federal government) kept regulators on their toes or encouraged a “race to the bottom,” that is, lower and looser standards.

Table 1: Numbers and Assets of National and State Banks, 1863-1913

Year   National Banks (number)   State Banks (number)   National Banks (assets, $millions)   State Banks (assets, $millions)
1863 66 1466 16.8 1185.4
1864 467 1089 252.2 725.9
1865 1294 349 1126.5 165.8
1866 1634 297 1476.3 154.8
1867 1636 272 1494.5 151.9
1868 1640 247 1572.1 154.6
1869 1619 259 1564.1 156.0
1870 1612 325 1565.7 201.5
1871 1723 452 1703.4 259.6
1872 1853 566 1770.8 264.5
1873 1968 277 1851.2 178.9
1874 1983 368 1851.8 237.4
1875 2076 586 1913.2 395.2
1876 2091 671 1825.7 405.9
1877 2078 631 1774.3 506.9
1878 2056 510 1770.4 388.8
1879 2048 648 2019.8 427.6
1880 2076 650 2035.4 481.8
1881 2115 683 2325.8 575.5
1882 2239 704 2344.3 633.8
1883 2417 788 2364.8 724.5
1884 2625 852 2282.5 760.9
1885 2689 1015 2421.8 802.0
1886 2809 891 2474.5 807.0
1887 3014 1471 2636.2 1003.0
1888 3120 1523 2731.4 1055.0
1889 3239 1791 2937.9 1237.3
1890 3484 2250 3061.7 1374.6
1891 3652 2743 3113.4 1442.0
1892 3759 3359 3493.7 1640.0
1893 3807 3807 3213.2 1857.0
1894 3770 3810 3422.0 1782.0
1895 3715 4016 3470.5 1954.0
1896 3689 3968 3353.7 1962.0
1897 3610 4108 3563.4 1981.0
1898 3582 4211 3977.6 2298.0
1899 3583 4451 4708.8 2707.0
1900 3732 4659 4944.1 3090.0
1901 4165 5317 5675.9 3776.0
1902 4535 5814 6008.7 4292.0
1903 4939 6493 6286.9 4790.0
1904 5331 7508 6655.9 5244.0
1905 5668 8477 7327.8 6056.0
1906 6053 9604 7784.2 6636.0
1907 6429 10761 8476.5 7190.0
1908 6824 12062 8714.0 6898.0
1909 6926 12398 9471.7 7407.0
1910 7145 13257 9896.6 7911.0
1911 7277 14115 10383 8412.0
1912 7372 14791 10861.7 9005.0
1913 7473 15526 11036.9 9267.0

Source: U.S. Department of the Treasury. Annual Report of the Comptroller of the Currency (1931), pp. 3, 5. State bank columns include data on state-chartered commercial banks and loan and trust companies.

Capital Requirements and Interest Rates

The relatively high minimum capital requirement for national banks may have contributed to regional interest rate differentials in the post-Civil War era. The period from the Civil War through World War I saw a substantial decline in interregional interest rate differentials. According to Lance Davis (1965), the narrowing of regional interest rate differentials can be explained by the development and spread of the commercial paper market, which increased the interregional mobility of funds. Richard Sylla (1969) argues that the high minimum capital requirements established by the National Banking Acts represented barriers to entry and therefore led to local monopolies by note-issuing national banks. These local monopolies in capital-short regions led to the persistence of interest rate spreads.[3] (See also James 1976b.)

Bank Failures

Financial crises were a common occurrence in the National Banking era. O.M.W. Sprague (1910) classified the main financial crises during the era as occurring in 1873, 1884, 1890, 1893, and 1907, with those of 1873, 1893, and 1907 being regarded as full-fledged crises and those of 1884 and 1890 as less severe.

Contemporary observers complained of both the persistence and ill effects of bank failures under the new system.[4] The number and assets of failed national and non-national banks during the National Banking era are shown in Table 2. Suspensions — temporary closures of banks unable to meet demand for their liabilities — were even more numerous during this period.

Table 2: Bank Failures, 1865-1913

Year   National Banks (number failed)   Other Banks (number failed)   National Banks (assets, $millions)   Other Banks (assets, $millions)
1865 1 5 0.1 0.2
1866 2 5 1.8 1.2
1867 7 3 4.9 0.2
1868 3 7 0.5 0.2
1869 2 6 0.7 0.1
1870 0 1 0.0 0.0
1871 0 7 0.0 2.3
1872 6 10 5.2 2.1
1873 11 33 8.8 4.6
1874 3 40 0.6 4.1
1875 5 14 3.2 9.2
1876 9 37 2.2 7.3
1877 10 63 7.3 13.1
1878 14 70 6.9 26.0
1879 8 20 2.6 5.1
1880 3 10 1.0 1.6
1881 0 9 0.0 0.6
1882 3 19 6.0 2.8
1883 2 27 0.9 2.8
1884 11 54 7.9 12.9
1885 4 32 4.7 3.0
1886 8 13 1.6 1.3
1887 8 19 6.9 2.9
1888 8 17 6.9 2.8
1889 8 15 0.8 1.3
1890 9 30 2.0 10.7
1891 25 44 9.0 7.2
1892 17 27 15.1 2.7
1893 65 261 27.6 54.8
1894 21 71 7.4 8.0
1895 36 115 12.1 11.3
1896 27 78 12.0 10.2
1897 38 122 29.1 17.9
1898 7 53 4.6 4.5
1899 12 26 2.3 7.8
1900 6 32 11.6 7.7
1901 11 56 8.1 6.4
1902 2 43 0.5 7.3
1903 12 26 6.8 2.2
1904 20 102 7.7 24.3
1905 22 57 13.7 7.0
1906 8 37 2.2 6.6
1907 7 34 5.4 13.0
1908 24 132 30.8 177.1
1909 9 60 3.4 15.8
1910 6 28 2.6 14.5
1911 3 56 1.1 14.0
1912 8 55 5.0 7.8
1913 6 40 7.6 6.2

Source: U.S. Department of the Treasury. Annual Report of the Comptroller of the Currency (1931), pp. 6, 8.

The largest number of failures occurred in the years following the financial crisis of 1893. The number and assets of national and non-national bank failures remained high for four years following the crisis, a period which coincided with the free silver agitation of the mid-1890s, before returning to pre-1893 levels. Other crises were also accompanied by an increase in the number and assets of bank failures. The earliest peak during the national banking era accompanied the onset of the crisis of 1873. Failures subsequently fell, but rose again in the trough of the depression that followed the 1873 crisis. The panic of 1884 saw a slight increase in failures, while the financial stringency of 1890 was followed by a more substantial increase. Failures peaked again following several minor panics around the turn of the century and again at the time of the crisis of 1907.

Among the alleged causes of crises during the national banking era were the inelasticity of the money supply, which could not accommodate seasonal and other stresses on the money market, and the fact that reserves were pyramided. That is, under the National Banking Acts, a portion of banks’ required reserves could be held in national banks in larger cities (“reserve city banks”). Reserve city banks could, in turn, hold a portion of their required reserves in “central reserve city banks,” national banks in New York, Chicago, and St. Louis. In practice, this led to the build-up of reserve balances in New York City. Increased demands for funds in the interior of the country during the autumn harvest season led to substantial outflows of funds from New York, which contributed to tight money market conditions and, sometimes, to panics (Miron 1986).[5]
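To make the pyramiding mechanism concrete, the sketch below traces a country bank’s required reserves up the chain to New York. The reserve ratios and redeposit fractions are rough approximations of the structure just described, used only for illustration; the function name and the dollar figure are hypothetical.

```python
# Rough sketch of reserve pyramiding under assumed ratios (not the exact
# statutory schedule): part of each bank's required reserve sits as a
# deposit with a correspondent one tier up.

def pyramided_reserves(country_deposits):
    # Country bank: assumed 15% reserve, up to three-fifths of it held as a
    # deposit with a reserve city bank rather than as vault cash.
    country_reserve = 0.15 * country_deposits
    redeposited_in_reserve_city = (3 / 5) * country_reserve

    # Reserve city bank: assumed 25% reserve against that deposit, half of
    # which may itself sit with a central reserve city (New York) bank.
    reserve_city_reserve = 0.25 * redeposited_in_reserve_city
    redeposited_in_new_york = 0.5 * reserve_city_reserve
    return country_reserve, redeposited_in_reserve_city, redeposited_in_new_york

country, in_reserve_city, in_new_york = pyramided_reserves(1_000_000)
print(country, in_reserve_city, in_new_york)
# 150000.0 90000.0 11250.0 -- a harvest-season withdrawal at the country bank
# forces it to draw down its reserve-city balance, which in turn forces the
# reserve city bank to draw on New York.
```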

Attempted Remedies for Banking Crises

Causes of Bank Failures

Bank failures occur when banks are unable to meet the demands of their creditors (in earlier times these were note holders; later on, they were more often depositors). Banks typically do not hold reserves equal to 100 percent of their liabilities, instead holding only some fraction of their demandable liabilities in reserve: as long as the flows of funds into and out of the bank are more or less in balance, the bank is in little danger of failing. A withdrawal of deposits that exceeds the bank’s reserves, however, can lead to the bank’s temporary suspension (inability to pay) or, if protracted, failure. The surge in withdrawals can have a variety of causes including depositor concern about the bank’s solvency (ability to pay depositors), as well as worries about other banks’ solvency that lead to a general distrust of all banks.[6]
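A minimal numerical sketch of this fractional-reserve arithmetic follows. The reserve ratio, deposit total, and weekly flows are invented for illustration only.

```python
# Minimal sketch of fractional-reserve arithmetic with made-up numbers:
# a bank is fine while inflows roughly match outflows, but a large enough
# net withdrawal exhausts reserves and forces suspension.

def net_position(reserves, withdrawals, new_deposits):
    """Reserves remaining after a period's flows; a negative value means the
    bank cannot pay on demand and must suspend (or fail, if protracted)."""
    return reserves + new_deposits - withdrawals

reserves = 0.20 * 500_000  # assumed 20% reserve against $500,000 of deposits
print(net_position(reserves, withdrawals=60_000, new_deposits=55_000))   # ordinary week: 95000.0
print(net_position(reserves, withdrawals=180_000, new_deposits=10_000))  # panic week: -70000.0 -> suspension
```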

Clearinghouses

Bankers and policy makers attempted a number of different responses to banking panics during the National Banking era. One method of dealing with panics was for the bankers of a city to pool their resources through the local bankers’ clearinghouse and to jointly guarantee the payment of every member bank’s liabilities (see Gorton (1985a, b)).

Deposit Insurance

Another method of coping with panics was deposit insurance. Eight states (Oklahoma, Kansas, Nebraska, Texas, Mississippi, South Dakota, North Dakota, and Washington) adopted deposit insurance systems between 1908 and 1917 (six other states had adopted some form of deposit insurance in the nineteenth century: New York, Vermont, Indiana, Michigan, Ohio, and Iowa). These systems were not particularly successful, in part because they lacked diversification: since each system operated statewide, when a panic fell full force on a state, the deposit insurance fund did not have adequate resources to handle each and every failure. When the agricultural depression of the 1920s hit, a number of these systems failed (Federal Deposit Insurance Corporation 1998).

Double Liability

Another measure adopted to curtail bank risk-taking, and through risk-taking, bank failures, was double liability (Grossman 2001). Under double liability, shareholders who had invested in banks that failed not only stood to lose the money they had invested, but could also be called on by a bank’s receiver to contribute an additional amount equal to the par value of the shares (hence the term “double liability,” although clearly the loss to the shareholder need not have been double if the par and market values of shares were different). Other states instituted triple liability, where the receiver could call on twice the par value of shares owned. Still others had unlimited liability, while others had single, or regular limited, liability.[7] It was argued that banks with double liability would be more risk averse, since shareholders would be liable for a greater payment if the firm went bankrupt.
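The arithmetic of these liability rules can be shown with a short sketch; the share count, par value, and purchase price below are hypothetical.

```python
# Worked example of shareholder exposure under single, double, and triple
# liability, using invented figures.  Under double liability the receiver
# could assess up to the par value of the shares on top of the original
# investment (up to twice par under triple liability).

def max_loss(shares, par_value, price_paid, extra_par_multiple):
    """Maximum loss: purchase cost plus the receiver's assessment, expressed
    as a multiple of par value (0 = single, 1 = double, 2 = triple)."""
    return shares * price_paid + extra_par_multiple * shares * par_value

shares, par, price = 100, 100, 120   # 100 shares, $100 par, bought at $120
print(max_loss(shares, par, price, 0))  # single liability: 12000
print(max_loss(shares, par, price, 1))  # double liability: 22000
print(max_loss(shares, par, price, 2))  # triple liability: 32000
```

Note that because the shares in this example were bought above par, the loss under double liability is less than twice the purchase cost, which is the point of the parenthetical above.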

By 1870, multiple (i.e., double, triple, and unlimited) liability was already the rule for state banks in eighteen states, principally in the Midwest, New England, and Middle Atlantic regions, as well as for national banks. By 1900, multiple liability was the law for state banks in thirty-two states. By this time, the main pockets of single liability were in the south and west. By 1930, only four states had single liability.

Double liability appears to have been successful (Grossman 2001), at least during less-than-turbulent times. During the 1890-1930 period, state banks in states where banks were subject to double (or triple, or unlimited) liability typically undertook less risk than their counterparts in single (limited) liability states in normal years. However, in years in which bank failures were quite high, banks in multiple liability states appeared to take more risk than their limited liability counterparts. This may have resulted from the fact that legislators in more crisis-prone states were more likely to have already adopted double liability. Whatever its advantages or disadvantages, the Great Depression spelled the end of double liability: by 1941, virtually every state had repealed double liability for state-chartered banks.

The Crisis of 1907 and Founding of the Federal Reserve

The crisis of 1907, which had been brought under control by a coalition of trust companies and other chartered banks and clearing-house members led by J.P. Morgan, led to a reconsideration of the monetary system of the United States. Congress set up the National Monetary Commission (1908-12), which undertook a massive study of the history of banking and monetary arrangements in the United States and in other economically advanced countries.[8]

The eventual result of this investigation was the Federal Reserve Act (1913), which established the Federal Reserve System as the central bank of the US. Unlike other countries that had one central bank (e.g., Bank of England, Bank of France), the Federal Reserve Act provided for a system of between eight and twelve reserve banks (twelve were eventually established under the act, although during debate over the act, some had called for as many as one reserve bank per state). This provision, like the rejection of the first two attempts at a central bank, resulted, in part, from Americans’ antipathy towards centralized monetary authority. The Federal Reserve was established to manage the monetary affairs of the country, to hold the reserves of banks and to regulate the money supply. At the time of its founding each of the reserve banks had a high degree of independence. As a result of the crises surrounding the Great Depression, Congress passed the Banking Act of 1935, which, among other things, centralized Federal Reserve power (including the power to engage in open market operations) in a Washington-based Board of Governors (and Federal Open Market Committee), relegating the heads of the individual reserve banks to a more consultative role in the operation of monetary policy.

The Goal of an “Elastic Currency”

The stated goals of the Federal Reserve Act were: “. . . to furnish an elastic currency, to furnish the means of rediscounting commercial paper, to establish a more effective supervision of banking in the United States, and for other purposes.” Furnishing an “elastic currency” was an important goal of the act, since none of the components of the money supply (gold and silver certificates, national bank notes) were able to expand or contract particularly rapidly. The inelasticity of the money supply, along with the seasonal fluctuations in money demand, had led to a number of the panics of the National Banking era. These panic-inducing seasonal fluctuations resulted from the large flows of money out of New York and other money centers to the interior of the country to pay for the newly harvested crops. If monetary conditions were already tight before the drain of funds to the nation’s interior, the autumnal movement of funds could, and did, precipitate panics.[9]

Growth of the Bankers’ Acceptance Market

The act also fostered the growth of the bankers’ acceptance market. Bankers’ acceptances were essentially short-dated IOUs, issued by banks on behalf of clients that were importing (or otherwise purchasing) goods. These acceptances were sent to the seller, who could hold them until they matured and receive the face value of the acceptance, or could discount them, that is, receive the face value minus an interest charge. By allowing the Federal Reserve to rediscount commercial paper, the act facilitated the growth of this short-term money market (Warburg 1930, Broz 1997, and Federal Reserve Bank of New York 1998). In the 1920s, the various Federal Reserve banks began making large-scale purchases of US Treasury obligations, marking the beginnings of Federal Reserve open market operations.[10]
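The discounting arithmetic can be sketched as follows; the face value, rate, maturity, and the simple bank-discount convention on a 360-day year are assumptions chosen for illustration.

```python
# Sketch of acceptance discounting with assumed figures: the seller either
# holds the acceptance to maturity and collects face value, or sells it
# earlier at face value minus an interest (discount) charge.

def discounted_proceeds(face_value, annual_rate, days_to_maturity, year_basis=360):
    """Proceeds from discounting an acceptance before maturity
    (simple bank-discount convention, assumed here for illustration)."""
    discount = face_value * annual_rate * days_to_maturity / year_basis
    return face_value - discount

# A $10,000 acceptance with 90 days to run, discounted at 6 percent per year:
print(discounted_proceeds(10_000, 0.06, 90))  # 9850.0
```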

The Federal Reserve and State Banking

The establishment of the Federal Reserve did not end the competition between the state and national banking systems. While national banks were required to be members of the new Federal Reserve System, state banks could also become members of the system on equal terms. Further, the Federal Reserve Act, bolstered by the Act of June 21, 1917, ensured that state banks could become member banks without losing any competitive advantages they might hold over national banks. Depending upon the state, state banking law sometimes gave state banks advantages in the areas of branching,[11] trust operations,[12] interlocking managements, loan and investment powers,[13] safe deposit operations, and the arrangement of mergers.[14] Where state banking laws were especially liberal, banks had an incentive to give up their national bank charter and seek admission to the Federal Reserve System as a state member bank.

McFadden Act

The McFadden Act (1927) addressed some of the competitive inequalities between state and national banks. It gave national banks charters of indeterminate length, allowing them to compete with state banks for trust business. It expanded the range of permissible investments, including real estate investments, and allowed investment in the stock of safe deposit companies. The Act also greatly restricted the ability of member banks, whether state or nationally chartered, to open or maintain out-of-town branches.

The Great Depression: Panic and Reform

The Great Depression was the longest, most severe economic downturn in the history of the United States.[15] The banking panics of 1930, 1931, and 1933 were the most severe banking disruption ever to hit the United States, with more than one quarter of all banks closing. Data on the number of bank suspensions during this period are presented in Table 3.

Table 3: Bank Suspensions, 1921-33

Number of Bank Suspensions
Year   All Banks   National Banks
1921 505 52
1922 367 49
1923 646 90
1924 775 122
1925 618 118
1926 976 123
1927 669 91
1928 499 57
1929 659 64
1930 1352 161
1931 2294 409
1932 1456 276
1933 5190 1475

Source: Bremer (1935).

Note: 1933 figures include 4507 non-licensed banks (1400 non-licensed national banks). Non-licensed banks consist of banks operating on a restricted basis or not in operation, but not in liquidation or receivership.

The first banking panic erupted in October 1930. According to Friedman and Schwartz (1963, pp. 308-309), it began with failures in Missouri, Indiana, Illinois, Iowa, Arkansas, and North Carolina and quickly spread to other areas of the country. Friedman and Schwartz report that 256 banks with $180 million of deposits failed in November 1930, while 352 banks with over $370 million of deposits failed in the following month (the largest of which was the Bank of United States which failed on December 11 with over $200 million of deposits). The second banking panic began in March of 1931 and continued into the summer.[16] The third and final panic began at the end of 1932 and persisted into March of 1933. During the early months of 1933, a number of states declared banking holidays, allowing banks to close their doors and therefore freeing them from the requirement to redeem deposits. By the time President Franklin Delano Roosevelt was inaugurated on March 4, 1933, state-declared banking holidays were widespread. The following day, the president declared a national banking holiday.

Beginning on March 13, the Secretary of the Treasury began granting licenses to banks to reopen for business.

Federal Deposit Insurance

The crises led to the implementation of several major reforms in banking. Among the most important of these was the introduction of federal deposit insurance under the Banking (Glass-Steagall) Act of 1933. The Act established the Federal Deposit Insurance Corporation as an explicitly temporary program (the FDIC was made permanent by the Banking Act of 1935); insurance became effective January 1, 1934. Member banks of the Federal Reserve (which included all national banks) were required to join the FDIC. Within six months, 14,000 out of 15,348 commercial banks, representing 97 percent of bank deposits, had subscribed to federal deposit insurance (Friedman and Schwartz, 1963, 436-437).[17] Coverage under the initial act was limited to a maximum of $2500 of deposits for each depositor. Table 4 documents the increase in the limit from the act’s inception until 1980, when it reached its current $100,000 level.

Table 4: FDIC Insurance Limit

1934 (January) $2500
1934 (July) $5000
1950 $10,000
1966 $15,000
1969 $20,000
1974 $40,000
1980 $100,000
Source: http://www.fdic.gov/

Additional Provisions of the Glass-Steagall Act

An important goal of the New Deal reforms was to enhance the stability of the banking system. Because the involvement of commercial banks in securities underwriting was seen as having contributed to banking instability, the Glass-Steagall Act of 1933 forced the separation of commercial and investment banking.[18] Additionally, the Acts (1933 for member banks, 1935 for other insured banks) established Regulation Q, which forbade banks from paying interest on demand deposits (i.e., checking accounts) and established limits on the interest rates paid on time deposits. It was argued that paying interest on demand deposits introduced unhealthy competition.

Recent Responses to New Deal Banking Laws

In a sense, contemporary debates on banking policy stem largely from the reforms of the post-Depression era. Although several of the reforms introduced in the wake of the 1931-33 crisis have survived into the twenty-first century, almost all of them have been subject to intense scrutiny in the last two decades. For example, several court decisions, along with the Financial Services Modernization Act (Gramm-Leach-Bliley) of 1999, have blurred the previously strict separation between different financial service industries (particularly, although not limited to commercial and investment banking).

FSLIC

The Savings and Loan crisis of the 1980s, resulting from a combination of deposit insurance-induced moral hazard and deregulation, led to the dismantling of the Depression-era Federal Savings and Loan Insurance Corporation (FSLIC) and the transfer of Savings and Loan insurance to the Federal Deposit Insurance Corporation.

Further Reading

Bernanke, Ben S. “Nonmonetary Effects of the Financial Crisis in Propagation of the Great Depression.” American Economic Review 73 (1983): 257-76.

Bordo, Michael D., Claudia Goldin, and Eugene N. White, editors. The Defining Moment: The Great Depression and the American Economy in the Twentieth Century. Chicago: University of Chicago Press, 1998.

Bremer, C. D. American Bank Failures. New York: Columbia University Press, 1935.

Broz, J. Lawrence. The International Origins of the Federal Reserve System. Ithaca: Cornell University Press, 1997.

Cagan, Phillip. “The First Fifty Years of the National Banking System: An Historical Appraisal.” In Banking and Monetary Studies, edited by Deane Carson, 15-42. Homewood: Richard D. Irwin, 1963.

Cagan, Phillip. The Determinants and Effects of Changes in the Stock of Money. New York: National Bureau of Economic Research, 1965.

Calomiris, Charles W. and Gorton, Gary. “The Origins of Banking Panics: Models, Facts, and Bank Regulation.” In Financial Markets and Financial Crises, edited by Glenn R. Hubbard, 109-73. Chicago: University of Chicago Press, 1991.

Davis, Lance. “The Investment Market, 1870-1914: The Evolution of a National Market.” Journal of Economic History 25 (1965): 355-399.

Dewald, William G. “The National Monetary Commission: A Look Back.” Journal of Money, Credit and Banking 4 (1972): 930-956.

Eichengreen, Barry. “Mortgage Interest Rates in the Populist Era.” American Economic Review 74 (1984): 995-1015.

Eichengreen, Barry. Golden Fetters: The Gold Standard and the Great Depression, 1919-1939, Oxford: Oxford University Press, 1992.

Federal Deposit Insurance Corporation. “A Brief History of Deposit Insurance in the United States.” Washington: FDIC, 1998. http://www.fdic.gov/bank/historical/brief/brhist.pdf

Federal Reserve. The Federal Reserve: Purposes and Functions. Washington: Federal Reserve Board, 1994. http://www.federalreserve.gov/pf/pdf/frspurp.pdf

Federal Reserve Bank of New York. U.S. Monetary Policy and Financial Markets. New York, 1998. http://www.ny.frb.org/pihome/addpub/monpol/chapter2.pdf

Friedman, Milton and Anna J. Schwartz. A Monetary History of the United States, 1867-1960. Princeton: Princeton University Press, 1963.

Goodhart, C.A.E. The New York Money Market and the Finance of Trade, 1900-1913. Cambridge: Harvard University Press, 1969.

Gorton, Gary. “Bank Suspensions of Convertibility.” Journal of Monetary Economics 15 (1985a): 177-193.

Gorton, Gary. “Clearing Houses and the Origin of Central Banking in the United States.” Journal of Economic History 45 (1985b): 277-283.

Grossman, Richard S. “Deposit Insurance, Regulation, Moral Hazard in the Thrift Industry: Evidence from the 1930s.” American Economic Review 82 (1992): 800-821.

Grossman, Richard S. “The Macroeconomic Consequences of Bank Failures under the National Banking System.” Explorations in Economic History 30 (1993): 294-320.

Grossman, Richard S. “The Shoe That Didn’t Drop: Explaining Banking Stability during the Great Depression.” Journal of Economic History 54, no. 3 (1994): 654-82.

Grossman, Richard S. “Double Liability and Bank Risk-Taking.” Journal of Money, Credit, and Banking 33 (2001): 143-159.

James, John A. “The Conundrum of the Low Issue of National Bank Notes.” Journal of Political Economy 84 (1976a): 359-67.

James, John A. “The Development of the National Money Market, 1893-1911.” Journal of Economic History 36 (1976b): 878-97.

Kent, Raymond P. “Dual Banking between the Two Wars.” In Banking and Monetary Studies, edited by Deane Carson, 43-63. Homewood: Richard D. Irwin, 1963.

Kindleberger, Charles P. Manias, Panics, and Crashes: A History of Financial Crises. New York: Basic Books, 1978.

Krooss, Herman E., editor. Documentary History of Banking and Currency in the United States. New York: Chelsea House Publishers, 1969.

Minsky, Hyman P. Can “It” Happen Again? Essays on Instability and Finance. Armonk, NY: M.E. Sharpe, 1982.

Miron, Jeffrey A. “Financial Panics, the Seasonality of the Nominal Interest Rate, and the Founding of the Fed.” American Economic Review 76 (1986): 125-38.

Mishkin, Frederic S. “Asymmetric Information and Financial Crises: A Historical Perspective.” In Financial Markets and Financial Crises, edited by R. Glenn Hubbard, 69-108. Chicago: University of Chicago Press, 1991.

Rockoff, Hugh. The Free Banking Era: A Reexamination. New York: Arno Press, 1975.

Rockoff, Hugh. “Banking and Finance, 1789-1914.” In The Cambridge Economic History of the United States. Volume 2. The Long Nineteenth Century, edited by Stanley L Engerman and Robert E. Gallman, 643-84. New York: Cambridge University Press, 2000.

Sprague, O. M. W. History of Crises under the National Banking System. Washington, DC: Government Printing Office, 1910.

Sylla, Richard. “Federal Policy, Banking Market Structure, and Capital Mobilization in the United States, 1863-1913.” Journal of Economic History 29 (1969): 657-686.

Temin, Peter. Did Monetary Forces Cause the Great Depression? New York: Norton, 1976.

Temin, Peter. Lessons from the Great Depression. Cambridge: MIT Press, 1989.

Warburg, Paul M. The Federal Reserve System: Its Origin and Growth: Reflections and Recollections, 2 volumes. New York: Macmillan, 1930.

White, Eugene N. The Regulation and Reform of American Banking, 1900-1929. Princeton: Princeton University Press, 1983.

White, Eugene N. “Before the Glass-Steagall Act: An Analysis of the Investment Banking Activities of National Banks.” Explorations in Economic History 23 (1986): 33-55.

White, Eugene N. “Banking and Finance in the Twentieth Century.” In The Cambridge Economic History of the United States. Volume 3. The Twentieth Century, edited by Stanley L. Engerman and Robert E. Gallman, 743-802. New York: Cambridge University Press, 2000.

Wicker, Elmus. The Banking Panics of the Great Depression. New York: Cambridge University Press, 1996.

Wicker, Elmus. Banking Panics of the Gilded Age. New York: Cambridge University Press, 2000.


[1] The two exceptions were the First and Second Banks of the United States. The First Bank, which was chartered by Congress at the urging of Alexander Hamilton, in 1791, was granted a 20-year charter, which Congress allowed to expire in 1811. The Second Bank was chartered just five years after the expiration of the first, but Andrew Jackson vetoed the charter renewal in 1832 and the bank ceased to operate with a national charter when its 20-year charter expired in 1836. The US remained without a central bank until the founding of the Federal Reserve in 1914. Even then, the Fed was not founded as one central bank, but as a collection of twelve regional reserve banks. American suspicion of concentrated financial power has not been limited to central banking: in contrast to the rest of the industrialized world, twentieth century US banking was characterized by large numbers of comparatively small, unbranched banks.

[2] The relationship between the enactment of the National Bank Acts and the Civil War was perhaps even deeper. Hugh Rockoff suggested the following to me: “There were western states where the banking system was in trouble because the note issue was based on southern bonds, and people in those states were looking to the national government to do something. There were also conservative politicians who were afraid that they wouldn’t be able to get rid of the greenback (a perfectly uniform [government issued wartime] currency) if there wasn’t a private alternative that also promised uniformity…. It has even been claimed that by setting up a national system, banks in the South were undermined — as a war measure.”

[3] Eichengreen (1984) argues that regional mortgage interest rate differentials resulted from differences in risk.

[4] There is some debate over the direction of causality between banking crises and economic downturns. According to monetarists Friedman and Schwartz (1963) and Cagan (1965), the monetary contraction associated with bank failures magnifies real economic downturns. Bernanke (1983) argues that bank failures raise the cost of credit intermediation and therefore have an effect on the real economy through non-monetary channels. An alternative view, articulated by Sprague (1910), Fisher (1933), Temin (1976), Minsky (1982), and Kindleberger (1978), maintains that bank failures and monetary contraction are primarily a consequence, rather than a cause, of sluggishness in the real economy which originates in non-monetary sources. See Grossman (1993) for a summary of this literature.

[5] See Calomiris and Gorton (1991) for an alternative view.

[6] See Mishkin (1991) on asymmetric information and financial crises.

[7] Still other states had “voluntary liability,” whereby each bank could choose single or double liability.

[8] See Dewald (1972) on the National Monetary Commission.

[9] Miron (1986) demonstrates the decline in the seasonality of interest rates following the founding of the Fed.

[10] Other Fed activities included check clearing.

[11] According to Kent (1963, p. 48), starting in 1922 the Comptroller allowed national banks to open “offices” to receive deposits, cash checks, and receive applications for loans in head office cities of states that allowed state-chartered banks to establish branches.

[12] Prior to 1922, national bank charters had lives of only 20 years. This severely limited their ability to compete with state banks in the trust business. (Kent 1963, p. 49)

[13] National banks were subject to more severe limitations on lending than most state banks. These restrictions included a limit on the amount that could be loaned to one borrower as well as limitations on real estate lending. (Kent 1963, pp. 50-51)

[14] Although the Bank Consolidation Act of 1918 provided for the merger of two or more national banks, it made no provision for the merger of a state and national bank. Kent (1963, p. 51).

[15] References touching on banking and financial aspects of the Great Depression in the United States include Friedman and Schwartz (1963), Temin (1976, 1989), Kindleberger (1978), Bernanke (1983), Eichengreen (1992), and Bordo, Goldin, and White (1998).

[16] During this period, the failures of the Credit-Anstalt, Austria’s largest bank, and the Darmstädter und Nationalbank (Danat Bank), a large German bank, inaugurated the beginning of financial crisis in Europe. The European financial crisis led to Britain’s suspension of the gold standard in September 1931. See Grossman (1994) on the European banking crisis of 1931. The best source on the gold standard in the interwar years is Eichengreen (1992).

[17] Interestingly, federal deposit insurance was made optional for savings and loan institutions at about the same time. The majority of S&Ls did not elect to adopt deposit insurance until after 1950. See Grossman (1992).

[18] See, however, White (1986) for an alternative view.

Citation: Grossman, Richard. “US Banking History, Civil War to World War II”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL
http://eh.net/encyclopedia/us-banking-history-civil-war-to-world-war-ii/

Antebellum Banking in the United States

Howard Bodenhorn, Lafayette College

The first legitimate commercial bank in the United States was the Bank of North America founded in 1781. Encouraged by Alexander Hamilton, Robert Morris persuaded the Continental Congress to charter the bank, which loaned to the cash-strapped Revolutionary government as well as private citizens, mostly Philadelphia merchants. The possibilities of commercial banking had been widely recognized by many colonists, but British law forbade the establishment of commercial, limited-liability banks in the colonies. Given that many of the colonists’ grievances against Parliament centered on economic and monetary issues, it is not surprising that one of the earliest acts of the Continental Congress was the establishment of a bank.

The introduction of banking to the U.S. was viewed as an important first step in forming an independent nation because banks supplied a medium of exchange (banknotes1 and deposits) in an economy perpetually strangled by shortages of specie money and credit, because they animated industry, and because they fostered wealth creation and promoted well-being. In the last case, contemporaries typically viewed banks as an integral part of a wider system of government-sponsored commercial infrastructure. Like schools, bridges, road, canals, river clearing and harbor improvements, the benefits of banks were expected to accrue to everyone even if dividends accrued only to shareholders.

Financial Sector Growth

By 1800 each major U.S. port city had at least one commercial bank serving the local mercantile community. As city banks proved themselves, banking spread into smaller cities and towns and expanded their clientele. Although most banks specialized in mercantile lending, others served artisans and farmers. In 1820 there were 327 commercial banks and several mutual savings banks that promoted thrift among the poor. Thus, at the onset of the antebellum period (defined here as the period between 1820 and 1860), urban residents were familiar with the intermediary function of banks and used bank-supplied currencies (deposits and banknotes) for most transactions. Table 1 reports the number of banks and the value of loans outstanding at year end between 1820 and 1860. During the era, the number of banks increased from 327 to 1,562 and total loans increased from just over $55.1 million to $691.9 million. Bank-supplied credit in the U.S. economy increased at a remarkable annual average rate of 6.3 percent. Growth in the financial sector, then outpaced growth in aggregate economic activity. Nominal gross domestic product increased an average annual rate of about 4.3 percent over the same interval. This essay discusses how regional regulatory structures evolved as the banking sector grew and radiated out from northeastern cities to the hinterlands.

Table 1
Number of Banks and Total Loans, 1820-1860

Year Banks Loans ($ millions)
1820 327 55.1
1821 273 71.9
1822 267 56.0
1823 274 75.9
1824 300 73.8
1825 330 88.7
1826 331 104.8
1827 333 90.5
1828 355 100.3
1829 369 103.0
1830 381 115.3
1831 424 149.0
1832 464 152.5
1833 517 222.9
1834 506 324.1
1835 704 365.1
1836 713 457.5
1837 788 525.1
1838 829 485.6
1839 840 492.3
1840 901 462.9
1841 784 386.5
1842 692 324.0
1843 691 254.5
1844 696 264.9
1845 707 288.6
1846 707 312.1
1847 715 310.3
1848 751 344.5
1849 782 332.3
1850 824 364.2
1851 879 413.8
1852 913 429.8
1853 750 408.9
1854 1208 557.4
1855 1307 576.1
1856 1398 634.2
1857 1416 684.5
1858 1422 583.2
1859 1476 657.2
1860 1562 691.9

Sources: Fenstermaker (1965); U.S. Comptroller of the Currency (1931).

Adaptability

As important as early American banks were in the process of capital accumulation, perhaps their most notable feature was their adaptability. Kuznets (1958) argues that one measure of the financial sector’s value is how and to what extent it evolves with changing economic conditions. The question, then, is how a system put in place to perform certain functions under one set of economic circumstances altered its behavior and served the needs of borrowers as circumstances changed. One benefit of the federalist U.S. political system was that states were given the freedom to establish systems reflecting local needs and preferences. While the political structure deserves credit for promoting regional adaptations, North (1994) credits the adaptability of America’s formal rules and informal constraints that rewarded adventurism in the economic, as well as the noneconomic, sphere. Differences in geography, climate, crop mix, manufacturing activity, population density and a host of other variables were reflected in different state banking systems. Rhode Island’s banks bore little resemblance to those in far away Louisiana or Missouri, or even those in neighboring Connecticut. Each state’s banks took a different form, but their purpose was the same; namely, to provide the state’s citizens with monetary and intermediary services and to promote the general economic welfare. This section provides a sketch of regional differences. A more detailed discussion can be found in Bodenhorn (2002).

State Banking in New England

New England’s banks most resemble the common conception of the antebellum bank. They were relatively small, unit banks; their stock was closely held; they granted loans to local farmers, merchants and artisans with whom the bank’s managers had more than a passing familiarity; and the state took little direct interest in their daily operations.

Of the banking systems put in place in the antebellum era, New England’s is typically viewed as the most stable and conservative. Friedman and Schwartz (1986) attribute their stability to an Old World concern with business reputations, familial ties, and personal legacies. New England was long settled, its society well established, and its business community mature and respected throughout the Atlantic trading network. Wealthy businessmen and bankers with strong ties to the community — like the Browns of Providence or the Bowdoins of Boston — emphasized stability not just because doing so benefited and reflected well on them, but because they realized that bad banking was bad for everyone’s business.

Besides their reputation for soundness, the two defining characteristics of New England’s early banks were their insider nature and their small size. The typical New England bank was small compared to banks in other regions. Table 2 shows that in 1820 the average Massachusetts country bank was about the same size as a Pennsylvania country bank, but both were only about half the size of a Virginia bank. A Rhode Island bank was about one-third the size of a Massachusetts or Pennsylvania bank and a mere one-sixth as large as Virginia’s banks. By 1850 the average Massachusetts bank had declined in relative terms, operating with about two-thirds the paid-in capital of a Pennsylvania country bank. Rhode Island’s banks also shrank relative to Pennsylvania’s and were tiny compared to the large branch banks in the South and West.

Table 2
Average Bank Size by Capital and Lending in 1820 and 1850 Selected States and Cities
(in $ thousands)

State/City   Capital (1820)   Loans (1820)   Capital (1850)   Loans (1850)
Massachusetts $374.5 $480.4 $293.5 $494.0
except Boston 176.6 230.8 170.3 281.9
Rhode Island 95.7 103.2 186.0 246.2
except Providence 60.6 72.0 79.5 108.5
New York na na 246.8 516.3
except NYC na na 126.7 240.1
Pennsylvania 221.8 262.9 340.2 674.6
except Philadelphia 162.6 195.2 246.0 420.7
Virginia1,2 351.5 340.0 270.3 504.5
South Carolina2 na na 938.5 1,471.5
Kentucky2 na na 439.4 727.3

Notes: 1 Virginia figures for 1822. 2 Figures represent branch averages.

Source: Bodenhorn (2002).

Explanations for New England Banks’ Relatively Small Size

Several explanations have been offered for the relatively small size of New England’s banks. Contemporaries attributed it to the New England states’ propensity to tax bank capital, which was thought to work to the detriment of large banks. They argued that large banks circulated fewer banknotes per dollar of capital. The result was a progressive tax that fell disproportionately on large banks. Data compiled from Massachusetts’s bank reports suggest that large banks were not disadvantaged by the capital tax. It was a fact, as contemporaries believed, that large banks paid higher taxes per dollar of circulating banknotes, but a potentially better benchmark is the tax to loan ratio because large banks made more use of deposits than small banks. The tax to loan ratio was remarkably constant across both bank size and time, averaging just 0.6 percent between 1834 and 1855. Moreover, there is evidence of constant to modestly increasing returns to scale in New England banking. Large banks were generally at least as profitable as small banks in all years between 1834 and 1860, and slightly more so in many.
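The benchmark issue can be seen with a small worked example. The balance sheets and tax rate below are invented for illustration; the point is only that a tax levied on capital looks steeply progressive when scaled by note circulation but roughly flat when scaled by loans.

```python
# Illustration with invented balance sheets: a tax on bank capital per dollar
# of circulation is higher for a large bank that issues fewer notes per
# dollar of capital, but per dollar of loans it can be the same, since large
# banks funded more of their lending with deposits.

def tax_ratios(capital, circulation, loans, tax_rate_on_capital=0.01):
    tax = tax_rate_on_capital * capital
    return tax / circulation, tax / loans

small_bank = tax_ratios(capital=100_000, circulation=90_000, loans=150_000)
large_bank = tax_ratios(capital=500_000, circulation=250_000, loans=750_000)
print(small_bank)  # (~0.0111, ~0.0067): tax per $ of notes, tax per $ of loans
print(large_bank)  # (0.02,    ~0.0067): higher per note, same per loan
```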

Lamoreaux (1993) offers a different explanation for the modest size of the region’s banks. New England’s banks, she argues, were not impersonal financial intermediaries. Rather, they acted as the financial arms of extended kinship trading networks. Throughout the antebellum era banks catered to insiders: directors, officers, shareholders, or business partners and kin of directors, officers, shareholders and business partners. Such preferences toward insiders represented the perpetuation of the eighteenth-century custom of pooling capital to finance family enterprises. In the nineteenth century the practice continued under corporate auspices. The corporate form, in fact, facilitated raising capital in greater amounts than the family unit could raise on its own. But because the banks kept their loans within a relatively small circle of business connections, it was not until the late nineteenth century that bank size increased.2

Once the kinship orientation of the region’s banks was established it perpetuated itself. When outsiders could not obtain loans from existing insider organizations, they formed their own insider bank. In doing so the promoters assured themselves of a steady supply of credit and created engines of economic mobility for kinship networks formerly closed off from many sources of credit. State legislatures accommodated the practice through their liberal chartering policies. By 1860, Rhode Island had 91 banks, Maine had 68, New Hampshire 51, Vermont 44, Connecticut 74 and Massachusetts 178.

The Suffolk System

One of the most commented-on characteristics of New England’s banking system was its unique regional banknote redemption and clearing mechanism. Established by the Suffolk Bank of Boston in the early 1820s, the system became known as the Suffolk System. With so many banks in New England, each issuing its own form of currency, it was sometimes difficult for merchants, farmers, artisans, and even other bankers, to discriminate between real and bogus banknotes, or to discriminate between good and bad bankers. Moreover, the rural-urban terms of trade pulled most banknotes toward the region’s port cities. Because country merchants and farmers were typically indebted to city merchants, country banknotes tended to flow toward the cities, Boston more so than any other. By the second decade of the nineteenth century, country banknotes became a constant irritant for city bankers. City bankers believed that country issues displaced Boston banknotes in local transactions. More irritating, though, was the constant demand by the city banks’ customers to accept country banknotes on deposit, which placed the burden of interbank clearing on the city banks.3

In 1803 the city banks embarked on a first attempt to deal with country banknotes. They joined together, bought up a large quantity of country banknotes, and returned them to the country banks for redemption into specie. This effort to reduce country banknote circulation encountered so many obstacles that it was quickly abandoned. Several other schemes were hatched in the next two decades, but none proved any more successful than the 1803 plan.

The Suffolk Bank was chartered in 1818 and within a year embarked on a novel scheme to deal with the influx of country banknotes. The Suffolk sponsored a consortium of Boston banks in which each member appointed the Suffolk as its lone agent in the collection and redemption of country banknotes. In addition, each city bank contributed to a fund used to purchase and redeem country banknotes. When the Suffolk collected a large quantity of a country bank’s notes, it presented them for immediate redemption with an ultimatum: Join in a regular and organized redemption system or be subject to further unannounced redemption calls.4 Country banks objected to the Suffolk’s proposal, because it required them to keep noninterest-earning assets on deposit with the Suffolk in amounts equal to their average weekly redemptions at the city banks. Most country banks initially refused to join the redemption network, but after the Suffolk made good on a few redemption threats, the system achieved near universal membership.

Early interpretations of the Suffolk system, like those of Redlich (1949) and Hammond (1957), portray the Suffolk as a proto-central bank, which acted as a restraining influence that exercised some control over the region’s banking system and money supply. Recent studies are less quick to pronounce the Suffolk a successful experiment in early central banking. Mullineaux (1987) argues that the Suffolk’s redemption system was actually self-defeating. Instead of making country banknotes less desirable in Boston, the fact that they became readily redeemable there made them perfect substitutes for banknotes issued by Boston’s prestigious banks. This policy made country banknotes more desirable, which made it more, not less, difficult for Boston’s banks to keep their own notes in circulation.

Fenstermaker and Filer (1986) also contest the long-held view that the Suffolk exercised control over the region’s money supply (banknotes and deposits). Indeed, the Suffolk’s system was self-defeating in this regard as well. Because the system increased confidence in the value of a randomly encountered banknote, people were willing to hold larger banknote issues. In an interesting twist on the traditional interpretation, a possible outcome of the Suffolk system is that New England may have grown increasingly financially backward as a direct result of the region’s unique clearing system. Because banknotes were viewed as relatively safe and easily redeemed, New England lagged far behind other regions in adopting the next big financial innovation, deposit banking. With such wide acceptance of banknotes, there was no reason for banks to encourage the use of deposits and little reason for consumers to switch over.

Summary: New England Banks

New England’s banking system can be summarized as follows: Small unit banks predominated; many banks catered to small groups of capitalists bound by personal and familial ties; banking was becoming increasingly interconnected with other lines of business, such as insurance, shipping and manufacturing; the state took little direct interest in the daily operations of the banks and its supervisory role amounted to little more than a demand that every bank submit an unaudited balance sheet at year’s end; and the Suffolk developed an interbank clearing system that facilitated the use of banknotes throughout the region, but had little effective control over the region’s money supply.

Banking in the Middle Atlantic Region

Pennsylvania

After 1810 or so, many bank charters were granted in New England, but not because of the presumption that the bank would promote the commonweal. Charters were granted for the personal gain of the promoter and the shareholders and in proportion to the personal, political and economic influence of the bank’s founders. No New England state took a significant financial stake in its banks. In both respects, New England differed markedly from states in other regions. From the beginning of state-chartered commercial banking in Pennsylvania, the state took a direct interest in the operations and profits of its banks. The Bank of North America was the obvious case: chartered to provide support to the colonial belligerents and the fledgling nation. Because the bank was popularly perceived to be dominated by Philadelphia’s Federalist merchants, who rarely loaned to outsiders, support for the bank waned.5 After a pitched political battle in which the Bank of North America’s charter was revoked and reinstated, the legislature chartered the Bank of Pennsylvania in 1793. As its name implies, this bank became the financial arm of the state. Pennsylvania subscribed $1 million of the bank’s capital, giving it the right to appoint six of thirteen directors and a $500,000 line of credit. The bank benefited by becoming the state’s fiscal agent, which guaranteed a constant inflow of deposits from regular treasury operations as well as western land sales.

By 1803 the demand for loans outstripped the existing banks’ supply and a plan for a new bank, the Philadelphia Bank, was hatched and its promoters petitioned the legislature for a charter. The existing banks lobbied against the charter, and nearly sank the new bank’s chances until it established a precedent that lasted throughout the antebellum era. Its promoters bribed the legislature with a payment of $135,000 in return for the charter, handed over one-sixth of its shares, and opened a line of credit for the state.

Between 1803 and 1814, the only other bank chartered in Pennsylvania was the Farmers and Mechanics Bank of Philadelphia, which established a second substantive precedent that persisted throughout the era. Existing banks followed a strict real-bills lending policy, restricting lending to merchants at very short terms of 30 to 90 days.6 Their adherence to a real-bills philosophy left a growing community of artisans, manufacturers and farmers on the outside looking in. The Farmers and Mechanics Bank was chartered to serve excluded groups. At least seven of its thirteen directors had to be farmers, artisans or manufacturers and the bank was required to lend the equivalent of 10 percent of its capital to farmers on mortgage for at least one year. In later years, banks were established to provide services to even more narrowly defined groups. Within a decade or two, most substantial port cities had banks with names like Merchants Bank, Planters Bank, Farmers Bank, and Mechanics Bank. By 1860 it was common to find banks with names like Leather Manufacturers Bank, Grocers Bank, Drovers Bank, and Importers Bank. Indeed, the Emigrant Savings Bank in New York City served Irish immigrants almost exclusively. In the other instances, it is not known how much of a bank’s lending was directed toward the occupational group included in its name. The adoption of such names may have been marketing ploys as much as mission statements. Only further research will reveal the answer.

New York

State-chartered banking in New York arrived less auspiciously than it had in Philadelphia or Boston. The Bank of New York opened in 1784, but operated without a charter and in open violation of state law until 1791 when the legislature finally sanctioned it. The city’s second bank obtained its charter surreptitiously. Alexander Hamilton was one of the driving forces behind the Bank of New York, and his long-time nemesis, Aaron Burr, was determined to establish a competing bank. Unable to get a charter from a Federalist legislature, Burr and his colleagues petitioned to incorporate a company to supply fresh water to the inhabitants of Manhattan Island. Burr tucked a clause into the charter of the Manhattan Company (the predecessor to today’s Chase Manhattan Bank) granting the water company the right to employ any excess capital in financial transactions. Once chartered, the company’s directors announced that $500,000 of its capital would be invested in banking.7 Thereafter, banking grew more quickly in New York than in Philadelphia, so that by 1812 New York had seven banks compared to the three operating in Philadelphia.

Deposit Insurance

Despite its inauspicious banking beginnings, New York introduced two innovations that influenced American banking down to the present. The Safety Fund system, introduced in 1829, was the nation’s first experiment in bank liability insurance (similar to that provided by the Federal Deposit Insurance Corporation today). The 1829 act authorized the appointment of bank regulators charged with regular inspections of member banks. An equally novel aspect was that it established an insurance fund insuring holders of banknotes and deposits against loss from bank failure. Ultimately, the insurance fund proved insufficient to protect all bank creditors from loss during the panic of 1837, when eleven failures in rapid succession all but bankrupted the fund and delayed noteholder and depositor recoveries for months, even years. Even though the Safety Fund failed to provide its promised protections, it was an important episode in the subsequent evolution of American banking. Several Midwestern states instituted deposit insurance in the early twentieth century, and the federal government adopted it after the banking panics in the 1930s resulted in the failure of thousands of banks in which millions of depositors lost money.

“Free Banking”

Although the Safety Fund was nearly bankrupted in the late 1830s, it continued to insure a number of banks up to the mid 1860s when it was finally closed. No new banks joined the Safety Fund system after 1838 with the introduction of free banking — New York’s second significant banking innovation. Free banking represented a compromise between those most concerned with the underlying safety and stability of the currency and those most concerned with competition and freeing the country’s entrepreneurs from unduly harsh and anticompetitive restraints. Under free banking, a prospective banker could start a bank anywhere he saw fit, provided he met a few regulatory requirements. Each free bank’s capital was invested in state or federal bonds that were turned over to the state’s treasurer. If a bank failed to redeem even a single note into specie, the treasurer initiated bankruptcy proceedings and banknote holders were reimbursed from the sale of the bonds.
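A short sketch of this bond-backed safeguard follows. The bond and note figures are hypothetical, and the recovery rule is simplified (pro-rata payment out of the bond sale proceeds, ignoring the failed bank’s other assets and the costs of winding it up).

```python
# Sketch of the free-banking safeguard described above, with hypothetical
# numbers: the state treasurer sells the failed bank's deposited bonds and
# pays noteholders from the proceeds, pro rata if the proceeds fall short.

def noteholder_recovery(notes_outstanding, bond_sale_proceeds):
    """Payment to noteholders per dollar of notes (capped at 100 cents)."""
    return min(1.0, bond_sale_proceeds / notes_outstanding)

# Bonds deposited at $100,000 par but sold in a downturn for $82,000,
# against $95,000 of notes in circulation:
print(noteholder_recovery(95_000, 82_000))  # ~0.863, about 86 cents per dollar
```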

Actually, Michigan preempted New York’s claim to be the first free-banking state, but Michigan’s 1837 law was modeled closely after a bill then under debate in New York’s legislature. Ultimately, New York’s influence was profound in this as well, because free banking became one of the century’s most widely copied financial innovations. By 1860 eighteen states adopted free banking laws closely resembling New York’s law. Three other states introduced watered-down variants. Eventually, the post-Civil War system of national banking adopted many of the substantive provisions of New York’s 1838 act.

Both the Safety Fund system and free banking were attempts to protect society from losses resulting from bank failures and to entice people to hold financial assets. Banks and bank-supplied currency were novel developments in the hinterlands in the early nineteenth century, and many rural inhabitants were skeptical about the value of small pieces of paper. They were more familiar with gold and silver. Getting them to exchange one for the other was a slow process, and one that relied heavily on trust. But trust was built slowly and destroyed quickly. The failure of a single bank could, in a week, destroy confidence in a system built up over a decade. New York’s experiments were designed to mitigate, if not eliminate, the negative consequences of bank failures. New York’s Safety Fund, then, differed in the details, but not in intent, from New England’s Suffolk system. Bankers and legislators in each region grappled with the difficult issue of protecting a fragile but vital sector of the economy. Each region responded to the problem differently. The South and West settled on yet another solution.

Banking in the South and West

One distinguishing characteristic of southern and western banks was their extensive branch networks. Pennsylvania provided for branch banking in the early nineteenth century, and two banks jointly opened about ten branches. In both instances, however, the branches became a net liability. The Philadelphia Bank opened four branches in 1809 and by 1811 was forced to pass on its semi-annual dividends because losses at the branches offset profits at the Philadelphia office. At bottom, branch losses resulted from a combination of ineffective central-office oversight and unrealistic expectations about the scale and scope of hinterland lending. Philadelphia’s bank directors instructed branch managers to invest in high-grade commercial paper or real bills. Rural banks found a limited number of such lending opportunities and quickly turned to mortgage-based lending. Many of these loans fell into arrears and were ultimately written off when land sales faltered.

Branch Banking

Unlike Pennsylvania, where branch banking failed, branch banks throughout the South and West thrived. The Bank of Virginia, founded in 1804, was the first state-chartered branch bank, and branch banks served the state’s financial needs up to the Civil War. Several small, independent banks were chartered in the 1850s, but they never threatened the dominance of Virginia’s “Big Six” banks. Virginia’s branch banks, unlike Pennsylvania’s, were profitable. In 1821, for example, the net return to capital at the Farmers Bank of Virginia’s home office in Richmond was 5.4 percent. Returns at its branches ranged from a low of 3 percent at Norfolk (which was consistently the low-profit branch) to 9 percent in Winchester. In 1835, the last year the bank reported separate branch statistics, net returns to capital at the Farmers Bank’s branches ranged from 2.9 to 11.7 percent, with an average of 7.9 percent.

The low profits at the Norfolk branch represent a net subsidy from the state’s banking sector to the political system, which was not immune to the same kind of infrastructure boosterism that erupted in New York, Pennsylvania, Maryland and elsewhere. In the immediate post-Revolutionary era, the value of exports shipped from Virginia’s ports (Norfolk and Alexandria) slightly exceeded the value shipped from Baltimore. In the 1790s the numbers turned sharply in Baltimore’s favor, and Virginia entered the internal-improvements craze and the battle for western shipments. Banks represented the first phase of the state’s internal improvements plan because many believed that Baltimore’s new-found advantage resulted from easier credit supplied by the city’s banks. If Norfolk, with one of the best natural harbors on the North American Atlantic coast, was to compete with other port cities, it needed banks, and the state required three of its Big Six branch banks to operate offices there. Despite its natural advantages, Norfolk never became an important entrepôt, and it probably had more bank capital than it required. This pattern was repeated elsewhere. Other states required their branch banks to serve markets such as Memphis, Louisville, Natchez and Mobile that might, with the proper infrastructure, grow into important ports.

State Involvement and Intervention in Banking

The second distinguishing characteristic of southern and western banking was sweeping state involvement and intervention. Virginia, for example, interjected the state into the banking system by taking significant stakes in its first chartered banks (providing an implicit subsidy) and by requiring them, once they established themselves, to subsidize the state’s continuing internal improvements programs of the 1820s and 1830s. Indiana followed a similar strategy, as did Kentucky, Louisiana, Mississippi, Illinois, Tennessee and Georgia in varying degrees. South Carolina followed a wholly different strategy. On one hand, it chartered several banks in which it took no financial interest. On the other, it chartered the Bank of the State of South Carolina, a bank wholly owned by the state and designed to lend to planters and farmers who complained constantly that the state’s existing banks served only the urban mercantile community. The state-owned bank eventually divided its lending among merchants, farmers and artisans and dominated South Carolina’s financial sector.

The 1820s and 1830s witnessed a deluge of new banks in the South and West, with a corresponding increase in state involvement. No state matched Louisiana’s breadth of involvement in the 1830s, when it chartered three distinct types of banks: commercial banks that served merchants and manufacturers; improvement banks that financed various internal improvements projects; and property banks that extended long-term mortgage credit to planters and other property holders. Louisiana’s improvement banks included the New Orleans Canal and Banking Company, which built a canal connecting Lake Pontchartrain to the Mississippi River. The Exchange and Banking Company and the New Orleans Improvement and Banking Company were required to build and operate hotels. The New Orleans Gas Light and Banking Company constructed and operated gas streetlights in New Orleans and five other cities. Finally, the Carrollton Railroad and Banking Company and the Atchafalaya Railroad and Banking Company were rail construction companies whose bank subsidiaries subsidized railroad construction.

“Commonwealth Ideal” and Inflationary Banking

Louisiana’s 1830s banking exuberance reflected what some historians label the “commonwealth ideal” of banking; that is, the promotion of the general welfare through the promotion of banks. Legislatures in the South and West, however, never demonstrated a greater commitment to the commonwealth ideal than during the tough times of the early 1820s. With the collapse of the post-war land boom in 1819, a political coalition of debt-strapped landowners lobbied legislatures throughout the region for relief, and its focus was banking. Relief advocates lobbied for inflationary banking that would reduce the real burden of debts taken on during prior flush times.

Several western states responded to these calls and chartered state-subsidized and state-managed banks designed to reinflate their embattled economies. Chartered in 1821, the Bank of the Commonwealth of Kentucky loaned on mortgages for longer than customary periods, and all Kentucky landowners were eligible for $1,000 loans. The loans allowed landowners to discharge their existing debts without being forced to liquidate their property at ruinously low prices. Although the bank’s notes were not redeemable into specie, they were given currency in two ways. First, they were accepted at the state treasury in tax payments. Second, the state passed a law that forced creditors either to accept the notes in payment of existing debts or to agree to delay collection for two years.

The commonwealth ideal was not unique to Kentucky. During the depression of the 1820s, Tennessee chartered the State Bank of Tennessee, Illinois chartered the State Bank of Illinois and Louisiana chartered the Louisiana State Bank. Although they took slightly different forms, they all had the same intent; namely, to relieve distressed and embarrassed farmers, planters and landowners. What all these banks had in common was the notion that the state should promote the general welfare and economic growth. In this instance, and again during the depression of the 1840s, state-owned banks were organized to minimize the transfer of property when economic conditions demanded wholesale liquidation. Such liquidation would have been inefficient and would have imposed unnecessary hardship on a large fraction of the population. To the extent that hastily chartered relief banks forestalled inefficient liquidation, they served their purpose. Although most of these banks eventually became insolvent, requiring taxpayer bailouts, we cannot label them unsuccessful. They reinflated their economies and allowed for an orderly disposal of property. Determining whether the net benefits were positive or negative requires more research, but for the moment we are forced to accept the possibility that the region’s state-owned banks of the 1820s and 1840s advanced the commonweal.

Conclusion: Banks and Economic Growth

Despite notable differences in the specific form and structure of each region’s banking system, they were all aimed squarely at a common goal; namely, realizing that region’s economic potential. Banks helped achieve the goal in two ways. First, banks monetized economies, which reduced the costs of transacting and helped smooth consumption and production across time. It was no longer necessary for farm families to inventory their entire harvests. They could sell most of the crop and expend the proceeds on consumption goods as the need arose until the next harvest brought a new cash infusion. Crop and livestock inventories were prone to substantial losses, and an increased use of money reduced those losses significantly. Second, banks provided credit, which unleashed entrepreneurial spirits and talents. A complete appreciation of early American banking recognizes the banks’ contribution to antebellum America’s economic growth.

Bibliographic Essay

Because of the large number of sources used to construct this essay, citations are gathered in a brief bibliographic essay rather than scattered through the text, which keeps the essay more readable and less cluttered. A full bibliography is included at the end.

Good general histories of antebellum banking include Dewey (1910), Fenstermaker (1965), Gouge (1833), Hammond (1957), Knox (1903), Redlich (1949), and Trescott (1963). If only one book is read on antebellum banking, Hammond’s (1957) Pulitzer-Prize winning book remains the best choice.

The literature on New England banking is not particularly large, and the more important historical interpretations of state-wide systems include Chadbourne (1936), Hasse (1946, 1957), Simonton (1971), Spencer (1949), and Stokes (1902). Gras (1937) does an excellent job of placing the history of a single bank within the larger regional and national context. In a recent book and a number of articles Lamoreaux (1994 and sources therein) provides a compelling and eminently readable reinterpretation of the region’s banking structure. Nathan Appleton (1831, 1856) provides a contemporary observer’s interpretation, while Walker (1857) provides an entertaining if perverse and satirical history of a fictional New England bank. Martin (1969) provides details of bank share prices and dividend payments from the establishment of the first banks in Boston through the end of the nineteenth century. Less technical studies of the Suffolk system include Lake (1947), Trivoli (1979) and Whitney (1878); more technical interpretations include Calomiris and Kahn (1996), Mullineaux (1987), and Rolnick, Smith and Weber (1998).

The literature on Middle Atlantic banking is huge, but the better state-level histories include Bryan (1899), Daniels (1976), and Holdsworth (1928). The better studies of individual banks include Adams (1978), Lewis (1882), Nevins (1934), and Wainwright (1953). Chaddock (1910) provides a general history of the Safety Fund system. Golembe (1960) places it in the context of modern deposit insurance, while Bodenhorn (1996) and Calomiris (1989) provide modern analyses. A recent revival of interest in free banking has brought about a veritable explosion in the number of studies on the subject, but the better introductory ones remain Rockoff (1974, 1985), Rolnick and Weber (1982, 1983), and Dwyer (1996).

The literature on southern and western banking is large and of highly variable quality, but I have found the following to be the most readable and useful general sources: Caldwell (1935), Duke (1895), Esary (1912), Golembe (1978), Huntington (1915), Green (1972), Lesesne (1970), Royalty (1979), Schweikart (1987) and Starnes (1931).

References and Further Reading

Adams, Donald R., Jr. Finance and Enterprise in Early America: A Study of Stephen Girard’s Bank, 1812-1831. Philadelphia: University of Pennsylvania Press, 1978.

Alter, George, Claudia Goldin and Elyce Rotella. “The Savings of Ordinary Americans: The Philadelphia Saving Fund Society in the Mid-Nineteenth-Century.” Journal of Economic History 54, no. 4 (December 1994): 735-67.

Appleton, Nathan. A Defence of Country Banks: Being a Reply to a Pamphlet Entitled ‘An Examination of the Banking System of Massachusetts, in Reference to the Renewal of the Bank Charters.’ Boston: Stimpson & Clapp, 1831.

Appleton, Nathan. Bank Bills or Paper Currency and the Banking System of Massachusetts with Remarks on Present High Prices. Boston: Little, Brown and Company, 1856.

Berry, Thomas Senior. Revised Annual Estimates of American Gross National Product: Preliminary Estimates of Four Major Components of Demand, 1789-1889. Richmond: University of Richmond Bostwick Paper No. 3, 1978.

Bodenhorn, Howard. “Zombie Banks and the Demise of New York’s Safety Fund.” Eastern Economic Journal 22, no. 1 (1996): 21-34.

Bodenhorn, Howard. “Private Banking in Antebellum Virginia: Thomas Branch & Sons of Petersburg.” Business History Review 71, no. 4 (1997): 513-42.

Bodenhorn, Howard. A History of Banking in Antebellum America: Financial Markets and Economic Development in an Era of Nation-Building. Cambridge and New York: Cambridge University Press, 2000.

Bodenhorn, Howard. State Banking in Early America: A New Economic History. New York: Oxford University Press, 2002.

Bryan, Alfred C. A History of State Banking in Maryland. Baltimore: Johns Hopkins University Press, 1899.

Caldwell, Stephen A. A Banking History of Louisiana. Baton Rouge: Louisiana State University Press, 1935.

Calomiris, Charles W. “Deposit Insurance: Lessons from the Record.” Federal Reserve Bank of Chicago Economic Perspectives 13 (1989): 10-30.

Calomiris, Charles W., and Charles Kahn. “The Efficiency of Self-Regulated Payments Systems: Learnings from the Suffolk System.” Journal of Money, Credit, and Banking 28, no. 4 (1996): 766-97.

Chadbourne, Walter W. A History of Banking in Maine, 1799-1930. Orono: University of Maine Press, 1936.

Chaddock, Robert E. The Safety Fund Banking System in New York, 1829-1866. Washington, D.C.: Government Printing Office, 1910.

Daniels, Belden L. Pennsylvania: Birthplace of Banking in America. Harrisburg: Pennsylvania Bankers Association, 1976.

Davis, Lance, and Robert E. Gallman. “Capital Formation in the United States during the Nineteenth Century.” In Cambridge Economic History of Europe (Vol. 7, Part 2), edited by Peter Mathias and M.M. Postan, 1-69. Cambridge: Cambridge University Press, 1978.

Davis, Lance, and Robert E. Gallman. “Savings, Investment, and Economic Growth: The United States in the Nineteenth Century.” In Capitalism in Context: Essays on Economic Development and Cultural Change in Honor of R.M. Hartwell, edited by John A. James and Mark Thomas, 202-29. Chicago: University of Chicago Press, 1994.

Dewey, Davis R. State Banking before the Civil War. Washington, D.C.: Government Printing Office, 1910.

Duke, Basil W. History of the Bank of Kentucky, 1792-1895. Louisville: J.P. Morton, 1895.

Dwyer, Gerald P., Jr. “Wildcat Banking, Banking Panics, and Free Banking in the United States.” Federal Reserve Bank of Atlanta Economic Review 81, no. 3 (1996): 1-20.

Engerman, Stanley L., and Robert E. Gallman. “U.S. Economic Growth, 1783-1860.” Research in Economic History 8 (1983): 1-46.

Esary, Logan. State Banking in Indiana, 1814-1873. Indiana University Studies No. 15. Bloomington: Indiana University Press, 1912.

Fenstermaker, J. Van. The Development of American Commercial Banking, 1782-1837. Kent, Ohio: Kent State University, 1965.

Fenstermaker, J. Van, and John E. Filer. “Impact of the First and Second Banks of the United States and the Suffolk System on New England Bank Money, 1791-1837.” Journal of Money, Credit, and Banking 18, no. 1 (1986): 28-40.

Friedman, Milton, and Anna J. Schwartz. “Has the Government Any Role in Money?” Journal of Monetary Economics 17, no. 1 (1986): 37-62.

Gallman, Robert E. “American Economic Growth before the Civil War: The Testimony of the Capital Stock Estimates.” In American Economic Growth and Standards of Living before the Civil War, edited by Robert E. Gallman and John Joseph Wallis, 79-115. Chicago: University of Chicago Press, 1992.

Goldsmith, Raymond. Financial Structure and Development. New Haven: Yale University Press, 1969.

Golembe, Carter H. “The Deposit Insurance Legislation of 1933: An Examination of its Antecedents and Purposes.” Political Science Quarterly 76, no. 2 (1960): 181-200.

Golembe, Carter H. State Banks and the Economic Development of the West. New York: Arno Press, 1978.

Gouge, William M. A Short History of Paper Money and Banking in the United States. Philadelphia: T.W. Ustick, 1833.

Gras, N.S.B. The Massachusetts First National Bank of Boston, 1784-1934. Cambridge, MA: Harvard University Press, 1937.

Green, George D. Finance and Economic Development in the Old South: Louisiana Banking, 1804-1861. Stanford: Stanford University Press, 1972.

Hammond, Bray. Banks and Politics in America from the Revolution to the Civil War. Princeton: Princeton University Press, 1957.

Hasse, William F., Jr. A History of Banking in New Haven, Connecticut. New Haven: privately printed, 1946.

Hasse, William F., Jr. A History of Money and Banking in Connecticut. New Haven: privately printed, 1957.

Holdsworth, John Thom. Financing an Empire: History of Banking in Pennsylvania. Chicago: S.J. Clarke Publishing Company, 1928.

Huntington, Charles Clifford. A History of Banking and Currency in Ohio before the Civil War. Columbus: F. J. Herr Printing Company, 1915.

Knox, John Jay. A History of Banking in the United States. New York: Bradford Rhodes & Company, 1903.

Kuznets, Simon. “Foreword.” In Financial Intermediaries in the American Economy, by Raymond W. Goldsmith. Princeton: Princeton University Press, 1958.

Lake, Wilfred. “The End of the Suffolk System.” Journal of Economic History 7, no. 4 (1947): 183-207.

Lamoreaux, Naomi R. Insider Lending: Banks, Personal Connections, and Economic Development in Industrial New England. Cambridge: Cambridge University Press, 1994.

Lesesne, J. Mauldin. The Bank of the State of South Carolina. Columbia: University of South Carolina Press, 1970.

Lewis, Lawrence, Jr. A History of the Bank of North America: The First Bank Chartered in the United States. Philadelphia: J.B. Lippincott & Company, 1882.

Lockard, Paul A. Banks, Insider Lending and Industries of the Connecticut River Valley of Massachusetts, 1813-1860. Unpublished Ph.D. thesis, University of Massachusetts, 2000.

Martin, Joseph G. A Century of Finance. New York: Greenwood Press, 1969.

Moulton, H.G. “Commercial Banking and Capital Formation.” Journal of Political Economy 26 (1918): 484-508, 638-63, 705-31, 849-81.

Mullineaux, Donald J. “Competitive Monies and the Suffolk Banking System: A Contractual Perspective.” Southern Economic Journal 53 (1987): 884-98.

Nevins, Allan. History of the Bank of New York and Trust Company, 1784 to 1934. New York: privately printed, 1934.

New York. Bank Commissioners. “Annual Report of the Bank Commissioners.” New York General Assembly Document No. 74. Albany, 1835.

North, Douglass. “Institutional Change in American Economic History.” In American Economic Development in Historical Perspective, edited by Thomas Weiss and Donald Schaefer, 87-98. Stanford: Stanford University Press, 1994.

Rappaport, George David. Stability and Change in Revolutionary Pennsylvania: Banking, Politics, and Social Structure. University Park, PA: The Pennsylvania State University Press, 1996.

Redlich, Fritz. The Molding of American Banking: Men and Ideas. New York: Hafner Publishing Company, 1947.

Rockoff, Hugh. “The Free Banking Era: A Reexamination.” Journal of Money, Credit, and Banking 6, no. 2 (1974): 141-67.

Rockoff, Hugh. “New Evidence on the Free Banking Era in the United States.” American Economic Review 75, no. 4 (1985): 886-89.

Rolnick, Arthur J., and Warren E. Weber. “Free Banking, Wildcat Banking, and Shinplasters.” Federal Reserve Bank of Minneapolis Quarterly Review 6 (1982): 10-19.

Rolnick, Arthur J., and Warren E. Weber. “New Evidence on the Free Banking Era.” American Economic Review 73, no. 5 (1983): 1080-91.

Rolnick, Arthur J., Bruce D. Smith, and Warren E. Weber. “Lessons from a Laissez-Faire Payments System: The Suffolk Banking System (1825-58).” Federal Reserve Bank of Minneapolis Quarterly Review 22, no. 3 (1998): 11-21.

Royalty, Dale. “Banking and the Commonwealth Ideal in Kentucky, 1806-1822.” Register of the Kentucky Historical Society 77 (1979): 91-107.

Schumpeter, Joseph A. The Theory of Economic Development: An Inquiry into Profit, Capital, Credit, Interest, and the Business Cycle. Cambridge, MA: Harvard University Press, 1934.

Schweikart, Larry. Banking in the American South from the Age of Jackson to Reconstruction. Baton Rouge: Louisiana State University Press, 1987.

Simonton, William G. Maine and the Panic of 1837. Unpublished master’s thesis: University of Maine, 1971.

Sokoloff, Kenneth L. “Productivity Growth in Manufacturing during Early Industrialization.” In Long-Term Factors in American Economic Growth, edited by Stanley L. Engerman and Robert E. Gallman. Chicago: University of Chicago Press, 1986.

Sokoloff, Kenneth L. “Invention, Innovation, and Manufacturing Productivity Growth in the Antebellum Northeast.” In American Economic Growth and Standards of Living before the Civil War, edited by Robert E. Gallman and John Joseph Wallis, 345-78. Chicago: University of Chicago Press, 1992.

Spencer, Charles, Jr. The First Bank of Boston, 1784-1949. New York: Newcomen Society, 1949.

Starnes, George T. Sixty Years of Branch Banking in Virginia. New York: Macmillan Company, 1931.

Stokes, Howard Kemble. Chartered Banking in Rhode Island, 1791-1900. Providence: Preston & Rounds Company, 1902.

Sylla, Richard. “Forgotten Men of Money: Private Bankers in Early U.S. History.” Journal of Economic History 36, no. 2 (1976).

Temin, Peter. The Jacksonian Economy. New York: W. W. Norton & Company, 1969.

Trescott, Paul B. Financing American Enterprise: The Story of Commercial Banking. New York: Harper & Row, 1963.

Trivoli, George. The Suffolk Bank: A Study of a Free-Enterprise Clearing System. London: The Adam Smith Institute, 1979.

U.S. Comptroller of the Currency. Annual Report of the Comptroller of the Currency. Washington, D.C.: Government Printing Office, 1931.

Wainwright, Nicholas B. History of the Philadelphia National Bank. Philadelphia: William F. Fell Company, 1953.

Walker, Amasa. History of the Wickaboag Bank. Boston: Crosby, Nichols & Company, 1857.

Wallis, John Joseph. “What Caused the Panic of 1839?” Unpublished working paper, University of Maryland, October 2000.

Weiss, Thomas. “U.S. Labor Force Estimates and Economic Growth, 1800-1860.” In American Economic Growth and Standards of Living before the Civil War, edited by Robert E. Gallman and John Joseph Wallis, 19-75. Chicago: University of Chicago Press, 1992.

Whitney, David R. The Suffolk Bank. Cambridge, MA: Riverside Press, 1878.

Wright, Robert E. “Artisans, Banks, Credit, and the Election of 1800.” The Pennsylvania Magazine of History and Biography 122, no. 3 (July 1998), 211-239.

Wright, Robert E. “Bank Ownership and Lending Patterns in New York and Pennsylvania, 1781-1831.” Business History Review 73, no. 1 (Spring 1999), 40-60.

1 Banknotes were small-denomination IOUs printed by banks and circulated as currency. Modern U.S. currency consists simply of banknotes issued by the Federal Reserve, which has a monopoly privilege in the issue of legal tender currency. In antebellum America, when a bank made a loan, the borrower was typically handed banknotes with a face value equal to the dollar value of the loan. The borrower then spent these banknotes on goods and services, putting them into circulation. Contemporary law held that banks were required to redeem banknotes into gold and silver legal tender on demand. Banks found it profitable to issue notes because they typically held reserves equal to only about 30 percent of the total value of banknotes in circulation. Thus, banks were able to leverage $30 in gold and silver into $100 in loans that returned about 7 percent interest on average.
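The arithmetic in this note can be restated as a short illustrative calculation. The sketch below is only a back-of-the-envelope restatement of the figures already cited (a roughly 30 percent reserve ratio and a 7 percent average loan rate); the function name and parameters are hypothetical conveniences, not historical quantities.

```python
# A back-of-the-envelope sketch of the note-issue arithmetic described in note 1.
# The 30% reserve ratio and 7% loan rate are the approximate figures cited in the
# text; the function and its parameters are illustrative, not historical data.

def gross_return_on_specie(specie, reserve_ratio=0.30, loan_rate=0.07):
    """Annual interest earned per dollar of gold and silver held as reserves."""
    notes_in_circulation = specie / reserve_ratio       # e.g. $30 supports roughly $100 in notes
    annual_interest = notes_in_circulation * loan_rate  # interest on the loans those notes fund
    return annual_interest / specie

print(round(gross_return_on_specie(30.0), 3))  # ~0.233, i.e. roughly 23% on the specie held
```

On these assumptions, $30 of specie supports about $100 in circulating notes, so the roughly $7 of annual interest amounts to a return of about 23 percent on the specie actually held.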

2 Paul Lockard (2000) challenges Lamoreaux’s interpretation. In a study of four banks in the Connecticut River valley, Lockard finds that insiders did not dominate these banks’ resources. As provocative as Lockard’s findings are, he draws conclusions from a small and unrepresentative sample. Two of his four sample banks were savings banks, quasi-charitable organizations intended to encourage savings by the working classes and to provide small loans. Thus, Lockard’s sample is effectively reduced to two banks. At these two banks, he identifies about 10 percent of loans as insider loans, but readily admits that he cannot always distinguish between insiders and outsiders. For a recent study of how early Americans used savings banks, see Alter, Goldin and Rotella (1994). The literature on savings banks is so large that it cannot be given its due here.

3 Interbank clearing involves the settling of balances between banks. Modern banks cash checks drawn on other banks and credit the funds to the depositor. The Federal Reserve System provides clearing services between banks: the accepting bank sends the checks to the Federal Reserve, which credits the sending bank’s account and forwards the checks to the banks on which they were drawn for reimbursement. In the antebellum era, interbank clearing involved sending banknotes back to issuing banks. Because New England had so many small and scattered banks, the costs of returning banknotes to their issuers were large, and banks sometimes avoided them by recirculating the notes of distant banks rather than returning them. Regular clearings and redemptions served an important purpose, however, because they kept banks in touch with current market conditions. A massive redemption of notes was indicative of a declining demand for money and credit. Because a bank’s reserves were drawn down with the redemptions, it was forced to reduce its volume of loans in accord with changing demand conditions.
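The redemption mechanism described in this note can be illustrated with a small, purely hypothetical example; the class, its figures, and the 30 percent reserve target (which echoes note 1) are illustrative assumptions, not drawn from the historical record.

```python
# A minimal, hypothetical sketch of how note redemptions drained reserves and
# forced a bank to contract its lending. Figures are illustrative only.

class IssuingBank:
    def __init__(self, specie, notes_outstanding, reserve_target=0.30):
        self.specie = specie            # gold and silver reserves on hand
        self.notes = notes_outstanding  # banknotes in circulation, backing loans
        self.reserve_target = reserve_target

    def redeem(self, amount):
        """Pay out specie for notes presented for redemption."""
        self.specie -= amount
        self.notes -= amount

    def sustainable_notes(self):
        """Circulation (and hence lending) the remaining reserves can support."""
        return self.specie / self.reserve_target

bank = IssuingBank(specie=30.0, notes_outstanding=100.0)
bank.redeem(9.0)  # a wave of redemptions drains $9 of specie
# Reserves fall to $21, which supports only about $70 in notes, so roughly
# $21 of the remaining $91 circulation (and the loans behind it) must run off.
print(bank.sustainable_notes())  # 70.0
```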

4 The law held that banknotes were redeemable on demand into gold or silver coin or bullion. If a bank refused to redeem even a single $1 banknote, the banknote holder could have the bank closed and liquidated to recover his or her claim against it.

5 Rappaport (1996) found that the bank’s loans were about equally divided between insiders (shareholders and shareholders’ family and business associates) and outsiders, but nonshareholders received loans about 30 percent smaller than shareholders. Whether this was an “insider” bank remains an open question, and the answer depends largely on one’s definition. Any modern bank that made half of its loans to shareholders and their families would be viewed as an “insider” bank. It is less clear where the line can be usefully drawn for antebellum banks.

6 Real-bills lending followed from a nineteenth-century banking philosophy, which held that bank lending should be used to finance the warehousing or wholesaling of already-produced goods. Loans made on these bases were thought to be self-liquidating in that the loan was made against readily sold collateral actually in the hands of a merchant. Under the real-bills doctrine, the banks’ proper functions were to bridge the gap between production and retail sale of goods. A strict adherence to real-bills tenets excluded loans on property (mortgages), loans on goods in process (trade credit), or loans to start-up firms (venture capital). Thus, real-bills lending prescribed a limited role for banks and bank credit. Few banks were strict adherents to the doctrine, but many followed it in large part.

7 Robert E. Wright (1998) offers a different interpretation, but notes that Burr pushed the bill through at the end of a busy legislative session, so that many legislators voted on it without having read it thoroughly.

Citation: Bodenhorn, Howard. “Antebellum Banking in the United States”. EH.Net Encyclopedia, edited by Robert Whaples. March 26, 2008. URL http://eh.net/encyclopedia/antebellum-banking-in-the-united-states/