
Antebellum Banking in the United States

Howard Bodenhorn, Lafayette College

The first legitimate commercial bank in the United States was the Bank of North America, founded in 1781. Encouraged by Alexander Hamilton, Robert Morris persuaded the Continental Congress to charter the bank, which lent to the cash-strapped Revolutionary government as well as to private citizens, mostly Philadelphia merchants. The possibilities of commercial banking had been widely recognized by many colonists, but British law forbade the establishment of commercial, limited-liability banks in the colonies. Given that many of the colonists’ grievances against Parliament centered on economic and monetary issues, it is not surprising that one of the earliest acts of the Continental Congress was the establishment of a bank.

The introduction of banking to the U.S. was viewed as an important first step in forming an independent nation because banks supplied a medium of exchange (banknotes[1] and deposits) in an economy perpetually strangled by shortages of specie money and credit, because they animated industry, and because they fostered wealth creation and promoted well-being. In the last case, contemporaries typically viewed banks as an integral part of a wider system of government-sponsored commercial infrastructure. Like schools, bridges, roads, canals, river clearing and harbor improvements, banks were expected to benefit everyone even if dividends accrued only to shareholders.

Financial Sector Growth

By 1800 each major U.S. port city had at least one commercial bank serving the local mercantile community. As city banks proved themselves, banking spread into smaller cities and towns, where banks expanded their clientele. Although most banks specialized in mercantile lending, others served artisans and farmers. In 1820 there were 327 commercial banks, as well as several mutual savings banks that promoted thrift among the poor. Thus, at the onset of the antebellum period (defined here as the period between 1820 and 1860), urban residents were familiar with the intermediary function of banks and used bank-supplied currencies (deposits and banknotes) for most transactions. Table 1 reports the number of banks and the value of loans outstanding at year-end between 1820 and 1860. During the era, the number of banks increased from 327 to 1,562 and total loans increased from just over $55.1 million to $691.9 million. Bank-supplied credit in the U.S. economy increased at a remarkable average annual rate of 6.3 percent. Growth in the financial sector, then, outpaced growth in aggregate economic activity; nominal gross domestic product increased at an average annual rate of about 4.3 percent over the same interval. This essay discusses how regional regulatory structures evolved as the banking sector grew and radiated out from northeastern cities to the hinterlands.

Table 1

Number of Banks and Total Loans, 1820-1860

Year Banks Loans ($ millions)
1820 327 55.1
1821 273 71.9
1822 267 56.0
1823 274 75.9
1824 300 73.8
1825 330 88.7
1826 331 104.8
1827 333 90.5
1828 355 100.3
1829 369 103.0
1830 381 115.3
1831 424 149.0
1832 464 152.5
1833 517 222.9
1834 506 324.1
1835 704 365.1
1836 713 457.5
1837 788 525.1
1838 829 485.6
1839 840 492.3
1840 901 462.9
1841 784 386.5
1842 692 324.0
1843 691 254.5
1844 696 264.9
1845 707 288.6
1846 707 312.1
1847 715 310.3
1848 751 344.5
1849 782 332.3
1850 824 364.2
1851 879 413.8
1852 913 429.8
1853 750 408.9
1854 1208 557.4
1855 1307 576.1
1856 1398 634.2
1857 1416 684.5
1858 1422 583.2
1859 1476 657.2
1860 1562 691.9

Sources: Fenstermaker (1965); U.S. Comptroller of the Currency (1931).
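The growth rate cited above can be checked directly from the table’s endpoints. A minimal sketch in Python: the 6.3 percent figure corresponds to the continuously-compounded (log) rate, while simple compound growth comes out slightly higher.

```python
import math

# Endpoints from Table 1 (loans outstanding, $ millions)
loans_1820, loans_1860 = 55.1, 691.9
years = 1860 - 1820

log_rate = math.log(loans_1860 / loans_1820) / years          # continuously compounded
compound_rate = (loans_1860 / loans_1820) ** (1 / years) - 1  # geometric average

print(f"log rate: {log_rate:.2%}")            # ~6.33%, the 6.3 percent cited above
print(f"compound rate: {compound_rate:.2%}")  # ~6.53%
```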

Adaptability

As important as early American banks were in the process of capital accumulation, perhaps their most notable feature was their adaptability. Kuznets (1958) argues that one measure of the financial sector’s value is how, and to what extent, it evolves with changing economic conditions. Put in place to perform certain functions under one set of economic circumstances, how did it alter its behavior and serve the needs of borrowers as circumstances changed? One benefit of the federalist U.S. political system was that states were free to establish banking systems reflecting local needs and preferences. While the political structure deserves credit for promoting regional adaptations, North (1994) credits the adaptability of America’s formal rules and informal constraints, which rewarded adventurism in the economic, as well as the noneconomic, sphere. Differences in geography, climate, crop mix, manufacturing activity, population density and a host of other variables were reflected in different state banking systems. Rhode Island’s banks bore little resemblance to those in faraway Louisiana or Missouri, or even to those in neighboring Connecticut. Each state’s banks took a different form, but their purpose was the same; namely, to provide the state’s citizens with monetary and intermediary services and to promote the general economic welfare. This section provides a sketch of regional differences. A more detailed discussion can be found in Bodenhorn (2002).

State Banking in New England

New England’s banks most resemble the common conception of the antebellum bank. They were relatively small unit banks; their stock was closely held; they granted loans to local farmers, merchants and artisans with whom the banks’ managers had more than a passing familiarity; and the state took little direct interest in their daily operations.

Of the banking systems put in place in the antebellum era, New England’s is typically viewed as the most stable and conservative. Friedman and Schwartz (1986) attribute this stability to an Old World concern with business reputations, familial ties, and personal legacies. New England was long settled, its society well established, and its business community mature and respected throughout the Atlantic trading network. Wealthy businessmen and bankers with strong ties to the community, like the Browns of Providence or the Bowdoins of Boston, emphasized stability not just because doing so benefited and reflected well on them, but because they realized that bad banking was bad for everyone’s business.

Besides their reputation for soundness, the two defining characteristics of New England’s early banks were their insider nature and their small size. The typical New England bank was small compared to banks in other regions. Table 2 shows that in 1820 the average Massachusetts country bank was about the same size as a Pennsylvania country bank, but both were only about half the size of a Virginia bank. A Rhode Island bank was about one-third the size of a Massachusetts or Pennsylvania bank and a mere one-sixth as large as Virginia’s banks. By 1850 the average Massachusetts bank had declined in relative size, operating on about two-thirds the paid-in capital of a Pennsylvania country bank. Rhode Island’s banks also shrank relative to Pennsylvania’s and were tiny compared to the large branch banks in the South and West.

Table 2

Average Bank Size by Capital and Lending in 1820 and 1850 Selected States and Cities

(in $ thousands)

                        1820                1850
                    Capital   Loans     Capital   Loans
Massachusetts $374.5 $480.4 $293.5 $494.0
except Boston 176.6 230.8 170.3 281.9
Rhode Island 95.7 103.2 186.0 246.2
except Providence 60.6 72.0 79.5 108.5
New York na na 246.8 516.3
except NYC na na 126.7 240.1
Pennsylvania 221.8 262.9 340.2 674.6
except Philadelphia 162.6 195.2 246.0 420.7
Virginia 1,2 351.5 340.0 270.3 504.5
South Carolina 2 na na 938.5 1,471.5
Kentucky 2 na na 439.4 727.3

Notes: 1 Virginia figures for 1822. 2 Figures represent branch averages.

Source: Bodenhorn (2002).

Explanations for New England Banks’ Relatively Small Size

Several explanations have been offered for the relatively small size of New England’s banks. Contemporaries attributed it to the New England states’ propensity to tax bank capital, which was thought to work to the detriment of large banks. Because large banks circulated fewer banknotes per dollar of capital, the argument ran, a capital tax amounted to a progressive tax that fell disproportionately on them. Data compiled from Massachusetts’s bank reports suggest that large banks were not, in fact, disadvantaged by the capital tax. Large banks did, as contemporaries believed, pay higher taxes per dollar of circulating banknotes, but a better benchmark is the tax-to-loan ratio, because large banks made more use of deposits than small banks. The tax-to-loan ratio was remarkably constant across both bank size and time, averaging just 0.6 percent between 1834 and 1855. Moreover, there is evidence of constant to modestly increasing returns to scale in New England banking. Large banks were generally at least as profitable as small banks in all years between 1834 and 1860, and slightly more so in many.
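The logic of the tax argument can be made concrete with a toy example. The balance sheets and the one percent tax rate below are hypothetical, chosen only to illustrate why a flat tax on capital looks progressive when measured per dollar of circulation but roughly flat when measured per dollar of loans:

```python
# Toy illustration: hypothetical balance sheets, not data from the bank reports.
TAX_RATE = 0.01  # assumed flat annual tax per dollar of capital

banks = {
    # name: (capital, banknotes in circulation, deposits), all in dollars
    "small country bank": (100_000, 80_000, 20_000),
    "large city bank":    (500_000, 250_000, 350_000),
}

for name, (capital, notes, deposits) in banks.items():
    # crude loan figure: capital plus note and deposit liabilities, ignoring reserves
    loans = capital + notes + deposits
    tax = TAX_RATE * capital
    print(f"{name}: tax per $ of notes = {tax / notes:.2%}, "
          f"tax per $ of loans = {tax / loans:.2%}")
```

On these assumed figures, the large bank pays 2.00 percent per dollar of notes against the small bank’s 1.25 percent, yet both pay roughly half a percent per dollar of loans, consistent with the flat tax-to-loan ratio found in the Massachusetts reports.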

Lamoreaux (1994) offers a different explanation for the modest size of the region’s banks. New England’s banks, she argues, were not impersonal financial intermediaries. Rather, they acted as the financial arms of extended kinship trading networks. Throughout the antebellum era banks catered to insiders: directors, officers, shareholders, or the business partners and kin of directors, officers and shareholders. Such preferences toward insiders perpetuated the eighteenth-century custom of pooling capital to finance family enterprises. In the nineteenth century the practice continued under corporate auspices. The corporate form, in fact, facilitated raising capital in greater amounts than the family unit could raise on its own. But because the banks kept their loans within a relatively small circle of business connections, it was not until the late nineteenth century that bank size increased.[2]

Once the kinship orientation of the region’s banks was established, it perpetuated itself. When outsiders could not obtain loans from existing insider organizations, they formed their own insider banks. In doing so the promoters assured themselves of a steady supply of credit and created engines of economic mobility for kinship networks formerly closed off from many sources of credit. State legislatures accommodated the practice through their liberal chartering policies. By 1860, Rhode Island had 91 banks, Maine had 68, New Hampshire 51, Vermont 44, Connecticut 74 and Massachusetts 178.

The Suffolk System

One of the most commented-on characteristics of New England’s banking system was its unique regional banknote redemption and clearing mechanism. Established by the Suffolk Bank of Boston in the early 1820s, the system became known as the Suffolk System. With so many banks in New England, each issuing its own form of currency, it was sometimes difficult for merchants, farmers, artisans, and even other bankers to discriminate between real and bogus banknotes, or between good and bad bankers. Moreover, the rural-urban terms of trade pulled most banknotes toward the region’s port cities. Because country merchants and farmers were typically indebted to city merchants, country banknotes tended to flow toward the cities, Boston more so than any other. By the second decade of the nineteenth century, country banknotes had become a constant irritant for city bankers, who believed that country issues displaced Boston banknotes in local transactions. More irritating still was the constant demand by the city banks’ customers that country banknotes be accepted on deposit, which placed the burden of interbank clearing on the city banks.[3]

In 1803 the city banks embarked on a first attempt to deal with country banknotes. They joined together, bought up a large quantity of country banknotes, and returned them to the country banks for redemption into specie. This effort to reduce country banknote circulation encountered so many obstacles that it was quickly abandoned. Several other schemes were hatched in the next two decades, but none proved any more successful than the 1803 plan.

The Suffolk Bank was chartered in 1818 and within a year embarked on a novel scheme to deal with the influx of country banknotes. The Suffolk sponsored a consortium of Boston banks in which each member appointed the Suffolk as its lone agent in the collection and redemption of country banknotes. In addition, each city bank contributed to a fund used to purchase and redeem country banknotes. When the Suffolk collected a large quantity of a country bank’s notes, it presented them for immediate redemption with an ultimatum: join a regular and organized redemption system or be subject to further unannounced redemption calls.[4] Country banks objected to the Suffolk’s proposal because it required them to keep non-interest-earning assets on deposit with the Suffolk in amounts equal to their average weekly redemptions at the city banks. Most country banks initially refused to join the redemption network, but after the Suffolk made good on a few redemption threats, the system achieved near universal membership.

Early interpretations of the Suffolk system, like those of Redlich (1949) and Hammond (1957), portray the Suffolk as a proto-central bank that exercised a restraining influence and some control over the region’s banking system and money supply. Recent studies are less quick to pronounce the Suffolk a successful experiment in early central banking. Mullineaux (1987) argues that the Suffolk’s redemption system was actually self-defeating. Instead of making country banknotes less desirable in Boston, ready redeemability there made them perfect substitutes for banknotes issued by Boston’s prestigious banks. The policy thus made country banknotes more desirable, which made it more, not less, difficult for Boston’s banks to keep their own notes in circulation.

Fenstermaker and Filer (1986) also contest the long-held view that the Suffolk exercised control over the region’s money supply (banknotes and deposits). Indeed, the Suffolk’s system was self-defeating in this regard as well. By increasing confidence in the value of any banknote a person happened to encounter, the system made people willing to hold larger banknote issues. In an interesting twist on the traditional interpretation, New England may have grown increasingly backward financially as a direct result of the region’s unique clearing system. Because banknotes were viewed as relatively safe and easily redeemed, the next big financial innovation, deposit banking, lagged far behind in New England relative to other regions. With such wide acceptance of banknotes, there was no reason for banks to encourage the use of deposits and little reason for consumers to switch over.

Summary: New England Banks

New England’s banking system can be summarized as follows: small unit banks predominated; many banks catered to small groups of capitalists bound by personal and familial ties; banking became increasingly interconnected with other lines of business, such as insurance, shipping and manufacturing; the state took little direct interest in the daily operations of the banks, its supervisory role amounting to little more than a demand that every bank submit an unaudited balance sheet at year’s end; and the Suffolk developed an interbank clearing system that facilitated the use of banknotes throughout the region but exercised little effective control over the region’s money supply.

Banking in the Middle Atlantic Region

Pennsylvania

After 1810 or so, many bank charters were granted in New England, but not because of any presumption that the bank would promote the commonweal. Charters were granted for the personal gain of the promoter and the shareholders and in proportion to the personal, political and economic influence of the bank’s founders. No New England state took a significant financial stake in its banks. In both respects, New England differed markedly from states in other regions. From the beginning of state-chartered commercial banking in Pennsylvania, the state took a direct interest in the operations and profits of its banks. The Bank of North America was the obvious case: it was chartered to provide support to the colonial belligerents and the fledgling nation. Because the bank was popularly perceived to be dominated by Philadelphia’s Federalist merchants, who rarely loaned to outsiders, support for the bank waned.[5] After a pitched political battle in which the Bank of North America’s charter was revoked and reinstated, the legislature chartered the Bank of Pennsylvania in 1793. As its name implies, this bank became the financial arm of the state. Pennsylvania subscribed $1 million of the bank’s capital, giving the state the right to appoint six of thirteen directors and a $500,000 line of credit. The bank benefited by becoming the state’s fiscal agent, which guaranteed a constant inflow of deposits from regular treasury operations as well as western land sales.

By 1803 the demand for loans outstripped the existing banks’ supply, and promoters of a new bank, the Philadelphia Bank, petitioned the legislature for a charter. The existing banks lobbied against the charter and nearly sank the new bank’s chances, until its promoters established a precedent that lasted throughout the antebellum era: they bribed the legislature with a payment of $135,000 in return for the charter, handed over one-sixth of the bank’s shares, and opened a line of credit for the state.

Between 1803 and 1814, the only other bank chartered in Pennsylvania was the Farmers and Mechanics Bank of Philadelphia, which established a second substantive precedent that persisted throughout the era. Existing banks followed a strict real-bills lending policy, restricting lending to merchants at very short terms of 30 to 90 days.[6] Their adherence to a real-bills philosophy left a growing community of artisans, manufacturers and farmers on the outside looking in. The Farmers and Mechanics Bank was chartered to serve these excluded groups. At least seven of its thirteen directors had to be farmers, artisans or manufacturers, and the bank was required to lend the equivalent of 10 percent of its capital to farmers on mortgage for at least one year. In later years, banks were established to provide services to even more narrowly defined groups. Within a decade or two, most substantial port cities had banks with names like Merchants Bank, Planters Bank, Farmers Bank, and Mechanics Bank. By 1860 it was common to find banks with names like Leather Manufacturers Bank, Grocers Bank, Drovers Bank, and Importers Bank. The Emigrant Savings Bank in New York City served Irish immigrants almost exclusively; in most other instances, however, it is not known how much of a bank’s lending was directed toward the occupational group included in its name. Such names may have been marketing ploys as much as mission statements. Only further research will reveal the answer.

New York

State-chartered banking in New York arrived less auspiciously than it had in Philadelphia or Boston. The Bank of New York opened in 1784, but operated without a charter and in open violation of state law until 1791, when the legislature finally sanctioned it. The city’s second bank obtained its charter surreptitiously. Alexander Hamilton was one of the driving forces behind the Bank of New York, and his long-time nemesis, Aaron Burr, was determined to establish a competing bank. Unable to get a charter from a Federalist legislature, Burr and his colleagues petitioned to incorporate a company to supply fresh water to the inhabitants of Manhattan Island. Burr tucked a clause into the charter of the Manhattan Company (the predecessor to today’s Chase Manhattan Bank) granting the water company the right to employ any excess capital in financial transactions. Once chartered, the company’s directors announced that $500,000 of its capital would be invested in banking.[7] Thereafter, banking grew more quickly in New York than in Philadelphia, so that by 1812 New York had seven banks compared to the three operating in Philadelphia.

Deposit Insurance

Despite its inauspicious banking beginnings, New York introduced two innovations that influenced American banking down to the present. The Safety Fund system, introduced in 1829, was the nation’s first experiment in bank liability insurance (similar to that provided by the Federal Deposit Insurance Corporation today). The 1829 act authorized the appointment of bank regulators charged with regular inspections of member banks. An equally novel aspect was its insurance fund, which insured holders of banknotes and deposits against loss from bank failure. Ultimately, the fund proved insufficient to protect all bank creditors: during the panic of 1837, eleven failures in rapid succession all but bankrupted it, delaying noteholder and depositor recoveries for months, even years. Even though the Safety Fund failed to provide its promised protections, it was an important episode in the subsequent evolution of American banking. Several Midwestern states instituted deposit insurance in the early twentieth century, and the federal government adopted it after the banking panics of the 1930s resulted in the failure of thousands of banks in which millions of depositors lost money.

“Free Banking”

Although the Safety Fund was nearly bankrupted in the late 1830s, it continued to insure a number of banks up to the mid-1860s, when it was finally closed. No new banks joined the Safety Fund system after 1838, the year New York introduced its second significant banking innovation: free banking. Free banking represented a compromise between those most concerned with the underlying safety and stability of the currency and those most concerned with competition and with freeing the country’s entrepreneurs from unduly harsh and anticompetitive restraints. Under free banking, a prospective banker could start a bank anywhere he saw fit, provided he met a few regulatory requirements. Each free bank’s capital was invested in state or federal bonds that were turned over to the state’s treasurer. If a bank failed to redeem even a single note into specie, the treasurer initiated bankruptcy proceedings, and banknote holders were reimbursed from the sale of the bonds.
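The bond-collateral mechanism implies a simple recovery rule for the noteholders of a failed free bank. A minimal sketch, assuming noteholders are paid pro rata from the bond sale and ignoring liquidation costs and accrued interest (both real-world complications):

```python
def noteholder_recovery(bond_proceeds: float, notes_outstanding: float) -> float:
    """Fraction of face value recovered by noteholders when the state
    treasurer sells a failed free bank's collateral bonds. Simplified:
    pro-rata payment, no liquidation costs, no accrued interest."""
    return min(bond_proceeds / notes_outstanding, 1.0)

# Illustration: $100,000 in notes backed by bonds that fetch only $80,000
# at auction leaves noteholders with 80 cents on the dollar.
print(noteholder_recovery(80_000.0, 100_000.0))  # 0.8
```

The design choice matters: recovery depends on the market price of the collateral bonds at liquidation, which is why bond depreciation could still leave noteholders with losses.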

Michigan actually preempted New York’s claim to be the first free-banking state, but Michigan’s 1837 law was modeled closely on a bill then under debate in New York’s legislature. Ultimately, New York’s influence was profound here as well, because free banking became one of the century’s most widely copied financial innovations. By 1860 eighteen states had adopted free banking laws closely resembling New York’s, and three others had introduced watered-down variants. Eventually, the post-Civil War system of national banking adopted many of the substantive provisions of New York’s 1838 act.

Both the Safety Fund system and free banking were attempts to protect society from losses resulting from bank failures and to entice people to hold financial assets. Banks and bank-supplied currency were novel developments in the hinterlands in the early nineteenth century, and many rural inhabitants were skeptical about the value of small pieces of paper, being more familiar with gold and silver. Getting them to exchange one for the other was a slow process, and one that relied heavily on trust. But trust was built slowly and destroyed quickly. The failure of a single bank could, in a week, destroy confidence in a system built up over a decade. New York’s experiments were designed to mitigate, if not eliminate, the negative consequences of bank failures. New York’s Safety Fund, then, differed in detail, but not in intent, from New England’s Suffolk system. Bankers and legislators in each region grappled with the difficult issue of protecting a fragile but vital sector of the economy. Each region responded to the problem differently. The South and West settled on yet another solution.

Banking in the South and West

One distinguishing characteristic of southern and western banks was their extensive branch networks. Pennsylvania provided for branch banking in the early nineteenth century, and two banks jointly opened about ten branches. In both instances, however, the branches became a net liability. The Philadelphia Bank opened four branches in 1809 and by 1811 was forced to pass on its semi-annual dividends because losses at the branches offset profits at the Philadelphia office. At bottom, branch losses resulted from a combination of ineffective central-office oversight and unrealistic expectations about the scale and scope of hinterland lending. Philadelphia’s bank directors instructed branch managers to invest in high-grade commercial paper or real bills. Rural banks found a limited number of such lending opportunities and quickly turned to mortgage-based lending. Many of these loans fell into arrears and were ultimately written off when land sales faltered.

Branch Banking

Unlike Pennsylvania, where branch banking failed, branch banks throughout the South and West thrived. The Bank of Virginia, founded in 1804, was the first state-chartered branch bank, and up to the Civil War branch banks served the state’s financial needs. Several small, independent banks were chartered in the 1850s, but they never threatened the dominance of Virginia’s “Big Six” banks. Virginia’s branch banks, unlike Pennsylvania’s, were profitable. In 1821, for example, the net return to capital at the Farmers Bank of Virginia’s home office in Richmond was 5.4 percent. Returns at its branches ranged from a low of 3 percent at Norfolk (consistently the low-profit branch) to 9 percent in Winchester. In 1835, the last year the bank reported separate branch statistics, net returns to capital at the Farmers Bank’s branches ranged from 2.9 to 11.7 percent, with an average of 7.9 percent.

The low profits at the Norfolk branch represented a net subsidy from the state’s banking sector to the political system, which was not immune to the same kind of infrastructure boosterism that erupted in New York, Pennsylvania, Maryland and elsewhere. In the immediate post-Revolutionary era, the value of exports shipped from Virginia’s ports (Norfolk and Alexandria) slightly exceeded the value shipped from Baltimore. In the 1790s the numbers turned sharply in Baltimore’s favor, and Virginia entered the internal-improvements craze and the battle for western shipments. Banks represented the first phase of the state’s internal-improvements plan, as many believed that Baltimore’s new-found advantage resulted from easier credit supplied by the city’s banks. If Norfolk, with one of the best natural harbors on the North American Atlantic coast, was to compete with other port cities, it needed banks, so the state required three of its Big Six banks to operate branches there. Despite its natural advantages, Norfolk never became an important entrepôt, and it probably had more bank capital than it required. This pattern was repeated elsewhere: other states required their branch banks to serve markets such as Memphis, Louisville, Natchez and Mobile that might, with the proper infrastructure, grow into important ports.

State Involvement and Intervention in Banking

The second distinguishing characteristic of southern and western banking was sweeping state involvement and intervention. Virginia, for example, interjected the state into the banking system by taking significant stakes in its first chartered banks (providing an implicit subsidy) and by requiring them, once they established themselves, to subsidize the state’s continuing internal-improvements programs of the 1820s and 1830s. Indiana followed a similar strategy, as did Kentucky, Louisiana, Mississippi, Illinois, Tennessee and Georgia in differing degrees. South Carolina followed a wholly different strategy. On one hand, it chartered several banks in which it took no financial interest. On the other, it chartered the Bank of the State of South Carolina, a bank wholly owned by the state and designed to lend to planters and farmers, who complained constantly that the state’s existing banks served only the urban mercantile community. The state-owned bank eventually divided its lending between merchants, farmers and artisans and dominated South Carolina’s financial sector.

The 1820s and 1830s witnessed a deluge of new banks in the South and West, with a corresponding increase in state involvement. No state matched Louisiana’s breadth of involvement in the 1830s, when it chartered three distinct types of banks: commercial banks that served merchants and manufacturers; improvement banks that financed various internal improvements projects; and property banks that extended long-term mortgage credit to planters and other property holders. Louisiana’s improvement banks included the New Orleans Canal and Banking Company, which built a canal connecting Lake Pontchartrain to the Mississippi River. The Exchange and Banking Company and the New Orleans Improvement and Banking Company were required to build and operate hotels. The New Orleans Gas Light and Banking Company constructed and operated gas streetlights in New Orleans and five other cities. Finally, the Carrollton Railroad and Banking Company and the Atchafalaya Railroad and Banking Company were rail construction companies whose banking operations subsidized railroad construction.

“Commonwealth Ideal” and Inflationary Banking

Louisiana’s 1830s banking exuberance reflected what some historians label the “commonwealth ideal” of banking; that is, the promotion of the general welfare through the promotion of banks. Legislatures in the South and West, however, never demonstrated a greater commitment to the commonwealth ideal than during the tough times of the early 1820s. With the collapse of the post-war land boom in 1819, a political coalition of debt-strapped landowners lobbied legislatures throughout the region for relief, and banking was its focus. Relief advocates lobbied for inflationary banking that would reduce the real burden of debts taken on during the prior flush times.

Several western states responded to these calls and chartered state-subsidized and state-managed banks designed to reinflate their embattled economies. Chartered in 1821, the Bank of the Commonwealth of Kentucky lent on mortgages for longer-than-customary periods, and all Kentucky landowners were eligible for $1,000 loans. The loans allowed landowners to discharge their existing debts without being forced to liquidate their property at ruinously low prices. Although the bank’s notes were not redeemable into specie, they were given currency in two ways. First, they were accepted at the state treasury in tax payments. Second, the state passed a law that forced creditors to accept the notes in payment of existing debts or agree to delay collection for two years.

The commonwealth ideal was not unique to Kentucky. During the depression of the 1820s, Tennessee chartered the State Bank of Tennessee, Illinois chartered the State Bank of Illinois and Louisiana chartered the Louisiana State Bank. Although they took slightly different forms, they all had the same intent; namely, to relieve distressed and embarrassed farmers, planters and landowners. What all these banks shared was the notion that the state should promote the general welfare and economic growth. In this instance, and again during the depression of the 1840s, state-owned banks were organized to minimize the transfer of property when economic conditions demanded wholesale liquidation. Such liquidation would have been inefficient and would have imposed unnecessary hardship on a large fraction of the population. To the extent that hastily chartered relief banks forestalled inefficient liquidation, they served their purpose. Although most of these banks eventually became insolvent, requiring taxpayer bailouts, we cannot label them unsuccessful. They reinflated economies and allowed for an orderly disposal of property. Determining whether the net benefits were positive or negative requires more research, but for the moment we must accept the possibility that the region’s state-owned banks of the 1820s and 1840s advanced the commonweal.

Conclusion: Banks and Economic Growth

Despite notable differences in the specific form and structure of each region’s banking system, all were aimed squarely at a common goal; namely, realizing the region’s economic potential. Banks helped achieve this goal in two ways. First, banks monetized economies, which reduced the costs of transacting and helped smooth consumption and production across time. It was no longer necessary for every farm family to inventory its entire harvest; a family could sell most of it and expend the proceeds on consumption goods as the need arose until the next harvest brought a new cash infusion. Crop and livestock inventories are prone to substantial losses, and an increased use of money reduced them significantly. Second, banks provided credit, which unleashed entrepreneurial spirits and talents. A complete appreciation of early American banking recognizes the banks’ contribution to antebellum America’s economic growth.

Bibliographic Essay

Given the large number of sources used to construct this essay, in-text citations would have cluttered the text; a brief bibliographic essay is provided instead. A full bibliography is included at the end.

Good general histories of antebellum banking include Dewey (1910), Fenstermaker (1965), Gouge (1833), Hammond (1957), Knox (1903), Redlich (1949), and Trescott (1963). If only one book is read on antebellum banking, Hammond’s (1957) Pulitzer-Prize winning book remains the best choice.

The literature on New England banking is not particularly large, and the more important historical interpretations of state-wide systems include Chadbourne (1936), Hasse (1946, 1957), Simonton (1971), Spencer (1949), and Stokes (1902). Gras (1937) does an excellent job of placing the history of a single bank within the larger regional and national context. In a recent book and a number of articles Lamoreaux (1994 and sources therein) provides a compelling and eminently readable reinterpretation of the region’s banking structure. Nathan Appleton (1831, 1856) provides a contemporary observer’s interpretation, while Walker (1857) provides an entertaining if perverse and satirical history of a fictional New England bank. Martin (1969) provides details of bank share prices and dividend payments from the establishment of the first banks in Boston through the end of the nineteenth century. Less technical studies of the Suffolk system include Lake (1947), Trivoli (1979) and Whitney (1878); more technical interpretations include Calomiris and Kahn (1996), Mullineaux (1987), and Rolnick, Smith and Weber (1998).

The literature on Middle Atlantic banking is huge, but the better state-level histories include Bryan (1899), Daniels (1976), and Holdsworth (1928). The better studies of individual banks include Adams (1978), Lewis (1882), Nevins (1934), and Wainwright (1953). Chaddock (1910) provides a general history of the Safety Fund system. Golembe (1960) places it in the context of modern deposit insurance, while Bodenhorn (1996) and Calomiris (1989) provide modern analyses. A recent revival of interest in free banking has brought about a veritable explosion in the number of studies on the subject, but the better introductory ones remain Rockoff (1974, 1985), Rolnick and Weber (1982, 1983), and Dwyer (1996).

The literature on southern and western banking is large and of highly variable quality, but I have found the following to be the most readable and useful general sources: Caldwell (1935), Duke (1895), Esary (1912), Golembe (1978), Huntington (1915), Green (1972), Lesesne (1970), Royalty (1979), Schweikart (1987) and Starnes (1931).

References and Further Reading

Adams, Donald R., Jr. Finance and Enterprise in Early America: A Study of Stephen Girard’s Bank, 1812-1831. Philadelphia: University of Pennsylvania Press, 1978.

Alter, George, Claudia Goldin and Elyce Rotella. “The Savings of Ordinary Americans: The Philadelphia Saving Fund Society in the Mid-Nineteenth-Century.” Journal of Economic History 54, no. 4 (December 1994): 735-67.

Appleton, Nathan. A Defence of Country Banks: Being a Reply to a Pamphlet Entitled ‘An Examination of the Banking System of Massachusetts, in Reference to the Renewal of the Bank Charters.’ Boston: Stimpson & Clapp, 1831.

Appleton, Nathan. Bank Bills or Paper Currency and the Banking System of Massachusetts with Remarks on Present High Prices. Boston: Little, Brown and Company, 1856.

Berry, Thomas Senior. Revised Annual Estimates of American Gross National Product: Preliminary Estimates of Four Major Components of Demand, 1789-1889. Richmond: University of Richmond Bostwick Paper No. 3, 1978.

Bodenhorn, Howard. “Zombie Banks and the Demise of New York’s Safety Fund.” Eastern Economic Journal 22, no. 1 (1996): 21-34.

Bodenhorn, Howard. “Private Banking in Antebellum Virginia: Thomas Branch & Sons of Petersburg.” Business History Review 71, no. 4 (1997): 513-42.

Bodenhorn, Howard. A History of Banking in Antebellum America: Financial Markets and Economic Development in an Era of Nation-Building. Cambridge and New York: Cambridge University Press, 2000.

Bodenhorn, Howard. State Banking in Early America: A New Economic History. New York: Oxford University Press, 2002.

Bryan, Alfred C. A History of State Banking in Maryland. Baltimore: Johns Hopkins University Press, 1899.

Caldwell, Stephen A. A Banking History of Louisiana. Baton Rouge: Louisiana State University Press, 1935.

Calomiris, Charles W. “Deposit Insurance: Lessons from the Record.” Federal Reserve Bank of Chicago Economic Perspectives 13 (1989): 10-30.

Calomiris, Charles W., and Charles Kahn. “The Efficiency of Self-Regulated Payments Systems: Learnings from the Suffolk System.” Journal of Money, Credit, and Banking 28, no. 4 (1996): 766-97.

Chadbourne, Walter W. A History of Banking in Maine, 1799-1930. Orono: University of Maine Press, 1936.

Chaddock, Robert E. The Safety Fund Banking System in New York, 1829-1866. Washington, D.C.: Government Printing Office, 1910.

Daniels, Belden L. Pennsylvania: Birthplace of Banking in America. Harrisburg: Pennsylvania Bankers Association, 1976.

Davis, Lance, and Robert E. Gallman. “Capital Formation in the United States during the Nineteenth Century.” In Cambridge Economic History of Europe (Vol. 7, Part 2), edited by Peter Mathias and M.M. Postan, 1-69. Cambridge: Cambridge University Press, 1978.

Davis, Lance, and Robert E. Gallman. “Savings, Investment, and Economic Growth: The United States in the Nineteenth Century.” In Capitalism in Context: Essays on Economic Development and Cultural Change in Honor of R.M. Hartwell, edited by John A. James and Mark Thomas, 202-29. Chicago: University of Chicago Press, 1994.

Dewey, Davis R. State Banking before the Civil War. Washington, D.C.: Government Printing Office, 1910.

Duke, Basil W. History of the Bank of Kentucky, 1792-1895. Louisville: J.P. Morton, 1895.

Dwyer, Gerald P., Jr. “Wildcat Banking, Banking Panics, and Free Banking in the United States.” Federal Reserve Bank of Atlanta Economic Review 81, no. 3 (1996): 1-20.

Engerman, Stanley L., and Robert E. Gallman. “U.S. Economic Growth, 1783-1860.” Research in Economic History 8 (1983): 1-46.

Esary, Logan. State Banking in Indiana, 1814-1873. Indiana University Studies No. 15. Bloomington: Indiana University Press, 1912.

Fenstermaker, J. Van. The Development of American Commercial Banking, 1782-1837. Kent, Ohio: Kent State University, 1965.

Fenstermaker, J. Van, and John E. Filer. “Impact of the First and Second Banks of the United States and the Suffolk System on New England Bank Money, 1791-1837.” Journal of Money, Credit, and Banking 18, no. 1 (1986): 28-40.

Friedman, Milton, and Anna J. Schwartz. “Has the Government Any Role in Money?” Journal of Monetary Economics 17, no. 1 (1986): 37-62.

Gallman, Robert E. “American Economic Growth before the Civil War: The Testimony of the Capital Stock Estimates.” In American Economic Growth and Standards of Living before the Civil War, edited by Robert E. Gallman and John Joseph Wallis, 79-115. Chicago: University of Chicago Press, 1992.

Goldsmith, Raymond. Financial Structure and Development. New Haven: Yale University Press, 1969.

Golembe, Carter H. “The Deposit Insurance Legislation of 1933: An Examination of its Antecedents and Purposes.” Political Science Quarterly 76, no. 2 (1960): 181-200.

Golembe, Carter H. State Banks and the Economic Development of the West. New York: Arno Press, 1978.

Gouge, William M. A Short History of Paper Money and Banking in the United States. Philadelphia: T.W. Ustick, 1833.

Gras, N.S.B. The Massachusetts First National Bank of Boston, 1784-1934. Cambridge, MA: Harvard University Press, 1937.

Green, George D. Finance and Economic Development in the Old South: Louisiana Banking, 1804-1861. Stanford: Stanford University Press, 1972.

Hammond, Bray. Banks and Politics in America from the Revolution to the Civil War. Princeton: Princeton University Press, 1957.

Hasse, William F., Jr. A History of Banking in New Haven, Connecticut. New Haven: privately printed, 1946.

Hasse, William F., Jr. A History of Money and Banking in Connecticut. New Haven: privately printed, 1957.

Holdsworth, John Thom. Financing an Empire: History of Banking in Pennsylvania. Chicago: S.J. Clarke Publishing Company, 1928.

Huntington, Charles Clifford. A History of Banking and Currency in Ohio before the Civil War. Columbus: F. J. Herr Printing Company, 1915.

Knox, John Jay. A History of Banking in the United States. New York: Bradford Rhodes & Company, 1903.

Kuznets, Simon. “Foreword.” In Financial Intermediaries in the American Economy, by Raymond W. Goldsmith. Princeton: Princeton University Press, 1958.

Lake, Wilfred. “The End of the Suffolk System.” Journal of Economic History 7, no. 4 (1947): 183-207.

Lamoreaux, Naomi R. Insider Lending: Banks, Personal Connections, and Economic Development in Industrial New England. Cambridge: Cambridge University Press, 1994.

Lesesne, J. Mauldin. The Bank of the State of South Carolina. Columbia: University of South Carolina Press, 1970.

Lewis, Lawrence, Jr. A History of the Bank of North America: The First Bank Chartered in the United States. Philadelphia: J.B. Lippincott & Company, 1882.

Lockard, Paul A. Banks, Insider Lending and Industries of the Connecticut River Valley of Massachusetts, 1813-1860. Unpublished Ph.D. thesis, University of Massachusetts, 2000.

Martin, Joseph G. A Century of Finance. New York: Greenwood Press, 1969.

Moulton, H.G. “Commercial Banking and Capital Formation.” Journal of Political Economy 26 (1918): 484-508, 638-63, 705-31, 849-81.

Mullineaux, Donald J. “Competitive Monies and the Suffolk Banking System: A Contractual Perspective.” Southern Economic Journal 53 (1987): 884-98.

Nevins, Allan. History of the Bank of New York and Trust Company, 1784 to 1934. New York: privately printed, 1934.

New York. Bank Commissioners. “Annual Report of the Bank Commissioners.” New York General Assembly Document No. 74. Albany, 1835.

North, Douglass. “Institutional Change in American Economic History.” In American Economic Development in Historical Perspective, edited by Thomas Weiss and Donald Schaefer, 87-98. Stanford: Stanford University Press, 1994.

Rappaport, George David. Stability and Change in Revolutionary Pennsylvania: Banking, Politics, and Social Structure. University Park, PA: The Pennsylvania State University Press, 1996.

Redlich, Fritz. The Molding of American Banking: Men and Ideas. New York: Hafner Publishing Company, 1947.

Rockoff, Hugh. “The Free Banking Era: A Reexamination.” Journal of Money, Credit, and Banking 6, no. 2 (1974): 141-67.

Rockoff, Hugh. “New Evidence on the Free Banking Era in the United States.” American Economic Review 75, no. 4 (1985): 886-89.

Rolnick, Arthur J., and Warren E. Weber. “Free Banking, Wildcat Banking, and Shinplasters.” Federal Reserve Bank of Minneapolis Quarterly Review 6 (1982): 10-19.

Rolnick, Arthur J., and Warren E. Weber. “New Evidence on the Free Banking Era.” American Economic Review 73, no. 5 (1983): 1080-91.

Rolnick, Arthur J., Bruce D. Smith, and Warren E. Weber. “Lessons from a Laissez-Faire Payments System: The Suffolk Banking System (1825-58).” Federal Reserve Bank of Minneapolis Quarterly Review 22, no. 3 (1998): 11-21.

Royalty, Dale. “Banking and the Commonwealth Ideal in Kentucky, 1806-1822.” Register of the Kentucky Historical Society 77 (1979): 91-107.

Schumpeter, Joseph A. The Theory of Economic Development: An Inquiry into Profit, Capital, Credit, Interest, and the Business Cycle. Cambridge, MA: Harvard University Press, 1934.

Schweikart, Larry. Banking in the American South from the Age of Jackson to Reconstruction. Baton Rouge: Louisiana State University Press, 1987.

Simonton, William G. Maine and the Panic of 1837. Unpublished master’s thesis, University of Maine, 1971.

Sokoloff, Kenneth L. “Productivity Growth in Manufacturing during Early Industrialization.” In Long-Term Factors in American Economic Growth, edited by Stanley L. Engerman and Robert E. Gallman. Chicago: University of Chicago Press, 1986.

Sokoloff, Kenneth L. “Invention, Innovation, and Manufacturing Productivity Growth in the Antebellum Northeast.” In American Economic Growth and Standards of Living before the Civil War, edited by Robert E. Gallman and John Joseph Wallis, 345-78. Chicago: University of Chicago Press, 1992.

Spencer, Charles, Jr. The First Bank of Boston, 1784-1949. New York: Newcomen Society, 1949.

Starnes, George T. Sixty Years of Branch Banking in Virginia. New York: Macmillan Company, 1931.

Stokes, Howard Kemble. Chartered Banking in Rhode Island, 1791-1900. Providence: Preston & Rounds Company, 1902.

Sylla, Richard. “Forgotten Men of Money: Private Bankers in Early U.S. History.” Journal of Economic History 36, no. 2 (1976).

Temin, Peter. The Jacksonian Economy. New York: W. W. Norton & Company, 1969.

Trescott, Paul B. Financing American Enterprise: The Story of Commercial Banking. New York: Harper & Row, 1963.

Trivoli, George. The Suffolk Bank: A Study of a Free-Enterprise Clearing System. London: The Adam Smith Institute, 1979.

U.S. Comptroller of the Currency. Annual Report of the Comptroller of the Currency. Washington, D.C.: Government Printing Office, 1931.

Wainwright, Nicholas B. History of the Philadelphia National Bank. Philadelphia: William F. Fell Company, 1953.

Walker, Amasa. History of the Wickaboag Bank. Boston: Crosby, Nichols & Company, 1857.

Wallis, John Joseph. “What Caused the Panic of 1839?” Unpublished working paper, University of Maryland, October 2000.

Weiss, Thomas. “U.S. Labor Force Estimates and Economic Growth, 1800-1860.” In American Economic Growth and Standards of Living before the Civil War, edited by Robert E. Gallman and John Joseph Wallis, 19-75. Chicago: University of Chicago Press, 1992.

Whitney, David R. The Suffolk Bank. Cambridge, MA: Riverside Press, 1878.

Wright, Robert E. “Artisans, Banks, Credit, and the Election of 1800.” The Pennsylvania Magazine of History and Biography 122, no. 3 (July 1998), 211-239.

Wright, Robert E. “Bank Ownership and Lending Patterns in New York and Pennsylvania, 1781-1831.” Business History Review 73, no. 1 (Spring 1999), 40-60.

[1] Banknotes were small-denomination IOUs printed by banks that circulated as currency. Modern U.S. currency consists of banknotes issued by the Federal Reserve, which holds a monopoly privilege in the issue of legal tender currency. In antebellum America, when a bank made a loan, the borrower was typically handed banknotes with a face value equal to the dollar value of the loan. The borrower then spent these banknotes on goods and services, putting them into circulation. Contemporary law held that banks were required to redeem banknotes into gold and silver legal tender on demand. Banks found it profitable to issue notes because they typically held only about 30 percent of the total value of banknotes in circulation as reserves. Thus, banks were able to leverage $30 in gold and silver into $100 in loans that returned about 7 percent interest on average.
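The note’s arithmetic implies a substantial return on the specie a bank actually held. A minimal sketch using the 30 percent reserve ratio and 7 percent loan rate given above (operating costs and redemption risk are ignored):

```python
reserves = 30.0    # specie held against notes ($), per the note's 30 percent ratio
notes = 100.0      # banknotes issued and lent out ($)
loan_rate = 0.07   # average interest earned on loans, per the note

interest = notes * loan_rate            # $7.00 of interest per year
return_on_specie = interest / reserves  # return earned on the specie actually held
print(f"{return_on_specie:.1%}")        # 23.3%
```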

[2] Paul Lockard (2000) challenges Lamoreaux’s interpretation. In a study of four banks in the Connecticut River valley, Lockard finds that insiders did not dominate these banks’ resources. As provocative as Lockard’s findings are, he draws conclusions from a small and unrepresentative sample. Two of his four sample banks were savings banks, quasi-charitable organizations designed to encourage saving by the working classes and to provide small loans. Thus, Lockard’s sample is effectively reduced to two banks. At these two banks, he identifies about 10 percent of loans as insider loans, but readily admits that he cannot always distinguish between insiders and outsiders. For a recent study of how early Americans used savings banks, see Alter, Goldin and Rotella (1994). The literature on savings banks is so large that it cannot be given its due here.

[3] Interbank clearing involves the settling of balances between banks. Modern banks cash checks drawn on other banks and credit the funds to the depositor. The Federal Reserve System provides clearing services between banks: the accepting bank sends the checks to the Federal Reserve, which credits the sending bank’s account and sends the checks back to the bank on which they were drawn for reimbursement. In the antebellum era, interbank clearing involved sending banknotes back to issuing banks. Because New England had so many small and scattered banks, the costs of returning banknotes to their issuers were large and were sometimes avoided by recirculating notes of distant banks rather than returning them. Regular clearings and redemptions served an important purpose, however, because they kept banks in touch with current market conditions. A massive redemption of notes was indicative of a declining demand for money and credit. Because a bank’s reserves were drawn down with the redemptions, it was forced to reduce its volume of loans in accord with changing demand conditions.
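The clearing described here is essentially bilateral netting, which economizes on specie shipments. A toy illustration with hypothetical holdings:

```python
# Hypothetical holdings: Bank A holds $5,000 of Bank B's notes,
# while Bank B holds $3,200 of Bank A's notes.
a_holds_of_b = 5_000
b_holds_of_a = 3_200

# Under regular bilateral clearing, only the net balance moves in specie.
net = a_holds_of_b - b_holds_of_a
payer, payee = ("B", "A") if net > 0 else ("A", "B")
print(f"Bank {payer} ships ${abs(net):,} in specie to Bank {payee}")
# -> Bank B ships $1,800 in specie to Bank A, not $8,200 moving both ways
```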

[4] The law held that banknotes were redeemable on demand into gold or silver coin or bullion. If a bank refused to redeem even a single $1 banknote, the banknote holder could have the bank closed and liquidated to recover his or her claim against it.

[5] Rappaport (1996) found that the bank’s loans were about equally divided between insiders (shareholders and shareholders’ family and business associates) and outsiders, but nonshareholders received loans about 30 percent smaller than shareholders. Whether this bank was an “insider” bank remains an open question, and the answer depends largely on one’s definition. Any modern bank that made half of its loans to shareholders and their families would be viewed as an “insider” bank; it is less clear where the line can usefully be drawn for antebellum banks.

[6] Real-bills lending followed from a nineteenth-century banking philosophy which held that bank lending should finance the warehousing or wholesaling of already-produced goods. Loans made on this basis were thought to be self-liquidating, in that the loan was made against readily sold collateral actually in the hands of a merchant. Under the real-bills doctrine, the bank’s proper function was to bridge the gap between production and retail sale of goods. Strict adherence to real-bills tenets excluded loans on property (mortgages), loans on goods in process (trade credit), and loans to start-up firms (venture capital). Thus, real-bills lending prescribed a limited role for banks and bank credit. Few banks were strict adherents to the doctrine, but many followed it in large part.

[7] Robert E. Wright (1998) offers a different interpretation, but notes that Burr pushed the bill through at the end of a busy legislative session, so that many legislators voted on it without having read it thoroughly or at all.

Citation: Bodenhorn, Howard. “Antebellum Banking in the United States”. EH.Net Encyclopedia, edited by Robert Whaples. March 26, 2008. URL http://eh.net/encyclopedia/antebellum-banking-in-the-united-states/


An Overview of the Economic History of Uruguay since the 1870s

Luis Bértola, Universidad de la República — Uruguay

Uruguay’s Early History

Without silver or gold, without valuable spices, and sparsely peopled by gatherers and fishers, the Eastern Strand of the Uruguay River (Banda Oriental was the colonial name; República Oriental del Uruguay is the official name today) was, in the sixteenth and seventeenth centuries, distant and unattractive to the European nations that conquered the region. The major export product was the leather of the wild descendants of cattle introduced in the early 1600s by the Spaniards. As cattle preceded humans, the state preceded society: Uruguay’s first settlement was Colonia del Sacramento, a Portuguese military fortress founded in 1680 and placed precisely across from Buenos Aires, Argentina. Montevideo, also a fortress, was founded by the Spaniards in 1724. Uruguay sat on the border between the Spanish and Portuguese empires, a feature that would be decisive for the creation, with strong British involvement, of an independent state in 1828-1830.

Montevideo had the best natural harbor in the region and rapidly became the endpoint of the trans-Atlantic routes into the region, the base for a strong commercial elite, and the station for the Spanish navy in the region. During the first decades after independence, however, Uruguay was plagued by political instability, precarious institution-building and economic retardation. Recurrent civil wars, with intensive involvement by Britain, France, Portugal-Brazil and Argentina, made Uruguay a center of international conflict, the most important episode being the Great War (Guerra Grande), which lasted from 1839 to 1851. At its end Uruguay had only about 130,000 inhabitants.

“Around the middle of the nineteenth century, Uruguay was dominated by the latifundium, with its ill-defined boundaries and enormous herds of native cattle, from which only the hides were exported to Great Britain and part of the meat, as jerky, to Brazil and Cuba. There was a shifting rural population that worked on the large estates and lived largely on the parts of beef carcasses that could not be marketed abroad. Often the landowners were also the caudillos of the Blanco or Colorado political parties, the protagonists of civil wars that a weak government was unable to prevent” (Barrán and Nahum, 1984, 655). This picture still holds, even if it has been excessively stylized, neglecting the importance of subsistence or domestic-market oriented peasant production.

Economic Performance in the Long Run

Despite its precarious beginnings, Uruguay’s per capita gross domestic product (GDP) growth from 1870 to 2002 shows remarkable persistence, with the long-run rate averaging around one percent per year. However, this apparent stability hides some important shifts. As shown in Figure 1, both GDP and population grew much faster before the 1930s; from 1930 to 1960 immigration vanished and population grew much more slowly, while decades of GDP stagnation and fast growth alternated; after the 1960s Uruguay became a net-emigration country, with low natural growth rates and still-spasmodic GDP growth.

GDP growth shows a pattern of Kuznets-like swings (Bértola and Lorenzo 2004), with extremely destructive downward phases, as shown in Table 1. This cyclical pattern is correlated with movements in the terms of trade (the relative price of exports versus imports), world demand and international capital flows. In the expansive phases exports performed well, due to increased demand and/or positive terms-of-trade shocks (the 1880s, 1900s, 1920s, 1940s and even the Mercosur years from 1991 to 1998). Capital flows would sometimes follow these booms and prolong the cycle, or even be a decisive force setting the cycle in motion, as financial flows were in the 1970s and 1990s. The usual outcome, however, was an overvalued currency, which blurred the debt problem and threatened the balance of trade by overpricing exports. Crises have been the result of a combination of changing trade conditions, devaluation and over-indebtedness, as in the 1880s, early 1910s, late 1920s, 1950s, early 1980s and late 1990s.

Figure 1: Population and per capita GDP of Uruguay, 1870-2002 (1913=100)

Table 1: Swings in the Uruguayan Economy, 1870-2003

Crisis years  Per capita GDP fall (%)  Length of recession (years)  Time to pre-crisis levels (years)  Time to next crisis (years)
1872-1875 26 3 15 16
1888-1890 21 2 19 25
1912-1915 30 3 15 19
1930-1933 36 3 17 24-27
1954/57-59 9 2-5 18-21 27-24
1981-1984 17 3 11 17
1998-2003 21 5

Sources: See Figure 1.

Besides its cyclical movement, the terms of trade showed a sharp positive trend in 1870-1913, a strongly fluctuating pattern around similar levels in 1913-1960 and a deteriorating trend since then. While the volume of exports grew quickly up to the 1920s, it stagnated in 1930-1960 and started to grow again after 1970. As a result, the purchasing power of exports grew fourfold in 1870-1913, fluctuated along with the terms of trade in 1930-1960, and exhibited a moderate growth in 1970-2002.
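
The reasoning in this paragraph rests on a standard identity (stated here for clarity, not drawn from the sources above): the purchasing power of exports, sometimes called the income terms of trade, is the product of the terms of trade and the volume of exports,

\[ PPX_t \;=\; \frac{P^{X}_{t}}{P^{M}_{t}} \times Q^{X}_{t}, \]

so the fourfold rise of 1870-1913 combines the sharp improvement in relative export prices with rapidly growing export volumes, while the 1930-1960 stagnation of volumes left purchasing power to track the terms of trade alone.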

The Uruguayan economy was very open to trade in the period up to 1913, featuring high export shares, which naturally declined as the rapidly growing population filled in rather empty areas. In 1930-1960 the economy was increasingly and markedly closed to international trade, but since the 1970s the economy opened up to trade again. Nevertheless, exports, which earlier were mainly directed to Europe (beef, wool, leather, linseed, etc.), were increasingly oriented to Argentina and Brazil, in the context of bilateral trade agreements in the 1970s and 1980s and of Mercosur (the trading zone encompassing Argentina, Brazil, Paraguay and Uruguay) in the 1990s.

Industrial output kept pace with agrarian export-led growth during the first globalization boom before World War I; the industrial share in GDP then increased in 1930-54, with production mainly oriented to the domestic market. Deindustrialization has been profound since the mid-1980s. The service sector was always large: centered on commerce, transport and the traditional state bureaucracy during the first globalization boom; on health care, education and social services during the import-substituting industrialization (ISI) period in the middle of the twentieth century; and on military expenditure, tourism and finance since the 1970s.

The income distribution changed markedly over time. During the first globalization boom before World War I, an already uneven distribution of income and wealth seems to have worsened, due to massive immigration and increasing demand for land, both rural and urban. By the 1920s, however, the relative prices of land and labor reversed their previous trend, reducing income inequality. The reduction was later reinforced by industrialization policies, democratization, the introduction of wage councils, and the expansion of a welfare state based on an egalitarian ideology. Inequality diminished in many respects: between sectors, within sectors, between genders and between workers and pensioners. The military dictatorship and the liberal economic policy implemented from the 1970s drastically reversed this trend toward economic equality, and the globalizing reforms of the 1980s and 1990s under democratic rule did not increase equality. Thus, inequality remains at the higher levels reached during the period of dictatorship (1973-85).

Comparative Long-run Performance

If the stable long-run rate of Uruguayan per capita GDP growth hides important internal transformations, Uruguay's changing position in the international scene is even more remarkable. During the first globalization boom the world became more unequal: the United States forged ahead as the world leader (closely followed by other settler economies), while Asia and Africa lagged far behind. Latin America presented a mixed picture, in which countries such as Argentina and Uruguay performed rather well while others, such as the Andean region, lagged far behind (Bértola and Williamson 2003). Uruguay's strong initial position tended to deteriorate relative to the successful core countries during the late 1800s, as shown in Figure 2. This relative decline was comparatively mild during the first half of the twentieth century, deepened significantly during the 1960s as the import-substituting industrialization model became exhausted, and has continued since the 1970s, despite policies favoring increased integration into the global economy.

Figure 2: Per capita GDP of Uruguay relative to four core countries, 1870-2002

If school enrollment and literacy rates are reasonable proxies for human capital, then in the late 1800s both Argentina and Uruguay suffered a great handicap relative to the United States, as shown in Table 2. The gap in literacy rates tended to disappear, and with it this proxy's ability to measure comparative levels of human capital. School enrollment, which here includes college-level and technical education, showed a catching-up trend until the 1960s but reversed afterwards.

The gap in life expectancy at birth has always been much smaller than the gaps in the other development indicators. Nevertheless, some trends are noticeable: the gap increased in 1900-1930, decreased in 1930-1950, and increased again after the 1970s.

Table 2: Uruguayan Performance in Comparative Perspective, 1870-2000 (US = 100)

                 1870  1880  1890  1900  1910  1920  1930  1940  1950  1960  1970  1980  1990  2000

GDP per capita
Uruguay           101    65    63    27    32    27    33    27    26    24    19    18    15    16
Argentina                      63    34    38    31    32    29    25    25    24    21    15    16
Brazil                         23     8     8     8     8     8     7     9     9    13    11    10
Latin America                                    13    12    13    10     9     9     9     6     6
USA               100   100   100   100   100   100   100   100   100   100   100   100   100   100

Literacy rates
Uruguay                              57    65    72    79    85    91    92    94    95    97    99
Argentina                            57    65    72    79    85    91    93    94    94    96    98
Brazil                               39    38    37    42    46    51    61    69    76    81    86
Latin America                        28    30    34    37    42    47    56    65    71    77    83
USA                                 100   100   100   100   100   100   100   100   100   100   100

School enrollment
Uruguay                                          23    31    31    30    34    42    52    46    43
Argentina                                        28    41    42    36    39    43    55    44    45
Brazil                                                 12    11    12    14    18    22    30    42
Latin America
USA                                             100   100   100   100   100   100   100   100   100

Life expectancy at birth
Uruguay                             102   100    91    85    91    97    97    97    95    96    96
Argentina                            81    85    86    90    88    90    93    94    95    96    95
Brazil                               60    60    56    58    58    63    79    83    85    88    88
Latin America                        65    63    58    58    59    63    71    77    81    88    87
USA                                 100   100   100   100   100   100   100   100   100   100   100

Sources: Per capita GDP: Maddison (2001) and Astorga, Bergés and FitzGerald (2003). Literacy rates and life expectancy: Astorga, Bergés and FitzGerald (2003). School enrollment: Bértola and Bertoni (1998).

Uruguay during the First Globalization Boom: Challenge and Response

During the reconstruction that followed the Guerra Grande, after 1851, the Uruguayan population grew rapidly (fueled by high natural growth rates and immigration) and so did per capita output. Productivity grew for several reasons: the steamship revolution, which critically reduced the price spread between Europe and America and eased access to the European market; railways, which contributed to the unification of domestic markets and reduced domestic transport costs; the diffusion and adaptation to local conditions of innovations in cattle-breeding and services; and a significant reduction in transaction costs, related to a fluctuating but noticeable process of institution building and the strengthening of the coercive power of the state.

Wool and woolen products, hides and leather were exported mainly to Europe; salted beef (tasajo) went to Brazil and Cuba. Livestock breeding (both cattle and sheep) was intensive in natural resources and dominated by large estates. By the 1880s, the agrarian frontier was exhausted, landholdings were fenced and property rights strengthened. Labor became abundant and concentrated in urban areas, especially around Montevideo's harbor, which played an important role as a regional (supranational) commercial center. By 1908 Montevideo contained 40 percent of a national population that had risen to more than a million inhabitants, and it supplied most of Uruguay's services, its civil servants, and its weak, handicraft-dominated manufacturing sector.

By the 1910s, Uruguayan competitiveness had started to weaken. As the benefits of the old technological paradigm eroded, the new one proved not particularly favorable to resource-intensive countries such as Uruguay. International demand shifted away from primary products, the population of Europe grew slowly, and European countries strove for self-sufficiency in primary production in a context of soaring world supply. Beginning in the 1920s, the cattle-breeding sector performed very poorly, owing to a lack of innovation beyond natural pastures. In the 1930s its performance deteriorated further, mainly due to unfavorable international conditions. Export volumes stagnated until the 1970s, while their purchasing power fluctuated strongly with the terms of trade.

Inward-looking Growth and Structural Change

The Uruguayan economy grew inward-looking until the 1950s, with a multiple exchange rate system as the main economic policy tool. Agrarian production was re-oriented away from beef towards wool, crops, dairy products and other industrial inputs. The manufacturing industry grew rapidly and diversified significantly with the help of protectionist tariffs, but it remained light, lacking capital goods and technology-intensive sectors. Productivity growth hinged upon technology transfers embodied in imported capital goods and an intensive domestic process of adapting mature technologies. Domestic demand also grew through an expanding public sector and a growing corporate welfare state. The terms of trade substantially shaped protectionism, productivity growth and domestic demand: the government raised money by manipulating exchange rates, so that when export prices rose the state had a greater capacity to protect the manufacturing sector through low exchange rates for imports of capital goods, raw materials and fuel, and to spur productivity increases through capital imports, while protection allowed industry to pay higher wages and thus expand domestic demand.

However, rent-seeking industries searching for protection, and a weak clientelist state crowded with civil servants recruited in exchange for political favors to the parties, steered structural change towards a closed economy and inefficient management. The obvious limits to inward-looking growth in a country of only about two million inhabitants were exacerbated in the late 1950s as the terms of trade deteriorated. The clientelist political system, created by both traditional parties as the state expanded at the national and local levels, was now unable to absorb increasing social conflicts, colored by sharp ideological confrontation, in a context of stagnation and huge fiscal deficits.

Re-globalization and Regional Integration

The dictatorship (1973-1985) started a period of increasing openness to trade and deregulation which has persisted until the present. Dynamic integration into the world market is still incomplete, however. An attempt to return to cattle-breeding exports as the engine of growth was hindered by the oil crises and the ensuing European response, which restricted meat exports to that destination. The export sector was re-oriented towards “non-traditional exports,” that is, exports of industrial goods made of traditional raw materials to which low-quality, low-wage labor was added. Exports were also stimulated by strong fiscal exemptions and negative real interest rates and were redirected to the regional market (Argentina and Brazil) and to other developing regions. At the end of the 1970s, this policy was replaced by the monetarist approach to the balance of payments. The main goal was to defeat inflation (which had remained above 50 percent since the 1960s) through deregulation of foreign trade and a pre-announced exchange rate, the “tablita.” A strong wave of capital inflows produced a transitory success, but the Uruguayan peso became more and more overvalued, limiting exports, encouraging imports and deepening the chronic balance-of-trade deficit. The “tablita” depended on ever-increasing capital inflows and collapsed when the risk of a huge devaluation became real. Recession and the debt crisis dominated the scene of the early 1980s.

Democratic regimes since 1985 have combined natural-resource-intensive exports to the region and other emerging markets with modest intra-industry trade, mainly with Argentina. In the 1990s, once again, Uruguay was overexposed to financial capital inflows, which fueled a rather volatile growth period. By the year 2000, however, Uruguay stood in a much worse position relative to the leaders of the world economy, as measured by per capita GDP, real wages, equity and education coverage, than it had fifty years earlier.

Medium-run Prospects

In the 1990s Mercosur as a whole, and each of its member countries, exhibited a strong trade deficit with non-Mercosur countries. This was the result of a growth pattern fueled by, and highly dependent on, foreign capital inflows, combined with the traditional specialization in commodities. The whole Mercosur project is still mainly oriented toward price competitiveness. Nevertheless, the strongly divergent macroeconomic policies within Mercosur during the deep Argentine and Uruguayan crises at the beginning of the twenty-first century seem to have given way to increased coordination between Argentina and Brazil, making the region a more stable environment.

The big question is whether the ongoing political revival of Mercosur will be able to achieve convergent macroeconomic policies, success in international trade negotiations, and, above all, achievements in developing productive networks which may allow Mercosur to compete outside its home market with knowledge-intensive goods and services. Over that hangs Uruguay’s chance to break away from its long-run divergent siesta.

References

Astorga, Pablo, Ame R. Bergés and Valpy FitzGerald. “The Standard of Living in Latin America during the Twentieth Century.” University of Oxford Discussion Papers in Economic and Social History 54 (2004).

Barrán, José P. and Benjamín Nahum. “Uruguayan Rural History.” Hispanic American Historical Review (1984).

Bértola, Luis. The Manufacturing Industry of Uruguay, 1913-1961: A Sectoral Approach to Growth, Fluctuations and Crisis. Publications of the Department of Economic History, University of Göteborg, 61; Institute of Latin American Studies of Stockholm University, Monograph No. 20, 1990.

Bértola, Luis and Reto Bertoni. “Educación y aprendizaje en escenarios de convergencia y divergencia.” Documento de Trabajo, no. 46, Unidad Multidisciplinaria, Facultad de Ciencias Sociales, Universidad de la República, 1998.

Bértola, Luis and Fernando Lorenzo. “Witches in the South: Kuznets-like Swings in Argentina, Brazil and Uruguay since the 1870s.” In The Experience of Economic Growth, edited by J.L. van Zanden and S. Heikenen. Amsterdam: Aksant, 2004.

Bértola, Luis and Gabriel Porcile. “Argentina, Brasil, Uruguay y la Economía Mundial: una aproximación a diferentes regímenes de convergencia y divergencia.” In Ensayos de Historia Económica: Uruguay en la región y el mundo, by Luis Bértola. Montevideo, 2000.

Bértola, Luis and Jeffrey Williamson. “Globalization in Latin America before 1940.” National Bureau of Economic Research Working Paper, no. 9687 (2003).

Bértola, Luis and others. El PBI uruguayo 1870-1936 y otras estimaciones. Montevideo, 1998.

Maddison, A. Monitoring the World Economy, 1820-1992. Paris: OECD, 1995.

Maddison, A. The World Economy: A Millennial Perspective. Paris: OECD, 2001.

The Economics of the American Revolutionary War

Ben Baack, Ohio State University

By the time of the onset of the American Revolution, Britain had attained the status of a military and economic superpower. The thirteen American colonies were one part of a global empire generated by the British in a series of colonial wars beginning in the late seventeenth century and continuing into the mid-eighteenth century. The British military establishment increased relentlessly in size during this period as it engaged in the Nine Years War (1688-97), the War of Spanish Succession (1702-13), the War of Austrian Succession (1740-48), and the Seven Years War (1756-63). These wars brought considerable additions to the British Empire. In North America alone the British victory in the Seven Years War resulted in France ceding to Britain all of its territory east of the Mississippi River as well as all of Canada, and in Spain surrendering its claim to Florida (Nester, 2000).

Given the sheer magnitude of the British military and its empire, the actions taken by the American colonists for independence have long fascinated scholars. Why did the colonists want independence? How were they able to achieve a victory over what was at the time the world’s preeminent military power? What were the consequences of achieving independence? These and many other questions have engaged the attention of economic, legal, military, political, and social historians. In this brief essay we will focus only on the economics of the Revolutionary War.

Economic Causes of the Revolutionary War

Prior to the conclusion of the Seven Years War there was little, if any, reason to believe that one day the American colonies would undertake a revolution in an effort to create an independent nation-state. As a part of the empire the colonies were protected from foreign invasion by the British military. In return, the colonists paid relatively few taxes and could engage in domestic economic activity without much interference from the British government. For the most part the colonists were only asked to adhere to regulations concerning foreign trade. In a series of acts passed by Parliament during the seventeenth century, the Navigation Acts required that all trade within the empire be conducted on ships constructed, owned and largely manned by British citizens. Certain enumerated goods, whether exported or imported by the colonies, had to be shipped through England regardless of the final port of destination.

Western Land Policies

The movement for independence arose in the colonies following a series of critical decisions made by the British government after the end of the war with France in 1763. Two themes emerge from what was to be a fundamental change in British economic policy toward the American colonies. The first involved western land. Having acquired from the French the territory between the Allegheny Mountains and the Mississippi River, the British decided to isolate the area from the rest of the colonies. Under the terms of the Proclamation of 1763 and the Quebec Act of 1774, colonists were not allowed to settle there or trade with the Indians without the permission of the British government. These actions nullified the claims to land in the area by a host of American colonies, individuals, and land companies. The essence of the policy was to maintain British control of the fur trade in the West by restricting settlement by the Americans.

Tax Policies

The second fundamental change involved taxation. The British victory over the French had come at a high price. Domestic taxes had been raised substantially during the war and total government debt had increased nearly twofold (Brewer, 1989). Furthermore, the British had decided in 1763 to place a standing army of 10,000 men in North America. The bulk of these forces were stationed in the newly acquired territory to enforce the new land policy in the West. Forts were to be built which would become the new centers of trade with the Indians. The British decided that the Americans should share the costs of the military buildup in the colonies. The reason seemed obvious: taxes were significantly higher in Britain than in the colonies. One estimate suggests the per capita tax burden in the colonies ranged from two to four percent of that in Britain (Palmer, 1959). It was time, in the British view, for the Americans to begin paying a larger share of the expenses of the empire.

Accordingly, Parliament passed a series of tax acts whose revenue was to be used to help pay for the standing army in America. The first was the Sugar Act of 1764. Proposed by England's prime minister, the act lowered tariff rates on non-British products from the West Indies and strengthened their collection. It was hoped this would reduce the incentive for smuggling and thereby increase tariff revenue (Bullion, 1982). The following year Parliament passed the Stamp Act, which imposed a tax commonly used in England. It required stamps for a broad range of legal documents as well as newspapers and pamphlets. While the colonial stamp duties were less than those in England, they were expected to generate enough revenue to finance a substantial portion of the cost of the new standing army. The same year, passage of the Quartering Act imposed essentially a tax in kind by requiring the colonists to provide British military units with housing, provisions, and transportation. In 1767 the Townshend Acts imposed tariffs upon a variety of imported goods and established a Board of Customs Commissioners in the colonies to collect the revenue.

Boycotts

American opposition to these acts was expressed initially in a variety of peaceful forms. While they did not have representation in Parliament, the colonists did attempt to exert some influence through petition and lobbying. However, it was the economic boycott that became by far the most effective means of altering the new British economic policies. In 1765 representatives from nine colonies met at the Stamp Act Congress in New York and organized a boycott of imported English goods. The boycott was so successful in reducing trade that English merchants lobbied Parliament for the repeal of the new taxes. Parliament soon responded to the political pressure. During 1766 it repealed both the Stamp and Sugar Acts (Johnson, 1997). In response to the Townshend Acts of 1767, a second major boycott started in 1768 in Boston and New York and subsequently spread to other cities, leading Parliament in 1770 to repeal all of the Townshend duties except the one on tea. In addition, Parliament decided at the same time not to renew the Quartering Act.

With these actions taken by Parliament, the Americans appeared to have successfully overturned the new British postwar tax agenda. However, Parliament had not given up what it believed to be its right to tax the colonies. On the same day it repealed the Stamp Act, Parliament passed the Declaratory Act, stating that the British government had the full power and authority to make laws governing the colonies in all cases whatsoever, including taxation. Policies, not principles, had been overturned.

The Tea Act

Three years after the repeal of the Townshend duties, British policy once again emerged as an issue in the colonies. This time the American reaction was not peaceful. It all started when Parliament for the first time granted an exemption from the Navigation Acts. In an effort to assist the financially troubled British East India Company, Parliament passed the Tea Act of 1773, which allowed the company to ship tea directly to America. The grant of a major trading advantage to an already powerful competitor meant a potential financial loss for American importers and smugglers of tea. In December a small group of colonists responded by boarding three British ships in Boston harbor and throwing overboard several hundred chests of tea owned by the East India Company (Labaree, 1964). Stunned by the events in Boston, Parliament decided not to cave in to the colonists as it had before. In rapid order it passed the Boston Port Act, the Massachusetts Government Act, the Justice Act, and the Quartering Act. Among other things these so-called Coercive or Intolerable Acts closed the port of Boston, altered the charter of Massachusetts, and reintroduced the demand for colonial quartering of British troops. Parliament then went on to pass the Quebec Act as a continuation of its policy of restricting the settlement of the West.

The First Continental Congress

Many Americans viewed all of this as a blatant abuse of power by the British government. Once again a call went out for a colonial congress to sort out a response. On September 5, 1774, delegates appointed by the colonies met in Philadelphia for the First Continental Congress. Drawing upon the successful manner in which previous acts had been overturned, the first thing Congress did was to organize a comprehensive embargo of trade with Britain. It then conveyed to the British government a list of grievances that demanded the repeal of thirteen acts of Parliament. All of the acts listed had been passed after 1763, as the delegates had agreed not to question British policies made prior to the conclusion of the Seven Years War. Despite all the problems it had created, the Tea Act was not on the list. The reason was that Congress decided not to protest British regulation of colonial trade under the Navigation Acts. In short, the delegates were saying to Parliament: take us back to 1763 and all will be well.

The Second Continental Congress

What happened then was a sequence of events that led to a significant increase in the degree of American resistance to British policies. Before the Congress adjourned in October, the delegates voted to meet again in May of 1775 if Parliament did not meet their demands. Confronted by the extent of the American demands, the British government decided it was time to impose a military solution to the crisis. Boston was occupied by British troops. In April a military confrontation occurred at Lexington and Concord. Within a month the Second Continental Congress was convened. Here the delegates decided to fundamentally change the nature of their resistance to British policies. Congress authorized a continental army and undertook the purchase of arms and munitions. To pay for all of this it established a continental currency. With previous political efforts by the First Continental Congress to form an alliance with Canada having failed, the Second Continental Congress took the extraordinary step of instructing its new army to invade Canada. In effect, these were the actions of an emerging nation-state. In October, as American forces closed in on Quebec, the King of England declared in a speech to Parliament that the colonists, having formed their own government, were now fighting for their independence. It was to be only a matter of months before Congress formally declared it.

Economic Incentives for Pursuing Independence: Taxation

Given the nature of British colonial policies, scholars have long sought to evaluate the economic incentives the Americans had in pursuing independence. In this effort economic historians initially focused on the period from the Seven Years War up to the Revolution. It turned out that making a case for the avoidance of British taxes as a major incentive for independence proved difficult. The reason was that many of the taxes imposed were later repealed, and the actual level of taxation appeared to be relatively modest. After all, soon after adopting the Constitution the Americans taxed themselves at far higher rates than the British had prior to the Revolution (Perkins, 1988). Rather, it seemed the incentive for independence might have been the avoidance of British regulation of colonial trade. Unlike some of the new British taxes, the Navigation Acts had remained intact throughout this period.

The Burden of the Navigation Acts

One early attempt to quantify the economic effects of the Navigation Acts was by Thomas (1965). Building upon the previous work of Harper (1942), Thomas employed a counterfactual analysis to assess what would have happened to the American economy in the absence of the Navigation Acts. To do this he compared American trade under the Acts with that which would have occurred had America been independent following the Seven Years War. Thomas then estimated the loss of both consumer and producer surplus to the colonies as a result of shipping enumerated goods indirectly through England. These burdens were partially offset by his estimated value of the benefits of British protection and various bounties paid to the colonies. The outcome of his analysis was that the Navigation Acts imposed a net burden of less than one percent of colonial per capita income. From this he concluded the Acts were an unlikely cause of the Revolution. A long series of subsequent works questioned various parts of his analysis but not his general conclusion (Walton, 1971). The work of Thomas also appeared to be consistent with the observation that the First Continental Congress had not demanded in its list of grievances the repeal of either the Navigation Acts or the Sugar Act.
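
Thomas's counterfactual accounting can be summarized in a simple expression (a schematic restatement, not his own notation): the net burden is the consumer and producer surplus lost from routing enumerated goods through England, less the offsetting benefits of British protection and bounties, expressed relative to colonial income,

\[ \text{net burden} \;=\; \frac{(\Delta CS + \Delta PS) - (B_{protection} + B_{bounties})}{\text{colonial income}} \;<\; 1\%. \]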

American Expectations about Future British Policy

Did this mean, then, that the Americans had few if any economic incentives for independence? Upon further consideration economic historians realized that perhaps more important to the colonists were not the past and present burdens but rather the expected future burdens of continued membership in the British Empire. The Declaratory Act made it clear the British government had not given up what it viewed as its right to tax the colonists. This was despite the fact that up to 1775 the Americans had employed a variety of protest measures including lobbying, petitions, boycotts, and violence. The confluence of not having representation in Parliament while confronting an aggressive new British tax policy designed to raise their relatively low taxes may have made it reasonable for the Americans to expect a substantial increase in the level of taxation in the future (Gunderson, 1976; Reid, 1978). Furthermore, a recent study has argued that in 1776 not only did the future burdens of the Navigation Acts clearly exceed those of the past, but a substantial portion would have been borne by those who played a major role in the Revolution (Sawers, 1992). Seen in this light, the economic incentive for independence would have been avoiding the potential future costs of remaining in the British Empire.

The Americans Undertake a Revolution

1776-77

British Military Advantages

The American colonies had both strengths and weaknesses in terms of undertaking a revolution. The colonial population of well over two million was nearly one third of that in Britain (McCusker and Menard, 1985). The growth of the colonial economy had generated a remarkably high level of per capita wealth and income (Jones, 1980). Yet the hurdles confronting the Americans in achieving independence were formidable. The British military had an array of advantages. With virtual control of the Atlantic, its navy could attack anywhere along the American coast at will and could provide logistical support for the army without much interference. A large core of experienced officers commanded a highly disciplined and well-drilled army in the large-unit tactics of eighteenth-century European warfare. By these measures the American military would have great difficulty in defeating the British. Its navy was small. The Continental Army had relatively few officers proficient in large-unit military tactics. Lacking both the numbers and the discipline of its adversary, the American army was unlikely to be able to meet the British army on equal terms on the battlefield (Higginbotham, 1977).

British Financial Advantages

In addition, the British were in a better position than the Americans to finance a war. A tax system was in place that had provided substantial revenue during previous colonial wars. Also, for a variety of reasons, the government had acquired an exceptional capacity to generate debt to fund wartime expenses (North and Weingast, 1989). For the Continental Congress the situation was much different. After declaring independence, Congress set about defining the institutional relationship between itself and the former colonies. The powers granted to Congress were established under the Articles of Confederation. Reflecting the political environment, neither the power to tax nor the power to regulate commerce was given to Congress. Having no tax system to generate revenue also made it very difficult to borrow money. According to the Articles, the states were to make voluntary payments to Congress for its war efforts. This precarious revenue system was to hamper funding by Congress throughout the war (Baack, 2001).

Military and Financial Factors Determine Strategy

It was within these military and financial constraints that the war strategies of the British and the Americans were developed. In terms of military strategy, both contestants realized that America was simply too large for the British army to occupy all of the cities and countryside. This being the case, the British decided initially to impose a naval blockade and capture major American seaports. Having already occupied Boston, the British during 1776 and 1777 took New York, Newport, and Philadelphia. With plenty of room to maneuver his forces and unable to match those of the British, George Washington chose to engage in a war of attrition. The purpose was twofold. First, by not engaging in an all-out offensive Washington reduced the probability of losing his army. Second, over time the British might tire of the war.

Saratoga

Frustrated by the lack of a conclusive victory, the British altered their strategy. During 1777 a plan was devised to cut off New England from the rest of the colonies, contain the Continental Army, and then defeat it. An army was assembled in Canada under the command of General Burgoyne and sent down along the Hudson River, where it was to link up with an army sent from New York City. Unfortunately for the British, the plan totally unraveled when in October Burgoyne's army was defeated at the battle of Saratoga and forced to surrender (Ketchum, 1997).

The American Financial Situation Deteriorates

With the victory at Saratoga, the military side of the war had improved considerably for the Americans. The financial situation, however, was seriously deteriorating. The states to this point had made no voluntary payments to Congress. At the same time the continental currency had to compete with a variety of other currencies for resources, as the states were issuing their own individual currencies to help finance expenditures. Moreover, the British, in an effort to destroy the funding system of the Continental Congress, had undertaken a covert program of counterfeiting the Continental dollar. These dollars were printed and then distributed throughout the former colonies by the British army and agents loyal to the Crown (Newman, 1957). Altogether this expansion of the nominal money supply in the colonies led to a rapid depreciation of the Continental dollar (Calomiris, 1988; Michener, 1988). Inflation may have been further enhanced by any negative impact upon output resulting from the disruption of markets along with the destruction of property and the loss of able-bodied men (Buel, 1998). By the end of 1777 inflation had reduced the specie value of the Continental to about twenty percent of what it had been when originally issued. This rapid decline in value was becoming a serious problem for Congress, since up to this point almost ninety percent of its revenue had been generated from currency emissions.

1778-83

British Invasion of the South

The British defeat at Saratoga had a profound impact upon the nature of the war. The French government, still upset by its defeat by the British in the Seven Years War and encouraged by the American victory, signed a treaty of alliance with the Continental Congress in early 1778. Fearing a new war with France, the British government sent a commission to negotiate a peace treaty with the Americans. The commission offered to repeal all of the legislation applying to the colonies passed since 1763. Congress rejected the offer. The British response was to give up its efforts to suppress the rebellion in the North and instead organize an invasion of the South. The new southern campaign began with the taking of the port of Savannah in December. Pursuing their southern strategy, the British won major victories at Charleston and Camden during the spring and summer of 1780.

Worsening Inflation and Financial Problems

As the American military situation deteriorated in the South, so did the financial circumstances of the Continental Congress. Inflation continued as Congress and the states dramatically increased the rate of issuance of their currencies. At the same time the British continued their policy of counterfeiting the Continental dollar. In order to deal with inflation, some states organized conventions for the purpose of establishing wage and price controls (Rockoff, 1984). With its currency rapidly depreciating in value, Congress increasingly relied on funds from other sources such as state requisitions, domestic loans, and French loans of specie. As a last resort Congress authorized the army to confiscate property.

Yorktown

Fortunately for the Americans, the British military effort collapsed before the funding system of Congress did. In a combined effort during the fall of 1781, French and American forces trapped the British southern army under the command of Cornwallis at Yorktown, Virginia. Under siege by superior forces, the British army surrendered on October 19. The British government had now suffered not only the defeat of its northern strategy at Saratoga but also the defeat of its southern campaign at Yorktown. Following Yorktown, Britain suspended its offensive military operations against the Americans. The war was over. All that remained was the political maneuvering over the terms of peace.

The Treaty of Paris

The Revolutionary War officially concluded with the signing of the Treaty of Paris in 1783. Under the terms of the treaty the United States was granted independence and British troops were to evacuate all American territory. While commonly viewed by historians through the lens of political science, the Treaty of Paris was indeed a momentous economic achievement by the United States. The British ceded to the Americans all of the land east of the Mississippi River which they had taken from the French during the Seven Years War. The West was now available for settlement. To the extent the Revolutionary War had been undertaken by the Americans to avoid the costs of continued membership in the British Empire, the goal had been achieved. As an independent nation the United States was no longer subject to the regulations of the Navigation Acts. There was no longer to be any economic burden from British taxation.

The Formation of a National Government

When you start a revolution you have to be prepared for the possibility you might win. This means being prepared to form a new government. When the Americans declared independence their experience of governing at a national level was indeed limited. In 1765 delegates from various colonies had met for about eighteen days at the Stamp Act Congress in New York to sort out a colonial response to the new stamp duties. Nearly a decade passed before delegates from colonies once again got together to discuss a colonial response to British policies. This time the discussions lasted seven weeks at the First Continental Congress in Philadelphia during the fall of 1774. The primary action taken at both meetings was an agreement to boycott trade with England. After having been in session only a month, delegates at the Second Continental Congress for the first time began to undertake actions usually associated with a national government. However, when the colonies were declared to be free and independent states Congress had yet to define its institutional relationship with the states.

The Articles of Confederation

Following the Declaration of Independence, Congress turned to deciding the political and economic powers it would be given as well as those granted to the states. After more than a year of debate among the delegates the allocation of powers was articulated in the Articles of Confederation. Only Congress would have the authority to declare war and conduct foreign affairs. It was not given the power to tax or regulate commerce. The expenses of Congress were to be made from a common treasury with funds supplied by the states. This revenue was to be generated from exercising the power granted to the states to determine their own internal taxes. It was not until November of 1777 that Congress approved the final draft of the Articles. It took over three years for the states to ratify the Articles. The primary reason for the delay was a dispute over control of land in the West as some states had claims while others did not. Those states with claims eventually agreed to cede them to Congress. The Articles were then ratified and put into effect on March 1, 1781. This was just a few months before the American victory at Yorktown. The process of institutional development had proved so difficult that the Americans fought almost the entire Revolutionary War with a government not sanctioned by the states.

Difficulties in the 1780s

The new national government that emerged from the Revolution confronted a host of issues during the 1780s. The first major one to be addressed by Congress was what to do with all of the land acquired in the West. Starting in 1784 Congress passed a series of land ordinances that provided for land surveys, sales of land to individuals, and the institutional foundation for the creation of new states. These ordinances opened the West for settlement. While this was a major accomplishment by Congress, other issues remained unresolved. Having repudiated its own currency and lacking the power of taxation, Congress did not have an independent source of revenue to pay off the domestic and foreign debts incurred during the war. Since the Continental Army had been demobilized, no protection was being provided for settlers in the West or against foreign invasion. Domestic trade was increasingly disrupted during the 1780s as more states began to impose tariffs on goods from other states. Unable to resolve these and other issues, Congress endorsed a proposed plan to hold a convention in Philadelphia in May of 1787 to revise the Articles of Confederation.

Rather than amend the Articles, the delegates to the convention voted to replace them entirely with a new form of national government under the Constitution. There are, of course, many ways to assess the significance of this truly remarkable achievement. One is to view the Constitution as an economic document. Among other things the Constitution specifically addressed many of the economic problems that confronted Congress during and after the Revolutionary War. Drawing upon lessons learned in financing the war, the delegates provided that no state would be allowed to coin money or issue bills of credit; only the national government could coin money and regulate its value, and punishment was to be provided for counterfeiting. The problems associated with the states contributing to a common treasury under the Articles were overcome by giving the national government the coercive power of taxation. Part of the revenue was to be used to pay for the common defense of the United States. No longer would states be allowed to impose tariffs as they had done during the 1780s; the national government was now given the power to regulate both foreign and interstate commerce. As a result the nation was to become a common market. There is a general consensus among economic historians today that the economic significance of the ratification of the Constitution was to lay the institutional foundation for long-run growth. From the point of view of the former colonists, however, it meant they had succeeded in transferring the power to tax and regulate commerce from Parliament to the new national government of the United States.

Tables
Table 1 Continental Dollar Emissions (1775-1779)

Year of Emission Nominal Dollars Emitted (000) Annual Emission As Share of Total Nominal Stock Emitted Specie Value of Annual Emission (000) Annual Emission As Share of Total Specie Value Emitted
1775 $6,000 3% $6,000 15%
1776 19,000 8 15,330 37
1777 13,000 5 4,040 10
1778 63,000 26 10,380 25
1779 140,500 58 5,270 13
Total $241,500 100% $41,020 100%

Source: Bullock (1895), 135.
Table 2 Currency Emissions by the States (1775-1781)

Year of Emission Nominal Dollars Emitted (000) Year of Emission Nominal Dollars Emitted (000)
1775 $4,740 1778 $9,118
1776 13,328 1779 17,613
1777 9,573 1780 66,813
1781 123,376
Total $27,641 Total $216,376

Source: Robinson (1969), 327-28.

References

Baack, Ben. “Forging a Nation State: The Continental Congress and the Financing of the War of American Independence.” Economic History Review 54, no.4 (2001): 639-56.

Brewer, John. The Sinews of Power: War, Money and the English State, 1688- 1783. London: Cambridge University Press, 1989.

Buel, Richard. In Irons: Britain’s Naval Supremacy and the American Revolutionary Economy. New Haven: Yale University Press, 1998.

Bullion, John L. A Great and Necessary Measure: George Grenville and the Genesis of the Stamp Act, 1763-1765. Columbia: University of Missouri Press, 1982.

Bullock, Charles J. “The Finances of the United States from 1775 to 1789, with Especial Reference to the Budget.” Bulletin of the University of Wisconsin 1 no. 2 (1895): 117-273.

Calomiris, Charles W. “Institutional Failure, Monetary Scarcity, and the Depreciation of the Continental.” Journal of Economic History 48 no. 1 (1988): 47-68.

Egnal, Mark. A Mighty Empire: The Origins of the American Revolution. Ithaca: Cornell University Press, 1988.

Ferguson, E. James. The Power of the Purse: A History of American Public Finance, 1776-1790. Chapel Hill: University of North Carolina Press, 1961.

Gunderson, Gerald. A New Economic History of America. New York: McGraw- Hill, 1976.

Harper, Lawrence A. “Mercantilism and the American Revolution.” Canadian Historical Review 23 (1942): 1-15.

Higginbotham, Don. The War of American Independence: Military Attitudes, Policies, and Practice, 1763-1789. Bloomington: Indiana University Press, 1977.

Jensen, Merrill, editor. English Historical Documents: American Colonial Documents to 1776. New York: Oxford University Press, 1969.

Johnson, Allen S. A Prologue to Revolution: The Political Career of George Grenville (1712-1770). University Press of America, 1997.

Jones, Alice H. Wealth of a Nation to Be: The American Colonies on the Eve of the Revolution. New York: Columbia University Press, 1980.

Ketchum, Richard M. Saratoga: Turning Point of America’s Revolutionary War. New York: Henry Holt and Company, 1997.

Labaree, Benjamin Woods. The Boston Tea Party. New York: Oxford University Press, 1964.

Mackesy, Piers. The War for America, 1775-1783. Cambridge: Harvard University Press, 1964.

McCusker, John J. and Russell R. Menard. The Economy of British America, 1607- 1789. Chapel Hill: University of North Carolina Press, 1985.

Michener, Ron. “Backing Theories and the Currencies of Eighteenth-Century America: A Comment.” Journal of Economic History 48 no. 3 (1988): 682-692.

Nester, William R. The First Global War: Britain, France, and the Fate of North America, 1756-1775. Westport: Praeger, 2000.

Newman, E. P. “Counterfeit Continental Currency Goes to War.” The Numismatist 1 (January, 1957): 5-16.

North, Douglass C., and Barry R. Weingast. “Constitutions and Commitment: The Evolution of Institutions Governing Public Choice in Seventeenth-Century England.” Journal of Economic History 49 No. 4 (1989): 803-32.

O’Shaughnessy, Andrew Jackson. An Empire Divided: The American Revolution and the British Caribbean. Philadelphia: University of Pennsylvania Press, 2000.

Palmer, R. R. The Age of Democratic Revolution: A Political History of Europe and America. Vol. 1. Princeton: Princeton University Press, 1959.

Perkins, Edwin J. The Economy of Colonial America. New York: Columbia University Press, 1988.

Reid, Joseph D., Jr. “Economic Burden: Spark to the American Revolution?” Journal of Economic History 38, no. 1 (1978): 81-100.

Robinson, Edward F. “Continental Treasury Administration, 1775-1781: A Study in the Financial History of the American Revolution.” Ph.D. diss., University of Wisconsin, 1969.

Rockoff, Hugh. Drastic Measures: A History of Wage and Price Controls in the United States. Cambridge: Cambridge University Press, 1984.

Sawers, Larry. “The Navigation Acts Revisited.” Economic History Review 45, no. 2 (1992): 262-84.

Thomas, Robert P. “A Quantitative Approach to the Study of the Effects of British Imperial Policy on Colonial Welfare: Some Preliminary Findings.” Journal of Economic History 25, no. 4 (1965): 615-38.

Tucker, Robert W. and David C. Hendrickson. The Fall of the First British Empire: Origins of the War of American Independence. Baltimore: Johns Hopkins Press, 1982.

Walton, Gary M. “The New Economic History and the Burdens of the Navigation Acts.” Economic History Review 24, no. 4 (1971): 533-42.

Citation: Baack, Ben.  “The Economics of the American Revolutionary War.” EH.Net Encyclopedia, edited by Robert Whaples. October, 2001. URL https://eh.net/encyclopedia/the-economics-of-the-american-revolutionary-war-2/

 

From GATT to WTO: The Evolution of an Obscure Agency to One Perceived as Obstructing Democracy

Susan Ariel Aaronson, National Policy Association

Historical Roots of GATT and the Failure of the ITO

While the United States has always participated in international trade, it did not take a leadership role in global trade policy making until the Great Depression. One reason for this is that under the US Constitution, Congress has responsibility for promoting and regulating commerce, while the executive branch has responsibility for foreign policy. Trade policy was thus a tug of war between the branches, and the two did not always agree on the mix of trade promotion and protection. In 1934, however, the United States began an experiment with the Reciprocal Trade Agreements Act. In the hope of expanding employment, Congress agreed to permit the executive branch to negotiate bilateral trade agreements. (Bilateral agreements are those between two parties, for example, the US and another country.)

During the 1930s, the amount of bilateral negotiation under this act was fairly limited, and in truth it did not do much to expand global or domestic trade. However, the Second World War led policy makers to experiment on a broader level. In the 1940s, working with the British government, the United States developed two innovations to expand and govern trade among nations: the General Agreement on Tariffs and Trade (GATT) and the International Trade Organization (ITO). GATT was simply a temporary multilateral agreement designed to provide a framework of rules and a forum to negotiate trade barrier reductions among nations. It was built on the Reciprocal Trade Agreements Act, which allowed the executive branch to negotiate trade agreements with temporary authority from Congress.

The ITO

The ITO, in contrast, set up a code of world trade principles and a formal international institution. The ITO's architects were greatly influenced by the British economist John Maynard Keynes, and the organization represented an internationalization of the view that governments could play a positive role in encouraging international economic growth. It was remarkably comprehensive, including chapters on commercial policy, investment, employment and even business practices (what we call antitrust or competition policies today). The ITO also included a secretariat with the power to arbitrate trade disputes. But the ITO was not popular, and it took a long time to negotiate. Its final charter was signed by 54 nations at the UN Conference on Trade and Employment in Havana in March 1948, but this was too late. The ITO missed the flurry of support for internationalism that accompanied the end of World War II and led to the establishment of agencies such as the UN, the IMF and the World Bank. The US Congress never brought membership in the ITO to a vote, and when the president announced that he would not seek ratification of the Havana Charter, the ITO effectively died. Consequently, the provisional GATT (which was not a formal international organization) governed world trade until 1994 (Aaronson, 1996, 3-5).

GATT

GATT was a club, albeit an increasingly popular one. But GATT was not a treaty. The United States (and other nations) joined GATT under its Protocol of Provisional Application, which meant that the provisions of GATT were binding only insofar as they were not inconsistent with a nation's existing legislation. With this clause, the United States could spur trade liberalization or contravene the rules of GATT when politically or economically necessary (US Tariff Commission, 1950, 19-21, 20 note 4).

From 1948 until 1993, GATT's purview and membership grew dramatically. During this period, GATT sponsored eight trade rounds in which member nations, called contracting parties, agreed to mutually reduce trade barriers. But trade liberalization under GATT came with costs to some Americans. Important industries in the United States such as textiles, televisions, steel and footwear suffered from foreign competition, and some workers lost jobs. However, most Americans benefited from this growth in world trade: as consumers they got a cheaper and more diverse supply of goods; as producers, most found new markets and growing employment. From 1948 to about 1980 this economic growth came at little cost to the American economy as a whole or to American democracy (Aaronson, 1996, 133-134).

The Establishment of the WTO

By the late 1980s, a growing number of nations had decided that GATT could better serve global trade expansion if it became a formal international organization. In 1988, the US Congress, in the Omnibus Trade and Competitiveness Act, explicitly called for more effective dispute settlement mechanisms and pressed for negotiations to formalize GATT and make it a more powerful and comprehensive organization. The result was the World Trade Organization (WTO), which was established during the Uruguay Round (1986-1993) of GATT negotiations and which subsumed GATT. The WTO provides a permanent arena for member governments to address international trade issues, and it oversees the implementation of the trade agreements negotiated in the Uruguay Round of trade talks.

The WTO’s Powers

The WTO is not simply GATT transformed into a formal international organization. It covers a much broader purview, including subsidies, intellectual property, food safety and other policies that were once solely the subject of national governments. The WTO also has strong dispute settlement mechanisms. As under GATT, panels weigh trade disputes, but these panels must adhere to a strict time schedule. Moreover, in contrast with GATT procedure, no country can veto or delay panel decisions. If US laws protecting the environment (such as laws requiring gas mileage standards) were found to be de facto trade impediments, the US would have to take action: it could change its law, do nothing and face retaliation, or compensate the other party for lost trade if it kept such a law (Jackson, 1994).

The WTO’s Mixed Record

Despite its broader scope and powers, the WTO has had a mixed record. Nations have clamored to join the new organization and receive the benefits of expanded trade and formalized multinational rules. Today the WTO has grown to 142 members, and nations such as China, Russia, Saudi Arabia and Ukraine hope to join soon. But since the WTO was created, its members have not been able to agree on the scope of a new round of trade talks. Many developing countries believe that their industrialized trading partners have not fully granted them the benefits promised under the Uruguay Round of GATT. Some countries regret including intellectual property protections under the aegis of the WTO.

Protests

A wide range of citizens has become concerned about the effect of trade rules upon the achievement of other important policy goals. In India, Latin America, Europe, Canada and the United States, alarmed citizens have taken to the streets to protest globalization and in particular what they perceive as the undemocratic nature of the WTO. During the fiftieth anniversary of GATT in Geneva in 1998, some 30,000 people rioted. During the Seattle Ministerial Meetings in November/December 1999, again about 30,000 people protested, some violently. When the WTO attempts to kick off a new round in Doha, Qatar later this year, protestors are again planning to disrupt the proceedings (Aaronson, 2001).

Explaining Recent Protests about the WTO

During the first thirty years of GATT's history, the relationship of trade policy to human rights, labor rights, consumer protection, and the environment was essentially "off-stage." This is because GATT's role was limited to governing how nations used the traditional tools of economic protection: border measures such as tariffs and quotas.

GATT’s Scope Was Initially Limited

Why did policy makers limit the scope of GATT? The US could participate in GATT negotiations only under congressional extensions of the Reciprocal Trade Agreements Act of 1934, and this act allowed the president only to negotiate commercial policy. As a result, GATT said almost nothing about the effects of trade (whether trade degrades the environment or injures workers) or the conditions of trade (whether disparate systems of regulation, such as consumer, environmental, or labor standards, allow for fair competition). From the 1940s to the 1970s, few policy makers would admit that their systems of regulation sometimes distorted trade. Such regulations were the turf of domestic policy makers, not foreign policy makers, and GATT said little about domestic norms or regulations. In 1971, GATT established a working party on environmental measures and international trade, but it did not meet until 1991, after much pressure from some European nations (Charnovitz, 1992, 341, 348).

GATT’s Scope Widened to Include Domestic Policies

Policy makers and economists have long recognized that trade and social regulations can intersect. Although the United States did not ban trade in slaves until 1807, the US was among the first nations to ban goods manufactured by forced labor (prison labor), in the Tariff Act of 1890 (section 51) (Aaronson, 2001, 44). This provision influenced many trade agreements that followed, including GATT, which includes a similar provision. But in the 1970s, public officials began to admit that domestic regulations, such as health and safety regulations, could, with or without intent, also distort trade (Keck and Sikkink, 1998, 41-47). They worked to include rules governing such regulations in the purview of GATT and other trade agreements. This process began in the Tokyo Round (1973-79) of GATT negotiations, but came to fruition during the Uruguay Round, when policy makers expanded the turf of trade agreements to include rules governing once-domestic policies such as intellectual property, food safety, and subsidies (GATT Secretariat, 1993, Annex IV, 91).

Rising Importance of International Trade and Trade Policy

In 1970, the import and export of American goods and services added up to only about 11.5% of gross domestic product. This share climbed swiftly to 20.5% in 1980 and at the end of the century averaged about 24%. In addition, by the mid-1980s a persistent trade deficit had emerged, with imports exceeding exports by significant amounts year after year; in 1987, for example, imports exceeded exports by 3% of GDP.

Public Opinion Has Become More Concerned about Trade Policy

Partly because of the rising importance of international trade, the relationship of trade policy to the achievement of other public policy goals has been an important and contentious issue since at least 1980. A growing number of citizens began to question whether trade agreements should address social or environmental issues. Others argued that trade agreements had the effect of undermining domestic regulations such as environmental, food safety or consumer regulations. Still others argued that trade agreements did not sufficiently regulate the behavior of global corporations. Although relatively few Americans have taken to the streets to protest trade laws, polling data reveal that Americans agree with some of the principal concerns of the protesters. They want trade agreements to raise the environmental and labor standards in the nations with which Americans trade.

Most Agree That Trade Fuels Economic Growth

On the other hand, most people agree with analysts who argue that trade helps fuel American growth (PIPA, 1999). (For example, 93% of economists surveyed agreed that tariffs and import quotas usually reduce general economic welfare (Alston, Kearl and Vaughan, 1992).) Economists argue that the US must trade if it is to maintain its high standard of living. Autarky is not a practical option even for America's mighty and diversified economy. Although the US is blessed with navigable rivers, fertile soil, abundant resources, a hard-working populace, and a huge internal market, Americans must trade because they cannot efficiently or sufficiently produce all the goods and services that citizens desire. Moreover, there are some goods that Americans cannot produce. That is why America from the beginning of its history has signed trade agreements with other nations.

Building a National Consensus on Trade Policy Is a Difficult Balancing Act

For the last decade, Americans have not been able to find common ground on trade policy and on how to ensure that trade agreements such as those enforced by the WTO do not thwart achievement of other important policy goals. After 1993, American business did not push for a new round of trade talks, as the global and the domestic economy prospered. But in recent months (early 2001), business has been much more active, as has George W. Bush's Administration, in trying to develop a new round of trade talks under the WTO. Business has become more eager as economic growth has slowed. Moreover, American business leaders seem to have learned the lessons of the 1999 Seattle protests. The members of the Business Roundtable, an organization of chief executive officers from America's largest, most prestigious companies, have noted, "we must first build a national consensus on trade policy… Building this consensus will…require the careful consideration of international labor and environmental issues…that cannot be ignored." The Roundtable concluded that the problem is not whether these issues are trade policy issues; it stressed that trade proponents and critics must find a strategy, a trade policy approach that allows negotiators to address these issues constructively (Business Roundtable, 2001). The Roundtable was essentially saying that Americans must find common ground and must acknowledge the relationship of trade policy to the achievement of other policy goals. The Roundtable was not alone. Other formal and informal business groups such as the National Association of Manufacturers, as well as environmental and labor groups, have tried to develop an inventory of ideas on how to pursue trade agreements while also promoting other important policy goals such as environmental protection or labor rights. Republican members of Congress responded publicly to these efforts with a warning that such efforts could compromise the President's strategy for trade liberalization. As of this writing, however, the US Trade Representative has not announced how it will resolve the relationship between trade and social/environmental policy goals within specific trade agreements, such as the WTO. Resolving these issues will undoubtedly be very difficult, so the WTO will probably remain a source of contention.

References

Aaronson, Susan. Trade and the American Dream: A Social History of Postwar Trade Policy. Lexington, KY: University Press of Kentucky, 1996.

Aaronson, Susan. Taking Trade to the Streets: The Lost History of Efforts to Shape Globalization. Ann Arbor: University of Michigan Press, 2001.

Alston, Richard M., J.R. Kearl, and Michael B. Vaughan. “Is There a Consensus among Economists in the 1990’s?” American Economic Review: Papers and Proceedings 82 (1992): 203-209.

Business Roundtable. “The Case for US Trade Leadership: The United States is Falling Behind.” Statement 2/9/2001. www.brt.org.

Charnovitz, Steve. “Environmental and Labour Standards in Trade.” World Economy 15 (1992).

GATT Secretariat. “Final Act Embodying the Results of the Uruguay Round of Multilateral Trade Negotiations.” December 15, 1993.

Jackson, John H. “The World Trade Organization, Dispute Settlement and Codes of Conduct.” In The New GATT: Implications for the United States, edited by Susan M. Collins and Barry P. Bosworth, 63-75. Washington: Brookings, 1994.

Keck, Margaret E. and Kathryn Sikkink. Activists beyond Borders: Advocacy Networks in International Politics. Ithaca: Cornell University Press, 1998.

Program on International Policy Attitudes. “Americans on Globalization.” Poll conducted October 21-October 29, 1999 with 18,126 adults. See www.pipa.org/OnlineReports/Globalization/executive_summary.html

US Tariff Commission. Operation of the Trade Agreements Program, Second Report,

Citation: Aaronson, Susan Ariel. “From GATT to WTO: The Evolution of an Obscure Agency to One Perceived as Obstructing Democracy”. EH.Net Encyclopedia, edited by Robert Whaples. January 16, 2001. URL http://eh.net/encyclopedia/from-gatt-to-wto-the-evolution-of-an-obscure-agency-to-one-perceived-as-obstructing-democracy-2/

An Overview of the Economic History of Uruguay since the 1870s

Luis Bértola, Universidad de la República — Uruguay

Uruguay’s Early History

Without silver or gold, without valuable spices, scarcely peopled by gatherers and fishers, the Eastern Strand of the Uruguay River (Banda Oriental was the colonial name; República Oriental del Uruguay is the official name today) was, in the sixteenth and seventeenth centuries, distant and unattractive to the European nations that conquered the region. The major export product was the leather of wild descendants of cattle introduced in the early 1600s by the Spaniards. As cattle preceded humans, the state preceded society: Uruguay's first settlement was Colonia del Sacramento, a Portuguese military fortress founded in 1680, placed precisely across from Buenos Aires, Argentina. Montevideo, also a fortress, was founded by the Spaniards in 1724. Uruguay was on the border between the Spanish and Portuguese empires, a feature which would be decisive for the creation, with strong British involvement, in 1828-1830, of an independent state.

Montevideo had the best natural harbor in the region, and rapidly became the end-point of the trans-Atlantic routes into the region, the base for a strong commercial elite and for the Spanish navy in the region. During the first decades after independence, however, Uruguay was plagued by political instability, precarious institution building and economic stagnation. Recurrent civil wars with intensive involvement by Britain, France, Portugal-Brazil and Argentina made Uruguay a center for international conflicts, the most important being the Great War (Guerra Grande), which lasted from 1839 to 1851. At its end Uruguay had only about 130,000 inhabitants.

“Around the middle of the nineteenth century, Uruguay was dominated by the latifundium, with its ill-defined boundaries and enormous herds of native cattle, from which only the hides were exported to Great Britain and part of the meat, as jerky, to Brazil and Cuba. There was a shifting rural population that worked on the large estates and lived largely on the parts of beef carcasses that could not be marketed abroad. Often the landowners were also the caudillos of the Blanco or Colorado political parties, the protagonists of civil wars that a weak government was unable to prevent” (Barrán and Nahum, 1984, 655). This picture still holds, even if it is excessively stylized and neglects the importance of subsistence and domestic-market-oriented peasant production.

Economic Performance in the Long Run

Despite its precarious beginnings, Uruguay's per capita gross domestic product (GDP) growth from 1870 to 2002 shows remarkable persistence, with the long-run rate averaging around one percent per year. However, this apparent stability hides some important shifts. As shown in Figure 1, both GDP and population grew much faster before the 1930s; from 1930 to 1960 immigration vanished and population grew much more slowly, while decades of GDP stagnation and fast growth alternated; after the 1960s Uruguay became a net-emigration country, with low natural growth rates and still-spasmodic GDP growth.

GDP growth shows a pattern characterized by Kuznets-like swings (Bértola and Lorenzo 2004), with extremely destructive downward phases, as shown in Table 1. This cyclical pattern is correlated with movements of the terms of trade (the relative price of exports versus imports), world demand and international capital flows. In the expansive phases exports performed well, due to increased demand and/or positive terms-of-trade shocks (1880s, 1900s, 1920s, 1940s and even during the Mercosur years from 1991 to 1998). Capital flows would sometimes follow these booms and prolong the cycle, or even be a decisive force in setting the cycle off, as were financial flows in the 1970s and 1990s. The usual outcome, however, has been an overvalued currency, which masked the debt problem and threatened the balance of trade by overpricing exports. Crises have been the result of a combination of changing trade conditions, devaluation and over-indebtedness, as in the 1880s, early 1910s, late 1920s, 1950s, early 1980s and late 1990s.

Figure 1: Population and per capita GDP of Uruguay, 1870-2002 (1913=100)

Table 1: Swings in the Uruguayan Economy, 1870-2003

Period Per capita GDP fall (%) Length of recession (years) Time to pre-crisis levels (years) Time to next crisis (years)
1872-1875 26 3 15 16
1888-1890 21 2 19 25
1912-1915 30 3 15 19
1930-1933 36 3 17 24-27
1954/57-59 9 2-5 18-21 27-24
1981-1984 17 3 11 17
1998-2003 21 5

Sources: See Figure 1.
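
The recovery times in Table 1 follow from simple proportional arithmetic: a fall of fraction f in per capita GDP requires cumulative growth of 1/(1-f) - 1 to regain the pre-crisis level. A worked example using the table's own figures (the steady growth rate assumed below is an illustration, not a figure from the sources):

$$(1-f)(1+g)^{T}=1 \quad\Longrightarrow\quad (1+g)^{T}=\frac{1}{1-f}.$$

For the 1930-1933 crisis, $f=0.36$, so recovery required $1/0.64 \approx 1.56$, i.e. about 56 percent cumulative growth; at an assumed steady 2.7 percent per year, $(1.027)^{17}\approx 1.57$, consistent with the 17 years to pre-crisis levels reported in the table.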

Besides its cyclical movement, the terms of trade showed a sharp positive trend in 1870-1913, a strongly fluctuating pattern around similar levels in 1913-1960 and a deteriorating trend since then. While the volume of exports grew quickly up to the 1920s, it stagnated in 1930-1960 and started to grow again after 1970. As a result, the purchasing power of exports grew fourfold in 1870-1913, fluctuated along with the terms of trade in 1930-1960, and exhibited moderate growth in 1970-2002.
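
The "purchasing power of exports" used here is what the trade literature calls the income terms of trade: the export volume index scaled by the net barter terms of trade. As a minimal sketch of the standard definition (the notation is introduced here for illustration, not taken from the sources):

$$\text{Income terms of trade} \;=\; \frac{P_X}{P_M}\,Q_X,$$

where $P_X/P_M$ is the terms of trade and $Q_X$ the volume of exports. This is why the series could grow fourfold in 1870-1913, when both components rose, yet merely track the terms of trade in 1930-1960, when export volumes stagnated.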

The Uruguayan economy was very open to trade in the period up to 1913, featuring high export shares, which naturally declined as the rapidly growing population filled in rather empty areas. In 1930-1960 the economy became increasingly and markedly closed to international trade, but from the 1970s it opened up to trade again. Exports, which earlier had been directed mainly to Europe (beef, wool, leather, linseed, etc.), were increasingly oriented to Argentina and Brazil, in the context of bilateral trade agreements in the 1970s and 1980s and of Mercosur (the trading zone encompassing Argentina, Brazil, Paraguay and Uruguay) in the 1990s.

While industrial output kept pace with agrarian export-led growth during the first globalization boom before World War I, the industrial share in GDP increased in 1930-54, with industry mainly oriented toward the domestic market. Deindustrialization has been profound since the mid-1980s. The service sector was always large: focused on commerce, transport and traditional state bureaucracy during the first globalization boom; on health care, education and social services during the import-substituting industrialization (ISI) period in the middle of the twentieth century; and on military expenditure, tourism and finance since the 1970s.

The income distribution changed markedly over time. During the first globalization boom before World War I, an already uneven distribution of income and wealth seems to have worsened, due to massive immigration and increasing demand for land, both rural and urban. By the 1920s, however, the relative prices of land and labor reversed their previous trend, reducing income inequality. The trend was later reinforced by industrialization policies, democratization, the introduction of wage councils, and the expansion of the welfare state based on an egalitarian ideology. Inequality diminished in many respects: between sectors, within sectors, between genders and between workers and pensioners. The military dictatorship and the liberal economic policy implemented since the 1970s initiated a drastic reversal of the trend toward economic equality, and the globalizing movements of the 1980s and 1990s under democratic rule did not increase equality. Thus, inequality remains at the higher levels reached during the period of dictatorship (1973-85).

Comparative Long-run Performance

If the stable long-run rate of Uruguayan per capita GDP growth hides important internal transformations, Uruguay's changing position in the international scene is even more remarkable. During the first globalization boom the world became more unequal: the United States forged ahead as the world leader (closely followed by other settler economies); Asia and Africa lagged far behind. Latin America showed a confusing map, in which countries such as Argentina and Uruguay performed rather well while others, such as the Andean region, lagged far behind (Bértola and Williamson 2003). Uruguay's strong initial position tended to deteriorate in relation to the successful core countries during the late 1800s, as shown in Figure 2. This trend of negative relative growth was somewhat weak during the first half of the twentieth century, deepened significantly during the 1960s as the import-substituting industrialization model became exhausted, and has continued since the 1970s, despite policies favoring increased integration into the global economy.

Figure 2: Per capita GDP of Uruguay relative to four core countries, 1870-2002

If school enrollment and literacy rates are reasonable proxies for human capital, in the late 1800s both Argentina and Uruguay had a great handicap in relation to the United States, as shown in Table 2. The gap in literacy rates tended to disappear, as did this proxy's ability to measure comparative levels of human capital. Nevertheless, school enrollment, which includes college-level and technical education, showed a catching-up trend until the 1960s, but reversed afterwards.

The gap in life expectancy at birth has always been much smaller than the gaps in the other development indicators. Nevertheless, some trends are noticeable: the gap increased in 1900-1930, decreased in 1930-1950, and increased again after the 1970s.

Table 2: Uruguayan Performance in Comparative Perspective, 1870-2000 (US = 100)

1870 1880 1890 1900 1910 1920 1930 1940 1950 1960 1970 1980 1990 2000

GDP per capita
Uruguay 101 65 63 27 32 27 33 27 26 24 19 18 15 16
Argentina 63 34 38 31 32 29 25 25 24 21 15 16
Brazil 23 8 8 8 8 8 7 9 9 13 11 10
Latin America 13 12 13 10 9 9 9 6 6
USA 100 100 100 100 100 100 100 100 100 100 100 100 100 100

Literacy rates
Uruguay 57 65 72 79 85 91 92 94 95 97 99
Argentina 57 65 72 79 85 91 93 94 94 96 98
Brazil 39 38 37 42 46 51 61 69 76 81 86
Latin America 28 30 34 37 42 47 56 65 71 77 83
USA 100 100 100 100 100 100 100 100 100 100 100

School enrollment
Uruguay 23 31 31 30 34 42 52 46 43
Argentina 28 41 42 36 39 43 55 44 45
Brazil 12 11 12 14 18 22 30 42
Latin America
USA 100 100 100 100 100 100 100 100 100

Life expectancy at birth
Uruguay 102 100 91 85 91 97 97 97 95 96 96
Argentina 81 85 86 90 88 90 93 94 95 96 95
Brazil 60 60 56 58 58 63 79 83 85 88 88
Latin America 65 63 58 58 59 63 71 77 81 88 87
USA 100 100 100 100 100 100 100 100 100 100 100

Sources: Per capita GDP: Maddison (2001) and Astorga, Bergés and FitzGerald (2003). Literacy rates and life expectancy: Astorga, Bergés and FitzGerald (2003). School enrollment: Bértola and Bertoni (1998).

Uruguay during the First Globalization Boom: Challenge and Response

During the reconstruction that followed the Guerra Grande after 1851, the Uruguayan population grew rapidly (fueled by high natural rates and immigration) and so did per capita output. Productivity grew due to several causes: the steamship revolution, which critically reduced the price spread between Europe and America and eased access to the European market; railways, which contributed to the unification of domestic markets and reduced domestic transport costs; the diffusion and adaptation to domestic conditions of innovations in cattle-breeding and services; and a significant reduction in transaction costs, related to a fluctuating but noticeable process of institution building and strengthening of the coercive power of the state.

Wool and woolen products, hides and leather were exported mainly to Europe; salted beef (tasajo) went to Brazil and Cuba. Livestock-breeding (both cattle and sheep) was intensive in natural resources and dominated by large estates. By the 1880s, the agrarian frontier was exhausted, landholdings were fenced and property rights strengthened. Labor became abundant and concentrated in urban areas, especially around Montevideo's harbor, which played an important role as a regional (supranational) commercial center. By 1908, Montevideo contained 40 percent of the nation's population, which had risen to more than a million inhabitants, and provided most of Uruguay's services, its civil servants and its weak, handicraft-dominated manufacturing sector.

By the 1910s, Uruguayan competitiveness started to weaken. As the benefits of the old technological paradigm were eroding, the new one was not particularly beneficial for resource-intensive countries such as Uruguay. International demand shifted away from primary consumption, the population of Europe grew slowly and European countries struggled for self-sufficiency in primary production in a context of soaring world supply. Beginning in the 1920s, the cattle-breeding sector performed very poorly, due to a lack of innovation beyond natural pastures. In the 1930s, its performance deteriorated further, mainly due to unfavorable international conditions. Export volumes stagnated until the 1970s, while purchasing power fluctuated strongly following the terms of trade.

Inward-looking Growth and Structural Change

The Uruguayan economy grew inwards until the 1950s. The multiple exchange rate system was the main economic policy tool. Agrarian production was re-oriented towards wool, crops, dairy products and other industrial inputs, away from beef. The manufacturing industry grew rapidly and diversified significantly, with the help of protectionist tariffs. It was light, and lacked capital goods or technology-intensive sectors. Productivity growth hinged upon technology transfers embodied in imported capital goods and an intensive domestic adaptation process of mature technologies. Domestic demand also grew, through an expanding public sector and the expansion of a corporate welfare state. The terms of trade substantially shaped protectionism, productivity growth and domestic demand. The government raised money by manipulating exchange rates, so that when export prices rose the state had a greater capacity to protect the manufacturing sector through low exchange rates for capital goods, raw material and fuel imports, and to spur productivity increases through imports of capital, while protection allowed industry to pay higher wages and thus expand domestic demand.
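
To see how a multiple exchange rate system raises revenue and channels protection, consider a hypothetical two-rate sketch of the mechanism described above (the rates below are illustrative assumptions, not actual Uruguayan figures):

$$\text{implicit tax per export dollar} \;=\; e_m - e_x,$$

where exporters must surrender foreign exchange to the state at a low buying rate $e_x$ (pesos per dollar), disfavored imports are charged a higher selling rate $e_m$, and favored imports of capital goods, raw materials and fuel are sold dollars at a preferential rate near $e_x$. If, say, $e_x = 1.5$ and $e_m = 2.5$, each export dollar yields one peso of implicit revenue, so an export price boom automatically enlarges both the subsidy to favored importers and the state's resources.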

However, rent-seeking industries in search of protection and a weak clientelist state, crowded with civil servants recruited in exchange for political favors to the parties, directed structural change towards a closed economy and inefficient management. The obvious limits to inward-looking growth in a country of only about two million inhabitants were exacerbated in the late 1950s as the terms of trade deteriorated. The clientelist political system, which both traditional parties had created while the state was expanding at the national and local level, was now unable to absorb the increasing social conflicts, colored by stringent ideological confrontation, in a context of stagnation and huge fiscal deficits.

Re-globalization and Regional Integration

The dictatorship (1973-1985) started a period of increasing openness to trade and deregulation which has persisted until the present. Dynamic integration into the world market is still incomplete, however. An attempt to return to cattle-breeding exports as the engine of growth was hindered by the oil crises and the ensuing European response, which restricted meat exports to that destination. The export sector was re-oriented towards "non-traditional exports": industrial goods made of traditional raw materials, to which low-quality and low-wage labor was added. Exports were also stimulated by strong fiscal exemptions and negative real interest rates and were re-oriented to the regional market (Argentina and Brazil) and to other developing regions. At the end of the 1970s, this policy was replaced by the monetarist approach to the balance of payments. The main goal was to defeat inflation (which had remained above 50% since the 1960s) through deregulation of foreign trade and a pre-announced exchange rate, the "tablita." A strong wave of capital inflows led to a transitory success, but the Uruguayan peso became more and more overvalued, thus limiting exports, encouraging imports and deepening the chronic balance of trade deficit. The "tablita" remained dependent on increasing capital inflows and collapsed, predictably, when the risk of a huge devaluation became real. Recession and the debt crisis dominated the scene of the early 1980s.

Democratic regimes since 1985 have combined natural resource-intensive exports to the region and other emerging markets with modest intra-industry trade, mainly with Argentina. In the 1990s, once again, Uruguay was overexposed to financial capital inflows, which fueled a rather volatile growth period. By the year 2000, however, Uruguay's position relative to the leaders of the world economy, as measured by per capita GDP, real wages, equity and education coverage, was much worse than it had been fifty years earlier.

Medium-run Prospects

In the 1990s Mercosur as a whole and each of its member countries exhibited a strong trade deficit with non-Mercosur countries. This was the result of a growth pattern fueled by, and highly dependent on, foreign capital inflows, combined with the traditional specialization in commodities. The whole Mercosur project is still mainly oriented toward price competitiveness. Nevertheless, the strongly divergent macroeconomic policies within Mercosur during the deep Argentine and Uruguayan crises at the beginning of the twenty-first century seem to have given way to increased coordination between Argentina and Brazil, making the region a more stable environment.

The big question is whether the ongoing political revival of Mercosur will be able to achieve convergent macroeconomic policies, success in international trade negotiations and, above all, the development of productive networks that allow Mercosur to compete outside its home market with knowledge-intensive goods and services. On this hangs Uruguay's chance to break away from its long-run divergent siesta.

References

Astorga, Pablo, Ame R. Bergés and Valpy FitzGerald. “The Standard of Living in Latin America during the Twentieth Century.” University of Oxford Discussion Papers in Economic and Social History 54 (2004).

Barrán, José P. and Benjamín Nahum. “Uruguayan Rural History.” Hispanic American Historical Review 64 (1984).

Bértola, Luis. The Manufacturing Industry of Uruguay, 1913-1961: A Sectoral Approach to Growth, Fluctuations and Crisis. Publications of the Department of Economic History, University of Göteborg, 61; Institute of Latin American Studies of Stockholm University, Monograph No. 20, 1990.

Bértola, Luis and Reto Bertoni. “Educación y aprendizaje en escenarios de convergencia y divergencia.” Documento de Trabajo, no. 46, Unidad Multidisciplinaria, Facultad de Ciencias Sociales, Universidad de la República, 1998.

Bértola, Luis and Fernando Lorenzo. “Witches in the South: Kuznets-like Swings in Argentina, Brazil and Uruguay since the 1870s.” In The Experience of Economic Growth, edited by J.L. van Zanden and S. Heikenen. Amsterdam: Aksant, 2004.

Bértola, Luis and Gabriel Porcile. “Argentina, Brasil, Uruguay y la Economía Mundial: una aproximación a diferentes regímenes de convergencia y divergencia.” In Ensayos de Historia Económica: Uruguay en la región y el mundo, by Luis Bértola. Montevideo, 2000.

Bértola, Luis and Jeffrey Williamson. “Globalization in Latin America before 1940.” National Bureau of Economic Research Working Paper, no. 9687 (2003).

Bértola, Luis and others. El PBI uruguayo 1870-1936 y otras estimaciones. Montevideo, 1998.

Maddison, A. Monitoring the World Economy, 1820-1992. Paris: OECD, 1995.

Maddison, A. The World Economy: A Millennial Perspective. Paris: OECD, 2001.

Citation: Bertola, Luis. “An Overview of the Economic History of Uruguay since the 1870s”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/article/Bertola.Uruguay.final

Sweden – Economic Growth and Structural Change, 1800-2000

Lennart Schön, Lund University

This article presents an overview of Swedish economic growth performance internationally and statistically and an account of major trends in Swedish economic development during the nineteenth and twentieth centuries.1

Modern economic growth in Sweden took off in the middle of the nineteenth century and in international comparative terms Sweden has been rather successful during the past 150 years. This is largely thanks to the transformation of the economy and society from agrarian to industrial. Sweden is a small economy that has been open to foreign influences and highly dependent upon the world economy. Thus, successive structural changes have put their imprint upon modern economic growth.

Swedish Growth in International Perspective

The century-long period from the 1870s to the 1970s comprises the most successful part of Swedish industrialization and growth. On a per capita basis only the Japanese economy performed equally well (see Table 1). The neighboring Scandinavian countries also grew rapidly, but at a somewhat slower rate than Sweden. Sweden clearly outpaced growth in the rest of industrial Europe and in the U.S. Growth in the entire world economy, as measured by Maddison, was slower still.

Table 1 Annual Economic Growth Rates per Capita in Industrial Nations and the World Economy, 1871-2005

Year Sweden Rest of Nordic Countries Rest of Western Europe United States Japan World Economy
1871/1875-1971/1975 2.4 2.0 1.7 1.8 2.4 1.5
1971/1975-2001/2005 1.7 2.2 1.9 2.0 2.2 1.6

Note: Rest of Nordic countries = Denmark, Finland and Norway. Rest of Western Europe = Austria, Belgium, Britain, France, Germany, Italy, the Netherlands, and Switzerland.

Source: Maddison (2006); Krantz/Schön (forthcoming 2007); World Bank, World Development Indicator 2000; Groningen Growth and Development Centre, www.ggdc.com.

The Swedish advance in a global perspective is illustrated in Figure 1. In the mid-nineteenth century the Swedish average income level was close to the average global level (as measured by Maddison). In a European perspective Sweden was a rather poor country. By the 1970s, however, the Swedish income level was more than three times higher than the global average and among the highest in Europe.

Figure 1: Swedish GDP per Capita in Relation to World GDP per Capita, 1870-2004 (nine-year moving averages)
Sources: Maddison (2006); Krantz/Schön (forthcoming 2007).

Note: The annual variation in world production between Maddison's benchmarks (1870, 1913 and 1950) is estimated from his supply of annual country series.

To some extent this was a catch-up story. Sweden was able to take advantage of technological and organizational advances made in Western Europe and North America. Furthermore, resource-rich Scandinavian countries such as Sweden and Finland had been rather disadvantaged as long as agriculture was the main source of income. The shift to industry expanded the resource base, and industrial development, directed both to a growing domestic market and even more to a widening world market, became the main lever of growth from the late nineteenth century.

Catch-up is not the whole story, though. In many industrial areas Swedish companies took a position at the technological frontier from an early point in time. Thus, in certain sectors there was also forging ahead,2 quickening the pace of structural change in the industrializing economy. Furthermore, during a century of fairly rapid growth new conditions have arisen that have required profound adaptation and a renewal of entrepreneurial activity as well as of economic policies.

The slowdown in Swedish growth from the 1970s may be considered in this perspective. While in most other countries growth from the 1970s fell only in relation to growth rates in the golden post-war age, Swedish growth fell clearly below the historical long-run growth trend. It also fell to a very low level internationally. The 1970s certainly meant the end of a number of successful growth trajectories of the industrial society. At the same time new growth forces appeared with the electronic revolution, as well as with the advance of a more service-based economy. This structural change may have hit the Swedish economy harder than most other economies, at least among the industrial capitalist economies. Sweden was forced into a transformation of its industrial economy and of its political economy in the 1970s and the 1980s that was more profound than in most other Western economies.

A Statistical Overview, 1800-2000

Swedish economic development since 1800 may be divided into six periods with different growth trends, as well as different compositions of growth forces.

Table 2 Annual Growth Rates in per Capita Production, Total Investments, Foreign Trade and Population in Sweden, 1800-2000

Period Per capita GDP Investments Foreign Trade Population
1800-1840 0.6 0.3 0.7 0.8
1840-1870 1.2 3.0 4.6 1.0
1870-1910 1.7 3.0 3.3 0.6
1910-1950 2.2 4.2 2.0 0.5
1950-1975 3.6 5.5 6.5 0.6
1975-2000 1.4 2.1 4.3 0.4
1800-2000 1.9 3.4 3.8 0.7

Source: Krantz/Schön (forthcoming 2007).
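
One convenient way to read the per capita column in Table 2 is through doubling times. With growth compounding continuously at rate $g$, output doubles in roughly $\ln 2 / g$ years (the familiar "rule of 70"); the arithmetic below is our illustration, not a calculation from the sources:

$$T_2 \approx \frac{\ln 2}{g}: \qquad g = 0.036 \;\Rightarrow\; T_2 \approx 19 \text{ years}; \qquad g = 0.006 \;\Rightarrow\; T_2 \approx 116 \text{ years}.$$

At the 1950-1975 rate, then, per capita output doubled in under two decades, whereas at the 1800-1840 rate a doubling would have taken more than a century.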

In the first decades of the nineteenth century the agricultural sector dominated and growth was slow in all aspects but population. Still there was per capita growth, though to some extent this was a recovery from the low levels during the Napoleonic Wars. The acceleration during the next period, around the mid-nineteenth century, is marked in all aspects. Investments and foreign trade became very dynamic ingredients with the onset of industrialization, and they were to remain so during the following periods as well. Up to the 1970s per capita growth rates increased in each successive period. In an international perspective it is most notable that per capita growth rates increased even in the interwar period, despite the slowdown in foreign trade. The interwar period is crucial for the long-run relative success of Swedish economic growth. The decisive culmination in the post-war period, with high growth rates in investments and in foreign trade, stands out as well, as does the deceleration in all aspects in the late twentieth century.

An analysis in a traditional growth accounting framework gives a long-term pattern with certain periodic similarities (see Table 3). Total factor productivity growth increased over time up to the 1970s, only to decrease to its long-run level in the last decades. This deceleration in productivity growth may be looked upon either as a failure of the "Swedish Model" to accommodate new growth forces or as another case of the "productivity paradox" accompanying the information technology revolution.3

Table 3 Total Factor Productivity (TFP) Growth and Relative Contribution of Capital, Labor and TFP to GDP Growth in Sweden, 1840-2000

Period TFP Growth Capital Labor TFP
1840-1870 0.4 55 27 18
1870-1910 0.7 50 18 32
1910-1950 1.0 39 24 37
1950-1975 2.1 45 7 48
1975-2000 1.0 44 1 55
1840-2000 1.1 45 16 39

Source: See Table 2.
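
The "traditional growth accounting framework" behind Table 3 is the Solow-style decomposition; a minimal sketch, with factor shares $\alpha$ and $1-\alpha$ standing in for whatever weights the underlying estimates actually use:

$$\Delta \ln Y \;=\; \alpha\,\Delta\ln K \;+\; (1-\alpha)\,\Delta\ln L \;+\; \Delta\ln A,$$

where $\Delta\ln A$, TFP growth, is the residual output growth not attributable to capital and labor inputs. The table's contribution columns report each term as a share of $\Delta\ln Y$: for 1950-1975, TFP growth of 2.1 percent a year against overall GDP growth of 4.3 percent (Table 4) yields roughly the 48 percent share shown.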

In terms of contribution to overall growth, TFP has increased its share in every period. The TFP share was low in the 1840s, but there was a very marked increase with the onset of modern industrialization from the 1870s. In relative terms TFP reached its highest level so far from the 1970s, indicating an increasing role of human capital, technology and knowledge in economic growth. The role of capital accumulation was markedly more pronounced in early industrialization, with the build-up of a modern infrastructure and with urbanization, but capital still retained much of its importance during the twentieth century. Its contribution to growth during the post-war Golden Ages was significant, with very high levels of material investments. At the same time TFP growth culminated with positive structural shifts, as well as increased knowledge intensity complementary to the investments. Labor has in quantitative terms progressively reduced its role in economic growth. One should observe, however, the relatively large importance of labor in Swedish economic growth during the interwar period. This was largely due to demographic factors and to the employment situation, which are commented upon further below.

In the first decades of the nineteenth century, growth was still led by the primary production of agriculture, accompanied by services and transport. Secondary production in manufacturing and building was, on the contrary, very stagnant. From the 1840s the industrial sector accelerated, increasingly supported by transport and communications, as well as by private services. The sectoral shift from agriculture to industry became more pronounced at the turn of the twentieth century when industry and transportation boomed, while agricultural growth decelerated into subsequent stagnation. In the post-war period the volume of services, both private and public, increased strongly, although still not outpacing industry. From the 1970s the focus shifted to private services and to transport and communications, indicating fundamental new prerequisites of growth.

Table 4 Growth Rates of Industrial Sectors, 1800-2000

Period Agriculture Industry and Handicrafts Transport and Communications Building Private Services Public Services GDP
1800-1840 1.5 0.3 1.1 -0.1 1.4 1.5 1.3
1840-1870 2.1 3.7 1.8 2.4 2.7 0.8 2.3
1870-1910 1.0 5.0 3.9 1.3 2.7 1.0 2.3
1910-1950 0.0 3.5 4.9 1.4 2.2 2.2 2.7
1950-1975 0.4 5.1 4.4 3.8 4.3 4.0 4.3
1975-2000 -0.4 1.9 2.6 -0.8 2.2 0.2 1.8
1800-2000 0.9 3.8 3.7 1.8 2.7 1.7 2.6

Source: See Table 2.

Note: Private services are exclusive of dwelling services.

Growth and Transformation in the Agricultural Society of the Early Nineteenth Century

During the first half of the nineteenth century the agricultural sector and rural society dominated the Swedish economy: more than three-quarters of the population were occupied in agriculture, while roughly 90 percent lived in the countryside. Many non-agrarian activities, such as the iron industry, the sawmill industry and many crafts, as well as domestic, religious and military services, were performed in rural areas. Although growth was slow, a number of structural and institutional changes occurred that paved the way for future modernization.

Most important was the transformation of agriculture. From the late eighteenth century the commercialization of the primary sector intensified. Particularly during the Napoleonic Wars, the domestic market for foodstuffs widened. Population increase, in combination with a temporary decrease in imports, stimulated enclosures and reclamation of land, the introduction of new crops and new methods, and above all a greater degree of market orientation. In the decades after the war, the traditional Swedish trade deficit in grain even shifted to a trade surplus, with increasing exports of oats, primarily to Britain.

Concomitant with the agricultural transformation were a number of infrastructural and institutional changes. Domestic transportation costs were reduced through investments in canals and roads. Trade in agricultural goods was liberalized, reducing transaction costs and integrating the domestic market even further. Trading companies became more effective in attracting agricultural surpluses for more distant markets. In support of the agricultural sector, new means of information were introduced, for example by agricultural societies that published periodicals on innovative methods and on market trends. Mortgage societies were established to supply agriculture with long-term capital for investments, which in turn intensified the commercialization of production.

All these elements meant a profound institutional change, in the sense that the price mechanism became much more effective in directing human behavior. Furthermore, they fostered a greater interest in information and in its main instrument, literacy. Traditionally, popular literacy had been upheld by the church and was mainly devoted to knowledge of the primary Lutheran texts. In the new economic environment, literacy was secularized and transformed into a more functional literacy, marked by the advent of schools for public education in the 1840s.

The Breakthrough of Modern Economic Growth in the Mid-nineteenth Century

In the decades around the middle of the nineteenth century new dynamic forces appeared that accelerated growth. Most notably, foreign trade expanded by leaps and bounds in the 1850s and 1860s. With new export sectors, industrial investments increased. Furthermore, railways became the most prominent component of a new infrastructure, and with their construction a new component of Swedish growth was introduced: heavy capital imports.

The upswing in industrial growth in Western Europe during the 1850s, in combination with demand induced by the Crimean War, led to a particularly strong expansion in Swedish exports, with sharp price increases for three staple goods: bar iron, wood and oats. Charcoal-based Swedish bar iron had been the traditional export good and had completely dominated Swedish exports until the mid-nineteenth century. Bar iron met increasingly strong competition, however, from British and continental iron and steel industries, and Swedish exports had stagnated in the first half of the nineteenth century. The upswing in international demand, following the diffusion of industrialization and railway construction, gave an impetus to the modernization of Swedish steel production in the following decades.

The sawmill industry was a genuinely new export industry that grew dramatically in the 1850s and 1860s. Up until this time, the vast forests of Sweden had been regarded mainly as a fuel resource for the iron industry, for household heating and for local residential construction. With sharp price increases on the Western European market from the 1840s and 1850s, the resources of the sparsely populated northern part of Sweden suddenly became valuable. A formidable explosion of sawmill construction at the mouths of the rivers along the northern coastline followed. Within a few decades Swedish merchants, as well as Norwegian, German, British and Dutch merchants, became sawmill owners running large-scale capitalist enterprises at the fringe of European civilization.

Less dramatic but equally important was the sudden expansion of Swedish oat exports. The market for oats appeared mainly in Britain, where short-distance transportation in rapidly growing urban centers increased the fleet of horses. Swedish oats thus became an important energy resource in the decades around the mid-nineteenth century. This had a special significance for Sweden, since oats could be cultivated on rather barren and marginal soils, with which Sweden was richly endowed. The market for oats, with strongly increasing prices, therefore further stimulated the commercialization of agriculture and the diffusion of new methods. The effect was reinforced because oats grown for the market substituted for local flax production (which also thrived on barren soils), while domestic linen was increasingly supplanted by factory-produced cotton goods.

The Swedish economy was able to respond to the impetus from Western Europe during these decades, to diffuse the new influences through the economy and to integrate them very successfully into its development. The barriers to change seem to have been weak. This is partly explained by the prior transformation of agriculture and the evolution of market institutions in the rural economy. People reacted to the price mechanism. New social classes of commercial peasants, capitalists and wage laborers had emerged in an era of domestic market expansion, increased regional specialization and population growth.

The composition of export goods also contributed to the wide diffusion of participation and of export income. Iron, wood and oats spread the gains both regionally and socially. The value of previously marginal resources, such as soils in the south and forests in the north, was inflated. The technology was simple and labor-intensive in industry, forestry, agriculture and transportation. The demand for unskilled labor increased strongly, which was to put an imprint upon Swedish wage development in the second half of the nineteenth century. Commercial houses and industrial companies made profits, but export income was distributed to many segments of the population.

The integration of the Swedish economy was further reinforced through initiatives taken by the State. The parliamentary decision in the 1850s to construct the railway trunk lines meant, first, a more direct involvement by the State in the development of a modern infrastructure and, second, new principles of finance, since the State had to rely upon capital imports. At the same time markets for goods, labor and capital were liberalized, and integration both within Sweden and with the world market deepened. The Swedish adoption of the Gold Standard in 1873 put a final stamp on this institutional development.

A Second Industrial Revolution around 1900

In the late nineteenth century, particularly in the 1880s, international competition became fiercer for agriculture and the early industrial branches. The integration of world markets led to falling prices and stagnating demand for Swedish staple goods such as iron, sawn wood and oats. Profits were squeezed and expansion thwarted. On the other hand, new markets arose. Increasing wages intensified mechanization both in agriculture and in industry, and demand increased for more sophisticated machinery. At the same time consumer demand shifted towards better foodstuffs, such as milk, butter and meat, and towards more fabricated industrial goods.

The decades around the turn of the twentieth century meant a profound structural change in the composition of Swedish industrial expansion that was crucial for long term growth. New and more sophisticated enterprises were founded and expanded particularly from the 1890s, in the upswing after the Baring Crisis.

The new enterprises were closely related to the so-called Second Industrial Revolution, in which scientific knowledge and more complex engineering skills were main components. The electrical motor became especially important in Sweden. A new development block was created around this innovation, combining the engineering skills of companies such as ASEA (later ABB) with a large demand from energy-intensive processes and the large supply of hydropower in Sweden.4 Financing the rapid development of this large block engaged commercial banks, knitting closer ties between financial capital and industry. The State, once again, engaged itself in infrastructural development in support of electrification, still resorting to heavy capital imports.

A number of innovative industries were founded in this period, all related to increased demand for mechanization and engineering skills. Companies such as AGA, ASEA, Ericsson, Separator (AlfaLaval) and SKF have been labeled "enterprises of genius," and all are associated with renowned inventors and innovators. This was, of course, not an entirely Swedish phenomenon. These branches developed simultaneously on the Continent, particularly in nearby Germany, and in the U.S. Knowledge and innovative stimulus were diffused among these economies. The question is rather why this new development became so strong in Sweden that new industries were able, within a relatively short period of time, to supplant old resource-based industries as the main driving forces of industrialization.

Traditions of engineering skill were certainly important, developed in old heavy industrial branches such as iron and steel and stimulated further by State initiatives such as railway construction or, more directly, the founding of the Royal Institute of Technology. But beyond that, economic development in the second half of the nineteenth century fundamentally changed relative factor prices and the profitability of allocating resources to different lines of production.

The relative increase in the wages of unskilled labor had been stimulated by the composition of Sweden's early exports. It was much reinforced by two components of the subsequent development: emigration and capital imports.

Within approximately the same period, 1850-1910, the Swedish economy received a huge amount of capital, mainly from Germany and France, while delivering an equally huge amount of labor, primarily to the U.S. Thus, Swedish relative factor prices changed dramatically. Swedish interest rates remained at rather high levels compared to leading European countries until 1910, due to a continuously large demand for capital in Sweden, but relative wages rose persistently (see Table 5). As in the rest of Scandinavia, wage increases were much stronger than GDP growth in Sweden, indicating a shift in income distribution in favor of labor, particularly unskilled labor, during this period of increased world market integration.

Table 5 Annual Increase in Real Wages of Unskilled Labor and Annual GDP Growth per Capita, 1870-1910

Country Annual real wage increase, 1870-1910 Annual GDP growth per capita, 1870-1910
Sweden 2.8 1.7
Denmark and Norway 2.6 1.3
France, Germany and Great Britain 1.1 1.2
United States 1.1 1.6

Sources: Wages from Williamson (1995); GDP growth see Table 1.

Relative profitability fell in traditional industries, which exploited rich natural resources and cheap labor, while more sophisticated industries were favored. But the causality runs both ways. Had this structural shift with the growth of new and more profitable industries not occurred, the Swedish economy would not have been able to sustain the wage increase.5

Accelerated Growth in the War-stricken Period, 1910-1950

The most notable feature of long-term Swedish growth is the acceleration in growth rates during the period 1910-1950, which in Europe at large was full of problems and catastrophes.6 Swedish per capita production grew at 2.2 percent annually, while growth in the rest of Scandinavia was somewhat below 2 percent and growth in the rest of Europe hovered at 1 percent. The Swedish acceleration rested mainly on three pillars.

First, the structure created at the end of the nineteenth century was very viable, with considerable long-term growth potential. It consisted of new industries and new infrastructures that engaged industrialists and financial capitalists, as well as public sector support. It also included industries that met relatively strong demand in wartime, as well as in the interwar period, both domestically and abroad.

Second, the First World War meant an immense financial bonus to the Swedish market. A huge export surplus at inflated prices during the war led to the domestication of the Swedish national debt. This in turn further capitalized the Swedish financial market, lowering interest rates and facilitating subsequent innovative activity in industry. A domestic money market arose that provided the State with new instruments for economic policy, instruments that were to become important for the implementation of the new social democratic "Keynesian" policies of the 1930s.

Third, demographic developments favored the Swedish economy in this period. The share of the economically active age group, 15-64, grew substantially. This was due partly to the fact that prior emigration had reduced the size of the cohorts that would now have become old-age pensioners. Comparatively low mortality among young people during the 1910s, as well as the end of mass emigration, further enhanced the share of the active population. Both the labor market and domestic demand were stimulated, in particular during the 1930s, when the household-forming age group of 25-30 years increased.

The augmented labor supply would have increased unemployment had it not been combined with the richer supply of capital and innovative industrial development that met elastic demand both domestically and in Europe.

Thus, a richer supply of both capital and labor stimulated the domestic market in a period when international market integration deteriorated. Above all it stimulated the development of mass production of consumption goods based upon the innovations of the Second Industrial Revolution. Significant new enterprises that emanated from this period, such as Volvo, SAAB, Electrolux, Tetra Pak and IKEA, were very much related to the new logic of the industrial society.

The Golden Age of Growth, 1950-1975

The Swedish economy was clearly part of the European Golden Age of growth, although the Swedish acceleration from the 1950s was less pronounced than in the rest of Western Europe, which to a much larger extent had been plagued by wars and crises.7 The Swedish post-war period was characterized primarily by two phenomena: the full fruition of development blocks based upon the great innovations of the late nineteenth century (the electrical motor and the combustion engine), and the cementation of the "Swedish Model" of the welfare state. These two phenomena were highly complementary.

The Swedish Model had basically two components. One was a greater public responsibility for social security and for the creation and preservation of human capital. This led to a rapid increase in the supply of public services in the realms of education, health and children's day care, as well as to increases in social security programs and in public savings for transfer programs to pensioners. The consequence was high taxation. The other component was a regulation of labor and capital markets. This was the most ingenious part of the model, constructed to sustain growth in the industrial society and to increase equality, in combination with the social security program and taxation.

The labor market program was the result of negotiations between trade unions and the employers' organization. It was labeled "solidaristic wage policy" and had two elements. One was to achieve equal wages for equal work, regardless of individual companies' ability to pay. The other was to raise the wage level in low-paid areas and thus to compress the wage distribution. The aim of the program was actually to increase the speed of structural rationalization in industry and to eliminate less productive companies and branches: labor would be transferred to the most productive, export-oriented sectors, while income would at the same time be distributed more equally. A drawback of the solidaristic wage policy from an egalitarian point of view was that profits soared in the productive sectors, since wage increases were held back. However, capital market regulations hindered high profits from being converted into very high incomes for shareholders. Profits were taxed very lightly if they were converted into further investments within the company (the timing of the use of these funds was controlled by the State as part of its stabilization policy) but heavily if distributed to shareholders. The result was that investments within existing profitable companies were supported and actually subsidized, while the mobility of capital dwindled and activity on the stock market fell.

As long as the export sectors grew, the program worked well.8 Companies founded in the late nineteenth century and in the interwar period developed into successful multinationals in engineering, with machinery, auto industries and shipbuilding, as well as in the resource-based steel and paper industries. The expansion of the export sector was the main force behind the high growth rates and the productivity increases, but the sector was strongly supported by public or publicly subsidized investments in infrastructure and residential construction.

Hence, during the Golden Age of growth the development blocks around electrification and motorization matured in a broad modernization of society, in which mass consumption and mass production were supported by social programs, by investment programs and by labor market policy.

Crisis and Restructuring from the 1970s

In the 1970s and early 1980s a number of industries – such as steel works, pulp and paper, shipbuilding, and mechanical engineering – ran into crisis. New global competition, changing consumer behavior and profound innovative renewal, especially in microelectronics, made some of the industrial pillars of the Swedish Model crumble. At the same time the disadvantages of the old model became more apparent. It put obstacles in the way of flexibility and entrepreneurial initiative, and it reduced individual incentives for mobility. Thus, while the Swedish Model did foster the rationalization of existing industries well adapted to the post-war period, it did not support a more profound transformation of the economy.

One should not exaggerate the obstacles to transformation, though. The Swedish economy was still very open in the market for goods and many services, and the pressure to transform increased rapidly. During the 1980s a far-reaching structural change within industry as well as in economic policy took place, engaging both private and public actors. Shipbuilding was almost completely discontinued, pulp industries were integrated into modernized paper works, the steel industry was concentrated and specialized, and mechanical engineering was digitalized. New and more knowledge-intensive growth industries appeared in the 1980s, such as IT-based telecommunication, pharmaceutical industries, and biotechnology, as well as new service industries.

During the 1980s some of the constituent components of the Swedish Model were weakened or eliminated. Centralized negotiations and solidaristic wage policy disappeared. Regulations in the capital market were dismantled under the pressure of increasing international capital flows, simultaneously with a forceful revival of the stock market. The expansion of public sector services came to an end and the taxation system was reformed with a reduction of marginal tax rates. Thus, Swedish economic policy and the welfare system became more closely aligned with the European mainstream, which facilitated Sweden’s application for membership and eventual entry into the European Union in 1995.

It is also clear that the period from the 1970s to the early twenty-first century comprises two growth trends, before and after 1990 respectively. During the 1970s and 1980s, growth in Sweden was very slow and marked by the great structural problems that the Swedish economy had to cope with. The slow growth prior to 1990 does not signify stagnation in a real sense, but rather the transformation of industrial structures and the reformulation of economic policy, which did not immediately result in an acceleration of growth but rather in imbalances and bottlenecks that took years to eliminate. From the 1990s up to 2005 Swedish growth accelerated quite forcefully in comparison with most Western economies.9 Thus, the 1980s may be considered a Swedish case of “the productivity paradox,” with innovative renewal but with a delayed acceleration of productivity and growth from the 1990s – although a delayed productivity effect of more profound transformation and radical innovative behavior is not paradoxical.

Table 6 Annual Growth Rates per Capita, 1971-2005

Period Sweden Rest of Nordic Countries Rest of Western Europe United States World Economy
1971/1975-1991/1995 1.2 2.1 1.8 1.6 1.4
1991/1995-2001/2005 2.4 2.5 1.7 2.1 2.1

Sources: See Table 1.

The recent acceleration in growth may also indicate that some of the basic traits from early industrialization still pertain to the Swedish economy – an international outlook in a small open economy fosters transformation and the adaptation of human skills to new circumstances, a major force behind long-term growth.

References

Abramovitz, Moses. “Catching Up, Forging Ahead and Falling Behind.” Journal of Economic History 46, no. 2 (1986): 385-406.

Dahmén, Erik. “Development Blocks in Industrial Economics.” Scandinavian Economic History Review 36 (1988): 3-14.

David, Paul A. “The Dynamo and the Computer: An Historical Perspective on the Modern Productivity Paradox.” American Economic Review 80, no. 2 (1990): 355-61.

Eichengreen, Barry. “Institutions and Economic Growth: Europe after World War II.” In Economic Growth in Europe since 1945, edited by Nicholas Crafts and Gianni Toniolo. New York: Cambridge University Press, 1996.

Krantz, Olle and Lennart Schön. Swedish Historical National Accounts, 1800-2000. Lund: Almqvist and Wiksell International, forthcoming 2007.

Maddison, Angus. The World Economy, Volumes 1 and 2. Paris: OECD, 2006.

Schön, Lennart. “Development Blocks and Transformation Pressure in a Macro-Economic Perspective: A Model of Long-Cyclical Change.” Skandinaviska Enskilda Banken Quarterly Review 20, no. 3-4 (1991): 67-76.

Schön, Lennart. “External and Internal Factors in Swedish Industrialization.” Scandinavian Economic History Review 45, no. 3 (1997): 209-223.

Schön, Lennart. En modern svensk ekonomisk historia: Tillväxt och omvandling under två sekel (A Modern Swedish Economic History: Growth and Transformation in Two Centuries). Stockholm: SNS, 2000.

Schön, Lennart. “Total Factor Productivity in Swedish Manufacturing in the Period 1870-2000.” In Exploring Economic Growth: Essays in Measurement and Analysis: A Festschrift for Riitta Hjerppe on Her Sixtieth Birthday, edited by S. Heikkinen and J.L. van Zanden. Amsterdam: Aksant, 2004.

Schön, Lennart. “Swedish Industrialization 1870-1930 and the Heckscher-Ohlin Theory.” In Eli Heckscher, International Trade, and Economic History, edited by Ronald Findlay et al. Cambridge, MA: MIT Press, 2006.

Svennilson, Ingvar. Growth and Stagnation in the European Economy. Geneva: United Nations Economic Commission for Europe, 1954.

Temin, Peter. “The Golden Age of European Growth Reconsidered.” European Review of Economic History 6, no. 1 (2002): 3-22.

Williamson, Jeffrey G. “The Evolution of Global Labor Markets since 1830: Background Evidence and Hypotheses.” Explorations in Economic History 32, no. 2 (1995): 141-96.

Citation: Schön, Lennart. “Sweden – Economic Growth and Structural Change, 1800-2000.” EH.Net Encyclopedia, edited by Robert Whaples. February 10, 2008. URL http://eh.net/encyclopedia/sweden-economic-growth-and-structural-change-1800-2000/

The 1929 Stock Market Crash

Harold Bierman, Jr., Cornell University

Overview

The 1929 stock market crash is conventionally said to have occurred on Thursday the 24th and Tuesday the 29th of October. These two dates have been dubbed “Black Thursday” and “Black Tuesday,” respectively. On September 3, 1929, the Dow Jones Industrial Average reached a record high of 381.2. At the end of the market day on Thursday, October 24, the market was at 299.5 — a 21 percent decline from the high. On this day the market fell 33 points — a drop of 9 percent — on trading that was approximately three times the normal daily volume for the first nine months of the year. By all accounts, there was a selling panic. By November 13, 1929, the market had fallen to 199. By the time the crash was completed in 1932, following an unprecedentedly large economic depression, stocks had lost nearly 90 percent of their value.

The events of Black Thursday are normally defined to be the start of the stock market crash of 1929-1932, but the series of events leading to the crash started before that date. This article examines the causes of the 1929 stock market crash. While no consensus exists about its precise causes, the article will critique some arguments and support a preferred set of conclusions. It argues that one of the primary causes was the attempt by important people and the media to stop market speculators. A second probable cause was the great expansion of investment trusts, public utility holding companies, and the amount of margin buying, all of which fueled the purchase of public utility stocks, and drove up their prices. Public utilities, utility holding companies, and investment trusts were all highly levered using large amounts of debt and preferred stock. These factors seem to have set the stage for the triggering event. This sector was vulnerable to the arrival of bad news regarding utility regulation. In October 1929, the bad news arrived and utility stocks fell dramatically. After the utilities decreased in price, margin buyers had to sell and there was then panic selling of all stocks.

The Conventional View

The crash helped bring on the depression of the thirties and the depression helped to extend the period of low stock prices, thus “proving” to many that the prices had been too high.

Laying the blame for the “boom” on speculators was common in 1929. Thus, immediately upon learning of the crash of October 24, John Maynard Keynes (Moggridge, 1981, p. 2 of Vol. XX) wrote in the New York Evening Post (25 October 1929) that “The extraordinary speculation on Wall Street in past months has driven up the rate of interest to an unprecedented level.” And when stock prices reached their low for the year, the Economist repeated the theme that the U.S. stock market had been too high (November 2, 1929, p. 806): “there is warrant for hoping that the deflation of the exaggerated balloon of American stock values will be for the good of the world.” The key phrases in these quotations are “exaggerated balloon of American stock values” and “extraordinary speculation on Wall Street.” Likewise, President Herbert Hoover saw increasing stock market prices leading up to the crash as a speculative bubble manufactured by the mistakes of the Federal Reserve Board. “One of these clouds was an American wave of optimism, born of continued progress over the decade, which the Federal Reserve Board transformed into the stock-exchange Mississippi Bubble” (Hoover, 1952). Thus, the common viewpoint was that stock prices were too high.

There is much to criticize in conventional interpretations of the 1929 stock market crash, however. (Even the name is inexact. The largest losses to the market did not come in October 1929 but rather in the following two years.) In December 1929, many expert economists, including Keynes and Irving Fisher, felt that the financial crisis had ended and by April 1930 the Standard and Poor 500 composite index was at 25.92, compared to a 1929 close of 21.45. There are good reasons for thinking that the stock market was not obviously overvalued in 1929 and that it was sensible to hold most stocks in the fall of 1929 and to buy stocks in December 1929 (admittedly this investment strategy would have been terribly unsuccessful).

Were Stocks Obviously Overpriced in October 1929?
Debatable — Economic Indicators Were Strong

From 1925 to the third quarter of 1929, common stocks increased in value by 120 percent in four years, a compound annual growth rate of 21.8 percent. While this is a large rate of appreciation, it is not obvious proof of an “orgy of speculation.” The decade of the 1920s was extremely prosperous, and the stock market with its rising prices reflected this prosperity as well as the expectation that the prosperity would continue.

The fact that the stock market lost 90 percent of its value from 1929 to 1932 indicates that the market, at least using one criterion (actual performance of the market), was overvalued in 1929. John Kenneth Galbraith (1961) implies that there was a speculative orgy and that the crash was predictable: “Early in 1928, the nature of the boom changed. The mass escape into make-believe, so much a part of the true speculative orgy, started in earnest.” Galbraith had no difficulty in 1961 identifying the end of the boom in 1929: “On the first of January of 1929, as a matter of probability, it was most likely that the boom would end before the year was out.”

Compare this position with the fact that Irving Fisher, one of the leading economists in the U.S. at the time, was heavily invested in stocks and was bullish before and after the October sell-offs; he lost his entire wealth (including his house) before stocks started to recover. In England, John Maynard Keynes, possibly the world’s leading economist during the first half of the twentieth century, and an acknowledged master of practical finance, also lost heavily. Paul Samuelson (1979) quotes P. Sergeant Florence (another leading economist): “Keynes may have made his own fortune and that of King’s College, but the investment trust of Keynes and Dennis Robertson managed to lose my fortune in 1929.”

Galbraith’s ability to ‘forecast’ the market turn is not shared by all. Samuelson (1979) admits that: “playing as I often do the experiment of studying price profiles with their dates concealed, I discovered that I would have been caught by the 1929 debacle.” For many, the collapse from 1929 to 1933 was neither foreseeable nor inevitable.

The stock price increases leading to October 1929 were not driven solely by fools or speculators. There were also intelligent, knowledgeable investors who were buying or holding stocks in September and October 1929. Also, leading economists, both then and now, could neither anticipate nor explain the October 1929 decline of the market. Thus, the conviction that stocks were obviously overpriced is somewhat of a myth.

The nation’s total real income rose from 1921 to 1923 by 10.5% per year, and from 1923 to 1929, it rose 3.4% per year. The 1920s were, in fact, a period of real growth and prosperity. For the period of 1923-1929, wholesale prices went down 0.9% per year, reflecting moderate stable growth in the money supply during a period of healthy real growth.

Examining the manufacturing situation in the United States prior to the crash is also informative. Irving Fisher’s Stock Market Crash and After (1930) offers much data indicating that there was real growth in the manufacturing sector. The evidence presented goes a long way to explain Fisher’s optimism regarding the level of stock prices. What Fisher saw was manufacturing efficiency rapidly increasing (output per worker) as was manufacturing output and the use of electricity.

The financial fundamentals of the markets were also strong. During 1928, the price-earnings ratio for 45 industrial stocks increased from approximately 12 to approximately 14. It was over 15 in 1929 for industrials and then decreased to approximately 10 by the end of 1929. While not low, these price-earnings (P/E) ratios were by no means out of line historically. Values in this range would be considered reasonable by most market analysts today. For example, the P/E ratio of the S & P 500 in July 2003 reached a high of 33 and in May 2004 the high was 23.

The rise in stock prices was not uniform across all industries. The stocks that went up the most were in industries where the economic fundamentals indicated there was cause for large amounts of optimism. They included airplanes, agricultural implements, chemicals, department stores, steel, utilities, telephone and telegraph, electrical equipment, oil, paper, and radio. These were reasonable choices for expectations of growth.

To put the P/E ratios of 10 to 15 in perspective, note that government bonds in 1929 yielded 3.4%. Industrial bonds of investment grade were yielding 5.1%. Consider that an interest rate of 5.1% represents a 1/(0.051) = 19.6 price-earnings ratio for debt.

In 1930, the Federal Reserve Bulletin reported production in 1920 at an index of 87.1. The index went down to 67 in 1921, then climbed steadily (except for 1924) until it reached 125 in 1929. This is an annual growth rate in production of 3.1%. During the period commodity prices actually decreased. The production record for the ten-year period was exceptionally good.

Factory payrolls in September were at an index of 111 (an all-time high). In October the index dropped to 110, which beat all previous months and years except for September 1929. The factory employment measures were consistent with the payroll index.

The September unadjusted measure of freight car loadings was at 121 — also an all-time record.2 In October the loadings dropped to 118, which was a performance second only to September’s record measure.

J.W. Kendrick (1961) shows that the period 1919-1929 had an unusually high rate of change in total factor productivity. The annual rate of change of 5.3% for 1919-1929 for the manufacturing sector was more than twice the 2.5% rate of the second best period (1948-1953). Farming productivity change for 1919-1929 was second only to the period 1929-1937. Overall, the period 1919-1929 easily took first place for productivity increases, handily beating the six other time periods studied by Kendrick (all the periods studied were prior to 1961) with an annual productivity change measure of 3.7%. This was outstanding economic performance — performance which normally would justify stock market optimism.

In the first nine months of 1929, 1,436 firms announced increased dividends. In 1928, the number was only 955 and in 1927, it was 755. In September 1929 dividend increases were announced by 193 firms compared with 135 the year before. The financial news from corporations was very positive in September and October 1929.

The May issue of the National City Bank of New York Newsletter indicated the earnings statements for the first quarter of surveyed firms showed a 31% increase compared to the first quarter of 1928. The August issue showed that for 650 firms the increase for the first six months of 1929 compared to 1928 was 24.4%. In September, the results were expanded to 916 firms with a 27.4% increase. The earnings for the third quarter for 638 firms were calculated to be 14.1% larger than for 1928. This is evidence that the general level of business activity and reported profits were excellent at the end of September 1929 and the middle of October 1929.

Barrie Wigmore (1985) researched 1929 financial data for 135 firms. The market price as a percentage of year-end book value was 420% using the high prices and 181% using the low prices. However, the return on equity for the firms (using the year-end book value) was a high 16.5%. The dividend yield was 2.96% using the high stock prices and 5.9% using the low stock prices.

Article after article from January to October in business magazines carried news of outstanding economic performance. E.K. Berger and A.M. Leinbach, two staff writers of the Magazine of Wall Street, wrote in June 1929: “Business so far this year has astonished even the perennial optimists.”

To summarize: There was little hint of a severe weakness in the real economy in the months prior to October 1929. There is a great deal of evidence that in 1929 stock prices were not out of line with the real economics of the firms that had issued the stock. Leading economists were betting that common stocks in the fall of 1929 were a good buy. Conventional financial reports of corporations gave cause for optimism relative to the 1929 earnings of corporations. Price-earnings ratios, dividend amounts and changes in dividends, and earnings and changes in earnings all gave cause for stock price optimism.

Table 1 shows the average of the highs and lows of the Dow Jones Industrial Index for 1922 to 1932.

Table 1
Dow-Jones Industrials Index Average
of Lows and Highs for the Year
1922 91.0
1923 95.6
1924 104.4
1925 137.2
1926 150.9
1927 177.6
1928 245.6
1929 290.0
1930 225.8
1931 134.1
1932 79.4

Sources: 1922-1929 measures are from the Stock Market Study, U.S. Senate, 1955, pp. 40, 49, 110, and 111; 1930-1932 Wigmore, 1985, pp. 637-639.

Using the information of Table 1, from 1922 to 1929 stocks rose in value by 218.7%. This is equivalent to an 18% annual growth rate in value for the seven years. From 1929 to 1932 stocks lost 73% of their value (different indices measured at different times would give different measures of the increase and decrease). The price increases were large, but not beyond comprehension. The price decreases taken to 1932 were consistent with the fact that by 1932 there was a worldwide depression.

If we take the 386 high of September 1929 and the 1929 year-end value of 248.5, the market lost 36% of its value during that four-month period. Most of us, if we held stock in September 1929, would not have sold early in October. In fact, if I had money to invest, I would have purchased after the major break on Black Thursday, October 24. (I would have been sorry.)

Events Precipitating the Crash

Although it can be argued that the stock market was not overvalued, there is evidence that many feared that it was overvalued — including the Federal Reserve Board and the United States Senate. By 1929, there were many who felt the market price of equity securities had increased too much, and this feeling was reinforced daily by the media and statements by influential government officials.

What precipitated the October 1929 crash?

My research minimizes several candidates that are frequently cited by others (see Bierman 1991, 1998, 1999, and 2001).

  • The market did not fall just because it was too high — as argued above it is not obvious that it was too high.
  • The actions of the Federal Reserve, while not always wise, cannot be directly identified with the October stock market crashes in an important way.
  • The Smoot-Hawley tariff, while looming on the horizon, was not cited by the news sources in 1929 as a factor, and was probably not important to the October 1929 market.
  • The Hatry Affair in England was not material for the New York Stock Exchange and the timing did not coincide with the October crashes.
  • Business activity news in October was generally good and there were very few hints of a coming depression.
  • Short selling and bear raids were not large enough to move the entire market.
  • Fraud and other illegal or immoral acts were not material, despite the attention they have received.

Barsky and DeLong (1990, p. 280) stress the importance of fundamentals rather than fads or fashions. “Our conclusion is that major decade-to-decade stock market movements arise predominantly from careful re-evaluation of fundamentals and less so from fads or fashions.” The argument below is consistent with their conclusion, but with one major exception: in September 1929 the market value of one segment of the market, the public utility sector, was grounded in existing fundamentals, and those fundamentals seem to have changed considerably in October 1929.

A Look at the Financial Press

On Thursday, October 3, 1929, the Washington Post exclaimed in a page 1 headline, “Stock Prices Crash in Frantic Selling.” The New York Times of October 4 headed a page 1 article with “Year’s Worst Break Hits Stock Market.” The article on the first page of the Times cited three contributing factors:

  • A large broker loan increase was expected (the article stated that the loans increased, but the increase was not as large as expected).
  • The statement by Philip Snowden, England’s Chancellor of the Exchequer, that described America’s stock market as a “speculative orgy.”
  • Weakening of margin accounts making it necessary to sell, which further depressed prices.

While the 1928 and 1929 financial press focused extensively and excessively on broker loans and margin account activity, the statement by Snowden is the only unique relevant news event on October 3. The October 4 (p. 20) issue of the Wall Street Journal also reported the remark by Snowden that there was “a perfect orgy of speculation.” Also, on October 4, the New York Times made another editorial reference to Snowden’s American speculation orgy. It added that “Wall Street had come to recognize its truth.” The editorial also quoted Secretary of the Treasury Mellon that investors “acted as if the price of securities would infinitely advance.” The Times editor obviously thought there was excessive speculation, and agreed with Snowden.

The stock market went down on October 3 and October 4, but almost all reported business news was very optimistic. The primary negative news item was the statement by Snowden regarding the amount of speculation in the American stock market. The market had been subjected to a barrage of statements throughout the year that there was excessive speculation and that the level of stock prices was too high. There is a possibility that the Snowden comment reported on October 3 was the push that started the boulder down the hill, but there were other events that also jeopardized the level of the market.

On August 8, the Federal Reserve Bank of New York had increased the rediscount rate from 5 to 6%. On September 26 the Bank of England raised its discount rate from 5.5 to 6.5%. England was losing gold as a result of investment in the New York Stock Exchange and wanted to decrease this investment. The Hatry Case also happened in September. It was first reported on September 29, 1929. Both the collapse of the Hatry industrial empire and the increase in the investment returns available in England resulted in shrinkage of English investment (especially the financing of broker loans) in the United States, adding to the market instability in the beginning of October.

Wednesday, October 16, 1929

On Wednesday, October 16, stock prices again declined. The Washington Post (October 17, p. 1) reported “Crushing Blow Again Dealt Stock Market.” Remember, the start of the stock market crash is conventionally identified with Black Thursday, October 24, but there were price declines on October 3, 4, and 16.

The news reports of the Post on October 17 and subsequent days are important since they were Associated Press (AP) releases, thus broadly read throughout the country. The Associated Press reported (p. 1) “The index of 20 leading public utilities computed for the Associated Press by the Standard Statistics Co. dropped 19.7 points to 302.4 which contrasts with the year’s high established less than a month ago.” This index had also dropped 18.7 points on October 3 and 4.3 points on October 4. The Times (October 17, p. 38) reported, “The utility stocks suffered most as a group in the day’s break.”

The economic news after the price drops of October 3 and October 4 had been good. But the deluge of bad news regarding public utility regulation seems to have truly upset the market. On Saturday, October 19, the Washington Post headlined (p. 13) “20 Utility Stocks Hit New Low Mark” and (Associated Press) “The utility shares again broke wide open and the general list came tumbling down almost half as far.” The October 20 issue of the Post had another relevant AP article (p. 12) “The selling again concentrated today on the utilities, which were in general depressed to the lowest levels since early July.”

An evaluation of the October 16 break in the New York Times on Sunday, October 20 (pp. 1 and 29) gave the following favorable factors:

  • stable business condition
  • low money rates (5%)
  • good retail trade
  • revival of the bond market
  • buying power of investment trusts
  • largest short interest in history (this is the total dollar value of stock sold where the investors do not own the stock they sold)

The following negative factors were described:

  • undigested investment trusts and new common stock shares
  • increase in broker loans
  • some high stock prices
  • agricultural prices lower
  • nervous market

The negative factors were not very upsetting to an investor if one was optimistic that the real economic boom (business prosperity) would continue. The Times failed to consider the impact on the market of the news concerning the regulation of public utilities.

Monday, October 21, 1929

On Monday, October 21, the market went down again. The Times (October 22) identified the causes to be

  • margin sellers (buyers on margin being forced to sell)
  • foreign money liquidating
  • skillful short selling

The same newspaper carried an article about a talk by Irving Fisher (p. 24), “Fisher says prices of stocks are low.” Fisher also defended investment trusts as offering investors diversification and thus reduced risk. He was reminded by a person attending the talk that in May he had “pointed out that predicting the human behavior of the market was quite different from analyzing its economic soundness.” Fisher was better with fundamentals than market psychology.

Wednesday, October 23, 1929

On Wednesday, October 23 the market tumbled. The Times headlines (October 24, p.1) said “Prices of Stocks Crash in Heavy Liquidation.” The Washington Post (p. 1) had “Huge Selling Wave Creates Near-Panic as Stocks Collapse.” In a total market value of $87 billion the market declined $4 billion — a 4.6% drop. If the events of the next day (Black Thursday) had not occurred, October 23 would have gone down in history as a major stock market event. But October 24 was to make the “Crash” of October 23 become merely a “Dip.”

The Times lamented October 24, (p. 38) “There was hardly a single item of news which might be construed as bearish.”

Thursday, October 24, 1929

Thursday, October 24 (Black Thursday) was a 12,894,650 share day (the previous record was 8,246,742 shares on March 26, 1929) on the NYSE. The headline on page one of the Times (October 25) was “Treasury Officials Blame Speculation.”

The Times (p. 41) moaned that the cost of call money had been 20% in March and the price break in March was understandable. (A call loan is a loan payable on demand of the lender.) Call money on October 24 cost only 5%. There should not have been a crash. The Friday Wall Street Journal (October 25) gave New York bankers credit for stopping the price decline with $1 billion of support.

The Washington Post (October 26, p. 1) reported “Market Drop Fails to Alarm Officials.” The “officials” were all in Washington. The rest of the country seemed alarmed. On October 25, the market gained. President Hoover made a statement on Friday regarding the excellent state of business, but then added how building and construction had been adversely “affected by the high interest rates induced by stock speculation” (New York Times, October 26, p. 1). A Times editorial (p. 16) quoted Snowden’s “orgy of speculation” again.

Tuesday, October 29, 1929

The Sunday, October 27 edition of the Times had a two-column article, “Bay State Utilities Face Investigation.” It implied that regulation in Massachusetts was going to be less friendly towards utilities. Stocks again went down on Monday, October 28. There were 9,212,800 shares traded (3,000,000 in the final hour). The Times on Tuesday, October 29 again carried an article on the New York public utility investigating committee being critical of the rate-making process. October 29 was “Black Tuesday.” The headline the next day was “Stocks Collapse in 16,410,030 Share Day” (October 30, p. 1). Stocks lost nearly $16 billion in the month of October, or 18% of their value at the beginning of the month. Twenty-nine public utilities (tabulated by the New York Times) lost $5.1 billion in the month, by far the largest loss of any of the industries listed by the Times. The value of the stocks of all public utilities went down by more than $5.1 billion.

An Interpretive Overview of Events and Issues

My interpretation of these events is that the statement by Snowden, Chancellor of the Exchequer, indicating the presence of a speculative orgy in America is likely to have triggered the October 3 break. Public utility stocks had been driven up by an explosion of investment trust formation and investing. The trusts, to a large extent, bought stock on margin with funds loaned not by banks but by “others.” These funds were very sensitive to any market weakness. Public utility regulation was being reviewed by the Federal Trade Commission, New York City, New York State, and Massachusetts, and these reviews were watched by the other regulatory commissions and by investors. The sell-off of utility stocks from October 16 to October 23 weakened prices and created “margin selling” and withdrawal of capital by the nervous “other” money. Then on October 24, the selling panic happened.

There are three topics that require expansion. First, there is the setting of the climate concerning speculation that may have led to the possibility of relatively specific issues being able to trigger a general market decline. Second, there are investment trusts, utility holding companies, and margin buying that seem to have resulted in one sector being very over-levered and overvalued. Third, there are the public utility stocks that appear to be the best candidate as the actual trigger of the crash.

Contemporary Worries of Excessive Speculation

During 1929, the public was bombarded with statements of outrage by public officials regarding the speculative orgy taking place on the New York Stock Exchange. If the media say something often enough, a large percentage of the public may come to believe it. By October 29 the overall opinion was that there had been excessive speculation and the market had been too high. Galbraith (1961), Kindleberger (1978), and Malkiel (1996) all clearly accept this assumption. The Federal Reserve Bulletin of February 1929 states that the Federal Reserve would restrain the use of “credit facilities in aid of the growth of speculative credit.”

In the spring of 1929, the U.S. Senate adopted a resolution stating that the Senate would support legislation “necessary to correct the evil complained of and prevent illegitimate and harmful speculation” (Bierman, 1991).

The President of the Investment Bankers Association of America, Trowbridge Callaway, gave a talk in which he spoke of “the orgy of speculation which clouded the country’s vision.”

Adolph Casper Miller, an outspoken member of the Federal Reserve Board from its beginning, described 1929 as “this period of optimism gone wild and cupidity gone drunk.”

Myron C. Taylor, head of U.S. Steel described “the folly of the speculative frenzy that lifted securities to levels far beyond any warrant of supporting profits.”

Herbert Hoover becoming president in March 1929 was a very significant event. He was a good friend and neighbor of Adolph Miller (see above) and Miller reinforced Hoover’s fears. Hoover was an aggressive foe of speculation. For example, he wrote, “I sent individually for the editors and publishers of major newspapers and magazines and requested them systematically to warn the country against speculation and the unduly high price of stocks.” Hoover then pressured Secretary of the Treasury Andrew Mellon and Governor of the Federal Reserve Board Roy Young “to strangle the speculative movement.” In his memoirs (1952) he titled his Chapter 2 “We Attempt to Stop the Orgy of Speculation,” reflecting Snowden’s influence.

Buying on Margin

Margin buying during the 1920s was not controlled by the government. It was controlled by brokers interested in their own well-being. The average margin requirement was 50% of the stock price prior to October 1929. On selected stocks, it was as high as 75%. When the crash came, no major brokerage firm was bankrupted, because the brokers managed their finances in a conservative manner. At the end of October, margins were lowered to 25%.

Brokers’ loans received a lot of attention in England, as they did in the United States. The Financial Times reported the level and the changes in the amount regularly. For example, the October 4 issue indicated that on October 3 broker loans reached a record high as money rates dropped from 7.5% to 6%. By October 9, money rates had dropped further to below 6%. Thus, investors prior to October 24 had relatively easy access to funds at the lowest rate since July 1928.

The Financial Times (October 7, 1929, p. 3) reported that the President of the American Bankers Association was concerned about the level of credit for securities and had given a talk in which he stated, “Bankers are gravely alarmed over the mounting volume of credit being employed in carrying security loans, both by brokers and by individuals.” The Financial Times was also concerned with the buying of investment trusts on margin and the lack of credit to support the bull market.

My conclusion is that the margin buying was a likely factor in causing stock prices to go up, but there is no reason to conclude that margin buying triggered the October crash. Once the selling rush began, however, the calling of margin loans probably exacerbated the price declines. (A calling of margin loans requires the stock buyer to contribute more cash to the broker or the broker sells the stock to get the cash.)

Investment Trusts

By 1929, investment trusts were very popular with investors. These trusts were the 1929 version of closed-end mutual funds. In recent years, seasoned closed-end mutual funds have sold at a discount to their fundamental value. The fundamental value is the sum of the market values of the fund’s components (securities in the portfolio). In 1929, the investment trusts sold at a premium — i.e. higher than the value of the underlying stocks. Malkiel concludes (p. 51) that this “provides clinching evidence of wide-scale stock-market irrationality during the 1920s.” However, Malkiel also notes (p. 442) “as of the mid-1990’s, Berkshire Hathaway shares were selling at a hefty premium over the value of assets it owned.” Warren Buffett is the guiding force behind Berkshire Hathaway’s great success as an investor. If we were to conclude that rational investors will currently pay a premium for Warren Buffett’s expertise, then we should reject the conclusion that the 1929 market was obviously irrational. We have current evidence that rational investors will pay a premium for what they consider to be superior money management skills.

There were $1 billion of investment trusts sold to investors in the first eight months of 1929, compared to $400 million in all of 1928. The Economist reported that this was important (October 12, 1929, p. 665): “Much of the recent increase is to be accounted for by the extraordinary burst of investment trust financing.” In September alone $643 million was invested in investment trusts (Financial Times, October 21, p. 3). While the two sets of numbers (from the Economist and the Financial Times) are not exactly comparable, both sets of numbers indicate that investment trusts had become very popular by October 1929.

The common stocks of trusts that had used debt or preferred stock leverage were particularly vulnerable to the stock price declines. For example, the Goldman Sachs Trading Corporation was highly levered with preferred stock and the value of its common stock fell from $104 a share to less than $3 in 1933. Many of the trusts were levered, but the leverage of choice was not debt but rather preferred stock.

In concept, investment trusts were sensible. They offered expert management and diversification. Unfortunately, in 1929 a diversification of stocks was not going to be a big help given the universal price declines. Irving Fisher on September 6, 1929 was quoted in the New York Herald Tribune as stating: “The present high levels of stock prices and corresponding low levels of dividend returns are due largely to two factors. One, the anticipation of large dividend returns in the immediate future; and two, reduction of risk to investors largely brought about through investment diversification made possible for the investor by investment trusts.”

If a researcher could find out the composition of the portfolio of a couple of dozen of the largest investment trusts as of September-October 1929 this would be extremely helpful. Seven important types of information that are not readily available but would be of interest are:

  • The percentage of the portfolio that was public utilities.
  • The extent of diversification.
  • The percentage of the portfolios that was NYSE firms.
  • The investment turnover.
  • The ratio of market price to net asset value at various points in time.
  • The amount of debt and preferred stock leverage used.
  • Who bought the trusts and how long they held.

The ideal information to establish whether market prices are excessively high compared to intrinsic values is to have both the prices and well-defined intrinsic values at the same moment in time. For the normal financial security, this is impossible since intrinsic values are not objectively well defined. There are two exceptions. DeLong and Shleifer (1991) followed one path, very cleverly choosing to study closed-end mutual funds. Some of these funds were traded on the stock market, and the market values of the securities in the funds’ portfolios are a very reasonable estimate of intrinsic value. DeLong and Shleifer state (1991, p. 675):

“We use the difference between prices and net asset values of closed-end mutual funds at the end of the 1920s to estimate the degree to which the stock market was overvalued on the eve of the 1929 crash. We conclude that the stocks making up the S&P composite were priced at least 30 percent above fundamentals in late summer, 1929.”

Unfortunately (p. 682) “portfolios were rarely published and net asset values rarely calculated.” It was only after the crash that investment trusts started to reveal their net asset values routinely. In the third quarter of 1929 (p. 682), “three types of event seemed to trigger a closed-end fund’s publication of its portfolio.” The three events were (1) listing on the New York Stock Exchange (most of the trusts were not listed), (2) start-up of a new closed-end fund (this stock price reflects selling pressure), and (3) shares selling at a discount from net asset value (in September 1929 most trusts were not selling at a discount, so the inclusion of any that were introduces a bias). After 1929, some trusts revealed 1929 net asset values. Thus, DeLong and Shleifer lacked the amount and quality of information that would have allowed definite conclusions. In fact, if investors also lacked the information regarding portfolio composition, we would have to place investment trusts in a unique investment category where investment decisions were made without reliable financial statements. If investors in the third quarter of 1929 did not know the current net asset value of investment trusts, this fact is significant.

The closed-end funds were an attractive vehicle to study since the market for investment trusts in 1929 was large and growing rapidly. In August and September alone over $1 billion of new funds were launched. DeLong and Shleifer found the premiums of price over value to be large — the median was about 50% in the third quarter of 1929 (p. 678). But they worried about the validity of their study because funds were not selected randomly.

DeLong and Shleifer had limited data (pp. 698-699). For example, for September 1929 there were two observations, for August 1929 there were five, and for July there were nine. The nine funds observed in July 1929 had the following premia: 277%, 152%, 48%, 22%, 18% (2 times), and 8% (3 times). Given that closed-end funds tend to sell at a discount, the positive premiums are interesting. Given the conventional perspective in 1929 that financial experts could manage money better than the person not plugged into the street, it is not surprising that some investors were willing to pay for expertise and to buy shares in investment trusts. Thus, a premium for investment trusts does not imply the same premium for other stocks.

The Public Utility Sector

In addition to investment trusts, intrinsic values are usually well defined for regulated public utilities. The general rule applied by regulatory authorities is to allow utilities to earn a “fair return” on an allowed rate base. The fair return is defined to be equal to a utility’s weighted average cost of capital. There are several reasons why a public utility can earn more or less than a fair return, but the target set by the regulatory authority is the weighted average cost of capital.

Thus, if a utility has an allowed rate equity base of $X and is allowed to earn a return of r, (rX in terms of dollars) after one year the firm’s equity will be worth X + rX or (1 + r)X with a present value of X. (This assumes that r is the return required by the market as well as the return allowed by regulators.) Thus, the present value of the equity is equal to the present rate base, and the stock price should be equal to the rate base per share. Given the nature of public utility accounting, the book value of a utility’s stock is approximately equal to the rate base.

There can be time periods where the utility can earn more (or less) than the allowed return. The reasons for this include regulatory lag, changes in efficiency, changes in the weather, and changes in the mix and number of customers. Also, the cost of equity may be different than the allowed return because of inaccurate (or incorrect) or changing capital market conditions. Thus, the stock price may differ from the book value, but one would not expect the stock price to be very much different than the book value per share for very long. There should be a tendency for the stock price to revert to the book value for a public utility supplying an essential service where there is no effective competition, and the rate commission is effectively allowing a fair return to be earned.

In 1929, public utility stock prices were in excess of three times their book values. Consider, for example, the following measures (Wigmore, 1985, p. 39) for five operating utilities.

Firm 1929 Price-Earnings Ratio (High Price for Year) Market Price/Book Value
Commonwealth Edison 35 3.31
Consolidated Gas of New York 39 3.34
Detroit Edison 35 3.06
Pacific Gas & Electric 28 3.30
Public Service of New Jersey 35 3.14

Sooner or later this price bubble had to break unless the regulatory authorities were to decide to allow the utilities to earn more than a fair return, or an infinite stream of greater fools existed. The decision made by the Massachusetts Public Utility Commission in October 1929 applicable to the Edison Electric Illuminating Company of Boston made clear that neither of these improbable events was going to happen (see below).

The utilities bubble did burst. Between the end of September and the end of November 1929, industrial stocks fell by 48%, railroads by 32% and utilities by 55% — thus utilities dropped the furthest from the highs. A comparison of the beginning of the year prices and the highest prices is also of interest: industrials rose by 20%, railroads by 19%, and utilities by 48%. The growth in value for utilities during the first nine months of 1929 was more than twice that of the other two groups.

The following high and low prices for 1929 for a typical set of public utilities and holding companies illustrate how severely public utility prices were hit by the crash (quotations from the New York Times, January 1, 1930).

1929
Firm High Price Low Price Low Price Divided by High Price
American Power & Light 175 3/8 64 1/4 .37
American Superpower 71 1/8 15 .21
Brooklyn Gas 248 1/2 99 .44
Buffalo, Niagara & Eastern Power 128 61 1/8 .48
Cities Service 68 1/8 20 .29
Consolidated Gas Co. of N.Y. 183 1/4 80 1/8 .44
Electric Bond and Share 189 50 .26
Long Island Lighting 91 40 .44
Niagara Hudson Power 30 3/4 11 1/4 .37
Transamerica 67 3/8 20 1/4 .30

Picking on one segment of the market as the cause of a general break in the market is not obviously correct. But the combination of an overpriced utility segment and investment trusts, with a portion of the market having been purchased on margin, appears to be a viable explanation. In addition, as of September 1, 1929, the utilities industry represented $14.8 billion of value, or 18% of the value of the outstanding shares on the NYSE. Thus, utilities were a large sector, capable of exerting a powerful influence on the overall market. Moreover, many contemporaries pointed to the utility sector as an important force in triggering the market decline.

The October 19, 1929 issue of the Commercial and Financial Chronicle identified the main depressing influences on the market to be the indications of a recession in steel and the refusal of the Massachusetts Department of Public Utilities to allow Edison Electric Illuminating Company of Boston to split its stock. The explanations offered by the Department — that the stock was not worth its price and the company’s dividend would have to be reduced — made the situation worse.

The Washington Post (October 17, p. 1) in explaining the October 16 market declines (an Associated Press release) reported, “Professional traders also were obviously distressed at the printed remarks regarding inflation of power and light securities by the Massachusetts Public Utility Commission in its recent decision.”

Straws That Broke the Camel’s Back?

Edison Electric of Boston

On August 2, 1929, the New York Times reported that the Directors of the Edison Electric Illuminating Company of Boston had called a meeting of stockholders to obtain authorization for a stock split. The stock went up to a high of $440. Its book value was $164 (the ratio of price to book value was 2.6, which was less than many other utilities).

On Saturday (October 12, p. 27) the Times reported that on Friday the Massachusetts Department of Public Utilities had rejected the stock split. The heading said “Bars Stock Split by Boston Edison. Criticizes Dividend Policy. Holds Rates Should Not Be Raised Until Company Can Reduce Charge for Electricity.” Boston Edison lost 15 points for the day even though the decision was released after the Friday closing. The high for the year was $440 and the stock closed at $360 on Friday.

The Massachusetts Department of Public Utilities (New York Times, October 12, p. 27) did not want to imply to investors that this was the “forerunner of substantial increases in dividends.” They stated that the expectation of increased dividends was not justified, offered “scathing criticisms of the company” (October 16, p. 42) and concluded “the public will take over such utilities as try to gobble up all profits available.”

On October 15, the Boston City Council advised the mayor to initiate legislation for public ownership of Edison, on October 16, the Department announced it would investigate the level of rates being charged by Edison, and on October 19, it set the dates for the inquiry. On Tuesday, October 15 (p. 41), there was a discussion in the Times of the Massachusetts decision in the column “Topic in Wall Street.” It “excited intense interest in public utility circles yesterday and undoubtedly had effect in depressing the issues of this group. The decision is a far-reaching one and Wall Street expressed the greatest interest in what effect it will have, if any, upon commissions in other States.”

Boston Edison had closed at 360 on Friday, October 11, before the announcement was released. It dropped 61 points at its low on Monday (October 14), but closed at 328, a loss of 32 points.

On October 16 (p. 42), the Times reported that Governor Allen of Massachusetts was launching a full investigation of Boston Edison including “dividends, depreciation, and surplus.”

One major factor that can be identified leading to the price break for public utilities was the ruling by the Massachusetts Public Utility Commission. The only specific action was that it refused to permit Edison Electric Illuminating Company of Boston to split its stock. Standard financial theory predicts that the primary effect of a stock split would be to reduce the stock price by 50% and would leave the total value unchanged, thus the denial of the split was not economically significant, and the stock split should have been easy to grant. But the Commission made it clear it had additional messages to communicate. For example, the Financial Times (October 16, 1929, p. 7) reported that the Commission advised the company to “reduce the selling price to the consumer.” Boston was paying $.085 per kilowatt-hour and Cambridge only $.055. There were also rumors of public ownership and a shifting of control. The next day (October 17), the Times reported (p. 3) “The worst pressure was against Public Utility shares” and the headline read “Electric Issue Hard Hit.”

Public Utility Regulation in New York

Massachusetts was not alone in challenging the profit levels of utilities. The Federal Trade Commission, New York City, and New York State were all challenging the status of public utility regulation. New York Governor (Franklin D. Roosevelt) appointed a committee on October 8 to investigate the regulation of public utilities in the state. The Committee stated, “this inquiry is likely to have far-reaching effects and may lead to similar action in other States.” Both the October 17 and October 19 issues of the Times carried articles regarding the New York investigative committee. Professor Bonbright, a Roosevelt appointee, described the regulatory process as a “vicious system” (October 19, p. 21), which ignored consumers. The Chairman of the Public Service Commission, testifying before the Committee wanted more control over utility holding companies, especially management fees and other transfers.

The New York State Committee also noted the increasing importance of investment trusts: “mention of the influence of the investment trust on utility securities is too important for this committee to ignore” (New York Times, October 17, p. 18). They conjectured that the trusts had $3.5 billion to invest, and “their influence has become very important” (p. 18).

In New York City, Mayor Jimmy Walker was fighting accusations of graft with statements that his administration would fight aggressively against rate increases, thus proving that he had not accepted bribes (New York Times, October 23). It is reasonable to conclude that the October 16 break was related to the news from Massachusetts and New York.

On October 17, the New York Times (p. 18) reported that the Committee on Public Service Securities of the Investment Banking Association warned against “speculative and uninformed buying.” The Committee published a report in which it asked for care in buying shares in utilities.

On Black Thursday, October 24, the market panic began. The market dropped from 305.87 to 272.32 (a 34 point drop, or 9%) and closed at 299.47. The declines were led by the motor stocks and public utilities.

The Public Utility Multipliers and Leverage

Public utilities were a very important segment of the stock market, and even more importantly, any change in public utility stock values resulted in larger changes in equity wealth. In 1929, there were three potentially important multipliers that meant that any change in a public utility’s underlying value would result in a larger value change in the market and in the investor’s value.

Consider the following hypothetical values for a public utility:

Book value per share for a utility: $50.00
Market price per share: $162.50
Market price of investment trust holding the stock (assuming a 100% premium over market value): $325.00

Eliminating the utility’s $112.50 market price premium over book value, the market price of the investment trust would be $50 without a premium. The combined loss in market value on the stock of the investment trust and the utility would be $387.50: the $112.50 loss in underlying stock value plus the $275 reduction in investment trust stock value. (The $387.50 loss assumes an investor held both the utility’s stock and the trust’s stock.) The public utility holding companies, in fact, were even more vulnerable to a stock price change, since their ratio of price to book value averaged 4.44 (Wigmore, p. 43).

For simplicity, this discussion has assumed the trust held all the holding company stock. The effects shown would be reduced if the trust held only a fraction of the stock. However, this discussion has also assumed that no debt or margin was used to finance the investment. Assume the individual investors invested only $162.50 of their money and borrowed $162.50 to buy the investment trust stock costing $325. If the utility stock went down from $162.50 to $50 and the trust still sold at a 100% premium, the trust would sell at $100 and the investors would have lost 100% of their investment since the investors owe $162.50. The vulnerability of the margin investor buying a trust stock that has invested in a utility is obvious.

These highly levered non-operating utilities offered an opportunity for speculation. The holding company typically owned 100% of the operating companies’ stock and both entities were levered (there could be more than two levels of leverage). There were also holding companies that owned holding companies (e.g., Ebasco). Wigmore (p. 43) lists nine of the largest public utility holding companies. The ratio of the low 1929 price to the high price (average) was 33%. These stocks were even more volatile than the publicly owned utilities.

The amount of leverage (both debt and preferred stock) used in the utility sector may have been enormous, but we cannot tell for certain. Assume that a utility purchases an asset that costs $1,000,000 and that asset is financed with 40% stock ($400,000). A utility holding company owns the utility stock and is also financed with 40% stock ($160,000). A second utility holding company owns the first and it is financed with 40% stock ($64,000). An investment trust owns the second holding company’s stock and is financed with 40% stock ($25,600). An investor buys the investment trust’s common stock using 50% margin and investing $12,800 in the stock. Thus, the $1,000,000 utility asset is financed with $12,800 of equity capital.

When the large amount of leverage is combined with the inflated prices of the public utility stock, both holding company stocks, and the investment trust, the problem is even more dramatic. Continuing the above example, assume the $1,000,000 asset is again financed with $600,000 of debt and $400,000 of common stock, but the common stock has a $1,200,000 market value. The first utility holding company has $720,000 of debt and $480,000 of common stock. The second holding company has $288,000 of debt and $192,000 of stock. The investment trust has $115,200 of debt and $76,800 of stock. The investor uses $38,400 of margin debt. The $1,000,000 asset is thus supporting $1,761,600 of debt, and the investor’s $38,400 of equity is very much in jeopardy.
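The cascading arithmetic of both examples can be verified directly. The Python sketch below reproduces the two calculations; every figure is the article’s hypothetical value, not an estimate of actual 1929 balance sheets.

    # Example 1: each tier finances its holding with 40% stock and 60% debt,
    # and the final investor buys the trust's shares on 50% margin.
    equity = 1_000_000                  # cost of the underlying utility asset
    for tier in ("utility", "holding co. 1", "holding co. 2", "investment trust"):
        equity *= 0.40                  # the stock slice at this tier
        print(f"{tier}: ${equity:,.0f} of stock")
    print(f"investor equity: ${equity * 0.50:,.0f}")  # $12,800 behind $1,000,000

    # Example 2: the utility's $400,000 of stock trades at a $1,200,000 market
    # value, and each layer above is again financed 60% debt / 40% stock.
    tier_value = 1_200_000
    total_debt = 600_000                # debt at the operating utility
    for _ in range(3):                  # two holding companies plus the trust
        total_debt += tier_value * 0.60
        tier_value *= 0.40
    total_debt += tier_value * 0.50     # the investor's margin debt
    print(f"total debt: ${total_debt:,.0f}")  # $1,761,600 against the $1,000,000 asset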

Conclusions and Lessons

Although no consensus has been reached on the causes of the 1929 stock market crash, the evidence cited above suggests that fear of speculation may have helped push the stock market to the brink of collapse. It is possible that Hoover’s aggressive campaign against speculation, the Massachusetts Public Utility Commission’s decision and statements that hit the overpriced public utilities, and the vulnerability of margin investors together triggered the October selling panic and the consequences that followed.

An important first event may have been Lord Snowden’s reference to the speculative orgy in America. The resulting decline in stock prices weakened margin positions. When several governmental bodies indicated that public utilities would not be able to justify their market prices in the future, the decreases in utility stock prices further weakened margin positions and prompted general selling. At some stage, the selling panic started and the crash resulted.

What can we learn from the 1929 crash? There are many lessons, but a handful seem to be most applicable to today’s stock market.

  • There is a delicate balance between optimism and pessimism regarding the stock market. Statements and actions by government officials can affect the sensitivity of stock prices to events. Call a market overpriced often enough, and investors may begin to believe it.
  • The fact that stocks can lose 40% of their value in a month and 90% over three years suggests the desirability of diversification (including assets other than stocks). Remember, some investors lose all of their investment when the market falls 40%.
  • A levered investment portfolio amplifies the swings of the stock market. Some investment securities have leverage built into them (e.g., stocks of highly levered firms, options, and stock index futures).
  • A series of seemingly undramatic events may establish the setting for a broad price decline.
  • A segment of the market can experience bad news and a price decline that infects the broader market. In 1929, it seems to have been public utilities. In 2000, high technology firms were candidates.
  • Interpreting events and assigning blame is unreliable if there has not been an adequate passage of time and opportunity for reflection and analysis — and is difficult even with decades of hindsight.
  • It is difficult to predict a major market turn with any degree of reliability. It is impressive that in September 1929, Roger Babson predicted the collapse of the stock market, but he had been predicting a collapse for many years. Also, even Babson recommended diversification and was against complete liquidation of stock investments (Commercial and Financial Chronicle, September 7, 1929, p. 1505).
  • Even a market that is not excessively high can collapse. Both market psychology and the underlying economics are relevant.

References

Barsky, Robert B. and J. Bradford DeLong. “Bull and Bear Markets in the Twentieth Century,” Journal of Economic History 50, no. 2 (1990): 265-281.

Bierman, Harold, Jr. The Great Myths of 1929 and the Lessons to be Learned. Westport, CT: Greenwood Press, 1991.

Bierman, Harold, Jr. The Causes of the 1929 Stock Market Crash. Westport, CT: Greenwood Press, 1998.

Bierman, Harold, Jr. “The Reasons Stock Crashed in 1929.” Journal of Investing (1999): 11-18.

Bierman, Harold, Jr. “Bad Market Days.” World Economics (2001): 177-191.

Commercial and Financial Chronicle, 1929 issues.

Committee on Banking and Currency. Hearings on Performance of the National and Federal Reserve Banking System. Washington, 1931.

DeLong, J. Bradford and Andrei Shleifer. “The Stock Market Bubble of 1929: Evidence from Closed-end Mutual Funds.” Journal of Economic History 51, no. 3 (1991): 675-700.

Federal Reserve Bulletin, February, 1929.

Fisher, Irving. The Stock Market Crash and After. New York: Macmillan, 1930.

Galbraith, John K. The Great Crash, 1929. Boston: Houghton Mifflin, 1961.

Hoover, Herbert. The Memoirs of Herbert Hoover. New York: Macmillan, 1952.

Kendrick, John W. Productivity Trends in the United States. Princeton: Princeton University Press, 1961.

Kindleberger, Charles P. Manias, Panics, and Crashes. New York: Basic Books, 1978.

Malkiel, Burton G. A Random Walk Down Wall Street. New York: Norton, 1975 and 1996.

Moggridge, Donald. The Collected Writings of John Maynard Keynes, Volume XX. New York: Macmillan, 1981.

New York Times, 1929 and 1930.

Rappoport, Peter and Eugene N. White. “Was There a Bubble in the 1929 Stock Market?” Journal of Economic History 53, no. 3 (1993): 549-574.

Samuelson, Paul A. “Myths and Realities about the Crash and Depression.” Journal of Portfolio Management (1979): 9.

Senate Committee on Banking and Currency. Stock Exchange Practices. Washington, 1928.

Siegel, Jeremy J. “The Equity Premium: Stock and Bond Returns since 1802.” Financial Analysts Journal 48, no. 1 (1992): 28-46.

Wall Street Journal, October 1929.

Washington Post, October 1929.

Wigmore, Barry A. The Crash and Its Aftermath: A History of Securities Markets in the United States, 1929-1933. Westport, CT: Greenwood Press, 1985.

1. 1923-25 average = 100.

2. Based on a price-to-book-value ratio of 3.25 (Wigmore, p. 39).

Citation: Bierman, Harold. “The 1929 Stock Market Crash”. EH.Net Encyclopedia, edited by Robert Whaples. March 26, 2008. URL http://eh.net/encyclopedia/the-1929-stock-market-crash/

Slavery in the United States

Jenny Bourne, Carleton College

Slavery is fundamentally an economic phenomenon. Throughout history, slavery has existed where it has been economically worthwhile to those in power. The principal example in modern times is the U.S. South. Nearly 4 million slaves with a market value estimated to be between $3.1 and $3.6 billion lived in the U.S. just before the Civil War. Masters enjoyed rates of return on slaves comparable to those on other assets; cotton consumers, insurance companies, and industrial enterprises benefited from slavery as well. Such valuable property required rules to protect it, and the institutional practices surrounding slavery display a sophistication that rivals modern-day law and business.

THE SPREAD OF SLAVERY IN THE U.S.

Not long after Columbus set sail for the New World, the French and Spanish brought slaves with them on various expeditions. Slaves accompanied Ponce de Leon to Florida in 1513, for instance. But a far greater proportion of slaves arrived in chains in crowded, sweltering cargo holds. The first dark-skinned slaves in what was to become British North America arrived in Virginia — perhaps stopping first in Spanish lands — in 1619 aboard a Dutch vessel. From 1500 to 1900, approximately 12 million Africans were forced from their homes to go westward, with about 10 million of them completing the journey. Yet very few ended up in the British colonies and young American republic. By 1808, when the trans-Atlantic slave trade to the U.S. officially ended, only about 6 percent of African slaves landing in the New World had come to North America.

Slavery in the North

Colonial slavery had a slow start, particularly in the North. The proportion there never got much above 5 percent of the total population. Scholars have speculated as to why, without coming to a definite conclusion. Some surmise that indentured servants were fundamentally better suited to the Northern climate, crops, and tasks at hand; some claim that anti-slavery sentiment provided the explanation. At the time of the American Revolution, fewer than 10 percent of the half million slaves in the thirteen colonies resided in the North, working primarily in agriculture. New York had the greatest number, with just over 20,000. New Jersey had close to 12,000 slaves. Vermont was the first Northern region to abolish slavery when it became an independent republic in 1777. Most of the original Northern colonies implemented a process of gradual emancipation in the late eighteenth and early nineteenth centuries, requiring the children of slave mothers to remain in servitude for a set period, typically 28 years. Other regions above the Mason-Dixon line ended slavery upon statehood early in the nineteenth century — Ohio in 1803 and Indiana in 1816, for instance.

TABLE 1
Population of the Original Thirteen Colonies, selected years by type

                    1750                 1790                              1810                              1860
State               White     Black      White      Free NW    Slave      White      Free NW    Slave      White       Free NW    Slave

Connecticut         108,270   3,010      232,236    2,771      2,648      255,179    6,453      310        451,504     8,643      -
Delaware            27,208    1,496      46,310     3,899      8,887      55,361     13,136     4,177      90,589      19,829     1,798
Georgia             4,200     1,000      52,886     398        29,264     145,414    1,801      105,218    591,550     3,538      462,198
Maryland            97,623    43,450     208,649    8,043      103,036    235,117    33,927     111,502    515,918     83,942     87,189
Massachusetts       183,925   4,075      373,187    5,369      -          465,303    6,737      -          1,221,432   9,634      -
New Hampshire       26,955    550        141,112    630        157        182,690    970        -          325,579     494        -
New Jersey          66,039    5,354      169,954    2,762      11,423     226,868    7,843      10,851     646,699     25,318     -
New York            65,682    11,014     314,366    4,682      21,193     918,699    25,333     15,017     3,831,590   49,145     -
North Carolina      53,184    19,800     289,181    5,041      100,783    376,410    10,266     168,824    629,942     31,621     331,059
Pennsylvania        116,794   2,872      317,479    6,531      3,707      786,804    22,492     795        2,849,259   56,956     -
Rhode Island        29,879    3,347      64,670     3,484      958        73,214     3,609      108        170,649     3,971      -
South Carolina      25,000    39,000     140,178    1,801      107,094    214,196    4,554      196,365    291,300     10,002     402,406
Virginia            129,581   101,452    442,117    12,866     292,627    551,534    30,570     392,518    1,047,299   58,154     490,865
United States       934,340   236,420    2,792,325  58,277     681,777    4,486,789  167,691    1,005,685  12,663,310  361,247    1,775,515

Note: “Free NW” = free nonwhite.

Source: Historical Statistics of the U.S. (1970), Franklin (1988).

Slavery in the South

Throughout colonial and antebellum history, U.S. slaves lived primarily in the South. Slaves comprised less than a tenth of the total Southern population in 1680 but grew to a third by 1790. At that date, 293,000 slaves lived in Virginia alone, making up 42 percent of all slaves in the U.S. at the time. South Carolina, North Carolina, and Maryland each had over 100,000 slaves. After the American Revolution, the Southern slave population exploded, reaching about 1.1 million in 1810 and over 3.9 million in 1860.

TABLE 2
Population of the South 1790-1860 by type

Year White Free Nonwhite Slave
1790 1,240,454 32,523 654,121
1800 1,691,892 61,575 851,532
1810 2,118,144 97,284 1,103,700
1820 2,867,454 130,487 1,509,904
1830 3,614,600 175,074 1,983,860
1840 4,601,873 207,214 2,481,390
1850 6,184,477 235,821 3,200,364
1860 8,036,700 253,082 3,950,511

Source: Historical Statistics of the U.S. (1970).

Slave Ownership Patterns

Despite their numbers, slaves typically comprised a minority of the local population. Only in antebellum South Carolina and Mississippi did slaves outnumber free persons. Most Southerners owned no slaves and most slaves lived in small groups rather than on large plantations. Less than one-quarter of white Southerners held slaves, with half of these holding fewer than five and fewer than 1 percent owning more than one hundred. In 1860, the average number of slaves residing together was about ten.

TABLE 3
Slaves as a Percent of the Total Population
selected years, by Southern state

                  1750          1790          1810          1860
State             Black/total   Slave/total   Slave/total   Slave/total
                  population    population    population    population

Alabama           -             -             -             45.12
Arkansas          -             -             -             25.52
Delaware          5.21          15.04         5.75          1.60
Florida           -             -             -             43.97
Georgia           19.23         35.45         41.68         43.72
Kentucky          -             16.87         19.82         19.51
Louisiana         -             -             -             46.85
Maryland          30.80         32.23         29.30         12.69
Mississippi       -             -             -             55.18
Missouri          -             -             -             9.72
North Carolina    27.13         25.51         30.39         33.35
South Carolina    60.94         43.00         47.30         57.18
Tennessee         -             -             17.02         24.84
Texas             -             -             -             30.22
Virginia          43.91         39.14         40.27         30.75
Overall           37.97         33.95         33.25         32.27

Note: Dashes mark areas that were not yet states or for which data are unavailable in that census year.

Sources: Historical Statistics of the United States (1970), Franklin (1988).

TABLE 4
Holdings of Southern Slaveowners
by states, 1860

State   Total slaveholders   Held 1 slave   Held 2 slaves   Held 3 slaves   Held 4 slaves   Held 5 slaves   Held 1-5 slaves   Held 100-499 slaves   Held 500+ slaves
AL 33,730 5,607 3,663 2,805 2,329 1,986 16,390 344 -
AR 11,481 2,339 1,503 1,070 894 730 6,536 65 1
DE 587 237 114 74 51 34 510 - -
FL 5,152 863 568 437 365 285 2,518 47 -
GA 41,084 6,713 4,335 3,482 2,984 2,543 20,057 211 8
KY 38,645 9,306 5,430 4,009 3,281 2,694 24,720 7 -
LA 22,033 4,092 2,573 2,034 1,536 1,310 11,545 543 4
MD 13,783 4,119 1,952 1,279 1,023 815 9,188 16 -
MS 30,943 4,856 3,201 2,503 2,129 1,809 14,498 315 1
MO 24,320 6,893 3,754 2,773 2,243 1,686 17,349 4 -
NC 34,658 6,440 4,017 3,068 2,546 2,245 18,316 133 -
SC 26,701 3,763 2,533 1,990 1,731 1,541 11,558 441 8
TN 36,844 7,820 4,738 3,609 3,012 2,536 21,715 47 -
TX 21,878 4,593 2,874 2,093 1,782 1,439 12,781 54 -
VA 52,128 11,085 5,989 4,474 3,807 3,233 28,588 114 -
TOTAL 393,967 78,726 47,244 35,700 29,713 24,886 216,269 2,341 22

Source: Historical Statistics of the United States (1970).

Rapid Natural Increase in U.S. Slave Population

How did the U.S. slave population increase nearly fourfold between 1810 and 1860, given the demise of the trans-Atlantic trade? The answer lies in an exceptional rate of natural increase. Unlike elsewhere in the New World, the South did not require constant infusions of immigrant slaves to keep its slave population intact. In fact, by 1825, 36 percent of the slaves in the Western hemisphere lived in the U.S. This was partly due to higher birth rates, which were in turn due to a more equal ratio of female to male slaves in the U.S. relative to other parts of the Americas. Lower mortality rates also figured prominently. Climate was one cause; crops were another. U.S. slaves planted and harvested first tobacco and then, after Eli Whitney’s invention of the cotton gin in 1793, cotton. This work was less grueling than the tasks on the sugar plantations of the West Indies and in the mines and fields of South America. Southern slaves worked in industry, did domestic work, and grew a variety of other food crops as well, mostly under less abusive conditions than their counterparts elsewhere. For example, the South grew half to three-quarters of the corn crop harvested between 1840 and 1860.

INSTITUTIONAL FRAMEWORK

Central to the success of slavery are political and legal institutions that validate the ownership of other persons. A Kentucky court acknowledged the dual character of slaves in Turner v. Johnson (1838): “[S]laves are property and must, under our present institutions, be treated as such. But they are human beings, with like passions, sympathies, and affections with ourselves.” To construct slave law, lawmakers borrowed from laws concerning personal property and animals, as well as from rules regarding servants, employees, and free persons. The outcome was a set of doctrines that supported the Southern way of life.

The English common law of property formed a foundation for U.S. slave law. The French and Spanish influence in Louisiana — and, to a lesser extent, Texas — meant that Roman (or civil) law offered building blocks there as well. Despite certain formal distinctions, slave law as practiced differed little from common-law to civil-law states. Southern state law governed roughly five areas: slave status, masters’ treatment of slaves, interactions between slaveowners and contractual partners, rights and duties of noncontractual parties toward others’ slaves, and slave crimes. Federal law and laws in various Northern states also dealt with matters of interstate commerce, travel, and fugitive slaves.

Interestingly enough, just as slave law combined elements of other sorts of law, so too did it yield principles that eventually applied elsewhere. Lawmakers had to consider the intelligence and volition of slaves as they crafted laws to preserve property rights. Slavery therefore created legal rules that could potentially apply to free persons as well as to those in bondage. Many legal principles we now consider standard in fact had their origins in slave law.

Legal Status Of Slaves And Blacks

By the end of the seventeenth century, the status of blacks — slave or free — tended to follow the status of their mothers. Generally, “white” persons were not slaves but Native and African Americans could be. One odd case was the offspring of a free white woman and a slave: the law often bound these people to servitude for thirty-one years. Conversion to Christianity could set a slave free in the early colonial period, but this practice quickly disappeared.

Skin Color and Status

Southern law largely identified skin color with status. Those who appeared African or of African descent were generally presumed to be slaves. Virginia was the only state to pass a statute that actually classified people by race: essentially, it considered those with one quarter or more black ancestry as black. Other states used informal tests in addition to visual inspection: one-quarter, one-eighth, or one-sixteenth black ancestry might categorize a person as black.

Even if blacks proved their freedom, they enjoyed little higher status than slaves except, to some extent, in Louisiana. Many Southern states forbade free persons of color from becoming preachers, selling certain goods, tending bar, staying out past a certain time of night, or owning dogs, among other things. Federal law denied black persons citizenship under the Dred Scott decision (1857). In this case, Chief Justice Roger Taney also determined that visiting a free state did not free a slave who returned to a slave state, nor did traveling to a free territory ensure emancipation.

Rights And Responsibilities Of Slave Masters

Southern masters enjoyed great freedom in their dealings with slaves. North Carolina Chief Justice Thomas Ruffin expressed the sentiments of many Southerners when he wrote in State v. Mann (1829): “The power of the master must be absolute, to render the submission of the slave perfect.” By the nineteenth century, household heads had far more physical power over their slaves than over their employees. In part, the differences in allowable punishment had to do with the substitutability of other means of persuasion. Instead of physical coercion, antebellum employers could legally withhold all wages if a worker did not complete all agreed-upon services. No such alternate mechanism existed for slaves.

Despite the respect Southerners held for the power of masters, the law — particularly in the thirty years before the Civil War — limited owners somewhat. Southerners feared that unchecked slave abuse could lead to theft, public beatings, and insurrection. People also thought that hungry slaves would steal produce and livestock. But masters who treated slaves too well, or gave them freedom, caused consternation as well. The preamble to Delaware’s Act of 1767 conveys one prevalent view: “[I]t is found by experience, that freed [N]egroes and mulattoes are idle and slothful, and often prove burdensome to the neighborhood wherein they live, and are of evil examples to slaves.” Accordingly, masters sometimes fell afoul of the criminal law not only when they brutalized or neglected their slaves, but also when they indulged or manumitted slaves. Still, prosecuting masters was extremely difficult, because often the only witnesses were slaves or wives, neither of whom could testify against male heads of household.

Law of Manumission

One area that changed dramatically over time was the law of manumission. The South initially allowed masters to set their slaves free because this was an inherent right of property ownership. During the Revolutionary period, some Southern leaders also believed that manumission was consistent with the ideology of the new nation. Manumission occurred only rarely in colonial times, increased dramatically during the Revolution, then diminished after the early 1800s. By the 1830s, most Southern states had begun to limit manumission. Allowing masters to free their slaves at will created incentives to emancipate only unproductive slaves. Consequently, the community at large bore the costs of young, old, and disabled former slaves. The public might also run the risk of having rebellious former slaves in its midst.

Antebellum U.S. Southern states worried considerably about these problems and eventually enacted restrictions on the age at which slaves could be free, the number freed by any one master, and the number manumitted by last will. Some required former masters to file indemnifying bonds with state treasurers so governments would not have to support indigent former slaves. Some instead required former owners to contribute to ex-slaves’ upkeep. Many states limited manumissions to slaves of a certain age who were capable of earning a living. A few states made masters emancipate their slaves out of state or encouraged slaveowners to bequeath slaves to the Colonization Society, which would then send the freed slaves to Liberia. Former slaves sometimes paid fees on the way out of town to make up for lost property tax revenue; they often encountered hostility and residential fees on the other end as well. By 1860, most Southern states had banned in-state and post-mortem manumissions, and some had enacted procedures by which free blacks could voluntarily become slaves.

Other Restrictions

In addition to constraints on manumission, laws restricted other actions of masters and, by extension, slaves. Masters generally had to maintain a certain ratio of white to black residents upon plantations. Some laws barred slaves from owning musical instruments or bearing firearms. All states refused to allow slaves to make contracts or testify in court against whites. About half of Southern states prohibited masters from teaching slaves to read and write although some of these permitted slaves to learn rudimentary mathematics. Masters could use slaves for some tasks and responsibilities, but they typically could not order slaves to compel payment, beat white men, or sample cotton. Nor could slaves officially hire themselves out to others, although such prohibitions were often ignored by masters, slaves, hirers, and public officials. Owners faced fines and sometimes damages if their slaves stole from others or caused injuries.

Southern law did encourage benevolence, at least if it tended to supplement the lash and shackle. Court opinions in particular indicate the belief that good treatment of slaves could enhance labor productivity, increase plantation profits, and reinforce sentimental ties. Allowing slaves to control small amounts of property, even if statutes prohibited it, was an oft-sanctioned practice. Courts also permitted slaves small diversions, such as Christmas parties and quilting bees, despite statutes that barred slave assemblies.

Sale, Hire, And Transportation Of Slaves

Sales of Slaves

Slaves were freely bought and sold across the antebellum South. Southern law offered greater protection to slave buyers than to buyers of other goods, in part because slaves were complex commodities with characteristics not easily ascertained by inspection. Slave sellers were responsible for their representations, required to disclose known defects, and often liable for unknown defects, as well as bound by explicit contractual language. These rules stand in stark contrast to the caveat emptor doctrine applied in antebellum commodity sales cases. In fact, they more closely resemble certain provisions of the modern Uniform Commercial Code. Sales law in two states stands out. South Carolina was extremely pro-buyer, presuming that any slave sold at full price was sound. Louisiana buyers enjoyed extensive legal protection as well. A sold slave who later manifested an incurable disease or vice — such as a tendency to escape frequently — could generate a lawsuit that entitled the purchaser to nullify the sale.

Hiring Out Slaves

Slaves faced the possibility of being hired out by their masters as well as being sold. Although scholars disagree about the extent of hiring in agriculture, most concur that hired slaves frequently worked in manufacturing, construction, mining, and domestic service. Hired slaves and free persons often labored side by side. Bond and free workers both faced a legal burden to behave responsibly on the job. Yet the law of the workplace differed significantly for the two: generally speaking, employers were far more culpable in cases of injuries to slaves. The divergent law for slave and free workers does not necessarily imply that free workers suffered. Empirical evidence shows that nineteenth-century free laborers received at least partial compensation for the risks of jobs. Indeed, the tripartite nature of slave-hiring arrangements suggests why antebellum laws appeared as they did. Whereas free persons had direct work and contractual relations with their bosses, slaves worked under terms designed by others. Free workers arguably could have walked out or insisted on different conditions or wages. Slaves could not. The law therefore offered substitute protections. Still, the powerful interests of slaveowners also may mean that they simply were more successful at shaping the law. Postbellum developments in employment law — North and South — in fact paralleled earlier slave-hiring law, at times relying upon slave cases as legal precedents.

Public Transportation

Public transportation also figured into slave law: slaves suffered death and injury aboard common carriers as well as traveled as legitimate passengers and fugitives. As elsewhere, slave-common carrier law both borrowed from and established precedents for other areas of law. One key doctrine originating in slave cases was the “last-clear-chance rule.” Common-carrier defendants that had failed to offer slaves — even negligent slaves — a last clear chance to avoid accidents ended up paying damages to slaveowners. Slaveowner plaintiffs won several cases in the decade before the Civil War when engineers failed to warn slaves off railroad tracks. Postbellum courts used slave cases as precedents to entrench the last-clear-chance doctrine.

Slave Control: Patrollers And Overseers

Society at large shared in maintaining the machinery of slavery. In place of a standing police force, Southern states passed legislation to establish and regulate county-wide citizen patrols. Essentially, Southern citizens took upon themselves the protection of their neighbors’ interests as well as their own. County courts had local administrative authority; court officials appointed three to five men per patrol from a pool of white male citizens to serve for a specified period. Typical patrol duty ranged from one night per week for a year to twelve hours per month for three months. Not all white men had to serve: judges, magistrates, ministers, and sometimes millers and blacksmiths enjoyed exemptions. So did those in the higher ranks of the state militia. In many states, courts had to select from adult males under a certain age, usually 45, 50, or 60. Some states allowed only slaveowners or householders to join patrols. Patrollers typically earned fees for captured fugitive slaves and exemption from road or militia duty, as well as hourly wages. Keeping order among slaves was the patrollers’ primary duty. Statutes set guidelines for appropriate treatment of slaves and often imposed fines for unlawful beatings. In rare instances, patrollers had to compensate masters for injured slaves. For the most part, however, patrollers enjoyed quasi-judicial or quasi-executive powers in their dealings with slaves.

Overseers commanded considerable control as well. The Southern overseer was the linchpin of the large slave plantation. He ran daily operations and served as a first line of defense in safeguarding whites. The vigorous protests against drafting overseers into military service during the Civil War reveal their significance to the South. Yet slaves were too valuable to be left to the whims of frustrated, angry overseers. Injuries caused to slaves by overseers’ cruelty (or “immoral conduct”) usually entitled masters to recover civil damages. Overseers occasionally confronted criminal charges as well. Brutality by overseers naturally generated responses by their victims; at times, courts reduced murder charges to manslaughter when slaves killed abusive overseers.

Protecting The Master Against Loss: Slave Injury And Slave Stealing

Whether they liked it or not, many Southerners dealt daily with slaves. Southern law shaped these interactions among strangers, awarding damages more often for injuries to slaves than injuries to other property or persons, shielding slaves more than free persons from brutality, and generating convictions more frequently in slave-stealing cases than in other criminal cases. The law also recognized more offenses against slaveowners than against other property owners because slaves, unlike other property, succumbed to influence.

Just as assaults of slaves generated civil damages and criminal penalties, so did stealing a slave to sell him or help him escape to freedom. Many Southerners considered slave stealing worse than killing fellow citizens. In marked contrast, selling a free black person into slavery carried almost no penalty.

The counterpart to helping slaves escape — picking up fugitives — also created laws. Southern states offered rewards to defray the costs of capture or passed statutes requiring owners to pay fees to those who caught and returned slaves. Some Northern citizens worked hand-in-hand with their Southern counterparts, returning fugitive slaves to masters either with or without the prompting of law. But many Northerners vehemently opposed the peculiar institution. In an attempt to stitch together the young nation, the federal government passed the first fugitive slave act in 1793. To circumvent its application, several Northern states passed personal liberty laws in the 1840s. Stronger federal fugitive slave legislation then passed in 1850. Still, enough slaves fled to freedom — perhaps as many as 15,000 in the decade before the Civil War — with the help (or inaction) of Northerners that the profession of “slave-catching” evolved. This occupation was often highly risky — enough so that such men could not purchase life insurance coverage — and just as often highly lucrative.

Slave Crimes

Southern law governed slaves as well as slaveowners and their adversaries. What few due process protections slaves possessed stemmed from desires to grant rights to masters. Still, slaves faced harsh penalties for their crimes. When slaves stole, rioted, set fires, or killed free people, the law sometimes had to subvert the property rights of masters in order to preserve slavery as a social institution.

Slaves, like other antebellum Southern residents, committed a host of crimes ranging from arson to theft to homicide. Other slave crimes included violating curfew, attending religious meetings without a master’s consent, and running away. Indeed, a slave was not permitted off his master’s farm or business without his owner’s permission. In rural areas, a slave was required to carry a written pass to leave the master’s land.

Southern states erected numerous punishments for slave crimes, including prison terms, banishment, whipping, castration, and execution. In most states, the criminal law for slaves (and blacks generally) was noticeably harsher than for free whites; in others, slave law as practiced resembled that governing poorer white citizens. Particularly harsh punishments applied to slaves who had allegedly killed their masters or who had committed rebellious acts. Southerners considered these acts of treason and resorted to immolation, drawing and quartering, and hanging.

MARKETS AND PRICES

Market prices for slaves reflect their substantial economic value. Scholars have gathered slave prices from a variety of sources, including censuses, probate records, plantation and slave-trader accounts, and proceedings of slave auctions. These data sets reveal that prime field hands went for four to six hundred dollars in the U.S. in 1800, thirteen to fifteen hundred dollars in 1850, and up to three thousand dollars just before Fort Sumter fell. Even controlling for inflation, the prices of U.S. slaves rose significantly in the six decades before South Carolina seceded from the Union. By 1860, Southerners owned close to $4 billion worth of slaves. Slavery remained a thriving business on the eve of the Civil War: Fogel and Engerman (1974) projected that by 1890 slave prices would have increased on average more than 50 percent over their 1860 levels. No wonder the South rose in armed resistance to protect its enormous investment.

Slave markets existed across the antebellum U.S. South. Even today, one can find stone markers like the one next to the Antietam battlefield, which reads: “From 1800 to 1865 This Stone Was Used as a Slave Auction Block. It has been a famous landmark at this original location for over 150 years.” Private auctions, estate sales, and professional traders facilitated easy exchange. Established dealers like Franklin and Armfield in Virginia, Woolfolk, Saunders, and Overly in Maryland, and Nathan Bedford Forrest in Tennessee prospered alongside itinerant traders who operated in a few counties, buying slaves for cash from their owners, then moving them overland in coffles to the lower South. Over a million slaves were taken across state lines between 1790 and 1860 with many more moving within states. Some of these slaves went with their owners; many were sold to new owners. In his monumental study, Michael Tadman (1989) found that slaves who lived in the upper South faced a very real chance of being sold for profit. From 1820 to 1860, he estimated that an average of 200,000 slaves per decade moved from the upper to the lower South, most via sales. A contemporary newspaper, The Virginia Times, calculated that 40,000 slaves were sold in the year 1830.

Determinants of Slave Prices

The prices paid for slaves reflected two economic factors: the characteristics of the slave and the conditions of the market. Important individual features included age, sex, childbearing capacity (for females), physical condition, temperament, and skill level. In addition, the supply of slaves, demand for products produced by slaves, and seasonal factors helped determine market conditions and therefore prices.

Age and Price

Prices for both male and female slaves tended to follow similar life-cycle patterns. In the U.S. South, infant slaves sold for a positive price because masters expected them to live long enough to make the initial costs of raising them worthwhile. Prices rose through puberty as productivity and experience increased. In nineteenth-century New Orleans, for example, prices peaked at about age 22 for females and age 25 for males. Girls cost more than boys up to their mid-teens. The genders then switched places in terms of value. In the Old South, boys aged 14 sold for 71 percent of the price of 27-year-old men, whereas girls aged 14 sold for 65 percent of the price of 27-year-old men. After the peak age, prices declined slowly for a time, then fell off rapidly as the aging process caused productivity to fall. Compared to full-grown men, women were worth 80 to 90 percent as much. One characteristic in particular set some females apart: their ability to bear children. Fertile females commanded a premium. The mother-child link also proved important for pricing in a different way: people sometimes paid more for intact families.


[Figure omitted. Source: Fogel and Engerman (1974).]
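As a small illustration, the Python sketch below converts the age and sex ratios quoted above into dollar figures. The ratios come from the text; the $1,400 prime-hand price is a hypothetical mid-range figure for 1850 used only for scale.

    # Ratios quoted in the text, applied to a hypothetical prime-hand price.
    prime_male_price = 1400
    ratios = {
        "boy, age 14": 0.71,         # 71% of a 27-year-old man's price
        "girl, age 14": 0.65,        # 65% of a 27-year-old man's price
        "adult woman (low)": 0.80,   # women sold for 80-90% of prime men
        "adult woman (high)": 0.90,
    }
    for description, ratio in ratios.items():
        print(f"{description}: ${prime_male_price * ratio:,.0f}")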

Other Characteristics and Price

Skills, physical traits, mental capabilities, and other qualities also helped determine a slave’s price. Skilled workers sold for premiums of 40-55 percent whereas crippled and chronically ill slaves sold for deep discounts. Slaves who proved troublesome — runaways, thieves, layabouts, drunks, slow learners, and the like — also sold for lower prices. Taller slaves cost more, perhaps because height acts as a proxy for healthiness. In New Orleans, light-skinned females (who were often used as concubines) sold for a 5 percent premium.

Fluctuations in Supply

Prices for slaves fluctuated with market conditions as well as with individual characteristics. U.S. slave prices fell around 1800 as the Haitian revolution sparked the movement of slaves into the Southern states. Less than a decade later, slave prices climbed when the international slave trade was banned, cutting off legal external supplies. Interestingly enough, among those who supported the closing of the trans-Atlantic slave trade were several Southern slaveowners. Why this apparent anomaly? Because the resulting reduction in supply drove up the prices of slaves already living in the U.S. and, hence, their masters’ wealth. U.S. slaves had high enough fertility rates and low enough mortality rates to reproduce themselves, so Southern slaveowners did not worry about having too few slaves to go around.

Fluctuations in Demand

Demand helped determine prices as well. The demand for slaves derived in part from the demand for the commodities and services that slaves provided. Changes in slave occupations and variability in prices for slave-produced goods therefore created movements in slave prices. As slaves replaced increasingly expensive indentured servants in the New World, their prices went up. In the period 1748 to 1775, slave prices in British America rose nearly 30 percent. As cotton prices fell in the 1840s, Southern slave prices also fell. But, as the demand for cotton and tobacco grew after about 1850, the prices of slaves increased as well.

Interregional Price Differences

Differences in demand across regions led to transitional regional price differences, which in turn meant large movements of slaves. Yet because planters experienced greater stability among their workforce when entire plantations moved, 84 percent of slaves were taken to the lower South in this way rather than being sold piecemeal.

Time of Year and Price

Demand sometimes had to do with the time of year a sale took place. For example, slave prices in the New Orleans market were 10 to 20 percent higher in January than in September. Why? September was a busy time of year for plantation owners: the opportunity cost of their time was relatively high. Prices had to be relatively low for them to be willing to travel to New Orleans during harvest time.

Expectations and Prices

One additional demand factor loomed large in determining slave prices: the expectation of continued legal slavery. As the American Civil War progressed, prices dropped dramatically because people could not be sure that slavery would survive. In New Orleans, prime male slaves sold on average for $1,381 in 1861 and for $1,116 in 1862. Burgeoning inflation meant that real prices fell considerably more. By war’s end, slaves sold for a small fraction of their 1860 price.


[Figure omitted. Source: Data supplied by Stanley Engerman and reported in Walton and Rockoff (1994).]

PROFITABILITY, EFFICIENCY, AND EXPLOITATION

That slavery was profitable seems almost obvious. Yet scholars have argued furiously about this matter. On one side stand antebellum writers such as Hinton Rowan Helper and Frederick Law Olmsted, many antebellum abolitionists, and contemporary scholars like Eugene Genovese (at least in his early writings), who speculated that American slavery was unprofitable, inefficient, and incompatible with urban life. On the other side are scholars who have marshaled masses of data to support their contention that Southern slavery was profitable and efficient relative to free labor and that slavery suited cities as well as farms. These researchers stress the similarity between slave markets and markets for other sorts of capital.

Consensus That Slavery Was Profitable

This battle has largely been won by those who claim that New World slavery was profitable. Much like other businessmen, New World slaveowners responded to market signals — adjusting crop mixes, reallocating slaves to more profitable tasks, hiring out idle slaves, and selling slaves for profit. One well-known instance suggests that free workers thought urban slavery worked all too well: employees of the Tredegar Iron Works in Richmond, Virginia, went out on their first strike in 1847 to protest the use of slave labor at the Works.

Fogel and Engerman’s Time on the Cross

Carrying the banner of the “slavery was profitable” camp is Nobel laureate Robert Fogel. Perhaps the most controversial book ever written about American slavery is Time on the Cross, published in 1974 by Fogel and co-author Stanley Engerman. These men were among the first to use modern statistical methods, computers, and large datasets to answer a series of empirical questions about the economics of slavery. To find profit levels and rates of return, they built upon the work of Alfred Conrad and John Meyer, who in 1958 had calculated similar measures from data on cotton prices, physical yield per slave, demographic characteristics of slaves (including expected lifespan), maintenance and supervisory costs, and (in the case of females) number of children. To estimate the relative efficiency of farms, Fogel and Engerman devised an index of “total factor productivity,” which measured the output per average unit of input on each type of farm. They included in this index controls for quality of livestock and land and for age and sex composition of the workforce, as well as amounts of output, labor, land, and capital.
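To make the idea concrete, here is a minimal Python sketch of a total-factor-productivity comparison of this general kind: output divided by a geometrically weighted index of inputs. The input shares and farm figures below are hypothetical placeholders, not Fogel and Engerman’s data or their exact specification.

    # Output per geometrically weighted unit of labor, land, and capital.
    # The shares are hypothetical; Fogel and Engerman's index also controlled
    # for land and livestock quality and for workforce age and sex composition.
    def tfp(output, labor, land, capital, shares=(0.60, 0.25, 0.15)):
        s_labor, s_land, s_capital = shares
        return output / (labor**s_labor * land**s_land * capital**s_capital)

    free_farm = tfp(output=100, labor=10, land=50, capital=20)
    slave_farm = tfp(output=153, labor=10, land=50, capital=20)  # same inputs
    print(f"relative efficiency: {slave_farm / free_farm:.2f}")  # 1.53 in this toy case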

Time on the Cross generated praise — and considerable criticism. A major critique appeared in 1976 as a collection of articles entitled Reckoning with Slavery. Although some contributors took umbrage at the tone of the book and denied that it broke new ground, others focused on flawed and insufficient data and inappropriate inferences. Despite its shortcomings, Time on the Cross inarguably brought people’s attention to a new way of viewing slavery. The book also served as a catalyst for much subsequent research. Even Eugene Genovese, long an ardent proponent of the belief that Southern planters had held slaves for their prestige value, finally acknowledged that slavery was probably a profitable enterprise. Fogel himself refined and expanded his views in a 1989 book, Without Consent or Contract.

Efficiency Estimates

Fogel’s and Engerman’s research led them to conclude that investments in slaves generated high rates of return, masters held slaves for profit motives rather than for prestige, and slavery thrived in cities and rural areas alike. They also found that antebellum Southern farms were 35 percent more efficient overall than Northern ones and that slave farms in the New South were 53 percent more efficient than free farms in either North or South. This would mean that a slave farm that is otherwise identical to a free farm (in terms of the amount of land, livestock, machinery and labor used) would produce output worth 53 percent more than the free farm. On the eve of the Civil War, slavery flourished in the South and generated a rate of economic growth comparable to that of many European countries, according to Fogel and Engerman. They also discovered that, because slaves constituted a considerable portion of individual wealth, masters fed and treated their slaves reasonably well. Although some evidence indicates that infant and young slaves suffered much worse conditions than their freeborn counterparts, teenaged and adult slaves lived in conditions similar to — sometimes better than — those enjoyed by many free laborers of the same period.

Transition from Indentured Servitude to Slavery

One potent piece of evidence supporting the notion that slavery provides pecuniary benefits is this: slavery replaces other labor when it becomes relatively cheaper. In the early U.S. colonies, for example, indentured servitude was common. As the demand for skilled servants (and therefore their wages) rose in England, the cost of indentured servants went up in the colonies. At the same time, second-generation slaves became more productive than their forebears because they spoke English and did not have to adjust to life in a strange new world. Consequently, the balance of labor shifted away from indentured servitude and toward slavery.

Gang System

The value of slaves arose in part from the value of labor generally in the antebellum U.S. Scarce factors of production command economic rent, and labor was by far the scarcest available input in America. Moreover, a large proportion of the reward to owning and working slaves resulted from innovative labor practices. Certainly, the use of the “gang” system in agriculture contributed to profits in the antebellum period. In the gang system, groups of slaves performed synchronized tasks under the watchful overseer’s eye, much like parts of a single machine. Masters found that treating people like machinery paid off handsomely.

Antebellum slaveowners experimented with a variety of other methods to increase productivity. They developed an elaborate system of “hand ratings” in order to improve the match between the slave worker and the job. Hand ratings categorized slaves by age and sex and rated their productivity relative to that of a prime male field hand. Masters also capitalized on the native intelligence of slaves by using them as agents to receive goods, keep books, and the like.

Use of Positive Incentives

Masters offered positive incentives to make slaves work more efficiently. Slaves often had Sundays off. Slaves could sometimes earn bonuses in cash or in kind, or quit early if they finished tasks quickly. Some masters allowed slaves to keep part of the harvest or to work their own small plots. In places, slaves could even sell their own crops. To prevent stealing, however, many masters limited the products that slaves could raise and sell, confining them to corn or brown cotton, for example. In antebellum Louisiana, slaves even had under their control a sum of money called a peculium. This served as a sort of working capital, enabling slaves to establish thriving businesses that often benefited their masters as well. Yet these practices may have helped lead to the downfall of slavery, for they gave slaves a taste of freedom that left them longing for more.

Slave Families

Masters profited from reproduction as well as production. Southern planters encouraged slaves to have large families because U.S. slaves lived long enough — unlike those elsewhere in the New World — to generate more revenue than cost over their lifetimes. But researchers have found little evidence of slave breeding; instead, masters encouraged slaves to live in nuclear or extended families for stability. Lest one think sentimentality triumphed on the Southern plantation, one need only recall the willingness of most masters to sell if the bottom line was attractive enough.

Profitability and African Heritage

One element that contributed to the profitability of New World slavery was the African heritage of slaves. Africans, more than indigenous Americans, were accustomed to the discipline of agricultural practices and knew metalworking. Some scholars surmise that Africans, relative to Europeans, could better withstand tropical diseases and, unlike Native Americans, also had some exposure to the European disease pool.

Ease of Identifying Slaves

Perhaps the most distinctive feature of Africans, however, was their skin color. Because they looked different from their masters, their movements were easy to monitor. Denying slaves education, property ownership, contractual rights, and other things enjoyed by those in power was simple: one needed only to look at people to ascertain their likely status. Using color was a low-cost way of distinguishing slaves from free persons. For this reason, the colonial practices that freed slaves who converted to Christianity quickly faded away. Deciphering true religious beliefs is far more difficult than establishing skin color. Other slave societies have used distinguishing marks like brands or long hair to denote slaves, yet color is far more immutable and therefore better as a cheap way of keeping slaves separate. Skin color, of course, can also serve as a racist identifying mark even after slavery itself disappears.

Profit Estimates

Slavery never generated superprofits, because people always had the option of putting their money elsewhere. Nevertheless, investment in slaves offered a rate of return — about 10 percent — that was comparable to returns on other assets. Slaveowners were not the only ones to reap rewards, however. So too did cotton consumers who enjoyed low prices and Northern entrepreneurs who helped finance plantation operations.
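A rate-of-return figure of this kind falls out of a Conrad-and-Meyer-style calculation: find the discount rate at which a slave’s expected net annual earnings over a working life just repay the purchase price. The Python sketch below shows the mechanics with hypothetical round numbers, not Conrad and Meyer’s actual estimates.

    # Internal rate of return, found by bisection: the rate r at which the
    # discounted stream of net earnings exactly equals the purchase price.
    def irr(price, annual_net_earnings, years, lo=0.0, hi=1.0):
        def npv(r):
            return sum(annual_net_earnings / (1 + r) ** t
                       for t in range(1, years + 1)) - price
        for _ in range(60):
            mid = (lo + hi) / 2
            if npv(mid) > 0:
                lo = mid   # earnings more than repay the price: rate is higher
            else:
                hi = mid
        return (lo + hi) / 2

    # Hypothetical: $1,000 purchase, $120 net earnings per year, 20-year working life.
    print(f"{irr(1000, 120, 20):.1%}")   # roughly 10%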

Exploitation Estimates

So slavery was profitable; was it an efficient way of organizing the workforce? On this question, considerable controversy remains. Slavery might well have profited masters, but only because they exploited their chattel. What is more, slavery could have locked people into a method of production and way of life that might later have proven burdensome.

Fogel and Engerman (1974) claimed that slaves kept about ninety percent of what they produced. Because these scholars also found that agricultural slavery produced relatively more output for a given set of inputs, they argued that slaves may actually have shared in the overall material benefits resulting from the gang system. Other scholars contend that slaves in fact kept less than half of what they produced and that slavery, while profitable, certainly was not efficient. On the whole, current estimates suggest that the typical slave received only about fifty percent of the extra output that he or she produced.
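These competing estimates reduce to a simple expropriation-rate calculation: the share of the value a slave produced that the owner retained rather than returned as maintenance (the scholars differ over what to count as output and as compensation). A sketch with hypothetical round numbers, not any scholar’s figures:

    # Expropriation rate: the share of the slave's product kept by the owner.
    annual_output = 100.0   # value produced per year (hypothetical)
    maintenance = 50.0      # food, clothing, and shelter received (hypothetical)

    expropriation_rate = (annual_output - maintenance) / annual_output
    print(f"share kept by the slave: {1 - expropriation_rate:.0%}")  # 50% here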

Did Slavery Retard Southern Economic Development?

Gavin Wright (1978) called attention as well to the difference between the short run and the long run. He noted that slaves accounted for a very large proportion of most masters’ portfolios of assets. Although slavery might have seemed an efficient means of production at a point in time, it tied masters to a certain system of labor which might not have adapted quickly to changed economic circumstances. This argument has some merit. Although the South’s growth rate compared favorably with that of the North in the antebellum period, a considerable portion of wealth was held in the hands of planters. Consequently, commercial and service industries lagged in the South. The region also had far less rail transportation than the North. Yet many plantations used the most advanced technologies of the day, and certain innovative commercial and insurance practices appeared first in transactions involving slaves. What is more, although the South fell behind the North and Great Britain in its level of manufacturing, it compared favorably to other advanced countries of the time. In sum, no clear consensus emerges as to whether the antebellum South created a standard of living comparable to that of the North or, if it did, whether it could have sustained it.

Ultimately, the South’s system of law, politics, business, and social customs strengthened the shackles of slavery and reinforced racial stereotyping. As such, it was undeniably evil. Yet, because slaves constituted valuable property, their masters had ample incentives to take care of them. And, by protecting the property rights of masters, slave law necessarily sheltered the persons embodied within. In a sense, the apologists for slavery were right: slaves sometimes fared better than free persons because powerful people had a stake in their well-being.

Conclusion: Slavery Cannot Be Seen As Benign

But slavery cannot be thought of as benign. In terms of material conditions, diet, and treatment, Southern slaves may have fared as well in many ways as the poorest class of free citizens. Yet the root of slavery is coercion. By its very nature, slavery involves involuntary transactions. Slaves are property, whereas free laborers are persons who make choices (at times constrained, of course) about the sort of work they do and the number of hours they work.

The behavior of former slaves after abolition clearly reveals that they cared strongly about the manner of their work and valued their non-work time more highly than masters did. Even the most benevolent former masters in the U.S. South found it impossible to entice their former chattels back into gang work, even with large wage premiums. Nor could they persuade women back into the labor force: many female ex-slaves simply chose to stay at home. In the end, perhaps slavery is an economic phenomenon only because slave societies fail to account for the incalculable costs borne by the slaves themselves.

REFERENCES AND FURTHER READING

For studies pertaining to the economics of slavery, see particularly Aitken, Hugh, editor. Did Slavery Pay? Readings in the Economics of Black Slavery in the United States. Boston: Houghton-Mifflin, 1971.

Barzel, Yoram. “An Economic Analysis of Slavery.” Journal of Law and Economics 20 (1977): 87-110.

Conrad, Alfred H., and John R. Meyer. The Economics of Slavery and Other Studies. Chicago: Aldine, 1964.

David, Paul A., Herbert G. Gutman, Richard Sutch, Peter Temin, and Gavin Wright. Reckoning with Slavery: A Critical Study in the Quantitative History of American Negro Slavery. New York: Oxford University Press, 1976.

Fogel, Robert W. Without Consent or Contract. New York: Norton, 1989.

Fogel, Robert W., and Stanley L. Engerman. Time on the Cross: The Economics of American Negro Slavery. New York: Little, Brown, 1974.

Galenson, David W. Traders, Planters, and Slaves: Market Behavior in Early English America. New York: Cambridge University Press, 1986.

Kotlikoff, Laurence. “The Structure of Slave Prices in New Orleans, 1804-1862.” Economic Inquiry 17 (1979): 496-518.

Ransom, Roger L., and Richard Sutch. One Kind of Freedom: The Economic Consequences of Emancipation. New York: Cambridge University Press, 1977.

Ransom, Roger L., and Richard Sutch. “Capitalists Without Capital.” Agricultural History 62 (1988): 133-160.

Vedder, Richard K. “The Slave Exploitation (Expropriation) Rate.” Explorations in Economic History 12 (1975): 453-57.

Wright, Gavin. The Political Economy of the Cotton South: Households, Markets, and Wealth in the Nineteenth Century. New York: Norton, 1978.

Yasuba, Yasukichi. “The Profitability and Viability of Slavery in the U.S.” Economic Studies Quarterly 12 (1961): 60-67.

For accounts of slave trading and sales, see
Bancroft, Frederic. Slave Trading in the Old South. New York: Ungar, 1931.
Tadman, Michael. Speculators and Slaves. Madison: University of Wisconsin Press, 1989.

For discussion of the profession of slave catchers, see
Campbell, Stanley W. The Slave Catchers. Chapel Hill: University of North Carolina Press, 1968.

To read about slaves in industry and urban areas, see
Dew, Charles B. Slavery in the Antebellum Southern Industries. Bethesda: University Publications of America, 1991.

Goldin, Claudia D. Urban Slavery in the American South, 1820-1860: A Quantitative History. Chicago: University of Chicago Press, 1976.

Starobin, Robert. Industrial Slavery in the Old South. New York: Oxford University Press, 1970.

For discussions of masters and overseers, see
Oakes, James. The Ruling Race: A History of American Slaveholders. New York: Knopf, 1982.

Roark, James L. Masters Without Slaves. New York: Norton, 1977.

Scarborough, William K. The Overseer: Plantation Management in the Old South. Baton Rouge: Louisiana State University Press, 1966.

On indentured servitude, see
Galenson, David. “Rise and Fall of Indentured Servitude in the Americas: An Economic Analysis.” Journal of Economic History 44 (1984): 1-26.

Galenson, David. White Servitude in Colonial America: An Economic Analysis. New York: Cambridge University Press, 1981.

Grubb, Farley. “Immigrant Servant Labor: Their Occupational and Geographic Distribution in the Late Eighteenth Century Mid-Atlantic Economy.” Social Science History 9 (1985): 249-75.

Menard, Russell R. “From Servants to Slaves: The Transformation of the Chesapeake Labor System.” Southern Studies 16 (1977): 355-90.

On slave law, see
Fede, Andrew. “Legal Protection for Slave Buyers in the U.S. South.” American Journal of Legal History 31 (1987).

Finkelman, Paul. An Imperfect Union: Slavery, Federalism, and Comity. Chapel Hill: University of North Carolina Press, 1981.

Finkelman, Paul. Slavery, Race, and the American Legal System, 1700-1872. New York: Garland, 1988.

Finkelman, Paul, ed. Slavery and the Law. Madison: Madison House, 1997.

Flanigan, Daniel J. The Criminal Law of Slavery and Freedom, 1800-68. New York: Garland, 1987.

Morris, Thomas D. Southern Slavery and the Law, 1619-1860. Chapel Hill: University of North Carolina Press, 1996.

Schafer, Judith K. Slavery, The Civil Law, and the Supreme Court of Louisiana. Baton Rouge: Louisiana State University Press, 1994.

Tushnet, Mark V. The American Law of Slavery, 1810-60: Considerations of Humanity and Interest. Princeton: Princeton University Press, 1981.

Wahl, Jenny B. The Bondsman’s Burden: An Economic Analysis of the Common Law of Southern Slavery. New York: Cambridge University Press, 1998.

Other useful sources include
Berlin, Ira, and Philip D. Morgan, eds. The Slave’s Economy: Independent Production by Slaves in the Americas. London: Frank Cass, 1991.

Berlin, Ira, and Philip D. Morgan, eds. Cultivation and Culture: Labor and the Shaping of Slave Life in the Americas. Charlottesville: University Press of Virginia, 1993.

Elkins, Stanley M. Slavery: A Problem in American Institutional and Intellectual Life. Chicago: University of Chicago Press, 1976.

Engerman, Stanley, and Eugene Genovese. Race and Slavery in the Western Hemisphere: Quantitative Studies. Princeton: Princeton University Press, 1975.

Fehrenbacher, Don. Slavery, Law, and Politics. New York: Oxford University Press, 1981.

Franklin, John H. From Slavery to Freedom. New York: Knopf, 1988.

Genovese, Eugene D. Roll, Jordan, Roll. New York: Pantheon, 1974.

Genovese, Eugene D. The Political Economy of Slavery: Studies in the Economy and Society of the Slave South. Middletown, CT: Wesleyan, 1989.

Hindus, Michael S. Prison and Plantation. Chapel Hill: University of North Carolina Press, 1980.

Margo, Robert, and Richard Steckel. “The Heights of American Slaves: New Evidence on Slave Nutrition and Health.” Social Science History 6 (1982): 516-538.

Phillips, Ulrich B. American Negro Slavery: A Survey of the Supply, Employment and Control of Negro Labor as Determined by the Plantation Regime. New York: Appleton, 1918.

Stampp, Kenneth M. The Peculiar Institution: Slavery in the Antebellum South. New York: Knopf, 1956.

Steckel, Richard. “Birth Weights and Infant Mortality Among American Slaves.” Explorations in Economic History 23 (1986): 173-98.

Walton, Gary, and Hugh Rockoff. History of the American Economy. Orlando: Harcourt Brace, 1994, chapter 13.

Whaples, Robert. “Where Is There Consensus among American Economic Historians?” Journal of Economic History 55 (1995): 139-154.

Data can be found at
U.S. Bureau of the Census, Historical Statistics of the United States, 1970, collected in ICPSR study number 0003, “Historical Demographic, Economic and Social Data: The United States, 1790-1970,” located at http://fisher.lib.virginia.edu/census/.

Citation: Bourne, Jenny. “Slavery in the United States”. EH.Net Encyclopedia, edited by Robert Whaples. March 26, 2008. URL http://eh.net/encyclopedia/slavery-in-the-united-states/

The International Natural Rubber Market, 1870-1930

Zephyr Frank, Stanford University, and Aldo Musacchio, Ibmec São Paulo

Overview of the Rubber Market, 1870-1930

Natural rubber was first used by the indigenous peoples of the Amazon basin for a variety of purposes. By the middle of the eighteenth century, Europeans had begun to experiment with rubber as a waterproofing agent. In the early nineteenth century, rubber was used to make waterproof shoes (Dean, 1987). The best source of latex, the milky fluid from which natural rubber products were made, was Hevea brasiliensis, which grew predominantly in the Brazilian Amazon (but also in the Amazonian regions of Bolivia and Peru). Thus, by geographical accident, the first period of rubber’s commercial history, from the late 1700s through 1900, was centered in Brazil; the second period, from roughly 1910 on, was increasingly centered in East Asia as the result of plantation development. The first century of rubber was typified by relatively low levels of production, high wages, and very high prices; the period following 1910 was one of rapidly increasing production, low wages, and falling prices.

Uses of Rubber

The early uses of the material were quite limited. The initial problem with natural rubber was its sensitivity to temperature changes, which altered its shape and consistency. In 1839 Charles Goodyear developed the process called vulcanization, which modified rubber so that it could withstand extreme temperatures. Only then did natural rubber become suitable for producing hoses, tires, industrial bands, sheets, shoes, shoe soles, and other products. What initially set off the “Rubber Boom,” however, was the popularization of the bicycle. The boom was then accentuated after 1900 by the development of the automobile industry and the expansion of the tire industry to produce car tires (Weinstein, 1983; Dean, 1987).

Brazil’s Initial Advantage and High-Wage Cost Structure

Until the turn of the twentieth century, Brazil and the countries that share the Amazon basin (i.e., Bolivia, Venezuela, and Peru) were the only exporters of natural rubber. Brazil sold almost ninety percent of the total rubber traded in the world. The fundamental fact that explains Brazil’s entry into and domination of natural rubber production during the period 1870 through roughly 1913 is that most of the world’s rubber trees grew naturally in the Amazon region of Brazil. The Brazilian rubber industry developed a high-wage cost structure as the result of labor scarcity and lack of competition in the early years of rubber production. Since there were no credit markets to finance the journeys of workers from other parts of Brazil to the Amazon, workers paid for their passage with loans from their future employers. Much like indentured servitude during colonial times in the United States, these loans were paid back to the employers with work once the laborers were established in the Amazon basin. Another factor that increased the costs of producing rubber was that most provisions for tappers in the field had to be shipped in from outside the region at great expense (Barham and Coomes, 1994). This made Brazilian production very expensive compared to the future plantations in Asia. Nevertheless, Brazil’s system of production worked well as long as two conditions were met: first, that the demand for rubber did not grow too quickly, for wild rubber production could not expand rapidly owing to labor and environmental constraints; and second, that no competition based on some more efficient arrangement of the factors of production emerged. As can be seen in Figure 1, Brazil dominated the natural rubber market until the first decade of the twentieth century.

Between 1900 and 1913, these conditions ceased to hold. First, the demand for rubber skyrocketed [see Figure 2], providing a huge incentive for other producers to enter the market. Prices had been high before, but Brazilian supply had been quite capable of meeting demand; now, prices were high and demand appeared insatiable. Plantations, which had been possible since the 1880s, now became a reality mainly in the colonies of Southeast Asia. Because Brazil was committed to a high-wage, labor-scarce production regime, it was unable to counter the entry of Asian plantations into the market it had dominated for half a century.

Southeast Asian Plantations Develop a Low-Cost, Labor-Intensive Alternative

In Asia, the British and Dutch drew upon their superior stocks of capital and vast pools of cheap colonial labor to transform rubber collection into a low-cost, labor-intensive industry. Investment per tapper in Brazil was reportedly 337 pounds sterling circa 1910; in the low-cost Asian plantations, investment was estimated at just 210 pounds per worker (Dean, 1987). Not only were Southeast Asian tappers cheaper, they were potentially eighty percent more productive (Dean, 1987).
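
Taken together, these two figures imply a substantial capital-cost gap per unit of output. As a back-of-the-envelope reading of the numbers above (not an estimate from Dean), with Asian tappers up to 1.8 times as productive:

\[
\frac{337}{1.0} = 337 \ \text{pounds per unit of output in Brazil}
\qquad \text{vs.} \qquad
\frac{210}{1.8} \approx 117 \ \text{pounds per unit on Asian plantations,}
\]

roughly a threefold difference in capital tied up per unit of rubber.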

Ironically, the new plantation system proved equally susceptible to uncertainty and competition. Unexpected sources of uncertainty arose in the technological development of automobile tires. Despite their colonial holdings, the British and Dutch were unable to collude to control production, and prices plummeted after 1910. When the British did attempt to restrict production in the 1920s, the United States attempted to set up plantations in Brazil and the Dutch were happy to take market share. Yet it was too late for Brazil: the cost structure of Southeast Asian plantations could not be matched. In a sense, then, the game was no longer worth the candle: in order to compete in rubber production, Brazil would have had to have significantly lower wages, which would only have been possible with a vastly expanded transport network and domestic agricultural sector in the hinterland of the Amazon basin. Such an expensive solution made no economic sense in the 1910s and 1920s, when coffee and nascent industrialization in São Paulo offered much more promising prospects.

Natural Rubber Extraction and Commercialization: Brazil

Rubber Tapping in the Amazon Rainforest

One disadvantage Brazilian rubber producers suffered was that the organization of production depended on the distribution of Hevea brasiliensis trees in the forest. The owner (or, often, the lease concessionaire) of a large land plot would hire tappers to gather rubber by gouging the tree trunk with an axe. In Brazil, the usual practice was to make a deep cut in the trunk and hang a small bowl to collect the latex that seeped out. Typically, tappers worked two “rows” of trees, alternating between them daily; each “row” consisted of several circular trails through the forest, each passing more than 100 trees. Rubber could only be collected during the tapping season (August to January), and the living conditions of tappers were hard. As the need for rubber expanded, tappers had to be sent deep into the Amazon rainforest to look for unexplored land with more productive trees. Tappers established their shacks close to the river because rubber, once smoked, was sent by boat to Manaus (capital of the state of Amazonas) or to Belém (capital of the state of Pará), both entrepôts for rubber exports to Europe and the U.S.[1]

Competition or Exploitation? Tappers and Seringalistas

After collecting the rubber, tappers would go back to their shacks and smoke the resin in order to make balls of partially filtered and purified rough rubber that could be sold at the ports. There is much discussion about the commercialization of the product. Weinstein (1983) argues that the seringalista, the employer of the rubber tapper, controlled the transportation of rubber to the ports, where he sold it, often in exchange for goods that could be sold back to the tapper at a large markup. In this economy money was scarce, and the “wages” of tappers, or seringueiros, depended on the current price of rubber; the usual agreement was for tappers to split the gross profits with their patrons. These wages were most commonly paid in goods, such as cigarettes, food, and tools. According to Weinstein (1983), the seringalistas overpriced these goods to extract larger profits from the seringueiros’ work. Barham and Coomes (1994), on the other hand, argue that the structure of the market in the Amazon was less closed and that independent traders would travel around the basin in small boats, willing to exchange goods for rubber. Poor monitoring by employers and an absent state facilitated these under-the-counter transactions, which allowed tappers to get better pay for their work.

Exporting Rubber

From the ports, rubber passed into the hands of mainly Brazilian, British, and American exporters. Contrary to what Weinstein (1983) argued, Brazilian producers or local merchants from the interior could choose to send the rubber on consignment to a New York commission house rather than sell it to an exporter in the Amazon (Shelley, 1918). Rubber was taken, like other commodities, to ports in Europe and the U.S. to be distributed to the industries that bought large amounts of the product on the London or New York commodities exchanges. A large part of the rubber produced was traded at these exchanges, but tire manufacturers and other large consumers also made direct purchases from distributors in the country of origin.[2]

Rubber Production in Southeast Asia

Seeds Smuggled from Brazil to Britain

Hevea brasiliensis, the most important type of rubber tree, was an Amazonian species. This is why the countries of the Amazon basin were the main producers of rubber at the beginning of the international rubber trade. How, then, did British and Dutch colonies in Southeast Asia end up dominating the market? Brazil tried to prevent Hevea brasiliensis seeds from being exported, as the Brazilian government knew that its position as the main producer of rubber assured it the profits from the rubber trade. Protecting property rights in seeds proved a futile exercise. In 1876, the Englishman and aspiring author and rubber expert Henry Wickham smuggled 70,000 seeds to London, a feat for which he earned Brazil’s eternal opprobrium and an English knighthood. From these seeds, 2,800 plants were raised at the Royal Botanical Gardens in London (Kew Gardens) and then shipped to the Peradeniya Gardens in Ceylon. In 1877 a case of 22 plants reached Singapore and was planted at the Singapore Botanical Garden. In the same year the first plants arrived in the Malay States. Since rubber trees needed six to eight years to be mature enough to yield good rubber, tapping began in the 1880s.

Scientific Research to Maximize Yields

In order to develop rubber extraction in the Malay States, more scientific intervention was needed. In 1888, H. N. Ridley was appointed director of the Singapore Botanical Garden and began experimenting with tapping methods. The final result of all the experiments with different tapping methods in Southeast Asia was the discovery of how to extract rubber in such a way that the tree would maintain a high yield for a long period of time. Rather than making a deep gouge with an axe, as in Brazil, Southeast Asian tappers scraped the trunk of the tree with a series of overlapping Y-shaped cuts, at the bottom of which a channel led into a collecting receptacle. According to Akers (1912), the tapping techniques used in Asia ensured the exploitation of the trees for longer periods, because the Brazilian technique scarred the tree’s bark and lowered yields over time.

Rapid Commercial Development and the Automobile Boom

Commercial planting in the Malay States began in 1895. The development of large-scale plantations was slow at first because of the lack of capital. Investors did not become interested in plantations until the prospects for rubber improved radically with the spectacular development of the automobile industry. By 1905, European capitalists were sufficiently interested in investing in large-scale plantations in Southeast Asia to plant some 38,000 acres of trees. Between 1905 and 1911 the annual increase was over 70,000 acres per year, and, by the end of 1911, the acreage in the Malay States reached 542,877 (Baxendale, 1913). The expansion of plantations was made possible by the sophisticated organization of such enterprises. Joint-stock companies were created to exploit the land grants, and capital was raised through stock issues on the London Stock Exchange. The high returns of the first years (1906-1910) made investors ever more optimistic, and capital flowed in large amounts. Plantations depended on a very disciplined system of labor and an intensive use of land.

Malaysia’s Advantages over Brazil

In addition to the intensive use of land, the production system in Malaysia had several economic advantages over that of Brazil. First, in the Malay States there was no specific tapping season, unlike Brazil where the rain did not allow tappers to collect rubber during six months of the year. Second, health conditions were better on the plantations, where rubber companies typically provided basic medical care and built infirmaries. In Brazil, by contrast, yellow fever and malaria made survival harder for rubber tappers who were dispersed in the forest and without even rudimentary medical attention. Finally, better living conditions and the support of the British and Dutch colonial authorities helped to attract Indian labor to the rubber plantations. Japanese and Chinese labor also immigrated to the plantations in Southeast Asia in response to relatively high wages (Baxendale, 1913).

Initially, demand for rubber was associated with specialized industrial components (belts and gaskets, etc.), consumer goods (golf balls, shoe soles, galoshes, etc.), and bicycle tires. Prior to the development of the automobile as a mass-marketed phenomenon, the Brazilian wild rubber industry was capable of meeting world demand; furthermore, it was impossible for rubber producers to predict the scope and growth of the automobile industry prior to the 1900s. Thus, as Figure 3 indicates, growth in demand, as measured by U.K. imports, was not particularly rapid in the period 1880-1899. There was no reason to believe, in the early 1880s, that demand for rubber would explode as it did in the 1890s. Even as demand rose in the 1890s with the bicycle craze, the rate of increase was not beyond the capacity of wild rubber producers in Brazil and elsewhere (see Figure 3). High rubber prices did not induce rapid increases in production or plantation development in the nineteenth century. In this context, Brazil developed a reasonably efficient industry based on its natural resource endowment and limited labor and capital sources.

In the first three decades of the twentieth century, major changes in both supply and demand created unprecedented uncertainty in rubber markets. On the supply side, Southeast Asian rubber plantations transformed the cost structure and capacity of the industry. On the demand side, and directly inducing plantation development, automobile production and associated demand for rubber exploded. Then, in the 1920s, competition and technological advance in tire production led to another shift in the market with profound consequences for rubber producers and tire manufacturers alike.

Rapid Price Fluctuations and Output Lags

Figure 1 shows the fluctuations of the price of Ribbed Smoked Sheet type 1 (RSS1) in London on an annual basis. The movements from 1906 to 1910 were very volatile on a monthly basis as well, complicating forecasts and making it hard for producers to decide how to react to market signals. Even though price and quantity information was published every month in the major rubber journals, producers did not have a good idea of what was going to happen in the long run. If prices were high today, they wanted to expand the area planted, but since it took six to eight years for trees to yield good rubber, they would have to wait to see the result of the expansion in production many years and price swings later. Since many producers reacted in the same way, periods of overproduction six to eight years after a price rise were common.[3] Overproduction meant low prices, but since investments were mostly sunk (the costs of preparing the land, planting the trees, and bringing in the workers could not be recovered, and these resources could not easily be shifted to other uses), the market tended to stay oversupplied for long periods of time.
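
The dynamic described here is essentially a cobweb cycle: supply responds to price only after a long planting-to-maturity lag. As a rough illustration only (the model and all parameters below are hypothetical, not drawn from the sources cited in this article), a few lines of Python reproduce the boom-bust pattern:

# Cobweb-style sketch: a long planting-to-maturity lag generates recurring
# gluts. Parameters are hypothetical, chosen only to make the cycle visible.
LAG = 7            # years from planting to tapping (the text says six to eight)
YEARS = 40
a, b = 100.0, 0.5  # linear inverse demand: price = a - b * output
alpha = 2.0        # planting response to the current price (hypothetical)
base_output = 60.0 # output of pre-existing mature acreage

plantings = [0.0] * YEARS
prices = []
for t in range(YEARS):
    # today's output reflects planting decisions made LAG years ago
    matured = plantings[t - LAG] if t >= LAG else 0.0
    price = max(a - b * (base_output + matured), 0.0)
    prices.append(price)
    # producers respond to today's price, ignoring supply already in the pipeline
    plantings[t] = alpha * price

# a run of high prices is followed, LAG years later, by a glut and a collapse
print([round(p, 1) for p in prices])

In the sketch, every stretch of high prices triggers heavy planting, and the matured trees flood the market LAG years later, mirroring the recurring overproduction described above.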

[Figure 1: Annual price of Malaysian rubber (RSS1, London), plotted over time.]

The years 1905 and 1906 marked historic highs for rubber prices, surpassed only briefly in 1909 and 1910. The area planted in rubber throughout Asia grew from 15,000 acres in 1901 to 433,000 acres in 1907; these plantings matured circa 1913, and cultivated rubber surpassed Brazilian wild rubber in volume exported.[4] The growth of the Asian rubber industry soon swamped Brazil’s market share and drove prices well below pre-boom levels. After the major price peak of 1910, prices plummeted and followed a downward trend throughout the 1920s. By 1921, the bottom had dropped out of the market, and Malaysian rubber producers were induced by the British colonial authorities to enter into a scheme to restrict production. Plantations received export coupons that set quotas limiting the supply of rubber. The restriction did not affect prices until 1924, when consumption overtook production and prices started to rise rapidly. The scheme enjoyed only short-lived success, because competition from Dutch plantations in Southeast Asia and elsewhere drove prices down by 1926. The plan was officially ended in 1928.[5]

Automobiles’ Impact on Rubber Demand

In order to understand the boom in rubber production, it is fundamental to look at the automobile industry. Cars had originally been adapted from horse-drawn carriages; some ran on wooden wheels, some on metal, some shod, as it were, in solid rubber. In any case, the ride at the speeds cars were soon capable of was impossible to bear. The pneumatic tire was quickly adopted from the bicycle, and the automobile tire industry was born, soon to account for well over half of rubber company sales in the United States, where the vast majority of automobiles were manufactured in the early years of the industry.[6] The amount of rubber required to satisfy demand for automobile tires led first to a spike in rubber prices and second to the development of rubber plantations in Asia.[7]

The connection between automobiles, plantations, and the rubber tire industry was explicit and obvious to observers at the time. Harvey Firestone, son of the founder of the company, put it this way:

It was not until 1898 that any serious attention was paid to plantation development. Then came the automobile, and with it the awakening on the part of everybody that without rubber there could be no tires, and without tires there could be no automobiles. (Firestone, 1932, p. 41)

The emergence of a strong consuming sector linked to the automobile was thus necessary. For instance, the average price of rubber from 1880 to 1884 was 401 pounds sterling per ton; from 1900 to 1904, when the first plantations were beginning to be set up, the average price was 459 pounds sterling per ton. Asian plantations were therefore developed both in response to high rubber prices and in response to what everyone could see was an exponentially growing source of demand in automobiles. Previous consumers of rubber had not shown the kind of dynamism needed to spur entry by plantations into the natural rubber market, even though prices were very high throughout most of the second half of the nineteenth century.

Producers Need to Forecast Future Supply and Demand Conditions

Rubber producers made decisions about production and planting during the period 1900-1912 with the aim of reaping windfall profits rather than with a view to the long-run sustainability of their business. High prices were an incentive for all to increase production, but increasing production, through more acreage planted, could mean a loss for everyone in the future (because too much supply could drive prices down). Current prices, moreover, were a poor guide when investment decisions had to be made six or more years in advance, as was the case in plantation production: in order to invest in plantations, investors had to predict the future interaction of supply and demand. Demand, although high and apparently relatively price inelastic, was not entirely predictable. It was predictable enough, however, for planters to expand acreage in rubber in Asia at a dramatic rate. Planters were often uncertain as to the aggregate level of supply: new plantations were constantly coming into production while others were entering into decline or bankruptcy. Their investments could thus yield a great deal in the short run, but if all producers reacted in the same way, prices fell and profits fell with them. This is what happened in the 1920s, after all the acreage expansion of the first two decades of the century.

Demand Growth Unexpectedly Slows in the 1920s

Plantings between 1912 and 1916 were destined to come into production during a period in which growth in the automobile industry leveled off significantly owing to the recession of 1920-21. Making matters worse for rubber producers, major advances in tire technology further curbed demand: for example, the change from corded to balloon tires increased average tire tread mileage from 8,000 to 15,000 miles, nearly halving, other things equal, the replacement demand for tires per mile driven.[8] The shift from corded to balloon tires thus decreased demand for natural rubber even as the automobile industry recovered from the recession in the early 1920s. In addition, better design of tire casings circa 1920 led to the growth of the retreading industry, the result of which was further saving on rubber. Finally, better techniques in cotton weaving lowered friction and heat and further extended tire life.[9] As rubber supplies increased and demand decreased and became more price inelastic, prices plummeted: neither demand nor price proved predictable over the long run, and suppliers paid a stiff price for overextending themselves during the boom years. Rubber tire manufacturers suffered the same fate: competition and technology (which they themselves introduced) pushed prices downward and, at the same time, flattened demand (Allen, 1936).[10]

Now, if one looks at the price of rubber and the rate of growth in demand as measured by imports in the 1920s, it is clear that the industry had overinvested in capacity. The consequences of technological change were dramatic for tire manufacturers’ profits as well as for rubber producers.

Conclusion

The natural rubber trade underwent several radical transformations over the period 1870 to 1930. Prior to 1910, it was characterized by high costs of production and high prices for final goods; during this period, most rubber was produced by tapping rubber trees in the Amazon region of Brazil. After 1900, and especially after 1910, rubber was increasingly produced on low-cost plantations in Southeast Asia. The price of rubber fell with plantation development and, at the same time, the volume of rubber demanded by car tire manufacturers expanded dramatically. Uncertainty in terms of both supply and demand (often driven by changing tire technology) meant that natural rubber producers and tire manufacturers alike experienced great volatility in returns. The overall evolution of the natural rubber trade and the related tire industry was toward large-volume, low-cost production in an internationally competitive environment marked by commodity price volatility and declining levels of profit as the industry matured.

References

Akers, C. E. Report on the Amazon Valley: Its Rubber Industry and Other Resources. London: Waterlow & Sons, 1912.

Allen, Hugh. The House of Goodyear. Akron: Superior Printing, 1936.

Alves Pinto, Nelson Prado. Política Da Borracha No Brasil. A Falência Da Borracha Vegetal. São Paulo: HUCITEC, 1984.

Babcock, Glenn D. History of the United States Rubber Company. Indiana: Bureau of Business Research, 1966.

Barham, Bradford, and Oliver Coomes. “The Amazon Rubber Boom: Labor Control, Resistance, and Failed Plantation Development Revisited.” Hispanic American Historical Review 74, no. 2 (1994): 231-57.

Barham, Bradford, and Oliver Coomes. Prosperity’s Promise. The Amazon Rubber Boom and Distorted Economic Development. Boulder: Westview Press, 1996.

Barham, Bradford, and Oliver Coomes. “Wild Rubber: Industrial Organisation and the Microeconomics of Extraction during the Amazon Rubber Boom (1860-1920).” Journal of Latin American Studies 26, no. 1 (1994): 37-72.

Baxendale, Cyril. “The Plantation Rubber Industry.” India Rubber World, 1 January 1913.

Blackford, Mansel, and K. Austin Kerr. BFGoodrich. Columbus: Ohio State University Press, 1996.

Brazil. Instituto Brasileiro de Geografia e Estatística. Anuário Estatístico Do Brasil. Rio de Janeiro: Instituto Brasileiro de Geografia e Estatística, 1940.

Dean, Warren. Brazil and the Struggle for Rubber: A Study in Environmental History. Cambridge: Cambridge University Press, 1987.

Drabble, J. H. Rubber in Malaya, 1876-1922. Oxford: Oxford University Press, 1973.

Firestone, Harvey Jr. The Romance and Drama of the Rubber Industry. Akron: Firestone Tire and Rubber Co., 1932.

Santos, Roberto. História Econômica Da Amazônia (1800-1920). São Paulo: T.A. Queiroz, 1980.

Schurz, William Lytle, O. D. Hargis, Curtis Fletcher Marbut, and C. B. Manifold. Rubber Production in the Amazon Valley. U.S. Bureau of Foreign and Domestic Commerce (Department of Commerce), Trade Promotion Series, no. 4; Crude Rubber Survey, no. 28. Washington: Government Printing Office, 1925.

Shelley, Miguel. “Financing Rubber in Brazil.” India Rubber World, 1 July 1918.

Weinstein, Barbara. The Amazon Rubber Boom, 1850-1920. Stanford: Stanford University Press, 1983.


Notes:

[1] Rubber tapping in the Amazon basin is described in Weinstein (1983), Barham and Coomes (1994), Stanfield (1998), and in several articles published in India Rubber World, the main journal on rubber trading. See, for example, the explanation of tapping in the October 1, 1910 issue, or “The Present and Future of the Native Havea Rubber Industry” in the January 1, 1913 issue. For a detailed analysis of the rubber industry by region in Brazil by contemporary observers, see Schurz et al. (1925).

[2] Newspapers such as The Economist or the London Times included sections on rubber trading, such as weekly or monthly reports of the market conditions, prices and other information. For the dealings between tire manufacturers and distributors in Brazil and Malaysia see Firestone (1932).

[3] Using cross-correlations of production and prices, we found that changes in production at time t were correlated with price changes in t-6 and t-8 (years). This is only weak evidence because these correlations are not statistically significant.
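
As an illustration of the kind of check this note describes (a sketch, not the authors’ code; the series below are synthetic placeholders, not historical data), the lagged correlations could be computed along these lines in Python:

# Correlate year-over-year changes in production with price changes
# lagged 6 to 8 years, as note [3] describes. Data here are synthetic.
import numpy as np

def lagged_corr(prod, price, lag):
    # correlation of d(production)_t with d(price)_(t - lag)
    d_prod = np.diff(prod)
    d_price = np.diff(price)
    return np.corrcoef(d_prod[lag:], d_price[:-lag])[0, 1]

rng = np.random.default_rng(0)
price = rng.normal(size=60).cumsum()               # stand-in price series
prod = np.roll(price, 7) + rng.normal(0, 0.5, 60)  # production echoing price 7 years later

for lag in (6, 7, 8):
    print(lag, round(lagged_corr(prod, price, lag), 2))

As the note itself cautions, such correlations are only suggestive unless they are statistically significant.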

[4] Drabble (1973), 213, 220. The expansion in acreage was accompanied by a boom in company formation.

[5] Drabble (1973), 192-199. This was the so-called Stevenson Committee restriction, which lasted from 1922 to 1926. The plan basically limited the amount of rubber each planter could export by assigning quotas through coupons.

[6] Pneumatic tires were first adapted to automobiles in 1896; Dunlop’s pneumatic bicycle tire was introduced in 1888. The great advantage of these tires over solid rubber was that they generated far less friction, extending tread life, and, of course, cushioned the ride and allowed for higher speeds.

[7] Early histories of the rubber industry tended to blame Brazilian “monopolists” for holding up supply and reaping windfall profits, see, e.g., Allen (1936), 116-117. In fact, rubber production in Brazil was far from monopolistic; other reasons account for supply inelasticity.

[8] Blackford and Kerr (1996), p. 88.

[9] The so-called “supertwist” weave allowed for the manufacture of larger, more durable tires, especially for trucks. Allen (1936), pp. 215-216.

[10] Allen (1936), p. 320.

Citation: Frank, Zephyr, and Aldo Musacchio. “The International Natural Rubber Market, 1870-1930”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/the-international-natural-rubber-market-1870-1930/

The Economics of the American Revolutionary War

Ben Baack, Ohio State University

By the time of the onset of the American Revolution, Britain had attained the status of a military and economic superpower. The thirteen American colonies were one part of a global empire generated by the British in a series of colonial wars beginning in the late seventeenth century and continuing on to the mid eighteenth century. The British military establishment increased relentlessly in size during this period as it engaged in the Nine Years War (1688-97), the War of Spanish Succession (1702-13), the War of Austrian Succession (1739-48), and the Seven Years War (1756-63). These wars brought considerable additions to the British Empire. In North America alone the British victory in the Seven Years War resulted in France ceding to Britain all of its territory east of the Mississippi River as well as all of Canada and Spain surrendering its claim to Florida (Nester, 2000).

Given the sheer magnitude of the British military and its empire, the actions taken by the American colonists for independence have long fascinated scholars. Why did the colonists want independence? How were they able to achieve a victory over what was at the time the world’s preeminent military power? What were the consequences of achieving independence? These and many other questions have engaged the attention of economic, legal, military, political, and social historians. In this brief essay we will focus only on the economics of the Revolutionary War.

Economic Causes of the Revolutionary War

Prior to the conclusion of the Seven Years War there was little, if any, reason to believe that one day the American colonies would undertake a revolution in an effort to create an independent nation-state. As a part of the empire the colonies were protected from foreign invasion by the British military. In return, the colonists paid relatively few taxes and could engage in domestic economic activity without much interference from the British government. For the most part the colonists were only asked to adhere to regulations concerning foreign trade. In a series of acts passed during the seventeenth century, known collectively as the Navigation Acts, Parliament required that all trade within the empire be conducted on ships constructed, owned, and largely manned by British citizens. Certain enumerated goods, whether exported or imported by the colonies, had to be shipped through England regardless of the final port of destination.

Western Land Policies

Economic incentives for independence significantly increased in the colonies as a result of a series of critical land policy decisions made by the British government. The Seven Years War had originated in a contest between Britain and France over control of the land from the Appalachian Mountains to the Mississippi River. During the 1740s the British government pursued a policy of promoting colonial land claims to, as well as settlement in, this area, which was at the time French territory. With the ensuing conflict of land claims, both nations resorted to military force, which ultimately led to the onset of the war. At the conclusion of the war, as a result of one of many concessions made by France in the 1763 Treaty of Paris, Britain acquired all the contested land west of its colonies to the Mississippi River. It was at this point that the British government began to implement a fundamental change in its western land policy.

Britain now reversed its long-time position of encouraging colonial claims to land and settlement in the west. The essence of the new policy was to establish British control of the former French fur trade in the west by excluding any settlement there by the Americans. Implementation involved three new areas of policy: (1) constructing the new rules of exclusion, (2) enforcing those rules, and (3) financing the cost of their enforcement. First, the rules of exclusion were set out under the terms of the Proclamation of 1763, whereby colonists were not allowed to settle in the west. This action legally nullified claims to land in the area by a host of individual colonists, land companies, and colonies. Second, enforcement of the new rules was delegated to the standing army of about 7,500 regulars newly stationed in the west. This army for the most part occupied former French forts, although some new ones were built. Among other things, the army was charged with keeping Americans out of the west as well as returning to the colonies any Americans who were already there. Third, the cost of enforcement was to be financed by levying taxes on the Americans. Thus, Americans were being asked to finance a British army whose charge was to keep Americans out of the west (Baack, 2004).

Tax Policies

Of all the potential options available for funding the new standing army in the west, why did the British decide to tax their American colonies? The answer is fairly straightforward. First of all, the victory over the French in the Seven Years War had come at a high price. Domestic taxes had been raised substantially during the war and total government debt had increased nearly twofold (Brewer, 1989). In addition, taxes were significantly higher in Britain than in the colonies. One estimate suggests the per capita tax burden in the colonies ranged from two to four percent of that in Britain (Palmer, 1959). And finally, the voting constituencies of the members of Parliament were in Britain, not the colonies. All things considered, Parliament viewed taxing the colonies as the obvious choice.

Accordingly, Parliament passed a series of tax acts, the revenue from which was to be used to help pay for the standing army in America. The first was the Sugar Act of 1764. Proposed by England’s Prime Minister, the act lowered tariff rates on non-British products from the West Indies and strengthened their collection. It was hoped this would reduce the incentive for smuggling and thereby increase tariff revenue (Bullion, 1982). The following year Parliament passed the Stamp Act, which imposed a tax commonly used in England. It required stamps for a broad range of legal documents as well as newspapers and pamphlets. While the colonial stamp duties were lower than those in England, they were expected to generate enough revenue to finance a substantial portion of the cost of the new standing army. The same year, passage of the Quartering Act imposed essentially a tax in kind by requiring the colonists to provide British military units with housing, provisions, and transportation. In 1767 the Townshend Acts imposed tariffs upon a variety of imported goods and established a Board of Customs Commissioners in the colonies to collect the revenue.

Boycotts

While the Americans could do little about the British army stationed in the west, they could do something about the new British taxes. American opposition to these acts was expressed initially in a variety of peaceful forms. While they did not have representation in Parliament, the colonists did attempt to exert some influence in it through petition and lobbying. However, it was the economic boycott that became by far the most effective means of altering the new British economic policies. In 1765 representatives from nine colonies met at the Stamp Act Congress in New York and organized a boycott of imported English goods. The boycott was so successful in reducing trade that English merchants lobbied Parliament for the repeal of the new taxes. Parliament soon responded to the political pressure. During 1766 it repealed both the Stamp and Sugar Acts (Johnson, 1997). In response to the Townshend Acts of 1767, a second major boycott started in 1768 in Boston and New York and subsequently spread to other cities, leading Parliament in 1770 to repeal all of the Townshend duties except the one on tea. In addition, Parliament decided at the same time not to renew the Quartering Act.

With these actions taken by Parliament, the Americans appeared to have successfully overturned the new British postwar tax agenda. However, Parliament had not given up what it believed to be its right to tax the colonies. On the same day it repealed the Stamp Act, Parliament passed the Declaratory Act, stating that the British government had the full power and authority to make laws governing the colonies in all cases whatsoever, including taxation. Legislation, not principles, had been overturned.

The Tea Act

Three years after the repeal of the Townshend duties, British policy was once again to emerge as an issue in the colonies. This time the American reaction was not peaceful. It all started when Parliament for the first time granted an exemption from the Navigation Acts. In an effort to assist the financially troubled British East India Company, Parliament passed the Tea Act of 1773, which allowed the company to ship tea directly to America. The grant of a major trading advantage to an already powerful competitor meant a potential financial loss for American importers and smugglers of tea. In December a small group of colonists responded by boarding three British ships in Boston harbor and throwing overboard several hundred chests of tea owned by the East India Company (Labaree, 1964). Stunned by the events in Boston, Parliament decided not to cave in to the colonists as it had before. In rapid order it passed the Boston Port Act, the Massachusetts Government Act, the Justice Act, and the Quartering Act. Among other things, these so-called Coercive or Intolerable Acts closed the port of Boston, altered the charter of Massachusetts, and reintroduced the demand for colonial quartering of British troops. Parliament then went on to pass the Quebec Act as a continuation of its policy of restricting settlement in the West.

The First Continental Congress

Many Americans viewed all of this as a blatant abuse of power by the British government. Once again a call went out for a colonial congress to sort out a response. On September 5, 1774, delegates appointed by the colonies met in Philadelphia for the First Continental Congress. Drawing upon the successful manner in which previous acts had been overturned, the first thing Congress did was to organize a comprehensive embargo of trade with Britain. It then conveyed to the British government a list of grievances that demanded the repeal of thirteen acts of Parliament. All of the acts listed had been passed after 1763, as the delegates had agreed not to question British policies made prior to the conclusion of the Seven Years War. Despite all the problems it had created, the Tea Act was not on the list. The reason was that Congress decided not to protest British regulation of colonial trade under the Navigation Acts. In short, the delegates were saying to Parliament: take us back to 1763 and all will be well.

The Second Continental Congress

What happened next was a sequence of events that led to a significant increase in the degree of American resistance to British policies. Before the Congress adjourned in October, the delegates voted to meet again in May of 1775 if Parliament did not meet their demands. Confronted by the extent of the American demands, the British government decided it was time to impose a military solution to the crisis. Boston was occupied by British troops. In April a military confrontation occurred at Lexington and Concord. Within a month the Second Continental Congress was convened. Here the delegates decided to fundamentally change the nature of their resistance to British policies. Congress authorized a continental army and undertook the purchase of arms and munitions. To pay for all of this it established a continental currency. With previous political efforts by the First Continental Congress to form an alliance with Canada having failed, the Second Continental Congress took the extraordinary step of instructing its new army to invade Canada. In effect, these were the actions of an emerging nation-state. In October, as American forces closed in on Quebec, the King of England declared in a speech to Parliament that the colonists, having formed their own government, were now fighting for their independence. It was to be only a matter of months before Congress formally declared it.

Economic Incentives for Pursuing Independence: Taxation

Given the nature of British colonial policies, scholars have long sought to evaluate the economic incentives the Americans had in pursuing independence. In this effort economic historians initially focused on the period following the Seven Years War up to the Revolution. It turned out that making a case for the avoidance of British taxes as a major incentive for independence proved difficult. The reason was that many of the taxes imposed were later repealed, and the actual level of taxation appeared to be relatively modest. After all, soon after adopting the Constitution the Americans taxed themselves at far higher rates than the British had prior to the Revolution (Perkins, 1988). Rather, it seemed the incentive for independence might have been the avoidance of British regulation of colonial trade. Unlike some of the new British taxes, the Navigation Acts had remained intact throughout this period.

The Burden of the Navigation Acts

One early attempt to quantify the economic effects of the Navigation Acts was by Thomas (1965). Building upon the previous work of Harper (1942), Thomas employed a counterfactual analysis to assess what would have happened to the American economy in the absence of the Navigation Acts. To do this he compared American trade under the Acts with that which would have occurred had America been independent following the Seven Years War. Thomas then estimated the loss of both consumer and producer surplus to the colonies as a result of shipping enumerated goods indirectly through England. These burdens were partially offset by his estimated value of the benefits of British protection and various bounties paid to the colonies. The outcome of his analysis was that the Navigation Acts imposed a net burden of less than one percent of colonial per capita income. From this he concluded the Acts were an unlikely cause of the Revolution. A long series of subsequent works questioned various parts of his analysis but not his general conclusion (Walton, 1971). The work of Thomas also appeared to be consistent with the observation that the First Continental Congress had not demanded in its list of grievances the repeal of either the Navigation Acts or the Sugar Act.
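
Schematically, the accounting Thomas performed can be summarized as follows (a reconstruction in modern notation for exposition, not his own symbols):

\[
\text{Net burden} \;=\; \underbrace{\Delta CS + \Delta PS}_{\text{surplus lost shipping via England}} \;-\; \underbrace{B_{\text{protection}} + B_{\text{bounties}}}_{\text{offsetting benefits}},
\qquad
\frac{\text{Net burden per capita}}{\text{colonial income per capita}} \;<\; 1\%.
\]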

American Expectations about Future British Policy

Did this mean, then, that the Americans had few if any economic incentives for independence? Upon further consideration, economic historians realized that perhaps more important to the colonists were not the past and present burdens but rather the expected future burdens of continued membership in the British Empire. The Declaratory Act made it clear the British government had not given up what it viewed as its right to tax the colonists. This was despite the fact that up to 1775 the Americans had employed a variety of protest measures including lobbying, petitions, boycotts, and violence. The confluence of not having representation in Parliament while confronting an aggressive new British tax policy designed to raise their relatively low taxes may have made it reasonable for the Americans to expect a substantial increase in the level of taxation in the future (Gunderson, 1976; Reid, 1978). Furthermore, a recent study has argued that in 1776 not only did the future burdens of the Navigation Acts clearly exceed those of the past, but a substantial portion would have been borne by those who played a major role in the Revolution (Sawers, 1992). Seen in this light, the economic incentive for independence would have been avoiding the potential future costs of remaining in the British Empire.

The Americans Undertake a Revolution

1776-77

British Military Advantages

The American colonies had both strengths and weaknesses in terms of undertaking a revolution. The colonial population of well over two million was nearly one third of that in Britain (McCusker and Menard, 1985). The growth of the colonial economy had generated a remarkably high level of per capita wealth and income (Jones, 1980). Yet the hurdles confronting the Americans in achieving independence were indeed formidable. The British military had an array of advantages. With virtual control of the Atlantic, its navy could attack anywhere along the American coast at will and could provide logistical support for the army without much interference. A large core of experienced officers commanded a highly disciplined and well-drilled army in the large-unit tactics of eighteenth-century European warfare. By these measures the American military would have great difficulty in defeating the British. Its navy was small. The Continental Army had relatively few officers proficient in large-unit military tactics. Lacking both the numbers and the discipline of its adversary, the American army was unlikely to be able to meet the British army on equal terms on the battlefield (Higginbotham, 1977).

British Financial Advantages

In addition, the British were in a better position than the Americans to finance a war. A tax system was in place that had provided substantial revenue during previous colonial wars. Also, for a variety of reasons the government had acquired an exceptional capacity to generate debt to fund wartime expenses (North and Weingast, 1989). For the Continental Congress the situation was much different. After declaring independence, Congress set about defining the institutional relationship between itself and the former colonies. The powers granted to Congress were established under the Articles of Confederation. Reflecting the political environment, neither the power to tax nor the power to regulate commerce was given to Congress. Having no tax system to generate revenue also made it very difficult to borrow money. According to the Articles, the states were to make voluntary payments to Congress for its war efforts. This precarious revenue system was to hamper funding by Congress throughout the war (Baack, 2001).

Military and Financial Factors Determine Strategy

It was within these military and financial constraints that the war strategies of the British and the Americans were developed. In terms of military strategy, both of the contestants realized that America was simply too large for the British army to occupy all of the cities and countryside. This being the case, the British initially decided to try to impose a naval blockade and capture major American seaports. Having already occupied Boston, the British during 1776 and 1777 took New York, Newport, and Philadelphia. With plenty of room to maneuver his forces and unable to match those of the British, George Washington chose to engage in a war of attrition. The purpose was twofold. First, by not engaging in an all-out offensive, Washington reduced the probability of losing his army. Second, over time the British might tire of the war.

Saratoga

Frustrated without a conclusive victory, the British altered their strategy. During 1777 a plan was devised to cut off New England from the rest of the colonies, contain the Continental Army, and then defeat it. An army was assembled in Canada under the command of General Burgoyne and then sent down along the Hudson River, where it was to link up with an army sent from New York City. Unfortunately for the British, the plan totally unraveled: in October Burgoyne’s army was defeated at the battle of Saratoga and forced to surrender (Ketchum, 1997).

The American Financial Situation Deteriorates

With the victory at Saratoga, the military side of the war had improved considerably for the Americans. However, the financial situation was seriously deteriorating. The states to this point had made no voluntary payments to Congress. At the same time, the continental currency had to compete for resources with a variety of other currencies, since the states were issuing their own individual currencies to help finance expenditures. Moreover, the British, in an effort to destroy the funding system of the Continental Congress, had undertaken a covert program of counterfeiting the Continental dollar. These dollars were printed and then distributed throughout the former colonies by the British army and agents loyal to the Crown (Newman, 1957). Altogether, this expansion of the nominal money supply in the colonies led to a rapid depreciation of the Continental dollar (Calomiris, 1988; Michener, 1988). Furthermore, inflation may have been enhanced by any negative impact on output resulting from the disruption of markets along with the destruction of property and loss of able-bodied men (Buel, 1998). By the end of 1777, inflation had reduced the specie value of the Continental to about twenty percent of what it had been when originally issued. This rapid decline in value was becoming a serious problem for Congress in that up to this point almost ninety percent of its revenue had been generated from currency emissions.

1778-83

British Invasion of the South

The British defeat at Saratoga had a profound impact upon the nature of the war. The French government, still upset by its defeat by the British in the Seven Years War and encouraged by the American victory, signed a treaty of alliance with the Continental Congress in early 1778. Fearing a new war with France, the British government sent a commission to negotiate a peace treaty with the Americans. The commission offered to repeal all of the legislation applying to the colonies passed since 1763. Congress rejected the offer. The British response was to give up the effort to suppress the rebellion in the North and instead organize an invasion of the South. The new southern campaign began with the taking of the port of Savannah in December. Pursuing their southern strategy, the British won major victories at Charleston and Camden during the spring and summer of 1780.

Worsening Inflation and Financial Problems

As the American military situation deteriorated in the South so did the financial circumstances of the Continental Congress. Inflation continued as Congress and the states dramatically increased the rate of issuance of their currencies. At the same time the British continued to pursue their policy of counterfeiting the Continental dollar. In order to deal with inflation some states organized conventions for the purpose of establishing wage and price controls (Rockoff, 1984). With few contributions coming from the states and a currency rapidly losing its value, Congress resorted to authorizing the army to confiscate whatever it needed to continue the war effort (Baack, 2001, 2008).

Yorktown

Fortunately for the Americans the British military effort collapsed before the funding system of Congress. In a combined effort during the fall of 1781 French and American forces trapped the British southern army under the command of Cornwallis at Yorktown, Virginia. Under siege by superior forces the British army surrendered on October 19. The British government had now suffered not only the defeat of its northern strategy at Saratoga but also the defeat of its southern campaign at Yorktown. Following Yorktown, Britain suspended its offensive military operations against the Americans. The war was over. All that remained was the political maneuvering over the terms for peace.

The Treaty of Paris

The Revolutionary War officially concluded with the signing of the Treaty of Paris in 1783. Under the terms of the treaty the United States was granted independence and British troops were to evacuate all American territory. While commonly viewed by historians through the lens of political science, the Treaty of Paris was indeed a momentous economic achievement by the United States. The British ceded to the Americans all of the land east of the Mississippi River which they had taken from the French during the Seven Years War. The West was now available for settlement. To the extent the Revolutionary War had been undertaken by the Americans to avoid the costs of continued membership in the British Empire, the goal had been achieved. As an independent nation the United States was no longer subject to the regulations of the Navigation Acts. There was no longer to be any economic burden from British taxation.

THE FORMATION OF A NATIONAL GOVERNMENT

Those who start a revolution must be prepared for the possibility that they might win, which means being prepared to form a new government. When the Americans declared independence, their experience of governing at a national level was limited. In 1765 delegates from various colonies had met for about eighteen days at the Stamp Act Congress in New York to sort out a colonial response to the new stamp duties. Nearly a decade passed before delegates from the colonies met again to discuss a response to British policies, this time for seven weeks at the First Continental Congress in Philadelphia during the fall of 1774. The primary action taken at both meetings was an agreement to boycott trade with England. After having been in session only a month, the delegates at the Second Continental Congress began for the first time to undertake actions usually associated with a national government. However, when the colonies were declared free and independent states, Congress had yet to define its institutional relationship with the states.

The Articles of Confederation

Following the Declaration of Independence, Congress turned to deciding what political and economic powers it would be given as well as those granted to the states. After more than a year of debate among the delegates, the allocation of powers was articulated in the Articles of Confederation. Only Congress would have the authority to declare war and conduct foreign affairs, but it was not given the power to tax or to regulate commerce. The expenses of Congress were to be paid from a common treasury with funds supplied by the states, each of which retained the power to determine its own internal taxes. It was not until November of 1777 that Congress approved the final draft of the Articles, and it took over three more years for the states to ratify them. The primary reason for the delay was a dispute over control of land in the West, as some states had claims while others did not. Those states with claims eventually agreed to cede them to Congress, and the Articles were ratified and put into effect on March 1, 1781, just a few months before the American victory at Yorktown. The process of institutional development had proved so difficult that the Americans fought almost the entire Revolutionary War with a government not sanctioned by the states.

Difficulties in the 1780s

The new national government that emerged from the Revolution confronted a host of issues during the 1780s. The first major one addressed by Congress was what to do with all of the land acquired in the West. Starting in 1784, Congress passed a series of land ordinances that provided for land surveys, sales of land to individuals, and the institutional foundation for the creation of new states; these ordinances opened the West for settlement. While this was a major accomplishment, other issues remained unresolved. Having repudiated its own currency and possessing no power of taxation, Congress had no independent source of revenue with which to pay off the domestic and foreign debts incurred during the war. Because the Continental Army had been demobilized, no protection was being provided for settlers in the West or against foreign invasion. Domestic trade was increasingly disrupted during the 1780s as more states began to impose tariffs on goods from other states. Unable to resolve these and other issues, Congress endorsed a plan for a convention to meet in Philadelphia in May of 1787 to revise the Articles of Confederation.

Rather than amend the Articles, the delegates to the convention voted to replace them entirely with a new form of national government under the Constitution. There are of course many ways to assess the significance of this remarkable achievement; one is to view the Constitution as an economic document. Among other things, the Constitution specifically addressed many of the economic problems that had confronted Congress during and after the Revolutionary War. Drawing upon lessons learned in financing the war, the delegates forbade the states to coin money or issue bills of credit; only the national government could coin money and regulate its value, and punishment was to be provided for counterfeiting. The problems associated with the states contributing to a common treasury under the Articles were overcome by giving the national government the coercive power of taxation, part of whose revenue was to pay for the common defense of the United States. No longer would states be allowed to impose tariffs as they had done during the 1780s; the national government was now given the power to regulate both foreign and interstate commerce, making the nation a common market. There is a general consensus among economic historians today that the economic significance of the ratification of the Constitution was to lay the institutional foundation for long-run growth. From the point of view of the former colonists, however, it meant they had succeeded in transferring the power to tax and regulate commerce from Parliament to the new national government of the United States.

TABLES
Table 1 Continental Dollar Emissions (1775-1779)

Year of Emission  Nominal Emission ($000)  Share of Total Nominal Emissions  Specie Value of Emission ($000)  Share of Total Specie Value
1775 $6,000 3% $6,000 15%
1776 19,000 8 15,330 37
1777 13,000 5 4,040 10
1778 63,000 26 10,380 25
1779 140,500 58 5,270 13
Total $241,500 100% $41,020 100%

Source: Bullock (1895), 135.
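
The share columns in Table 1 follow directly from the emission figures. Below is a minimal sketch of the computation (hypothetical code, not from the source; note that the published shares are rounded, and the 1775 nominal share of about 2.5 percent is printed as 3 percent so that the column sums to 100):

```python
# Recompute the share columns of Table 1 from Bullock's (1895) emission figures.
# All values are in thousands of dollars.
nominal = {1775: 6000, 1776: 19000, 1777: 13000, 1778: 63000, 1779: 140500}
specie = {1775: 6000, 1776: 15330, 1777: 4040, 1778: 10380, 1779: 5270}

total_nominal = sum(nominal.values())  # 241,500
total_specie = sum(specie.values())    # 41,020

for year in sorted(nominal):
    nominal_share = 100 * nominal[year] / total_nominal
    specie_share = 100 * specie[year] / total_specie
    # e.g. 1777: nominal 5.4%, specie 9.8%
    print(f"{year}: nominal {nominal_share:.1f}%, specie {specie_share:.1f}%")
```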
Table 2 Currency Emissions by the States (1775-1781)

Year of Emission  Nominal Dollars Emitted ($000)  Year of Emission  Nominal Dollars Emitted ($000)
1775  $4,740   1778  $9,118
1776  13,328   1779  17,613
1777  9,573    1780  66,813
               1781  123,376
Total $27,641  Total $216,376

Source: Robinson (1969), 327-28.

References

Baack, Ben. “Forging a Nation State: The Continental Congress and the Financing of the War of American Independence.” Economic History Review 54, no.4 (2001): 639-56.

Baack, Ben. “British versus American Interests in Land and the War of American Independence.” Journal of European Economic History 33, no. 3 (2004): 519-54.

Baack, Ben. “America’s First Monetary Policy: Inflation and Seigniorage during the Revolutionary War.” Financial History Review 15, no. 2 (2008): 107-21.

Baack, Ben, Robert A. McGuire, and T. Norman Van Cott. “Constitutional Agreement during the Drafting of the Constitution: A New Interpretation.” Journal of Legal Studies 38, no. 2 (2009): 533-67.

Brewer, John. The Sinews of Power: War, Money and the English State, 1688-1783. London: Cambridge University Press, 1989.

Buel, Richard. In Irons: Britain’s Naval Supremacy and the American Revolutionary Economy. New Haven: Yale University Press, 1998.

Bullion, John L. A Great and Necessary Measure: George Grenville and the Genesis of the Stamp Act, 1763-1765. Columbia: University of Missouri Press, 1982.

Bullock, Charles J. “The Finances of the United States from 1775 to 1789, with Especial Reference to the Budget.” Bulletin of the University of Wisconsin 1, no. 2 (1895): 117-273.

Calomiris, Charles W. “Institutional Failure, Monetary Scarcity, and the Depreciation of the Continental.” Journal of Economic History 48, no. 1 (1988): 47-68.

Egnal, Mark. A Mighty Empire: The Origins of the American Revolution. Ithaca: Cornell University Press, 1988.

Ferguson, E. James. The Power of the Purse: A History of American Public Finance, 1776-1790. Chapel Hill: University of North Carolina Press, 1961.

Gunderson, Gerald. A New Economic History of America. New York: McGraw-Hill, 1976.

Harper, Lawrence A. “Mercantilism and the American Revolution.” Canadian Historical Review 23 (1942): 1-15.

Higginbotham, Don. The War of American Independence: Military Attitudes, Policies, and Practice, 1763-1789. Bloomington: Indiana University Press, 1977.

Jensen, Merrill, editor. English Historical Documents: American Colonial Documents to 1776. New York: Oxford University Press, 1969.

Johnson, Allen S. A Prologue to Revolution: The Political Career of George Grenville (1712-1770). New York: University Press, 1997.

Jones, Alice H. Wealth of a Nation to Be: The American Colonies on the Eve of the Revolution. New York: Columbia University Press, 1980.

Ketchum, Richard M. Saratoga: Turning Point of America’s Revolutionary War. New York: Henry Holt and Company, 1997.

Labaree, Benjamin Woods. The Boston Tea Party. New York: Oxford University Press, 1964.

Mackesy, Piers. The War for America, 1775-1783. Cambridge: Harvard University Press, 1964.

McCusker, John J. and Russell R. Menard. The Economy of British America, 1607-1789. Chapel Hill: University of North Carolina Press, 1985.

Michener, Ron. “Backing Theories and the Currencies of Eighteenth-Century America: A Comment.” Journal of Economic History 48, no. 3 (1988): 682-92.

Nester, William R. The First Global War: Britain, France, and the Fate of North America, 1756-1775. Westport: Praeger, 2000.

Newman, E. P. “Counterfeit Continental Currency Goes to War.” The Numismatist 1 (January 1957): 5-16.

North, Douglass C., and Barry R. Weingast. “Constitutions and Commitment: The Evolution of Institutions Governing Public Choice in Seventeenth-Century England.” Journal of Economic History 49, no. 4 (1989): 803-32.

O’Shaughnessy, Andrew Jackson. An Empire Divided: The American Revolution and the British Caribbean. Philadelphia: University of Pennsylvania Press, 2000.

Palmer, R. R. The Age of Democratic Revolution: A Political History of Europe and America. Vol. 1. Princeton: Princeton University Press, 1959.

Perkins, Edwin J. The Economy of Colonial America. New York: Columbia University Press, 1988.

Reid, Joseph D., Jr. “Economic Burden: Spark to the American Revolution?” Journal of Economic History 38, no. 1 (1978): 81-100.

Robinson, Edward F. “Continental Treasury Administration, 1775-1781: A Study in the Financial History of the American Revolution.” Ph.D. diss., University of Wisconsin, 1969.

Rockoff, Hugh. Drastic Measures: A History of Wage and Price Controls in the United States. Cambridge: Cambridge University Press, 1984.

Sawers, Larry. “The Navigation Acts Revisited.” Economic History Review 45, no. 2 (1992): 262-84.

Thomas, Robert P. “A Quantitative Approach to the Study of the Effects of British Imperial Policy on Colonial Welfare: Some Preliminary Findings.” Journal of Economic History 25, no. 4 (1965): 615-38.

Tucker, Robert W. and David C. Hendrickson. The Fall of the First British Empire: Origins of the War of American Independence. Baltimore: Johns Hopkins Press, 1982.

Walton, Gary M. “The New Economic History and the Burdens of the Navigation Acts.” Economic History Review 24, no. 4 (1971): 533-42.

Citation: Baack, Ben. “Economics of the American Revolutionary War.” EH.Net Encyclopedia, edited by Robert Whaples. November 13, 2001 (updated August 5, 2010). URL http://eh.net/encyclopedia/the-economics-of-the-american-revolutionary-war/