
Antebellum Banking in the United States

Howard Bodenhorn, Lafayette College

The first legitimate commercial bank in the United States was the Bank of North America founded in 1781. Encouraged by Alexander Hamilton, Robert Morris persuaded the Continental Congress to charter the bank, which loaned to the cash-strapped Revolutionary government as well as private citizens, mostly Philadelphia merchants. The possibilities of commercial banking had been widely recognized by many colonists, but British law forbade the establishment of commercial, limited-liability banks in the colonies. Given that many of the colonists’ grievances against Parliament centered on economic and monetary issues, it is not surprising that one of the earliest acts of the Continental Congress was the establishment of a bank.

The introduction of banking to the U.S. was viewed as an important first step in forming an independent nation because banks supplied a medium of exchange (banknotes1 and deposits) in an economy perpetually strangled by shortages of specie money and credit, because they animated industry, and because they fostered wealth creation and promoted well-being. In the last case, contemporaries typically viewed banks as an integral part of a wider system of government-sponsored commercial infrastructure. Like schools, bridges, roads, canals, river clearing and harbor improvements, the benefits of banks were expected to accrue to everyone even if dividends accrued only to shareholders.

Financial Sector Growth

By 1800 each major U.S. port city had at least one commercial bank serving the local mercantile community. As city banks proved themselves, banking spread into smaller cities and towns and expanded its clientele. Although most banks specialized in mercantile lending, others served artisans and farmers. In 1820 there were 327 commercial banks and several mutual savings banks that promoted thrift among the poor. Thus, at the onset of the antebellum period (defined here as the period between 1820 and 1860), urban residents were familiar with the intermediary function of banks and used bank-supplied currencies (deposits and banknotes) for most transactions. Table 1 reports the number of banks and the value of loans outstanding at year end between 1820 and 1860. During the era, the number of banks increased from 327 to 1,562 and total loans increased from just over $55.1 million to $691.9 million. Bank-supplied credit in the U.S. economy increased at a remarkable average annual rate of 6.3 percent. Growth in the financial sector, then, outpaced growth in aggregate economic activity; nominal gross domestic product increased at an average annual rate of about 4.3 percent over the same interval. This essay discusses how regional regulatory structures evolved as the banking sector grew and radiated out from northeastern cities to the hinterlands.
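The cited credit-growth figure can be checked with a quick compound-growth calculation over Table 1's endpoints. This is only a sketch: a simple endpoint calculation comes out near 6.5 percent, so the essay's 6.3 percent figure presumably rests on a slightly different averaging convention (for example, averaging the forty year-over-year growth rates rather than compounding between endpoints).

```python
# Check the average annual growth of bank-supplied credit using the
# endpoints reported in Table 1: $55.1 million in 1820, $691.9 million in 1860.
loans_1820 = 55.1    # total loans, $ millions
loans_1860 = 691.9   # total loans, $ millions
years = 1860 - 1820  # 40-year span

# Compound annual growth rate: (end / start) ** (1 / years) - 1
cagr = (loans_1860 / loans_1820) ** (1 / years) - 1
print(f"average annual loan growth: {cagr:.1%}")  # about 6.5%
```

The small gap between this endpoint figure and the essay's 6.3 percent is unsurprising, since the year-to-year series in Table 1 is volatile and different averaging methods will yield slightly different annual rates.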

Table 1

Number of Banks and Total Loans, 1820-1860

Year Banks Loans ($ millions)
1820 327 55.1
1821 273 71.9
1822 267 56.0
1823 274 75.9
1824 300 73.8
1825 330 88.7
1826 331 104.8
1827 333 90.5
1828 355 100.3
1829 369 103.0
1830 381 115.3
1831 424 149.0
1832 464 152.5
1833 517 222.9
1834 506 324.1
1835 704 365.1
1836 713 457.5
1837 788 525.1
1838 829 485.6
1839 840 492.3
1840 901 462.9
1841 784 386.5
1842 692 324.0
1843 691 254.5
1844 696 264.9
1845 707 288.6
1846 707 312.1
1847 715 310.3
1848 751 344.5
1849 782 332.3
1850 824 364.2
1851 879 413.8
1852 913 429.8
1853 750 408.9
1854 1208 557.4
1855 1307 576.1
1856 1398 634.2
1857 1416 684.5
1858 1422 583.2
1859 1476 657.2
1860 1562 691.9

Sources: Fenstermaker (1965); U.S. Comptroller of the Currency (1931).


As important as early American banks were in the process of capital accumulation, perhaps their most notable feature was their adaptability. Kuznets (1958) argues that one measure of the financial sector's value is how and to what extent it evolves with changing economic conditions. Put in place to perform certain functions under one set of economic circumstances, how did it alter its behavior and serve the needs of borrowers as circumstances changed? One benefit of the federalist U.S. political system was that states were given the freedom to establish systems reflecting local needs and preferences. While the political structure deserves credit for promoting regional adaptations, North (1994) credits the adaptability of America's formal rules and informal constraints that rewarded adventurism in the economic, as well as the noneconomic, sphere. Differences in geography, climate, crop mix, manufacturing activity, population density and a host of other variables were reflected in different state banking systems. Rhode Island's banks bore little resemblance to those in faraway Louisiana or Missouri, or even to those in neighboring Connecticut. Each state's banks took a different form, but their purpose was the same; namely, to provide the state's citizens with monetary and intermediary services and to promote the general economic welfare. This section provides a sketch of regional differences. A more detailed discussion can be found in Bodenhorn (2002).

State Banking in New England

New England’s banks most resemble the common conception of the antebellum bank. They were relatively small, unit banks; their stock was closely held; they granted loans to local farmers, merchants and artisans with whom the bank’s managers had more than a passing familiarity; and the state took little direct interest in their daily operations.

Of the banking systems put in place in the antebellum era, New England’s is typically viewed as the most stable and conservative. Friedman and Schwartz (1986) attribute their stability to an Old World concern with business reputations, familial ties, and personal legacies. New England was long settled, its society well established, and its business community mature and respected throughout the Atlantic trading network. Wealthy businessmen and bankers with strong ties to the community — like the Browns of Providence or the Bowdoins of Boston — emphasized stability not just because doing so benefited and reflected well on them, but because they realized that bad banking was bad for everyone’s business.

Besides their reputation for soundness, the two defining characteristics of New England's early banks were their insider nature and their small size. The typical New England bank was small compared to banks in other regions. Table 2 shows that in 1820 the average Massachusetts country bank was about the same size as a Pennsylvania country bank, but both were only about half the size of a Virginia bank. A Rhode Island bank was about one-third the size of a Massachusetts or Pennsylvania bank and a mere one-sixth as large as Virginia's banks. By 1850 the average Massachusetts bank had declined in relative terms, operating on about two-thirds the paid-in capital of a Pennsylvania country bank. Rhode Island's banks also shrank relative to Pennsylvania's and were tiny compared to the large branch banks in the South and West.

Table 2

Average Bank Size by Capital and Lending in 1820 and 1850 Selected States and Cities

(in $ thousands)



1820 Capital 1820 Loans 1850 Capital 1850 Loans
Massachusetts $374.5 $480.4 $293.5 $494.0
except Boston 176.6 230.8 170.3 281.9
Rhode Island 95.7 103.2 186.0 246.2
except Providence 60.6 72.0 79.5 108.5
New York na na 246.8 516.3
except NYC na na 126.7 240.1
Pennsylvania 221.8 262.9 340.2 674.6
except Philadelphia 162.6 195.2 246.0 420.7
Virginia1,2 351.5 340.0 270.3 504.5
South Carolina2 na na 938.5 1,471.5
Kentucky2 na na 439.4 727.3

Notes: 1 Virginia figures for 1822. 2 Figures represent branch averages.

Source: Bodenhorn (2002).

Explanations for New England Banks’ Relatively Small Size

Several explanations have been offered for the relatively small size of New England’s banks. Contemporaries attributed it to the New England states’ propensity to tax bank capital, which was thought to work to the detriment of large banks. They argued that large banks circulated fewer banknotes per dollar of capital. The result was a progressive tax that fell disproportionately on large banks. Data compiled from Massachusetts’s bank reports suggest that large banks were not disadvantaged by the capital tax. It was a fact, as contemporaries believed, that large banks paid higher taxes per dollar of circulating banknotes, but a potentially better benchmark is the tax to loan ratio because large banks made more use of deposits than small banks. The tax to loan ratio was remarkably constant across both bank size and time, averaging just 0.6 percent between 1834 and 1855. Moreover, there is evidence of constant to modestly increasing returns to scale in New England banking. Large banks were generally at least as profitable as small banks in all years between 1834 and 1860, and slightly more so in many.

Lamoreaux (1993) offers a different explanation for the modest size of the region’s banks. New England’s banks, she argues, were not impersonal financial intermediaries. Rather, they acted as the financial arms of extended kinship trading networks. Throughout the antebellum era banks catered to insiders: directors, officers, shareholders, or business partners and kin of directors, officers, shareholders and business partners. Such preferences toward insiders represented the perpetuation of the eighteenth-century custom of pooling capital to finance family enterprises. In the nineteenth century the practice continued under corporate auspices. The corporate form, in fact, facilitated raising capital in greater amounts than the family unit could raise on its own. But because the banks kept their loans within a relatively small circle of business connections, it was not until the late nineteenth century that bank size increased.2

Once the kinship orientation of the region’s banks was established it perpetuated itself. When outsiders could not obtain loans from existing insider organizations, they formed their own insider bank. In doing so the promoters assured themselves of a steady supply of credit and created engines of economic mobility for kinship networks formerly closed off from many sources of credit. State legislatures accommodated the practice through their liberal chartering policies. By 1860, Rhode Island had 91 banks, Maine had 68, New Hampshire 51, Vermont 44, Connecticut 74 and Massachusetts 178.

The Suffolk System

One of the most commented-on characteristics of New England's banking system was its unique regional banknote redemption and clearing mechanism. Established by the Suffolk Bank of Boston in the early 1820s, the system became known as the Suffolk System. With so many banks in New England, each issuing its own form of currency, it was sometimes difficult for merchants, farmers, artisans, and even other bankers, to discriminate between real and bogus banknotes, or to discriminate between good and bad bankers. Moreover, the rural-urban terms of trade pulled most banknotes toward the region's port cities. Because country merchants and farmers were typically indebted to city merchants, country banknotes tended to flow toward the cities, Boston more so than any other. By the second decade of the nineteenth century, country banknotes became a constant irritant for city bankers. City bankers believed that country issues displaced Boston banknotes in local transactions. More irritating though was the constant demand by the city banks' customers to accept country banknotes on deposit, which placed the burden of interbank clearing on the city banks.3

In 1803 the city banks embarked on a first attempt to deal with country banknotes. They joined together, bought up a large quantity of country banknotes, and returned them to the country banks for redemption into specie. This effort to reduce country banknote circulation encountered so many obstacles that it was quickly abandoned. Several other schemes were hatched in the next two decades, but none proved any more successful than the 1803 plan.

The Suffolk Bank was chartered in 1818 and within a year embarked on a novel scheme to deal with the influx of country banknotes. The Suffolk sponsored a consortium of Boston banks in which each member appointed the Suffolk as its lone agent in the collection and redemption of country banknotes. In addition, each city bank contributed to a fund used to purchase and redeem country banknotes. When the Suffolk collected a large quantity of a country bank's notes, it presented them for immediate redemption with an ultimatum: Join in a regular and organized redemption system or be subject to further unannounced redemption calls.4 Country banks objected to the Suffolk's proposal, because it required them to keep noninterest-earning assets on deposit with the Suffolk in amounts equal to their average weekly redemptions at the city banks. Most country banks initially refused to join the redemption network, but after the Suffolk made good on a few redemption threats, the system achieved near universal membership.

Early interpretations of the Suffolk system, like those of Redlich (1949) and Hammond (1957), portray the Suffolk as a proto-central bank, which acted as a restraining influence that exercised some control over the region’s banking system and money supply. Recent studies are less quick to pronounce the Suffolk a successful experiment in early central banking. Mullineaux (1987) argues that the Suffolk’s redemption system was actually self-defeating. Instead of making country banknotes less desirable in Boston, the fact that they became readily redeemable there made them perfect substitutes for banknotes issued by Boston’s prestigious banks. This policy made country banknotes more desirable, which made it more, not less, difficult for Boston’s banks to keep their own notes in circulation.

Fenstermaker and Filer (1986) also contest the long-held view that the Suffolk exercised control over the region's money supply (banknotes and deposits). Indeed, the Suffolk's system was self-defeating in this regard as well. By increasing confidence in the value of a randomly encountered banknote, the system made people willing to hold ever larger banknote issues. In an interesting twist on the traditional interpretation, a possible outcome of the Suffolk system is that New England may have grown increasingly financially backward as a direct result of the region's unique clearing system. Because banknotes were viewed as relatively safe and easily redeemed, the next big financial innovation, deposit banking, lagged far behind other regions in New England. With such wide acceptance of banknotes, there was no reason for banks to encourage the use of deposits and little reason for consumers to switch over.

Summary: New England Banks

New England’s banking system can be summarized as follows: Small unit banks predominated; many banks catered to small groups of capitalists bound by personal and familial ties; banking was becoming increasingly interconnected with other lines of business, such as insurance, shipping and manufacturing; the state took little direct interest in the daily operations of the banks and its supervisory role amounted to little more than a demand that every bank submit an unaudited balance sheet at year’s end; and the Suffolk developed an interbank clearing system that facilitated the use of banknotes throughout the region, but had little effective control over the region’s money supply.

Banking in the Middle Atlantic Region


After 1810 or so, many bank charters were granted in New England, but not because of the presumption that the bank would promote the commonweal. Charters were granted for the personal gain of the promoter and the shareholders and in proportion to the personal, political and economic influence of the bank’s founders. No New England state took a significant financial stake in its banks. In both respects, New England differed markedly from states in other regions. From the beginning of state-chartered commercial banking in Pennsylvania, the state took a direct interest in the operations and profits of its banks. The Bank of North America was the obvious case: chartered to provide support to the colonial belligerents and the fledgling nation. Because the bank was popularly perceived to be dominated by Philadelphia’s Federalist merchants, who rarely loaned to outsiders, support for the bank waned.5 After a pitched political battle in which the Bank of North America’s charter was revoked and reinstated, the legislature chartered the Bank of Pennsylvania in 1793. As its name implies, this bank became the financial arm of the state. Pennsylvania subscribed $1 million of the bank’s capital, giving it the right to appoint six of thirteen directors and a $500,000 line of credit. The bank benefited by becoming the state’s fiscal agent, which guaranteed a constant inflow of deposits from regular treasury operations as well as western land sales.

By 1803 the demand for loans outstripped the existing banks’ supply and a plan for a new bank, the Philadelphia Bank, was hatched and its promoters petitioned the legislature for a charter. The existing banks lobbied against the charter, and nearly sank the new bank’s chances until it established a precedent that lasted throughout the antebellum era. Its promoters bribed the legislature with a payment of $135,000 in return for the charter, handed over one-sixth of its shares, and opened a line of credit for the state.

Between 1803 and 1814, the only other bank chartered in Pennsylvania was the Farmers and Mechanics Bank of Philadelphia, which established a second substantive precedent that persisted throughout the era. Existing banks followed a strict real-bills lending policy, restricting lending to merchants at very short terms of 30 to 90 days.6 Their adherence to a real-bills philosophy left a growing community of artisans, manufacturers and farmers on the outside looking in. The Farmers and Mechanics Bank was chartered to serve excluded groups. At least seven of its thirteen directors had to be farmers, artisans or manufacturers and the bank was required to lend the equivalent of 10 percent of its capital to farmers on mortgage for at least one year. In later years, banks were established to provide services to even more narrowly defined groups. Within a decade or two, most substantial port cities had banks with names like Merchants Bank, Planters Bank, Farmers Bank, and Mechanics Bank. By 1860 it was common to find banks with names like Leather Manufacturers Bank, Grocers Bank, Drovers Bank, and Importers Bank. Indeed, the Emigrant Savings Bank in New York City served Irish immigrants almost exclusively. In the other instances, it is not known how much of a bank’s lending was directed toward the occupational group included in its name. The adoption of such names may have been marketing ploys as much as mission statements. Only further research will reveal the answer.

New York

State-chartered banking in New York arrived less auspiciously than it had in Philadelphia or Boston. The Bank of New York opened in 1784, but operated without a charter and in open violation of state law until 1791 when the legislature finally sanctioned it. The city’s second bank obtained its charter surreptitiously. Alexander Hamilton was one of the driving forces behind the Bank of New York, and his long-time nemesis, Aaron Burr, was determined to establish a competing bank. Unable to get a charter from a Federalist legislature, Burr and his colleagues petitioned to incorporate a company to supply fresh water to the inhabitants of Manhattan Island. Burr tucked a clause into the charter of the Manhattan Company (the predecessor to today’s Chase Manhattan Bank) granting the water company the right to employ any excess capital in financial transactions. Once chartered, the company’s directors announced that $500,000 of its capital would be invested in banking.7 Thereafter, banking grew more quickly in New York than in Philadelphia, so that by 1812 New York had seven banks compared to the three operating in Philadelphia.

Deposit Insurance

Despite its inauspicious banking beginnings, New York introduced two innovations that influenced American banking down to the present. The Safety Fund system, introduced in 1829, was the nation’s first experiment in bank liability insurance (similar to that provided by the Federal Deposit Insurance Corporation today). The 1829 act authorized the appointment of bank regulators charged with regular inspections of member banks. An equally novel aspect was that it established an insurance fund insuring holders of banknotes and deposits against loss from bank failure. Ultimately, the insurance fund was insufficient to protect all bank creditors from loss during the panic of 1837 when eleven failures in rapid succession all but bankrupted the insurance fund, which delayed noteholder and depositor recoveries for months, even years. Even though the Safety Fund failed to provide its promised protections, it was an important episode in the subsequent evolution of American banking. Several Midwestern states instituted deposit insurance in the early twentieth century, and the federal government adopted it after the banking panics in the 1930s resulted in the failure of thousands of banks in which millions of depositors lost money.

“Free Banking”

Although the Safety Fund was nearly bankrupted in the late 1830s, it continued to insure a number of banks up to the mid 1860s when it was finally closed. No new banks joined the Safety Fund system after 1838 with the introduction of free banking — New York’s second significant banking innovation. Free banking represented a compromise between those most concerned with the underlying safety and stability of the currency and those most concerned with competition and freeing the country’s entrepreneurs from unduly harsh and anticompetitive restraints. Under free banking, a prospective banker could start a bank anywhere he saw fit, provided he met a few regulatory requirements. Each free bank’s capital was invested in state or federal bonds that were turned over to the state’s treasurer. If a bank failed to redeem even a single note into specie, the treasurer initiated bankruptcy proceedings and banknote holders were reimbursed from the sale of the bonds.

Michigan actually preempted New York’s claim to be the first free-banking state, but Michigan’s 1837 law was modeled closely on a bill then under debate in New York’s legislature. Ultimately, New York’s influence was profound in this as well, because free banking became one of the century’s most widely copied financial innovations. By 1860 eighteen states had adopted free banking laws closely resembling New York’s law. Three other states introduced watered-down variants. Eventually, the post-Civil War system of national banking adopted many of the substantive provisions of New York’s 1838 act.

Both the Safety Fund system and free banking were attempts to protect society from losses resulting from bank failures and to entice people to hold financial assets. Banks and bank-supplied currency were novel developments in the hinterlands in the early nineteenth century and many rural inhabitants were skeptical about the value of small pieces of paper. They were more familiar with gold and silver. Getting them to exchange one for the other was a slow process, and one that relied heavily on trust. But trust was built slowly and destroyed quickly. The failure of a single bank could, in a week, destroy the confidence in a system built up over a decade. New York’s experiments were designed to mitigate, if not eliminate, the negative consequences of bank failures. New York’s Safety Fund, then, differed in the details but not in intent, from New England’s Suffolk system. Bankers and legislators in each region grappled with the difficult issue of protecting a fragile but vital sector of the economy. Each region responded to the problem differently. The South and West settled on yet another solution.

Banking in the South and West

One distinguishing characteristic of southern and western banks was their extensive branch networks. Pennsylvania provided for branch banking in the early nineteenth century and two banks jointly opened about ten branches. In both instances, however, the branches became a net liability. The Philadelphia Bank opened four branches in 1809 and by 1811 was forced to pass on its semi-annual dividends because losses at the branches offset profits at the Philadelphia office. At bottom, branch losses resulted from a combination of ineffective central office oversight and unrealistic expectations about the scale and scope of hinterland lending. Philadelphia’s bank directors instructed branch managers to invest in high-grade commercial paper or real bills. Rural banks found a limited number of such lending opportunities and quickly turned to mortgage-based lending. Many of these loans fell into arrears and were ultimately written off when land sales faltered.

Branch Banking

Unlike Pennsylvania, where branch banking failed, branch banks throughout the South and West thrived. The Bank of Virginia, founded in 1804, was the first state-chartered branch bank and up to the Civil War branch banks served the state’s financial needs. Several small, independent banks were chartered in the 1850s, but they never threatened the dominance of Virginia’s “Big Six” banks. Virginia’s branch banks, unlike Pennsylvania’s, were profitable. In 1821, for example, the net return to capital at the Farmers Bank of Virginia’s home office in Richmond was 5.4 percent. Returns at its branches ranged from a low of 3 percent at Norfolk (which was consistently the low-profit branch) to 9 percent in Winchester. In 1835, the last year the bank reported separate branch statistics, net returns to capital at the Farmers Bank’s branches ranged from 2.9 to 11.7 percent, with an average of 7.9 percent.

The low profits at the Norfolk branch represent a net subsidy from the state’s banking sector to the political system, which was not immune to the same kind of infrastructure boosterism that erupted in New York, Pennsylvania, Maryland and elsewhere. In the immediate post-Revolutionary era, the value of exports shipped from Virginia’s ports (Norfolk and Alexandria) slightly exceeded the value shipped from Baltimore. In the 1790s the numbers turned sharply in Baltimore’s favor and Virginia entered the internal-improvements craze and the battle for western shipments. Banks represented the first phase of the state’s internal improvements plan because many believed that Baltimore’s new-found advantage resulted from easier credit supplied by the city’s banks. If Norfolk, with one of the best natural harbors on the North American Atlantic coast, was to compete with other port cities, it needed banks, and the state required three of its Big Six branch banks to operate branches there. Despite its natural advantages, Norfolk never became an important entrepot and it probably had more bank capital than it required. This pattern was repeated elsewhere. Other states required their branch banks to serve markets such as Memphis, Louisville, Natchez and Mobile that might, with the proper infrastructure, grow into important ports.

State Involvement and Intervention in Banking

The second distinguishing characteristic of southern and western banking was sweeping state involvement and intervention. Virginia, for example, interjected the state into the banking system by taking significant stakes in its first chartered banks (providing an implicit subsidy) and by requiring them, once they established themselves, to subsidize the state’s continuing internal improvements programs of the 1820s and 1830s. Indiana followed such a strategy. So, too, did Kentucky, Louisiana, Mississippi, Illinois, Tennessee and Georgia in different degrees. South Carolina followed a wholly different strategy. On one hand, it chartered several banks in which it took no financial interest. On the other, it chartered the Bank of the State of South Carolina, a bank wholly owned by the state and designed to lend to planters and farmers who complained constantly that the state’s existing banks served only the urban mercantile community. The state-owned bank eventually divided its lending between merchants, farmers and artisans and dominated South Carolina’s financial sector.

The 1820s and 1830s witnessed a deluge of new banks in the South and West, with a corresponding increase in state involvement. No state matched Louisiana’s breadth of involvement in the 1830s when it chartered three distinct types of banks: commercial banks that served merchants and manufacturers; improvement banks that financed various internal improvements projects; and property banks that extended long-term mortgage credit to planters and other property holders. Louisiana’s improvement banks included the New Orleans Canal and Banking Company that built a canal connecting Lake Pontchartrain to the Mississippi River. The Exchange and Banking Company and the New Orleans Improvement and Banking Company were required to build and operate hotels. The New Orleans Gas Light and Banking Company constructed and operated gas streetlights in New Orleans and five other cities. Finally, the Carrollton Railroad and Banking Company and the Atchafalaya Railroad and Banking Company were rail construction companies whose bank subsidiaries subsidized railroad construction.

“Commonwealth Ideal” and Inflationary Banking

Louisiana’s 1830s banking exuberance reflected what some historians label the “commonwealth ideal” of banking; that is, the promotion of the general welfare through the promotion of banks. Legislatures in the South and West, however, never demonstrated a greater commitment to the commonwealth ideal than during the tough times of the early 1820s. With the collapse of the post-war land boom in 1819, a political coalition of debt-strapped landowners lobbied legislatures throughout the region for relief and its focus was banking. Relief advocates lobbied for inflationary banking that would reduce the real burden of debts taken on during prior flush times.

Several western states responded to these calls and chartered state-subsidized and state-managed banks designed to reinflate their embattled economies. Chartered in 1821, the Bank of the Commonwealth of Kentucky loaned on mortgages at longer than customary periods and all Kentucky landowners were eligible for $1,000 loans. The loans allowed landowners to discharge their existing debts without being forced to liquidate their property at ruinously low prices. Although the bank’s notes were not redeemable into specie, they were given currency in two ways. First, they were accepted at the state treasury in tax payments. Second, the state passed a law that forced creditors to accept the notes in payment of existing debts or agree to delay collection for two years.

The commonwealth ideal was not unique to Kentucky. During the depression of the 1820s, Tennessee chartered the State Bank of Tennessee, Illinois chartered the State Bank of Illinois and Louisiana chartered the Louisiana State Bank. Although they took slightly different forms, they all had the same intent; namely, to relieve distressed and embarrassed farmers, planters and land owners. What all these banks shared in common was the notion that the state should promote the general welfare and economic growth. In this instance, and again during the depression of the 1840s, state-owned banks were organized to minimize the transfer of property when economic conditions demanded wholesale liquidation. Such liquidation would have been inefficient and imposed unnecessary hardship on a large fraction of the population. To the extent that hastily chartered relief banks forestalled inefficient liquidation, they served their purpose. Although most of these banks eventually became insolvent, requiring taxpayer bailouts, we cannot label them unsuccessful. They reinflated economies and allowed for an orderly disposal of property. Determining if the net benefits were positive or negative requires more research, but for the moment we are forced to accept the possibility that the region’s state-owned banks of the 1820s and 1840s advanced the commonweal.

Conclusion: Banks and Economic Growth

Despite notable differences in the specific form and structure of each region’s banking system, they were all aimed squarely at a common goal; namely, realizing that region’s economic potential. Banks helped achieve the goal in two ways. First, banks monetized economies, which reduced the costs of transacting and helped smooth consumption and production across time. It was no longer necessary for every farm family to inventory their entire harvest. They could sell most of it, and expend the proceeds on consumption goods as the need arose until the next harvest brought a new cash infusion. Crop and livestock inventories are prone to substantial losses and an increased use of money reduced them significantly. Second, banks provided credit, which unleashed entrepreneurial spirits and talents. A complete appreciation of early American banking recognizes the banks’ contribution to antebellum America’s economic growth.

Bibliographic Essay

Because of the large number of sources used in constructing this essay, in-text citations would have cluttered the text; the sources are instead discussed in this brief bibliographic essay, which keeps the essay more readable. A full bibliography is included at the end.

Good general histories of antebellum banking include Dewey (1910), Fenstermaker (1965), Gouge (1833), Hammond (1957), Knox (1903), Redlich (1949), and Trescott (1963). If only one book is read on antebellum banking, Hammond’s (1957) Pulitzer-Prize winning book remains the best choice.

The literature on New England banking is not particularly large, and the more important historical interpretations of state-wide systems include Chadbourne (1936), Hasse (1946, 1957), Simonton (1971), Spencer (1949), and Stokes (1902). Gras (1937) does an excellent job of placing the history of a single bank within the larger regional and national context. In a recent book and a number of articles Lamoreaux (1994 and sources therein) provides a compelling and eminently readable reinterpretation of the region’s banking structure. Nathan Appleton (1831, 1856) provides a contemporary observer’s interpretation, while Walker (1857) provides an entertaining if perverse and satirical history of a fictional New England bank. Martin (1969) provides details of bank share prices and dividend payments from the establishment of the first banks in Boston through the end of the nineteenth century. Less technical studies of the Suffolk system include Lake (1947), Trivoli (1979) and Whitney (1878); more technical interpretations include Calomiris and Kahn (1996), Mullineaux (1987), and Rolnick, Smith and Weber (1998).

The literature on Middle Atlantic banking is huge, but the better state-level histories include Bryan (1899), Daniels (1976), and Holdsworth (1928). The better studies of individual banks include Adams (1978), Lewis (1882), Nevins (1934), and Wainwright (1953). Chaddock (1910) provides a general history of the Safety Fund system. Golembe (1960) places it in the context of modern deposit insurance, while Bodenhorn (1996) and Calomiris (1989) provide modern analyses. A recent revival of interest in free banking has brought about a veritable explosion in the number of studies on the subject, but the better introductory ones remain Rockoff (1974, 1985), Rolnick and Weber (1982, 1983), and Dwyer (1996).

The literature on southern and western banking is large and of highly variable quality, but I have found the following to be the most readable and useful general sources: Caldwell (1935), Duke (1895), Esary (1912), Golembe (1978), Huntington (1915), Green (1972), Lesesne (1970), Royalty (1979), Schweikart (1987) and Starnes (1931).

References and Further Reading

Adams, Donald R., Jr. Finance and Enterprise in Early America: A Study of Stephen Girard’s Bank, 1812-1831. Philadelphia: University of Pennsylvania Press, 1978.

Alter, George, Claudia Goldin and Elyce Rotella. “The Savings of Ordinary Americans: The Philadelphia Saving Fund Society in the Mid-Nineteenth-Century.” Journal of Economic History 54, no. 4 (December 1994): 735-67.

Appleton, Nathan. A Defence of Country Banks: Being a Reply to a Pamphlet Entitled ‘An Examination of the Banking System of Massachusetts, in Reference to the Renewal of the Bank Charters.’ Boston: Stimpson & Clapp, 1831.

Appleton, Nathan. Bank Bills or Paper Currency and the Banking System of Massachusetts with Remarks on Present High Prices. Boston: Little, Brown and Company, 1856.

Berry, Thomas Senior. Revised Annual Estimates of American Gross National Product: Preliminary Estimates of Four Major Components of Demand, 1789-1889. Richmond: University of Richmond Bostwick Paper No. 3, 1978.

Bodenhorn, Howard. “Zombie Banks and the Demise of New York’s Safety Fund.” Eastern Economic Journal 22, no. 1 (1996): 21-34.

Bodenhorn, Howard. “Private Banking in Antebellum Virginia: Thomas Branch & Sons of Petersburg.” Business History Review 71, no. 4 (1997): 513-42.

Bodenhorn, Howard. A History of Banking in Antebellum America: Financial Markets and Economic Development in an Era of Nation-Building. Cambridge and New York: Cambridge University Press, 2000.

Bodenhorn, Howard. State Banking in Early America: A New Economic History. New York: Oxford University Press, 2002.

Bryan, Alfred C. A History of State Banking in Maryland. Baltimore: Johns Hopkins University Press, 1899.

Caldwell, Stephen A. A Banking History of Louisiana. Baton Rouge: Louisiana State University Press, 1935.

Calomiris, Charles W. “Deposit Insurance: Lessons from the Record.” Federal Reserve Bank of Chicago Economic Perspectives 13 (1989): 10-30.

Calomiris, Charles W., and Charles Kahn. “The Efficiency of Self-Regulated Payments Systems: Learnings from the Suffolk System.” Journal of Money, Credit, and Banking 28, no. 4 (1996): 766-97.

Chadbourne, Walter W. A History of Banking in Maine, 1799-1930. Orono: University of Maine Press, 1936.

Chaddock, Robert E. The Safety Fund Banking System in New York, 1829-1866. Washington, D.C.: Government Printing Office, 1910.

Daniels, Belden L. Pennsylvania: Birthplace of Banking in America. Harrisburg: Pennsylvania Bankers Association, 1976.

Davis, Lance, and Robert E. Gallman. “Capital Formation in the United States during the Nineteenth Century.” In Cambridge Economic History of Europe (Vol. 7, Part 2), edited by Peter Mathias and M.M. Postan, 1-69. Cambridge: Cambridge University Press, 1978.

Davis, Lance, and Robert E. Gallman. “Savings, Investment, and Economic Growth: The United States in the Nineteenth Century.” In Capitalism in Context: Essays on Economic Development and Cultural Change in Honor of R.M. Hartwell, edited by John A. James and Mark Thomas, 202-29. Chicago: University of Chicago Press, 1994.

Dewey, Davis R. State Banking before the Civil War. Washington, D.C.: Government Printing Office, 1910.

Duke, Basil W. History of the Bank of Kentucky, 1792-1895. Louisville: J.P. Morton, 1895.

Dwyer, Gerald P., Jr. “Wildcat Banking, Banking Panics, and Free Banking in the United States.” Federal Reserve Bank of Atlanta Economic Review 81, no. 3 (1996): 1-20.

Engerman, Stanley L., and Robert E. Gallman. “U.S. Economic Growth, 1783-1860.” Research in Economic History 8 (1983): 1-46.

Esary, Logan. State Banking in Indiana, 1814-1873. Indiana University Studies No. 15. Bloomington: Indiana University Press, 1912.

Fenstermaker, J. Van. The Development of American Commercial Banking, 1782-1837. Kent, Ohio: Kent State University, 1965.

Fenstermaker, J. Van, and John E. Filer. “Impact of the First and Second Banks of the United States and the Suffolk System on New England Bank Money, 1791-1837.” Journal of Money, Credit, and Banking 18, no. 1 (1986): 28-40.

Friedman, Milton, and Anna J. Schwartz. “Has the Government Any Role in Money?” Journal of Monetary Economics 17, no. 1 (1986): 37-62.

Gallman, Robert E. “American Economic Growth before the Civil War: The Testimony of the Capital Stock Estimates.” In American Economic Growth and Standards of Living before the Civil War, edited by Robert E. Gallman and John Joseph Wallis, 79-115. Chicago: University of Chicago Press, 1992.

Goldsmith, Raymond. Financial Structure and Development. New Haven: Yale University Press, 1969.

Golembe, Carter H. “The Deposit Insurance Legislation of 1933: An Examination of its Antecedents and Purposes.” Political Science Quarterly 76, no. 2 (1960): 181-200.

Golembe, Carter H. State Banks and the Economic Development of the West. New York: Arno Press, 1978.

Gouge, William M. A Short History of Paper Money and Banking in the United States. Philadelphia: T.W. Ustick, 1833.

Gras, N.S.B. The Massachusetts First National Bank of Boston, 1784-1934. Cambridge, MA: Harvard University Press, 1937.

Green, George D. Finance and Economic Development in the Old South: Louisiana Banking, 1804-1861. Stanford: Stanford University Press, 1972.

Hammond, Bray. Banks and Politics in America from the Revolution to the Civil War. Princeton: Princeton University Press, 1957.

Hasse, William F., Jr. A History of Banking in New Haven, Connecticut. New Haven: privately printed, 1946.

Hasse, William F., Jr. A History of Money and Banking in Connecticut. New Haven: privately printed, 1957.

Holdsworth, John Thom. Financing an Empire: History of Banking in Pennsylvania. Chicago: S.J. Clarke Publishing Company, 1928.

Huntington, Charles Clifford. A History of Banking and Currency in Ohio before the Civil War. Columbus: F. J. Herr Printing Company, 1915.

Knox, John Jay. A History of Banking in the United States. New York: Bradford Rhodes & Company, 1903.

Kuznets, Simon. “Foreword.” In Financial Intermediaries in the American Economy, by Raymond W. Goldsmith. Princeton: Princeton University Press, 1958.

Lake, Wilfred. “The End of the Suffolk System.” Journal of Economic History 7, no. 4 (1947): 183-207.

Lamoreaux, Naomi R. Insider Lending: Banks, Personal Connections, and Economic Development in Industrial New England. Cambridge: Cambridge University Press, 1994.

Lesesne, J. Mauldin. The Bank of the State of South Carolina. Columbia: University of South Carolina Press, 1970.

Lewis, Lawrence, Jr. A History of the Bank of North America: The First Bank Chartered in the United States. Philadelphia: J.B. Lippincott & Company, 1882.

Lockard, Paul A. Banks, Insider Lending and Industries of the Connecticut River Valley of Massachusetts, 1813-1860. Unpublished Ph.D. thesis, University of Massachusetts, 2000.

Martin, Joseph G. A Century of Finance. New York: Greenwood Press, 1969.

Moulton, H.G. “Commercial Banking and Capital Formation.” Journal of Political Economy 26 (1918): 484-508, 638-63, 705-31, 849-81.

Mullineaux, Donald J. “Competitive Monies and the Suffolk Banking System: A Contractual Perspective.” Southern Economic Journal 53 (1987): 884-98.

Nevins, Allan. History of the Bank of New York and Trust Company, 1784 to 1934. New York: privately printed, 1934.

New York. Bank Commissioners. “Annual Report of the Bank Commissioners.” New York General Assembly Document No. 74. Albany, 1835.

North, Douglass. “Institutional Change in American Economic History.” In American Economic Development in Historical Perspective, edited by Thomas Weiss and Donald Schaefer, 87-98. Stanford: Stanford University Press, 1994.

Rappaport, George David. Stability and Change in Revolutionary Pennsylvania: Banking, Politics, and Social Structure. University Park, PA: The Pennsylvania State University Press, 1996.

Redlich, Fritz. The Molding of American Banking: Men and Ideas. New York: Hafner Publishing Company, 1947.

Rockoff, Hugh. “The Free Banking Era: A Reexamination.” Journal of Money, Credit, and Banking 6, no. 2 (1974): 141-67.

Rockoff, Hugh. “New Evidence on the Free Banking Era in the United States.” American Economic Review 75, no. 4 (1985): 886-89.

Rolnick, Arthur J., and Warren E. Weber. “Free Banking, Wildcat Banking, and Shinplasters.” Federal Reserve Bank of Minneapolis Quarterly Review 6 (1982): 10-19.

Rolnick, Arthur J., and Warren E. Weber. “New Evidence on the Free Banking Era.” American Economic Review 73, no. 5 (1983): 1080-91.

Rolnick, Arthur J., Bruce D. Smith, and Warren E. Weber. “Lessons from a Laissez-Faire Payments System: The Suffolk Banking System (1825-58).” Federal Reserve Bank of Minneapolis Quarterly Review 22, no. 3 (1998): 11-21.

Royalty, Dale. “Banking and the Commonwealth Ideal in Kentucky, 1806-1822.” Register of the Kentucky Historical Society 77 (1979): 91-107.

Schumpeter, Joseph A. The Theory of Economic Development: An Inquiry into Profit, Capital, Credit, Interest, and the Business Cycle. Cambridge, MA: Harvard University Press, 1934.

Schweikart, Larry. Banking in the American South from the Age of Jackson to Reconstruction. Baton Rouge: Louisiana State University Press, 1987.

Simonton, William G. Maine and the Panic of 1837. Unpublished master’s thesis: University of Maine, 1971.

Sokoloff, Kenneth L. “Productivity Growth in Manufacturing during Early Industrialization.” In Long-Term Factors in American Economic Growth, edited by Stanley L. Engerman and Robert E. Gallman. Chicago: University of Chicago Press, 1986.

Sokoloff, Kenneth L. “Invention, Innovation, and Manufacturing Productivity Growth in the Antebellum Northeast.” In American Economic Growth and Standards of Living before the Civil War, edited by Robert E. Gallman and John Joseph Wallis, 345-78. Chicago: University of Chicago Press, 1992.

Spencer, Charles, Jr. The First Bank of Boston, 1784-1949. New York: Newcomen Society, 1949.

Starnes, George T. Sixty Years of Branch Banking in Virginia. New York: Macmillan Company, 1931.

Stokes, Howard Kemble. Chartered Banking in Rhode Island, 1791-1900. Providence: Preston & Rounds Company, 1902.

Sylla, Richard. “Forgotten Men of Money: Private Bankers in Early U.S. History.” Journal of Economic History 36, no. 2 (1976):

Temin, Peter. The Jacksonian Economy. New York: W. W. Norton & Company, 1969.

Trescott, Paul B. Financing American Enterprise: The Story of Commercial Banking. New York: Harper & Row, 1963.

Trivoli, George. The Suffolk Bank: A Study of a Free-Enterprise Clearing System. London: The Adam Smith Institute, 1979.

U.S. Comptroller of the Currency. Annual Report of the Comptroller of the Currency. Washington, D.C.: Government Printing Office, 1931.

Wainwright, Nicholas B. History of the Philadelphia National Bank. Philadelphia: William F. Fell Company, 1953.

Walker, Amasa. History of the Wickaboag Bank. Boston: Crosby, Nichols & Company, 1857.

Wallis, John Joseph. “What Caused the Panic of 1839?” Unpublished working paper, University of Maryland, October 2000.

Weiss, Thomas. “U.S. Labor Force Estimates and Economic Growth, 1800-1860.” In American Economic Growth and Standards of Living before the Civil War, edited by Robert E. Gallman and John Joseph Wallis, 19-75. Chicago: University of Chicago Press, 1992.

Whitney, David R. The Suffolk Bank. Cambridge, MA: Riverside Press, 1878.

Wright, Robert E. “Artisans, Banks, Credit, and the Election of 1800.” The Pennsylvania Magazine of History and Biography 122, no. 3 (July 1998), 211-239.

Wright, Robert E. “Bank Ownership and Lending Patterns in New York and Pennsylvania, 1781-1831.” Business History Review 73, no. 1 (Spring 1999), 40-60.

1 Banknotes were small-denomination IOUs printed by banks that circulated as currency. Modern U.S. currency consists simply of banknotes issued by the Federal Reserve, which holds a monopoly privilege in the issue of legal tender currency. In antebellum America, when a bank made a loan, the borrower was typically handed banknotes with a face value equal to the dollar value of the loan. The borrower then spent these banknotes on goods and services, putting them into circulation. Contemporary law required banks to redeem banknotes into gold and silver legal tender on demand. Banks found it profitable to issue notes because they typically held reserves of only about 30 percent of the total value of banknotes in circulation. Thus, banks were able to leverage $30 in gold and silver into $100 in loans that returned about 7 percent interest on average.
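The leverage arithmetic in the footnote can be sketched in a few lines. The figures used here (a 30 percent specie reserve ratio and a 7 percent average loan rate) are the stylized averages quoted above, not data for any particular bank.

```python
# Sketch of the note-issue arithmetic described in footnote 1, using the
# footnote's stylized figures: a 30 percent specie reserve held against
# circulating notes and a 7 percent average return on loans. Actual
# reserve ratios and loan rates varied from bank to bank.

def note_issue(specie, reserve_ratio, loan_rate):
    """Notes supported by a given specie stock, annual interest earned,
    and the implied return on the specie actually held."""
    notes = specie / reserve_ratio   # $30 of specie / 0.30 supports ~$100 of notes
    interest = notes * loan_rate     # annual interest earned on the loans
    return notes, interest, interest / specie

notes, interest, return_on_specie = note_issue(30.0, 0.30, 0.07)
print(round(notes))                 # ~100: $30 in specie supports ~$100 in notes/loans
print(round(interest, 2))           # ~7.0: about $7 of interest per year
print(round(return_on_specie, 3))   # ~0.233: roughly a 23 percent return on specie held
```

The last figure makes the footnote's point concrete: earning 7 percent on loans while holding only 30 percent reserves translates into a return of over 23 percent on the specie the bank actually held.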

2 Paul Lockard (2000) challenges Lamoreaux’s interpretation. In a study of four banks in the Connecticut River valley, Lockard finds that insiders did not dominate these banks’ resources. As provocative as Lockard’s findings are, he draws conclusions from a small and unrepresentative sample. Two of his four sample banks were savings banks, quasi-charitable organizations designed to encourage savings by the working classes and provide small loans. Thus, Lockard’s sample is effectively reduced to two banks. At these two banks, he identifies about 10 percent of loans as insider loans, but readily admits that he cannot always distinguish between insiders and outsiders. For a recent study of how early Americans used savings banks, see Alter, Goldin and Rotella (1994). The literature on savings banks is so large that it cannot be given its due here.

3 Interbank clearing involves the settling of balances between banks. Modern banks cash checks drawn on other banks and credit the funds to the depositor. The Federal Reserve system provides clearing services between banks. The accepting bank sends the checks to the Federal Reserve, which credits the sending bank’s account and sends the checks back to the banks on which they were drawn for reimbursement. In the antebellum era, interbank clearing involved sending banknotes back to issuing banks. Because New England had so many small and scattered banks, the costs of returning banknotes to their issuers were large and sometimes avoided by recirculating notes of distant banks rather than returning them. Regular clearings and redemptions served an important purpose, however, because they kept banks in touch with current market conditions. A massive redemption of notes was indicative of a declining demand for money and credit. Because the bank’s reserves were drawn down with the redemptions, it was forced to reduce its volume of loans in accord with changing demand conditions.

4 The law held that banknotes were redeemable on demand into gold or silver coin or bullion. If a bank refused to redeem even a single $1 banknote, the banknote holder could have the bank closed and liquidated to recover his or her claim against it.

5 Rappaport (1996) found that the bank’s loans were about equally divided between insiders (shareholders and shareholders’ family and business associates) and outsiders, but nonshareholders received loans about 30 percent smaller than shareholders. Whether this bank was an “insider” bank remains an open question, and the answer depends largely on one’s definition. Any modern bank that made half of its loans to shareholders and their families would be viewed as an “insider” bank. It is less clear where the line can usefully be drawn for antebellum banks.

6 Real-bills lending followed from a nineteenth-century banking philosophy, which held that bank lending should be used to finance the warehousing or wholesaling of already-produced goods. Loans made on these bases were thought to be self-liquidating in that the loan was made against readily sold collateral actually in the hands of a merchant. Under the real-bills doctrine, the banks’ proper functions were to bridge the gap between production and retail sale of goods. A strict adherence to real-bills tenets excluded loans on property (mortgages), loans on goods in process (trade credit), or loans to start-up firms (venture capital). Thus, real-bills lending prescribed a limited role for banks and bank credit. Few banks were strict adherents to the doctrine, but many followed it in large part.

7 Robert E. Wright (1998) offers a different interpretation, but notes that Burr pushed the bill through at the end of a busy legislative session so that many legislators voted on the bill without having read it thoroughly or at all.

An Economic History of Patent Institutions

B. Zorina Khan, Bowdoin College


Such scholars as Max Weber and Douglass North have suggested that intellectual property systems had an important impact on the course of economic development. Many questions from earlier eras remain current today, ranging from whether patents and copyrights constitute optimal policies toward intellectual inventions, to their philosophical rationale, to the growing concerns of international political economy. Throughout their history, patent and copyright regimes have confronted and accommodated technological innovations that were no less significant and contentious for their time than those of the twenty-first century. An economist from the nineteenth century would have been equally familiar with considerations about whether uniformity in intellectual property rights across countries harmed or benefited global welfare and whether piracy might be to the advantage of developing countries. The nineteenth and early twentieth centuries in particular witnessed considerable variation in the intellectual property policies that individual countries implemented, and this allows economic historians to determine the consequences of different rules and standards.

This article outlines crucial developments in the patent policies of Europe, the United States, and follower countries. The final section discusses the harmonization of international patent laws that occurred after the middle of the nineteenth century.


The British Patent System

The grant of exclusive property rights vested in patents developed from medieval guild practices in Europe. Britain in particular is noted for the establishment of a patent system which has been in continuous operation for a longer period than any other in the world. English monarchs frequently used patents to reward favorites with privileges, such as monopolies over trade that increased the retail prices of commodities. It was not until the seventeenth century that patents were associated entirely with awards to inventors, when Section 6 of the Statute of Monopolies (21 Jac. I. C. 3, 1623, implemented in 1624) repealed the practice of royal monopoly grants to all except patentees of inventions. The Statute of Monopolies allowed patent rights of fourteen years for “the sole making or working of any manner of new manufacture within this realm to the first and true inventor…” Importers of foreign discoveries were allowed to obtain domestic patent protection in their own right.

The British patent system established significant barriers in the form of prohibitively high costs that limited access to property rights in invention to a privileged few. Patent fees for England alone amounted to £100-£120 ($585), or approximately four times per capita income in 1860. The fee for a patent that also covered Scotland and Ireland could cost as much as £350 ($1,680). Adding a co-inventor was likely to increase the costs by another £24. Patents could be extended only by a private Act of Parliament, which required political influence, and extensions could cost as much as £700. These constraints favored the elite class of those with wealth, political connections or exceptional technical qualifications, and consciously created disincentives for inventors from humble backgrounds. Patent fees provided an important source of revenues for the Crown and its employees, and created a class of administrators who had strong incentives to block proposed reforms.

In addition to the monetary costs, complicated administrative procedures that inventors had to follow implied that transactions costs were also high. Patent applications for England alone had to pass through seven offices, from the Home Secretary to the Lord Chancellor, and twice required the signature of the Sovereign. If the patent were extended to Scotland and Ireland it was necessary to negotiate another five offices in each country. The cumbersome process of patent applications (variously described as “mediaeval” and “fantastical”) afforded ample material for satire, but obviously imposed severe constraints on the ordinary inventor who wished to obtain protection for his discovery. These features testify to the much higher monetary and transactions costs, in both absolute and relative terms, of obtaining property rights to inventions in England in comparison to the United States. Such costs essentially restricted the use of the patent system to inventions of high value and to applicants who already possessed or could raise sufficient capital to apply for the patent. The complicated system also inhibited the diffusion of information and made it difficult, if not impossible, for inventors outside of London to readily conduct patent searches. Patent specifications were open to public inspection on payment of a fee, but until 1852 they were not officially printed, published or indexed. Since the patent could be filed in any of three offices in Chancery, searches of the prior art involved much time and inconvenience. Potential patentees were well advised to obtain the help of a patent agent to aid in negotiating the numerous steps and offices that were required for pursuit of the application in London.

In the second half of the eighteenth century, nation-wide lobbies of manufacturers and patentees expressed dissatisfaction with the operation of the British patent system. However, it was not until after the Crystal Palace Exhibition in 1851 that their concerns were finally addressed, in an effort to meet the burgeoning competition from the United States. In 1852 the efforts of numerous societies and of individual engineers, inventors and manufacturers over many decades were finally rewarded. Parliament approved the Patent Law Amendment Act, which authorized the first major adjustment of the system in two centuries. The new patent statutes incorporated features that drew on testimonials to the superior functioning of the American patent regime. Significant changes in the direction of the American system included lower fees and costs, and the application procedures were rationalized into a single Office of the Commissioners of Patents for Inventions, or “Great Seal Patent Office.”

The 1852 patent reform bills included calls for a U.S.-style examination system but this was amended in the House of Commons and the measure was not included in the final version. Opponents were reluctant to vest examiners with the necessary discretionary power, and pragmatic observers pointed to the shortage of a cadre of officials with the required expertise. The law established a renewal system that required the payment of fees in installments if the patentee wished to maintain the patent for the full term. Patentees initially paid £25 and later installments of £50 (after three years) and £100 (after seven years) to maintain the patent for a full term of fourteen years. Despite the relatively low number of patents granted in England, between 1852 and 1880 the patent office still made a profit of over £2 million. Provision was made for the printing and publication of the patent records. The 1852 reforms undoubtedly instituted improvements over the former opaque procedures, and the lower fees had an immediate impact. Nevertheless, the system retained many of the former features that had implied that patents were in effect viewed as privileges rather than merited rights, and only temporarily abated expressions of dissatisfaction.
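The renewal schedule just described can be expressed as a cumulative cost. This is a brief sketch using only the fee figures in the text (£25 at grant, £50 after three years, £100 after seven, for a fourteen-year full term).

```python
# Cumulative fees under the 1852 English renewal schedule described in
# the text: 25 pounds initially, 50 pounds after three years, 100 pounds
# after seven years, for a maximum term of fourteen years.

FEES = {0: 25, 3: 50, 7: 100}  # year the installment fell due -> pounds

def fees_paid(years_in_force):
    """Total fees paid to keep a patent alive for the given number of years."""
    return sum(fee for due, fee in FEES.items() if years_in_force > due)

print(fees_paid(3))   # 25  -- lapsing after three years cost only the initial fee
print(fees_paid(7))   # 75  -- 25 + 50
print(fees_paid(14))  # 175 -- keeping the patent for the full fourteen-year term
```

As the schedule shows, a patentee could abandon a low-value invention after three years for only £25, while maintaining a patent for its full term cost £175 in all.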

One source of dissatisfaction that endured until the end of the nineteenth century was the state of the common law regarding patents. First, at least partially in reaction to a history of abuse of patent privileges, patents were widely viewed as monopolies that restricted community rights, and thus were to be carefully monitored and narrowly construed. Second, British patents were granted “by the grace of the Crown” and therefore were subject to any restrictions that the government cared to impose. According to the statutes, as a matter of national expediency, patents were to be granted if “they be not contrary to the law, nor mischievous to the State, by raising prices of commodities at home, or to the hurt of trade, or generally inconvenient.” The Crown possessed the ability to revoke any patents that were deemed inconvenient or contrary to public policy. After 1855, the government could also appeal to a need for official secrecy to prohibit the publication of patent specifications in order to protect national security and welfare. Moreover, the state could commandeer a patentee’s invention without compensation or consent, although in some cases the patentee was paid a royalty.

Policies towards patent assignments and trade in intellectual property rights also constrained the market for inventions. Ever vigilant to protect an unsuspecting public from fraudulent financial schemes on the scale of the South Sea Bubble, the law limited ownership of patent rights to five investors (later extended to twelve). Nevertheless, the law did not offer any relief to the purchaser of an invalid or worthless patent, so potential purchasers were well advised to engage in extensive searches before entering into contracts. When coupled with the lack of assurance inherent in a registration system, the purchase of a patent right involved a substantive amount of risk and high transactions costs — all indicative of a speculative instrument. It is therefore not surprising that the market for assignments and licenses seems to have been quite limited, and even in the year after the 1852 reforms only 273 assignments were recorded.

In 1883 new legislation introduced procedures that were somewhat simpler, with fewer steps. The fees fell to £4 for the initial term of four years, and the remaining £150 could be paid in annual increments. For the first time, applications could be forwarded to the Patent Office through the post office. This statute introduced opposition proceedings, which enabled interested parties to contest the proposed patent within two months of the filing of the patent specifications. Compulsory licenses were introduced in 1883 (and strengthened in 1919 as “licenses of right”) for fear that foreign inventors might injure British industry by refusing to grant other manufacturers the right to use their patents. The 1883 act provided for the employment of “examiners,” but their activity was limited to ensuring that the material was patentable and properly described. Indeed, it was not until 1902 that the British system included an examination for novelty, and even then the process was not regarded as being as stringent as in other countries. Many new provisions were designed to thwart foreign competition. Until 1907 patentees who manufactured abroad were required also to make the patented product in Britain. Between 1919 and 1949 chemical products were excluded from patent protection to counter the threat posed by the superior German chemical industry. Licenses of right enabled British manufacturers to compel foreign patentees to permit the use of their patents on pharmaceuticals and food products.

In sum, changes in the British patent system were initially unforthcoming despite numerous calls for change. Ultimately, the realization that England’s early industrial and technological supremacy was threatened by the United States and other nations in Europe led to a slow process of revisions that lasted well into the twentieth century. One commentator summed up the series of developments by declaring that the British patent system at the time of writing (1967) remained essentially “a modified version of a pre-industrial economic institution.”

The French Patent System

Early French policies towards inventions and innovations in the eighteenth century were based on an extensive but somewhat arbitrary array of rewards and incentives. During this period inventors or introducers of inventions could benefit from titles, pensions that sometimes extended to spouses and offspring, loans (some interest-free), lump-sum grants, bounties or subsidies for production, exemptions from taxes, or monopoly grants in the form of exclusive privileges. This complex network of state policies towards inventors and their inventions was revised but not revoked after the outbreak of the French Revolution.

The modern French patent system was established according to the laws of 1791 (amended in 1800) and 1844. Patentees filed through a simple registration system without any need to specify what was new about their claim, and could persist in obtaining the grant even if warned that the patent was likely to be legally invalid. On each patent document the following caveat was printed: “The government, in granting a patent without prior examination, does not in any manner guarantee either the priority, merit or success of an invention.” The inventor decided whether to obtain a patent for a period of five, ten or fifteen years, and the term could only be extended through legislative action. Protection extended to all methods and manufactured articles, but excluded theoretical or scientific discoveries without practical application, financial methods, medicines, and items that could be covered by copyright.

The 1791 statute stipulated patent fees that were costly, ranging from 300 to 1,500 livres depending on the declared term of the patent. The 1844 statute maintained this policy: fees were set at 500 francs ($100) for a five-year patent, 1,000 francs for a ten-year patent and 1,500 francs for a fifteen-year patent, payable in annual installments. In an obvious attempt to limit the international diffusion of French discoveries, until 1844 patents were voided if the inventor attempted to obtain a patent overseas on the same invention. On the other hand, the first introducer of an invention covered by a foreign patent enjoyed the same “natural rights” as the patentee of an original invention or improvement. Patentees had to put the invention into practice within two years of the initial grant or face a tribunal, which had the power to repeal the patent unless the patentee could point to unforeseen events that had prevented his complying with the provisions of the law. The rights of patentees were also restricted if the invention related to items controlled by the French government, such as printing presses and firearms.
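Because the 1844 fees were payable in annual installments, each tier implied the same carrying cost per year of protection regardless of the term chosen. A minimal sketch of the arithmetic (the tier amounts are those quoted above; the per-year figures are derived):

```python
# French patent fees under the 1844 statute: total fee in francs, keyed by term in years.
fee_schedule = {5: 500, 10: 1000, 15: 1500}

for term, total in sorted(fee_schedule.items()):
    # Fees were payable in annual installments over the life of the patent.
    print(f"{term:2d}-year patent: {total} francs total, {total // term} francs per year")
```

Every tier works out to 100 francs per year, which is consistent with the annual rate still being charged in 1910.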

In return for the limited monopoly right, the patentee was expected to describe the invention in such terms that a workman skilled in the arts could replicate it, and this information was expected to be made public. However, no provision was made for the publication or diffusion of these descriptions. At least until the law of April 7, 1902, specifications were available only in manuscript form in the office in which they had originally been lodged, and printed information was limited to brief titles in patent indexes. The attempt to obtain information on the prior art was further inhibited by restrictions placed on access: viewers had to state their motives; foreigners had to be assisted by French attorneys; and no extract from the manuscript could be copied until the patent had expired.

The state remained involved in the discretionary promotion of invention and innovation through policies beyond the granting of patents. In the first place, the patent statutes did not limit the potential appropriation of returns to property rights vested in patents: the inventor of a discovery of proven utility could choose between a patent or making a gift of the invention to the nation in exchange for an award from funds set aside for the encouragement of industry. Second, institutions such as the Société d’encouragement pour l’industrie nationale awarded a number of medals each year to stimulate new discoveries in areas they considered worth pursuing, and to reward deserving inventors and manufacturers. Third, the award of assistance and pensions to inventors and their families continued well into the nineteenth century. Fourth, at times the Société purchased patent rights and released the invention into the public domain.

The basic principles of the modern French patent system were evident in the early French statutes and were retained in later revisions. Since France during the ancien régime was likely the first country to introduce systematic examinations of applications for privileges, it is somewhat ironic that commentators point to the retention of registration without prior examination as the defining feature of the “French system” until 1978. In 1910 fees remained high, although somewhat lower in real terms, at one hundred francs per year. Working requirements were still in place, and patentees were not allowed to satisfy the requirement by importing the article even if the patentee had manufactured it in another European country. However, the requirement was waived if the patentee could persuade the tribunal that the patent was not worked because of unavoidable circumstances.

Similar problems were evident in the market for patent rights. Contracts for patent assignments were filed in the office of the Prefect for the district, but since there was no central source of information it was difficult to trace the records for specific inventions. The annual fees for the entire term of the patent had to be paid in advance if the patent was assigned to a second party. Like patents themselves, assignments and licenses were issued with a caveat emptor clause. This was partially due to the nature of patent property under a registration system, and partially to the uncertainties of legal jurisprudence in this area. For both buyer and seller, the uncertainties associated with the exchange likely reduced the net expected value of trade.

The Spanish Patent System

France’s patent laws were adopted in its colonies, and also diffused to other countries through their influence on Spain’s system following the Spanish Decree of 1811. The Spanish experience during the nineteenth century is instructive, since Spain experienced lower rates and levels of economic development than the early industrializers. As in its European neighbors, early Spanish rules and institutions were vested in privileges, which had lasting effects that could be detected even in the later period. The per capita rate of patenting in Spain was lower than in other major European countries, and foreigners filed the majority of patented inventions. Between 1759 and 1878, roughly one half of all grants went to citizens of other countries, notably France and (to a lesser extent) Britain. Thus, the transfer of foreign technology was a major concern in the political economy of Spain.

This dependence on foreign technologies was reflected in the structure of the Spanish patent system, which permitted patents of introduction as well as patents of invention. Patents of introduction were granted to entrepreneurs who wished to produce foreign technologies that were new to Spain, with no requirement of any claim to being the true inventor. Thus, the sole objective of these instruments was to enhance innovation and production in Spain. Since the owners of introduction patents could not prevent third parties from importing similar machines from abroad, they also had an incentive to maintain reasonable pricing structures. Introduction patents had a term of only five years, at a cost of 3,000 reales, whereas fees for patents of invention ranged from 1,000 reales for five years, through 3,000 reales for ten years, to 6,000 reales for fifteen years. Patentees were required to work the patent within one year, and about a quarter of the patents granted between 1826 and 1878 were actually implemented. Since patents of introduction had a brief term, they encouraged the production of items with high expected profits and a quick payback period, after which the monopoly rights expired and the country could benefit from their diffusion.

The German Patent System

The German patent system was influenced by developments in the United States, and itself influenced legislation in Argentina, Austria, Brazil, Denmark, Finland, Holland, Norway, Poland, Russia and Sweden. The German Empire was founded in 1871, and in the first six years each state adopted its own policies. Alsace-Lorraine favored a French-style system, whereas others such as Hamburg and Bremen did not offer patent protection. However, after strong lobbying by supporters of both sides of the debate regarding the merits of patent regimes, Germany passed a unified national Patent Act of 1877.

The 1877 statute created a centralized administration for the grant of a federal patent for original inventions. Industrial entrepreneurs succeeded in their objective of creating a “first to file” system, so patents were granted to the first applicant rather than to the “first and true inventor,” but in 1936 the National Socialists introduced a first to invent system. Applications were examined by examiners in the Patent Office who were expert in their field. During the eight weeks before the grant, patent applications were open to the public and an opposition could be filed denying the validity of the patent. German patent fees were deliberately high to eliminate protection for trivial inventions, with a renewal system that required payment of 30 marks for the first year, 50 marks for the second year, 100 marks for the third, and 50 marks annually after the third year. In 1923 the patent term was extended from fifteen years to eighteen years.
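The escalating renewal schedule meant that keeping a German patent in force for its full fifteen-year term was a substantial commitment. A short sketch using the fee figures above (the cumulative total is derived arithmetic):

```python
# German renewal fees in marks under the 1877 Patent Act, as described in the text:
# 30 for the first year, 50 for the second, 100 for the third, and 50 annually thereafter.
def renewal_fee(year):
    return {1: 30, 2: 50, 3: 100}.get(year, 50)

total = sum(renewal_fee(y) for y in range(1, 16))  # full fifteen-year term
print(f"Maintaining a patent for all fifteen years cost {total} marks")  # → 780 marks
```

Because fees fell due year by year, a patentee could let a low-value patent lapse early rather than pay for the full term, consistent with the stated aim of weeding out trivial inventions.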

German patent policies encouraged diffusion, innovation and growth in specific industries with a view to fostering economic development. Patents could not be obtained for food products, pharmaceuticals or chemical products, although the process through which such items were produced could be protected. It has been argued that the lack of restrictions on the use of innovations and the incentives to patent around existing processes spurred productivity and diffusion in these industries. The authorities further ensured the diffusion of patent information by publishing claims and specifications before they were granted. The German patent system also facilitated the use of inventions by firms, through the early application of a “work for hire” doctrine that allowed enterprises access to the rights and benefits of their employees’ inventions.

Although the German system resembled the American patent system in many respects, it was more stringent in others, resulting in patent grants that were fewer in number but likely higher in average value. The patent examination process required that the invention be new, nonobvious, and also capable of producing greater efficiency. As in the United States, the courts adopted an extremely liberal attitude in interpreting and enforcing patent rights once they were granted. Penalties for willful infringement included not only fines but also the possibility of imprisonment. A patent grant could be revoked after the first three years if the patent was not worked, if the owner refused to grant licenses for the use of an invention deemed in the public interest, or if the invention was primarily being exploited outside of Germany. However, in most cases a compulsory license was regarded as adequate.

After 1891 a parallel and weaker version of patent protection could be obtained through a gebrauchsmuster or utility patent (sometimes called a petty patent), which was granted through a registration system. Patent protection was available for inventions that could be represented by drawings or models with only a slight degree of novelty, and for a limited term of three years (renewable once for a total life of six years). About twice as many utility patents as examined patents were granted early in the 1930s. Patent protection based on co-existing systems of registration and examination appears to have served distinct but complementary purposes. Remedies for infringement of utility patents also included fines and imprisonment.

Other European Patent Systems

Very few developed countries would now seriously consider eliminating statutory protection for inventions, but in the second half of the nineteenth century the “patent controversy” in Europe pitted advocates of patent rights against an effective abolitionist movement. For a short period, the abolitionists were strong enough to obtain support for dismantling patent systems in a number of European countries. In 1863 the Congress of German Economists declared “patents of invention are injurious to common welfare;” and the movement achieved its greatest victory in Holland, which repealed its patent legislation in 1869. The Swiss cantons did not adopt patent protection until 1888, with an extension in the scope of coverage in 1907. The abolitionists based their arguments on the benefits of free trade and competition, and viewed patents as part of an anticompetitive and protectionist strategy analogous to tariffs on imports. Instead of state-sponsored monopoly awards, they argued, inventors could be rewarded by alternative policies, such as stipends from the government, payments from private industry or associations formed for that purpose, or simply through the lead time that the first inventor acquired over competitors by virtue of his prior knowledge.

According to one authority, the Netherlands eventually reinstated its patent system in 1912 and Switzerland introduced patent laws in 1888 largely because of a keen sense of morality, national pride and international pressure to do so. The appeal to “morality” as an explanatory factor is incapable of explaining the timing and nature of changes in strategies. Nineteenth-century institutions were not exogenous, and their introduction or revision generally reflected the outcome of a self-interested balancing of costs and benefits. The Netherlands and Switzerland were initially able to benefit from their ability to free-ride on the investments that other countries had made in technological advances. As for the cost of lower incentives for discoveries by domestic inventors, the Netherlands was never vaunted as a leader in technological innovation, and this is reflected in its low per capita patenting rates both before and after the period without patent laws. The Dutch recorded a total of only 4,561 patents in the entire period from 1800 to 1869 and, even after adjusting for population, the Dutch patenting rate in 1869 was a mere 13.4 percent of the U.S. patenting rate. Moreover, between 1851 and 1865, 88.6 percent of patents in the Netherlands had been granted to foreigners. After the patent laws were reintroduced in 1912, the major beneficiaries were again foreign inventors, who obtained 79.3 percent of the patents issued in the Netherlands. Thus, the Netherlands had little reason to adopt patent protection, except for external political pressures and the possibility that some types of foreign investment might be deterred.

The case was somewhat different for Switzerland, which was noted for being innovative, but within a narrow range of pursuits. Since the scale of output and markets was quite limited, much of Swiss industry generated few incentives for invention. A number of the industries in which the Swiss excelled, such as hand-made watches, chocolates and food products, were less susceptible to the sort of invention that warranted patent protection. For instance, despite the much larger consumer market in the United States, fewer than 300 U.S. patents related to chocolate composition or production during the entire nineteenth century. Improvements in pursuits such as watch-making could be readily protected by trade secrecy as long as the industry remained artisanal. However, with increased mechanization and worker mobility, secrecy would ultimately prove ineffective, and innovators would be unable to appropriate returns without more formal means of exclusion.

According to contemporary observers, the Swiss resolved to introduce patent legislation not because of a sudden newfound sense of morality, but because they feared that American manufacturers were surpassing them as a result of patented innovations in the mass production of products such as boots, shoes and watches. Indeed, before 1890, American inventors obtained more than 2068 patents on watches, and the U.S. watch making industry benefited from mechanization and strong economies of scale that led to rapidly falling prices of output, making them more competitive internationally. The implications are that the rates of industrial and technical progress in the United States were more rapid, and technological change was rendering artisanal methods obsolete in products with mass markets. Thus, the Swiss endogenously adopted patent laws because of falling competitiveness in their key industrial sectors.

What was the impact of the introduction of patent protection in Switzerland? Foreign inventors could obtain patents in the United States regardless of their domestic legislation, so we can approach this question tangentially by examining the patterns of patenting in the United States by Swiss residents before and after the 1888 reforms. Between 1836 and 1888, Swiss residents obtained a grand total of 585 patents in the United States. Fully a third of these patents were for watches and music boxes, and only six were for textiles or dyeing, industries in which Switzerland was regarded as competitive early on. Swiss patentees were more oriented to the international market, rather than the small and unprotected domestic market where they could not hope to gain as much from their inventions. For instance, in 1872 Jean-Jacques Mullerpack of Basel collaborated with Leon Jarossonl of Lille, France to invent an improvement in dyeing black with aniline colors, which they assigned to William Morgan Brown of London, England. Another Basel inventor, Alfred Kern, assigned his 1883 patent for violet aniline dyes to the Badische Anilin and Soda Fabrik of Mannheim, Germany.

After the patent reforms, the rate of Swiss patenting in the United States immediately increased. Swiss patentees obtained an annual average of 32.8 patents in the United States in the decade before the patent law was enacted in Switzerland. After the Swiss allowed patenting, this figure increased to an average of 111 each year in the following six years, and in the period from 1895 to 1900 a total of 821 Swiss patents were filed in the United States. The decadal rate of patenting per million residents increased from 111.8 for the ten years up to the reforms, to 451 per million residents in the 1890s, 513 in the 1900s, 458 in the 1910s and 684 in the 1920s. U.S. statutes required worldwide novelty, and patents could not be granted for discoveries that had been in prior use, so the increase was not due to a backlog of trade secrets that were now patented.
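The before-and-after counts quoted above reduce to a couple of lines of arithmetic; a sketch (all counts are those given in the text, the ratios are derived):

```python
# Swiss patenting in the United States around the 1888 Swiss patent reform.
annual_before = 32.8        # average US patents per year in the decade before 1888
annual_after = 111          # average per year in the six years after the reform
total_1895_1900 = 821       # Swiss patents filed in the US, 1895-1900 (six years)

print(f"Post-reform rate: {annual_after / annual_before:.1f}x the pre-reform rate")
print(f"1895-1900 average: {total_1895_1900 / 6:.1f} patents per year")
```

The tripling-and-more of the annual rate within a few years of the reform is the basis for the claim that domestic patent law affected Swiss inventors’ international patenting.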

Moreover, the introduction of Swiss patent laws also affected the direction of the inventions that Swiss residents patented in the United States. After the passage of the law, such patents covered a much broader range of inventions, including gas generators, textile machines, explosives, turbines, paints and dyes, and drawing instruments and lamps. The relative importance of watches and music boxes immediately fell from about a third before the reforms to 6.2 percent and 2.1 percent respectively in the 1890s, and even further to 3.8 percent and 0.3 percent between 1900 and 1909. Another indication that international patenting was connected to domestic Swiss invention can be discerned from the fraction of Swiss patents (filed in the U.S.) that related to process innovations. Before 1888, 21 percent of the patent specifications mentioned a process. Between 1888 and 1907, the Swiss statutes required that patents include mechanical models, which precluded the patenting of pure processes. The fraction of specifications that mentioned a process fell during the period between 1888 and 1907, but returned to 22 percent when the restriction was modified in 1907.

In short, although the Swiss experience is often cited as proof of the redundancy of patent protection, the limitations of this special case should be taken into account. The domestic market was quite small and offered minimal opportunity or inducements for inventors to take advantage of economies of scale or cost-reducing innovations. Manufacturing tended to cluster in a few industries where innovation was largely irrelevant, such as premium chocolates, or in artisanal production that was susceptible to trade secrecy, such as watches and music boxes. In other areas, notably chemicals, dyes and pharmaceuticals, Swiss industries were export-oriented, but even today their output tends to be quite specialized and high-valued rather than mass-produced. Export-oriented inventors were likely to have been more concerned about patent protection in the important overseas markets, rather than in the home market. Thus, between 1888 and 1907, although Swiss laws excluded patents for chemicals, pharmaceuticals and dyes, 20.7 percent of the Swiss patents filed in the United States were for just these types of inventions. The scanty evidence on Switzerland suggests that the introduction of patent rights was accompanied by changes in the rate and direction of inventive activity. In any event, both the Netherlands and Switzerland featured unique circumstances that seem to hold few lessons for developing countries today.

The Patent System in the United States

The United States stands out as having established one of the most successful patent systems in the world. Over six million patents have been issued since 1790, and American industrial supremacy has frequently been credited to its favorable treatment of inventors and the inducements held out for inventive activity. The first Article of the U.S. Constitution included a clause empowering Congress to “promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries.” Congress complied by passing a patent statute in April 1790. In 1836 the United States created the first modern patent institution in the world, a system whose features differed in significant respects from those of other major countries. The historical record indicates that the legislature’s creation of a uniquely American system was a deliberate and conscious process of promoting open access to the benefits of private property rights in inventions. The laws were enforced by a judiciary willing to grapple with difficult questions, such as the extent to which a democratic and market-oriented political economy was consistent with exclusive rights. Courts explicitly attempted to implement decisions that promoted economic growth and social welfare.

The primary feature of the “American system” is that all applications are subject to an examination for conformity with the laws and for novelty. An examination system was set in place in 1790, when a select committee consisting of the Secretary of State (Thomas Jefferson), the Attorney General and the Secretary of War scrutinized the applications. These duties proved too time-consuming for highly ranked officials with other onerous responsibilities, so three years later the arrangement was replaced by a registration system. The validity of patents was left up to the district courts, which had the power to set in motion a process that could end in the repeal of a patent. However, by the 1830s this process was viewed as cumbersome, and the statute passed in 1836 set in place the essential structure of the current patent system. In particular, the 1836 Patent Law established the Patent Office, whose trained and technically qualified employees were authorized to examine applications. Employees of the Patent Office were not permitted to obtain patent rights. In order to constrain the ability of examiners to engage in arbitrary actions, the applicant was given the right to file a bill in equity to contest the decisions of the Patent Office, with the further right of appeal to the Supreme Court of the United States.

American patent policy likewise stands out in its insistence on affordable fees. The legislature debated the question of appropriate fees, and the first patent law in 1790 set the rate at the minimal sum of $3.70 plus copy costs. In 1793 the fees were increased to $30, and were maintained at this level until 1861. In that year they were raised to $35, and the term of the patent was changed from fourteen years (with the possibility of an extension) to seventeen years (with no extensions). The 1869 Report of the Commissioner of Patents compared the $35 fee for a U.S. patent to the significantly higher charges in European countries such as Britain, France, Russia ($450), Belgium ($420) and Austria ($350). The Commissioner speculated that both the private and social costs of patenting were lower in a system of impartial specialized examiners than under a system where similar services were performed on a fee-per-service basis by private solicitors. He pointed out that in the U.S. the fees were not intended to exact a price for the patent privilege or to raise revenues for the state – the disclosure of information was the sole price of the patent property right – rather, they were imposed merely to cover the administrative expenses of the Office.

The basic parameters of the U.S. patent system were transparent and predictable, in itself an aid to those who wished to obtain patent rights. In addition, American legislators were concerned with ensuring that information about the stock of patented knowledge was readily available and diffused rapidly. As early as 1805 Congress stipulated that the Secretary of State should publish an annual list of patents granted the preceding year, and after 1832 also required the publication in newspapers of notices regarding expired patents. The Patent Office itself was a source of centralized information on the state of the arts. However, Congress was also concerned with the question of providing for decentralized access to patent materials. The Patent Office maintained repositories throughout the country, where inventors could forward their patent models at the expense of the Patent Office. Rural inventors could apply for patents without significant obstacles, because applications could be submitted by mail free of postage.

American laws employed the language of the English statute in granting patents to “the first and true inventor.” Nevertheless, unlike in England, the phrase was interpreted literally, granting patents only for inventions that were original in the world, not simply within U.S. borders. American patent laws provided strong protection for citizens of the United States, but varied over time in their treatment of foreign inventors. Americans could not obtain patents for imported discoveries, and the earliest statutes, of 1793, 1800 and 1832, restricted patent property to citizens or to residents who declared that they intended to become citizens. As such, while an American could not appropriate patent rights to a foreign invention, he could freely use the idea without any need to bear licensing or similar costs that would otherwise have been due if the inventor had been able to obtain a patent in this country. In 1836, the stipulations on citizenship or residency were removed, but were replaced with discriminatory patent fees: foreigners could obtain a patent in the U.S. for a fee of three hundred dollars, or five hundred if they were British. After 1861 patent rights (with the exception of caveats) were available to all applicants on the same basis, without regard to nationality.

The American patent system was based on the presumption that social welfare coincided with the individual welfare of inventors. Accordingly, legislators rejected restrictions on the rights of American inventors. However, the 1832 and 1836 laws stipulated that foreigners had to exploit their patented invention within eighteen months. These clauses seem to have been interpreted by the courts in a fairly liberal fashion, since alien patentees “need not prove that they hawked the patented improvement to obtain a market for it, or that they endeavored to sell it to any person, but that it rested upon those who sought to defeat the patent to prove that the plaintiffs neglected or refused to sell the patented invention for reasonable prices when application was made to them to purchase.” Such provisions proved to be temporary aberrations and were not included in subsequent legislation. Working requirements or compulsory licenses were regarded as unwarranted infringements of the rights of “meritorious inventors,” and incompatible with the philosophy of U.S. patent grants. Patentees were not required to pay annuities to maintain their property, there were no opposition proceedings, and once granted a patent could not be revoked unless there was proven evidence of fraud.

One of the advantages of a system that secures property rights is that it facilitates contracts and trade. Assignments provide a straightforward index of the effectiveness of the American system, since trade in inventions would hardly proliferate if patent rights were uncertain or worthless. An extensive national network of licensing and assignments developed early on, aided by legal rulings that overturned contracts for useless or fraudulent patents. In 1845 the Patent Office recorded 2,108 assignments, which can be compared to the cumulative stock of 7,188 patents still in force in that year. By the 1870s assignments averaged over 9,000 per year, and in the next decade over 12,000 transactions were recorded annually. This flourishing market for patented inventions provided an incentive for further inventive activity by inventors who were able to appropriate the returns from their efforts, and also linked patents and productivity growth.

Property rights are worth little unless they can be legally enforced in a consistent, certain, and predictable manner. A significant part of the explanation for the success of the American intellectual property system relates to the efficiency with which the laws were interpreted and implemented. United States federal courts from their inception attempted to establish a store of doctrine that fulfilled the intent of the Constitution to secure the rights of intellectual property owners. The judiciary acknowledged that inventive efforts varied with the extent to which inventors could appropriate the returns on their discoveries, and attempted to ensure that patentees were not unjustly deprived of the benefits of their inventions. Numerous reported decisions of the early courts declared that, far from being unwarranted monopolies, patent rights were “sacred” and to be regarded as the just recompense of inventive ingenuity. Early courts had to grapple with a number of difficult issues, such as the appropriate measure of damages, disputes between owners of conflicting patents, and how to protect the integrity of contracts when the law changed. Changes inevitably occurred as litigants and judiciary alike adapted to a more complex inventive and economic environment. However, the system remained true to the Constitution in the belief that the defense of rights in patented invention was important in fostering industrial and economic development.

Economists such as Joseph Schumpeter have linked market concentration and innovation, and patent rights are often felt to encourage the establishment of monopoly enterprises. Thus, an important aspect of the enforcement of patents and intellectual property in general depends on competition or antitrust policies. The attitudes of the judiciary towards patent conflicts are primarily shaped by their interpretation of the monopoly aspect of the patent grant. The American judiciary in the early nineteenth century did not recognize patents as monopolies, arguing that patentees added to social welfare through innovations which had never existed before, whereas monopolists secured to themselves rights that already belong to the public. Ultimately, the judiciary came to openly recognize that the enforcement and protection of all property rights involved trade-offs between individual monopoly benefits and social welfare.

The passage of the Sherman Act in 1890 was associated with a populist emphasis on the need to protect the public from corporate monopolies, including those based on patent protection, and raised the prospect of conflicts between patent policies and the promotion of social welfare through industrial competition. Firms have rarely been charged directly with antitrust violations based on patent issues. At the same time, a number of landmark restraint-of-trade lawsuits have involved technological innovators. In the early decades of the twentieth century these included innovative enterprises such as John Deere & Co., American Can and International Harvester, through to the numerous cases since 1970 against IBM, Xerox, Eastman Kodak and, most recently, Intel and Microsoft. The evidence suggests that, holding other factors constant, more innovative firms and those with larger patent stocks are more likely to be charged with antitrust violations. A growing fraction of cases involve firms jointly charged with antitrust violations that are linked to patent-based market power and to concerns about “innovation markets.”

The Japanese Patent System

Japan emerged from the Meiji era as a follower nation which deliberately designed institutions to try to emulate those of the most advanced industrial countries. Accordingly, in 1886 Takahashi Korekiyo was sent on a mission to examine patent systems in Europe and the United States. The Japanese envoy was not favorably impressed with the European countries in this regard. Instead, he reported: “… we have looked about us to see what nations are the greatest, so that we could be like them; … and we said, ‘What is it that makes the United States such a great nation?’ and we investigated and we found it was patents, and we will have patents.” The first national patent statute in Japan was passed in 1888, and copied many features of the U.S. system, including the examination procedures.

However, even this first statute contained differences that reflected Japanese priorities and the “wise eclecticism of Japanese legislators.” For instance, patents were not granted to foreigners; protection could not be obtained for fashion, food products, or medicines; patents that were not worked within three years could be revoked; and severe remedies were imposed for infringement, including penal servitude. After Japan became a signatory of the Paris Convention, a new law was passed in 1899, which amended existing legislation to accord with the agreements of the Convention and extended protection to foreigners. The influence of German law was evident in subsequent reforms in 1909 (petty or utility patents were protected) and 1921 (protection was removed from chemical products, work-for-hire doctrines were adopted, and an opposition procedure was introduced). The Act of 1921 also permitted the state to revoke a patent grant on payment of appropriate compensation if it was deemed in the public interest. Medicines, food and chemical products could not be patented, but protection could be obtained for processes relating to their manufacture.

The modern Japanese patent system is an interesting amalgam of features drawn from the major patent institutions in the world. Patent applications are filed, and the applicants then have seven years within which they can request an examination. Before 1996 examined patents were published prior to the actual grant, and could be opposed before the final grant; but at present, opposition can only occur in the first six months after the initial grant. Patents are also given for utility models or incremental inventions which are required to satisfy a lower standard of novelty and nonobviousness and can be more quickly commercialized. It has been claimed that the Japanese system favors the filing of a plethora of narrowly defined claims for utility models that build on the more substantive contributions of patent grants, leading to the prospect of an anti-commons through “patent flooding.” Others argue that utility models aid diffusion and innovation in the early stages of the patent term, and that the pre-grant publication of patent specifications also promotes diffusion.

Harmonization of International Patent Laws

Today very few developed countries would seriously consider eliminating statutory protection for intellectual property, but in the second half of the nineteenth century the “patent controversy” pitted advocates of patent rights against an effective abolitionist movement. For a short period the latter group was strong enough to win support for dismantling the patent systems in countries such as England, and in 1863 the Congress of German Economists declared that “patents of invention are injurious to common welfare.” The movement achieved its greatest victory in Holland, which repealed its patent legislation in 1869. The abolitionists based their arguments on the benefits of free trade and competition and viewed patents as part of a protectionist strategy analogous to tariffs. Instead of monopoly awards, they argued, inventors could be rewarded through alternative policies, such as stipends from the government, payments from private industry or associations formed for that purpose, or simply through the lead time that the first inventor acquired over competitors by virtue of his prior knowledge.

The decisive victory of the patent proponents shifted the focus of interest to the other extreme, and led to efforts to attain uniformity in intellectual property rights regimes across countries. Part of the impetus for change occurred because the costs of discordant national rules became more burdensome as the volume of international trade in industrial products grew over time. Americans were also concerned about the lack of protection accorded to their exhibits in the increasingly prominent World’s Fairs. Indeed, the first international patent convention was held in Austria in 1873, at the suggestion of U.S. policy makers, who wanted to be certain that their inventors would be adequately protected at the International Exposition in Vienna that year. It also yielded an opportunity to protest the provisions in Austrian law which discriminated against foreigners, including a requirement that patents had to be worked within one year or risk invalidation. The Vienna Convention adopted several resolutions, including a recommendation, which the United States opposed, in favor of compulsory licenses if they were deemed in the public interest. However, the convention followed the U.S. lead and did not approve compulsory working requirements.

International conventions proliferated in subsequent years, and their tenor tended to reflect the opinions of the conveners. Their objective was not to reach compromise solutions that would reflect the needs and wishes of all participants, but rather to promote preconceived ideas. The overarching goal was to pursue uniform international patent laws, although there was little agreement about the finer points of these laws. It became clear that the goal of complete uniformity was not practicable, given the different objectives, ideologies and economic circumstances of participants. Nevertheless, in 1884 the International Union for the Protection of Industrial Property was signed by Belgium, Portugal, France, Guatemala, Italy, the Netherlands, San Salvador, Serbia, Spain and Switzerland. The United States became a member in 1887, and a significant number of developing countries followed suit, including Brazil, Bulgaria, Cuba, the Dominican Republic, Ceylon, Mexico, Trinidad and Tobago and Indonesia, among others.

The United States was the most prolific patenting nation in the world, many of the major American enterprises owed their success to patents and were expanding into international markets, and the U.S. patent system was recognized as the most successful. It is therefore not surprising that patent harmonization implied convergence towards the American model despite resistance from other nations. Countries such as Germany were initially averse to extending equal protection to foreigners because they feared that their domestic industry would be overwhelmed by American patents. Ironically, because its patent laws were the most liberal towards patentees, the United States found itself with a weaker bargaining position than nations that could make concessions by changing their provisions. The U.S. pressed for the adoption of reciprocity (which would ensure that American patentees were treated as favorably abroad as in the United States) but this principle was rejected in favor of “national treatment” (American patentees were to be granted the same rights as nationals of the foreign country). This likely influenced the U.S. tendency to use bilateral trade sanctions rather than multilateral conventions to obtain reforms in international patent policies.

It was commonplace in the nineteenth century to rationalize and advocate close links between trade policies, protection, and international laws regarding intellectual property. These links were evident at the most general philosophical level, and at the most specific, especially in terms of compulsory working requirements and provisions to allow imports by the patentee. For instance, the 1880 Paris Convention considered the question of imports of the patented product by the patentee. According to the laws of France, Mexico and Tunisia, such importation would result in the repeal of the patent grant. The Convention inserted an article that explicitly ruled out forfeiture of the patent under these circumstances, which led some French commentators to argue that “the laws on industrial property… will be truly disastrous if they do not have a counterweight in tariff legislation.” The movement to create an international patent system elucidated the fact that intellectual property laws do not exist in a vacuum, but are part of a bundle of rights that are affected by other laws and policies.


Appropriate institutions to promote creations in the material and intellectual sphere are especially critical because ideas and information are public goods that are characterized by nonrivalry and nonexclusion. Once the initial costs are incurred, ideas can be reproduced at zero marginal cost and it may be difficult to exclude others from their use. Thus, in a competitive market, public goods may suffer from underprovision or may never be created because of a lack of incentive on the part of the original provider who bears the initial costs but may not be able to appropriate the benefits. Market failure can be ameliorated in several ways, for instance through government provision, rewards or subsidies to original creators, private patronage, and through the creation of intellectual property rights.

Patents allow the initial producers a limited period during which they are able to benefit from a right of exclusion. If creativity is a function of expected profits, these grants to inventors have the potential to increase social production possibilities at lower cost. Disclosure requirements promote diffusion, and the expiration of the temporary monopoly right ultimately adds to the public domain. Overall welfare is enhanced if the social benefits of diffusion outweigh the deadweight and social costs of temporary exclusion. This period of exclusion may be costly for society, especially if future improvements are deterred, and if rent-seeking such as redistributive litigation results in wasted resources. Much attention has also been accorded to theoretical features of the optimal system, including the breadth, longevity, and height of patent and copyright grants.
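The welfare trade-off described here can be made concrete with a stylized partial-equilibrium sketch. Assuming linear demand and constant marginal cost (all numbers below are illustrative choices, not drawn from the text), the patentee's reward and the deadweight cost of temporary exclusion can be computed side by side:

```python
# Stylized welfare accounting for a patented invention (illustrative numbers).
# Demand: P = a - b*Q, with constant marginal cost c.
a, b, c = 10.0, 1.0, 2.0

# Competitive outcome (after the patent expires): price equals marginal cost.
q_comp = (a - c) / b
cs_comp = 0.5 * (a - c) * q_comp          # all surplus goes to consumers

# Monopoly outcome during the patent term.
q_mono = (a - c) / (2 * b)                # monopolist restricts output
p_mono = a - b * q_mono
profit = (p_mono - c) * q_mono            # the patentee's reward
cs_mono = 0.5 * (a - p_mono) * q_mono     # reduced consumer surplus

# Deadweight loss: surplus destroyed by temporary exclusion.
dwl = cs_comp - (cs_mono + profit)

print(profit, cs_mono, dwl)               # prints: 16.0 8.0 8.0
```

In this toy economy the patent transfers 16 units of surplus to the inventor at a deadweight cost of 8 per period of exclusion; society gains overall only if that reward induces an invention whose discounted surplus would not otherwise have existed, which is the comparison the paragraph above describes.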

However, strongly enforced rights do not always benefit the producers and owners of intellectual property rights, especially if there is a prospect of cumulative invention where follow-on inventors build on the first discovery. Thus, more nuanced models are ambivalent about the net welfare benefits of strong exclusive rights to inventions. Indeed, network models imply that the social welfare of even producers may increase from weak enforcement if more extensive use of the product increases the value to all users. Under these circumstances, the patent owner may benefit from the positive externalities created by piracy. In the absence of royalties, producers may appropriate returns through ancillary means, such as the sale of complementary items or improved reputation. In a variant of the durable-goods monopoly problem, it has been shown that piracy can theoretically increase the demand for products by ensuring that producers can credibly commit to uniform prices over time. Also in this vein, price and/or quality discrimination of non-private goods across pirates and legitimate users can result in net welfare benefits for society and for the individual firm. If the cost of imitation increases with quality, infringement can also benefit society if it causes firms to adopt a strategy of producing higher quality commodities.

Economic theorists who are troubled by the imperfections of intellectual property grants have proposed alternative mechanisms that lead to more satisfactory mathematical solutions. Theoretical analyses have advanced our understanding in this area, but such models by their nature cannot capture many complexities. They tend to overlook such factors as the potential for greater corruption or arbitrariness in the administration of alternatives to patents. Similarly, they fail to appreciate the role of private property rights in conveying information and facilitating markets, and their value in reducing risk and uncertainty for independent inventors with few private resources. The analysis becomes even less satisfactory when producers belong to different countries than consumers. Thus, despite the flurry of academic research on the economics of intellectual property, we have not progressed far beyond Fritz Machlup’s declaration that our state of knowledge does not allow us to recommend either the introduction or the removal of such systems. Existing studies leave a wide area of ambiguity about the causes and consequences of institutional structures in general, and about their evolution across time and region.

In the realm of intellectual property, questions from four centuries ago are still current, ranging from its philosophical underpinnings, to whether patents and copyrights constitute optimal policies towards intellectual inventions, to the growing concerns of international political economy. A number of scholars are so impressed with technological advances in the twenty-first century that they argue we have reached a critical juncture where we need completely new institutions. Throughout their history, patent and copyright regimes have confronted and accommodated technological innovations that were no less significant and contentious for their time. An economist from the nineteenth century would have been equally familiar with considerations about whether uniformity in intellectual property rights across countries harmed or benefited global welfare, and whether piracy might be to the advantage of developing countries. Similarly, the link between trade and intellectual property rights that informs the TRIPS (trade-related aspects of intellectual property rights) agreement was quite standard two centuries ago.

Today the majority of patents are filed in developed countries by the residents of developed countries, most notably those of Japan and the United States. The developing countries of the twenty-first century are under significant political pressure to adopt stronger patent laws and enforcement, even though few patents are filed by residents of the developing countries. Critics of intellectual property rights point to costs, such as monopoly rents and higher barriers to entry, administrative costs, outflows of royalty payments to foreign entities, and a lack of indigenous innovation. Other studies, however, have more optimistic findings regarding the role of patents in economic and social development. They suggest that stronger protection can encourage more foreign direct investment, greater access to technology, and increased benefits from trade openness. Moreover, both economic history and modern empirical research indicate that stronger patent rights and more effective markets in invention can, by encouraging and enabling the inventiveness of ordinary citizens of developing countries, help to increase social and economic welfare.

Patent Statistics for France, Britain, the United States and Germany, 1790-1960

Year France Britain United States Germany (a “.” indicates no data available)
1790 . 68 3 .
1791 34 57 33 .
1792 29 85 11 .
1793 4 43 20 .
1794 0 55 22 .
1795 1 51 12 .
1796 8 75 44 .
1797 4 54 51 .
1798 10 77 28 .
1799 22 82 44 .
1800 16 96 41 .
1801 34 104 44 .
1802 29 107 65 .
1803 45 73 97 .
1804 44 60 84 .
1805 63 95 57 .
1806 101 99 63 .
1807 66 94 99 .
1808 61 95 158 .
1809 52 101 203 .
1810 93 108 223 .
1811 66 115 215 0
1812 96 119 238 2
1813 88 142 181 2
1814 53 96 210 1
1815 77 102 173 10
1816 115 118 206 10
1817 162 103 174 16
1818 153 132 222 18
1819 138 101 156 10
1820 151 97 155 10
1821 180 109 168 11
1822 175 113 200 8
1823 187 138 173 22
1824 217 180 228 25
1825 321 250 304 17
1826 281 131 323 67
1827 333 150 331 69
1828 388 154 368 87
1829 452 130 447 59
1830 366 180 544 57
1831 220 150 573 34
1832 287 147 474 46
1833 431 180 586 76
1834 576 207 630 66
1835 556 231 752 73
1836 582 296 702 65
1837 872 256 426 46
1838 1312 394 514 104
1839 730 411 404 125
1840 947 440 458 156
1841 925 440 490 162
1842 1594 371 488 153
1843 1397 420 493 160
1844 1863 450 478 158
1845 2666 572 473 256
1846 2750 493 566 252
1847 2937 493 495 329
1848 1191 388 583 256
1849 1953 514 984 253
1850 2272 523 883 308
1851 2462 455 752 274
1852 3279 1384 885 272
1853 4065 2187 844 287
1854 4563 1878 1755 276
1855 5398 2046 1881 287
1856 5761 1094 2302 393
1857 6110 2028 2674 414
1858 5828 1954 3455 375
1859 5439 1977 4160 384
1860 6122 2063 4357 550
1861 5941 2047 3020 551
1862 5859 2191 3214 630
1863 5890 2094 3773 633
1864 5653 2024 4630 557
1865 5472 2186 6088 609
1866 5671 2124 8863 549
1867 6098 2284 12277 714
1868 6103 2490 12526 828
1869 5906 2407 12931 616
1870 3850 2180 12137 648
1871 2782 2376 11659 458
1872 4875 2771 12180 958
1873 5074 2974 11616 1130
1874 5746 3162 12230 1245
1875 6007 3112 13291 1382
1876 6736 3435 14169 1947
1877 7101 3317 12920 1604
1878 7981 3509 12345 4200
1879 7828 3524 12165 4410
1880 7660 3741 12902 3960
1881 7813 3950 15500 4339
1882 7724 4337 18091 4131
1883 8087 3962 21162 4848
1884 8253 9983 19118 4459
1885 8696 8775 23285 4018
1886 9011 9099 21767 4008
1887 8863 9226 20403 3882
1888 8669 9309 19551 3923
1889 9287 10081 23324 4406
1890 9009 10646 25313 4680
1891 9292 10643 22312 5550
1892 9902 11164 22647 5900
1893 9860 11600 22750 6430
1894 10433 11699 19855 6280
1895 10257 12191 20856 5720
1896 11430 12473 21822 5410
1897 12550 14210 22067 5440
1898 12421 14167 20377 5570
1899 12713 14160 23278 7430
1900 12399 13710 24644 8784
1901 12103 13062 25546 10508
1902 12026 13764 27119 10610
1903 12469 15718 31029 9964
1904 12574 15089 30258 9189
1905 12953 14786 29775 9600
1906 13097 14707 31170 13430
1907 13170 16272 35859 13250
1908 13807 16284 32735 11610
1909 13466 15065 36561 11995
1910 16064 15269 35141 12100
1911 15593 17164 32856 12640
1912 15737 15814 36198 13080
1913 15967 16599 33917 13520
1914 12161 15036 39892 12350
1915 5056 11457 43118 8190
1916 3250 8424 43892 6271
1917 4100 9347 40935 7399
1918 4400 10809 38452 7340
1919 10500 12301 36797 7766
1920 18950 14191 37060 14452
1921 17700 17697 37798 15642
1922 18300 17366 38369 20715
1923 19200 17073 38616 20526
1924 19200 16839 42584 18189
1925 18000 17199 46432 15877
1926 18200 17333 44733 15500
1927 17500 17624 41717 15265
1928 22000 17695 42357 15598
1929 24000 18937 45267 20202
1930 24000 20888 45226 26737
1931 24000 21949 51761 25846
1932 21850 21150 53504 26201
1933 20000 17228 48807 21755
1934 19100 16890 44452 17011
1935 18000 17675 40663 16139
1936 16700 17819 39831 16750
1937 16750 17614 37738 14526
1938 14000 19314 38102 15068
1939 15550 17605 43118 16525
1940 10100 11453 42323 14647
1941 8150 11179 41171 14809
1942 10000 7962 38514 14648
1943 12250 7945 31101 14883
1944 11650 7712 28091 .
1945 7360 7465 25712 .
1946 11050 8971 21859 .
1947 13500 11727 20191 .
1948 13700 15558 24007 .
1949 16700 20703 35224 .
1950 17800 13509 43219 .
1951 25200 13761 44384 27767
1952 20400 21380 43717 37179
1953 43000 17882 40546 37113
1954 34000 17985 33910 19140
1955 23000 20630 30535 14760
1956 21900 19938 46918 18150
1957 23000 25205 42873 20467
1958 24950 18531 48450 19837
1959 41600 18157 52509 22556
1960 35000 26775 47286 19666

Additional Reading

Khan, B. Zorina. The Democratization of Invention: Patents and Copyrights in American Economic Development. New York: Cambridge University Press, 2005.

Khan, B. Zorina, and Kenneth L. Sokoloff. “Institutions and Technological Innovation during Early Economic Growth, 1790-1930.” NBER Working Paper No. 10966. Cambridge, MA: December 2004.


Besen, Stanley M., and Leo J. Raskind, “Introduction to the Law and Economics of Intellectual Property.” Journal of Economic Perspectives 5, no. 1 (1991): 3-27.

Bugbee, Bruce. The Genesis of American Patent and Copyright Law. Washington, DC: Public Affairs Press, 1967.

Coulter, Moureen. Property in Ideas: The Patent Question in Mid-Victorian England. Kirksville, MO: Thomas Jefferson Press, 1991.

Dutton, H. I. The Patent System and Inventive Activity during the Industrial Revolution, 1750-1852. Manchester, UK: Manchester University Press, 1984.

Epstein, R. “Industrial Inventions: Heroic or Systematic?” Quarterly Journal of Economics 40 (1926): 232-72.

Gallini, Nancy T. “The Economics of Patents: Lessons from Recent U.S. Patent Reform.” Journal of Economic Perspectives 16, no. 2 (2002): 131–54.

Gilbert, Richard and Carl Shapiro. “Optimal Patent Length and Breadth.” Rand Journal of Economics 21 (1990): 106-12.

Gilfillan, S. Colum. The Sociology of Invention. Cambridge, MA: Follett, 1935.

Gomme, A. A. Patents of Invention: Origin and Growth of the Patent System in Britain. London: Longmans Green, 1946.

Harding, Herbert. Patent Office Centenary. London: Her Majesty’s Stationery Office, 1953.

Hilaire-Pérez, Liliane. Inventions et Inventeurs en France et en Angleterre au XVIIIe siècle. Lille: Université de Lille, 1994.

Hilaire-Pérez, Liliane. L’invention technique au siècle des Lumières. Paris: Albin Michel, 2000.

Jeremy, David J. Transatlantic Industrial Revolution: The Diffusion of Textile Technologies between Britain and America, 1790-1830s. Cambridge, MA: MIT Press, 1981.

Khan, B. Zorina. “Property Rights and Patent Litigation in Early Nineteenth-Century America.” Journal of Economic History 55, no. 1 (1995): 58-97.

Khan, B. Zorina. “Married Women’s Property Right Laws and Female Commercial Activity.” Journal of Economic History 56, no. 2 (1996): 356-88.

Khan, B. Zorina. “Federal Antitrust Agencies and Public Policy towards Patents and Innovation.” Cornell Journal of Law and Public Policy 9 (1999): 133-69.

Khan, B. Zorina. “‘Not for Ornament’: Patenting Activity by Women Inventors.” Journal of Interdisciplinary History 33, no. 2 (2000): 159-95.

Khan, B. Zorina. “Technological Innovations and Endogenous Changes in U.S. Legal Institutions, 1790-1920.” NBER Working Paper No. 10346. Cambridge, MA: March 2004.

Khan, B. Zorina, and Kenneth L. Sokoloff. “‘Schemes of Practical Utility’: Entrepreneurship and Innovation among ‘Great Inventors’ in the United States, 1790-1865.” Journal of Economic History 53, no. 2 (1993): 289-307.

Khan, B. Zorina, and Kenneth L. Sokoloff. “Entrepreneurship and Technological Change in Historical Perspective.” Advances in the Study of Entrepreneurship, Innovation, and Economic Growth 6 (1993): 37-66.

Khan, B. Zorina, and Kenneth L. Sokoloff. “Two Paths to Industrial Development and Technological Change.” In Technological Revolutions in Europe, 1760-1860, edited by Maxine Berg and Kristine Bruland. London: Edward Elgar, London, 1997.

Khan, B. Zorina, and Kenneth L. Sokoloff. “The Early Development of Intellectual Property Institutions in the United States.” Journal of Economic Perspectives 15, no. 3 (2001): 233-46.

Khan, B. Zorina, and Kenneth L. Sokoloff. “Innovation of Patent Systems in the Nineteenth Century: A Comparative Perspective.” Unpublished manuscript (2001).

Khan, B. Zorina, and Kenneth L. Sokoloff. “Institutions and Democratic Invention in Nineteenth-century America.” American Economic Review Papers and Proceedings 94 (2004): 395-401.

Khan, B. Zorina, and Kenneth L. Sokoloff. “Institutions and Technological Innovation during Early Economic Growth: Evidence from the Great Inventors of the United States, 1790-1930.” In Institutions and Economic Growth, edited by Theo Eicher and Cecilia Garcia-Penalosa. Cambridge, MA: MIT Press, 2006.

Lamoreaux, Naomi R. and Kenneth L. Sokoloff. “Long-Term Change in the Organization of Inventive Activity.” Science, Technology and the Economy 93 (1996): 1286-92.

Lamoreaux, Naomi R. and Kenneth L. Sokoloff. “The Geography of Invention in the American Glass Industry, 1870-1925.” Journal of Economic History 60, no. 3 (2000): 700-29.

Lamoreaux, Naomi R. and Kenneth L. Sokoloff. “Market Trade in Patents and the Rise of a Class of Specialized Inventors in the Nineteenth-century United States.” American Economic Review 91, no. 2 (2001): 39-44.

Landes, David S. Unbound Prometheus: Technological Change and Industrial Development in Western Europe from 1750 to the Present. Cambridge: Cambridge University Press, 1969.

Lerner, Josh. “Patent Protection and Innovation over 150 Years.” NBER Working Paper No. 8977. Cambridge, MA: June 2002.

Levin, Richard, A. Klevorick, R. Nelson and S. Winter. “Appropriating the Returns from Industrial Research and Development.” Brookings Papers on Economic Activity 3 (1987): 783-820.

Lo, Shih-Tse. “Strengthening Intellectual Property Rights: Evidence from the 1986 Taiwanese Patent Reforms.” Ph.D. diss., University of California at Los Angeles, 2005.

Machlup, Fritz. An Economic Review of the Patent System. Washington, DC: U.S. Government Printing Office, 1958.

Machlup, Fritz. “The Supply of Inventors and Inventions.” In The Rate and Direction of Inventive Activity, edited by R. Nelson. Princeton: Princeton University Press, 1962.

Machlup, Fritz, and Edith Penrose. “The Patent Controversy in the Nineteenth Century.” Journal of Economic History 10, no. 1 (1950): 1-29.

Macleod, Christine. Inventing the Industrial Revolution. Cambridge: Cambridge University Press, 1988.

McCloy, Shelby T. French Inventions of the Eighteenth Century. Lexington: University of Kentucky Press, 1952.

Mokyr, Joel. The Lever of Riches: Technological Creativity and Economic Growth. New York: Oxford University Press, 1990.

Moser, Petra. “How Do Patent Laws Influence Innovation? Evidence from Nineteenth-century World Fairs.” American Economic Review 95, no. 4 (2005): 1214-36.

O’Dell, T. H. Inventions and Official Secrecy: A History of Secret Patents in the United Kingdom. Oxford: Clarendon Press, 1994.

Penrose, Edith. The Economics of the International Patent System. Baltimore: Johns Hopkins University Press, 1951.

Sáiz González, Patricio. Invención, patentes e innovación en la España contemporánea. Madrid: OEPM, 1999.

Schmookler, Jacob. “Economic Sources of Inventive Activity.” Journal of Economic History 22 (1962): 1-20.

Schmookler, Jacob. Invention and Economic Growth. Cambridge, MA: Harvard University Press, 1966.

Schmookler, Jacob, and Zvi Griliches. “Inventing and Maximizing.” American Economic Review (1963): 725-29.

Schiff, Eric. Industrialization without National Patents: The Netherlands, 1869-1912; Switzerland, 1850-1907. Princeton: Princeton University Press, 1971.

Sokoloff, Kenneth L. “Inventive Activity in Early Industrial America: Evidence from Patent Records, 1790-1846.” Journal of Economic History 48, no. 4 (1988): 813-50.

Sokoloff, Kenneth L. “Invention, Innovation, and Manufacturing Productivity Growth in the Antebellum Northeast.” In American Economic Growth and Standards of Living before the Civil War, edited by Robert E. Gallman and John Joseph Wallis, 345-78. Chicago: University of Chicago Press, 1992.

Sokoloff, Kenneth L., and B. Zorina Khan. “The Democratization of Invention during Early Industrialization: Evidence from the United States, 1790-1846.” Journal of Economic History 50, no. 2 (1990): 363-78.

Sutthiphisal, Dhanoos. “Learning-by-Producing and the Geographic Links between Invention and Production.” Unpublished manuscript, McGill University, 2005.

Takeyama, Lisa N. “The Welfare Implications of Unauthorized Reproduction of Intellectual Property in the Presence of Demand Network Externalities.” Journal of Industrial Economics 42, no. 2 (1994): 155-66.

U.S. Patent Office. Annual Report of the Commissioner of Patents. Washington, DC: various years.

Van Dijk, T. “Patent Height and Competition in Product Improvements.” Journal of Industrial Economics 44, no. 2 (1996): 151-67.

Vojacek, Jan. A Survey of the Principal National Patent Systems. New York: Prentice-Hall, 1936.

Woodcroft, Bennet. Alphabetical Index of Patentees of Inventions [1617-1852]. New York: A. Kelley, 1854, reprinted 1969.

Woodcroft, Bennet. Titles of Patents of Invention: Chronologically Arranged from March 2, 1617 to October 1, 1852. London: Queen’s Printing Office, 1854.

Citation: Khan, B. “An Economic History of Patent Institutions.” EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008.

The Economic History of Norway

Ola Honningdal Grytten, Norwegian School of Economics and Business Administration


Norway, with its population of 4.6 million on the northern flank of Europe, is today one of the wealthiest nations in the world, measured both as GDP per capita and in capital stock. On the United Nations Human Development Index, Norway has been among the top three countries for several years, and in some years the very top nation. Huge stocks of natural resources combined with a skilled labor force and the adoption of new technology made Norway a prosperous country during the nineteenth and twentieth centuries.

Table 1 shows rates of growth in the Norwegian economy from 1830 to the present, using inflation-adjusted gross domestic product (GDP). This article splits the economic history of Norway into two major phases: before and after the nation gained its independence in 1814.

Table 1
Phases of Growth in the Real Gross Domestic Product of Norway, 1830-2003

(annual growth rates as percentages)

Period GDP GDP per capita
1830-1843 1.91 0.86
1843-1875 2.68 1.59
1875-1914 2.02 1.21
1914-1945 2.28 1.55
1945-1973 4.73 3.81
1973-2003 3.28 2.79
1830-2003 2.83 2.00

Source: Grytten (2004b)
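The growth rates in Table 1 are average annual rates over each period. A minimal sketch of the standard compound-growth calculation behind such figures (the GDP levels below are illustrative, not Grytten's underlying data):

```python
def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate over a period, in percent per year."""
    return ((end_value / start_value) ** (1.0 / years) - 1.0) * 100.0

# Illustrative only: an economy that doubles its real GDP over 35 years
# has grown at roughly 2 percent per year.
print(round(cagr(100.0, 200.0, 35), 2))  # prints: 2.0
```

The per-capita rates in the table follow the same formula applied to GDP divided by population, which is why they run below the aggregate rates whenever population is growing.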

Before Independence

The Norwegian economy was traditionally based on local farming communities combined with other industries, chiefly fishing, hunting, wood and timber, along with a merchant fleet engaged in both domestic and international trade. Due to topography and climatic conditions, the communities in the north and west were more dependent on fish and foreign trade than the communities in the south and east, which relied mainly on agriculture. Agricultural output, fish catches and wars were decisive for the swings in the economy prior to independence. This is reflected in Figure 1, which reports a consumer price index for Norway from 1516 to the present.

The peaks in this figure mark the sixteenth-century Price Revolution (1530s to 1590s), the Thirty Years War (1618-1648), the Great Nordic War (1700-1721), the Napoleonic Wars (1800-1815), the only period of hyperinflation in Norway, during World War I (1914-1918), and the stagflation period, i.e., high rates of inflation combined with a slowdown in production, in the 1970s and early 1980s.

Figure 1
Consumer Price Index for Norway, 1516-2003 (1850 = 100).

Source: Grytten (2004a)

During the last decades of the eighteenth century the Norwegian economy bloomed in a first era of liberalism. Foreign trade in fish and timber had been important to the Norwegian economy for centuries, and now the merchant fleet was growing rapidly. Bergen, on the west coast, was the major city, with a Hanseatic office and one of the Nordic countries' largest ports for domestic and foreign trade.

When Norway gained its independence from Denmark in 1814, after a tight union lasting 417 years, it was a typical egalitarian country with a high degree of self-sufficiency in agriculture, fisheries and hunting. According to the population censuses of 1801 and 1815, more than ninety percent of the population of 0.9 million lived in rural areas, mostly on small farms.

After Independence (1814)

Figure 2 shows the annual development of GDP by expenditure (in fixed 2000 prices) from 1830 to 2003. The series reveals, with few exceptions, steady growth and few large fluctuations. Economic growth as a more or less continuous process, however, started only in the 1840s, and the growth process slowed during the last three decades of the nineteenth century. The years 1914-1945 were more volatile than any other period in question, while there was an impressive and steady rate of growth until the mid-1970s and slower growth thereafter.

Figure 2
Gross Domestic Product for Norway by Expenditure Category
(in 2000 Norwegian Kroner)

Source: Grytten (2004b)

Stagnation and Institution Building, 1814-1843

The newborn state lacked its own institutions, industrial entrepreneurs and domestic capital. However, thanks to its huge stocks of natural resources and its proximity to the sea and to the United Kingdom, the new state, linked to Sweden in a loose royal union, seized its opportunities within a few decades. By 1870 it had become a relatively wealthy nation. Measured by GDP per capita, Norway was well above the European average, in the middle of the West European countries, and in fact well above Sweden.

During the first decades after its independence from Denmark, the new state struggled with the international recession after the Napoleonic wars, deflationary monetary policy, and protectionism from the UK.

The Central Bank of Norway was founded in 1816, and a national currency, the spesidaler, pegged to silver, was introduced. The daler depreciated heavily during the troubled recession years of the 1820s.

The Great Boom, 1843-1875

After the Norwegian spesidaler regained its par value against silver in 1842, Norway saw a period of significant economic growth lasting until the mid-1870s. This impressive growth was matched by only a few other countries. The growth process was driven largely by high productivity growth in agriculture and the success of the foreign sector. The adoption of new structures and technology, along with a shift from arable to livestock production, increased labor productivity in agriculture by about 150 percent between 1835 and 1910. Exports of timber, fish and, in particular, maritime services achieved high growth rates. In fact, Norway became a major power in shipping services during this period, accounting for about seven percent of the world merchant fleet in 1875. Norwegian sailing vessels carried international freight all over the world at low prices.

The success of the Norwegian foreign sector can be explained by a number of factors. Liberalization of world trade and high international demand secured a market for Norwegian goods and services. In addition, Norway had vast stocks of fish and timber along with maritime skills. According to recent calculations, GDP per capita grew at an annual rate of 1.6 percent from 1843 to 1876, well above the European average. At the same time the Norwegian annual rate of growth for exports was 4.8 percent. The first modern large-scale manufacturing industry in Norway emerged in the 1840s, when textile plants and mechanized industry were established. A second wave of industrialization took place in the 1860s and 1870s. Following the rapid productivity growth in agriculture, the food-processing and dairy industries also grew rapidly in this period.

During this great boom, capital was imported mainly from Britain, but also from Sweden, Denmark and Germany, the four most important Norwegian trading partners at the time. In 1536 the King of Denmark and Norway had chosen the Lutheran faith as the state religion. In consequence of the Reformation, reading became compulsory, and Norway thus acquired a generally skilled and independent labor force. The constitution of 1814 also cleared the way for liberalism and democracy. The puritan revivals of the nineteenth century created a business environment that fostered entrepreneurship, domestic capital and a productive labor force. In the western and southern parts of the country these puritan movements are still strong, both in daily life and within business.

Relative Stagnation with Industrialization, 1875-1914

Norway’s economy was hit hard during the “depression” from the mid-1870s to the early 1890s. GDP stagnated, particularly during the 1880s, and prices fell until 1896. This stagnation is mirrored in the large-scale emigration from Norway to North America in the 1880s. At its peak in 1882 as many as 28,804 persons, 1.5 percent of the population, left the country. All in all, 250,000 emigrated in the period 1879-1893, equal to 60 percent of the birth surplus. Only Ireland had a higher emigration rate than Norway between 1836 and 1930, when 860,000 Norwegians left the country.

The long slowdown can largely be explained by Norway’s dependence on the international economy, and in particular on the United Kingdom, which experienced slower economic growth than the other major economies of the time. As a result of the international slowdown, Norwegian exports contracted in several years, though they expanded in others. A second reason for the slowdown in Norway was the introduction of the international gold standard. Norway adopted gold in January 1874, and due to the trade deficit, lack of gold and lack of capital, the country experienced a huge contraction in gold reserves and in the money stock. The deflationary effect strangled the economy. Going onto the gold standard caused the appreciation of the Norwegian currency, the krone, as gold became relatively more expensive compared to silver. A third explanation of Norway’s economic problems in the 1880s is the transformation from sailing to steam vessels. By 1875 Norway had the fourth-largest merchant fleet in the world. However, due to lack of capital and technological skills, the transformation from sail to steam was slow. Norwegian ship owners found a niche in cheap second-hand sailing vessels, but their market was diminishing, and by the time the Norwegian steam fleet surpassed the sailing fleet in 1907, Norway was no longer a major maritime power.

A short boom occurred from the early 1890s to 1899. Then a crash in the Norwegian building industry led to a major financial crash and stagnation in GDP per capita from 1900 to 1905. Thus, from the mid-1870s until 1905 Norway performed relatively badly. Measured by GDP per capita, Norway, like Britain, experienced a significant stagnation relative to most western economies.

After 1905, when Norway gained full independence from Sweden, a heavy wave of industrialization took place. In the 1890s the fish-preserving and cellulose and paper industries had started to grow rapidly. From 1905, when Norsk Hydro was established, manufacturing connected to hydroelectric power took off. It is argued, quite convincingly, that if there was an industrial breakthrough in Norway, it must have taken place during the years 1905-1920. However, the primary sector, with its labor-intensive agriculture and increasingly capital-intensive fisheries, was still the biggest sector.

Crises and Growth, 1914-1945

Officially Norway was neutral during World War I. However, in economic terms the government clearly took the side of the British and their allies. Through several treaties Norway granted privileges to the allied powers, which in turn protected the Norwegian merchant fleet. During the war’s first years Norwegian ship owners profited, and the economy boomed. From 1917, when Germany declared unrestricted submarine warfare, Norway took heavy losses, and a recession replaced the boom.

Norway suspended gold redemption in August 1914, and due to inflationary monetary policy during the war and in the first couple of years afterward, demand was very high. When the war came to an end this excess demand was met by a positive shift in supply. Thus Norway, like other Western countries, experienced a significant boom in the economy from the spring of 1919 to the early autumn of 1920. The boom brought high inflation, trade deficits, currency depreciation and an overheated economy.

The international postwar recession, beginning in autumn 1920, hit Norway more severely than most other countries. In 1921 GDP per capita fell by eleven percent, a decline exceeded only in the United Kingdom. There are two major reasons for the devastating effect of the postwar recession. In the first place, as a small open economy, Norway was more sensitive to international recessions than most other countries, particularly because the recession hit the country’s most important trading partners, the United Kingdom and Sweden, so hard. Secondly, the combination of a strong and mostly pro-cyclical inflationary monetary policy from 1914 to 1920 and a hard deflationary policy thereafter made the crisis worse (Figure 3).

Figure 3
Money Aggregates for Norway, 1910-1930

Source: Klovland (2004a)

In fact, Norway pursued a long but inconsistently applied deflationary monetary policy aimed at restoring the par value of the krone (NOK), which it achieved in May 1928. In consequence, another recession hit the economy during the middle of the 1920s, making Norway one of the worst performers in the western world in that decade. This is best seen in the wave of bankruptcies, a huge financial crisis and mass unemployment. Bank losses amounted to seven percent of GDP in 1923. Total unemployment rose from about one percent in 1919 to more than eight percent in 1926 and 1927. In manufacturing it reached more than 18 percent in the same years.

Despite a rapid boom and success in the whaling industry and shipping services, the country never saw a convincing recovery before the Great Depression hit Europe in the late summer of 1930. The worst year for Norway was 1931, when GDP per capita fell by 8.4 percent. This, however, was due not only to the international crisis but also to a massive and violent labor conflict that year. According to the implicit GDP deflator, prices fell by more than 63 percent from 1920 to 1933.

All in all, however, the depression of the 1930s was milder and shorter in Norway than in most western countries. This was partly due to the deflationary monetary policy of the 1920s, which had forced Norwegian companies to become more efficient in order to survive. Probably more important, however, was that Norway left gold as early as September 27, 1931, only a week after the United Kingdom. The countries that left gold early, and could thereby pursue a more inflationary monetary policy, were the best performers of the 1930s. Among them were Norway and its most important trading partners, the United Kingdom and Sweden.

During the recovery period, Norway in particular saw growth in manufacturing output, exports and import substitution. This can to a large extent be explained by currency depreciation. Also, when the international merchant fleet contracted during the drop in international trade, the Norwegian fleet grew rapidly, as Norwegian ship owners were pioneers in the transformation from steam to diesel engines, tramp to line freights and into a new expanding niche: oil tankers.

The primary sector was still the largest in the economy during the interwar years. Both fisheries and agriculture struggled with overproduction problems, however. These were dealt with by introducing market controls and cartels, partly controlled by the industries themselves and partly by the government.

The business cycle reached its trough in late 1932. Despite relatively rapid recovery and significant growth in both GDP and employment, unemployment stayed high, reaching 10-11 percent on an annual basis from 1931 to 1933 (Figure 4).

Figure 4
Unemployment Rate and Public Relief Work
as a Percent of the Work Force, 1919-1939

Source: Hodne and Grytten (2002)

Living standards deteriorated in the primary sector, among those employed in domestic services, and for the underemployed and unemployed and their households. However, due to the strong deflation, which made consumer prices fall by more than 50 percent from autumn 1920 to summer 1933, employees in manufacturing, construction and crafts experienced an increase in real wages. Unemployment stayed persistently high due to huge growth in labor supply, a result of the immigration restrictions imposed by the North American countries from the 1920s onwards.

Denmark and Norway both fell victim to a German surprise attack on April 9, 1940. After two months of fighting, the allied troops in Norway surrendered on June 7 and the Norwegian royal family and government escaped to Britain.

From then until the end of the war there were two Norwegian economies: the domestic, German-controlled economy and the foreign, Norwegian- and Allied-controlled economy. The foreign economy was built primarily on the huge Norwegian merchant fleet, which was again among the biggest in the world, accounting for more than seven percent of total world tonnage. Ninety percent of this floating capital escaped the Germans. The ships were united into one state-controlled company, Nortraship, whose earnings financed the foreign economy. The domestic economy, meanwhile, struggled with a significant fall in production, inflationary pressure and the rationing of important goods, which three million Norwegians had to share with the 400,000 Germans occupying the country.

Economic Planning and Growth, 1945-1973

After the war the challenge was to reconstruct the economy and re-establish political and economic order. The Labor Party, in office from 1935, seized the opportunity to establish a strict social democratic rule, with a growing public sector and widespread centralized economic planning. Norway at first declined the U.S. offer of financial aid after the war, but due to a lack of hard currency it accepted the Marshall aid program. Receiving 400 million dollars between 1948 and 1952, Norway was one of the biggest recipients per capita.

As part of the reconstruction efforts Norway joined the Bretton Woods system, GATT, the IMF and the World Bank. Norway also chose to become a member of NATO and the United Nations, and in 1960 it joined the European Free Trade Association (EFTA). In 1958 Norway made the krone convertible to the U.S. dollar, as many other western countries did with their currencies.

The years from 1950 to 1973 are often called the golden era of the Norwegian economy. GDP per capita grew at an annual rate of 3.3 percent, foreign trade grew even faster, unemployment barely existed and the inflation rate was stable. This has often been attributed to the large public sector and good economic planning; the Nordic model, with its huge public sector, has been called a success in this period. A closer look, however, shows that the Norwegian growth rate in the period was lower than that of most western nations, as was also true of Sweden and Denmark. The Nordic model delivered social security and evenly distributed wealth, but it did not necessarily deliver very high economic growth.

Figure 5
Public Sector as a Percent of GDP, 1900-1990

Source: Hodne and Grytten (2002)

Petroleum Economy and Neoliberalism, 1973 to the Present

After the Bretton Woods system fell apart (between August 1971 and March 1973) and the oil price shock hit in autumn 1973, most developed economies went into a period of prolonged recession and slow growth. In 1969, however, Phillips Petroleum had discovered petroleum resources at the Ekofisk field, defined as part of the Norwegian continental shelf. This enabled Norway to run a countercyclical fiscal policy during the stagflation period of the 1970s, so that economic growth was higher and unemployment lower than in most other western countries. However, since the countercyclical policy focused on subsidies to particular branches and companies, Norwegian firms soon learned to adapt to policy makers rather than to the markets. Hence, neither productivity nor business structure kept pace with changes in international markets.

Norway lost significant competitive power, and large-scale deindustrialization took place despite efforts to save manufacturing industry. Another reason for deindustrialization was the huge growth of the profitable petroleum sector. Persistently high oil prices from autumn 1973 to the end of 1985 pushed labor costs upward through spillover effects from high wages in the petroleum sector. High labor costs made the Norwegian foreign sector less competitive, and Norway deindustrialized at a more rapid pace than most of its largest trading partners. Thanks to the petroleum sector, however, Norway experienced high growth rates in all three of the last decades of the twentieth century, reaching the top of the world GDP per capita list at the dawn of the new millennium. Nevertheless, Norway had economic problems in both the eighties and the nineties.

In 1981 a conservative government replaced Labor, which had been in power for most of the postwar period. Norway had already joined the international wave of credit liberalization, and the new government added fuel to this policy. However, alongside the credit liberalization, parliament still pursued a policy that prevented market forces from setting interest rates. Instead, in contradiction to the liberalization policy, rates were set by politicians. The level of interest rates was an important part of the political game for power, and rates were therefore set significantly below the market level. In consequence, a substantial credit boom developed in the early 1980s and continued until the late spring of 1986. The resulting monetary expansion created an artificial boom and an overheated economy. When oil prices fell dramatically from December 1985 onwards, the trade surplus suddenly turned into a huge deficit (Figure 6).

Figure 6
North Sea Oil Prices and Norway’s Trade Balance, 1975-2000

Source: Statistics Norway

The conservative-center government was forced to tighten fiscal policy, and the new Labor government continued this course from May 1986. Interest rates were kept persistently high as the government now tried to run a credible fixed-currency policy. In the summer of 1990 the Norwegian krone was officially pegged to the ECU. When the international wave of currency speculation reached Norway in autumn 1992, the central bank finally had to suspend the fixed exchange rate and later devalue.

As a consequence of these years of monetary expansion followed by contraction, most western countries experienced financial crises, and the crisis was relatively severe in Norway. House prices slid, consumers could not pay their bills, and bankruptcies and unemployment reached new heights. The state took over most of the larger commercial banks to avoid a total financial collapse.

After the suspension of the ECU peg and the subsequent devaluation, Norway enjoyed growth until 1998, thanks to optimism, an international boom and high petroleum prices. Then the Asian financial crisis rattled the Norwegian stock market, and at the same time petroleum prices fell rapidly, due to internal problems among the OPEC countries. Hence, the krone depreciated, the fixed exchange rate policy had to be abandoned, and the government adopted inflation targeting. Along with the changes in monetary policy, the center coalition government was also able to maintain a tighter fiscal policy, and interest rates were high. As a result, Norway escaped the overheating of 1993-1997 without any devastating effects. Today the country has a strong and sound economy.

The petroleum sector is still very important in Norway. In this respect the historical tradition of raw material dependency has had its renaissance. Unlike many other countries rich in raw materials, natural resources have helped make Norway one of the most prosperous economies in the world. Important factors for Norway’s ability to turn resource abundance into economic prosperity are an educated work force, the adoption of advanced technology used in other leading countries, stable and reliable institutions, and democratic rule.


Basberg, Bjørn L. Handelsflåten i krig: Nortraship: Konkurrent og alliert. Oslo: Grøndahl and Dreyer, 1992.

Bergh, Tore Hanisch, Even Lange and Helge Pharo. Growth and Development. Oslo: NUPI, 1979.

Brautaset, Camilla. “Norwegian Exports, 1830-1865: In Perspective of Historical National Accounts.” Ph.D. dissertation. Norwegian School of Economics and Business Administration, 2002.

Bruland, Kristine. British Technology and European Industrialization. Cambridge: Cambridge University Press, 1989.

Danielsen, Rolf, Ståle Dyrvik, Tore Grønlie, Knut Helle and Edgar Hovland. Norway: A History from the Vikings to Our Own Times. Oslo: Scandinavian University Press, 1995.

Eitrheim, Øyvind, Jan T. Klovland and Jan F. Qvigstad, editors. Historical Monetary Statistics for Norway, 1819-2003. Oslo: Norges Banks skriftserie/Occasional Papers, no. 35, 2004.

Hanisch, Tore Jørgen. “Om virkninger av paripolitikken.” Historisk tidsskrift 58, no. 3 (1979): 223-238.

Hanisch, Tore Jørgen, Espen Søilen and Gunhild Ecklund. Norsk økonomisk politikk i det 20. århundre. Verdivalg i en åpen økonomi. Kristiansand: Høyskoleforlaget, 1999.

Grytten, Ola Honningdal. “A Norwegian Consumer Price Index 1819-1913 in a Scandinavian Perspective.” European Review of Economic History 8, no.1 (2004): 61-79.

Grytten, Ola Honningdal. “A Consumer Price Index for Norway, 1516-2003.” Norges Bank: Occasional Papers, no. 1 (2004a): 47-98.

Grytten, Ola Honningdal. “The Gross Domestic Product for Norway, 1830-2003.” Norges Bank: Occasional Papers, no. 1 (2004b): 241-288.

Hodne, Fritz. An Economic History of Norway, 1815-1970. Tapir: Trondheim, 1975.

Hodne, Fritz. The Norwegian Economy, 1920-1980. London: Croom Helm and St. Martin’s, 1983.

Hodne, Fritz and Ola Honningdal Grytten. Norsk økonomi i det 19. århundre. Bergen: Fagbokforlaget, 2000.

Hodne, Fritz and Ola Honningdal Grytten. Norsk økonomi i det 20. århundre. Bergen: Fagbokforlaget, 2002.

Klovland, Jan Tore. “Monetary Policy and Business Cycles in the Interwar Years: The Scandinavian Experience.” European Review of Economic History 2, no. 2 (1998):

Klovland, Jan Tore. “Monetary Aggregates in Norway, 1819-2003.” Norges Bank: Occasional Papers, no. 1 (2004a): 181-240.

Klovland, Jan Tore. “Historical Exchange Rate Data, 1819-2003”. Norges Bank: Occasional Papers, no. 1 (2004b): 289-328.

Lange, Even, editor. Teknologi i virksomhet. Verkstedsindustri i Norge etter 1840. Oslo: Ad Notam Forlag, 1989.

Nordvik, Helge W. “Finanspolitikken og den offentlige sektors rolle i norsk økonomi i mellomkrigstiden”. Historisk tidsskrift 58, no. 3 (1979): 239-268.

Sejersted, Francis. Demokratisk kapitalisme. Oslo: Universitetsforlaget, 1993.

Søilen, Espen. “Fra frischianisme til keynesianisme? En studie av norsk økonomisk politikk i lys av økonomisk teori, 1945-1980.” Ph.D. dissertation. Bergen: Norwegian School of Economics and Business Administration, 1998.

Citation: Grytten, Ola. “The Economic History of Norway”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL

An Economic History of New Zealand in the Nineteenth and Twentieth Centuries

John Singleton, Victoria University of Wellington, New Zealand

Living standards in New Zealand were among the highest in the world between the late nineteenth century and the 1960s. But New Zealand’s economic growth was very sluggish between 1950 and the early 1990s, and most Western European countries, as well as several in East Asia, overtook New Zealand in terms of real per capita income. By the early 2000s, New Zealand’s GDP per capita was in the bottom half of the developed world.

Table 1:
Per capita GDP in New Zealand
compared with the United States and Australia
(in 1990 international dollars)

Year    US      Australia   New Zealand   NZ as % of US   NZ as % of Australia
1840    1588    1374        400           25              29
1900    4091    4013        4298          105             107
1950    9561    7412        8456          88              114
2000    28129   21540       16010         57              74

Source: Angus Maddison, The World Economy: Historical Statistics. Paris: OECD, 2003, pp. 85-7.

Over the second half of the twentieth century, argue Greasley and Oxley (1999), New Zealand seemed in some respects to have more in common with Latin American countries than with other advanced western nations. In addition to its snail-like growth rate, New Zealand followed highly protectionist economic policies between 1938 and the 1980s. (In absolute terms, however, New Zealanders continued to be much better off than their Latin American counterparts.) Maddison (1991) put New Zealand in a middle-income group of countries that included the former Czechoslovakia, Hungary, Portugal, and Spain.

Origins and Development to 1914

When Europeans (mainly Britons) started to arrive in Aotearoa (New Zealand) in the early nineteenth century, they encountered a tribal society. Maori tribes made a living from agriculture, fishing, and hunting. Internal trade was conducted on the basis of gift exchange. Maori did not hold to the Western concept of exclusive property rights in land. The idea that land could be bought and sold was alien to them. Most early European residents were not permanent settlers. They were short-term male visitors involved in extractive activities such as sealing, whaling, and forestry. They traded with Maori for food, sexual services, and other supplies.

Growing contact between Maori and the British was difficult to manage. In 1840 the British Crown and some Maori signed the Treaty of Waitangi. The treaty, though subject to various interpretations, to some extent regularized the relationship between Maori and Europeans (or Pakeha). At roughly the same time, the first wave of settlers arrived from England to set up colonies including Wellington and Christchurch. Settlers were looking for a better life than they could obtain in overcrowded and class-ridden England. They wished to build a rural and largely self-sufficient society.

For some time, only the Crown was permitted to purchase land from Maori. This land was then either resold or leased to settlers. Many Maori felt – and many still feel – that they were forced to give up land, effectively at gunpoint, in return for a pittance. Perhaps they did not always grasp that land, once sold, was lost forever. Conflict over land led to intermittent warfare between Maori and settlers, especially in the 1860s. There was brutality on both sides, but the Europeans on the whole showed more restraint in New Zealand than in North America, Australia, or Southern Africa.

Maori actually required less land in the nineteenth century because their numbers were falling, possibly by half between the late eighteenth and late nineteenth centuries. By the 1860s, Maori were outnumbered by British settlers. The introduction of European diseases, alcohol, and guns contributed to the decline in population. Increased mobility and contact between tribes may also have spread disease. The Maori population did not begin to recover until the twentieth century.

Gold was discovered in several parts of New Zealand (including Thames and Otago) in the mid-nineteenth century, but the introduction of sheep farming in the 1850s gave a more enduring boost to the economy. Australian and New Zealand wool was in high demand in the textile mills of Yorkshire. Sheep farming necessitated the clearing of native forests and the planting of grasslands, which changed the appearance of large tracts of New Zealand. This work was expensive, and easy access to the London capital market was critical. Economic relations between New Zealand and Britain were strong, and remained so until the 1970s.

Between the mid-1870s and mid-1890s, New Zealand was adversely affected by weak export prices, and in some years there was net emigration. But wool prices recovered in the 1890s, just as new exports, meat and dairy produce, were coming to prominence. Until the advent of refrigeration in the early 1880s, New Zealand did not export meat and dairy produce. After refrigeration was introduced, however, New Zealand foodstuffs found their way onto the dinner tables of working-class families in Britain, though rarely those of the middle and upper classes, who could afford fresh produce.

In comparative terms, the New Zealand economy was in its heyday in the two decades before 1914. New Zealand (though not its Maori shadow, Aotearoa) was a wealthy, dynamic, and egalitarian society. The total population in 1914 was slightly above one million. Exports consisted almost entirely of land-intensive pastoral commodities. Manufactures loomed large in New Zealand’s imports. High labor costs, and the absence of scale economies in the tiny domestic market, hindered industrialization, though there was some processing of export commodities and imports.

War, Depression and Recovery, 1914-38

World War One disrupted agricultural production in Europe and created robust demand for New Zealand’s primary exports. Encouraged by high export prices, New Zealand farmers borrowed and invested heavily between 1914 and 1920. Land changed hands at very high prices. Unfortunately, the early twenties brought the start of a prolonged slump in international commodity markets, and many farmers struggled to service and repay their debts.

The global economic downturn, beginning in 1929-30, was transmitted to New Zealand by the collapse in commodity prices on the London market. Farmers bore the brunt of the depression. At the trough, in 1931-32, net farm income was negative. Declining commodity prices increased the already onerous burden of servicing and repaying farm mortgages. Meat freezing works, woolen mills, and dairy factories were caught in the spiral of decline. Farmers had less to spend in the towns. Unemployment rose, and some of the urban jobless drifted back to the family farm. The burden of external debt, the bulk of which was in sterling, rose dramatically relative to export receipts. But a protracted balance of payments crisis was avoided, since the demand for imports fell sharply in response to the drop in incomes. The depression was not as serious in New Zealand as in many industrial countries. Prices were more flexible in the primary sector and in small business than in modern, capital-intensive industry. Nevertheless, the experience of depression profoundly affected New Zealanders’ attitudes towards the international economy for decades to come.

At first, there was no reason to expect that the downturn in 1929-30 was the prelude to the worst slump in history. As tax and customs revenue fell, the government trimmed expenditure in an attempt to balance the budget. Only in 1931 was the severity of the crisis realized. Further cuts were made in public spending. The government intervened in the labor market, securing an order for an all-round reduction in wages. It pressured and then forced the banks to reduce interest rates. The government sought to maintain confidence and restore prosperity by helping farms and other businesses to lower costs. But these policies did not lead to recovery.

Several factors contributed to the recovery that commenced in 1933-34. The New Zealand pound was devalued by 14 percent against sterling in January 1933. As most exports were sold for sterling, which was then converted into New Zealand pounds, the income of farmers was boosted at a stroke of the pen. Devaluation increased the money supply. Once economic actors, including the banks, were convinced that the devaluation was permanent, there was an increase in confidence and in lending. Other developments played their part. World commodity prices stabilized, and then began to pick up. Pastoral output and productivity continued to rise. The 1932 Ottawa Agreements on imperial trade strengthened New Zealand’s position in the British market at the expense of non-empire competitors such as Argentina, and prefigured an increase in the New Zealand tariff on non-empire manufactures. As was the case elsewhere, the recovery in New Zealand was not the product of a coherent economic strategy. When beneficial policies were adopted it was as much by accident as by design.
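The "stroke of the pen" effect is simple arithmetic. A minimal sketch, assuming the exchange rate moved from roughly NZ£110 to NZ£125 per £100 sterling at the January 1933 devaluation (illustrative figures, not taken from the text):

```python
# Sketch of the devaluation arithmetic: the same sterling export receipts
# convert into more New Zealand pounds after the devaluation. The exchange
# rates (NZ£110, then NZ£125, per £100 sterling) are assumptions for
# illustration.

def to_nz_pounds(sterling, nz_per_100_sterling):
    """Convert sterling export receipts into New Zealand pounds."""
    return sterling * nz_per_100_sterling / 100

receipts = 1000  # a farmer's exports, in pounds sterling

before = to_nz_pounds(receipts, 110)  # NZ£1,100
after = to_nz_pounds(receipts, 125)   # NZ£1,250

boost = (after - before) / before
print(f"Local-currency income up {boost:.1%}")  # about 13.6%
```

The money supply and confidence effects described above then followed from this one-off boost to export incomes.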

Once underway, however, New Zealand’s recovery was comparatively rapid and persisted over the second half of the thirties. A Labour government, elected towards the end of 1935, nationalized the central bank (the Reserve Bank of New Zealand). The government instructed the Reserve Bank to create advances in support of its agricultural marketing and state housing schemes. It became easier to obtain borrowed funds.

An Insulated Economy, 1938-1984

A balance of payments crisis in 1938-39 was met by the introduction of administrative restrictions on imports. Labour had not been prepared to deflate or devalue – the former would have increased unemployment, while the latter would have raised working class living costs. Although intended as a temporary expedient, the direct control of imports became a distinctive feature of New Zealand economic policy until the mid-1980s.

The doctrine of “insulationism” was expounded during the 1940s. Full employment was now the main priority. In the light of disappointing interwar experience, there were doubts about the ability of the pastoral sector to provide sufficient work for New Zealand’s growing population. There was a desire to create more industrial jobs, even though there seemed no prospect of achieving scale economies within such a small country. Uncertainty about export receipts, the need to maintain a high level of domestic demand, and the competitive weakness of the manufacturing sector, appeared to justify the retention of quantitative import controls.

After 1945, many Western countries retained controls over current account transactions for several years. When these controls were relaxed and then abolished in the fifties and early sixties, the anomalous nature of New Zealand’s position became more visible. Although successive governments intended to liberalize, in practice they achieved little, except with respect to trade with Australia.

The collapse of the Korean War commodity boom, in the early 1950s, marked an unfortunate turning point in New Zealand’s economic history. International conditions were unpropitious for the pastoral sector in the second half of the twentieth century. Despite the aspirations of GATT, the United States, Western Europe and Japan restricted agricultural imports, especially of temperate foodstuffs, subsidized their own farmers and, in the case of the Americans and the Europeans, dumped their surpluses in third markets. The British market, which remained open until 1973, when the United Kingdom was absorbed into the EEC, was too small to satisfy New Zealand. Moreover, even the British resorted to agricultural subsidies. Compared with the price of industrial goods, the price of agricultural produce tended to weaken over the long term.

Insulation was a boon to manufacturers, and New Zealand developed a highly diversified industrial structure. But competition was ineffectual, and firms were able to pass cost increases on to the consumer. Import barriers induced many British, American, and Australian multinationals to establish plants in New Zealand. The protected industrial economy did have some benefits. It created jobs – there was full employment until the 1970s – and it increased the stock of technical and managerial skills. But consumers and farmers were deprived of access to cheaper – and often better quality – imported goods. Their interests and welfare were neglected. Competing demand from protected industries also raised the costs of farm inputs, including labor power, and thus reduced the competitiveness of New Zealand’s key export sector.

By the early 1960s, policy makers had realized that New Zealand was falling behind in the race for greater prosperity. The British food market was under threat, as the Macmillan government began a lengthy campaign to enter the protectionist EEC. New Zealand began to look for other economic partners, and the most obvious candidate was Australia. In 1901, New Zealand had declined to join the new federation of Australian colonies. Thus it had been excluded from the Australian common market. After lengthy negotiations, a partial New Zealand-Australia Free Trade Agreement (NAFTA) was signed in 1965. Despite initial misgivings, many New Zealand firms found that they could compete in the Australian market, where tariffs against imports from the rest of the world remained quite high. But this had little bearing on their ability to compete with European, Asian, and North American firms. NAFTA was given renewed impetus by the Closer Economic Relations (CER) agreement of 1983.

Between 1973 and 1984, New Zealand governments were overwhelmed by a group of inter-related economic crises, including two serious supply shocks (the oil crises), rising inflation, and increasing unemployment. Robert Muldoon, the National Party (conservative) prime minister between 1975 and 1984, pursued increasingly erratic macroeconomic policies. He tightened government control over the economy in the early eighties. There were dramatic fluctuations in inflation and in economic growth. In desperation, Muldoon imposed a wage and price freeze in 1982-84. He also mounted a program of large-scale investments, including the expansion of a steel works, and the construction of chemical plants and an oil refinery. By means of these investments, he hoped to reduce the import bill and secure a durable improvement in the balance of payments. But the “Think Big” strategy failed – the projects were inadequately costed, and inherently risky. Although Muldoon’s intention had been to stabilize the economy, his policies had the opposite effect.

Economic Reform, 1984-2000

Muldoon’s policies were discredited, and in 1984 the Labour Party came to power. All other economic strategies having failed, Labour resolved to deregulate and restore the market process, a course that seemed remarkable for a Labour government at the time. Within a week of the election, virtually all controls over interest rates had been abolished. Financial markets were deregulated, and, in March 1985, the New Zealand dollar was floated. Other changes followed, including the sale of public sector trading organizations, the reduction of tariffs and the elimination of import licensing. However, reform of the labor market was not completed until the early 1990s, by which time National (this time without Muldoon or his policies) was back in office.

Once credit was no longer rationed, there was a large increase in private sector borrowing, and a boom in asset prices. Numerous speculative investment and property companies were set up in the mid-eighties. New Zealand’s banks, which were not used to managing risk in a deregulated environment, scrambled to lend to speculators in an effort not to miss out on big profits. Many of these ventures turned sour, especially after the 1987 share market crash. Banks were forced to reduce their lending, to the detriment of sound as well as unsound borrowers.

Tight monetary policy and financial deregulation led to rising interest rates after 1984. The New Zealand dollar appreciated strongly. Farmers bore the initial brunt of high borrowing costs and a rising real exchange rate. Manufactured imports also became more competitive, and many inefficient firms were forced to close. Unemployment rose in the late eighties and early nineties. The early 1990s were marked by an international recession, which was particularly painful in New Zealand, not least because of the high hopes raised by the post-1984 reforms.

An economic recovery began towards the end of 1991. With a brief interlude in 1998, strong growth persisted for the remainder of the decade. Confidence was gradually restored to the business sector. Unemployment began to recede. After a lengthy time lag, the economic reforms seemed to be paying off for the majority of the population.

Large structural changes took place after 1984. Factors of production switched out of the protected manufacturing sector, and were drawn into services. Tourism boomed as the relative cost of international travel fell. The face of the primary sector also changed, and the wine industry began to penetrate world markets. But not all manufacturers struggled. Some firms adapted to the new environment and became more export-oriented. For instance, a small engineering company, Scott Technology, became a world leader in the provision of equipment for the manufacture of refrigerators and washing machines.

Annual inflation was reduced to low single digits by the early nineties. Price stability was locked in through the 1989 Reserve Bank Act. This legislation gave the central bank operational autonomy, while compelling it to focus on the achievement and maintenance of price stability rather than other macroeconomic objectives. The Reserve Bank of New Zealand was the first central bank in the world to adopt a regime of inflation targeting. The 1994 Fiscal Responsibility Act committed governments to sound finance and the reduction of public debt.

By 2000, New Zealand’s population was approaching four million. Overall, the reforms of the eighties and nineties were responsible for creating a more competitive economy. New Zealand’s economic decline relative to the rest of the OECD was halted, though it was not reversed. In the nineties, New Zealand enjoyed faster economic growth than either Germany or Japan, an outcome that would have been inconceivable a few years earlier. But many New Zealanders were not satisfied. In particular, they were galled that their closest neighbor, Australia, was growing even faster. Australia, however, was an inherently much wealthier country with massive mineral deposits.


Several explanations have been offered for New Zealand’s relatively poor economic performance during the twentieth century.

Wool, meat, and dairy produce were the foundations of New Zealand’s prosperity in Victorian and Edwardian times. After 1920, however, international market conditions were generally unfavorable to pastoral exports. New Zealand had the wrong comparative advantage to enjoy rapid growth in the twentieth century.

Attempts to diversify were only partially successful. High labor costs and the small size of the domestic market hindered the efficient production of standardized labor-intensive goods (e.g. garments) and standardized capital-intensive goods (e.g. autos). New Zealand might have specialized in customized and skill-intensive manufactures, but the policy environment was not conducive to the promotion of excellence in niche markets. Between 1938 and the 1980s, Latin American-style trade policies fostered the growth of a ramshackle manufacturing sector. Only in the late eighties did New Zealand decisively reject this regime.

Geographical and geological factors also worked to New Zealand’s disadvantage. Australia drew ahead of New Zealand in the 1960s, following the discovery of large mineral deposits for which there was a big market in Japan. Staple theory suggests that developing countries may industrialize successfully by processing their own primary products, instead of by exporting them in a raw state. Canada had coal and minerals, and became a significant industrial power. But New Zealand’s staples of wool, meat and dairy produce offered limited downstream potential.

Canada also took advantage of its proximity to the U.S. market, and access to U.S. capital and technology. American-style institutions in the labor market, business, education and government became popular in Canada. New Zealand and Australia relied on, arguably inferior, British-style institutions. New Zealand was a long way from the world’s economic powerhouses, and it was difficult for its firms to establish and maintain contact with potential customers and collaborators in Europe, North America, or Asia.

Clearly, New Zealand’s problems were not all of its own making. The elimination of agricultural protectionism in the northern hemisphere would have given a huge boost to the New Zealand economy. On the other hand, in the period between the late 1930s and the mid-1980s, New Zealand followed inward-looking economic policies that hindered economic efficiency and flexibility.



Citation: Singleton, John. “New Zealand in the Nineteenth and Twentieth Centuries.” EH.Net Encyclopedia, edited by Robert Whaples. February 10, 2008.

Monetary Unions

Benjamin J. Cohen, University of California at Santa Barbara

Monetary tradition has long assumed that, in principle, each sovereign state issues and manages its own exclusive currency. In practice, however, there have always been exceptions — countries that elected to join together in a monetary union of some kind. Not all monetary unions have stood the test of time; in fact, many past initiatives have long since passed into history. Yet interest in monetary union persists, stimulated in particular by the example of the European Union’s Economic and Monetary Union (EMU), which has replaced a diversity of national monies with one joint currency called the euro. Today, the possibility of monetary union is actively discussed in many parts of the world.

A monetary union may be defined as a group of two or more states sharing a common currency or equivalent. Although some sources extend the definition to include the monetary regimes of national federations such as the United States or of imperial agglomerations such as the old Austro-Hungarian Empire, the conventional practice is to limit the term to agreements among units that are recognized as fully sovereign states under international law. The antithesis of a monetary union, of course, is a national currency with an independent central bank and a floating exchange rate.

In the strictest sense of the term, monetary union means complete abandonment of separate national currencies and full centralization of monetary authority in a single joint institution. In reality, considerable leeway exists for variations of design along two key dimensions. These dimensions are institutional provisions for (1) the issuing of currency and (2) the management of decisions. Currencies may continue to be issued by individual governments, tied together in an exchange-rate union. Alternatively, currencies may be replaced not by a joint currency but rather by the money of a larger partner — an arrangement generically labeled dollarization after the United States dollar, the money that is most widely used for this purpose. Similarly, monetary authority may continue to be exercised in some degree by individual governments or, alternatively, may be delegated not to a joint institution but rather to a single partner such as the United States.

In political terms, monetary unions divide into two categories, depending on whether national monetary sovereignty is shared or surrendered. Unions based on a joint currency or an exchange-rate union in effect pool monetary authority to some degree. They are a form of partnership or alliance of nominal equals. Unions created by dollarization are more hierarchical, a subordinate follower-leader type of regime.

The greatest attraction of a monetary union is that it reduces transactions costs as compared with a collection of separate national currencies. With a single money or equivalent, there is no need to incur the expense of currency conversion or hedging against exchange risk in transactions among the partners. But there are also two major economic disadvantages for governments to consider.

First, individual partners lose control of both the money supply and exchange rate as policy instruments to cope with domestic or external disturbances. Against a monetary union’s efficiency gains at the microeconomic level, governments must compare the cost of sacrificing autonomy of monetary policy at the macroeconomic level.

Second, individual partners lose the capacity derived from an exclusive national currency to augment public spending at will via money creation — a privilege known as seigniorage. Technically defined as the excess of the nominal value of a currency over its cost of production, seigniorage can be understood as an alternative source of revenue for the state beyond what can be raised by taxes or by borrowing from financial markets. Sacrifice of the seigniorage privilege must also be compared against a monetary union’s efficiency gains.
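The definition above reduces to a one-line calculation. A minimal sketch, with invented figures (the face value and production cost are assumptions, not data from the text):

```python
# Seigniorage as defined above: the excess of a currency's nominal value
# over its cost of production. All figures are invented for illustration.

def seigniorage(nominal_value, production_cost):
    return nominal_value - production_cost

# A note with a face value of 100 that costs 2 to print and distribute
# yields 98 in revenue to the issuing state.
print(seigniorage(100, 2))  # 98
```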

The seriousness of these two losses will depend on the type of monetary union adopted. In an alliance-type union, where authority is not surrendered but pooled, monetary control is delegated to the union’s joint institution, to be shared and in some manner collectively managed by all the countries involved. Hence each partner’s loss of exclusive control is offset, in part, by a share of control over all the others. Though individual states may no longer have much latitude to act unilaterally, each government retains a voice in decision-making for the group as a whole. Losses will be greater with dollarization, which by definition transfers all monetary authority to the dominant power. Some measure of seigniorage may be retained by subordinate partners, but only with the consent of the leader.

The idea of monetary union among sovereign states was widely promoted in the nineteenth century, mainly in Europe, despite the fact that most national currencies were already tied together closely by the fixed exchange rates of the classical gold standard. Further efficiency gains could be realized, proponents argued, while little would be lost at a time when activist monetary policy was still unknown.

“Universal Currency” Fails, 1867

Most ambitious was a projected “universal currency” to be based on equivalent gold coins issued by the three biggest financial powers of the day: Britain, France, and the United States. As it happened, the gold content of French coins at the time was such that a 25-franc piece — not then in existence but easily mintable — would have contained 112.008 grains of gold, very close to both the English sovereign (containing 113.001 grains) and the American half-eagle, equal to five dollars (containing 116.1 grains). Why not, then, seek some sort of standardization of coinage among the three countries to achieve the equivalent of one single money? That was the proposal of a major monetary conference sponsored by the French Government to coincide with an international exposition in Paris in 1867. Delegates from some 20 countries, with the critical exception of Britain’s representatives, enthusiastically supported creation of a universal currency based on a 25-franc piece and called for appropriate reductions in the gold content of the sovereign and half-eagle. In the end, however, no action was taken by either London or Washington, and for lack of sustained political support the idea ultimately faded away.
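The near-equivalence of the three coins can be checked from the grain figures given in the text. The sketch below computes the reduction in gold content the sovereign and half-eagle would have needed in order to match the proposed 25-franc piece:

```python
# Gold content, in grains, from the figures in the text.
coins = {
    "25-franc piece (proposed)": 112.008,
    "English sovereign": 113.001,
    "American half-eagle": 116.1,
}
target = coins["25-franc piece (proposed)"]

for name, grains in coins.items():
    cut = (grains - target) / grains * 100
    print(f"{name}: {cut:.2f}% reduction needed")
# The sovereign was less than 1% away from the proposed standard;
# the half-eagle about 3.5%.
```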

Latin Monetary Union

Two years before the 1867 conference, however, the French Government did succeed in gaining agreement for a more limited initiative — the Latin Monetary Union (LMU). Joining Belgium, Italy, and Switzerland together with France, the LMU was intended to standardize the existing gold and silver coinages of all four countries. Greece subsequently adhered to the terms of the LMU in 1868, though not becoming a formal member until 1876. In practical terms, a monetary partnership among these countries had already begun to coalesce even earlier as a result of independent decisions by Belgium, Greece, Italy, and Switzerland to model their currency systems on that of France. Each state chose to adopt a basic unit equal in value to the French franc — actually called a franc in Belgium and Switzerland — with equivalent subsidiary units defined according to the French-inspired decimal system. Starting in the 1850s, however, serious Gresham’s Law type problems developed as a result of differences in the weight and fineness of silver coins circulating in each country. The LMU established uniform standards for national coinages and, by making each member’s money legal tender throughout the Union, effectively created a wider area for the circulation of a harmonized supply of specie coins. In substance a formal exchange-rate union was created, with the authority for management of participating currencies remaining with each separate government.

As a group, members were distinguished from other countries by the reciprocal obligation of their central banks to accept one another’s coins at par and without limit. Soon after its founding, however, beginning in the late 1860s, the LMU was subjected to considerable strain owing to a global glut of silver production. The resulting depreciation of silver eventually led to a suspension of silver coinage by all the partners, effectively transforming the LMU from a bimetallic standard into what came to be called a “limping gold standard.” Even so, the arrangement managed to hold together until the generalized breakdown of global monetary relations during World War I. The LMU was not formally dissolved until 1927, following Switzerland’s decision to withdraw during the previous year.

Scandinavian Monetary Union

A similar arrangement also emerged in northern Europe — the Scandinavian Monetary Union (SMU), formed in 1873 by Sweden and Denmark and joined two years later by Norway. The Scandinavian Monetary Union too was an exchange-rate union designed to standardize existing coinages, although unlike the LMU the SMU was based from the start on a monometallic gold standard. The Union established the krone (crown) as a uniform unit of account, with national currencies permitted full circulation as legal tender in all three countries. As in the LMU, members of the SMU were distinguished from outsiders by the reciprocal obligation to accept one another’s currencies at par and without limit; also as in the LMU, mutual acceptability was initially limited to gold and silver coins only. In 1885, however, the three members went further, agreeing to accept one another’s bank notes and drafts as well, thus facilitating free intercirculation of all paper currency and resulting eventually in the total disappearance of exchange-rate quotations among the three moneys. By the turn of the century the SMU had come to function, in effect, as a single unit for all payments purposes, until relations were disrupted by the suspension of convertibility and floating of individual currencies at the start of World War I. Despite subsequent efforts during and after the war to restore at least some elements of the Union, particularly following the members’ return to the gold standard in the mid-1920s, the agreement was finally abandoned following the global financial crisis of 1931.

German Monetary Union

Repeated efforts to standardize coinages were made as well by various German states prior to Germany’s political union, but with rather less success. Early accords, following the start of the Zollverein (the German region’s customs union) in 1834, ostensibly established a German Monetary Union — technically, like the LMU and SMU, also an exchange-rate union — but in fact divided the area into two quite distinct currency alliances: one encompassing most northern states, using the thaler as its basic monetary unit; and a second including states in the south, based on the florin (also known as the guilder or gulden). Free intercirculation of coins was guaranteed in both groups but not at par: the exchange rate between the two units of account was fixed at one thaler for 1.75 florins (formally, 14:24.5) rather than one-for-one. Moreover, states remained free to mint non-standardized coins in addition to their basic units, and many important German states (e.g., Bremen, Hamburg, and Schleswig-Holstein) chose to stay outside the agreement altogether. Nor were matters helped much by the short-lived Vienna Coinage Treaty signed with Austria in 1857, which added yet a third currency, Austria’s own florin, to the mix with a value slightly higher than that of the south German unit. The Austro-German Monetary Union was dissolved less than a decade later, following Austria’s defeat in the 1866 Austro-Prussian War. A full merger of all the currencies of the German states did not finally arrive until after consolidation of modern Germany, under Prussian leadership, in 1871.

Belgium-Luxembourg Economic Union

The only truly successful monetary union in Europe prior to EMU came in 1922 with the birth of the Belgium-Luxembourg Economic Union (BLEU), which remained in force for more than seven decades until blended into EMU in 1999. Following severance of its traditional ties with the German Zollverein after World War I, Luxembourg elected to link itself commercially and financially with Belgium, agreeing to a comprehensive economic union including a merger of their separate money systems. Reflecting the partners’ considerable disparity of size (Belgium’s population is roughly thirty times Luxembourg’s), Belgian francs under BLEU formed the largest part of the money stock of Luxembourg as well as Belgium, and alone enjoyed full status as legal tender in both countries. Only Belgium, moreover, had a full-scale central bank. The Luxembourg franc was issued by a more modest institution, the Luxembourg Monetary Institute, was limited in supply, and served as legal tender just within Luxembourg itself. Despite the existence of formal joint decision-making bodies, Luxembourg in effect existed largely as an appendage of the Belgian monetary system until both nations joined their EU partners in creating the euro.

Monetary Disintegration

Europe in the twentieth century has also seen the disintegration of several monetary unions, usually as a by-product of political dissent or dissolution. A celebrated instance occurred after World War I when the Austro-Hungarian Empire was dismembered by the Treaty of Versailles. Almost immediately, in an abrupt and quite chaotic manner, new currencies were introduced by each successor state — including Czechoslovakia, Hungary, Yugoslavia, and ultimately even shrunken Austria itself — to replace the old imperial Austrian crown. Comparable examples have also been provided more recently, after the end of the Cold War, following fragmentation along ethnic lines of both the Czechoslovak and Yugoslav federations. Most spectacular was the collapse of the former ruble zone following the break-up of the seven-decade-old Soviet Union in late 1991. Out of the rubble of the ruble no fewer than a dozen new currencies emerged to take their place on the world stage.

Outside Europe, the idea of monetary union was promoted mainly in the context of colonial or other dependency relationships, including both alliance-type and dollarization arrangements. Though most imperial regimes were quickly abandoned in favor of newly created national currencies once decolonization began after World War II, a few have survived in modified form to the present day.

British Colonies

Alliance-type arrangements emerged in the colonial domains of both Britain and France, the two biggest imperial powers of the nineteenth century. First to act were the British, who after some experimentation succeeded in creating a series of common currency zones, each closely tied to the pound sterling through the mechanism of a currency board. With a currency board, exchange rates were firmly pegged to the pound and full sterling backing was required for any new issue of the colonial money. Joint currencies were created first in West Africa (1912) and East Africa (1919) and later for British possessions in Southeast Asia (1938) and the Caribbean (1950). In southern Africa, an equivalent zone was established during the 1920s based on the South African pound (later the rand), which became the sole legal tender in three of Britain’s nearby possessions, Bechuanaland (later Botswana), Basutoland (later Lesotho), and Swaziland, as well as in South West Africa (later Namibia), a former German colony administered by South Africa under a League of Nations mandate. Of Britain’s various arrangements, only two still exist in some form.

East Caribbean

One is in the Caribbean, where Britain’s monetary legacy has proved remarkably durable. The British Caribbean Currency Board evolved first into the Eastern Caribbean Currency Authority in 1965 and then the Eastern Caribbean Central Bank in 1983, issuing one currency, the Eastern Caribbean dollar, to serve as legal tender for all participants. Included in the Eastern Caribbean Currency Union (ECCU) are the six independent microstates of Antigua and Barbuda, Dominica, Grenada, St. Kitts and Nevis, St. Lucia, and St. Vincent and the Grenadines, plus two islands that are still British dependencies, Anguilla and Montserrat. Embedded in a broadening network of other related agreements among the same governments (the Eastern Caribbean Common Market, the Organization of Eastern Caribbean States), the ECCU has functioned without serious difficulty since its formal establishment in 1965.

Southern Africa

The other is in southern Africa, where previous links have been progressively formalized, first in 1974 as the Rand Monetary Area, later in 1986 under the label Common Monetary Area (CMA), though, significantly, without the participation of diamond-rich Botswana, which has preferred to rely on its own national money. The CMA started as a monetary union tightly based on the rand, a local form of dollarization reflecting South Africa’s economic dominance of the region. But with the passage of time the degree of hierarchy has diminished considerably, as the three remaining junior partners have asserted their growing sense of national identity. Especially since the 1970s, the arrangement has been transformed into a looser exchange-rate union as each of South Africa’s partners introduced its own distinct national currency. One of them, Swaziland, has even gone so far as to withdraw the rand’s legal-tender status within its own borders. Moreover, though all three continue to peg their moneys to the rand at par, they are no longer bound by currency board-like provisions on money creation and may now in principle vary their exchange rates at will.

Africa’s CFA Franc Zone

In the French Empire monetary union did not arrive until 1945, when the newly restored government in Paris decided to consolidate the diverse currencies of its many African dependencies into one money, le franc des Colonies Françaises d’Afrique (CFA francs). Subsequently, in the early 1960s, as independence came to France’s African domains, the old colonial franc was replaced by two new regional currencies, each cleverly named to preserve the CFA franc appellation: for the eight present members of the West African Monetary Union, le franc de la Communauté Financière de l’Afrique, issued by the Central Bank of West African States; and for the six members of the Central African Monetary Area, le franc de la Coopération Financière Africaine, issued by the Bank of Central African States. Together the two groups comprise the Communauté Financière Africaine (African Financial Community). Though each of the two currencies is legal tender only within its own region, the two are equivalently defined and have always been jointly managed under the aegis of the French Ministry of Finance as integral parts of a single monetary union, popularly known as the CFA Franc Zone.

Elsewhere imperial powers preferred some version of a dollarization-type regime, promoting use of their own currencies in colonial possessions to reinforce dependency relationships — though few of these hierarchical arrangements survived the arrival of decolonization. The only major exceptions are to be found among smaller countries with special ties to the United States. Most prominently, these include Panama and Liberia, two states that owe their very existence to U.S. initiatives. Immediately after gaining its independence in 1903 with help from Washington, Panama adopted America’s greenback as its national currency in lieu of a money of its own. In similar fashion during World War II, Liberia — a nation founded by former American slaves — made the dollar its sole legal tender, replacing the British West African colonial coinage that had previously dominated the local money supply. Other long-time dollarizers include the Marshall Islands, Micronesia, and Palau, Pacific Ocean microstates that were all once administered by the United States under United Nations trusteeships. Most recently, the dollar replaced failed local currencies in Ecuador in 2000 and in El Salvador in 2001 and was adopted by East Timor when that state gained its independence in 2002.

Europe’s Monetary Union

The most dramatic episode in the history of monetary unions is of course EMU, in many ways a unique undertaking — a group of fully independent states, all partners in the European Union, that have voluntarily agreed to replace existing national currencies with one newly created money, the euro. The euro was first introduced in 1999 in electronic form (a “virtual” currency), with notes and coins following in 2002. Moreover, even while retaining political sovereignty, member governments have formally delegated all monetary sovereignty to a single joint authority, the European Central Bank. These are not former overseas dependencies like the members of ECCU or the CFA Franc Zone, inheriting arrangements that had originated in colonial times; nor are they small fragile economies like Ecuador or El Salvador, surrendering monetary sovereignty to an already proven and popular currency like the dollar. Rather, these are established states of long standing and include some of the biggest national economies in the world, engaged in a gigantic experiment of unprecedented proportions. Not surprisingly, therefore, EMU has stimulated growing interest in monetary union in many parts of the world. Despite the failure of many past initiatives, the future could see yet more joint currency ventures among sovereign states.

Citation: Cohen, Benjamin. “Monetary Unions”. EH.Net Encyclopedia, edited by Robert Whaples. February 10, 2008. URL

A Brief Economic History of Modern Israel

Nadav Halevi, Hebrew University

The Pre-state Background

The history of modern Israel begins in the 1880s, when the first Zionist immigrants came to Palestine, then under Ottoman rule, to join the small existing Jewish community, establishing agricultural settlements and some industry, restoring Hebrew as the spoken national language, and creating new economic and social institutions. The ravages of World War I reduced the Jewish population by a third, to 56,000, about what it had been at the beginning of the century.

As a result of the war, Palestine came under the control of Great Britain, whose Balfour Declaration had called for a Jewish National Home in Palestine. Britain’s control was formalized in 1920, when it was given the Mandate for Palestine by the League of Nations. During the Mandatory period, which lasted until May 1948, the social, political and economic structure for the future state of Israel was developed. Though the government of Palestine had a single economic policy, the Jewish and Arab economies developed separately, with relatively little connection.

Two factors were instrumental in fostering rapid economic growth of the Jewish sector: immigration and capital inflows. The Jewish population increased mainly through immigration; by the end of 1947 it had reached 630,000, about 35 percent of the total population. Immigrants came in waves, particularly large in the mid 1920s and mid 1930s. They consisted of ideological Zionists and of economic and political refugees from Central and Eastern Europe. Capital inflows included public funds collected by Zionist institutions, but consisted for the most part of private funds. National product grew rapidly during periods of large immigration, but both waves of mass immigration were followed by recessions, periods of adjustment and consolidation.

In the period from 1922 to 1947 real net domestic product (NDP) of the Jewish sector grew at an average annual rate of 13.2 percent, and in 1947 accounted for 54 percent of the NDP of the Jewish and Arab economies together. NDP per capita in the Jewish sector grew at an annual rate of 4.8 percent; by the end of the period it was 8.5 times larger than in 1922, and 2.5 times larger than in the Arab sector (Metzer, 1998). Though agricultural development – an ideological objective – was substantial, this sector never accounted for more than 15 percent of total net domestic product of the Jewish economy. Manufacturing grew slowly for most of the period, but very rapidly during World War II, when Palestine was cut off from foreign competition and was a major provider to the British armed forces in the Middle East. By the end of the period, manufacturing accounted for a quarter of NDP. Housing construction, though a smaller component of NDP, was the most volatile sector, and contributed to sharp business cycle movements. A salient feature of the Jewish economy during the Mandatory period, which carried over into later periods, was the dominant size of the services sector – more than half of total NDP. This included a relatively modern educational and health sector, efficient financial and business sectors, and semi-governmental Jewish institutions, which later were ready to take on governmental duties.

The Formative Years: 1948-1965

The state of Israel came into being in mid-May 1948, in the midst of a war with its Arab neighbors. The immediate economic problems were formidable: to finance and wage a war, to take in as many immigrants as possible (first the refugees kept in camps in Europe and on Cyprus), to provide basic commodities to the old and new population, and to create a government bureaucracy to cope with all these challenges. The creation of a government went relatively smoothly, as semi-governmental Jewish institutions which had developed during the Mandatory period now became government departments.

Cease-fire agreements were signed during 1949. By the end of that year a total of 340,000 immigrants had arrived, and by the end of 1951 an additional 345,000 (the latter including immigrants from Arab countries), thus doubling the Jewish population. Immediate needs were met by a strict austerity program and inflationary government finance, repressed by price controls and rationing of basic commodities. However, the problems of providing housing and employment for the new population were solved only gradually. A New Economic Policy was introduced in early 1952. It consisted of exchange rate devaluation, the gradual relaxation of price controls and rationing, and curbing of monetary expansion, primarily by budgetary restraint. Active immigration encouragement was curtailed, to await the absorption of the earlier mass immigration.

From 1950 until 1965, Israel achieved a high rate of growth: Real GNP (gross national product) grew by an average annual rate of over 11 percent, and per capita GNP by more than 6 percent. What made this possible? Israel was fortunate in receiving large sums of capital inflows: U.S. aid in the forms of unilateral transfers and loans, German reparations and restitutions to individuals, sale of State of Israel Bonds abroad, and unilateral transfers to public institutions, mainly the Jewish Agency, which retained responsibility for immigration absorption and agricultural settlement. Thus, Israel had resources available for domestic use – for public and private consumption and investment – about 25 percent more than its own GNP. This made possible a massive investment program, mainly financed through a special government budget. Both the enormity of needs and the socialist philosophy of the main political party in the government coalitions led to extreme government intervention in the economy.

Government budgets and strong protectionist measures to foster import substitution enabled the development of new industries, chief among them textiles, while subsidies encouraged the development of exports beyond the traditional citrus products and cut diamonds.

During the four decades from the mid 1960s until the present, Israel’s economy developed and changed, as did economic policy. A major factor affecting these developments has been the Arab-Israeli conflict. Its influence is discussed first, followed by brief descriptions of economic growth and fluctuations, and of the evolution of economic policy.

The Arab-Israel Conflict

The most dramatic event of the 1960s was the Six Day War of 1967, at the end of which Israel controlled the West Bank (of the Jordan River) – the area of Palestine absorbed by Jordan in 1949 – and the Gaza Strip, controlled until then by Egypt.

As a consequence of the occupation of these territories Israel was responsible for the economic as well as the political life in the areas taken over. The Arab sections of Jerusalem were united with the Jewish section. Jewish settlements were established in parts of the occupied territories. As hostilities intensified, special investments in infrastructure were made to protect Jewish settlers. The allocation of resources to Jewish settlements in the occupied territories has been a political and economic issue ever since.

The economies of Israel and the occupied territories were partially integrated. Trade in goods and services developed, with restrictions placed on exports to Israel of products deemed too competitive, and Palestinian workers were employed in Israel, particularly in construction and agriculture. At its peak, in 1996, Palestinian employment in Israel reached 115,000 to 120,000, about 40 percent of the Palestinian labor force, but never more than 6.5 percent of total Israeli employment. Thus, while employment in Israel was a major contributor to the economy of the Palestinians, its effects on the Israeli economy, except for the sectors of construction and agriculture, were not large.

The Palestinian economy developed rapidly – real per capita national income grew at an annual rate of close to 20 percent in 1969-1972 and 5 percent in 1973-1980 – but fluctuated widely thereafter, and actually decreased in times of hostilities. Palestinian per capita income equaled 10.2 percent of Israeli per capita income in 1968, 22.8 percent in 1986, and declined to 9.7 percent in 1998 (Kleiman, 2003).

As part of the peace process between Israel and the Palestinians initiated in the 1990s, an economic agreement was signed between the parties in 1994, which in effect transformed what had been essentially a one-sided customs agreement (which gave Israel full freedom to export to the Territories but put restrictions on Palestinian exports to Israel) into a more equal customs union: the uniform external trade policy was actually Israel’s, but the Palestinians were given limited sovereignty regarding imports of certain commodities.

Arab uprisings (intifadas), in the 1980s, and especially the more violent one beginning in 2000 and continuing into 2005, led to severe Israeli restrictions on interaction between the two economies, particularly employment of Palestinians in Israel, and even to military reoccupation of some areas given over earlier to Palestinian control. These measures set the Palestinian economy back many years, wiping out much of the gains in income which had been achieved since 1967 – per capita GNP in 2004 was $932, compared to about $1500 in 1999. Palestinian workers in Israel were replaced by foreign workers.

An important economic implication of the Arab-Israel conflict is that Israel must allocate a major part of its budget to defense. The size of the defense budget has varied, rising during wars and armed hostilities. The total defense burden (including expenses not in the budget) reached its maximum relative size during and after the Yom Kippur War of 1973, close to 30 percent of GNP in 1974-1978. In the 2000-2004 period, the defense budget alone reached about 22 to 25 percent of GDP. Israel has been fortunate in receiving generous amounts of U.S. aid. Until 1972 most of this came in the form of grants and loans, primarily for purchases of U.S. agricultural surpluses. But since 1973 U.S. aid has been closely connected to Israel’s defense needs. During 1973-1982 annual loans and grants averaged $1.9 billion, and covered some 60 percent of total defense imports. But even in more tranquil periods, the defense burden, exclusive of U.S. aid, has been much larger than usual in industrial countries during peace time.

Growth and Economic Fluctuations

The high rates of growth of income and income per capita which characterized Israel until 1973 were not achieved thereafter. GDP growth fluctuated, generally between 2 and 5 percent, reaching as high as 7.5 percent in 2000, but falling below zero in the recession years from 2001 to mid 2003. By the end of the twentieth century income per capita reached about $20,000, similar to many of the more developed industrialized countries.

Economic fluctuations in Israel have usually been associated with waves of immigration: a large flow of immigrants which abruptly increases the population requires an adjustment period until it is absorbed productively, with the investments for its absorption in employment and housing stimulating economic activity. Immigration never again reached the relative size of the first years after statehood, but again gained importance with the loosening of restrictions on emigration from the Soviet Union. The total number of immigrants in 1972-1982 was 325,000, and after the collapse of the Soviet Union immigration totaled 1,050,000 in 1990-1999, mostly from the former Soviet Union. Unlike the earlier period, these immigrants were gradually absorbed in productive employment (though often not in the same activity as abroad) without resort to make-work projects. By the end of the century the population of Israel passed 6,300,000, with the Jewish population being 78 percent of the total. The immigrants from the former Soviet Union were equal to about one-fifth of the Jewish population, and were a significant and important addition of human capital to the labor force.

As the economy developed, the structure of output changed. Though the service sectors are still relatively large – trade and services contributing 46 percent of the business sector’s product – agriculture has declined in importance, and industry makes up over a quarter of the total. The structure of manufacturing has also changed: both in total production and in exports the share of traditional, low-tech industries has declined, with sophisticated, high-tech products, particularly electronics, achieving primary importance.

Fluctuations in output were marked by periods of inflation and periods of unemployment. After a change in exchange rate policy in the late 1970s (discussed below), an inflationary spiral was unleashed. Hyperinflation rates were reached in the early 1980s, about 400 percent per year by the time a drastic stabilization policy was imposed in 1985. Exchange rate stabilization, budgetary and monetary restraint, and wage and price freezes sharply reduced the rate of inflation to less than 20 percent, and then to about 16 percent in the late 1980s. Very drastic monetary policy, from the late 1990s, finally reduced the inflation to zero by 2005. However, this policy, combined with external factors such as the bursting of the high-tech bubble, recession abroad, and domestic insecurity resulting from the intifada, led to unemployment levels above 10 percent at the beginning of the new century. The economic improvements since the latter half of 2003 have, as yet (February 2005), not significantly reduced the level of unemployment.

Policy Changes

The Israeli economy was initially subject to extensive government controls. Only gradually was the economy converted into a fairly free (though still not completely so) market economy. This process began in the 1960s. In response to a realization by policy makers that government intervention in the economy was excessive, and to the challenge posed by the creation in Europe of a customs union (which gradually progressed into the present European Union), Israel embarked upon a very gradual process of economic liberalization. This appeared first in foreign trade: quantitative restrictions on imports were replaced by tariff protection, which was slowly reduced, and both import-substitution and exports were encouraged by more realistic exchange rates rather than by protection and subsidies. Several partial trade agreements with the European Economic Community (EEC), starting in 1964, culminated in a free trade area agreement (FTA) in industrial goods in 1975, and an FTA agreement with the U.S. came into force in 1985.

By late 1977 a considerable degree of trade liberalization had taken place. In October of that year, Israel moved from a fixed exchange rate system to a floating rate system, and restrictions on capital movements were considerably liberalized. However, there followed a disastrous inflationary spiral which curbed the capital liberalization process. Capital flows were not completely liberalized until the beginning of the new century.

Throughout the 1980s and the 1990s there were additional liberalization measures: in monetary policy, in domestic capital markets, and in various instruments of governmental interference in economic activity. The role of government in the economy was considerably decreased. On the other hand, some governmental economic functions were increased: a national health insurance system was introduced, though private health providers continued to provide health services within the national system. Social welfare payments, such as unemployment benefits, child allowances, old age pensions and minimum income support, were expanded continuously, until they formed a major budgetary expenditure. These transfer payments compensated, to a large extent, for the continuous growth of income inequality, which had moved Israel from among the developed countries with the least income inequality to those with the most. By 2003, 15 percent of the government’s budget went to health services, 15 percent to education, and an additional 20 percent were transfer payments through the National Insurance Agency.

Beginning in 2003, the Ministry of Finance embarked upon a major effort to decrease welfare payments, induce greater participation in the labor force, privatize enterprises still owned by government, and reduce both the relative size of the government deficit and the government sector itself. These activities are the result of an ideological acceptance by the present policy makers of the concept that a truly free market economy is needed to fit into and compete in the modern world of globalization.

An important economic institution is the Histadrut, a federation of labor unions. What had made this institution unique is that, in addition to normal labor union functions, it encompassed agricultural and other cooperatives, major construction and industrial enterprises, and social welfare institutions, including the main health care provider. During the Mandatory period, and for many years thereafter, the Histadrut was an important factor in economic development and in influencing economic policy. During the 1990s, the Histadrut was divested of many of its non-union activities, and its influence in the economy has greatly declined. The major unions associated with it still have much say in wage and employment issues.

The Challenges Ahead

As it moves into the new century, the Israeli economy has proven to be prosperous, as it continuously introduces and applies economic innovation, and to be capable of dealing with economic fluctuations. However, it faces some serious challenges. Some of these are the same as those faced by most industrial economies: how to reconcile innovation (the switch from traditional activities that are no longer competitive to more sophisticated, skill-intensive products) with the dislocation of labor it involves and the income inequality it intensifies. Like other small economies, Israel has to see how it fits into the new global economy, marked by the two major markets of the EU and the U.S., and the emergence of China as a major economic factor.

Special issues relate to the relations of Israel with its Arab neighbors. First are the financial implications of continuous hostilities and military threats. Clearly, if peace can come to the region, resources can be transferred to more productive uses. Furthermore, foreign investment, so important for Israel’s future growth, is very responsive to political security. Other issues depend on the type of relations established: will there be the free movement of goods and workers between Israel and a Palestinian state? Will relatively free economic relations with other Arab countries lead to a greater integration of Israel in the immediate region, or, as is more likely, will Israel’s trade orientation continue to be directed mainly to the present major industrial countries? If the latter proves true, Israel will have to carefully maneuver between the two giants: the U.S. and the EU.

References and Recommended Reading

Ben-Bassat, Avi, editor. The Israeli Economy, 1985-1998: From Government Intervention to Market Economics. Cambridge, MA: MIT Press, 2002.

Ben-Porath, Yoram, editor. The Israeli Economy: Maturing through Crisis. Cambridge, MA: Harvard University Press, 1986.

Fischer, Stanley, Dani Rodrik and Elias Tuma, editors. The Economics of Middle East Peace. Cambridge, MA: MIT Press, 1993.

Halevi, Nadav and Ruth Klinov-Malul. The Economic Development of Israel. New York: Praeger, 1968.

Kleiman, Ephraim. “Palestinian Economic Viability and Vulnerability.” Paper presented at the UCLA Burkle Conference in Athens, August 2003. (Available at

Metz, Helen Chapin, editor. Israel: A Country Study. Washington: Library of Congress Country Studies, 1986.

Metzer, Jacob. The Divided Economy of Mandatory Palestine. Cambridge: Cambridge University Press, 1998.

Patinkin, Don. The Israel Economy: The First Decade. Jerusalem: Maurice Falk Institute for Economic Research in Israel, 1967.

Razin, Assaf and Efraim Sadka. The Economy of Modern Israel: Malaise and Promise. London: Chicago University Press, 1993.

World Bank. Developing the Occupied Territories: An Investment in Peace. Washington D.C.: The World Bank, September, 1993.

Citation: Halevi, Nadav. “A Brief Economic History of Modern Israel”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL

Life Insurance in the United States through World War I

Sharon Ann Murphy

The first American life insurance enterprises can be traced back to the late colonial period. The Presbyterian Synods in Philadelphia and New York set up the Corporation for Relief of Poor and Distressed Widows and Children of Presbyterian Ministers in 1759; the Episcopalian ministers organized a similar fund in 1769. In the half century from 1787 to 1837, twenty-six companies offering life insurance to the general public opened their doors, but they rarely survived more than a couple of years and sold few policies [Figures 1 and 2]. The only early companies to experience much success in this line of business were the Pennsylvania Company for Insurances on Lives and Granting Annuities (chartered 1812), the Massachusetts Hospital Life Insurance Company (1818), the Baltimore Life Insurance Company (1830), the New York Life Insurance and Trust Company (1830), and the Girard Life Insurance, Annuity and Trust Company of Pennsylvania (1836). [See Table 1.]

Despite this tentative start, the life insurance industry did make some significant strides beginning in the 1830s [Figure 2]. Life insurance in force (the total death benefit payable on all existing policies) grew steadily from about $600,000 in 1830 to just under $5 million a decade later, with New York Life and Trust policies accounting for more than half of this latter amount. Over the next five years insurance in force almost tripled to $14.5 million before surging by 1850 to just under $100 million of life insurance spread among 48 companies. The top three companies – the Mutual Life Insurance Company of New York (1842), the Mutual Benefit Life Insurance Company of New Jersey (1845), and the Connecticut Mutual Life Insurance Company (1846) – accounted for more than half of this amount. The sudden success of life insurance during the 1840s can be attributed to two main developments – changes in legislation impacting life insurance and a shift in the corporate structure of companies towards mutualization.
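The growth figures above imply strikingly high compound rates. As a rough check (treating the reported year-end values as exact, which they are not), the implied average annual growth of insurance in force can be computed directly:

```python
# Rough check of the compound annual growth rates implied by the reported
# insurance-in-force figures; the inputs are approximate, so the rates are
# indicative only.

def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two values."""
    return (end / start) ** (1 / years) - 1

# ~$0.6M (1830) -> ~$5M (1840) -> ~$14.5M (1845) -> ~$100M (1850), in millions
print(f"1830-1840: {cagr(0.6, 5.0, 10):.1%}")    # roughly 24% per year
print(f"1840-1845: {cagr(5.0, 14.5, 5):.1%}")    # roughly 24% per year
print(f"1845-1850: {cagr(14.5, 100.0, 5):.1%}")  # roughly 47% per year
```

The calculation makes the narrative concrete: growth was already rapid in the 1830s, and the "sudden success" of the late 1840s corresponds to a near doubling of the annual growth rate.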

Married Women’s Acts

Life insurance companies targeted women and children as the main beneficiaries of insurance, despite the fact that the majority of women were prevented by law from gaining the protection offered in the unfortunate event of their husband’s death. The first problem was that companies strictly adhered to the common law idea of insurable interest which required that any person taking out insurance on the life of another have a specific monetary interest in that person’s continued life; “affection” (i.e. the relationship of husband and wife or parent and child) was not considered adequate evidence of insurable interest. Additionally, married women could not enter into contracts on their own and therefore could not take out life insurance policies either on themselves (for the benefit of their children or husband) or directly on their husbands (for their own benefit). One way around this problem was for the husband to take out the policy on his own life and assign his wife or children as the beneficiaries. This arrangement proved to be flawed, however, since the policy was considered part of the husband’s estate and therefore could be claimed by any creditors of the insured.

New York’s 1840 Law

This dilemma did not pass unnoticed by promoters of life insurance, who viewed it as one of the main stumbling blocks to the growth of the industry. The New York Life and Trust stood at the forefront of a campaign to pass a state law enabling women to procure life insurance policies protected from the claims of creditors. The law, which passed the New York state legislature on April 1, 1840, accomplished four important tasks. First, it established the right of a woman to enter into a contract of insurance on the life of her husband “by herself and in her name, or in the name of any third person, with his assent, as her trustee.” Second, it provided that such insurance would be “free from the claims of the representatives of her husband, or of any of his creditors” unless the annual premiums on the policy exceeded $300 (approximately the premium required to take out the maximum $10,000 policy on the life of a 40-year-old). Third, in the event of the wife predeceasing the husband, the policy reverted to the children, who were granted the same protection from creditors. Finally, as the law was interpreted by both companies and the courts, wives were not required to prove their monetary interest in the life of the insured, establishing for the first time an instance of insurable interest independent of pecuniary interest in the life of another.
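The statute's two figures together imply a premium rate for the oldest lives the cap was meant to cover. The rate below is inferred from the article's numbers alone, not from any company's actual rate table:

```python
# Implied premium rate from the 1840 New York law's figures: a $300 annual
# premium ceiling and a maximum $10,000 policy on a 40-year-old. The rate is
# inferred from these two numbers, not taken from a historical rate table.

annual_premium_cap = 300.0   # dollars: the statute's creditor-protection ceiling
max_policy_face = 10_000.0   # dollars: the largest policy the cap roughly covered

implied_rate_per_1000 = annual_premium_cap / (max_policy_face / 1000.0)
print(implied_rate_per_1000)  # 30.0 dollars of premium per $1,000 of coverage
```

In other words, the legislature's $300 ceiling corresponds to about $30 per $1,000 of coverage, or roughly 3 percent of face value per year at age 40.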

By December of 1840, Maryland had enacted an identical law – copied word for word from the New York statute. The Massachusetts legislation of 1844 went one step further by protecting from the claims of creditors all policies procured “for the benefit of a married woman, whether effected by her, her husband, or any other person.” The 1851 New Jersey law was the most stringent, limiting annual premiums to only $100. In those states where a general law did not exist, new companies often had the New York law inserted into their charter, with these provisions being upheld by the state courts. For example, the Connecticut Mutual Life Insurance Company (1846), the North Carolina Mutual Life Insurance Company (1849), and the Jefferson Life Insurance Company of Cincinnati, Ohio (1850) all provided this protection in their charters despite the silence of their respective states on the issue.


The second important development of the 1840s was the emergence of mutual life insurance companies in which any annual profits were redistributed to the policyholders rather than to stockholders. Although mutual insurance was not a new concept – the Society for Equitable Assurances on Lives and Survivorships of London had been operating under the mutual plan since its establishment in 1762 and American marine and fire companies were commonly organized as mutuals – the first American mutual life companies did not begin issuing policies until the early 1840s. The main impetus for this shift to mutualization was the panic of 1837 and the resulting financial crisis, which combined to dampen the enthusiasm of investors for projects ranging from canals and railroads to banks and insurance companies. Between 1838 and 1846, only one life insurance company was able to raise the capital essential for organization on a stock basis. On the other hand, mutuals required little initial capital, relying instead on the premium payments from high-volume sales to pay any death claims. The New England Mutual Life Insurance Company (1835) issued its first policy in 1844 and the Mutual Life Insurance Company of New York (1842) began operation in 1843; at least fifteen more mutuals were chartered by 1849.

Aggressive Marketing

In order to achieve the necessary sales volume, mutual companies began to aggressively promote life insurance through advertisements, editorials, pamphlets, and soliciting agents. These marketing tactics broke with the traditionally staid practices of banks and insurance companies, whereby advertisements had generally provided only the location of the local office and agents had passively accepted applications from customers who inquired directly at the office.

Advantages of Mutuality

The mutual marketing campaigns advanced not only life insurance in general but also mutuality in particular, which held widespread appeal for the public at large. Policyholders who could not afford to own stock in a proprietary insurance company could now share in the financial success of the mutual companies, with any annual profits (the excess of invested premium income over death payments) being redistributed to the policyholders, often in the form of reduced premium payments. The rapid success of life insurance during the late 1840s, as seen in Figure 3, thus can be attributed both to this active marketing and to the appeal of mutual insurance itself.

Regulation and Stagnation after 1849

While many of these companies operated on a sound financial basis, the ease of formation opened the field to several fraudulent or fiscally unsound companies. Stock institutions, concerned both for the reputation of life insurance in general and for their own self-preservation, lobbied the New York state legislature for a law to limit the operation of mutual companies. On April 10, 1849 the legislature passed a law requiring all new insurance companies either incorporating or planning to do business in New York to possess $100,000 of capital stock. Two years later, the legislature passed a more stringent law obligating all life insurance companies to deposit $100,000 with the Comptroller of New York. While this capital requirement was readily met by most stock companies and by the more established New York-based mutual companies, it effectively dampened the movement toward mutualization until the 1890s. Additionally, twelve out-of-state companies ceased doing business in New York altogether, leaving only the New England Mutual and the Mutual Benefit of New Jersey to compete with the New York companies in one of the largest markets. These laws were also largely responsible for the decade-long stagnation in insurance sales beginning in 1849 [Figure 3].

The Civil War and Its Aftermath

By the end of the 1850s life insurance sales again began to increase, climbing to almost $200 million by 1862 before tripling to just under $600 million by the end of the Civil War; life insurance in force peaked at $2 billion in 1871 [Figures 3 and 4]. Several factors contributed to this renewed success. First, the establishment of insurance departments in Massachusetts (1856) and New York (1859) to oversee the operation of fire, marine, and life insurance companies stimulated public confidence in the financial soundness of the industry. Additionally, in 1861 the Massachusetts legislature passed a non-forfeiture law, which prohibited companies from terminating policies for lack of premium payment. Instead, the law stipulated that policies be converted to term life policies and that companies pay any death claims that occurred during this term period [term policies are issued only for a stipulated number of years, require reapplication on a regular basis, and consequently command significantly lower annual premiums which rise rapidly with age]. This law was further strengthened in 1880 when Massachusetts mandated that policyholders have the additional option of receiving a cash surrender value for a forfeited policy.

The Civil War was another factor in this resurgence. Although the industry had no experience with mortality during war – particularly a war on American soil – and most policies contained clauses that voided them in the case of military service, several major companies decided to insure war risks for an additional premium rate of 2% to 5%. While most companies just about broke even on these soldiers’ policies, the goodwill and publicity engendered with the payment of each death claim combined with a generally heightened awareness of mortality to greatly increase interest in life insurance. In the immediate postbellum period, investment in most industries increased dramatically and life insurance was no exception. Whereas only 43 companies existed on the eve of the war, the newfound popularity of life insurance resulted in the establishment of 107 companies between 1865 and 1870 [Figure 1].


The other major innovation in life insurance occurred in 1867 when the Equitable Life Assurance Society (1859) began issuing tontine or deferred dividend policies. While a portion of each premium payment went directly towards an ordinary insurance policy, another portion was deposited in an investment fund with a set maturity date (usually 10, 15, or 20 years) and a restricted group of participants. The beneficiaries of deceased policyholders received only the face value of the standard life component while participants who allowed their policy to lapse either received nothing or only a small cash surrender value. At the end of the stipulated period, the dividends that had accumulated in the fund were divided among the remaining participants. Agents often promoted these policies with inflated estimates of future returns – and always assured the potential investor that he would be a beneficiary of the high lapse rate and not one of the lapsing participants. Estimates indicate that approximately two-thirds of all life insurance policies in force in 1905 – at the height of the industry’s power – were deferred dividend plans.
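The division of a deferred dividend fund described above can be sketched with a small calculation. This is a hypothetical illustration only: the function name, the fixed compounding rate, and all figures are assumptions for exposition, not the Equitable's actual actuarial method.

```python
def tontine_share(deposit_per_period, periods, rate, participants, survivors):
    """Illustrative tontine arithmetic: each participant's deferred-dividend
    deposits accumulate at an assumed fixed rate until maturity; the pooled
    fund is then divided among participants who neither died nor lapsed."""
    # Future value of one participant's stream of deposits (deposit made in
    # period t compounds for the remaining periods - t periods).
    fv_one = sum(
        deposit_per_period * (1 + rate) ** (periods - t)
        for t in range(1, periods + 1)
    )
    fund = fv_one * participants   # pooled deferred-dividend fund at maturity
    return fund / survivors        # share per surviving, non-lapsed holder

# With no interest, $10 per period for 20 periods from 100 participants
# yields a $20,000 fund; if only 50 remain, each receives $400.
payout = tontine_share(10, 20, 0.0, 100, 50)
```

Under these assumed terms, the fewer participants who survive to maturity without lapsing, the larger each remaining holder's share — precisely the feature agents emphasized when promoting these policies.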

Reorganization and Innovation

The success and profitability of life insurance companies bred stiff competition during the 1860s; the resulting market saturation and a general economic downturn combined to push the industry into a severe depression during the 1870s. While better-established companies such as the Mutual Life Insurance Company of New York, the New York Life Insurance Company (1843), and the Equitable Life Assurance Society were strong enough to weather the depression with few problems, most of the new corporations organized during the 1860s were unable to survive the downturn. All told, 98 life insurance companies went out of business between 1868 and 1877, with 46 ceasing operations during the depression years of 1871 to 1874 [Figure 1]. Of these, 32 failed outright, resulting in $35 million of losses for policyholders. It was 1888 before the amount of insurance in force surpassed its 1870 peak [Figure 4].

Assessment and Fraternal Insurance Companies

Taking advantage of these problems within the industry were numerous assessment and fraternal benefit societies. Assessment or cooperative companies, as they were sometimes called, were associations in which each member was assessed a flat fee to provide the death benefit when another member died rather than paying an annual premium. The two main problems with these organizations were the uncertain number of assessments each year and the difficulty of maintaining membership levels. As members aged and death rates rose, the assessment societies found it difficult to recruit younger members willing to take on the increasing risks of assessments. By the turn of the century, most assessment companies had collapsed or reorganized as mutual companies.

Fraternal organizations were voluntary associations of people affiliated through ethnicity, religion, profession, or some other tie. Although fraternal societies had existed throughout the history of the United States, it was only in the postbellum era that they mushroomed in number and emerged as a major provider of life insurance, mainly for working-class Americans. While many fraternal societies initially issued insurance on an assessment basis, most soon switched to mutual insurance. By the turn of the century, the approximately 600 fraternal societies in existence provided over $5 billion in life insurance to their members, making them direct competitors of the major stock and mutual companies. Just five years later, membership was over 6 million with $8 billion of insurance in force [Figure 4].

Industrial Life Insurance

For the few successful life insurance companies organized during the 1860s and 1870s, innovation was the only means of avoiding failure. Aware that they could not compete with the major companies in a tight market, these emerging companies concentrated on markets previously ignored by the larger life insurance organizations – looking instead to the example of the fraternal benefit societies. Beginning in the mid-1870s, companies such as the John Hancock Company (1862), the Metropolitan Life Insurance Company (1868), and the Prudential Insurance Company of America (1875) started issuing industrial life insurance. Industrial insurance, which began in England in the late 1840s, targeted lower income families by providing policies in amounts as small as $100, as opposed to the thousands of dollars normally required for ordinary insurance. Premiums ranging from $0.05 to $0.65 were collected on a weekly basis, often by agents coming door-to-door, instead of on an annual, semi-annual, or quarterly basis by direct remittance to the company. Additionally, medical examinations were often not required and policies could be written to cover all members of the family instead of just the main breadwinner. While the number of policies written skyrocketed to over 51 million by 1919, industrial insurance remained only a fraction of the amount of life insurance in force throughout the period [Figures 4 and 5].

International Expansion

The major life insurance companies also quickly expanded into the global market. While numerous firms ventured abroad as early as the 1860s and 1870s, the most rapid international growth occurred between 1885 and 1905. By 1900, the Equitable was providing insurance in almost 100 nations and territories, the New York Life in almost 50 and the Mutual in about 20. The international premium income (excluding Canada) of these Big Three life insurance companies amounted to almost $50 million in 1905, covering over $1 billion of insurance in force.

The Armstrong Committee Investigation

In response to a multitude of newspaper articles portraying extravagant spending and political payoffs by executives at the Equitable Life Assurance Society – all at the expense of their policyholders – Superintendent Francis Hendricks of the New York Insurance Department reluctantly conducted an investigation of the company in 1905. His report substantiated these allegations and prompted the New York legislature to create a special committee, known as the Armstrong Committee, to examine the conduct of all life insurance companies operating within the state. Appointed chief counsel of the investigation was future United States Supreme Court Chief Justice Charles Evans Hughes. Among the abuses uncovered by the committee were interlocking directorates, the creation of subsidiary financial institutions to evade restrictions on investments, the use of proxy voting to frustrate policyholder control of mutuals, unlimited company expenses, tremendous spending for lobbying activities, rebating (the practice of returning to a new client a portion of their first premium payment as an incentive to take out a policy), the encouragement of policy lapses, and the condoning of “twisting” (a practice whereby agents misrepresented and libeled rival firms in order to convince a policyholder to sacrifice their existing policy and replace it with one from that agent). Additionally, the committee severely chastised the New York Insurance Department for permitting such malpractice to occur and recommended the enactment of a wide array of reform measures. These revelations induced numerous other states to conduct their own investigations, including New Jersey, Massachusetts, Ohio, Missouri, Wisconsin, Tennessee, Kentucky, Minnesota, and Nebraska.

New Regulations

In 1907, the New York legislature responded to the committee’s report by issuing a series of strict regulations specifying acceptable investments, limiting lobbying practices and campaign contributions, democratizing management through the elimination of proxy voting, standardizing policy forms, and limiting agent activities including rebating and twisting. Most devastating to the industry, however, were the prohibition of deferred dividend policies and the requirement of regular dividend payments to policyholders. Nineteen other states followed New York’s lead in adopting similar legislation, but the dominance of New York in the insurance industry enabled it to assert considerable influence over much of the industry. The state invoked the Appleton Rule, a 1901 administrative rule devised by New York Deputy Superintendent of Insurance Henry D. Appleton that required life insurance companies to comply with New York legislation both in New York and in all other states in which they conducted business, as a condition of doing business in New York. As the Massachusetts insurance commissioner immediately recognized, “In a certain sense [New York’s] supervision will be a national supervision, as its companies do business in all the states.” The rule was officially incorporated into New York’s insurance laws in 1939 and remained both in effect and highly effective until the 1970s.

Continued Growth in the Early Twentieth Century

The Armstrong hearings and the ensuing legislation renewed public confidence in the safety of life insurance, resulting in a surge of new company organizations not seen since the 1860s. Whereas only 106 companies existed in 1904, another 288 were established in the ten years from 1905 to 1914 [Figure 1]. Life insurance in force likewise rose rapidly, increasing from $20 billion on the eve of the hearings to almost $46 billion by the end of World War I, with the share insured by the fraternal and assessment societies decreasing from 40% to less than a quarter [Figure 5].

Group Insurance

One major innovation to occur during these decades was the development of group insurance. In 1911 the Equitable Life Assurance Society wrote a policy covering the 125 employees of the Pantasote Leather Company, requiring neither individual applications nor medical examinations. The following year, the Equitable organized a group department to promote this new product and soon was insuring the employees of Montgomery Ward Company. By 1919, 29 companies wrote group policies, amounting to over half a billion dollars’ worth of life insurance in force.

War Risk Insurance

Not included in Figure 5 is the War Risk insurance issued by the United States government during World War I. Beginning in April 1917, all active military personnel received a $4,500 insurance policy payable by the federal government in the case of death or disability. In October of the same year, the government began selling low-cost term life and disability insurance, without medical examination, to all active members of the military. War Risk insurance proved to be extremely popular during the war, reaching over $40 billion of life insurance in force by 1919. In the aftermath of the war, these term policies quickly declined to under $3 billion of life insurance in force, with many servicemen turning instead to the whole life policies offered by the stock and mutual companies. As was the case after the Civil War, life insurance sales rose dramatically after World War I, peaking at $117 billion of insurance in force in 1930. By the eve of the Great Depression there existed over 120 million life insurance policies – approximately equivalent to one policy for every man, woman, and child living in the United States at that time.

(Sharon Ann Murphy is a Ph.D. Candidate at the Corcoran Department of History, University of Virginia.)

References and Further Reading

Buley, R. Carlyle. The American Life Convention, 1906-1952: A Study in the History of Life Insurance. New York: Appleton-Century-Crofts, Inc., 1953.

Grant, H. Roger. Insurance Reform: Consumer Action in the Progressive Era. Ames, Iowa: Iowa State University Press, 1988.

Keller, Morton. The Life Insurance Enterprise, 1885-1910: A Study in the Limits of Corporate Power. Cambridge, MA: Belknap Press, 1963.

Kimball, Spencer L. Insurance and Public Policy: A Study in the Legal Implications of Social and Economic Public Policy, Based on Wisconsin Records 1835-1959. Madison, WI: University of Wisconsin Press, 1960.

Merkel, Philip L. “Going National: The Life Insurance Industry’s Campaign for Federal Regulation after the Civil War.” Business History Review 65 (Autumn 1991): 528-553.

North, Douglass. “Capital Accumulation in Life Insurance between the Civil War and the Investigation of 1905.” In Men in Business: Essays on the Historical Role of the Entrepreneur, edited by William Miller, 238-253. New York: Harper & Row Publishers, 1952.

Ransom, Roger L., and Richard Sutch. “Tontine Insurance and the Armstrong Investigation: A Case of Stifled Innovation, 1868-1905.” Journal of Economic History 47, no. 2 (June 1987): 379-390.

Stalson, J. Owen. Marketing Life Insurance: Its History in America. Cambridge, MA: Harvard University Press, 1942.

Table 1

Early American Life Insurance Companies, 1759-1844

Company Year Chartered Terminated Insurance in Force in 1840
Corp. for the Relief of Poor and Distressed Widows and Children of Presbyterian Ministers (Presbyterian Ministers Fund) 1759
Corporation for the Relief of the Widows and Children of Clergymen in the Communion of the Church of England in America (Episcopal Ministers Fund) 1769
Insurance Company of the State of Pennsylvania 1794 1798
Insurance Company of North America, PA 1794 1798
United Insurance Company, NY 1798 1802
New York Insurance Company 1798 1802
Pennsylvania Company for Insurances on Lives and Granting Annuities 1812 1872* 691,000
New York Mechanics Life & Fire 1812 1813
Dutchess County Fire, Marine & Life, NY 1814 1818
Massachusetts Hospital Life Insurance Company 1818 1867* 342,000
Union Insurance Company, NY 1818 1840
Aetna Insurance Company (mainly fire insurance; separate life company chartered in 1853) 1820 1853
Farmers Loan & Trust Company, NY 1822 1843
Baltimore Life Insurance Company 1830 1867 750,000 (est.)
New York Life Insurance & Trust Company 1830 1865* 2,880,000
Lawrenceburg Insurance Company 1832 1836
Mississippi Insurance Company 1833 1837
Protection Insurance Company, Mississippi 1833 1837
Ohio Life Ins. & Trust Co. (life policies appear to have been reinsured with New York Life & Trust in the late 1840s) 1834 1857 54,000
New England Mutual Life Insurance Company, Massachusetts (did not begin issuing policies until 1844) 1835 0
Ocean Mutual, Louisiana 1835 1839
Southern Life & Trust, Alabama 1836 1840
American Life Insurance & Trust Company, Baltimore 1836 1840
Girard Life Insurance, Annuity & Trust Company, Pennsylvania 1836 1894 723,000
Missouri Life & Trust 1837 1841
Missouri Mutual 1837 1841
Globe Life Insurance, Trust & Annuity Company, Pennsylvania 1837 1857
Odd Fellow Life Insurance and Trust Company, Pennsylvania 1840 1857
National of Pennsylvania 1841 1852
Mutual Life Insurance Company of New York 1842
New York Life Insurance Company 1843
State Mutual Life Assurance Company, Massachusetts 1844

*Date company ceased writing life insurance.

Citation: Murphy, Sharon. “Life Insurance in the United States through World War I”. EH.Net Encyclopedia, edited by Robert Whaples. August 14, 2002. URL

The Economic History of Indonesia

Jeroen Touwen, Leiden University, Netherlands


In recent decades, Indonesia has been viewed as one of Southeast Asia’s successful, newly industrializing economies, following the trail of the Asian tigers (Hong Kong, Singapore, South Korea, and Taiwan) (see Table 1). Although Indonesia’s economy grew with impressive speed during the 1980s and 1990s, it experienced considerable trouble after the financial crisis of 1997, which led to significant political reforms. Today Indonesia’s economy is recovering, but it is difficult to say when all its problems will be solved. Even though Indonesia can still be considered part of the developing world, it has a rich and versatile past, in the economic as well as the cultural and political sense.

Basic Facts

Indonesia is situated in Southeastern Asia and consists of a large archipelago between the Indian Ocean and the Pacific Ocean, with more than 13,000 islands. The largest islands are Java, Kalimantan (the southern part of the island Borneo), Sumatra, Sulawesi, and Papua (formerly Irian Jaya, which is the western part of New Guinea). Indonesia’s total land area measures 1.9 million square kilometers (750,000 square miles). This is three times the area of Texas, almost eight times the area of the United Kingdom and roughly fifty times the area of the Netherlands. Indonesia has a tropical climate, but since there are large stretches of lowland and numerous mountainous areas, the climate varies from hot and humid to more moderate in the highlands. Apart from fertile land suitable for agriculture, Indonesia is rich in a range of natural resources, varying from petroleum, natural gas, and coal, to metals such as tin, bauxite, nickel, copper, gold, and silver. The size of Indonesia’s population is about 230 million (2002), of which the largest share (roughly 60%) lives in Java.

Table 1

Indonesia’s Gross Domestic Product per Capita

Compared with Several Other Asian Countries (in 1990 dollars)

Year   Indonesia   Philippines   Thailand   Japan
1900   745         1,033         812        1,180
1913   904         1,066         835        1,385
1950   840         1,070         817        1,926
1973   1,504       1,959         1,874      11,439
1990   2,516       2,199         4,645      18,789
2000   3,041       2,385         6,335      20,084

Source: Angus Maddison, The World Economy: A Millennial Perspective, Paris: OECD Development Centre Studies 2001, 206, 214-215. For year 2000: University of Groningen and the Conference Board, GGDC Total Economy Database, 2003,

Important Aspects of Indonesian Economic History

“Missed Opportunities”

Anne Booth has characterized the economic history of Indonesia with the somewhat melancholy phrase “a history of missed opportunities” (Booth 1998). One may compare this with J. Pluvier’s history of Southeast Asia in the twentieth century, which is entitled A Century of Unfulfilled Expectations (Breda 1999). The missed opportunities refer to the fact that despite its rich natural resources and great variety of cultural traditions, the Indonesian economy has been underperforming for large periods of its history. A more cyclical view would lead one to speak of several ‘reversals of fortune.’ Several times the Indonesian economy seemed to promise a continuation of favorable economic development and ongoing modernization (for example, Java in the late nineteenth century, Indonesia in the late 1930s or in the early 1990s). But for various reasons Indonesia time and again suffered severe incidents that cut short further expansion. These incidents often originated in the internal institutional or political spheres (either after independence or in colonial times), although external influences such as the 1930s Depression also took their toll on the vulnerable export economy.

“Unity in Diversity”

In addition, one often reads about “unity in diversity.” This is not only a political slogan repeated at various times by the Indonesian government itself; it can also be applied to the heterogeneity of this very large and diverse country. Logically, the political problems that arise from such a heterogeneous nation state have had their (negative) effects on the development of the national economy. The most striking divide is between densely populated Java and the sparsely populated Outer Islands, with Java long dominating the latter politically and economically. But within Java and within the various Outer Islands as well, one encounters a rich cultural diversity. Economic differences between the islands persist. Nevertheless, for centuries the flourishing and enterprising interregional trade has benefited regional integration within the archipelago.

Economic Development and State Formation

State formation can be viewed as a condition for an emerging national economy. This process essentially started in Indonesia in the nineteenth century, when the Dutch colonized an area largely similar to present-day Indonesia. Colonial Indonesia was called ‘the Netherlands Indies.’ The term ‘(Dutch) East Indies’ was mainly used in the seventeenth and eighteenth centuries and included trading posts outside the Indonesian archipelago.

Although Indonesian national historiography sometimes refers to a presumed 350 years of colonial domination, it is exaggerated to interpret the arrival of the Dutch in Bantam in 1596 as the starting point of Dutch colonization. It is more reasonable to say that colonization started in 1830, when the Java War (1825-1830) was ended and the Dutch initiated a bureaucratic, centralizing polity in Java without further restraint. From the mid-nineteenth century onward, Dutch colonization did shape the borders of the Indonesian nation state, even though it also incorporated weaknesses in the state: ethnic segmentation of economic roles, unequal spatial distribution of power, and a political system that was largely based on oppression and violence. This, among other things, repeatedly led to political trouble, before and after independence. Indonesia ceased being a colony on 17 August 1945 when Sukarno and Hatta proclaimed independence, although full independence was acknowledged by the Netherlands only after four years of violent conflict, on 27 December 1949.

The Evolution of Methodological Approaches to Indonesian Economic History

The economic history of Indonesia covers a range of topics, including the dynamic export of raw materials, the dualist economy in which both Western and Indonesian entrepreneurs participated, and the strong regional variation in the economy. While in the past Dutch historians traditionally focused on the colonial era (inspired by the rich colonial archives), from the 1960s and 1970s onward an increasing number of scholars (including many Indonesians, as well as Australian and American scholars) began to study post-war Indonesian events in connection with the colonial past. In the course of the 1990s attention gradually shifted from the identification and exploration of new research themes towards synthesis and attempts to link economic development with broader historical issues. In 1998 the excellent first book-length survey of Indonesia’s modern economic history was published (Booth 1998). The stress on synthesis and lessons is also present in a new textbook on the modern economic history of Indonesia (Dick et al. 2002). This highly recommended textbook juxtaposes three themes: globalization, economic integration, and state formation. Globalization affected the Indonesian archipelago even before the arrival of the Dutch. The period of the centralized, military-bureaucratic state of Soeharto’s New Order (1966-1998) was only the most recent wave of globalization. A national economy emerged gradually from the 1930s as the Outer Islands (a collective name which refers to all islands outside Java and Madura) reoriented towards industrializing Java.

Two research traditions have become especially important in the study of Indonesian economic history during the past decade. One is a highly quantitative approach, culminating in reconstructions of Indonesia’s national income and national accounts over a long period of time, from the late nineteenth century up to today (Van der Eng 1992, 2001). The other research tradition highlights the institutional framework of economic development in Indonesia, both as a colonial legacy and as it has evolved since independence. There is a growing appreciation among scholars that these two approaches complement each other.

A Chronological Survey of Indonesian Economic History

The precolonial economy

There were several influential kingdoms in the Indonesian archipelago during the pre-colonial era (e.g. Srivijaya, Mataram, Majapahit) (see further Reid 1988, 1993; Ricklefs 1993). Much debate centers on whether this heyday of indigenous Asian trade was effectively disrupted by the arrival of western traders in the late fifteenth century.

Sixteenth and seventeenth century

Present-day research by scholars in pre-colonial economic history focuses on the dynamics of early-modern trade and pays specific attention to the role of different ethnic groups such as the Arabs, the Chinese, and the various indigenous groups of traders and entrepreneurs. From the sixteenth to the nineteenth century the western colonizers had only a tenuous grip on a limited number of spots in the Indonesian archipelago. As a consequence, much of the economic history of these islands escapes the attention of the economic historian. Most data on economic matters were handed down by western observers, whose view was limited. A large part of the area remained engaged in its own economic activities, including subsistence agriculture (whose yields were not necessarily meager) and local and regional trade.

An older research literature has extensively covered the role of the Dutch in the Indonesian archipelago, which began in 1596 when the first expedition of Dutch sailing ships arrived in Bantam. In the seventeenth and eighteenth centuries the Dutch overseas trade in the Far East, which focused on high-value goods, was in the hands of the powerful Dutch East India Company (in full: the United East Indies Trading Company, or Vereenigde Oost-Indische Compagnie [VOC], 1602-1795). However, the region was still fragmented and Dutch presence was only concentrated in a limited number of trading posts.

During the eighteenth century, coffee and sugar became the most important products and Java became the most important area. The VOC gradually took over power from the Javanese rulers and held a firm grip on the productive parts of Java. The VOC was also actively engaged in the intra-Asian trade. For example, cotton from Bengal was sold in the pepper growing areas. The VOC was a successful enterprise and made large dividend payments to its shareholders. Corruption, lack of investment capital, and increasing competition from England led to its demise and in 1799 the VOC came to an end (Gaastra 2002, Jacobs 2000).

The nineteenth century

In the nineteenth century a process of more intensive colonization started, predominantly in Java, where the Cultivation System (1830-1870) was implemented (Elson 1994; Fasseur 1975).

During the Napoleonic era the VOC trading posts in the archipelago had been under British rule, but in 1814 they came under Dutch authority again. During the Java War (1825-1830), Dutch rule on Java was challenged by an uprising led by the Javanese prince Diponegoro. Repressing this revolt and establishing firm rule in Java raised colonial expenses, which in turn led to a stronger emphasis on economic exploitation of the colony. The Cultivation System, initiated by Johannes van den Bosch, was a state-governed system for the production of agricultural products such as sugar and coffee. In return for a fixed compensation (planting wage), the Javanese were forced to cultivate export crops. Supervisors, such as civil servants and Javanese district heads, were paid generous ‘cultivation percentages’ in order to stimulate production. The exports of the products were consigned to a Dutch state-owned trading firm (the Nederlandsche Handel-Maatschappij, NHM, established in 1824) and sold profitably abroad.

Although the profits (‘batig slot’) for the Dutch state of the period 1830-1870 were considerable, various reasons can be mentioned for the change to a liberal system: (a) the emergence of new liberal political ideology; (b) the gradual demise of the Cultivation System during the 1840s and 1850s because internal reforms were necessary; and (c) growth of private (European) entrepreneurship with know-how and interest in the exploitation of natural resources, which took away the need for government management (Van Zanden and Van Riel 2000: 226).

Table 2

Financial Results of Government Cultivation, 1840-1849 (‘Cultivation System’) (in thousands of guilders in current values)

                    1840-1844   1845-1849
Coffee                 40,278      24,549
Sugar                   8,218       4,136
Indigo                  7,836       7,726
Pepper, Tea               647       1,725
Total net profits      39,341      35,057

Source: Fasseur 1975: 20.

Table 3

Estimates of Total Profits (‘batig slot’) during the Cultivation System,

1831/40 – 1861/70 (in millions of guilders)

                                              1831/40  1841/50  1851/60  1861/70
Gross revenues of sale of colonial products     227.0    473.9    652.7    641.8
Costs of transport etc. (NHM)                    88.0    165.4    138.7    114.7
Sum of expenses                                  59.2    175.1    275.3    276.6
Total net profits*                              150.6    215.6    289.4    276.7

Source: Van Zanden and Van Riel 2000: 223.

* Recalculated by Van Zanden and Van Riel to include subsidies for the NHM and other costs that in fact benefited the Dutch economy.

The heyday of the colonial export economy (1900-1942)

After 1870, private enterprise was promoted, but the exports of raw materials gained decisive momentum after 1900. Sugar, coffee, pepper and tobacco, the old export products, were increasingly supplemented with highly profitable exports of petroleum, rubber, copra, palm oil and fibers. The Outer Islands supplied an increasing share of these foreign exports, which were accompanied by an intensifying internal trade within the archipelago and generated an increasing flow of foreign imports. Agricultural exports were cultivated both in large-scale European agricultural plantations (usually called agricultural estates) and by indigenous smallholders. When the exploitation of oil became profitable in the late nineteenth century, petroleum earned a respectable position in the total export package. In the early twentieth century, the production of oil was increasingly concentrated in the hands of the Koninklijke/Shell Group.

Figure 1

Foreign Exports from the Netherlands-Indies, 1870-1940

(in millions of guilders, current values)

Source: Trade statistics

The momentum of profitable exports led to a broad expansion of economic activity in the Indonesian archipelago. Integration with the world market also led to internal economic integration when the road system, railroad system (in Java and Sumatra) and port system were improved. In shipping, an important contribution was made by the KPM (Koninklijke Paketvaart-Maatschappij, Royal Packet Boat Company), which served economic integration as well as imperialist expansion. Subsidized shipping lines into remote corners of the vast archipelago carried off export goods (forest products), supplied import goods, and transported civil servants and military personnel.

The Depression of the 1930s hit the export economy severely. The sugar industry in Java collapsed and never really recovered from the crisis. In some products, such as rubber and copra, production was stepped up to compensate for lower prices; for this reason, indigenous rubber producers evaded the international restriction agreements. The Depression precipitated the introduction of protectionist measures, which ended the liberal period that had started in 1870. Various import restrictions were launched, making the economy more self-sufficient, as for example in the production of rice, and stimulating domestic integration. Due to the strong Dutch guilder (the Netherlands adhered to the gold standard until 1936), economic recovery was relatively slow. The outbreak of World War II disrupted international trade, and the Japanese occupation (1942-1945) seriously disturbed and dislocated the economic order.

Table 4

Annual Average Growth in Economic Key Aggregates 1830-1990

                              GDP per capita  Export volume  Export  Government expenditure
Cultivation System 1830-1840       n.a.           13.5         5.0          8.5
Cultivation System 1840-1848       n.a.            1.5        -4.5      [very low]
Cultivation System 1849-1873       n.a.            1.5         1.5          2.6
Liberal Period 1874-1900       [very low]          3.1        -1.9          2.3
Ethical Period 1901-1928            1.7            5.8        17.4          4.1
Great Depression 1929-1934         -3.4           -3.9       -19.7          0.4
Prewar Recovery 1934-1940           2.5            2.2         7.8          3.4
Old Order 1950-1965                 1.0            0.8        -2.1          1.8
New Order 1966-1990                 4.4            5.4        11.6         10.6

Source: Booth 1998: 18.

Note: These average annual growth percentages were calculated by Booth by fitting an exponential curve to the data for the years indicated. Up to 1873 data refer only to Java.
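Booth's averages are obtained by fitting an exponential trend to the annual series, which is equivalent to an ordinary least squares regression of the logarithm of the series on time. As a minimal sketch of that method (using an illustrative, made-up series, not Booth's data):

```python
import math

def average_annual_growth(values):
    """Fit an exponential trend y = A * exp(g * t) by ordinary least
    squares on log(y); return the implied average annual growth rate."""
    t = list(range(len(values)))
    logs = [math.log(v) for v in values]
    n = len(values)
    t_mean = sum(t) / n
    y_mean = sum(logs) / n
    # OLS slope of log(y) on t is the continuous growth rate g.
    slope = sum((ti - t_mean) * (yi - y_mean) for ti, yi in zip(t, logs))
    slope /= sum((ti - t_mean) ** 2 for ti in t)
    # Convert the continuous rate to an annual percentage growth rate.
    return math.exp(slope) - 1.0

# An illustrative series growing at exactly 5 percent per year:
series = [100 * 1.05 ** i for i in range(10)]
print(round(average_annual_growth(series) * 100, 1))  # 5.0
```

Fitting a trend rather than comparing endpoints makes the averages less sensitive to unusually good or bad first and last years, which matters for volatile export series like those in Table 4.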

The post-1945 period

After independence, the Indonesian economy had to recover from the hardships of the Japanese occupation and the war for independence (1945-1949), on top of the slow recovery from the 1930s Depression. The period 1949-1965 saw little economic growth, and what growth there was occurred predominantly in the years 1950 to 1957. In 1958-1965, growth rates dwindled, largely due to political instability and inappropriate economic policy measures. The hesitant start of democracy was characterized by a power struggle between the president, the army, the communist party and other political groups. After the government eliminated all foreign economic control in the private sector in 1957/58, exchange rate problems and the absence of foreign capital were detrimental to economic development. Sukarno aimed at self-sufficiency and import substitution, and estranged the suppliers of western capital even further when he developed communist sympathies.

After 1966, the second president, General Soeharto, restored the inflow of western capital, brought back political stability with a strong role for the army, and led Indonesia into a period of economic expansion under his authoritarian New Order (Orde Baru) regime, which lasted until 1997 (see below for the three phases of the New Order). In this period industrial output quickly increased, including steel, aluminum and cement but also products such as food, textiles and cigarettes. From the 1970s onward the increased oil price on the world market provided Indonesia with a massive income from oil and gas exports. Wood exports shifted from logs to plywood, pulp and paper, at the price of large stretches of environmentally valuable rainforest.

Soeharto managed to apply part of these revenues to the development of technologically advanced manufacturing industry. Referring to this period of stable economic growth, the World Bank Report of 1993 speaks of an ‘East Asian Miracle’ emphasizing the macroeconomic stability and the investments in human capital (World Bank 1993: vi).

The financial crisis in 1997 revealed a number of hidden weaknesses in the economy, such as a feeble financial system (with a lack of transparency), unprofitable investments in real estate, and shortcomings in the legal system. The burgeoning corruption at all levels of the government bureaucracy became widely known as KKN (korupsi, kolusi, nepotisme). These practices characterized the late years of the strongly centralized, autocratic Soeharto regime, by then 32 years old.

From 1998 until present

Today, the Indonesian economy still suffers from severe economic development problems following the financial crisis of 1997 and the subsequent political reforms after Soeharto stepped down in 1998. Secessionist movements and the low level of security in the provincial regions, as well as relatively unstable political policies, form some of its present-day problems. Additional problems include the lack of reliable legal recourse in contract disputes, corruption, weaknesses in the banking system, and strained relations with the International Monetary Fund. The confidence of investors remains low, and in order to achieve future growth, internal reform will be essential to build up confidence of international donors and investors.

An important issue on the reform agenda is regional autonomy, bringing a larger share of export profits to the areas of production instead of to metropolitan Java. However, decentralization policies do not necessarily improve national coherence or increase efficiency in governance.

A strong comeback in the global economy may be at hand, but has not as yet fully taken place by the summer of 2003 when this was written.

Additional Themes in the Indonesian Historiography

Indonesia is such a large and multi-faceted country that many different aspects have been the focus of research (for example, ethnic groups, trade networks, shipping, colonialism and imperialism). One can focus on smaller regions (provinces, islands), as well as on larger regions (the western archipelago, the eastern archipelago, the Outer Islands as a whole, or Indonesia within Southeast Asia). Without trying to be exhaustive, eleven themes that have been the subject of debate in Indonesian economic history are examined here (on other debates see also Houben 2002: 53-55; Lindblad 2002b: 145-152; Dick 2002: 191-193; Thee 2002: 242-243).

The indigenous economy and the dualist economy

Although western entrepreneurs had an advantage in technological know-how and supply of investment capital during the late-colonial period, there has traditionally been a strong and dynamic class of entrepreneurs (traders and peasants) in many regions of Indonesia. Resilient in times of economic malaise, cunning in symbiosis with traders of other Asian nationalities (particularly Chinese), the Indonesian entrepreneur has been rehabilitated after the relatively disparaging manner in which he was often pictured in the pre-1945 literature. One of these early writers, J.H. Boeke, initiated a school of thought centering on the idea of ‘economic dualism’ (referring to a modern western and a stagnant eastern sector). As a consequence, the term ‘dualism’ was often used to indicate western superiority. From the 1960s onward such ideas have been replaced by a more objective analysis of the dualist economy that is not so judgmental about the characteristics of economic development in the Asian sector. Some focused on technological dualism (such as B. Higgins); others on ethnic specialization in different branches of production (see also Lindblad 2002b: 148; Touwen 2001: 316-317).

The characteristics of Dutch imperialism

Another vigorous debate concerns the character of and the motives for Dutch colonial expansion. Dutch imperialism can be viewed as having a rather complex mix of political, economic and military motives which influenced decisions about colonial borders, establishing political control in order to exploit oil and other natural resources, and preventing local uprisings. Three imperialist phases can be distinguished (Lindblad 2002a: 95-99). The first phase of imperialist expansion was from 1825-1870. During this phase interference with economic matters outside Java increased slowly but military intervention was occasional. The second phase started with the outbreak of the Aceh War in 1873 and lasted until 1896. During this phase initiatives in trade and foreign investment taken by the colonial government and by private businessmen were accompanied by extension of colonial (military) control in the regions concerned. The third and final phase was characterized by full-scale aggressive imperialism (often known as ‘pacification’) and lasted from 1896 until 1907.

The impact of the cultivation system on the indigenous economy

The thesis of ‘agricultural involution’ was advocated by Clifford Geertz (1963) and states that a process of stagnation characterized the rural economy of Java in the nineteenth century. After extensive research, this view has generally been discarded. Colonial economic growth was stimulated first by the Cultivation System, later by the promotion of private enterprise. Non-farm employment and purchasing power increased in the indigenous economy, although there was much regional inequality (Lindblad 2002a: 80; 2002b:149-150).

Regional diversity in export-led economic expansion

The contrast between densely populated Java, which had been dominant in economic and political regard for a long time, and the Outer Islands, which were a large, sparsely populated area, is obvious. Among the Outer Islands we can distinguish between areas which were propelled forward by export trade, either from Indonesian or European origin (examples are Palembang, East Sumatra, Southeast Kalimantan) and areas which stayed behind and only slowly picked the fruits of the modernization that took place elsewhere (as for example Benkulu, Timor, Maluku) (Touwen 2001).

The development of the colonial state and the role of Ethical Policy

Well into the second half of the nineteenth century, the official Dutch policy was to abstain from interference in local affairs; the scarce resources of the Dutch colonial administrators were to be reserved for Java. When the Aceh War initiated a period of imperialist expansion and consolidation of colonial power, a call for more concern with indigenous affairs was heard in Dutch politics. The result was the official Ethical Policy, launched in 1901, which had the threefold aim of improving indigenous welfare, expanding the educational system, and allowing for some indigenous participation in government (resulting in the People’s Council (Volksraad), installed in 1918 but with only an advisory role). The results of the Ethical Policy, as measured for example in improvements in agricultural technology, education, or welfare services, are still subject to debate (Lindblad 2002b: 149).

Living conditions of coolies at the agricultural estates

The plantation economy, which developed in the sparsely populated Outer Islands (predominantly in Sumatra) between 1870 and 1942, was in dire need of labor. The labor shortage was solved by recruiting contract laborers (coolies) in China, and later in Java. The Coolie Ordinance was a government regulation that included the penal clause (which allowed for punishment by plantation owners). In response to reported abuse, the colonial government established the Labor Inspectorate (1908), which aimed at preventing abuse of coolies on the estates. The living circumstances and treatment of the coolies have been the subject of debate, particularly regarding the question of whether the government put enough effort into protecting the interests of the workers or allowed abuse to persist (Lindblad 2002b: 150).

Colonial drain

How large a proportion of economic profits was drained away from the colony to the mother country? The detrimental effects of this drain of capital, and the exact methods of its measurement, have been debated, as has the question of what the colony received in return in the form of European entrepreneurial initiative. There was also a second drain to the home countries of other immigrant ethnic groups, mainly to China (Van der Eng 1998; Lindblad 2002b: 151).

The position of the Chinese in the Indonesian economy

In the colonial economy, the Chinese intermediary trader or middleman played a vital role in supplying credit and stimulating the cultivation of export crops such as rattan, rubber and copra. The colonial legal system made an explicit distinction between Europeans, Chinese and Indonesians. This formed the roots of later ethnic problems, since the Chinese minority population in Indonesia has gained an important (and sometimes envied) position as capital owners and entrepreneurs. When threatened by political and social turmoil, Chinese business networks may sometimes have channeled capital funds to overseas deposits.

Economic chaos during the ‘Old Order’

The ‘Old Order’ period (1945-1965) was characterized by economic (and political) chaos, although some economic growth undeniably took place during these years. However, macroeconomic instability, lack of foreign investment and structural rigidity formed economic problems that were closely connected with the political power struggle. Sukarno, the first president of the Indonesian republic, had an outspoken dislike of colonialism. His efforts to eliminate foreign economic control were not always supportive of the struggling economy of the new sovereign state. The ‘Old Order’ has long been a ‘lost area’ in Indonesian economic history, but the establishment of the unitary state and the settlement of major political issues, including some degree of territorial consolidation (as well as the consolidation of the role of the army), were essential for the development of a national economy (Dick 2002: 190; Mackie 1967).

Development policy and economic planning during the ‘New Order’ period

The ‘New Order’ (Orde Baru) of Soeharto rejected political mobilization and socialist ideology, and established a tightly controlled regime that discouraged intellectual enquiry but did put Indonesia’s economy back on the rails. New flows of foreign investment and foreign aid programs were attracted, unbridled population growth was reduced by family planning programs, and a transformation took place from a predominantly agricultural economy to an industrializing economy. Thee Kian Wie distinguishes three phases within this period, each of which deserves further study:

(a) 1966-1973: stabilization, rehabilitation, partial liberalization and economic recovery;

(b) 1974-1982: oil booms, rapid economic growth, and increasing government intervention;

(c) 1983-1996: post-oil boom, deregulation, renewed liberalization (in reaction to falling oil-prices), and rapid export-led growth. During this last phase, commentators (including academic economists) were increasingly concerned about the thriving corruption at all levels of the government bureaucracy: KKN (korupsi, kolusi, nepotisme) practices, as they later became known (Thee 2002: 203-215).

Financial, economic and political crisis: KRISMON, KRISTAL

The financial crisis of 1997 started with a crisis of confidence following the depreciation of the Thai baht in July 1997. Core factors causing the ensuing economic crisis in Indonesia were the quasi-fixed exchange rate of the rupiah, quickly rising short-term foreign debt and the weak financial system. Its severity must also be attributed to political factors: the monetary crisis (KRISMON) became a total crisis (KRISTAL) because of the failing policy response of the Soeharto regime. Soeharto had been in power for 32 years, and his government had become heavily centralized and corrupt and was not able to cope with the crisis in a credible manner. The origins, economic consequences, and socio-economic impact of the crisis are still under discussion (Thee 2003: 231-237; Arndt and Hill 1999).

(Note: I want to thank Dr. F. Colombijn and Dr. J.Th Lindblad at Leiden University for their useful comments on the draft version of this article.)

Selected Bibliography

In addition to the works cited in the text above, a small selection of recent books is mentioned here, which will allow the reader to quickly grasp the most recent insights and find useful further references.

General textbooks or periodicals on Indonesia’s (economic) history:

Booth, Anne. The Indonesian Economy in the Nineteenth and Twentieth Centuries: A History of Missed Opportunities. London: Macmillan, 1998.

Bulletin of Indonesian Economic Studies.

Dick, H.W., V.J.H. Houben, J.Th. Lindblad and Thee Kian Wie. The Emergence of a National Economy in Indonesia, 1800-2000. Sydney: Allen & Unwin, 2002.

Itinerario “Economic Growth and Institutional Change in Indonesia in the 19th and 20th centuries” [special issue] 26 no. 3-4 (2002).

Reid, Anthony. Southeast Asia in the Age of Commerce, 1450-1680, Vol. I: The Lands below the Winds. New Haven: Yale University Press, 1988.

Reid, Anthony. Southeast Asia in the Age of Commerce, 1450-1680, Vol. II: Expansion and Crisis. New Haven: Yale University Press, 1993.

Ricklefs, M.C. A History of Modern Indonesia since ca. 1300. Basingstoke/London: Macmillan, 1993.

On the VOC:

Gaastra, F.S. De Geschiedenis van de VOC. Zutphen: Walburg Pers, 1991 (1st edition), 2002 (4th edition).

Jacobs, Els M. Koopman in Azië: de Handel van de Verenigde Oost-Indische Compagnie tijdens de 18de Eeuw. Zutphen: Walburg Pers, 2000.

Nagtegaal, Lucas. Riding the Dutch Tiger: The Dutch East Indies Company and the Northeast Coast of Java 1680-1743. Leiden: KITLV Press, 1996.

On the Cultivation System:

Elson, R.E. Village Java under the Cultivation System, 1830-1870. Sydney: Allen and Unwin, 1994.

Fasseur, C. Kultuurstelsel en Koloniale Baten. De Nederlandse Exploitatie van Java, 1840-1860. Leiden: Universitaire Pers, 1975. (Translated as: The Politics of Colonial Exploitation: Java, the Dutch and the Cultivation System. Ithaca, NY: Southeast Asia Program, Cornell University Press, 1992.)

Geertz, Clifford. Agricultural Involution: The Processes of Ecological Change in Indonesia. Berkeley: University of California Press, 1963.

Houben, V.J.H. “Java in the Nineteenth Century: Consolidation of a Territorial State.” In The Emergence of a National Economy in Indonesia, 1800-2000, edited by H.W. Dick, V.J.H. Houben, J.Th. Lindblad and Thee Kian Wie, 56-81. Sydney: Allen & Unwin, 2002.

On the Late-Colonial Period:

Dick, H.W. “Formation of the Nation-state, 1930s-1966.” In The Emergence of a National Economy in Indonesia, 1800-2000, edited by H.W. Dick, V.J.H. Houben, J.Th. Lindblad and Thee Kian Wie, 153-193. Sydney: Allen & Unwin, 2002.

Lembaran Sejarah, “Crisis and Continuity: Indonesian Economy in the Twentieth Century” [special issue] 3 no. 1 (2000).

Lindblad, J.Th., editor. New Challenges in the Modern Economic History of Indonesia. Leiden: PRIS, 1993. Translated as: Sejarah Ekonomi Modern Indonesia. Berbagai Tantangan Baru. Jakarta: LP3ES, 2002.

Lindblad, J.Th., editor. The Historical Foundations of a National Economy in Indonesia, 1890s-1990s. Amsterdam: North-Holland, 1996.

Lindblad, J.Th. “The Outer Islands in the Nineteenth Century: Contest for the Periphery.” In The Emergence of a National Economy in Indonesia, 1800-2000, edited by H.W. Dick, V.J.H. Houben, J.Th. Lindblad and Thee Kian Wie, 82-110. Sydney: Allen & Unwin, 2002a.

Lindblad, J.Th. “The Late Colonial State and Economic Expansion, 1900-1930s.” In The Emergence of a National Economy in Indonesia, 1800-2000, edited by H.W. Dick, V.J.H. Houben, J.Th. Lindblad and Thee Kian Wie, 111-152. Sydney: Allen & Unwin, 2002b.

Touwen, L.J. Extremes in the Archipelago: Trade and Economic Development in the Outer Islands of Indonesia, 1900‑1942. Leiden: KITLV Press, 2001.

Van der Eng, Pierre. “Exploring Exploitation: The Netherlands and Colonial Indonesia, 1870-1940.” Revista de Historia Económica 16 (1998): 291-321.

Zanden, J.L. van, and A. van Riel. Nederland, 1780-1914: Staat, instituties en economische ontwikkeling. Amsterdam: Balans, 2000. (On the Netherlands in the nineteenth century.)

Independent Indonesia:

Arndt, H.W. and Hal Hill, editors. Southeast Asia’s Economic Crisis: Origins, Lessons and the Way forward. Singapore: Institute of Southeast Asian Studies, 1999.

Cribb, R. and C. Brown. Modern Indonesia: A History since 1945. London/New York: Longman, 1995.

Feith, H. The Decline of Constitutional Democracy in Indonesia. Ithaca, New York: Cornell University Press, 1962.

Hill, Hal. The Indonesian Economy. Cambridge: Cambridge University Press, 2000. (This is the extended second edition of Hill, H., The Indonesian Economy since 1966. Southeast Asia’s Emerging Giant. Cambridge: Cambridge University Press, 1996.)

Hill, Hal, editor. Unity and Diversity: Regional Economic Development in Indonesia since 1970. Singapore: Oxford University Press, 1989.

Mackie, J.A.C. “The Indonesian Economy, 1950-1960.” In The Economy of Indonesia: Selected Readings, edited by B. Glassburner, 16-69. Ithaca NY: Cornell University Press 1967.

Robison, Richard. Indonesia: The Rise of Capital. Sydney: Allen and Unwin, 1986.

Thee Kian Wie. “The Soeharto Era and After: Stability, Development and Crisis, 1966-2000.” In The Emergence of a National Economy in Indonesia, 1800-2000, edited by H.W. Dick, V.J.H. Houben, J.Th. Lindblad and Thee Kian Wie, 194-243. Sydney: Allen & Unwin, 2002.

World Bank. The East Asian Miracle: Economic Growth and Public Policy. Oxford: World Bank /Oxford University Press, 1993.

On economic growth:

Booth, Anne. The Indonesian Economy in the Nineteenth and Twentieth Centuries. A History of Missed Opportunities. London: Macmillan, 1998.

Van der Eng, Pierre. “The Real Domestic Product of Indonesia, 1880-1989.” Explorations in Economic History 39 (1992): 343-373.

Van der Eng, Pierre. “Indonesia’s Growth Performance in the Twentieth Century.” In The Asian Economies in the Twentieth Century, edited by Angus Maddison, D.S. Prasada Rao and W. Shepherd, 143-179. Cheltenham: Edward Elgar, 2002.

Van der Eng, Pierre. “Indonesia’s Economy and Standard of Living in the Twentieth Century.” In Indonesia Today: Challenges of History, edited by G. Lloyd and S. Smith, 181-199. Singapore: Institute of Southeast Asian Studies, 2001.

Citation: Touwen, Jeroen. “The Economic History of Indonesia”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL

Economic History of Hong Kong

Catherine R. Schenk, University of Glasgow

Hong Kong’s economic and political history has been primarily determined by its geographical location. The territory of Hong Kong comprises two main islands (Hong Kong Island and Lantau Island) and a mainland hinterland. It thus forms a natural geographic port for Guangdong province in Southeast China. In a sense, there is considerable continuity in Hong Kong’s position in the international economy, since its origins were as a commercial entrepot for China’s regional and global trade, and this is still a role it plays today. From a relatively unpopulated territory at the beginning of the nineteenth century, Hong Kong grew to become one of the most important international financial centers in the world. Hong Kong also underwent a rapid and successful process of industrialization from the 1950s that captured the imagination of economists and historians in the 1980s and 1990s.

Hong Kong from 1842 to 1949

After being ceded by China to the British under the Treaty of Nanking in 1842, the colony of Hong Kong quickly became a regional center for financial and commercial services, based particularly around the Hongkong and Shanghai Bank and merchant companies such as Jardine Matheson. In 1841 there were only 7,500 Chinese inhabitants of Hong Kong and a handful of foreigners, but by 1859 the Chinese community was over 85,000, supplemented by about 1,600 foreigners. The economy was closely linked to commercial activity, dominated by shipping, banking and merchant companies. Gradually there was increasing diversification into services and retail outlets to meet the needs of the local population, and also shipbuilding and maintenance linked to the presence of British naval and merchant shipping. There was some industrial expansion in the nineteenth century, notably sugar refining, cement and ice factories in the foreign sector, alongside smaller-scale local workshop manufactures. The mainland territory of Hong Kong was ceded to British rule by two further treaties in this period: Kowloon in 1860 and the New Territories in 1898.

Hong Kong was profoundly affected by the disastrous events in Mainland China in the inter-war period. After the overthrow of the dynastic system in 1911, the Kuomintang (KMT) took a decade to pull together a republican nation-state. The Great Depression and fluctuations in the international price of silver then disrupted China’s economic relations with the rest of the world in the 1930s. From 1937, China descended into the Sino-Japanese War. Two years after the end of World War II, the civil war between the KMT and the Chinese Communist Party pushed China into a downward economic spiral. During this period, Hong Kong suffered from the slowdown in world trade and in China’s trade in particular. However, problems on the mainland also diverted business and entrepreneurs from Shanghai and other cities to the relative safety and stability of the British colonial port of Hong Kong.

Post-War Industrialization

After the establishment of the People’s Republic of China (PRC) in 1949, the mainland began a process of isolation from the international economy, partly for ideological reasons and partly because of Cold War embargos on trade imposed first by the United States in 1949 and then by the United Nations in 1951. Nevertheless, Hong Kong was vital to the international economic links that the PRC continued in order to pursue industrialization and support grain imports. Even during the period of self-sufficiency in the 1960s, Hong Kong’s imports of food and water from the PRC were a vital source of foreign exchange revenue that ensured Hong Kong’s usefulness to the mainland. In turn, cheap food helped to restrain rises in the cost of living in Hong Kong thus helping to keep wages low during the period of labor-intensive industrialization.

The industrialization of Hong Kong is usually dated from the embargoes of the 1950s. Certainly, Hong Kong’s prosperity could no longer depend on the China trade in this decade. However, as seen above, industry emerged in the nineteenth century and it began to expand in the interwar period. Nevertheless, industrialization accelerated after 1945 with the inflow of refugees, entrepreneurs and capital fleeing the civil war on the mainland. The most prominent example is immigrants from Shanghai who created the cotton spinning industry in the colony. Hong Kong’s industry was founded in the textile sector in the 1950s before gradually diversifying in the 1960s to clothing, electronics, plastics and other labor-intensive production mainly for export.

The economic development of Hong Kong is unusual in a variety of respects. First, industrialization was accompanied by increasing numbers of small and medium-sized enterprises (SMEs) rather than consolidation. In 1955, 91 percent of manufacturing establishments employed fewer than one hundred workers, a proportion that increased to 96.5 percent by 1975. Factories employing fewer than one hundred workers accounted for 42 percent of Hong Kong’s domestic exports to the U.K. in 1968, amounting to HK$1.2 billion. At the end of 2002, SMEs still amounted to 98 percent of enterprises, providing 60 percent of total private employment.

Second, until the late 1960s, the government did not engage in active industrial planning. This was partly because the government was preoccupied with social spending on housing large flows of immigrants, and partly because of an ideological sympathy for free market forces. This means that Hong Kong fits outside the usual models of Asian economic development based on state-led industrialization (Japan, South Korea, Singapore, Taiwan) or domination of foreign firms (Singapore) or large firms with close relations to the state (Japan, South Korea). Low taxes, lax employment laws, absence of government debt, and free trade are all pillars of the Hong Kong experience of economic development.

In fact, of course, the reality was very different from the myth of complete laissez-faire. The government’s programs of public housing, land reclamation, and infrastructure investment were ambitious. New industrial towns were built to house immigrants, provide employment and aid industry. The government subsidized industry indirectly through this public housing, which restrained rises in the cost of living that would have threatened Hong Kong’s labor-cost advantage in manufacturing. The government also pursued an ambitious public education program, creating over 300,000 new primary school places between 1954 and 1961. By 1966, 99.8% of school-age children were attending primary school, although free universal primary school was not provided until 1971. Secondary school provision was expanded in the 1970s, and from 1978 the government offered compulsory free education for all children up to the age of 15. The hand of government was much lighter on international trade and finance. Exchange controls were limited to a few imposed by the U.K., and there were no controls on international flows of capital. Government expenditure even fell from 7.5% of GDP in the 1960s to 6.5% in the 1970s. In the same decades, British government spending as a percent of GDP rose from 17% to 20%.

From the mid-1950s Hong Kong’s rapid success as a textile and garment exporter generated trade friction that resulted in voluntary export restraints in a series of treaties with the U.K. beginning in 1959. Despite these agreements, Hong Kong’s exporters continued to exploit their flexibility and adaptability to increase production and find new markets. Indeed, exports increased from 54% of GDP in the 1960s to 64% in the 1970s. Figure 1 shows the annual changes in the growth of real GDP per capita. In the period from 1962 until the onset of the oil crisis in 1973, the average growth rate was 6.5% per year. From 1976 to 1996 GDP grew at an average of 5.6% per year. There were negative shocks in 1967-68 as a result of local disturbances from the onset of the Cultural Revolution in the PRC, and again in 1973 to 1975 from the global oil crisis. In the early 1980s there was another negative shock related to politics, as the terms of Hong Kong’s return to PRC control in 1997 were formalized.

Figure 1. Annual percentage change of per capita GDP, 1962-2001

Reintegration with China, 1978-1997

The Open Door Policy of the PRC announced by Deng Xiao-ping at the end of 1978 marked a new era for Hong Kong’s economy. With the newly vigorous engagement of China in international trade and investment, Hong Kong’s integration with the mainland accelerated as it regained its traditional role as that country’s main provider of commercial and financial services. From 1978 to 1997, visible trade between Hong Kong and the PRC grew at an average rate of 28% per annum. At the same time, Hong Kong firms began to move their labor-intensive activities to the mainland to take advantage of cheaper labor. The integration of Hong Kong with the Pearl River delta in Guangdong is the most striking aspect of these trade and investment links. At the end of 1997, the cumulative value of Hong Kong’s direct investment in Guangdong was estimated at US$48 billion, accounting for almost 80% of the total foreign direct investment there. Hong Kong companies and joint ventures in Guangdong province employed about five million people. Most of these businesses were labor-intensive assembly for export, but from 1997 onward there has been increased investment in financial services, tourism and retail trade.

While manufacturing was moved out of the colony during the 1980s and 1990s, there was a surge in the service sector. This transformation of the structure of Hong Kong’s economy from manufacturing to services was dramatic. Most remarkably it was accomplished without faltering growth rates overall, and with an average unemployment rate of only 2.5% from 1982 to 1997. Figure 2 shows that the value of manufacturing peaked in 1992 before beginning an absolute decline. In contrast, the value of commercial and financial services soared. This is reflected in the contribution of services and manufacturing to GDP shown in Figure 3. Employment in the service sector rose from 52% to 80% of the labor force from 1981 to 2000 while manufacturing employment fell from 39% to 10% in the same period.

Figure 2. GDP by economic activity at current prices. Figure 3. Contribution to Hong Kong’s GDP at factor prices.

Asian Financial Crisis, 1997-2002

The terms for the return of Hong Kong to Chinese rule in July 1997 carefully protected the territory’s separate economic characteristics, which have been so beneficial to the Chinese economy. Under the Basic Law, a “one country-two systems” policy was formulated which left Hong Kong monetarily and economically separate from the mainland with exchange and trade controls remaining in place as well as restrictions on the movement of people. Hong Kong was hit hard by the Asian Financial Crisis that struck the region in mid-1997, just at the time of the handover of the colony back to Chinese administrative control. The crisis prompted a collapse in share prices and the property market that affected the ability of many borrowers to repay bank loans. Unlike most Asian countries, Hong Kong Special Administrative Region and mainland China maintained their currencies’ exchange rates with the U.S. dollar rather than devaluing. Along with the Severe Acute Respiratory Syndrome (SARS) outbreak in 2003, the Asian Financial Crisis pushed Hong Kong into a new era of recession with a rise in unemployment (6% on average from 1998-2003) and absolute declines in output and prices. The longer-term impact of the crisis has been to increase the intensity and importance of Hong Kong’s trade and investment links with the PRC. Since the PRC did not fare as badly from the regional crisis, the economic prospects for Hong Kong have been tied more closely to the increasingly prosperous mainland.

Suggestions for Further Reading

For a general history of Hong Kong from the nineteenth century, see S. Tsang, A Modern History of Hong Kong, London: IB Tauris, 2004. For accounts of Hong Kong’s economic history see D.R. Meyer, Hong Kong as a Global Metropolis, Cambridge: Cambridge University Press, 2000; C.R. Schenk, Hong Kong as an International Financial Centre: Emergence and Development, 1945-65, London: Routledge, 2001; and Y-P Ho, Trade, Industrial Restructuring and Development in Hong Kong, London: Macmillan, 1992. Useful statistics and summaries of recent developments are available on the website of the Hong Kong Monetary Authority.

Citation: Schenk, Catherine. “Economic History of Hong Kong”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL

A History of Futures Trading in the United States

Joseph Santos, South Dakota State University

Many contemporary [nineteenth century] critics were suspicious of a form of business in which one man sold what he did not own to another who did not want it… Morton Rothstein (1966)

Anatomy of a Futures Market

The Futures Contract

A futures contract is a standardized agreement between a buyer and a seller to exchange an amount and grade of an item at a specific price and future date. The item or underlying asset may be an agricultural commodity, a metal, mineral or energy commodity, a financial instrument or a foreign currency. Because futures contracts are derived from these underlying assets, they belong to a family of financial instruments called derivatives.

Traders buy and sell futures contracts on an exchange – a marketplace that is operated by a voluntary association of members. The exchange provides buyers and sellers the infrastructure (trading pits or their electronic equivalent), legal framework (trading rules, arbitration mechanisms), contract specifications (grades, standards, time and method of delivery, terms of payment) and clearing mechanisms (see section titled The Clearinghouse) necessary to facilitate futures trading. Only exchange members are allowed to trade on the exchange. Nonmembers trade through commission merchants – exchange members who service nonmember trades and accounts for a fee.

The September 2004 light sweet crude oil contract is an example of a petroleum (mineral) future. It trades on the New York Mercantile Exchange (NYM). The contract is standardized – every one is an agreement to trade 1,000 barrels of grade light sweet crude in September, on a day of the seller’s choosing. As of May 25, 2004 the contract sold for $40,120 = $40.12 × 1,000 barrels.

The Clearinghouse

The clearinghouse is the counterparty to every trade – its members buy every contract that traders sell on the exchange and sell every contract that traders buy on the exchange. Absent a clearinghouse, traders would interact directly, and this would introduce two problems. First, traders’ concerns about their counterparty’s credibility would impede trading. For example, Trader A might refuse to sell to Trader B, who is supposedly untrustworthy.

Second, traders would lose track of their counterparties. This would occur because traders typically settle their contractual obligations by offset – traders buy/sell the contracts that they sold/bought earlier. For example, Trader A sells a contract to Trader B, who sells a contract to Trader C to offset her position, and so on.

The clearinghouse eliminates both of these problems. First, it is a guarantor of all trades. If a trader defaults on a futures contract, the clearinghouse absorbs the loss. Second, clearinghouse members, and not outside traders, reconcile offsets at the end of trading each day. Margin accounts and a process called marking-to-market all but assure the clearinghouse’s solvency.

A margin account is a balance that a trader maintains with a commission merchant in order to offset the trader’s daily unrealized losses in the futures markets. Commission merchants also maintain margins with clearinghouse members, who maintain them with the clearinghouse. The margin account begins as an initial lump sum deposit, or original margin.

To understand the mechanics and merits of marking-to-market, consider that the values of the long and short positions of an existing futures contract change daily, even though futures trading is a zero-sum game – a buyer’s gain/loss equals a seller’s loss/gain. So, the clearinghouse breaks even on every trade, while its individual members’ positions change in value daily.

With this in mind, suppose Trader B buys a 5,000 bushel soybean contract for $9.70 from Trader S. Technically, Trader B buys the contract from Clearinghouse Member S and Trader S sells the contract to Clearinghouse Member B. Now, suppose that at the end of the day the contract is priced at $9.71. That evening the clearinghouse marks-to-market each member’s account. That is to say, the clearinghouse credits Member B’s margin account $50 and debits Member S’s margin account the same amount.

Member B is now in a position to draw on the clearinghouse $50, while Member S must pay the clearinghouse a $50 variation margin – incremental margin equal to the difference between a contract’s price and its current market value. In turn, clearinghouse members debit and credit accordingly the margin accounts of their commission merchants, who do the same to the margin accounts of their clients (i.e., traders). This iterative process all but assures the clearinghouse a sound financial footing. In the unlikely event that a trader defaults, the clearinghouse closes out the position and loses, at most, the trader’s one day loss.
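The daily marking-to-market in this example can be sketched in a few lines of Python. The contract size (5,000 bushels) and the prices come from the soybean example above; the function name and structure are illustrative, not an exchange’s actual procedure.

```python
CONTRACT_BUSHELS = 5_000  # size of the soybean contract in the example

def mark_to_market(prev_price, settle_price, bushels=CONTRACT_BUSHELS):
    """Variation margin credited to the long (and debited from the short)
    when the futures price moves from prev_price to settle_price."""
    return round((settle_price - prev_price) * bushels, 2)

# Trader B bought at $9.70; the contract settles that evening at $9.71.
credit_to_long = mark_to_market(9.70, 9.71)
print(credit_to_long)  # 50.0 -> Member B is credited $50, Member S debited $50
```

A price move in the other direction simply reverses the sign: `mark_to_market(9.71, 9.70)` returns `-50.0`, the amount the long would owe as variation margin.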

Active Futures Markets

Futures exchanges create futures contracts. And, because futures exchanges compete for traders, they must create contracts that appeal to the financial community. For example, the New York Mercantile Exchange created its light sweet crude oil contract in order to fill an unexploited niche in the financial marketplace.

Not all contracts are successful and those that are may, at times, be inactive – the contract exists, but traders are not trading it. For example, of all contracts introduced by U.S. exchanges between 1960 and 1977, only 32% traded in 1980 (Stein 1986, 7). Consequently, entire exchanges can become active – e.g., the New York Futures Exchange opened in 1980 – or inactive – e.g., the New Orleans Exchange closed in 1983 (Leuthold 1989, 18). Government price supports or other such regulation can also render trading inactive (see Carlton 1984, 245).

Futures contracts succeed or fail for many reasons, but successful contracts do share certain basic characteristics (see for example, Baer and Saxon 1949, 110-25; Hieronymus 1977, 19-22). To wit, the underlying asset is homogeneous, reasonably durable, and standardized (easily describable); its supply and demand is ample, its price is unfettered, and all relevant information is available to all traders. For example, futures contracts have never derived from, say, artwork (heterogeneous and not standardized) or rent-controlled housing rights (supply, and hence price is fettered by regulation).

Purposes and Functions

Futures markets have three fundamental purposes. The first is to enable hedgers to shift price risk – asset price volatility – to speculators in return for basis risk – changes in the difference between a futures price and the cash, or current spot price of the underlying asset. Because basis risk is typically less than asset price risk, the financial community views hedging as a form of risk management and speculating as a form of risk taking.

Generally speaking, to hedge is to take opposing positions in the futures and cash markets. Hedgers include (but are not restricted to) farmers, feedlot operators, grain elevator operators, merchants, millers, utilities, export and import firms, refiners, lenders, and hedge fund managers (see Peck 1985, 13-21). Meanwhile, to speculate is to take a position in the futures market with no counter-position in the cash market. Speculators need not be affiliated with the underlying cash markets.

To demonstrate how a hedge works, assume Hedger A buys, or longs, 5,000 bushels of corn, which is currently worth $2.40 per bushel, or $12,000=$2.40×5000; the date is May 1st and Hedger A wishes to preserve the value of his corn inventory until he sells it on June 1st. To do so, he takes a position in the futures market that is exactly opposite his position in the spot – current cash – market. For example, Hedger A sells, or shorts, a July futures contract for 5,000 bushels of corn at a price of $2.50 per bushel; put differently, Hedger A commits to sell in July 5,000 bushels of corn for $12,500=$2.50×5000. Recall that to sell (buy) a futures contract means to commit to sell (buy) an amount and grade of an item at a specific price and future date.

Absent basis risk, Hedger A’s spot and futures markets positions will preserve the value of the 5,000 bushels of corn that he owns, because a fall in the spot price of corn will be matched penny for penny by a fall in the futures price of corn. For example, suppose that by June 1st the spot price of corn has fallen five cents to $2.35 per bushel. Absent basis risk, the July futures price of corn has also fallen five cents to $2.45 per bushel.

So, on June 1st, Hedger A sells his 5,000 bushels of corn and loses $250=($2.35-$2.40)×5000 in the spot market. At the same time, he buys a July futures contract for 5,000 bushels of corn and gains $250=($2.50-$2.45)×5000 in the futures market. Notice, because Hedger A has both sold and bought a July futures contract for 5,000 bushels of corn, he has offset his commitment in the futures market.
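The arithmetic of this textbook hedge can be verified with a short numeric sketch, assuming zero basis risk as in the example. Prices and quantities are those given above; the variable names are illustrative.

```python
BUSHELS = 5_000  # corn inventory and contract size from the example

spot_may, spot_june = 2.40, 2.35     # spot price of corn, May 1 / June 1
fut_short, fut_offset = 2.50, 2.45   # July futures: sold May 1, bought June 1

spot_gain = round((spot_june - spot_may) * BUSHELS, 2)       # -250.0 (loss)
futures_gain = round((fut_short - fut_offset) * BUSHELS, 2)  # +250.0 (gain)

print(spot_gain + futures_gain)  # 0.0 -> the futures gain offsets the spot loss
```

With zero basis risk the two legs cancel exactly; as the next paragraph notes, in practice basis risk, partial hedging, and cross hedging keep the net from being precisely zero.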

This example of a textbook hedge – one that eliminates price risk entirely – is instructive but it is also a bit misleading because: basis risk exists; hedgers may choose to hedge more or less than 100% of their cash positions; and hedgers may cross hedge – trade futures contracts whose underlying assets are not the same as the assets that the hedger owns. So, in reality hedgers cannot immunize entirely their cash positions from market fluctuations and in some cases they may not wish to do so. Again, the purpose of a hedge is not to avoid risk, but rather to manage or even profit from it.

The second fundamental purpose of a futures market is to facilitate firms’ acquisitions of operating capital – short term loans that finance firms’ purchases of intermediate goods such as inventories of grain or petroleum. For example, lenders are relatively more likely to finance, at or near prime lending rates, hedged (versus non-hedged) inventories. The futures contract is an efficient form of collateral because it costs only a fraction of the inventory’s value, or the margin on a short position in the futures market.

Speculators make the hedge possible because they absorb the inventory’s price risk; for example, the ultimate counterparty to the inventory dealer’s short position is a speculator. In the absence of futures markets, hedgers could only engage in forward contracts – unique agreements between private parties, who operate independently of an exchange or clearinghouse. Hence, the collateral value of a forward contract is less than that of a futures contract.3

The third fundamental purpose of a futures market is to provide information to decision makers regarding the market’s expectations of future economic events. So long as a futures market is efficient – the market forms expectations by taking into proper consideration all available information – its forecasts of future economic events are relatively more reliable than an individual’s. Forecast errors are expensive, and well informed, highly competitive, profit-seeking traders have a relatively greater incentive to minimize them.

The Evolution of Futures Trading in the U.S.

Early Nineteenth Century Grain Production and Marketing

Into the early nineteenth century, the vast majority of American grains – wheat, corn, barley, rye and oats – were produced throughout the hinterlands of the United States by producers who acted primarily as subsistence farmers – agricultural producers whose primary objective was to feed themselves and their families. Although many of these farmers sold their surplus production on the market, most lacked access to large markets, as well as the incentive, affordable labor supply, and myriad technologies necessary to practice commercial agriculture – the large scale production and marketing of surplus agricultural commodities.

At this time, the principal trade route to the Atlantic seaboard was by river through New Orleans4; though the South was also home to terminal markets – markets of final destination – for corn, provisions and flour. Smaller local grain markets existed along the tributaries of the Ohio and Mississippi Rivers and east-west overland routes. The latter were used primarily to transport manufactured (high valued and nonperishable) goods west.

Most farmers, and particularly those in the East North Central States – the region consisting today of Illinois, Indiana, Michigan, Ohio and Wisconsin – could not ship bulk grains to market profitably (Clark 1966, 4, 15).5 Instead, most converted grains into relatively high value flour, livestock, provisions and whiskies or malt liquors and shipped them south or, in the case of livestock, drove them east (14).6 Oats traded locally, if at all; their low value-to-weight ratios made their shipment, in bulk or otherwise, prohibitive (15n).

The Great Lakes provided a natural water route east to Buffalo but, in order to ship grain this way, producers in the interior East North Central region needed local ports to receive their production. Although the Erie Canal connected Lake Erie to the port of New York by 1825, water routes that connected local interior ports throughout northern Ohio to the Canal were not operational prior to the mid-1830s. Indeed, initially the Erie aided the development of the Old Northwest, not because it facilitated eastward grain shipments, but rather because it allowed immigrants and manufactured goods easy access to the West (Clark 1966, 53).

By 1835 the mouths of rivers and streams throughout the East North Central States had become the hubs, or port cities, from which farmers shipped grain east via the Erie. By this time, shippers could also opt to go south on the Ohio River and then upriver to Pittsburgh and ultimately to Philadelphia, or north on the Ohio Canal to Cleveland, Buffalo and ultimately, via the Welland Canal, to Lake Ontario and Montreal (19).

By 1836 shippers carried more grain north on the Great Lakes and through Buffalo than south on the Mississippi through New Orleans (Odle 1964, 441). Though, as late as 1840 Ohio was the only state or region that participated significantly in the Great Lakes trade. Illinois, Indiana, Michigan, and the region of modern day Wisconsin either produced for their respective local markets or relied upon Southern demand. As of 1837 only 4,107 residents populated the “village” of Chicago, which became an official city in that year (Hieronymus 1977, 72).7

Antebellum Grain Trade Finance in the Old Northwest

Before the mid-1860s, a network of banks, grain dealers, merchants, millers and commission houses – buying and selling agents located in the central commodity markets – employed an acceptance system to finance the U.S. grain trade (see Clark 1966, 119; Odle 1964, 442). For example, a miller who required grain would instruct an agent in, say, New York to establish, on the miller’s behalf, a line of credit with a merchant there. The merchant extended this line of credit in the form of sight drafts, which the merchant made payable, in sixty or ninety days, up to the amount of the line of credit.

With this credit line established, commission agents in the hinterland would arrange with grain dealers to acquire the necessary grain. The commission agent would obtain warehouse receipts – dealer certified negotiable titles to specific lots and quantities of grain in store – from dealers, attach these to drafts that he drew on the merchant’s line of credit, and discount these drafts at his local bank in return for banknotes; the local bank would forward these drafts on to the New York merchant’s bank for redemption. The commission agents would use these banknotes to advance – lend – grain dealers roughly three quarters of the current market value of the grain. The commission agent would pay dealers the remainder (minus finance and commission fees) when the grain was finally sold in the East. That is, commission agents and grain dealers entered into consignment contracts.
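The consignment arithmetic described above can be sketched as follows. The roughly three-quarters advance rate is from the text; the dollar amounts and the fee are hypothetical round numbers chosen purely for illustration.

```python
ADVANCE_RATE = 0.75  # "roughly three quarters of the current market value"

def consignment_payments(market_value, sale_proceeds, fees):
    """Return (advance paid up front, remainder paid after the eastern sale),
    net of finance and commission fees."""
    advance = ADVANCE_RATE * market_value
    remainder = sale_proceeds - advance - fees
    return advance, remainder

# Hypothetical example: grain currently worth $10,000 later sells in the
# East for $10,400, with finance and commission fees totaling $300.
advance, remainder = consignment_payments(10_000, 10_400, 300)
print(advance, remainder)  # 7500.0 2600.0
```

The dealer thus receives $7,500 immediately and $2,600 only after the grain sells in the East – which is why, as noted below, a fall in grain prices between advance and sale put both consignor and consignee at risk.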

Unfortunately, this approach linked banks, grain dealers, merchants, millers and commission agents such that the “entire procedure was attended by considerable risk and speculation, which was assumed by both the consignee and consignor” (Clark 1966, 120). The system was reasonably adequate if grain prices went unchanged between the time the miller procured the credit and the time the grain (bulk or converted) was sold in the East, but this was rarely the case. The fundamental problem with this system of finance was that commission agents were effectively asking banks to lend them money to purchase as yet unsold grain. To be sure, this inadequacy was most apparent during financial panics, when many banks refused to discount these drafts (Odle 1964, 447).

Grain Trade Finance in Transition: Forward Contracts and Commodity Exchanges

In 1848 the Illinois-Michigan Canal connected the Illinois River to Lake Michigan. The canal enabled farmers in the hinterlands along the Illinois River to ship their produce to merchants located along the river. These merchants accumulated, stored and then shipped grain to Chicago, Milwaukee and Racine. At first, shippers tagged deliverables according to producer and region, while purchasers inspected and chose these tagged bundles upon delivery. Commercial activity at the three grain ports grew throughout the 1850s. Chicago emerged as a dominant grain (primarily corn) hub later that decade (Pierce 1957, 66).8

Amidst this growth of Lake Michigan commerce, a confluence of innovations transformed the grain trade and its method of finance. By the 1840s, grain elevators and railroads facilitated high volume grain storage and shipment, respectively. Consequently, country merchants and their Chicago counterparts required greater financing in order to store and ship this higher volume of grain.9 And, high volume grain storage and shipment required that inventoried grains be fungible – of such a nature that one part or quantity could be replaced by another equal part or quantity in the satisfaction of an obligation. For example, because a bushel of grade No. 2 Spring Wheat was fungible, its price did not depend on whether it came from Farmer A, Farmer B, Grain Elevator C, or Train Car D.

Merchants could secure these larger loans more easily and at relatively lower rates if they obtained firm price and quantity commitments from their buyers. So, merchants began to engage in forward (not futures) contracts. According to Hieronymus (1977), the first such “time contract” on record was made on March 13, 1851. It specified that 3,000 bushels of corn were to be delivered to Chicago in June at a price of one cent below the March 13th cash market price (74).10

Meanwhile, commodity exchanges serviced the trade’s need for fungible grain. In the 1840s and 1850s these exchanges emerged as associations for dealing with local issues such as harbor infrastructure and commercial arbitration (e.g., Detroit in 1847, Buffalo, Cleveland and Chicago in 1848 and Milwaukee in 1849) (see Odle 1964). By the 1850s they established a system of staple grades, standards and inspections, all of which rendered inventory grain fungible (Baer and Saxon 1949, 10; Chandler 1977, 211). As collection points for grain, cotton, and provisions, they weighed, inspected and classified commodity shipments that passed from west to east. They also facilitated organized trading in spot and forward markets (Chandler 1977, 211; Odle 1964, 439).11

The largest and most prominent of these exchanges was the Board of Trade of the City of Chicago, a grain and provisions exchange established in 1848 by a State of Illinois corporate charter (Boyle 1920, 38; Lurie 1979, 27); the exchange is known today as the Chicago Board of Trade (CBT). For at least its first decade, the CBT functioned as a meeting place for merchants to resolve contract disputes and discuss commercial matters of mutual concern. Participation was part-time at best. The Board’s first directorate of 25 members included “a druggist, a bookseller, a tanner, a grocer, a coal dealer, a hardware merchant, and a banker” and attendance was often encouraged by free lunches (Lurie 1979, 25).

However, in 1859 the CBT became a state- (of Illinois) chartered private association. As such, the exchange requested and received from the Illinois legislature sanction to establish rules “for the management of their business and the mode in which it shall be transacted, as they may think proper;” to arbitrate over and settle disputes with the authority as “if it were a judgment rendered in the Circuit Court;” and to inspect, weigh and certify grain and grain trades such that these certifications would be binding upon all CBT members (Lurie 1979, 27).

Nineteenth Century Futures Trading

By the 1850s traders sold and resold forward contracts prior to actual delivery (Hieronymus 1977, 75). A trader could not offset, in the futures market sense of the term, a forward contract. Nonetheless, the existence of a secondary market – market for extant, as opposed to newly issued securities – in forward contracts suggests, if nothing else, that speculators were active in these early time contracts.

On March 27, 1863, the Chicago Board of Trade adopted its first rules and procedures for trade in forwards on the exchange (Hieronymus 1977, 76). The rules addressed contract settlement, which was (and still is) the fundamental challenge associated with a forward contract – finding a trader who was willing to take a position in a forward contract was relatively easy to do; finding that trader at the time of contract settlement was not.

The CBT began to transform actively traded and reasonably homogeneous forward contracts into futures contracts in May, 1865. At this time, the CBT: restricted trade in time contracts to exchange members; standardized contract specifications; required traders to deposit margins; and specified formally contract settlement, including payments and deliveries, and grievance procedures (Hieronymus 1977, 76).

The inception of organized futures trading is difficult to date. This is due, in part, to semantic ambiguities – e.g., was a “to arrive” contract a forward contract or a futures contract or neither? However, most grain trade historians agree that storage (grain elevators), shipment (railroad), and communication (telegraph) technologies, a system of staple grades and standards, and the impetus to speculation provided by the Crimean and U.S. Civil Wars enabled futures trading to ripen by about 1874, at which time the CBT was the U.S.’s premier organized commodities (grain and provisions) futures exchange (Baer and Saxon 1949, 87; Chandler 1977, 212; CBT 1936, 18; Clark 1966, 120; Dies 1925, 15; Hoffman 1932, 29; Irwin 1954, 77, 82; Rothstein 1966, 67).

Nonetheless, futures exchanges in the mid-1870s lacked modern clearinghouses, with which most exchanges began to experiment only in the mid-1880s. For example, the CBT’s clearinghouse got its start in 1884, and a complete and mandatory clearing system was in place at the CBT by 1925 (Hoffman 1932, 199; Williams 1982, 306). The earliest formal clearing and offset procedures were established by the Minneapolis Grain Exchange in 1891 (Peck 1985, 6).

Even so, rudiments of a clearing system – one that freed traders from dealing directly with one another – were in place by the 1870s (Hoffman 1920, 189). That is to say, brokers assumed the counter-position to every trade, much as clearinghouse members would do decades later. Brokers settled offsets between one another, though in the absence of a formal clearing procedure these settlements were difficult to accomplish.

Direct settlements were simple enough. Here, two brokers would settle in cash their offsetting positions between one another only. Nonetheless, direct settlements were relatively uncommon because offsetting purchases and sales between brokers rarely balanced with respect to quantity. For example, B1 might buy a 5,000 bushel corn future from B2, who then might buy a 6,000 bushel corn future from B1; in this example, 1,000 bushels of corn remain unsettled between B1 and B2. Of course, the two brokers could offset the remaining 1,000 bushel contract if B2 sold a 1,000 bushel corn future to B1. But what if B2 had already sold a 1,000 bushel corn future to B3, who had sold a 1,000 bushel corn future to B1? In this case, each broker’s net futures market position is offset, but all three must meet in order to settle their respective positions. Brokers referred to such a meeting as a ring settlement. Finally, if, in this example, B1 and B3 did not have positions with each other, B2 could settle her position if she transferred her commitment (which she has with B1) to B3. Brokers referred to this method as a transfer settlement. In either ring or transfer settlements, brokers had to find other brokers who held and wished to settle open counter-positions. Often brokers used runners to search literally the offices and corridors for the requisite counter-parties (see Hoffman 1932, 185-200).
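The ring example above can be made concrete with a short sketch that nets each broker’s trades. The trades and broker labels B1-B3 are those in the text; the code structure itself is illustrative.

```python
from collections import defaultdict

# Each trade is (buyer, seller, bushels), following the example in the text.
trades = [
    ("B1", "B2", 5_000),  # B1 buys a 5,000 bushel corn future from B2
    ("B2", "B1", 6_000),  # B2 then buys a 6,000 bushel corn future from B1
    ("B3", "B2", 1_000),  # B2 had already sold 1,000 bushels to B3 ...
    ("B1", "B3", 1_000),  # ... who had sold 1,000 bushels to B1
]

net = defaultdict(int)       # each broker's overall net futures position
pairwise = defaultdict(int)  # net quantity owed between ordered pairs

for buyer, seller, qty in trades:
    net[buyer] += qty
    net[seller] -= qty
    pairwise[(buyer, seller)] += qty
    pairwise[(seller, buyer)] -= qty

# Every broker is flat overall ...
print(dict(net))  # {'B1': 0, 'B2': 0, 'B3': 0}

# ... yet 1,000 bushels remain open between each pair, so all three
# brokers must meet in a ring to settle.
open_pairs = {pair: qty for pair, qty in pairwise.items() if qty > 0}
print(open_pairs)
```

The output shows exactly the situation brokers faced: each net position is zero, but the offsetting commitments circulate around the ring (B2 long 1,000 versus B1, B3 long 1,000 versus B2, B1 long 1,000 versus B3) rather than canceling pairwise.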

Finally, the transformation from forward to futures trading that took place in Chicago grain markets occurred almost simultaneously in New York cotton markets. Forward contracts for cotton traded in New York (and Liverpool, England) by the 1850s. And, like Chicago, organized trading in cotton futures began on the New York Cotton Exchange in about 1870; rules and procedures formalized the practice in 1872. Futures trading on the New Orleans Cotton Exchange began around 1882 (Hieronymus 1977, 77).

Other successful nineteenth century futures exchanges include the New York Produce Exchange, the Milwaukee Chamber of Commerce, the Merchant’s Exchange of St. Louis, the Chicago Open Board of Trade, the Duluth Board of Trade, and the Kansas City Board of Trade (Hoffman 1920, 33; see Peck 1985, 9).

Early Futures Market Performance


Data on grain futures volume prior to the 1880s are not available (Hoffman 1932, 30), though in the 1870s “[CBT] officials openly admitted that there was no actual delivery of grain in more than ninety percent of contracts” (Lurie 1979, 59). Indeed, Chart 1 demonstrates that trading was relatively voluminous in the nineteenth century.

An annual average of 23,600 million bushels of grain futures traded between 1884 and 1888, or eight times the annual average amount of crops produced during that period. By comparison, an annual average of 25,803 million bushels of grain futures traded between 1966 and 1970, or four times the annual average amount of crops produced during that period. In 2002, futures volume exceeded crop production by a factor of eleven.

The comparable data for cotton futures are presented in Chart 2. Here again, trading in the nineteenth century was significant. To wit, by 1879 futures volume had exceeded production by a factor of five, and by 1896 this factor had reached eight.

Price of Storage

Nineteenth century observers of early U.S. futures markets either credited them for stabilizing food prices, or discredited them for wagering on, and intensifying, the economic hardships of Americans (Baer and Saxon 1949, 12-20, 56; Chandler 1977, 212; Ferris 1988, 88; Hoffman 1932, 5; Lurie 1979, 53, 115). To be sure, the performance of early futures markets remains relatively unexplored. The extant research on the subject has generally examined this performance in the context of two perspectives on the theory of efficiency: the price of storage and futures price efficiency more generally.

Holbrook Working pioneered research into the price of storage – the relationship, at a point in time, between prices (of storable agricultural commodities) applicable to different future dates (Working 1949, 1254).12 For example, what is the relationship between the current spot price of wheat and the current September 2004 futures price of wheat? Or, what is the relationship between the current September 2004 futures price of wheat and the current May 2005 futures price of wheat?

Working reasoned that these prices could not differ because of events that were expected to occur between these dates. For example, if the May 2004 wheat futures price is less than the September 2004 price, this cannot be due to, say, the expectation of a small harvest between May 2004 and September 2004. On the contrary, traders should factor such an expectation into both May and September prices. And, assuming that they do, then this difference can only reflect the cost of carrying – storing – these commodities over time,13 though this strict interpretation has since been modified somewhat (see Peck 1985, 44).

So, for example, the September 2004 price equals the May 2004 price plus the cost of storing wheat between May 2004 and September 2004. If the difference between these prices is greater or less than the cost of storage, and the market is efficient, arbitrage will bring the difference back to the cost of storage – e.g., if the difference in prices exceeds the cost of storage, then traders can profit if they buy the May 2004 contract, sell the September 2004 contract, take delivery in May and store the wheat until September. Working (1953) demonstrated empirically that the theory of the price of storage could explain quite satisfactorily these inter-temporal differences in wheat futures prices at the CBT as early as the late 1880s (Working 1953, 556).
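The arbitrage argument above can be made concrete with a small numerical sketch. The prices and storage cost below are invented for illustration, and `carry_arbitrage_profit` is a hypothetical helper, not a formula from the sources cited.

```python
# Sketch of the cost-of-carry arbitrage described in the text.
# All numbers are illustrative, not historical data.

def carry_arbitrage_profit(near_price, far_price, storage_cost):
    """Per-bushel profit from buying the nearby future, taking delivery,
    storing the grain, and delivering against the deferred future.
    Positive only when the price spread exceeds the cost of storage."""
    return (far_price - near_price) - storage_cost

# e.g., May wheat at $1.00/bu., September wheat at $1.12/bu.,
# and 8 cents/bu. to store the grain from May to September:
profit = carry_arbitrage_profit(1.00, 1.12, 0.08)
print(round(profit, 2))  # 0.04 per bushel
```

In an efficient market this profit is competed away: traders buy May and sell September until the spread shrinks back to the cost of storage.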

Futures Price Efficiency

Many contemporary economists tend to focus on futures price efficiency more generally (for example, Beck 1994; Kahl and Tomek 1986; Kofi 1973; McKenzie, et al. 2002; Tomek and Gray, 1970). That is to say, do futures prices consistently track (though not necessarily equal) traders’ rational expectations of future spot prices? Here, the research focuses on the relationship between, say, the cash price of wheat in September 2004 and the September 2004 futures price of wheat quoted two months earlier in July 2004.

Figure 1 illustrates the behavior of corn futures prices and their corresponding spot prices between 1877 and 1890. The data consist of the average month t futures price in the last full week of month t-2 and the average cash price in the first full week of month t.

The futures price and its corresponding spot price need not be equal; futures price efficiency does not mean that the futures market is clairvoyant. But, a difference between the two series should exist only because of an unpredictable forecast error and a risk premium – futures prices may be, say, consistently below the expected future spot price if long speculators require an inducement, or premium, to enter the futures market. Recent work finds strong evidence that these early corn (and corresponding wheat) futures prices are, in the long run, efficient estimates of their underlying spot prices (Santos 2002, 35). Although these results and Working’s empirical studies on the price of storage support, to some extent, the notion that early U.S. futures markets were efficient, this question remains largely unexplored by economic historians.
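A minimal sketch of this efficiency check follows, using invented numbers rather than the 1877-1890 series. The point is only that, in an efficient market, the gap between the futures quote and the realized spot price should average out to (at most) a constant risk premium.

```python
# Toy illustration (invented numbers, not historical data) of the
# efficiency check described above: futures prices quoted two months
# ahead should differ from realized spot prices only by an
# unpredictable forecast error and, possibly, a risk premium.
futures_2mo_ahead = [0.42, 0.45, 0.40, 0.48, 0.44]  # month-t futures, quoted in t-2
realized_spot     = [0.43, 0.44, 0.41, 0.47, 0.45]  # cash price in month t

errors = [s - f for f, s in zip(futures_2mo_ahead, realized_spot)]
mean_error = sum(errors) / len(errors)
print(round(mean_error, 3))  # near zero: consistent with (toy) efficiency
```

A persistent, predictable nonzero mean beyond a risk premium would be evidence against efficiency; individual errors, by contrast, can be large without contradicting it.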

The Struggle for Legitimacy

Nineteenth century America was both fascinated and appalled by futures trading. This is apparent from the litigation and many public debates surrounding its legitimacy (Baer and Saxon 1949, 55; Buck 1913, 131, 271; Hoffman 1932, 29, 351; Irwin 1954, 80; Lurie 1979, 53, 106). Many agricultural producers, the lay community and, at times, legislatures and the courts, believed trading in futures was tantamount to gambling. The difference between gambling and speculation, which required the purchase or sale of a futures contract but not the shipment or delivery of the commodity, was ostensibly lost on most Americans (Baer and Saxon 1949, 56; Ferris 1988, 88; Hoffman 1932, 5; Lurie 1979, 53, 115).

Many Americans believed that futures traders frequently manipulated prices. From the end of the Civil War until 1879 alone, corners – control of enough of the available supply of a commodity to manipulate its price – allegedly occurred with varying degrees of success in wheat (1868, 1871, 1878/9), corn (1868), oats (1868, 1871, 1874), rye (1868) and pork (1868) (Boyle 1920, 64-65). This manipulation continued throughout the century and culminated in the Three Big Corners – the Hutchinson (1888), the Leiter (1898), and the Patten (1909). The Patten corner was later debunked (Boyle 1920, 67-74), while the Leiter corner was the inspiration for Frank Norris’s classic The Pit: A Story of Chicago (Norris 1903; Rothstein 1982, 60).14 In any case, reports of market corners on America’s early futures exchanges were likely exaggerated (Boyle 1920, 62-74; Hieronymus 1977, 84), as were their long term effects on prices and hence consumer welfare (Rothstein 1982, 60).

By 1892 thousands of petitions to Congress called for the prohibition of “speculative gambling in grain” (Lurie, 1979, 109). And, attacks from state legislatures were seemingly unrelenting: in 1812 a New York act made short sales illegal (the act was repealed in 1858); in 1841 a Pennsylvania law made short sales, where the position was not covered in five days, a misdemeanor (the law was repealed in 1862); in 1882 an Ohio law and a similar one in Illinois tried unsuccessfully to restrict cash settlement of futures contracts; in 1867 the Illinois constitution forbade dealing in futures contracts (this was repealed by 1869); in 1879 California’s constitution invalidated futures contracts (this was effectively repealed in 1908); and, in 1882, 1883 and 1885, Mississippi, Arkansas, and Texas, respectively, passed laws that equated futures trading with gambling, thus making the former a misdemeanor (Peterson 1933, 68-69).

Two nineteenth century challenges to futures trading are particularly noteworthy. The first was the so-called Anti-Option movement. According to Lurie (1979), the movement was fueled by agrarians and their sympathizers in Congress who wanted to end what they perceived as wanton speculative abuses in futures trading (109). Although options were (and are) not futures contracts, and were in any case already outlawed on most exchanges by the 1890s, the legislation did not distinguish between the two instruments and effectively sought to outlaw both (Lurie 1979, 109).

In 1890 the Butterworth Anti-Option Bill was introduced in Congress but never came to a vote. However, in 1892 the Hatch (and Washburn) Anti-Option bills passed both houses of Congress, and failed only on technicalities during reconciliation between the two houses. Had either bill become law, it would have effectively ended options and futures trading in the United States (Lurie 1979, 110).

A second notable challenge was the bucket shop controversy, which challenged the legitimacy of the CBT in particular. A bucket shop was essentially an association of gamblers who met outside the CBT and wagered on the direction of futures prices. These associations had legitimate-sounding names such as the Christie Grain and Stock Company and the Public Grain Exchange. To most Americans, these “exchanges” were no less legitimate than the CBT. That some CBT members were guilty of “bucket shopping” only made matters worse!

The bucket shop controversy was protracted and colorful (see Lurie 1979, 138-167). Between 1884 and 1887 Illinois, Iowa, Missouri and Ohio passed anti-bucket shop laws (Lurie 1979, 95). The CBT believed these laws entitled it to restrict bucket shops’ access to CBT price quotes, without which the bucket shops could not exist. Bucket shops argued that they were competing exchanges, and hence immune to extant anti-bucket shop laws. As such, they sued the CBT for access to these price quotes.15

The two sides and the telegraph companies fought in the courts for decades over access to these price quotes; the CBT’s very survival hung in the balance. After roughly twenty years of litigation, the Supreme Court of the U.S. effectively ruled in favor of the Chicago Board of Trade and against bucket shops (Board of Trade of the City of Chicago v. Christie Grain & Stock Co., 198 U.S. 236, 25 Sup. Ct. (1905)). Bucket shops disappeared completely by 1915 (Hieronymus 1977, 90).


The anti-option movement, the bucket shop controversy and the American public’s discontent with speculation mask an ironic reality of futures trading: it escaped government regulation until after the First World War, though early exchanges did practice self-regulation or administrative law.16 The absence of any formal governmental oversight was due in large part to two factors. First, prior to 1895, the opposition tried unsuccessfully to outlaw rather than regulate futures trading. Second, strong agricultural commodity prices between 1895 and 1920 weakened the opposition, who had blamed futures markets for low agricultural commodity prices (Hieronymus 1977, 313).

Grain prices fell significantly by the end of the First World War, and opposition to futures trading grew once again (Hieronymus 1977, 313). In 1922 the U.S. Congress enacted the Grain Futures Act, which required exchanges to be licensed, limited market manipulation and publicized trading information (Leuthold 1989, 369).17 However, regulators could rarely enforce the act because it enabled them to discipline exchanges, rather than individual traders. To discipline an exchange was essentially to suspend it, a punishment unfit (too harsh) for most exchange-related infractions.

The Commodity Exchange Act of 1936 enabled the government to deal directly with traders rather than exchanges. It established the Commodity Exchange Authority (CEA), a bureau of the U.S. Department of Agriculture, to monitor and investigate trading activities and prosecute price manipulation as a criminal offense. The act also: limited speculators’ trading activities and the sizes of their positions; regulated futures commission merchants; banned options trading on domestic agricultural commodities; and restricted futures trading – designated which commodities were to be traded on which licensed exchanges (see Hieronymus 1977; Leuthold, et al. 1989).

Although Congress amended the Commodity Exchange Act in 1968 in order to increase the regulatory powers of the Commodity Exchange Authority, the latter was ill-equipped to handle the explosive growth in futures trading in the 1960s and 1970s. So, in 1974 Congress passed the Commodity Futures Trading Act, which created far-reaching federal oversight of U.S. futures trading and established the Commodity Futures Trading Commission (CFTC).

Like the futures legislation before it, the Commodity Futures Trading Act seeks “to ensure proper execution of customer orders and to prevent unlawful manipulation, price distortion, fraud, cheating, fictitious trades, and misuse of customer funds” (Leuthold, et al. 1989, 34). Unlike the CEA, the CFTC was given broad regulatory powers over all futures trading and related exchange activities throughout the U.S. The CFTC oversees and approves modifications to extant contracts and the creation and introduction of new contracts. The CFTC consists of five presidential appointees who are confirmed by the U.S. Senate.

The Futures Trading Act of 1982 amended the Commodity Futures Trading Act of 1974. The 1982 act legalized options trading on agricultural commodities and identified more clearly the jurisdictions of the CFTC and Securities and Exchange Commission (SEC). The regulatory overlap between the two organizations arose because of the explosive popularity during the 1970s of financial futures contracts. Today, the CFTC regulates all futures contracts and options on futures contracts traded on U.S. futures exchanges; the SEC regulates all financial instrument cash markets as well as all other options markets.

Finally, in 2000 Congress passed the Commodity Futures Modernization Act, which reauthorized the Commodity Futures Trading Commission for five years and repealed an 18-year old ban on trading single stock futures. The bill also sought to increase competition and “reduce systematic risk in markets for futures and over-the-counter derivatives” (H.R. 5660, 106th Congress 2nd Session).

Modern Futures Markets

The growth in futures trading has been explosive in recent years (Chart 3).

Futures trading extended beyond physical commodities in the 1970s and 1980s – currency futures in 1972; interest rate futures in 1975; and stock index futures in 1982 (Silber 1985, 83). The enormous growth of financial futures at this time was likely because of the breakdown of the Bretton Woods exchange rate regime, which essentially fixed the relative values of industrial economies’ exchange rates to the American dollar (see Bordo and Eichengreen 1993), and relatively high inflation from the late 1960s to the early 1980s. Flexible exchange rates and inflation introduced, respectively, exchange and interest rate risks, which hedgers sought to mitigate through the use of financial futures. Finally, although futures contracts on agricultural commodities remain popular, financial futures and options dominate trading today. Trading volume in metals, minerals and energy remains relatively small.

Trading volume in agricultural futures contracts first dropped below 50% of total U.S. futures volume in 1982. By 1985 this share had dropped to less than one-fourth of all trading. In the same year the volume of futures trading in the U.S. Treasury bond contract alone exceeded trading volume in all agricultural commodities combined (Leuthold et al. 1989, 2). Today exchanges in the U.S. actively trade contracts on several underlying assets (Table 1). These range from the traditional – e.g., agriculture and metals – to the truly innovative – e.g., the weather. The latter’s payoff varies with the number of degree-days by which the temperature in a particular region deviates from 65 degrees Fahrenheit.
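The degree-day convention can be illustrated with a short sketch. The 65°F baseline comes from the text; the daily temperatures and the function names are hypothetical, and actual contracts attach a dollar multiplier to the accumulated index.

```python
# Illustrative sketch of the 65-degree-Fahrenheit degree-day convention.
# Temperatures below are invented; real contracts pay a dollar
# multiplier times the degree-day index accumulated over a period.

def heating_degree_days(daily_avg_temps_f, base=65.0):
    """Sum of the degrees by which each day's average temperature
    falls below the baseline (cold days accumulate HDDs)."""
    return sum(max(0.0, base - t) for t in daily_avg_temps_f)

def cooling_degree_days(daily_avg_temps_f, base=65.0):
    """Sum of the degrees above the baseline (hot days accumulate CDDs)."""
    return sum(max(0.0, t - base) for t in daily_avg_temps_f)

temps = [50.0, 60.0, 70.0, 80.0]   # four hypothetical daily averages
print(heating_degree_days(temps))  # 15 + 5 = 20.0 HDDs
print(cooling_degree_days(temps))  # 5 + 15 = 20.0 CDDs
```

Days at exactly 65°F contribute nothing to either index, so only deviations from the baseline drive the contract's payoff.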

Table 1: Select Futures Contracts Traded as of 2002

Agriculture: Corn, Oats, Soybeans, Soybean meal, Soybean oil, Wheat, Barley, Flaxseed, Canola, Rye, Cattle, Hogs, Pork bellies, Cocoa, Coffee, Cotton, Milk, Orange juice

Currencies: British pound, Canadian dollar, Japanese yen, Euro, Swiss franc, Australian dollar, Mexican peso, Brazilian real

Equity Indexes: S&P 500 index, Dow Jones Industrials, S&P Midcap 400, Nasdaq 100, NYSE index, Russell 2000 index, Nikkei 225, FTSE index, CAC-40, DAX-30, All ordinary, Toronto 35, Dow Jones Euro STOXX 50

Interest Rates: Eurodollars, Euroyen, Euro-denominated bond, Euroswiss, Sterling, British gov. bond (gilt), German gov. bond, Italian gov. bond, Canadian gov. bond, Treasury bonds, Treasury notes, Treasury bills, LIBOR, EURIBOR, Municipal bond index, Federal funds rate, Bankers’ acceptance

Metals & Energy: Copper, Aluminum, Gold, Platinum, Palladium, Silver, Crude oil, Heating oil, Gas oil, Natural gas, Gasoline, Propane, CRB index, Electricity, Weather

Source: Bodie, Kane and Marcus (2005), p. 796.

Table 2 provides a list of today’s major futures exchanges.

Table 2: Select Futures Exchanges as of 2002

Chicago Board of Trade (CBT)
Chicago Mercantile Exchange (CME)
Coffee, Sugar & Cocoa Exchange, New York (CSCE)
COMEX, a division of the NYME (CMX)
European Exchange (EUREX)
Financial Exchange, a division of the NYCE (FINEX)
International Petroleum Exchange (IPE)
Kansas City Board of Trade (KC)
London International Financial Futures Exchange (LIFFE)
Marche a Terme International de France (MATIF)
Minneapolis Grain Exchange (MPLS)
Montreal Exchange (ME)
New York Cotton Exchange (NYCE)
New York Futures Exchange (NYFE)
New York Mercantile Exchange (NYME)
OneChicago (ONE)
Singapore Exchange Ltd. (SGX)
Sydney Futures Exchange (SFE)
Unit of Euronext.liffe (NQLX)

Source: Wall Street Journal, 5/12/2004, C16.

Modern trading differs from its nineteenth century counterpart in other respects as well. First, the popularity of open outcry trading is waning. For example, today the CBT executes roughly half of all trades electronically. And, electronic trading is the rule, rather than the exception, throughout Europe. Second, today roughly 99% of all futures contracts are settled prior to maturity. Third, in 1982 the Commodity Futures Trading Commission approved cash settlement – delivery that takes the form of a cash balance – on financial index and Eurodollar futures, whose underlying assets are not deliverable, as well as on several non-financial contracts including lean hog, feeder cattle and weather (Carlton 1984, 253). And finally, on Dec. 6, 2002, the Chicago Mercantile Exchange became the first publicly traded financial exchange in the U.S.
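Cash settlement itself reduces to simple arithmetic on the change in the contract's value. The sketch below is illustrative only; the index level and $250 multiplier are hypothetical, not the terms of any actual contract.

```python
# Minimal sketch of cash settlement: instead of delivering the
# underlying asset, the loser pays the winner the change in the
# contract's value. Index level and multiplier are hypothetical.

def cash_settlement(entry_price, final_price, multiplier, n_contracts=1):
    """Cash flow to the long position at settlement (negative = loss)."""
    return (final_price - entry_price) * multiplier * n_contracts

# long one hypothetical index future bought at 1000, settled at 1010,
# with a $250-per-point multiplier:
print(cash_settlement(1000.0, 1010.0, 250.0))  # 2500.0 paid to the long
```

The short position's cash flow is the mirror image, which is why cash settlement works even for underlyings (an index level, a temperature) that cannot physically change hands.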

References and Further Reading

Baer, Julius B. and Olin. G. Saxon. Commodity Exchanges and Futures Trading. New York: Harper & Brothers, 1949.

Bodie, Zvi, Alex Kane and Alan J. Marcus. Investments. New York: McGraw-Hill/Irwin, 2005.

Bordo, Michael D. and Barry Eichengreen, editors. A Retrospective on the Bretton Woods System: Lessons for International Monetary Reform. Chicago: University of Chicago Press, 1993.

Boyle, James. E. Speculation and the Chicago Board of Trade. New York: MacMillan Company, 1920.

Buck, Solon. J. The Granger Movement: A Study of Agricultural Organization and Its Political, Economic and Social Manifestations, 1870-1880. Cambridge: Harvard University Press, 1913.

Carlton, Dennis W. “Futures Markets: Their Purpose, Their History, Their Growth, Their Successes and Failures.” Journal of Futures Markets 4, no. 3 (1984): 237-271.

Chicago Board of Trade Bulletin. The Development of the Chicago Board of Trade. Chicago: Chicago Board of Trade, 1936.

Chandler, Alfred. D. The Visible Hand: The Managerial Revolution in American Business. Cambridge: Harvard University Press, 1977.

Clark, John. G. The Grain Trade in the Old Northwest. Urbana: University of Illinois Press, 1966.

Commodity Futures Trading Commission. Annual Report. Washington, D.C. 2003.

Dies, Edward. J. The Wheat Pit. Chicago: The Argyle Press, 1925.

Ferris, William. G. The Grain Traders: The Story of the Chicago Board of Trade. East Lansing, MI: Michigan State University Press, 1988.

Hieronymus, Thomas A. Economics of Futures Trading for Commercial and Personal Profit. New York: Commodity Research Bureau, Inc., 1977.

Hoffman, George W. Futures Trading upon Organized Commodity Markets in the United States. Philadelphia: University of Pennsylvania Press, 1932.

Irwin, Harold. S. Evolution of Futures Trading. Madison, WI: Mimir Publishers, Inc., 1954

Leuthold, Raymond M., Joan C. Junkus and Jean E. Cordier. The Theory and Practice of Futures Markets. Champaign, IL: Stipes Publishing L.L.C., 1989.

Lurie, Jonathan. The Chicago Board of Trade 1859-1905. Urbana: University of Illinois Press, 1979.

National Agricultural Statistics Service. “Historical Track Records.” Agricultural Statistics Board, U.S. Department of Agriculture, Washington, D.C. April 2004.

Norris, Frank. The Pit: A Story of Chicago. New York, NY: Penguin Group, 1903.

Odle, Thomas. “Entrepreneurial Cooperation on the Great Lakes: The Origin of the Methods of American Grain Marketing.” Business History Review 38, (1964): 439-55.

Peck, Anne E., editor. Futures Markets: Their Economic Role. Washington D.C.: American Enterprise Institute for Public Policy Research, 1985.

Peterson, Arthur G. “Futures Trading with Particular Reference to Agricultural Commodities.” Agricultural History 8, (1933): 68-80.

Pierce, Bessie L. A History of Chicago: Volume III, the Rise of a Modern City. New York: Alfred A. Knopf, 1957.

Rothstein, Morton. “The International Market for Agricultural Commodities, 1850-1873.” In Economic Change in the Civil War Era, edited by David. T. Gilchrist and W. David Lewis, 62-71. Greenville DE: Eleutherian Mills-Hagley Foundation, 1966.

Rothstein, Morton. “Frank Norris and Popular Perceptions of the Market.” Agricultural History 56, (1982): 50-66.

Santos, Joseph. “Did Futures Markets Stabilize U.S. Grain Prices?” Journal of Agricultural Economics 53, no. 1 (2002): 25-36.

Silber, William L. “The Economic Role of Financial Futures.” In Futures Markets: Their Economic Role, edited by Anne E. Peck, 83-114. Washington D.C.: American Enterprise Institute for Public Policy Research, 1985.

Stein, Jerome L. The Economics of Futures Markets. Oxford: Basil Blackwell Ltd, 1986.

Taylor, Charles. H. History of the Board of Trade of the City of Chicago. Chicago: R. O. Law, 1917.

Werner, Walter and Steven T. Smith. Wall Street. New York: Columbia University Press, 1991.

Williams, Jeffrey C. “The Origin of Futures Markets.” Agricultural History 56, (1982): 306-16.

Working, Holbrook. “The Theory of the Price of Storage.” American Economic Review 39, (1949): 1254-62.

Working, Holbrook. “Hedging Reconsidered.” Journal of Farm Economics 35, (1953): 544-61.

1 The clearinghouse is typically a corporation owned by a subset of exchange members. For details regarding the clearing arrangements of a specific exchange, go to and click on “Clearing Organizations.”

2 The vast majority of contracts are offset. Outright delivery occurs when the buyer receives from, or the seller “delivers” to the exchange a title of ownership, and not the actual commodity or financial security – the urban legend of the trader who neglected to settle his long position and consequently “woke up one morning to find several car loads of a commodity dumped on his front yard” is indeed apocryphal (Hieronymus 1977, 37)!

3 Nevertheless, forward contracts remain popular today (see Peck 1985, 9-12).

4 The importance of New Orleans as a point of departure for U.S. grain and provisions prior to the Civil War is unquestionable. According to Clark (1966), “New Orleans was the leading export center in the nation in terms of dollar volume of domestic exports, except for 1847 and a few years during the 1850s, when New York’s domestic exports exceeded those of the Crescent City” (36).

5 This area was responsible for roughly half of U.S. wheat production and a third of U.S. corn production just prior to 1860. Southern planters dominated corn output during the early to mid- 1800s.

6 Millers milled wheat into flour; pork producers fed corn to pigs, which producers slaughtered for provisions; distillers and brewers converted rye and barley into whiskey and malt liquors, respectively; and ranchers fed grains and grasses to cattle, which were then driven to eastern markets.

7 Significant advances in transportation made the grain trade’s eastward expansion possible, but the strong and growing demand for grain in the East made the trade profitable. The growth in domestic grain demand during the early to mid-nineteenth century reflected the strong growth in eastern urban populations. Between 1820 and 1860, the populations of Baltimore, Boston, New York and Philadelphia increased by over 500% (Clark 1966, 54). Moreover, as the 1840’s approached, foreign demand for U.S. grain grew. Between 1845 and 1847, U.S. exports of wheat and flour rose from 6.3 million bushels to 26.3 million bushels and corn exports grew from 840,000 bushels to 16.3 million bushels (Clark 1966, 55).

8 Wheat production was shifting to the trans-Mississippi West, which produced 65% of the nation’s wheat by 1899 and 90% by 1909, and railroads based in the Lake Michigan port cities intercepted the Mississippi River trade that would otherwise have headed to St. Louis (Clark 1966, 95). Lake Michigan port cities also benefited from a growing concentration of corn production in the West North Central region – Iowa, Kansas, Minnesota, Missouri, Nebraska, North Dakota and South Dakota, which by 1899 produced 40% percent of the country’s corn (Clark 1966, 4).

9 Corn had to be dried immediately after it was harvested and could only be shipped profitably by water to Chicago, but only after rivers and lakes had thawed; so, country merchants stored large quantities of corn. On the other hand, wheat was more valuable relative to its weight, and it could be shipped to Chicago by rail or road immediately after it was harvested; so, Chicago merchants stored large quantities of wheat.

10 This is consistent with Odle (1964), who adds that “the creators of the new system of marketing [forward contracts] were the grain merchants of the Great Lakes” (439). However, Williams (1982) presents evidence of such contracts between Buffalo and New York City as early as 1847 (309). To be sure, Williams proffers an intriguing case that forward and, in effect, future trading was active and quite sophisticated throughout New York by the late 1840s. Moreover, he argues that this trading grew not out of activity in Chicago, whose trading activities were quite primitive at this early date, but rather trading in London and ultimately Amsterdam. Indeed, “time bargains” were common in London and New York securities markets in the mid- and late 1700s, respectively. A time bargain was essentially a cash-settled financial forward contract that was unenforceable by law, and as such “each party was forced to rely on the integrity and credit of the other” (Werner and Smith 1991, 31). According to Werner and Smith, “time bargains prevailed on Wall Street until 1840, and were gradually replaced by margin trading by 1860” (68). They add that, “margin trading … had an advantage over time bargains, in which there was little protection against default beyond the word of another broker. Time bargains also technically violated the law as wagering contracts; margin trading did not” (135). Between 1818 and 1840 these contracts comprised anywhere from 0.7% (49-day average in 1830) to 34.6% (78-day average in 1819) of daily exchange volume on the New York Stock & Exchange Board (Werner and Smith 1991, 174).

11 Of course, forward markets could and indeed did exist in the absence of both grading standards and formal exchanges, though to what extent they existed is unclear (see Williams 1982).

12 In the parlance of modern financial futures, the term cost of carry is used instead of the term storage. For example, the cost of carrying a bond is comprised of the cost of acquiring and holding (or storing) it until delivery minus the return earned during the carry period.

13 More specifically, the price of storage is comprised of three components: (1) physical costs such as warehouse and insurance; (2) financial costs such as borrowing rates of interest; and (3) the convenience yield – the return that the merchant, who stores the commodity, derives from maintaining an inventory in the commodity. The marginal costs of (1) and (2) are increasing functions of the amount stored; the more the merchant stores, the greater the marginal costs of warehouse use, insurance and financing. Whereas the marginal benefit of (3) is a decreasing function of the amount stored; put differently, the smaller the merchant’s inventory, the more valuable each additional unit of inventory becomes. Working used this convenience yield to explain a negative price of storage – the nearby contract is priced higher than the faraway contract; an event that is likely to occur when supplies are exceptionally low. In this instance, there is little for inventory dealers to store. Hence, dealers face extremely low physical and financial storage costs, but extremely high convenience yields. The price of storage turns negative; essentially, inventory dealers are willing to pay to store the commodity.

14 Norris’ protagonist, Curtis Jadwin, is a wheat speculator emotionally consumed and ultimately destroyed, while the welfare of producers and consumers hang in the balance, when a nineteenth century CBT wheat futures corner backfires on him.

15 One particularly colorful incident in the controversy came when the Supreme Court of Illinois ruled that the CBT had to either make price quotes public or restrict access to everyone. When the Board opted for the latter, it found it needed to “prevent its members from running (often literally) between the [CBT and a bucket shop next door], but with minimal success. Board officials at first tried to lock the doors to the exchange…However, after one member literally battered down the door to the east side of the building, the directors abandoned this policy as impracticable if not destructive” (Lurie 1979, 140).

16 Administrative law is “a body of rules and doctrines which deals with the powers and actions of administrative agencies” that are organizations other than the judiciary or legislature. These organizations affect the rights of private parties “through either adjudication, rulemaking, investigating, prosecuting, negotiating, settling, or informally acting” (Lurie 1979, 9).

17 In 1921 Congress passed The Futures Trading Act, which was declared unconstitutional.

Citation: Santos, Joseph. “A History of Futures Trading in the United States”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL