
Economic History of Hong Kong

Catherine R. Schenk, University of Glasgow

Hong Kong’s economic and political history has been primarily determined by its geographical location. The territory of Hong Kong comprises two main islands (Hong Kong Island and Lantau Island) and a mainland hinterland. It thus forms a natural geographic port for Guangdong province in southern China. In a sense, there is considerable continuity in Hong Kong’s position in the international economy: its origins were as a commercial entrepot for China’s regional and global trade, and it still plays this role today. From a relatively unpopulated territory at the beginning of the nineteenth century, Hong Kong grew to become one of the most important international financial centers in the world. It also underwent a rapid and successful process of industrialization from the 1950s that captured the imagination of economists and historians in the 1980s and 1990s.

Hong Kong from 1842 to 1949

After being ceded by China to Britain under the Treaty of Nanking in 1842, the colony of Hong Kong quickly became a regional center for financial and commercial services, based particularly around the Hongkong and Shanghai Bank and merchant companies such as Jardine Matheson. In 1841 there were only 7,500 Chinese inhabitants of Hong Kong and a handful of foreigners, but by 1859 the Chinese community numbered over 85,000, supplemented by about 1,600 foreigners. The economy was closely linked to commercial activity, dominated by shipping, banking and merchant companies. Gradually the economy diversified into services and retail outlets to meet the needs of the local population, as well as shipbuilding and maintenance linked to the presence of British naval and merchant shipping. There was some industrial expansion in the nineteenth century, notably sugar refining, cement and ice factories in the foreign sector, alongside smaller-scale local workshop manufactures. The mainland territory of Hong Kong came under British rule through two further treaties in this period: Kowloon was ceded in 1860 and the New Territories were leased for 99 years in 1898.

Hong Kong was profoundly affected by the disastrous events in Mainland China in the inter-war period. After the overthrow of the dynastic system in 1911, the Kuomintang (KMT) took a decade to pull together a republican nation-state. The Great Depression and fluctuations in the international price of silver then disrupted China’s economic relations with the rest of the world in the 1930s. From 1937, China descended into the Sino-Japanese War. Two years after the end of World War II, the civil war between the KMT and the Chinese Communist Party pushed China into a downward economic spiral. During this period, Hong Kong suffered from the slowdown in world trade, and in China’s trade in particular. However, problems on the mainland also diverted business and entrepreneurs from Shanghai and other cities to the relative safety and stability of the British colonial port of Hong Kong.

Post-War Industrialization

After the establishment of the People’s Republic of China (PRC) in 1949, the mainland began a process of isolation from the international economy, partly for ideological reasons and partly because of Cold War embargoes on trade imposed first by the United States in 1949 and then by the United Nations in 1951. Nevertheless, Hong Kong remained vital to the international economic links that the PRC maintained in order to pursue industrialization and pay for grain imports. Even during the mainland’s drive for self-sufficiency in the 1960s, Hong Kong’s imports of food and water from the PRC were a vital source of foreign exchange for the mainland, ensuring Hong Kong’s continued usefulness. In turn, cheap food helped to restrain rises in the cost of living in Hong Kong, thus helping to keep wages low during the period of labor-intensive industrialization.

The industrialization of Hong Kong is usually dated from the embargoes of the 1950s. Certainly, Hong Kong’s prosperity could no longer depend on the China trade in this decade. However, as seen above, industry emerged in the nineteenth century and it began to expand in the interwar period. Nevertheless, industrialization accelerated after 1945 with the inflow of refugees, entrepreneurs and capital fleeing the civil war on the mainland. The most prominent example is immigrants from Shanghai who created the cotton spinning industry in the colony. Hong Kong’s industry was founded in the textile sector in the 1950s before gradually diversifying in the 1960s to clothing, electronics, plastics and other labor-intensive production mainly for export.

The economic development of Hong Kong is unusual in a variety of respects. First, industrialization was accompanied by increasing numbers of small and medium-sized enterprises (SMEs) rather than consolidation. In 1955, 91 percent of manufacturing establishments employed fewer than one hundred workers, a proportion that increased to 96.5 percent by 1975. Factories employing fewer than one hundred workers accounted for 42 percent of Hong Kong’s domestic exports to the U.K. in 1968, amounting to HK$1.2 billion. At the end of 2002, SMEs still accounted for 98 percent of enterprises and provided 60 percent of total private employment.

Second, until the late 1960s the government did not engage in active industrial planning, partly because it was preoccupied with social spending on housing the large flows of immigrants, and partly because of an ideological sympathy for free market forces. Hong Kong thus fits outside the usual models of Asian economic development based on state-led industrialization (Japan, South Korea, Singapore, Taiwan), domination by foreign firms (Singapore), or large firms with close relations to the state (Japan, South Korea). Low taxes, lax employment laws, absence of government debt, and free trade are all pillars of the Hong Kong experience of economic development.

In fact, of course, the reality was very different from the myth of complete laissez-faire. The government’s programs of public housing, land reclamation, and infrastructure investment were ambitious. New industrial towns were built to house immigrants, provide employment and aid industry. The government subsidized industry indirectly through this public housing, which restrained rises in the cost of living that would have threatened Hong Kong’s labor-cost advantage in manufacturing. The government also pursued an ambitious public education program, creating over 300,000 new primary school places between 1954 and 1961. By 1966, 99.8% of school-age children were attending primary school, although free universal primary school was not provided until 1971. Secondary school provision was expanded in the 1970s, and from 1978 the government offered compulsory free education for all children up to the age of 15. The hand of government was much lighter on international trade and finance. Exchange controls were limited to a few imposed by the U.K., and there were no controls on international flows of capital. Government expenditure even fell from 7.5% of GDP in the 1960s to 6.5% in the 1970s. In the same decades, British government spending as a percent of GDP rose from 17% to 20%.

From the mid-1950s, Hong Kong’s rapid success as a textile and garment exporter generated trade friction that resulted in voluntary export restraints in a series of agreements with the U.K. beginning in 1959. Despite these agreements, Hong Kong’s exporters continued to exploit their flexibility and adaptability to increase production and find new markets. Indeed, exports increased from 54% of GDP in the 1960s to 64% in the 1970s. Figure 1 shows the annual growth of real GDP per capita. In the period from 1962 until the onset of the oil crisis in 1973, the average growth rate was 6.5% per year; from 1976 to 1996, GDP grew at an average of 5.6% per year. There were negative shocks in 1967-68, a result of local disturbances at the onset of the Cultural Revolution in the PRC, and again in 1973-75 from the global oil crisis. In the early 1980s there was another negative shock related to politics, as the terms of Hong Kong’s return to PRC control in 1997 were formalized.

Figure 1: Annual percentage change of per capita GDP, 1962-2001

Reintegration with China, 1978-1997

The Open Door Policy of the PRC, announced by Deng Xiaoping at the end of 1978, marked a new era for Hong Kong’s economy. With the newly vigorous engagement of China in international trade and investment, Hong Kong’s integration with the mainland accelerated as it regained its traditional role as that country’s main provider of commercial and financial services. From 1978 to 1997, visible trade between Hong Kong and the PRC grew at an average rate of 28% per annum. At the same time, Hong Kong firms began to move their labor-intensive activities to the mainland to take advantage of cheaper labor. The integration of Hong Kong with the Pearl River delta in Guangdong is the most striking aspect of these trade and investment links. At the end of 1997, the cumulative value of Hong Kong’s direct investment in Guangdong was estimated at US$48 billion, accounting for almost 80% of total foreign direct investment there. Hong Kong companies and joint ventures in Guangdong province employed about five million people. Most of these businesses were labor-intensive assembly for export, but from 1997 onward there was increased investment in financial services, tourism and retail trade.

While manufacturing moved out of the colony during the 1980s and 1990s, the service sector surged. This transformation of the structure of Hong Kong’s economy from manufacturing to services was dramatic. Most remarkably, it was accomplished without faltering overall growth rates, and with an average unemployment rate of only 2.5% from 1982 to 1997. Figure 2 shows that the value of manufacturing peaked in 1992 before beginning an absolute decline. In contrast, the value of commercial and financial services soared. This is reflected in the contribution of services and manufacturing to GDP shown in Figure 3. Employment in the service sector rose from 52% to 80% of the labor force between 1981 and 2000, while manufacturing employment fell from 39% to 10% over the same period.

Figure 2: GDP by economic activity at current prices. Figure 3: Contribution to Hong Kong's GDP at factor prices

Asian Financial Crisis, 1997-2002

The terms for the return of Hong Kong to Chinese rule in July 1997 carefully protected the territory’s separate economic characteristics, which have been so beneficial to the Chinese economy. Under the Basic Law, a “one country, two systems” policy was formulated which left Hong Kong monetarily and economically separate from the mainland, with exchange and trade controls, as well as restrictions on the movement of people, remaining in place between the two. Hong Kong was hit hard by the Asian Financial Crisis that struck the region in mid-1997, just at the time of the handover of the colony back to Chinese administrative control. The crisis prompted a collapse in share prices and the property market that affected the ability of many borrowers to repay bank loans. Unlike most Asian countries, the Hong Kong Special Administrative Region and mainland China maintained their currencies’ exchange rates with the U.S. dollar rather than devaluing. Along with the Severe Acute Respiratory Syndrome (SARS) outbreak in 2003, the Asian Financial Crisis pushed Hong Kong into a new era of recession, with a rise in unemployment (6% on average from 1998 to 2003) and absolute declines in output and prices. The longer-term impact of the crisis has been to increase the intensity and importance of Hong Kong’s trade and investment links with the PRC. Since the PRC did not fare as badly in the regional crisis, the economic prospects for Hong Kong have become tied more closely to the increasingly prosperous mainland.

Suggestions for Further Reading

For a general history of Hong Kong from the nineteenth century, see S. Tsang, A Modern History of Hong Kong, London: IB Tauris, 2004. For accounts of Hong Kong’s economic history see, D.R. Meyer, Hong Kong as a Global Metropolis, Cambridge: Cambridge University Press, 2000; C.R. Schenk, Hong Kong as an International Financial Centre: Emergence and Development, 1945-65, London: Routledge, 2001; and Y-P Ho, Trade, Industrial Restructuring and Development in Hong Kong, London: Macmillan, 1992. Useful statistics and summaries of recent developments are available on the website of the Hong Kong Monetary Authority www.info.gov.hk/hkma.

Citation: Schenk, Catherine. “Economic History of Hong Kong”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/economic-history-of-hong-kong/

A History of Futures Trading in the United States

Joseph Santos, South Dakota State University

Many contemporary [nineteenth century] critics were suspicious of a form of business in which one man sold what he did not own to another who did not want it… Morton Rothstein (1966)

Anatomy of a Futures Market

The Futures Contract

A futures contract is a standardized agreement between a buyer and a seller to exchange an amount and grade of an item at a specific price and future date. The item or underlying asset may be an agricultural commodity, a metal, mineral or energy commodity, a financial instrument or a foreign currency. Because futures contracts are derived from these underlying assets, they belong to a family of financial instruments called derivatives.

Traders buy and sell futures contracts on an exchange – a marketplace that is operated by a voluntary association of members. The exchange provides buyers and sellers the infrastructure (trading pits or their electronic equivalent), legal framework (trading rules, arbitration mechanisms), contract specifications (grades, standards, time and method of delivery, terms of payment) and clearing mechanisms (see section titled The Clearinghouse) necessary to facilitate futures trading. Only exchange members are allowed to trade on the exchange. Nonmembers trade through commission merchants – exchange members who service nonmember trades and accounts for a fee.

The September 2004 light sweet crude oil contract is an example of a petroleum (mineral) future. It trades on the New York Mercantile Exchange (NYM). The contract is standardized – every one is an agreement to trade 1,000 barrels of grade light sweet crude in September, on a day of the seller’s choosing. As of May 25, 2004, the contract sold for $40,120 = $40.12 × 1,000.
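To make the contract arithmetic concrete, here is a minimal sketch in Python; the contract size and price are taken from the example above, and the function name is purely illustrative.

```python
# Notional value of one light sweet crude oil futures contract.
CONTRACT_SIZE_BARRELS = 1_000  # standardized contract size

def notional_value(price_per_barrel: float) -> float:
    """Dollar value of one contract at a given futures price."""
    return price_per_barrel * CONTRACT_SIZE_BARRELS

print(notional_value(40.12))  # 40120.0, i.e., $40,120 on May 25, 2004
```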

The Clearinghouse

The clearinghouse is the counterparty to every trade – its members buy every contract that traders sell on the exchange and sell every contract that traders buy on the exchange. Absent a clearinghouse, traders would interact directly, and this would introduce two problems. First, traders’ concerns about their counterparty’s credibility would impede trading. For example, Trader A might refuse to sell to Trader B, who is supposedly untrustworthy.

Second, traders would lose track of their counterparties. This would occur because traders typically settle their contractual obligations by offset – traders buy/sell the contracts that they sold/bought earlier. For example, Trader A sells a contract to Trader B, who sells a contract to Trader C to offset her position, and so on.

The clearinghouse eliminates both of these problems. First, it is a guarantor of all trades. If a trader defaults on a futures contract, the clearinghouse absorbs the loss. Second, clearinghouse members, and not outside traders, reconcile offsets at the end of trading each day. Margin accounts and a process called marking-to-market all but assure the clearinghouse’s solvency.

A margin account is a balance that a trader maintains with a commission merchant in order to offset the trader’s daily unrealized losses in the futures markets. Commission merchants also maintain margins with clearinghouse members, who maintain them with the clearinghouse. The margin account begins as an initial lump sum deposit, or original margin.

To understand the mechanics and merits of marking-to-market, consider that the values of the long and short positions of an existing futures contract change daily, even though futures trading is a zero-sum game – a buyer’s gain/loss equals a seller’s loss/gain. So, the clearinghouse breaks even on every trade, while its individual members’ positions change in value daily.

With this in mind, suppose Trader B buys a 5,000 bushel soybean contract at $9.70 per bushel from Trader S. Technically, Trader B buys the contract from Clearinghouse Member S and Trader S sells the contract to Clearinghouse Member B. Now, suppose that at the end of the day the contract is priced at $9.71. That evening the clearinghouse marks-to-market each member’s account. That is to say, the clearinghouse credits Member B’s margin account $50 and debits Member S’s margin account the same amount.

Member B is now in a position to draw on the clearinghouse $50, while Member S must pay the clearinghouse a $50 variation margin – incremental margin equal to the difference between a contract’s price and its current market value. In turn, clearinghouse members debit and credit accordingly the margin accounts of their commission merchants, who do the same to the margin accounts of their clients (i.e., traders). This iterative process all but assures the clearinghouse a sound financial footing. In the unlikely event that a trader defaults, the clearinghouse closes out the position and loses, at most, the trader’s one day loss.
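The daily flow of variation margin can be sketched in a few lines of Python. This is an illustration of the mechanics just described, not any exchange's actual algorithm; the prices are those of the soybean example above.

```python
# A sketch of daily marking-to-market for one 5,000-bushel soybean
# contract. A positive flow is a credit to the long's margin account
# (Member B); the short (Member S) is debited the same amount, so the
# clearinghouse nets to zero each day.

CONTRACT_BUSHELS = 5_000

def daily_variation_margin(prices):
    """Yield each day's credit (+) or debit (-) to the long's account."""
    for yesterday, today in zip(prices, prices[1:]):
        yield (today - yesterday) * CONTRACT_BUSHELS

prices = [9.70, 9.71]  # trade price, then the first settlement price
for flow in daily_variation_margin(prices):
    print(f"long credited ${flow:+,.2f}; short debited the same amount")
    # long credited $+50.00; short debited the same amount
```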

Active Futures Markets

Futures exchanges create futures contracts. And, because futures exchanges compete for traders, they must create contracts that appeal to the financial community. For example, the New York Mercantile Exchange created its light sweet crude oil contract in order to fill an unexploited niche in the financial marketplace.

Not all contracts are successful, and those that are may, at times, be inactive – the contract exists, but traders are not trading it. For example, of all contracts introduced by U.S. exchanges between 1960 and 1977, only 32% traded in 1980 (Stein 1986, 7). Similarly, entire exchanges can become active – e.g., the New York Futures Exchange opened in 1980 – or inactive – e.g., the New Orleans Exchange closed in 1983 (Leuthold 1989, 18). Government price supports or other such regulation can also render trading inactive (see Carlton 1984, 245).

Futures contracts succeed or fail for many reasons, but successful contracts do share certain basic characteristics (see for example, Baer and Saxon 1949, 110-25; Hieronymus 1977, 19-22). To wit, the underlying asset is homogeneous, reasonably durable, and standardized (easily describable); its supply and demand are ample; its price is unfettered; and all relevant information is available to all traders. For example, futures contracts have never derived from, say, artwork (heterogeneous and not standardized) or rent-controlled housing rights (supply, and hence price, is fettered by regulation).

Purposes and Functions

Futures markets have three fundamental purposes. The first is to enable hedgers to shift price risk – asset price volatility – to speculators in return for basis risk – changes in the difference between a futures price and the cash, or current spot price of the underlying asset. Because basis risk is typically less than asset price risk, the financial community views hedging as a form of risk management and speculating as a form of risk taking.

Generally speaking, to hedge is to take opposing positions in the futures and cash markets. Hedgers include (but are not restricted to) farmers, feedlot operators, grain elevator operators, merchants, millers, utilities, export and import firms, refiners, lenders, and hedge fund managers (see Peck 1985, 13-21). Meanwhile, to speculate is to take a position in the futures market with no counter-position in the cash market. Speculators may not be affiliated with the underlying cash markets.

To demonstrate how a hedge works, assume Hedger A buys, or longs, 5,000 bushels of corn, which is currently worth $2.40 per bushel, or $12,000 = $2.40 × 5,000; the date is May 1st and Hedger A wishes to preserve the value of his corn inventory until he sells it on June 1st. To do so, he takes a position in the futures market that is exactly opposite his position in the spot – current cash – market. For example, Hedger A sells, or shorts, a July futures contract for 5,000 bushels of corn at a price of $2.50 per bushel; put differently, Hedger A commits to sell in July 5,000 bushels of corn for $12,500 = $2.50 × 5,000. Recall that to sell (buy) a futures contract means to commit to sell (buy) an amount and grade of an item at a specific price and future date.

Absent basis risk, Hedger A’s spot and futures markets positions will preserve the value of the 5,000 bushels of corn that he owns, because a fall in the spot price of corn will be matched penny for penny by a fall in the futures price of corn. For example, suppose that by June 1st the spot price of corn has fallen five cents to $2.35 per bushel. Absent basis risk, the July futures price of corn has also fallen five cents to $2.45 per bushel.

So, on June 1st, Hedger A sells his 5,000 bushels of corn and loses $250 = ($2.40 - $2.35) × 5,000 in the spot market. At the same time, he buys a July futures contract for 5,000 bushels of corn and gains $250 = ($2.50 - $2.45) × 5,000 in the futures market. Notice, because Hedger A has both sold and bought a July futures contract for 5,000 bushels of corn, he has offset his commitment in the futures market.
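A minimal sketch of the arithmetic of this hedge, assuming zero basis risk as in the example above:

```python
# Hedger A: long 5,000 bushels of corn in the spot market, short one
# July futures contract. Absent basis risk, the spot loss is offset
# exactly by the futures gain.

BUSHELS = 5_000

spot_may, spot_june = 2.40, 2.35        # cash prices on May 1 and June 1
futures_may, futures_june = 2.50, 2.45  # July futures prices on those dates

spot_pnl = (spot_june - spot_may) * BUSHELS           # -250.0: inventory loss
futures_pnl = (futures_may - futures_june) * BUSHELS  # +250.0: gain on short
print(round(spot_pnl + futures_pnl, 2))               # 0.0: value preserved
```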

This example of a textbook hedge – one that eliminates price risk entirely – is instructive but it is also a bit misleading because: basis risk exists; hedgers may choose to hedge more or less than 100% of their cash positions; and hedgers may cross hedge – trade futures contracts whose underlying assets are not the same as the assets that the hedger owns. So, in reality hedgers cannot immunize entirely their cash positions from market fluctuations and in some cases they may not wish to do so. Again, the purpose of a hedge is not to avoid risk, but rather to manage or even profit from it.

The second fundamental purpose of a futures market is to facilitate firms’ acquisition of operating capital – short term loans that finance firms’ purchases of intermediate goods such as inventories of grain or petroleum. For example, lenders are relatively more likely to finance, at or near prime lending rates, hedged (versus non-hedged) inventories. The futures contract is an efficient form of collateral because it costs only a fraction of the inventory’s value – the margin on a short position in the futures market.

Speculators make the hedge possible because they absorb the inventory’s price risk; for example, the ultimate counterparty to the inventory dealer’s short position is a speculator. In the absence of futures markets, hedgers could only engage in forward contracts – unique agreements between private parties, who operate independently of an exchange or clearinghouse. Hence, the collateral value of a forward contract is less than that of a futures contract.

The third fundamental purpose of a futures market is to provide information to decision makers regarding the market’s expectations of future economic events. So long as a futures market is efficient – the market forms expectations by taking into proper consideration all available information – its forecasts of future economic events are relatively more reliable than an individual’s. Forecast errors are expensive, and well informed, highly competitive, profit-seeking traders have a relatively greater incentive to minimize them.

The Evolution of Futures Trading in the U.S.

Early Nineteenth Century Grain Production and Marketing

Into the early nineteenth century, the vast majority of American grains – wheat, corn, barley, rye and oats – were produced throughout the hinterlands of the United States by producers who acted primarily as subsistence farmers – agricultural producers whose primary objective was to feed themselves and their families. Although many of these farmers sold their surplus production on the market, most lacked access to large markets, as well as the incentive, affordable labor supply, and myriad technologies necessary to practice commercial agriculture – the large scale production and marketing of surplus agricultural commodities.

At this time, the principal trade route to the Atlantic seaboard was by river through New Orleans, though the South was also home to terminal markets – markets of final destination – for corn, provisions and flour. Smaller local grain markets existed along the tributaries of the Ohio and Mississippi Rivers and east-west overland routes. The latter were used primarily to transport manufactured (high valued and nonperishable) goods west.

Most farmers, and particularly those in the East North Central States – the region consisting today of Illinois, Indiana, Michigan, Ohio and Wisconsin – could not ship bulk grains to market profitably (Clark 1966, 4, 15). Instead, most converted grains into relatively high value flour, livestock, provisions and whiskies or malt liquors and shipped them south or, in the case of livestock, drove them east (14). Oats traded locally, if at all; their low value-to-weight ratios made their shipment, in bulk or otherwise, prohibitive (15n).

The Great Lakes provided a natural water route east to Buffalo but, in order to ship grain this way, producers in the interior East North Central region needed local ports to receive their production. Although the Erie Canal connected Lake Erie to the port of New York by 1825, water routes that connected local interior ports throughout northern Ohio to the Canal were not operational prior to the mid-1830s. Indeed, initially the Erie aided the development of the Old Northwest, not because it facilitated eastward grain shipments, but rather because it allowed immigrants and manufactured goods easy access to the West (Clark 1966, 53).

By 1835 the mouths of rivers and streams throughout the East North Central States had become the hubs, or port cities, from which farmers shipped grain east via the Erie. By this time, shippers could also opt to go south on the Ohio River and then upriver to Pittsburgh and ultimately to Philadelphia, or north on the Ohio Canal to Cleveland, Buffalo and ultimately, via the Welland Canal, to Lake Ontario and Montreal (19).

By 1836 shippers carried more grain north on the Great Lakes and through Buffalo than south on the Mississippi through New Orleans (Odle 1964, 441). However, as late as 1840 Ohio was the only state that participated significantly in the Great Lakes trade; Illinois, Indiana, Michigan, and the region of modern day Wisconsin either produced for their respective local markets or relied upon Southern demand. As of 1837 only 4,107 residents populated the “village” of Chicago, which became an official city in that year (Hieronymus 1977, 72).

Antebellum Grain Trade Finance in the Old Northwest

Before the mid-1860s, a network of banks, grain dealers, merchants, millers and commission houses – buying and selling agents located in the central commodity markets – employed an acceptance system to finance the U.S. grain trade (see Clark 1966, 119; Odle 1964, 442). For example, a miller who required grain would instruct an agent in, say, New York to establish, on the miller’s behalf, a line of credit with a merchant there. The merchant extended this line of credit in the form of sight drafts, which the merchant made payable, in sixty or ninety days, up to the amount of the line of credit.

With this credit line established, commission agents in the hinterland would arrange with grain dealers to acquire the necessary grain. The commission agent would obtain warehouse receipts – dealer certified negotiable titles to specific lots and quantities of grain in store – from dealers, attach these to drafts that he drew on the merchant’s line of credit, and discount these drafts at his local bank in return for banknotes; the local bank would forward these drafts on to the New York merchant’s bank for redemption. The commission agents would use these banknotes to advance – lend – grain dealers roughly three quarters of the current market value of the grain. The commission agent would pay dealers the remainder (minus finance and commission fees) when the grain was finally sold in the East. That is, commission agents and grain dealers entered into consignment contracts.
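The consignment arithmetic can be sketched as follows. The three-quarters advance is from the description above; the dollar amounts and the fee are hypothetical, since the source quotes none.

```python
# A sketch of a consignment contract between a commission agent and a
# grain dealer under the antebellum acceptance system.

ADVANCE_FRACTION = 0.75  # "roughly three quarters of the current market value"

def consignment_flows(market_value, sale_proceeds, fees):
    """Return (advance paid up front, remainder paid after the eastern sale)."""
    advance = ADVANCE_FRACTION * market_value
    remainder = sale_proceeds - advance - fees  # net of finance/commission fees
    return advance, remainder

# Hypothetical numbers: $10,000 of grain, sold east for $10,400, $150 in fees.
print(consignment_flows(10_000, 10_400, 150))  # (7500.0, 2750.0)
```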

Unfortunately, this approach linked banks, grain dealers, merchants, millers and commission agents such that the “entire procedure was attended by considerable risk and speculation, which was assumed by both the consignee and consignor” (Clark 1966, 120). The system was reasonably adequate if grain prices went unchanged between the time the miller procured the credit and the time the grain (bulk or converted) was sold in the East, but this was rarely the case. The fundamental problem with this system of finance was that commission agents were effectively asking banks to lend them money to purchase as yet unsold grain. To be sure, this inadequacy was most apparent during financial panics, when many banks refused to discount these drafts (Odle 1964, 447).

Grain Trade Finance in Transition: Forward Contracts and Commodity Exchanges

In 1848 the Illinois-Michigan Canal connected the Illinois River to Lake Michigan. The canal enabled farmers in the hinterlands along the Illinois River to ship their produce to merchants located along the river. These merchants accumulated, stored and then shipped grain to Chicago, Milwaukee and Racine. At first, shippers tagged deliverables according to producer and region, and purchasers inspected and chose these tagged bundles upon delivery. Commercial activity at the three grain ports grew throughout the 1850s, and Chicago emerged as a dominant grain (primarily corn) hub later that decade (Pierce 1957, 66).

Amidst this growth of Lake Michigan commerce, a confluence of innovations transformed the grain trade and its method of finance. By the 1840s, grain elevators and railroads facilitated high volume grain storage and shipment, respectively. Consequently, country merchants and their Chicago counterparts required greater financing in order to store and ship this higher volume of grain. And, high volume grain storage and shipment required that inventoried grains be fungible – of such a nature that one part or quantity could be replaced by another equal part or quantity in the satisfaction of an obligation. For example, because a bushel of grade No. 2 Spring Wheat was fungible, its price did not depend on whether it came from Farmer A, Farmer B, Grain Elevator C, or Train Car D.

Merchants could secure these larger loans more easily and at relatively lower rates if they obtained firm price and quantity commitments from their buyers. So, merchants began to engage in forward (not futures) contracts. According to Hieronymus (1977), the first such “time contract” on record was made on March 13, 1851. It specified that 3,000 bushels of corn were to be delivered to Chicago in June at a price of one cent below the March 13th cash market price (74).

Meanwhile, commodity exchanges serviced the trade’s need for fungible grain. In the 1840s and 1850s these exchanges emerged as associations for dealing with local issues such as harbor infrastructure and commercial arbitration (e.g., Detroit in 1847; Buffalo, Cleveland and Chicago in 1848; and Milwaukee in 1849) (see Odle 1964). By the 1850s they had established a system of staple grades, standards and inspections, all of which rendered inventoried grain fungible (Baer and Saxon 1949, 10; Chandler 1977, 211). As collection points for grain, cotton, and provisions, they weighed, inspected and classified commodity shipments that passed from west to east. They also facilitated organized trading in spot and forward markets (Chandler 1977, 211; Odle 1964, 439).

The largest and most prominent of these exchanges was the Board of Trade of the City of Chicago, a grain and provisions exchange established in 1848 by a State of Illinois corporate charter (Boyle 1920, 38; Lurie 1979, 27); the exchange is known today as the Chicago Board of Trade (CBT). For at least its first decade, the CBT functioned as a meeting place for merchants to resolve contract disputes and discuss commercial matters of mutual concern. Participation was part-time at best. The Board’s first directorate of 25 members included “a druggist, a bookseller, a tanner, a grocer, a coal dealer, a hardware merchant, and a banker” and attendance was often encouraged by free lunches (Lurie 1979, 25).

However, in 1859 the CBT became a private association chartered by the State of Illinois. As such, the exchange requested and received from the Illinois legislature sanction to establish rules “for the management of their business and the mode in which it shall be transacted, as they may think proper;” to arbitrate over and settle disputes with the authority as “if it were a judgment rendered in the Circuit Court;” and to inspect, weigh and certify grain and grain trades such that these certifications would be binding upon all CBT members (Lurie 1979, 27).

Nineteenth Century Futures Trading

By the 1850s traders sold and resold forward contracts prior to actual delivery (Hieronymus 1977, 75). A trader could not offset, in the futures market sense of the term, a forward contract. Nonetheless, the existence of a secondary market – a market for extant, as opposed to newly issued, securities – in forward contracts suggests, if nothing else, that speculators were active in these early time contracts.

On March 27, 1863, the Chicago Board of Trade adopted its first rules and procedures for trade in forwards on the exchange (Hieronymus 1977, 76). The rules addressed contract settlement, which was (and still is) the fundamental challenge associated with a forward contract – finding a trader who was willing to take a position in a forward contract was relatively easy to do; finding that trader at the time of contract settlement was not.

The CBT began to transform actively traded and reasonably homogeneous forward contracts into futures contracts in May, 1865. At this time, the CBT: restricted trade in time contracts to exchange members; standardized contract specifications; required traders to deposit margins; and specified formally contract settlement, including payments and deliveries, and grievance procedures (Hieronymus 1977, 76).

The inception of organized futures trading is difficult to date. This is due, in part, to semantic ambiguities – e.g., was a “to arrive” contract a forward contract or a futures contract or neither? However, most grain trade historians agree that storage (grain elevators), shipment (railroad), and communication (telegraph) technologies, a system of staple grades and standards, and the impetus to speculation provided by the Crimean and U.S. Civil Wars enabled futures trading to ripen by about 1874, at which time the CBT was the U.S.’s premier organized commodities (grain and provisions) futures exchange (Baer and Saxon 1949, 87; Chandler 1977, 212; CBT 1936, 18; Clark 1966, 120; Dies 1925, 15; Hoffman 1932, 29; Irwin 1954, 77, 82; Rothstein 1966, 67).

Nonetheless, futures exchanges in the mid-1870s lacked modern clearinghouses, with which most exchanges began to experiment only in the mid-1880s. For example, the CBT’s clearinghouse got its start in 1884, and a complete and mandatory clearing system was in place at the CBT by 1925 (Hoffman 1932, 199; Williams 1982, 306). The earliest formal clearing and offset procedures were established by the Minneapolis Grain Exchange in 1891 (Peck 1985, 6).

Even so, rudiments of a clearing system – one that freed traders from dealing directly with one another – were in place by the 1870s (Hoffman 1920, 189). That is to say, brokers assumed the counter-position to every trade, much as clearinghouse members would do decades later. Brokers settled offsets between one another, though in the absence of a formal clearing procedure these settlements were difficult to accomplish.

Direct settlements were simple enough. Here, two brokers would settle in cash their offsetting positions between one another only. Nonetheless, direct settlements were relatively uncommon because offsetting purchases and sales between brokers rarely balanced with respect to quantity. For example, B1 might buy a 5,000 bushel corn future from B2, who then might buy a 6,000 bushel corn future from B1; in this example, 1,000 bushels of corn remain unsettled between B1 and B2. Of course, the two brokers could offset the remaining 1,000 bushel contract if B2 sold a 1,000 bushel corn future to B1. But what if B2 had already sold a 1,000 bushel corn future to B3, who had sold a 1,000 bushel corn future to B1? In this case, each broker’s net futures market position is offset, but all three must meet in order to settle their respective positions. Brokers referred to such a meeting as a ring settlement. Finally, if, in this example, B1 and B3 did not have positions with each other, B2 could settle her position if she transferred her commitment (which she has with B1) to B3. Brokers referred to this method as a transfer settlement. In either ring or transfer settlements, brokers had to find other brokers who held and wished to settle open counter-positions. Often brokers used runners to search literally the offices and corridors for the requisite counter-parties (see Hoffman 1932, 185-200).
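The netting problem that rings and transfers solved can be illustrated with a short sketch. Using the trades from the example above, every broker's overall position is offset, yet no bilateral position cancels, which is why B1, B2 and B3 had to settle together:

```python
# Each trade is (buyer, seller, bushels). Net positions all come to zero,
# but a residual 1,000 bushels circulates among the three brokers: a ring.

from collections import defaultdict

trades = [
    ("B1", "B2", 5_000),  # B1 buys a 5,000 bushel corn future from B2
    ("B2", "B1", 6_000),  # B2 buys a 6,000 bushel corn future from B1
    ("B3", "B2", 1_000),  # B2 had already sold 1,000 bushels to B3...
    ("B1", "B3", 1_000),  # ...who had sold 1,000 bushels to B1
]

net = defaultdict(int)       # each broker's overall futures position
pairwise = defaultdict(int)  # residual position between ordered pairs

for buyer, seller, qty in trades:
    net[buyer] += qty
    net[seller] -= qty
    pairwise[(buyer, seller)] += qty
    pairwise[(seller, buyer)] -= qty

print(dict(net))  # {'B1': 0, 'B2': 0, 'B3': 0}: everyone is offset overall
print({pair: q for pair, q in pairwise.items() if q > 0})
# {('B2', 'B1'): 1000, ('B3', 'B2'): 1000, ('B1', 'B3'): 1000}: the ring
```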

Finally, the transformation from forward to futures trading that took place in Chicago grain markets occurred almost simultaneously in New York cotton markets. Forward contracts for cotton traded in New York (and Liverpool, England) by the 1850s. And, like Chicago, organized trading in cotton futures began on the New York Cotton Exchange in about 1870; rules and procedures formalized the practice in 1872. Futures trading on the New Orleans Cotton Exchange began around 1882 (Hieronymus 1977, 77).

Other successful nineteenth century futures exchanges include the New York Produce Exchange, the Milwaukee Chamber of Commerce, the Merchant’s Exchange of St. Louis, the Chicago Open Board of Trade, the Duluth Board of Trade, and the Kansas City Board of Trade (Hoffman 1920, 33; see Peck 1985, 9).

Early Futures Market Performance

Volume

Data on grain futures volume prior to the 1880s are not available (Hoffman 1932, 30), though in the 1870s “[CBT] officials openly admitted that there was no actual delivery of grain in more than ninety percent of contracts” (Lurie 1979, 59). Indeed, Chart 1 demonstrates that trading was relatively voluminous in the nineteenth century.

An annual average of 23,600 million bushels of grain futures traded between 1884 and 1888, or eight times the annual average amount of crops produced during that period. By comparison, an annual average of 25,803 million bushels of grain futures traded between 1966 and 1970, or four times the annual average amount of crops produced during that period. In 2002, futures volume outnumbered crop production by a factor of eleven.

The comparable data for cotton futures are presented in Chart 2. Again here, trading in the nineteenth century was significant. To wit, by 1879 futures volume had outnumbered production by a factor of five, and by 1896 this factor had reached eight.

Price of Storage

Nineteenth century observers of early U.S. futures markets either credited them for stabilizing food prices, or discredited them for wagering on, and intensifying, the economic hardships of Americans (Baer and Saxon 1949, 12-20, 56; Chandler 1977, 212; Ferris 1988, 88; Hoffman 1932, 5; Lurie 1979, 53, 115). To be sure, the performance of early futures markets remains relatively unexplored. The extant research on the subject has generally examined this performance in the context of two perspectives on the theory of efficiency: the price of storage and futures price efficiency more generally.

Holbrook Working pioneered research into the price of storage – the relationship, at a point in time, between prices (of storable agricultural commodities) applicable to different future dates (Working 1949, 1254). For example, what is the relationship between the current spot price of wheat and the current September 2004 futures price of wheat? Or, what is the relationship between the current September 2004 futures price of wheat and the current May 2005 futures price of wheat?

Working reasoned that these prices could not differ because of events that were expected to occur between these dates. For example, if the May 2004 wheat futures price is less than the September 2004 price, this cannot be due to, say, the expectation of a small harvest between May 2004 and September 2004. On the contrary, traders should factor such an expectation into both May and September prices. And, assuming that they do, then this difference can only reflect the cost of carrying – storing – these commodities over time, though this strict interpretation has since been modified somewhat (see Peck 1985, 44).

So, for example, the September 2004 price equals the May 2004 price plus the cost of storing wheat between May 2004 and September 2004. If the difference between these prices is greater or less than the cost of storage, and the market is efficient, arbitrage will bring the difference back to the cost of storage – e.g., if the difference in prices exceeds the cost of storage, then traders can profit if they buy the May 2004 contract, sell the September 2004 contract, take delivery in May and store the wheat until September. Working (1953) demonstrated empirically that the theory of the price of storage could explain quite satisfactorily these inter-temporal differences in wheat futures prices at the CBT as early as the late 1880s (Working 1953, 556).
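A minimal sketch of this arbitrage condition, with hypothetical prices and a hypothetical cost of carry (the source quotes no figures):

```python
# Per-bushel profit from buying the near contract, taking delivery,
# storing, and delivering against the deferred contract. A positive
# value means the spread is too wide relative to the cost of storage,
# and arbitrage should push it back toward that cost.

def carry_arbitrage_profit(near_price, deferred_price, storage_cost):
    return (deferred_price - near_price) - storage_cost

# A $0.20 May-September spread against a $0.15 cost of carry leaves
# roughly $0.05 per bushel on the table.
print(carry_arbitrage_profit(3.40, 3.60, 0.15))  # ~0.05
```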

Futures Price Efficiency

Many contemporary economists tend to focus on futures price efficiency more generally (for example, Beck 1994; Kahl and Tomek 1986; Kofi 1973; McKenzie, et al. 2002; Tomek and Gray, 1970). That is to say, do futures prices shadow consistently (but not necessarily equal) traders’ rational expectations of future spot prices? Here, the research focuses on the relationship between, say, the cash price of wheat in September 2004 and the September 2004 futures price of wheat quoted two months earlier in July 2004.

Figure 1 illustrates the behavior of corn futures prices and their corresponding spot prices between 1877 and 1890. The data consist of the average month t futures price in the last full week of month t-2 and the average cash price in the first full week of month t.

The futures price and its corresponding spot price need not be equal; futures price efficiency does not mean that the futures market is clairvoyant. But, a difference between the two series should exist only because of an unpredictable forecast error and a risk premium – futures prices may be, say, consistently below the expected future spot price if long speculators require an inducement, or premium, to enter the futures market. Recent work finds strong evidence that these early corn (and corresponding wheat) futures prices are, in the long run, efficient estimates of their underlying spot prices (Santos 2002, 35). Although these results and Working’s empirical studies on the price of storage support, to some extent, the notion that early U.S. futures markets were efficient, this question remains largely unexplored by economic historians.
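In essence, such a test regresses realized spot prices on the futures prices quoted earlier; unbiasedness implies an intercept near zero and a slope near one. A minimal sketch on simulated data (the numbers are invented for illustration and are not drawn from Santos 2002):

```python
import numpy as np

rng = np.random.default_rng(0)
futures = rng.uniform(2.0, 4.0, size=200)       # F_{t-2}: lagged futures price
spot = futures + rng.normal(0, 0.05, size=200)  # S_t: spot = futures + error

# OLS of the spot price on a constant and the lagged futures price.
X = np.column_stack([np.ones_like(futures), futures])
(intercept, slope), *_ = np.linalg.lstsq(X, spot, rcond=None)
print(intercept, slope)  # close to 0 and 1 on these simulated data
```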

The Struggle for Legitimacy

Nineteenth century America was both fascinated and appalled by futures trading. This is apparent from the litigation and many public debates surrounding its legitimacy (Baer and Saxon 1949, 55; Buck 1913, 131, 271; Hoffman 1932, 29, 351; Irwin 1954, 80; Lurie 1979, 53, 106). Many agricultural producers, the lay community and, at times, legislatures and the courts, believed trading in futures was tantamount to gambling. The difference between the latter and speculating, which required the purchase or sale of a futures contract but not the shipment or delivery of the commodity, was ostensibly lost on most Americans (Baer and Saxon 1949, 56; Ferris 1988, 88; Hoffman 1932, 5; Lurie 1979, 53, 115).

Many Americans believed that futures traders frequently manipulated prices. From the end of the Civil War until 1879 alone, corners – control of enough of the available supply of a commodity to manipulate its price – allegedly occurred with varying degrees of success in wheat (1868, 1871, 1878/9), corn (1868), oats (1868, 1871, 1874), rye (1868) and pork (1868) (Boyle 1920, 64-65). This manipulation continued throughout the century and culminated in the Three Big Corners – the Hutchinson (1888), the Leiter (1898), and the Patten (1909). The Patten corner was later debunked (Boyle 1920, 67-74), while the Leiter corner was the inspiration for Frank Norris’s classic The Pit: A Story of Chicago (Norris 1903; Rothstein 1982, 60). In any case, reports of market corners on America’s early futures exchanges were likely exaggerated (Boyle 1920, 62-74; Hieronymus 1977, 84), as were their long term effects on prices and hence consumer welfare (Rothstein 1982, 60).

By 1892 thousands of petitions to Congress called for the prohibition of “speculative gambling in grain” (Lurie, 1979, 109). And, attacks from state legislatures were seemingly unrelenting: in 1812 a New York act made short sales illegal (the act was repealed in 1858); in 1841 a Pennsylvania law made short sales, where the position was not covered in five days, a misdemeanor (the law was repealed in 1862); in 1882 an Ohio law and a similar one in Illinois tried unsuccessfully to restrict cash settlement of futures contracts; in 1867 the Illinois constitution forbade dealing in futures contracts (this was repealed by 1869); in 1879 California’s constitution invalidated futures contracts (this was effectively repealed in 1908); and, in 1882, 1883 and 1885, Mississippi, Arkansas, and Texas, respectively, passed laws that equated futures trading with gambling, thus making the former a misdemeanor (Peterson 1933, 68-69).

Two nineteenth century challenges to futures trading are particularly noteworthy. The first was the so-called Anti-Option movement. According to Lurie (1979), the movement was fueled by agrarians and their sympathizers in Congress who wanted to end what they perceived as wanton speculative abuses in futures trading (109). Although options were (and are) not futures contracts, and were already outlawed on most exchanges by the 1890s, the legislation did not distinguish between the two instruments and effectively sought to outlaw both (Lurie 1979, 109).

In 1890 the Butterworth Anti-Option Bill was introduced in Congress but never came to a vote. However, in 1892 the Hatch (and Washburn) Anti-Option bills passed both houses of Congress, and failed only on technicalities during reconciliation between the two houses. Had either bill become law, it would have effectively ended options and futures trading in the United States (Lurie 1979, 110).

A second notable challenge was the bucket shop controversy, which challenged the legitimacy of the CBT in particular. A bucket shop was essentially an association of gamblers who met outside the CBT and wagered on the direction of futures prices. These associations had legitimate-sounding names such as the Christie Grain and Stock Company and the Public Grain Exchange. To most Americans, these “exchanges” were no less legitimate than the CBT. That some CBT members were guilty of “bucket shopping” only made matters worse!

The bucket shop controversy was protracted and colorful (see Lurie 1979, 138-167). Between 1884 and 1887 Illinois, Iowa, Missouri and Ohio passed anti-bucket shop laws (Lurie 1979, 95). The CBT believed these laws entitled it to restrict bucket shops’ access to CBT price quotes, without which the bucket shops could not exist. Bucket shops argued that they were competing exchanges, and hence immune to extant anti-bucket shop laws. As such, they sued the CBT for access to these price quotes.

The two sides and the telegraph companies fought in the courts for decades over access to these price quotes; the CBT’s very survival hung in the balance. After roughly twenty years of litigation, the Supreme Court of the U.S. effectively ruled in favor of the Chicago Board of Trade and against bucket shops (Board of Trade of the City of Chicago v. Christie Grain & Stock Co., 198 U.S. 236, 25 Sup. Ct. (1905)). Bucket shops disappeared completely by 1915 (Hieronymus 1977, 90).

Regulation

The anti-option movement, the bucket shop controversy and the American public’s discontent with speculation mask an ironic reality of futures trading: it escaped government regulation until after the First World War, though early exchanges did practice self-regulation or administrative law. The absence of any formal governmental oversight was due in large part to two factors. First, prior to 1895, the opposition tried unsuccessfully to outlaw rather than regulate futures trading. Second, strong agricultural commodity prices between 1895 and 1920 weakened the opposition, which blamed futures markets for low agricultural commodity prices (Hieronymus 1977, 313).

Grain prices fell significantly by the end of the First World War, and opposition to futures trading grew once again (Hieronymus 1977, 313). In 1922 the U.S. Congress enacted the Grain Futures Act, which required exchanges to be licensed, limited market manipulation and publicized trading information (Leuthold 1989, 369). However, regulators could rarely enforce the act because it enabled them to discipline exchanges rather than individual traders. To discipline an exchange was essentially to suspend it, a punishment too harsh for most exchange-related infractions.

The Commodity Exchange Act of 1936 enabled the government to deal directly with traders rather than exchanges. It established the Commodity Exchange Authority (CEA), a bureau of the U.S. Department of Agriculture, to monitor and investigate trading activities and prosecute price manipulation as a criminal offense. The act also: limited speculators’ trading activities and the sizes of their positions; regulated futures commission merchants; banned options trading on domestic agricultural commodities; and restricted futures trading – designated which commodities were to be traded on which licensed exchanges (see Hieronymus 1977; Leuthold, et al. 1989).

Although Congress amended the Commodity Exchange Act in 1968 in order to increase the regulatory powers of the Commodity Exchange Authority, the latter was ill-equipped to handle the explosive growth in futures trading in the 1960s and 1970s. So, in 1974 Congress passed the Commodity Futures Trading Act, which created far-reaching federal oversight of U.S. futures trading and established the Commodity Futures Trading Commission (CFTC).

Like the futures legislation before it, the Commodity Futures Trading Act seeks “to ensure proper execution of customer orders and to prevent unlawful manipulation, price distortion, fraud, cheating, fictitious trades, and misuse of customer funds” (Leuthold, et al. 1989, 34). Unlike the CEA, the CFTC was given broad regulatory powers over all futures trading and related exchange activities throughout the U.S. The CFTC oversees and approves modifications to extant contracts and the creation and introduction of new contracts. The CFTC consists of five presidential appointees who are confirmed by the U.S. Senate.

The Futures Trading Act of 1982 amended the Commodity Futures Trading Act of 1974. The 1982 act legalized options trading on agricultural commodities and identified more clearly the jurisdictions of the CFTC and Securities and Exchange Commission (SEC). The regulatory overlap between the two organizations arose because of the explosive popularity during the 1970s of financial futures contracts. Today, the CFTC regulates all futures contracts and options on futures contracts traded on U.S. futures exchanges; the SEC regulates all financial instrument cash markets as well as all other options markets.

Finally, in 2000 Congress passed the Commodity Futures Modernization Act, which reauthorized the Commodity Futures Trading Commission for five years and repealed an eighteen-year-old ban on trading single stock futures. The bill also sought to increase competition and “reduce systematic risk in markets for futures and over-the-counter derivatives” (H.R. 5660, 106th Congress 2nd Session).

Modern Futures Markets

The growth in futures trading has been explosive in recent years (Chart 3).

Futures trading extended beyond physical commodities in the 1970s and 1980s – currency futures in 1972; interest rate futures in 1975; and stock index futures in 1982 (Silber 1985, 83). The enormous growth of financial futures at this time was likely because of the breakdown of the Bretton Woods exchange rate regime, which essentially fixed the relative values of industrial economies’ exchange rates to the American dollar (see Bordo and Eichengreen 1993), and relatively high inflation from the late 1960s to the early 1980s. Flexible exchange rates and inflation introduced, respectively, exchange and interest rate risks, which hedgers sought to mitigate through the use of financial futures. Finally, although futures contracts on agricultural commodities remain popular, financial futures and options dominate trading today. Trading volume in metals, minerals and energy remains relatively small.

Trading volume in agricultural futures contracts first dropped below 50% of total volume in 1982. By 1985 this volume had dropped to less than one fourth of all trading. In the same year the volume of futures trading in the U.S. Treasury bond contract alone exceeded trading volume in all agricultural commodities combined (Leuthold et al. 1989, 2). Today exchanges in the U.S. actively trade contracts on several underlying assets (Table 1). These range from the traditional – e.g., agriculture and metals – to the truly innovative – e.g., the weather. The latter’s payoff varies with the number of degree-days by which the temperature in a particular region deviates from 65 degrees Fahrenheit.
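As an illustration of the degree-day payoff just described, here is a minimal sketch; the dollars-per-degree-day multiplier is hypothetical, not a quoted contract specification.

```python
# Heating degree-days (HDDs) accumulate when the daily average
# temperature falls below the 65°F base; the contract's payoff is a
# fixed dollar amount per accumulated degree-day.

BASE_TEMP_F = 65.0
DOLLARS_PER_DEGREE_DAY = 20.0  # illustrative multiplier only

def heating_degree_days(daily_avg_temps):
    return sum(max(BASE_TEMP_F - t, 0.0) for t in daily_avg_temps)

temps = [50.0, 60.0, 70.0, 58.0]      # four days of average temperatures
hdd = heating_degree_days(temps)      # 15 + 5 + 0 + 7 = 27 degree-days
print(hdd * DOLLARS_PER_DEGREE_DAY)   # 540.0
```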

Table 1: Select Futures Contracts Traded as of 2002

Agriculture: Corn, Oats, Soybeans, Soybean meal, Soybean oil, Wheat, Barley, Flaxseed, Canola, Rye, Cattle, Hogs, Pork bellies, Cocoa, Coffee, Cotton, Milk, Orange juice, Sugar, Lumber, Rice

Currencies: British pound, Canadian dollar, Japanese yen, Euro, Swiss franc, Australian dollar, Mexican peso, Brazilian real

Equity Indexes: S&P 500 index, Dow Jones Industrials, S&P Midcap 400, Nasdaq 100, NYSE index, Russell 2000 index, Nikkei 225, FTSE index, CAC-40, DAX-30, All ordinary, Toronto 35, Dow Jones Euro STOXX 50

Interest Rates: Eurodollars, Euroyen, Euro-denominated bond, Euroswiss, Sterling, British gov. bond (gilt), German gov. bond, Italian gov. bond, Canadian gov. bond, Treasury bonds, Treasury notes, Treasury bills, LIBOR, EURIBOR, Municipal bond index, Federal funds rate, Bankers’ acceptance

Metals & Energy: Copper, Aluminum, Gold, Platinum, Palladium, Silver, Crude oil, Heating oil, Gas oil, Natural gas, Gasoline, Propane, CRB index, Electricity, Weather

Source: Bodie, Kane and Marcus (2005), p. 796.

Table 2 provides a list of today’s major futures exchanges.

Table 2: Select Futures Exchanges as of 2002

Chicago Board of Trade (CBT)
Chicago Mercantile Exchange (CME)
Coffee, Sugar & Cocoa Exchange, New York (CSCE)
COMEX, a division of the NYME (CMX)
European Exchange (EUREX)
Financial Exchange, a division of the NYCE (FINEX)
International Petroleum Exchange (IPE)
Kansas City Board of Trade (KC)
London International Financial Futures Exchange (LIFFE)
Marche a Terme International de France (MATIF)
Montreal Exchange (ME)
Minneapolis Grain Exchange (MPLS)
Unit of Euronext.liffe (NQLX)
New York Cotton Exchange (NYCE)
New York Futures Exchange (NYFE)
New York Mercantile Exchange (NYME)
OneChicago (ONE)
Sydney Futures Exchange (SFE)
Singapore Exchange Ltd. (SGX)

Source: Wall Street Journal, 5/12/2004, C16.

Modern trading differs from its nineteenth century counterpart in other respects as well. First, the popularity of open outcry trading is waning. For example, today the CBT executes roughly half of all trades electronically, and electronic trading is the rule rather than the exception throughout Europe. Second, today roughly 99% of all futures contracts are settled prior to maturity. Third, in 1982 the Commodity Futures Trading Commission approved cash settlement – delivery that takes the form of a cash balance – on financial index and Eurodollar futures, whose underlying assets are not deliverable, as well as on several non-financial contracts including lean hog, feeder cattle and weather (Carlton 1984, 253). And finally, on December 6, 2002, the Chicago Mercantile Exchange became the first publicly traded financial exchange in the U.S.

References and Further Reading

Baer, Julius B. and Olin G. Saxon. Commodity Exchanges and Futures Trading. New York: Harper & Brothers, 1949.

Bodie, Zvi, Alex Kane and Alan J. Marcus. Investments. New York: McGraw-Hill/Irwin, 2005.

Bordo, Michael D. and Barry Eichengreen, editors. A Retrospective on the Bretton Woods System: Lessons for International Monetary Reform. Chicago: University of Chicago Press, 1993.

Boyle, James E. Speculation and the Chicago Board of Trade. New York: MacMillan Company, 1920.

Buck, Solon J. The Granger Movement: A Study of Agricultural Organization and Its Political, Economic and Social Manifestations, 1870-1880. Cambridge: Harvard University Press, 1913.

Carlton, Dennis W. “Futures Markets: Their Purpose, Their History, Their Growth, Their Successes and Failures.” Journal of Futures Markets 4, no. 3 (1984): 237-271.

Chicago Board of Trade Bulletin. The Development of the Chicago Board of Trade. Chicago: Chicago Board of Trade, 1936.

Chandler, Alfred D. The Visible Hand: The Managerial Revolution in American Business. Cambridge: Harvard University Press, 1977.

Clark, John G. The Grain Trade in the Old Northwest. Urbana: University of Illinois Press, 1966.

Commodity Futures Trading Commission. Annual Report. Washington, D.C. 2003.

Dies, Edward J. The Wheat Pit. Chicago: The Argyle Press, 1925.

Ferris, William G. The Grain Traders: The Story of the Chicago Board of Trade. East Lansing, MI: Michigan State University Press, 1988.

Hieronymus, Thomas A. Economics of Futures Trading for Commercial and Personal Profit. New York: Commodity Research Bureau, Inc., 1977.

Hoffman, George W. Futures Trading upon Organized Commodity Markets in the United States. Philadelphia: University of Pennsylvania Press, 1932.

Irwin, Harold S. Evolution of Futures Trading. Madison, WI: Mimir Publishers, Inc., 1954.

Leuthold, Raymond M., Joan C. Junkus and Jean E. Cordier. The Theory and Practice of Futures Markets. Champaign, IL: Stipes Publishing L.L.C., 1989.

Lurie, Jonathan. The Chicago Board of Trade 1859-1905. Urbana: University of Illinois Press, 1979.

National Agricultural Statistics Service. “Historical Track Records.” Agricultural Statistics Board, U.S. Department of Agriculture, Washington, D.C. April 2004.

Norris, Frank. The Pit: A Story of Chicago. New York, NY: Penguin Group, 1903.

Odle, Thomas. “Entrepreneurial Cooperation on the Great Lakes: The Origin of the Methods of American Grain Marketing.” Business History Review 38, (1964): 439-55.

Peck, Anne E., editor. Futures Markets: Their Economic Role. Washington D.C.: American Enterprise Institute for Public Policy Research, 1985.

Peterson, Arthur G. “Futures Trading with Particular Reference to Agricultural Commodities.” Agricultural History 8, (1933): 68-80.

Pierce, Bessie L. A History of Chicago: Volume III, the Rise of a Modern City. New York: Alfred A. Knopf, 1957.

Rothstein, Morton. “The International Market for Agricultural Commodities, 1850-1873.” In Economic Change in the Civil War Era, edited by David T. Gilchrist and W. David Lewis, 62-71. Greenville DE: Eleutherian Mills-Hagley Foundation, 1966.

Rothstein, Morton. “Frank Norris and Popular Perceptions of the Market.” Agricultural History 56, (1982): 50-66.

Santos, Joseph. “Did Futures Markets Stabilize U.S. Grain Prices?” Journal of Agricultural Economics 53, no. 1 (2002): 25-36.

Silber, William L. “The Economic Role of Financial Futures.” In Futures Markets: Their Economic Role, edited by Anne E. Peck, 83-114. Washington D.C.: American Enterprise Institute for Public Policy Research, 1985.

Stein, Jerome L. The Economics of Futures Markets. Oxford: Basil Blackwell Ltd, 1986.

Taylor, Charles. H. History of the Board of Trade of the City of Chicago. Chicago: R. O. Law, 1917.

Werner, Walter and Steven T. Smith. Wall Street. New York: Columbia University Press, 1991.

Williams, Jeffrey C. “The Origin of Futures Markets.” Agricultural History 56, (1982): 306-16.

Working, Holbrook. “The Theory of the Price of Storage.” American Economic Review 39, (1949): 1254-62.

Working, Holbrook. “Hedging Reconsidered.” Journal of Farm Economics 35, (1953): 544-61.

1 The clearinghouse is typically a corporation owned by a subset of exchange members. For details regarding the clearing arrangements of a specific exchange, go to www.cftc.gov and click on “Clearing Organizations.”

2 The vast majority of contracts are offset. When outright delivery does occur, the buyer receives from, or the seller delivers to, the exchange a title of ownership – not the actual commodity or financial security; the urban legend of the trader who neglected to settle his long position and consequently “woke up one morning to find several car loads of a commodity dumped on his front yard” is indeed apocryphal (Hieronymus 1977, 37)!

3 Nevertheless, forward contracts remain popular today (see Peck 1985, 9-12).

4 The importance of New Orleans as a point of departure for U.S. grain and provisions prior to the Civil War is unquestionable. According to Clark (1966), “New Orleans was the leading export center in the nation in terms of dollar volume of domestic exports, except for 1847 and a few years during the 1850s, when New York’s domestic exports exceeded those of the Crescent City” (36).

5 This area was responsible for roughly half of U.S. wheat production and a third of U.S. corn production just prior to 1860. Southern planters dominated corn output during the early to mid-1800s.

6 Millers milled wheat into flour; pork producers fed corn to pigs, which producers slaughtered for provisions; distillers and brewers converted rye and barley into whiskey and malt liquors, respectively; and ranchers fed grains and grasses to cattle, which were then driven to eastern markets.

7 Significant advances in transportation made the grain trade’s eastward expansion possible, but the strong and growing demand for grain in the East made the trade profitable. The growth in domestic grain demand during the early to mid-nineteenth century reflected the strong growth in eastern urban populations. Between 1820 and 1860, the populations of Baltimore, Boston, New York and Philadelphia increased by over 500% (Clark 1966, 54). Moreover, as the 1840s approached, foreign demand for U.S. grain grew. Between 1845 and 1847, U.S. exports of wheat and flour rose from 6.3 million bushels to 26.3 million bushels and corn exports grew from 840,000 bushels to 16.3 million bushels (Clark 1966, 55).

8 Wheat production was shifting to the trans-Mississippi West, which produced 65% of the nation’s wheat by 1899 and 90% by 1909, and railroads based in the Lake Michigan port cities intercepted the Mississippi River trade that would otherwise have headed to St. Louis (Clark 1966, 95). Lake Michigan port cities also benefited from a growing concentration of corn production in the West North Central region – Iowa, Kansas, Minnesota, Missouri, Nebraska, North Dakota and South Dakota – which by 1899 produced 40 percent of the country’s corn (Clark 1966, 4).

9 Corn had to be dried immediately after it was harvested and could only be shipped profitably by water to Chicago, but only after rivers and lakes had thawed; so, country merchants stored large quantities of corn. On the other hand, wheat was more valuable relative to its weight, and it could be shipped to Chicago by rail or road immediately after it was harvested; so, Chicago merchants stored large quantities of wheat.

10 This is consistent with Odle (1964), who adds that “the creators of the new system of marketing [forward contracts] were the grain merchants of the Great Lakes” (439). However, Williams (1982) presents evidence of such contracts between Buffalo and New York City as early as 1847 (309). To be sure, Williams proffers an intriguing case that forward and, in effect, future trading was active and quite sophisticated throughout New York by the late 1840s. Moreover, he argues that this trading grew not out of activity in Chicago, whose trading activities were quite primitive at this early date, but rather trading in London and ultimately Amsterdam. Indeed, “time bargains” were common in London and New York securities markets in the mid- and late 1700s, respectively. A time bargain was essentially a cash-settled financial forward contract that was unenforceable by law, and as such “each party was forced to rely on the integrity and credit of the other” (Werner and Smith 1991, 31). According to Werner and Smith, “time bargains prevailed on Wall Street until 1840, and were gradually replaced by margin trading by 1860” (68). They add that, “margin trading … had an advantage over time bargains, in which there was little protection against default beyond the word of another broker. Time bargains also technically violated the law as wagering contracts; margin trading did not” (135). Between 1818 and 1840 these contracts comprised anywhere from 0.7% (49-day average in 1830) to 34.6% (78-day average in 1819) of daily exchange volume on the New York Stock & Exchange Board (Werner and Smith 1991, 174).

11 Of course, forward markets could and indeed did exist in the absence of both grading standards and formal exchanges, though to what extent they existed is unclear (see Williams 1982).

12 In the parlance of modern financial futures, the term cost of carry is used instead of the term storage. For example, the cost of carrying a bond is comprised of the cost of acquiring and holding (or storing) it until delivery minus the return earned during the carry period.

13 More specifically, the price of storage is comprised of three components: (1) physical costs such as warehouse and insurance; (2) financial costs such as borrowing rates of interest; and (3) the convenience yield – the return that the merchant, who stores the commodity, derives from maintaining an inventory in the commodity. The marginal costs of (1) and (2) are increasing functions of the amount stored: the more the merchant stores, the greater the marginal costs of warehouse use, insurance and financing. The marginal benefit of (3), by contrast, is a decreasing function of the amount stored; put differently, the smaller the merchant’s inventory, the more valuable each additional unit of inventory becomes. Working used this convenience yield to explain a negative price of storage – the nearby contract is priced higher than the faraway contract – an event that is likely to occur when supplies are exceptionally low. In this instance, there is little for inventory dealers to store. Hence, dealers face extremely low physical and financial storage costs, but extremely high convenience yields. The price of storage turns negative; essentially, inventory dealers are willing to pay to store the commodity.
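Working’s sign reversal can be made concrete with a toy calculation – the numbers below are hypothetical, chosen only to show how a large convenience yield turns the net price of storage negative:

```python
# Working's (net marginal) price of storage, per bushel per month.
# All values are hypothetical assumptions, not estimates from the literature.
physical_cost = 0.02      # warehouse and insurance
financial_cost = 0.03     # interest on funds tied up in inventory
convenience_yield = 0.08  # benefit of holding stocks; large when supplies are scarce

price_of_storage = physical_cost + financial_cost - convenience_yield
print(price_of_storage)   # -0.03: the nearby contract is priced above the
                          # deferred one; dealers in effect pay to store
```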

14 Norris’ protagonist, Curtis Jadwin, is a wheat speculator emotionally consumed and ultimately destroyed, while the welfare of producers and consumers hang in the balance, when a nineteenth century CBT wheat futures corner backfires on him.

15 One particularly colorful incident in the controversy came when the Supreme Court of Illinois ruled that the CBT had to either make price quotes public or restrict access to everyone. When the Board opted for the latter, it found it needed to “prevent its members from running (often literally) between the [CBT and a bucket shop next door], but with minimal success. Board officials at first tried to lock the doors to the exchange…However, after one member literally battered down the door to the east side of the building, the directors abandoned this policy as impracticable if not destructive” (Lurie 1979, 140).

16 Administrative law is “a body of rules and doctrines which deals with the powers and actions of administrative agencies” that are organizations other than the judiciary or legislature. These organizations affect the rights of private parties “through either adjudication, rulemaking, investigating, prosecuting, negotiating, settling, or informally acting” (Lurie 1979, 9).

17 In 1921 Congress passed the Futures Trading Act, which the Supreme Court declared unconstitutional in 1922.

Citation: Santos, Joseph. “A History of Futures Trading in the United States”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/a-history-of-futures-trading-in-the-united-states/

The Economic History of the Fur Trade: 1670 to 1870

Ann M. Carlos, University of Colorado
Frank D. Lewis, Queen’s University

Introduction

A commercial fur trade in North America grew out of the early contact between Indians and European fishermen who were netting cod on the Grand Banks off Newfoundland and on the Bay of Gaspé near Quebec. Indians would trade the pelts of small animals, such as mink, for knives and other iron-based products, or for textiles. Exchange at first was haphazard, and it was only in the late sixteenth century, when the wearing of beaver hats became fashionable, that firms were established that dealt exclusively in furs. High-quality pelts are available only where winters are severe, so the trade took place predominantly in the regions we now know as Canada, although some activity took place further south along the Mississippi River and in the Rocky Mountains. There was also a market in deer skins that predominated in the Appalachians.

The first firms to participate in the fur trade were French, and under French rule the trade spread along the St. Lawrence and Ottawa Rivers, and down the Mississippi. In the seventeenth century, following the Dutch, the English developed a trade through Albany. Then in 1670, a charter was granted by the British crown to the Hudson’s Bay Company, which began operating from posts along the coast of Hudson Bay (see Figure 1). For roughly the next hundred years, this northern region saw competition of varying intensity between the French and the English. With the conquest of New France in 1763, the French trade shifted to Scottish merchants operating out of Montreal. After the negotiation of Jay’s Treaty (1794), the northern border was defined and trade along the Mississippi passed to the American Fur Company under John Jacob Astor. In 1821, the northern participants merged under the name of the Hudson’s Bay Company, and for many decades this merged company continued to trade in furs. Finally, in the 1990s, under pressure from animal rights groups, the Hudson’s Bay Company, which in the twentieth century had become a large Canadian retailer, ended the fur component of its operation.

Figure 1
Hudson’s Bay Company Hinterlands
 Hudson's Bay Company Hinterlands (map)

Source: Ray (1987, plate 60)

The fur trade was based on pelts destined either for the luxury clothing market or for the felting industries, of which hatting was the most important. This was a transatlantic trade. The animals were trapped and exchanged for goods in North America, and the pelts were transported to Europe for processing and final sale. As a result, forces operating on the demand side of the market in Europe and on the supply side in North America determined prices and volumes, while intermediaries, who linked the two geographically separated areas, determined how the trade was conducted.

The Demand for Fur: Hats, Pelts and Prices

However much hats may be considered an accessory today, they were for centuries a mandatory part of everyday dress, for both men and women. Of course styles changed, and, in response to the vagaries of fashion and politics, hats took on various forms and shapes, from the high-crowned, broad-brimmed hat of the first two Stuarts to the conically-shaped, plainer hat of the Puritans. The Restoration of Charles II of England in 1660 and the Glorious Revolution in 1689 brought their own changes in style (Clarke, 1982, chapter 1). What remained constant was the material from which hats were made – wool felt. The wool came from various animals, but towards the end of the fifteenth century beaver wool began to predominate. Over time, beaver hats became increasingly popular, eventually dominating the market. Only in the nineteenth century did silk replace beaver in high-fashion men’s hats.

Wool Felt

Furs have long been classified as either fancy or staple. Fancy furs are those demanded for the beauty and luster of their pelt. These furs – mink, fox, otter – are fashioned by furriers into garments or robes. Staple furs are sought for their wool. All staple furs have a double coating of hair with long, stiff, smooth hairs called guard hairs which protect the shorter, softer hair, called wool, that grows next to the animal skin. Only the wool can be felted. Each of the shorter hairs is barbed and once the barbs at the ends of the hair are open, the wool can be compressed into a solid piece of material called felt. The prime staple fur has been beaver, although muskrat and rabbit have also been used.

Wool felt was used for over two centuries to make high-fashion hats. Felt is stronger than a woven material. It will not tear or unravel in a straight line; it is more resistant to water, and it will hold its shape even if it gets wet. These characteristics made felt the prime material for hatters especially when fashion called for hats with large brims. The highest quality hats would be made fully from beaver wool, whereas lower quality hats included inferior wool, such as rabbit.

Felt Making

The transformation of beaver skins into felt and then hats was a highly skilled activity. The process required first that the beaver wool be separated from the guard hairs and the skin, and that some of the wool have open barbs, since felt required some open-barbed wool in the mixture. Felt dates back to the nomads of Central Asia, who are said to have invented the process of felting and made their tents from this light but durable material. Although the art of felting disappeared from much of western Europe during the first millennium, felt-making survived in Russia, Sweden, and Asia Minor. As a result of the Medieval Crusades, felting was reintroduced through the Mediterranean into France (Crean, 1962).

In Russia, the felting industry was based on the European beaver (castor fiber). Given their long tradition of working with beaver pelts, the Russians had perfected the art of combing out the short barbed hairs from among the longer guard hairs, a technology that they safeguarded. As a consequence, the early felting trades in England and France had to rely on beaver wool imported from Russia, although they also used domestic supplies of wool from other animals, such as rabbit, sheep and goat. But by the end of the seventeenth century, Russian supplies were drying up, reflecting the serious depletion of the European beaver population.

Coincident with the decline in European beaver stocks was the emergence of a North American trade. North American beaver (castor canadensis) was imported through agents in the English, French and Dutch colonies. Although many of the pelts were shipped to Russia for initial processing, the growth of the beaver market in England and France led to the development of local technologies, and more knowledge of the art of combing. Separating the beaver wool from the pelt was only the first step in the felting process. It was also necessary that some of the barbs on the short hairs be raised or open. On the animal these hairs were naturally covered with keratin to prevent the barbs from opening; thus, to make felt, the keratin had to be stripped from at least some of the hairs. The process was difficult to refine and entailed considerable experimentation by felt-makers. For instance, one felt maker “bundled [the skins] in a sack of linen and boiled [them] for twelve hours in water containing several fatty substances and nitric acid” (Crean, 1962, p. 381). Although such processes removed the keratin, they did so at the price of a lower quality wool.

The opening of the North American trade not only increased the supply of skins for the felting industry, it also provided a subset of skins whose guard hairs had already been removed and the keratin broken down. Beaver pelts imported from North America were classified as either parchment beaver (castor sec – dry beaver), or coat beaver (castor gras – greasy beaver). Parchment beaver were from freshly caught animals, whose skins were simply dried before being presented for trade. Coat beaver were skins that had been worn by the Indians for a year or more. With wear, the guard hairs fell out and the pelt became oily and more pliable. In addition, the keratin covering the shorter hairs broke down. By the middle of the seventeenth century, hatters and felt-makers came to learn that parchment and coat beaver could be combined to produce a strong, smooth, pliable, top-quality waterproof material.

Until the 1720s, beaver felt was produced with relatively fixed proportions of coat and parchment skins, which led to periodic shortages of one or the other type of pelt. The constraint was relaxed when carotting was developed, a chemical process by which parchment skins were transformed into a type of coat beaver. The original carotting formula consisted of salts of mercury diluted in nitric acid, which was brushed on the pelts. The use of mercury was a big advance, but it also had serious health consequences for hatters and felters, who were forced to breathe the mercury vapor for extended periods. The expression “mad as a hatter” dates from this period, as the vapor attacked the nervous systems of these workers.

The Prices of Parchment and Coat Beaver

Drawn from the accounts of the Hudson’s Bay Company, Table 1 presents some eighteenth century prices of parchment and coat beaver pelts. From 1713 to 1726, before the carotting process had become established, coat beaver generally fetched a higher price than parchment beaver, averaging 6.6 shillings per pelt as compared to 5.5 shillings. Once carotting was widely used, however, the prices were reversed, and from 1730 to 1770 parchment exceeded coat in almost every year. The same general pattern is seen in the Paris data, although there the reversal was delayed, suggesting slower diffusion in France of the carotting technology. As Crean (1962, p. 382) notes, Nollet’s L’Art de faire des chapeaux included the exact formula, but it was not published until 1765.

A weighted average of parchment and coat prices in London reveals three episodes. From 1713 to 1722 prices were quite stable, fluctuating within the narrow band of 5.0 to 5.5 shillings per pelt. During the period 1723 to 1745, prices moved sharply higher and remained in the range of 7 to 9 shillings. The years 1746 to 1763 saw another big increase to over 12 shillings per pelt. There are far fewer prices available for Paris, but we do know that in the period 1739 to 1753 the trend was also sharply higher, with prices more than doubling.

Table 1
Price of Beaver Pelts in Britain: 1713-1763
(shillings per skin)

Year  Parchment  Coat   Average(a)
1713  5.21       4.62   5.03
1714  5.24       7.86   5.66
1715  4.88       –      5.49
1716  4.68       8.81   5.16
1717  5.29       8.37   5.65
1718  4.77       7.81   5.22
1719  5.30       6.86   5.51
1720  5.31       6.05   5.38
1721  5.27       5.79   5.29
1722  4.55       4.97   4.55
1723  8.54       5.56   7.84
1724  7.47       5.97   7.17
1725  5.82       6.62   5.88
1726  5.41       7.49   5.83
1727  –          –      7.22
1728  –          –      8.13
1729  –          –      9.56
1730  –          –      8.71
1731  –          –      6.27
1732  –          –      7.12
1733  –          –      8.07
1734  –          –      7.39
1735  –          –      8.33
1736  8.72       7.07   8.38
1737  7.94       6.46   7.50
1738  8.95       6.47   8.32
1739  8.51       7.11   8.05
1740  8.44       6.66   7.88
1741  8.30       6.83   7.84
1742  7.72       6.41   7.36
1743  8.98       6.74   8.27
1744  9.18       6.61   8.52
1745  9.76       6.08   8.76
1746  12.73      7.18   10.88
1747  10.68      6.99   9.50
1748  9.27       6.22   8.44
1749  11.27      6.49   9.77
1750  17.11      8.42   14.00
1751  14.31      10.42  12.90
1752  12.94      10.18  11.84
1753  10.71      11.97  10.87
1754  12.19      12.68  12.08
1755  12.05      12.04  11.99
1756  13.46      12.02  12.84
1757  12.59      11.60  12.17
1758  13.07      11.32  12.49
1759  15.99      –      14.68
1760  13.37      13.06  13.22
1761  10.94      13.03  11.36
1762  13.17      16.33  13.83
1763  16.33      17.56  16.34

(a) A weighted average of the prices of parchment, coat and half parchment beaver pelts. Weights are based on the trade in these types of furs at Fort Albany. Prices of the individual types of pelts are not available for the years 1727 to 1735; missing values are marked with a dash.

Source: Carlos and Lewis, 1999.

The Demand for Beaver Hats

The main cause of the rising beaver pelt prices in England and France was the increasing demand for beaver hats, which included hats made exclusively with beaver wool and referred to as “beaver hats,” and those hats containing a combination of beaver and a lower-cost wool, such as rabbit. These were called “felt hats.” Unfortunately, aggregate consumption series for eighteenth-century Europe are not available. We do, however, have Gregory King’s contemporary work for England, which provides a good starting point. In a table entitled “Annual Consumption of Apparell, anno 1688,” King calculated that consumption of all types of hats was about 3.3 million, or nearly one hat per person. King also included a second category, caps of all sorts, for which he estimated consumption at 1.6 million (Harte, 1991, p. 293). This means that as early as 1700, the potential market for hats in England alone was nearly 5 million per year. Over the next century, the rising demand for beaver pelts was a result of a number of factors, including population growth, a greater export market, a shift toward beaver hats from hats made of other materials, and a shift from caps to hats.

The British export data indicate that demand for beaver hats was growing not just in England, but in Europe as well. In 1700 a modest 69,500 beaver hats were exported from England, along with almost the same number of felt hats; but by 1760, slightly over 500,000 beaver hats and 370,000 felt hats were shipped from English ports (Lawson, 1943, app. I). In total, over the seventy years to 1770, 21 million beaver and felt hats were exported from England. In addition to the final product, England exported the raw material, beaver pelts. In 1760, £15,000 in beaver pelts were exported along with a range of other furs. The hats and the pelts tended to go to different parts of Europe. Raw pelts were shipped mainly to northern Europe, including Germany, Flanders, Holland and Russia; whereas hats went to the southern European markets of Spain and Portugal. In 1750, Germany imported 16,500 beaver hats, while Spain imported 110,000 and Portugal 175,000 (Lawson, 1943, appendices F & G). Over the first six decades of the eighteenth century, these markets grew dramatically, such that the value of beaver hat sales to Portugal alone was £89,000 in 1756-1760, representing about 300,000 hats or two-thirds of the entire export trade.

European Intermediaries in the Fur Trade

By the eighteenth century, the demand for furs in Europe was being met mainly by exports from North America with intermediaries playing an essential role. The American trade, which moved along the main water systems, was organized largely through chartered companies. At the far north, operating out of Hudson Bay, was the Hudson’s Bay Company, chartered in 1670. The Compagnie d’Occident, founded in 1718, was the most successful of a series of monopoly French companies. It operated through the St. Lawrence River and in the region of the eastern Great Lakes. There was also an English trade through Albany and New York, and a French trade down the Mississippi.

The Hudson’s Bay Company and the Compagnie d’Occident, although similar in title, had very different internal structures. The English trade was organized along hierarchical lines with salaried managers, whereas the French monopoly issued licenses (congés) or leased out the use of its posts. The structure of the English company allowed for more control from the London head office, but required systems that could monitor the managers of the trading posts (Carlos and Nicholas, 1990). The leasing and licensing arrangements of the French made monitoring unnecessary, but led to a system where the center had little influence over the conduct of the trade.

The French and English were distinguished as well by how they interacted with the Natives. The Hudson’s Bay Company established posts around the Bay and waited for the Indians, often middlemen, to come to them. The French, by contrast, moved into the interior, directly trading with the Indians who harvested the furs. The French arrangement was more conducive to expansion, and by the end of the seventeenth century, they had moved beyond the St. Lawrence and Ottawa rivers into the western Great Lakes region (see Figure 1). Later they established posts in the heart of the Hudson Bay hinterland. In addition, the French explored the river systems to the south, setting up a post at the mouth of the Mississippi. As noted earlier, after Jay’s Treaty was signed, the French were replaced in the Mississippi region by U.S. interests which later formed the American Fur Company (Haeger, 1991).

The English takeover of New France at the end of the French and Indian Wars in 1763 did not, at first, fundamentally change the structure of the trade. Rather, French management was replaced by Scottish and English merchants operating in Montreal. But, within a decade, the Montreal trade was reorganized into partnerships between merchants in Montreal and traders who wintered in the interior. The most important of these arrangements led to the formation of the Northwest Company, which for the first two decades of the nineteenth century competed with the Hudson’s Bay Company (Carlos and Hoffman, 1986). By the early decades of the nineteenth century, the Hudson’s Bay Company, the Northwest Company, and the American Fur Company had, combined, a system of trading posts across North America, including posts in Oregon and British Columbia and on the Mackenzie River. In 1821, the Northwest Company and the Hudson’s Bay Company merged under the name of the Hudson’s Bay Company. The Hudson’s Bay Company then ran the trade as a monopsony until the late 1840s, when it began facing serious competition from trappers to the south. The Company’s role in the northwest changed again with the Canadian Confederation in 1867. Over the next decades, treaties were signed with many of the northern tribes, forever changing the old fur trade order in Canada.

The Supply of Furs: The Harvesting of Beaver and Depletion

During the eighteenth century, the changing technology of felt production and the growing demand for felt hats were met by attempts to increase the supply of furs, especially the supply of beaver pelts. Any permanent increase, however, was ultimately dependent on the animal resource base. How that base changed over time must be a matter of speculation since no animal counts exist from that period; nevertheless, the evidence we do have points to a scenario in which over-harvesting, at least in some years, gave rise to serious depletion of the beaver and possibly other animals such as marten that were also being traded. Why the beaver were over-harvested was closely related to the prices Natives were receiving, but important as well was the nature of Native property rights to the resource.

Harvests in the Fort Albany and York Factory Regions

That beaver populations along the Eastern seaboard regions of North America were depleted as the fur trade advanced is widely accepted. In fact the search for new sources of supply further west, including the region of Hudson Bay, has been attributed in part to dwindling beaver stocks in areas where the fur trade had been long established. Although there has been little discussion of the impact that the Hudson’s Bay Company and the French, who traded in the region of Hudson Bay, were having on the beaver stock, the remarkably complete records of the Hudson’s Bay Company provide the basis for reasonable inferences about depletion. From 1700 there is an uninterrupted annual series of fur returns at Fort Albany; the fur returns from York Factory begin in 1716 (see Figure 1).

The beaver returns at Fort Albany and York Factory for the period 1700 to 1770 are described in Figure 2. At Fort Albany the number of beaver skins over the period 1700 to 1720 averaged roughly 19,000, with wide year-to-year fluctuations; the range was about 15,000 to 30,000. After 1720 and until the late 1740s average returns declined by about 5,000 skins, and remained within the somewhat narrower range of roughly 10,000 to 20,000 skins. The period of relative stability was broken in the final years of the 1740s. In 1748 and 1749, returns increased to an average of nearly 23,000. Following these unusually strong years, the trade fell precipitously, so that in 1756 fewer than 6,000 beaver pelts were received. There was a brief recovery in the early 1760s, but by the end of the decade trade had fallen below even the mid-1750s levels. In 1770, Fort Albany took in just 3,600 beaver pelts. This pattern – unusually large returns in the late 1740s and low returns thereafter – indicates that the beaver in the Fort Albany region were being seriously depleted.

Figure 2
Beaver Traded at Fort Albany and York Factory 1700 – 1770

Source: Carlos and Lewis, 1993.

The beaver returns at York Factory from 1716 to 1770, also described in Figure 2, have some of the key features of the Fort Albany data. After some low returns early on (from 1716 to 1720), the number of beaver pelts increased to an average of 35,000. There were extraordinary returns in 1730 and 1731, when the average was 55,600 skins, but beaver receipts then stabilized at about 31,000 over the remainder of the decade. The first break in the pattern came in the early 1740s, shortly after the French established several trading posts in the area. Surprisingly perhaps, given the increased competition, trade in beaver pelts at the Hudson’s Bay Company post increased to an average of 34,300 over the period 1740 to 1743. Indeed, the 1742 return of 38,791 skins was the largest since the French had established any posts in the region. The returns in 1745 were also strong, but after that year the trade in beaver pelts began a decline that continued through to 1770. Average returns over the rest of the decade were 25,000; the average during the 1750s was 18,000, and just 15,500 in the 1760s. The pattern of beaver returns at York Factory – high returns in the early 1740s followed by a large decline – strongly suggests that, as in the Fort Albany hinterland, the beaver population had been greatly reduced.

The overall carrying capacity of any region, or the size of the animal stock, depends on the nature of the terrain and the underlying biological determinants such as birth and death rates. A standard relationship between the annual harvest and the animal population is the Lotka-Volterra logistic, commonly used in natural resource models to relate the natural growth of a population to the size of that population:
F(X) = aX – bX², a, b > 0 (1)

where X is the population, F(X) is the natural growth in the population, a is the maximum proportional growth rate of the population, and b = a/X̄, where X̄ is the upper limit to population size. The population dynamics of the species exploited depend on the harvest each period:

ΔX = aX – bX² – H (2)

where ΔX is the annual change in the population and H is the harvest. The choices of the parameter a and the maximum population X̄ are central to the population estimates; they have been based largely on estimates from the beaver ecology literature and on Ontario provincial field reports of beaver densities (Carlos and Lewis, 1993).
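The mechanics of depletion implied by equation (2) are easy to see in a small simulation. The Python sketch below uses purely illustrative values for a and X̄, not the calibration just described: a decade of harvesting 20 percent above the maximum sustained yield pushes the stock below the biological optimum, after which even a return to the formerly sustainable harvest cannot stop the decline, since natural growth below the optimum falls short of that harvest.

```python
# Simulation sketch of equation (2): dX = aX - bX^2 - H.
# Parameter values are illustrative assumptions, not the calibration
# used by Carlos and Lewis (1993).
a = 0.25          # maximum proportional growth rate (assumed)
X_max = 100_000   # upper limit to population size, X-bar (assumed)
b = a / X_max     # from the definition b = a / X-bar

def natural_growth(X):
    """F(X) = aX - bX^2, the natural growth of the stock."""
    return a * X - b * X**2

# Maximum sustained yield: natural growth at half the population ceiling.
msy = natural_growth(X_max / 2)   # 6,250 animals per year with these values

X = X_max / 2                     # start at the biological optimum
for year in range(1, 26):
    harvest = 1.2 * msy if year <= 10 else msy   # a decade of over-harvesting
    X = max(X + natural_growth(X) - harvest, 0.0)

print(f"stock after 25 years: {X:,.0f} (biological optimum: {X_max / 2:,.0f})")
```

With these assumed values the stock ends up at roughly half the optimum, echoing the pattern the simulations in Carlos and Lewis (1993) find for the post-1730s period.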

Simulations based on equation 2 suggest that, until the 1730s, beaver populations remained at levels roughly consistent with maximum sustained yield management, sometimes referred to as the biological optimum. But after the 1730s there was a decline in beaver stocks to about half the maximum sustained yield levels. The cause of the depletion was closely related to what was happening in Europe. There, buoyant demand for felt hats and dwindling local fur supplies resulted in much higher prices for beaver pelts. These higher prices, in conjunction with the resulting competition from the French in the Hudson Bay region, led the Hudson’s Bay Company to offer much better terms to Natives who came to their trading posts (Carlos and Lewis, 1999).

Figure 3 reports a price index for furs at Fort Albany and at York Factory. The index represents a measure of what Natives received in European goods for their furs. At Fort Albany, fur prices were close to 70 from 1713 to 1731, but in 1732, in response to higher European fur prices and the entry of la Vérendrye, an important French trader, the price jumped to 81. After that year, prices continued to rise. The pattern at York Factory was similar. Although prices were high in the early years when the post was being established, beginning in 1724 the price settled down to about 70. At York Factory, the jump in price came in 1738, which was the year la Vérendrye set up a trading post in the York Factory hinterland. Prices then continued to increase. It was these higher fur prices that led to over-harvesting and, ultimately, a decline in beaver stocks.

Figure 3
Price Index for Furs: Fort Albany and York Factory, 1713 – 1770

Source: Carlos and Lewis, 2001.

Property Rights Regimes

An increase in the price paid to Native hunters did not have to lead to a decline in the animal stocks, because Indians could have chosen to limit their harvesting. Why they did not was closely related to their system of property rights. One can classify property rights along a spectrum with, at one end, open access, where anyone can hunt or fish, and at the other, complete private property, where a sole owner has full control over the resource. In between lies a range of property rights regimes with access controlled by a community or a government, and where individual members of the group do not necessarily have private property rights. Open access creates a situation in which there is less incentive to conserve, because animals not harvested by a particular hunter will be available to other hunters in the future. Thus, the closer a system is to open access, the more likely it is that the resource will be depleted.

Across aboriginal societies in North America, one finds a range of property rights regimes. Native Americans did have a concept of trespass and of property, but individual and family rights to resources were not absolute. Under what has been called the Good Samaritan principle (McManus, 1972), outsiders were not permitted to harvest furs on another’s territory for trade, but they were allowed to hunt game and even beaver for food. Combined with this limitation to private property was an Ethic of Generosity that included liberal gift-giving, where any visitor to one’s encampment was to be supplied with food and shelter.

Social norms such as gift-giving and the related Good Samaritan principle emerged because of the nature of the aboriginal environment. The primary objective of aboriginal societies was survival. Hunting was risky, and so rules were put in place that would reduce the risk of starvation. As Berkes et al. (1989, p. 153) note, for such societies: “all resources are subject to the overriding principle that no one can prevent a person from obtaining what he needs for his family’s survival.” Such actions were reciprocal, and in the sub-arctic world especially they served as an insurance mechanism. These norms, however, also reduced the incentive to conserve the beaver and other animals that were part of the fur trade. The combination of these norms and the increasing price paid to Native traders led to the large harvests in the 1740s and ultimately the depletion of the animal stock.

The Trade in European Goods

Indians were the primary agents in the North American commercial fur trade. It was they who hunted the animals, and transported and traded the pelts or skins to European intermediaries. The exchange was voluntary. In return for their furs, Indians obtained both access to an iron technology to improve production and access to a wide range of new consumer goods. It is important to recognize, however, that although the European goods were new to aboriginals, the concept of exchange was not. The archaeological evidence indicates an extensive trade between Native tribes in the north and south of North America prior to European contact.

The extraordinary records of the Hudson’s Bay Company allow us to form a clear picture of what Indians were buying. Table 2 lists the goods received by Natives at York Factory, which was by far the largest of the Hudson’s Bay Company trading posts. As is evident from the table, the commercial trade was more than in beads and baubles or even guns and alcohol; rather Native traders were receiving a wide range of products that improved their ability to meet their subsistence requirements and allowed them to raise their living standards. The items have been grouped by use. The producer goods category was dominated by firearms, including guns, shot and powder, but also includes knives, awls and twine. The Natives traded for guns of different lengths. The 3-foot gun was used mainly for waterfowl and in heavily forested areas where game could be shot at close range. The 4-foot gun was more accurate and suitable for open spaces. In addition, the 4-foot gun could play a role in warfare. Maintaining guns in the harsh sub-arctic environment was a serious problem, and ultimately, the Hudson’s Bay Company was forced to send gunsmiths to its trading posts to assess quality and help with repairs. Kettles and blankets were the main items in the “household goods” category. These goods probably became necessities to the Natives who adopted them. Then there were the luxury goods, which have been divided into two broad categories: “tobacco and alcohol,” and “other luxuries,” dominated by cloth of various kinds (Carlos and Lewis, 2001; 2002).

Table 2
Value of Goods Received at York Factory in 1740 (made beaver)

We have much less information about the French trade. The French are reported to have exchanged similar items, although given their higher transport costs, both the furs received and the goods traded tended to be higher in value relative to weight. The Europeans, it might be noted, supplied no food to the trade in the eighteenth century. In fact, Indians helped provision the posts with fish and fowl. This role of food purveyor grew in the nineteenth century as groups known as the “home guard Cree” came to live around the posts; as well, pemmican, supplied by Natives, became an important source of nourishment for Europeans involved in the buffalo hunts.

The value of the goods listed in Table 2 is expressed in terms of the unit of account, the made beaver, which the Hudson’s Bay Company used to record its transactions and determine the rate of exchange between furs and European goods. The price of a prime beaver pelt was 1 made beaver, and every other type of fur and good was assigned a price based on that unit. For example, a marten (a type of mink) was one-third of a made beaver, a blanket was 7 made beaver, a gallon of brandy, 4 made beaver, and a yard of cloth, 3½ made beaver. These were the official prices at York Factory. Thus Indians, who traded at these prices, received, for example, a gallon of brandy for four prime beaver pelts, two yards of cloth for seven beaver pelts, and a blanket for 21 marten pelts. This was barter trade in that no currency was used; and although the official prices implied certain rates of exchange between furs and goods, Hudson’s Bay Company factors were encouraged to trade at rates more favorable to the Company. The actual rates, however, depended on market conditions in Europe and, most importantly, the extent of French competition in Canada. Figure 3 illustrates the rise in the price of furs at York Factory and Fort Albany in response to higher beaver prices in London and Paris, as well as to a greater French presence in the region (Carlos and Lewis, 1999). The increase in price also reflects the bargaining ability of Native traders during periods of direct competition between the English and French and later the Hudson’s Bay Company and the Northwest Company. At such times, the Native traders would play both parties off against each other (Ray and Freeman, 1978).
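Because the made beaver was purely a unit of account, the official ratios are simple unit conversions. The following sketch verifies the examples quoted above (the marten price of one-third made beaver is inferred from the blanket example, since a 7-made-beaver blanket traded for 21 marten pelts):

```python
from fractions import Fraction

# Official made-beaver (MB) prices at York Factory, as quoted in the text.
# The marten price is inferred from the blanket example (21 marten = one 7-MB blanket).
price_mb = {
    "prime beaver pelt": Fraction(1),
    "marten pelt": Fraction(1, 3),
    "blanket": Fraction(7),
    "gallon of brandy": Fraction(4),
    "yard of cloth": Fraction(7, 2),   # 3 1/2 MB
}

def pelts_per_unit(good, pelt="prime beaver pelt"):
    """Number of pelts that buy one unit of a good at the official prices."""
    return price_mb[good] / price_mb[pelt]

assert pelts_per_unit("gallon of brandy") == 4              # 4 beaver per gallon
assert 2 * pelts_per_unit("yard of cloth") == 7             # 2 yards for 7 beaver
assert pelts_per_unit("blanket", pelt="marten pelt") == 21  # 21 marten per blanket
```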

The records of the Hudson’s Bay Company provide us with a unique window to the trading process, including the bargaining ability of Native traders, which is evident in the range of commodities received. Natives only bought goods they wanted. Clear from the Company records is that it was the Natives who largely determined the nature and quality of those goods. As well the records tell us how income from the trade was being allocated. The breakdown differed by post and varied over time; but, for example, in 1740 at York Factory, the distribution was: producer goods – 44 percent; household goods – 9 percent; alcohol and tobacco – 24 percent; and other luxuries – 23 percent. An important implication of the trade data is that, like many Europeans and most American colonists, Native Americans were taking part in the consumer revolution of the eighteenth century (de Vries, 1993; Shammas, 1993). In addition to necessities, they were consuming a remarkable variety of luxury products. Cloth, including baize, duffel, flannel, and gartering, was by far the largest class, but they also purchased beads, combs, looking glasses, rings, shirts, and vermillion among a much longer list. Because these items were heterogeneous in nature, the Hudson’s Bay Company’s head office went to great lengths to satisfy the specific tastes of Native consumers. Attempts were also made, not always successfully, to introduce new products (Carlos and Lewis, 2002).

Perhaps surprising, given the emphasis that has been placed on it in the historical literature, was the comparatively small role of alcohol in the trade. At York Factory, Native traders received in 1740 a total of 494 gallons of brandy and “strong water,” which had a value of 1,976 made beaver. More than twice this amount was spent on tobacco in that year, nearly five times as much was spent on firearms, twice as much was spent on cloth, and more was spent on blankets and kettles than on alcohol. Thus, brandy, although a significant item of trade, was by no means a dominant one. In addition, alcohol could hardly have created serious social problems during this period. The amount received would have allowed for no more than ten two-ounce drinks per year for the adult Native population living in the region.

The Labor Supply of Natives

Another important question can be addressed using the trade data. Were Natives “lazy and improvident” as they have been described by some contemporaries, or were they “industrious” like the American colonists and many Europeans? Central to answering this question is how Native groups responded to the price of furs, which began rising in the 1730s. Much of the literature argues that Indian trappers reduced their effort in response to higher fur prices; that is, they had backward-bending supply curves of labor. The view is that Natives had a fixed demand for European goods that, at higher fur prices, could be met with fewer furs, and hence less effort. Although widely cited, this argument does not stand up. Not only were higher fur prices accompanied by larger total harvests of furs in the region, but the pattern of Native expenditure also points to a scenario of greater effort. From the late 1730s to the 1760s, as the price of furs rose, the share of expenditure on luxury goods increased dramatically (see Figure 4). Thus Natives were not content simply to accept their good fortune by working less; rather they seized the opportunity provided to them by the strong fur market by increasing their effort in the commercial sector, thereby dramatically augmenting the purchases of those goods, namely the luxuries, that could raise their living standards.

Figure 4
Native Expenditure Shares at York Factory 1716 – 1770

Source: Carlos and Lewis, 2001.

A Note on the Non-commercial Sector

As important as the fur trade was to Native Americans in the sub-arctic regions of Canada, commerce with the Europeans comprised just one, relatively small, part of their overall economy. Exact figures are not available, but the traditional sectors – hunting, gathering, food preparation and, to some extent, agriculture – must have accounted for at least 75 to 80 percent of Native labor during these decades. Nevertheless, despite the limited time spent in commercial activity, the fur trade had a profound effect on the nature of the Native economy and Native society. The introduction of European producer goods, such as guns, and household goods, mainly kettles and blankets, changed the way Native Americans achieved subsistence; and the European luxury goods expanded the range of products that allowed them to move beyond subsistence. Most importantly, the fur trade connected Natives to Europeans in ways that affected how and how much they chose to work, where they chose to live, and how they exploited the resources on which the trade and their survival were based.

References

Berkes, Fikret, David Feeny, Bonnie J. McCay, and James M. Acheson. “The Benefits of the Commons.” Nature 340 (July 13, 1989): 91-93.

Braund, Kathryn E. Holland. Deerskins and Duffels: The Creek Indian Trade with Anglo-America, 1685-1815. Lincoln: University of Nebraska Press, 1993.

Carlos, Ann M., and Elizabeth Hoffman. “The North American Fur Trade: Bargaining to a Joint Profit Maximum under Incomplete Information, 1804-1821.” Journal of Economic History 46, no. 4 (1986): 967-86.

Carlos, Ann M., and Frank D. Lewis. “Indians, the Beaver and the Bay: The Economics of Depletion in the Lands of the Hudson’s Bay Company, 1700-1763.” Journal of Economic History 53, no. 3 (1993): 465-94.

Carlos, Ann M., and Frank D. Lewis. “Property Rights, Competition and Depletion in the Eighteenth-Century Canadian Fur Trade: The Role of the European Market.” Canadian Journal of Economics 32, no. 3 (1999): 705-28.

Carlos, Ann M., and Frank D. Lewis. “Property Rights and Competition in the Depletion of the Beaver: Native Americans and the Hudson’s Bay Company.” In The Other Side of the Frontier: Economic Explorations in Native American History, edited by Linda Barrington, 131-149. Boulder, CO: Westview Press, 1999.

Carlos, Ann M., and Frank D. Lewis. “Trade, Consumption, and the Native Economy: Lessons from York Factory, Hudson Bay.” Journal of Economic History 61, no. 4 (2001): 465-94.

Carlos, Ann M., and Frank D. Lewis. “Marketing in the Land of Hudson Bay: Indian Consumers and the Hudson’s Bay Company, 1670-1770.” Enterprise and Society 2 (2002): 285-317.

Carlos, Ann M., and Stephen Nicholas. “Agency Problems in Early Chartered Companies: The Case of the Hudson’s Bay Company.” Journal of Economic History 50, no. 4 (1990): 853-75.

Clarke, Fiona. Hats. London: Batsford, 1982.

Crean, J. F. “Hats and the Fur Trade.” Canadian Journal of Economics and Political Science 28, no. 3 (1962): 373-386.

Corner, David. “The Tyranny of Fashion: The Case of the Felt-Hatting Trade in the Late Seventeenth and Eighteenth Centuries.” Textile History 22, no.2 (1991): 153-178.

de Vries, Jan. “Between Purchasing Power and the World of Goods: Understanding the Household Economy in Early Modern Europe.” In Consumption and the World of Goods, edited by John Brewer and Roy Porter, 85-132. London: Routledge, 1993.

Ginsburg, Madeleine. The Hat: Trends and Traditions. London: Studio Editions, 1990.

Haeger, John D. John Jacob Astor: Business and Finance in the Early Republic. Detroit: Wayne State University Press, 1991.

Harte, N.B. “The Economics of Clothing in the Late Seventeenth Century.” Textile History 22, no. 2 (1991): 277-296.

Heidenreich, Conrad E., and Arthur J. Ray. The Early Fur Trade: A Study in Cultural Interaction. Toronto: McClelland and Stewart, 1976.

Helm, June, ed. Handbook of North American Indians 6, Subarctic. Washington: Smithsonian, 1981.

Innis, Harold. The Fur Trade in Canada (revised edition). Toronto: University of Toronto Press, 1956.

Krech III, Shepard. The Ecological Indian: Myth and History. New York: Norton, 1999.

Lawson, Murray G. Fur: A Study in English Mercantilism. Toronto: University of Toronto Press, 1943.

McManus, John. “An Economic Analysis of Indian Behavior in the North American Fur Trade.” Journal of Economic History 32, no.1 (1972): 36-53.

Ray, Arthur J. Indians in the Fur Trade: Their Role as Hunters, Trappers and Middlemen in the Lands Southwest of Hudson Bay, 1660-1870. Toronto: University of Toronto Press, 1974.

Ray, Arthur J. and Donald Freeman. “Give Us Good Measure”: An Economic Analysis of Relations between the Indians and the Hudson’s Bay Company before 1763. Toronto: University of Toronto Press, 1978.

Ray, Arthur J. “Bayside Trade, 1720-1780.” In Historical Atlas of Canada 1, edited by R. Cole Harris, plate 60. Toronto: University of Toronto Press, 1987.

Rich, E. E. Hudson’s Bay Company, 1670 – 1870. 2 vols. Toronto: McClelland and Stewart, 1960.

Rich, E.E. “Trade Habits and Economic Motivation among the Indians of North America.” Canadian Journal of Economics and Political Science 26, no. 1 (1960): 35-53.

Shammas, Carole. “Changes in English and Anglo-American Consumption from 1550-1800.” In Consumption and the World of Goods, edited by John Brewer and Roy Porter, 177-205. London: Routledge, 1993.

Wien, Thomas. “Selling Beaver Skins in North America and Europe, 1720-1760: The Uses of Fur-Trade Imperialism.” Journal of the Canadian Historical Association, New Series 1 (1990): 293-317.

Citation: Carlos, Ann and Frank Lewis. “Fur Trade (1670-1870)”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/the-economic-history-of-the-fur-trade-1670-to-1870/

The Freedmen’s Bureau

William Troost, University of British Columbia

The Bureau of Refugees, Freedmen, and Abandoned Lands, more commonly known as the Freedmen’s Bureau, was a federal agency established to help Southern blacks transition from their lives as slaves to free individuals. The challenges of this transformation were enormous, as the Civil War devastated the region – leaving farmland dilapidated and massive amounts of capital destroyed. Additionally, the entire social order of the region was disturbed as slave owners and former slaves were forced to interact with one another in completely new ways. The Freedmen’s Bureau was an unprecedented foray by the federal government into the sphere of social welfare during a critical period of American history. This article briefly describes this unique agency, its colorful history, and the many functions that the bureau performed during its brief existence.

The Beginning of the Bureau

In March 1863, the American Freedmen’s Inquiry Commission was set up to investigate “the measures which may best contribute to the protection and improvement of the recently emancipated freedmen of the United States, and to their self-defense and self-support.”1 The commission debated various methods and activities to alleviate the current condition of freedmen and aid their transition to free individuals. Basic aid activities to alleviate physical suffering and provide legal justice, education, and land redistribution were commonly mentioned in these meetings and hearings. The commission examined many issues and came up with ideas that would become the foundation for the eventual Freedmen’s Bureau law. In 1864, the commission issued its final report, which laid out the basic philosophy that would guide the actions of the Freedmen’s Bureau.

“The sum of our recommendations is this: Offer the freedmen temporary aid and counsel until they become a little accustomed to their new sphere of life; secure to them, by law, their just rights of person and property; relieve them, by a fair and equal administration of justice, from the depressing influence of disgraceful prejudice; above all, guard them against the virtual restoration of slavery in any form, and let them take care of themselves. If we do this, the future of the African race in this country will be conducive to its prosperity and associated with its well-being. There will be nothing connected with it to excite regret or inspire apprehension.”2

When Congress finally got down to the business of writing a bill to aid the transition of the freedmen, it tried to integrate many of the American Freedmen’s Inquiry Commission’s recommendations. Originally the agency set up to aid in this transition was to be named the Bureau of Emancipation. However, when the bill came up for a vote on March 1, 1864, the name was changed to the Bureau of Refugees, Freedmen, and Abandoned Lands. This change was due in large part to objections that the bill was exclusionary and aimed solely at the aid of blacks; the new name was intended to enlarge support for the bill.

The House and the Senate argued about the bureau’s powers and where it should reside within the government. The House wanted the agency placed within the War Department, reasoning that the power used to free the slaves would be best suited to aid them in their transition. In the Senate, by contrast, Charles Sumner’s Committee on Slavery and Freedom wanted the bureau placed within the Department of the Treasury – as it had the power to tax and had possession of confiscated lands. Sumner felt that the freedmen “should not be separated from their best source of livelihood.”3 After a year of debate, a compromise was finally reached that entrusted the Freedmen’s Bureau with the administration of confiscated lands while placing the bureau within the Department of War. Thus, on March 3, 1865, with the stroke of a pen, Abraham Lincoln signed into existence the Bureau of Refugees, Freedmen, and Abandoned Lands. Selected to head the new bureau was General Oliver Otis Howard – commonly known as the Christian General. Howard had strong ties with the philanthropic community and forged close working relationships with freedmen’s aid organizations.

The Freedmen’s Bureau was active in a variety of aid functions. Eric Foner writes it was “an experiment in social policy that did not belong to the America of its day”.4 The bureau did important work in many key areas and had many functions that even today are not considered the responsibility of the national government.

Relief Services

A key function of the bureau, especially in the beginning, was to provide temporary relief for the suffering of destitute freedmen. The bureau provided rations for those most in need due to the abandonment of plantations, poor crop yields, and unemployment. A staggering number of both freedmen and refugees took advantage of this aid. A ration was defined as enough corn meal, flour, and sugar to feed a person for one week. In “the first 15 months following the war, the Bureau issued over 13 million rations, two thirds to blacks.”5 While this aid was deemed a great necessity, its scale also fostered tremendous anxiety for both General Howard and the general population – mainly that it would cause idleness. Because of these worries, General Howard ordered that this form of relief be discontinued in the fall of 1866.

Health Care

In a similar vein, the bureau also provided medical care to the recently freed slaves. The health situation of freedmen at the conclusion of the Civil War was atrocious. Frequent epidemics of cholera, poor sanitation, and outbreaks of smallpox killed scores of freedmen. Because the freed population lacked the financial assets to purchase private health care, and was denied care in many other cases, the bureau played a valuable role.

“Since hospitals and doctors could not be relied on to provide adequate health care for freedmen, individual bureau agents on occasion responded innovatively to black distress. During epidemics, Pine Bluff and Little Rock agents relocated freedpersons to less contagion-ridden places. When blacks could not be moved, agents imposed quarantines to prevent the spread of disease. General Order Number 8…prohibited new residents from congregating in towns. The order also mandated weekly inspections of freedmen’s homes to check for filth and overcrowding.”6

In addition to preventing and containing outbreaks, the bureau also engaged more directly in health care. Because it was placed in the War Department, the bureau was able to assume the operation of hospitals established by the Army during the war. After the war it expanded the system to areas previously not under military control. Observing that freedmen were not receiving an adequate quality of health services, the bureau established dispensaries providing basic medical care and drugs free of charge, or at a nominal cost. The bureau “managed in the early years of Reconstruction to treat an estimated half million suffering freedmen, as well as a smaller but significant number of whites.”7

Land Redistribution

Perhaps the most well-known function of the bureau was one that never came to fruition. During the course of the Civil War, the U.S. Army took control of a good deal of land that had been confiscated or abandoned by the Confederacy. From the time of emancipation there were rumors that confiscated lands would be provided to the recently freed slaves. This land would enable blacks to be economically self-sufficient and provide protection from their former owners. In January 1865, General Sherman issued Special Field Orders, No. 15, which set aside the Sea Islands and lands from South Carolina to Florida for blacks to settle. According to his order, each family would receive forty acres of land and the loan of horses and mules from the Army. Similar to General Sherman’s order, the promise of land was incorporated into the bureau bill. Quickly the bureau helped blacks settle some of the abandoned lands and “by June 1865, roughly 10,000 families of freed people, with the assistance of the Freedmen’s Bureau, had taken up more than 400,000 acres.”8

While the promise of “forty acres and a mule” excited the freedmen, the widespread implementation of this policy was quickly thwarted. In the summer of 1865, President Andrew Johnson issued special pardons restoring the property of many Confederates – throwing the status of abandoned lands into question. In response, General Howard, the Commissioner of the Freedmen’s Bureau, issued Circular 13, which told agents to set aside forty-acre tracts of land for the freedmen – as he claimed presidential pardons conflicted with the laws establishing the bureau. However, Johnson quickly instructed Howard to rescind his circular and send out a new one ordering the restoration to pardoned owners of all land except those tracts already sold. These actions by the President were devastating, as freedmen were evicted from lands that they had long occupied and improved. Johnson’s actions took away what many felt was the freedmen’s best chance at economic protection and self-sufficiency.

Judicial Functions

While the land redistribution of the new agency was thwarted, the bureau was able to perform many other duties. Bureau agents had judicial authority in the South, attempting to secure equal justice from the state and local governments for both blacks and white Unionists. Local agents individually adjudicated a wide variety of disputes. In some circumstances the bureau established courts where freedmen could bring forth their complaints. After the local courts regained their jurisdiction, bureau agents kept an eye on them, retaining the authority to overturn decisions that were discriminatory towards blacks. In May 1865, the Commissioner of the bureau issued a circular “authorizing assistant commissioners to exercise jurisdiction in cases where blacks were not allowed to testify.”9

In addition to these judicial functions, the bureau also helped provide legal services in the domestic sphere. Agents helped legitimize slave marriages and presided over freedmen marriage ceremonies in areas where black marriages were obstructed. Beginning in 1866, the bureau became responsible for filing the claims of black soldiers for back pay, pensions, and bounties. The claims division remained in operation until the end of the bureau’s existence. During a time when many of the states tried to strip rights away from blacks, the bureau was essential in providing freedmen redress and access to more equitable judicial decisions and services.

Labor Relations

Another important function of the bureau was to draw up work contracts to facilitate the hiring of freedmen. The abolition of slavery created economic confusion and stagnation, as many planters had a difficult time finding labor to work their fields. Additionally, many blacks were anxious and unsure about working for former slave owners. “Into this chaos stepped the Freedmen’s Bureau as an intermediary.”10 The bureau helped planters and freedmen draft contracts on mutually agreeable terms – negotiating several hundred thousand contracts. Once a contract was agreed upon, the agency tried to make sure both planter and worker lived up to their part of the agreement. In essence, the bureau “would undertake the role of umpire.”11

Of the bureau’s many activities this was one of its most controversial, and both planters and freedmen complained about the insistence on labor contracts. Planters complained that labor contracts forbade the corporal punishment used in the past; they resented the limits on their activities and felt the restrictions of the contracts limited the productivity of their workers. Freedmen, on the other hand, complained that the contract structures were too restrictive and didn’t allow them to move freely. In essence, the bureau had an impossible task – trying to get the freedmen to return to work for former slave owners while preserving their rights and limiting abuse. The bureau’s judicial functions were of great help in enforcing these contracts fairly, making both parties live up to their end of the bargain. While historians have split over whether the bureau favored planters or the freedmen, Ralph Shlomowitz, in his detailed analysis of bureau-assisted labor contracts, found that contract terms were determined by the free interplay of market forces.12 First, he finds that contracts brokered by the bureau were detailed to an extent that would have made little sense had the parties not expected them to be honored. Second, contrary to popular belief, he finds the share of crops received by labor was highly variable: in areas of higher-quality land the share awarded to labor was less than in areas with lower land quality, just as a competitive market would predict.

Educational Efforts

Prior to the Civil War it had been policy in the sixteen slave states to fine, whip, or imprison those who gave instruction to blacks or mulattos. In many states the punishments for teaching a person of color were quite severe. These laws severely restricted the educational opportunities of blacks – especially access to formal schooling. As a result, when given their freedom, many former slaves lacked the literacy skills necessary to protect themselves from discrimination and exploitation and to pursue many personal goals. This lack of literacy created great problems for blacks in a free labor system. Freedmen were repeatedly taken advantage of, as they were often unable to read or draft contracts. Additionally, individuals lacked the ability to read newspapers and trade manuals, or to worship by reading the Bible. Thus, upon emancipation there was a great demand for freedmen’s schools.

General Howard quickly realized that education was perhaps the most important endeavor the bureau could undertake. However, limited financial resources and the few functions the bureau was authorized to perform restricted the extent to which it was able to assist. Much of the early work in schooling was done by a number of benevolent and religious Northern societies. While the bureau’s direct aid was initially limited, it played an essential role in organizing and coordinating these organizations in their efforts. The agency also allowed the use of many buildings in the Army’s possession, and the bureau helped transport a trove of teachers from the North – commonly referred to as Yankee schoolmarms.

While the limits of the original Freedmen’s Bureau bill hamstrung the efforts of agents, subsequent bills changed the situation, as the purse strings and functions of the bureau in the area of education were rapidly expanded. This shift in attention followed the lead of General Howard, whose “stated goal was to close one after another of the original bureau divisions while the educational work was increased with all possible energy.”13 Among the provisions of the second bureau bill were: the appropriation of salaries for State Superintendents of Education, the repair and rental of school buildings, the ability to use military taxes to pay teachers’ salaries, and the establishment of the education division as a separate entity in the bureau.

These new resources were used to great success as enrollments at bureau-financed schools grew quickly, new schools were constructed in a variety of areas, and the quality and curriculum of the schools was significantly improved. The Freedmen’s Bureau was very successful in establishing a vast network of schools to help educate the freedmen. In retrospect this was a Herculean task for the federal government to accomplish. In a region where it was illegal to teach blacks how to read or write just a few years prior, the bureau was able to help establish nearly 1,600 day schools educating over 100,000 blacks at a time. The number of bureau-aided day and night schools in operation grew to a maximum of 1,737 in March 1870, employing 2,799 teachers, and instructing 103,396 pupils. In addition, 1,034 Sabbath schools were aided by the bureau that employed 4,988 teachers and instructed 85,557 pupils.

Matching the Integrated Public Use Sample of the 1870 Census with a constructed data set on bureau school locations, one can examine the reach and prevalence of bureau-aided schools.14 The summary statistics below cover various school-concentration measures and educational outcomes for black children aged 10 to 15.

The variable “Freedmen’s Bureau School” equals one if there was at least one bureau-aided school in the individual’s county. The data reveal that 63.6 percent of blacks lived in counties with at least one bureau school. This shows the bureau was quite effective in reaching a large segment of the black population – nearly two thirds of blacks living in the states of the ex-Confederacy had at least some minimal exposure to these schools. While the schools were widespread, their concentration was somewhat low. For individuals living in a county with at least one bureau-aided school, the concentration was 0.3165 bureau-aided schools per 30 square miles, or 0.4630 per 1,000 blacks.

Although the concentration of schools was somewhat low, it appears they had a large impact on the educational outcomes of Southern blacks. Ten- to fifteen-year-olds living in a county with at least one bureau-aided school had literacy rates 6.1 percentage points higher than those living elsewhere. This appears to have been driven by the bureau increasing access to formal education for black children in these counties, as school attendance rates were 7.5 percentage points higher than in counties without such schools.
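
To make the construction of these measures concrete, here is a minimal sketch in Python of how one might compute them from an individual-level file. It is not the author’s code, and the file and column names (census_1870_merged.csv, n_bureau_schools, and so on) are hypothetical stand-ins for an IPUMS 1870 extract merged with county-level counts of bureau-aided schools.

    import pandas as pd

    # One row per black individual aged 10-15, with county-level school
    # counts already merged in (all names here are hypothetical).
    df = pd.read_csv("census_1870_merged.csv")

    # Indicator: at least one bureau-aided school in the individual's county.
    df["bureau_school"] = (df["n_bureau_schools"] > 0).astype(int)
    share_exposed = df["bureau_school"].mean()  # the text reports ~0.636

    # Concentration measures, among those with at least one school nearby.
    exposed = df[df["bureau_school"] == 1]
    per_area = (exposed["n_bureau_schools"]
                / (exposed["county_sq_miles"] / 30)).mean()    # ~0.3165
    per_pop = (exposed["n_bureau_schools"]
               / (exposed["county_black_pop"] / 1000)).mean()  # ~0.4630

    # Raw literacy and attendance gaps between exposed and unexposed counties.
    gaps = df.groupby("bureau_school")[["literate", "attends_school"]].mean()
    print(share_exposed, per_area, per_pop)
    print(gaps.loc[1] - gaps.loc[0])  # text reports ~+0.061 and ~+0.075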

Andrew Johnson and the Freedmen’s Bureau

Only eleven days after signing the bureau into existence, Abraham Lincoln was struck down by John Wilkes Booth. Taking his place in office was Andrew Johnson, a former Democratic Senator from Tennessee. Despite Johnson’s Southern roots, hopes were high that Congress and the new President could work together more closely than had been possible under the previous administration. President Lincoln and Congress had championed vastly different policies for Reconstruction. Lincoln preferred the term “Restoration” to “Reconstruction,” as he felt it was constitutionally impossible for a state to secede.15 Lincoln championed the quick integration of the South into the Union and believed it could best be accomplished under the direction of the executive branch. By contrast, Republicans in Congress led by Charles Sumner and Thaddeus Stevens felt the Confederate states had actually seceded and relinquished their constitutional rights. The Republicans in Congress advocated strict conditions for re-entry into the Union and programs aimed at reshaping Southern society.

The ascension of Johnson to the presidency gave Congress hope that it would have an ally in the White House in terms of Reconstruction philosophy. According to Howard Nash, the “Radicals were delighted … to have Vice President Andrew Johnson, who they had good reason to suppose was one of their number, elevated to the presidency.”16 In the months before and immediately after taking office, Johnson repeatedly talked about the need to punish rebels in the South. After Lincoln’s death Johnson became more impassioned in his speeches. In late April 1865 Johnson told an Indiana delegation, “Treason must be made odious…traitors must be punished and impoverished…their social power must be destroyed.”17 If anything, many feared that Johnson might stray too far from the Presidential Reconstruction offered by Lincoln and be overly harsh in his treatment of the South.

Immediately after taking office, Johnson honored Lincoln’s choice by appointing General Oliver Otis Howard as commissioner of the bureau. While this action raised hopes in Congress that it would be able to work with the new administration, Johnson quickly switched course. After his selection of Howard, President Johnson and the “Radical” Republicans would scarcely agree on anything during the remainder of his term. On May 29, 1865, Johnson issued a proclamation that conferred amnesty, pardon, and the restoration of property rights on almost all Confederate soldiers who took an oath pledging loyalty to the Union. Johnson later came out in support of the black codes of the South, which tried to bring blacks back to a position of near slavery, and argued that the Confederate states should be accepted back into the Union without the condition of ratifying the Fourteenth Amendment.

The original bill signed by Lincoln established the bureau for the duration of the Civil War and one year thereafter. The language of the bill was somewhat ambiguous, and with the surrender of Confederate forces military conflict had ceased, leading to debate over when the bureau would be discontinued. The consensus seemed to be that if another bill wasn’t brought forth, the bureau would be discontinued in early 1866. In response, Congress quickly got to work on a new Freedmen’s Bureau bill.

While Congress started work on a new bill, President Johnson tried to gain support for the view that the need for the bureau had come to an end. The President called upon Ulysses S. Grant to make a whirlwind tour of the South and report on the present situation. The route was exceptionally brief and skewed toward the areas most firmly under control. Accordingly, Grant’s report said that the Freedmen’s Bureau had done good work and that the freedmen now appeared able to fend for themselves without the help of the federal government.

In contrast, Carl Schurz made a long tour of the South only a few months after Grant and found the freedmen in a much different situation. In many areas the bureau was viewed as the only restraint on the most insidious treatment of blacks. As Q.A. Gillmore stated in a document accompanying the report,

“For reasons already suggested I believe that the restoration of civil power that would take the control of this question out of the hands of the United States authorities (whether exercised through the military authorities or through the Freedmen’s Bureau) would, instead of removing existing evils, be almost certain to augment them.”18

While the first bill was adequate in many ways, it was rather weak in a few areas. In particular, it made no appropriations for officers of the bureau and earmarked no funds for the establishment of schools. General Howard and many of his officers reported on the great need for the bureau and pushed for it to continue indefinitely, or at least until the freedmen were in a less vulnerable position. After listening to these reports and the recommendations of General Howard, a new bill was crafted by Senator Lyman Trumbull, a moderate Republican. The new bill proposed that the bureau remain in existence until abolished by law, provide more explicit aid for education and land to the freedmen, and protect the civil rights of blacks. The bill passed both the Senate and House and was sent to Andrew Johnson, who promptly vetoed the measure. In his response to the Senate, Johnson wrote, “there can be no necessity for the enlargement of the powers of the bureau for which provision is made in the bill.”19

While the President’s message was definitive, the veto came as a shock to many in Congress. President Johnson had been consulted prior to its passage and had assured General Howard and Senator Trumbull that he would support the bill. In response to the President’s opposition, the Senate and House passed a bill that addressed some of Johnson’s complaints, including limiting the bureau’s life to two more years. Even after this watering down, the bill was once again vetoed. This time, however, it garnered enough support to override President Johnson’s veto. The veto and the subsequent override officially established a policy of open hostility between the legislative and executive branches. Prior to the Johnson administration, overriding a veto was extremely rare – it had occurred only six times.20 After the passage of this bill, however, it became commonplace for the remainder of Johnson’s term, as Congress would override fifteen vetoes during the less than four years Johnson was in office.

End of the Bureau

While work in the educational division picked up after the passage of the second bill, many of the other activities of the bureau were winding down. On July 25, 1868, a bill was signed into law requiring the withdrawal of most bureau officers from the states and halting the functions of the bureau except those related to education and claims. Although the educational activities of the bureau were to continue for an indefinite period, most state superintendent of education offices had closed by the middle of 1870. On November 30, 1870, Rev. Alvord resigned his post as General Superintendent of Education.21 Some small activities of the bureau continued after his resignation, but they were scaled back greatly and largely consisted of correspondence. Finally, due to lack of appropriations, the activities of the bureau ceased in March 1871.

The expiration of the bureau was somewhat anti-climactic. A number of representatives wanted to establish a permanent bureau or organization for blacks to regulate their relations with the national and state governments.22 However, this concept was too radical to pass by a margin large enough to override a veto. There was also talk of moving many of the bureau’s functions into other parts of the government, but over time the appropriations dwindled and the urgency to work out a transfer proposal withered away, much like the bureau itself.

References

Alston, Lee J. and Joseph P. Ferrie. “Paternalism in Agricultural Labor Contracts in the U.S. South: Implications for the Growth of the Welfare State.” American Economic Review 83, no. 4 (1993): 852-76.

American Freedmen’s Inquiry Commission. Records of the American Freedmen’s Inquiry Commission, Final Report, Senate Executive Document 53, 38th Congress, 1st Session, Serial 1176, 1864.

Cimbala, Paul and Randall Miller. The Freedmen’s Bureau and Reconstruction: Reconsiderations. New York: Fordham University Press, 1999.

Congressional Research Service, http://clerk.house.gov/art_history/house_history/vetoes.html

Finley, Randy. From Slavery to Uncertain Freedom: The Freedmen’s Bureau in Arkansas, 1865-1869. Fayetteville: University of Arkansas Press, 1996.

Johnson, Andrew. “Message of the President: Returning Bill (S.60),” p. 3, 39th Congress, 1st Session, Executive Document No. 25, February 19, 1866.

McFeely, William S. Yankee Stepfather: General O.O. Howard and the Freedmen. New York: W.W. Norton, 1994.

Milton, George Fort. The Age of Hate: Andrew Johnson and the Radicals. New York: Coward-McCann, 1930.

Nash, Howard P. Andrew Johnson: Congress and Reconstruction. Rutherford, NJ: Fairleigh Dickinson University Press, 1972.

Parker, Marjorie H. “Some Educational Activities of the Freedmen’s Bureau.” Journal of Negro Education 23, no. 1 (1954): 9-21.

Q.A. Gillmore to Carl Schurz, July 27, 1865, Documents Accompanying the Report of Major General Carl Schurz, Hilton Head, SC.

Ruggles, Steven, Matthew Sobek, Trent Alexander, Catherine A. Fitch, Ronald Goeken, Patricia Kelly Hall, Miriam King, and Chad Ronnander. Integrated Public Use Microdata Series: Version 3.0 [Machine-readable database]. Minneapolis, MN: Minnesota Population Center [producer and distributor], 2004.

Shlomowitz, Ralph. “The Transition from Slave to Freedman Labor Arrangements in Southern Agriculture, 1865-1870.” Journal of Economic History 39, no. 1 (1979): 333-36.

Shlomowitz, Ralph. “The Origins of Southern Sharecropping.” Agricultural History 53, no. 3 (1979): 557-75.

Simpson, Brooks D. “Ulysses S. Grant and the Freedmen’s Bureau.” In The Freedmen’s Bureau and Reconstruction: Reconsiderations, edited by Paul A. Cimbala and Randall M. Miller. New York: Fordham University Press, 1999.

Citation: Troost, William. “Freedmen’s Bureau”. EH.Net Encyclopedia, edited by Robert Whaples. June 5, 2008. URL http://eh.net/encyclopedia/the-freedmens-bureau/

Fraternal Sickness Insurance

Herb Emery, University of Calgary

Introduction

During the nineteenth and early twentieth centuries, lost income due to illness was one of the greatest risks to the standard of living of a wage earner’s household (Horrell and Oxley 2000, Hoffman 2001). Prior to the introduction of state health insurance in England in 1911, similar “patchworks of protection” — including fraternal organizations, trade unions and workplace-based mutual benefit associations, commercial insurance contracts, and discretionary charity — were available to workers in both England and North America. Within the patchwork, the largest source of illness-related income protection was the friendly societies: voluntary organizations, such as fraternal orders and trade unions, that provided stipulated amounts of “relief” for members who were sick and unable to work. Conditions have changed since the 1920s. Health care for family members, not loss of the family head’s income, has become the chief cost of sickness. Government social programs and commercial group plans have become the principal sources of disability insurance and health insurance. Friendly societies have largely discontinued their sick benefits, and most of them have had declining memberships in growing populations.

Overview

This article

  • Explains the types of fraternal orders that existed in the late nineteenth and early twentieth centuries and the types of insurance they offered.
  • Provides estimates of the share of the adult male population that participated in fraternal self-help organizations – over 40 percent in the UK and almost as high in the US – and describes the characteristics of these societies’ members.
  • Explains how friendly societies worked to provide sickness insurance at a reasonable price by overcoming the adverse selection and moral hazard problems, while facing problems of risk diversification.
  • Discusses the decline of fraternal sickness insurance after the turn of the twentieth century.
    • Concludes that fraternal lodges were financially sound despite claims that they were weakened by unsound pricing of sickness insurance.
    • Examines the impact of competition from other insurers – including group insurance, government programs, labor unions, and company-sponsored sick-benefits societies.
    • Examines the impact of broader social and economic changes.
    • Concludes that fraternal sickness insurance was in greatest demand among young men and that its decline is tied mainly to the ageing of fraternal membership.
  • Closes by examining historians’ assessments of the importance and adequacy of fraternal sickness insurance.
  • Includes a lengthy bibliography of sources on fraternal sickness insurance.

Some Details and Definitions Pertaining to Fraternal Sickness Insurance

Fraternal orders were affiliated societies, or societies with branches. The branches were known by various names such as lodges, courts, tents, and hives. Fraternal orders emphasized benefits to their members rather than service to the community. They used secret passwords, rituals, and benefits to attract, bond, and hold members and distinguish themselves from members of rival orders.

Fraternal orders fell into three groups from an insurance perspective. The Masonic order and the Elks comprised the no-benefit group. Lodges in these orders often aided their members on a discretionary basis; that is, where members were determined to be in “need” of assistance. They did not provide stipulated (stated) insurance benefits (or relief).

A second group, the friendly societies, provided stipulated sick and funeral benefits to their members. The Independent Order of Odd Fellows, the Knights of Pythias, the Improved Order of Red Men, the Loyal Order of Moose, the Fraternal Order of Eagles, the Ancient Order of Foresters and the Foresters of America were the largest orders in this group.

A third group, the life-insurance orders, provided stipulated life-insurance, endowment, and annuity benefits to their members. The Maccabees, the Royal Arcanum, the Independent Order of Foresters, the Woodmen of the World, the Modern Woodmen of America, the Ancient Order of United Workmen, and the Catholic Order of Foresters were major orders in this group. In historical usage, the term “fraternal insurance” meant life insurance, but not sickness and funeral (burial) insurance.

The boundaries between the categories blur on close examination. Certain friendly societies, such as the Knights of Pythias and the Improved Order of Red Men, offered optional life insurance at extra cost through their centrally-administered endowment branches. Certain insurance orders, such as the Independent Order of Foresters, offered optional sick and funeral benefits at extra cost through centrally-administered sickness and funeral funds. In other cases, the members of a society had privileged access to third-party insurance. The Canadian Odd Fellows Relief Association, for example, was entirely separate from the Independent Order of Odd Fellows (IOOF), but sold life policies exclusively to Odd Fellows.

Friendly Societies and Sickness Insurance

In the late eighteenth and early nineteenth centuries, friendly societies were often local lodges with no affiliations to other lodges. Over time, larger national and sometimes international orders, consisting of local lodges affiliated under jurisdictional grand lodges and national or international supreme bodies, displaced the purely local lodge.1 The Ancient Order of Foresters was one of England’s larger affiliated orders, and it had subordinate Courts and jurisdictions in North America. The first IOOF subordinate lodge in North America opened in Baltimore in 1819 under the jurisdiction of the British IOOF Manchester Unity. In the 1840s, the North American Odd Fellows seceded from the IOOF Manchester Unity and founded the IOOF Sovereign Grand Lodge (SGL), which had jurisdiction over state- and province-level Grand Lodge jurisdictions in North America.

Membership Estimates

For the United Kingdom near the peak of the self-help movement in the 1890s, estimates of participation in friendly societies and trade unions for insurance against the costs of sickness and/or burial range from 20 percent of the population (Horrell and Oxley 2000), to 41.2 percent of adult males (Johnson 1985), to one-half or more of adult males and as many as two-thirds of workingmen (Riley 1997). Estimates of participation in self-help organizations in North America are somewhat lower, but they suggest a similar importance of friendly societies for insuring households against the costs of sickness and burial. Beito (1999) argues that a conservative estimate of participation in fraternal self-help organizations in the United States would have one in three adult males as a member in 1920, “including a large segment of the working class.” Millis (1937) reports that 30 per cent of Illinois wage-earners had market insurance for the disability risk in 1919, with fraternal organizations the principal source of that insurance.

Characteristics of Friendly Society Members

Studies of British friendly societies suggest that friendly society membership was the “badge of the skilled worker” and made no appeal whatever to the “grey, faceless, lower third” of the working class (Johnson 1985, Hopkins 1995, Riley 1997). The major friendly societies in North America found their market for insurance among white, Protestant males from upper-working-class and lower-middle-class backgrounds. Not surprisingly, the composition of local lodge memberships bore a resemblance to that of the local working population. Most Odd Fellows in Canada and the United States, however, were higher-paid workers, shopkeepers, clerks, and farmers (Emery and Emery 1999). As Theodore Ross, the SGL’s grand secretary, noted in 1890, American Odd Fellows came from “the great middle, industrial classes almost exclusively.” Similarly, studies for Lynn, Massachusetts and Missouri found a heavy working-class representation among IOOF lodge memberships (Cumbler, 1979, p. 46; Thelen, 1986, p. 165). In Missouri the social-class composition of Odd Fellows was similar to that of the Knights of Pythias and three life-insurance orders (the Ancient Order of United Workmen, the Maccabees, and the Modern Woodmen of America). Beito’s (2000) work suggests that while the poor, non-whites, and immigrants were not usually members of the larger fraternal orders, they had mutual aid organizations of their own.

Friendly Insurance: Modest Benefits at Low Cost

Friendly society sick benefits exemplified the classic features of working-class insurance: a low cost and a small, fixed benefit equal to part of the wages of a worker of average earnings. By contrast, commercial policies for middle-class clients offered insurance in variable amounts up to full income replacement, at a cost beyond the reach of most workers. The affiliated orders established constitutions that standardized rules and arrangements for sick benefit provision. For most of the friendly societies, local lodges or courts paid the sick claims of their members. Subject to requirements of higher bodies, the local lodge set the amounts of its weekly benefit, joining fees, and membership dues. The affiliation of lodges across locations also gave members portable sickness insurance: if a member moved from one location to another, he could transfer his membership from one lodge to another within the organization.

Claiming Benefits

To claim benefits in the IOOF, a member had to provide his lodge with notice of sickness or disability within a week of its commencement. On receiving notice of a brother’s illness, a member of the visiting committee was to visit the brother within twenty-four hours to render him aid and confirm his sickness. Subsequently, the lodge visitors reported weekly on the brother’s condition until he recovered.

Strengths of Friendly-Society Insurance: Low Overhead, Effective Monitoring

The local lodge or court system of the affiliated friendly societies like the IOOF and the Ancient Order of Foresters had important strengths for the sickness-insurance market. First, it had low overhead costs. Lodge members, not paid agents, recruited clients. Nominally-paid or unpaid lodge officers did the administrative work. Second, the intrusive methods of monitoring within the lodge system helped friendly societies to respond effectively to two classic problems in sickness insurance: adverse selection and moral hazard.

Overcoming the Adverse Selection Problem

Adverse selection refers to the fact that when insurance is priced to reflect the average risk of a specified population, unhealthy persons (with an above-average risk of sickness) have more incentive than healthy persons to purchase sickness insurance. Adverse selection in fraternal memberships was potentially a large problem, as many orders had membership dues that were not scaled by age despite the reality that the risk of sickness increased with age. To keep claims and costs manageable, an insurer needs ways to screen out poor risks. To this end, many organizations scaled initiation fees by the age of an initiate to discourage applications from older males, who had above-average sickness risk. In other cases, fraternal lodges or courts scaled membership dues by the age at which the member was initiated. In addition, lodge-approved physicians often examined the physical condition and health histories of applicants for membership, and lodge committees investigated the “moral character” of applicants.

Overcoming the Moral Hazard Problem

Sickness insurers also faced the problem of moral hazard (malingering) — an insured person has an incentive to claim to be disabled when he is not, and an incentive not to take due care in avoiding injury or illness. The moral hazard problem was small for accident insurance, as disability from accident is definite as to time and cause, and external symptoms are usually self-evident (Osborn, 1958). Disability from sickness, by contrast, is subjective and variable in definition. Friendly societies defined sickness, or disability, as the inability to work at one’s usual occupation. Relatively minor complaints disabled some individuals, while serious complaints failed to incapacitate others. The very possession of sickness insurance may have increased a worker’s willingness to consider himself disabled. The friendly society benefit contract dealt with this problem in several ways. First, by imposing one- to two-week waiting periods and much less than full earnings replacement, self-help benefits required the disabled member to co-insure the loss, which reduced the incentive to make a claim. In many fraternal orders, members receiving benefits could not drink or gamble, and in some cases were not allowed to be away from their residence after dark. The activities of the lodge visiting committee helped to ward off false claims. In addition, fraternal ideology emphasized a member’s moral responsibility not to make a false claim and to report on brothers who were falsely claiming benefits.
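
The co-insurance logic is easy to see in a small worked example; the dollar figures below are assumptions chosen for illustration, not taken from any lodge’s records.

    weekly_wage = 12.0     # assumed wage of the insured member
    weekly_benefit = 5.0   # assumed stipulated benefit, well below the wage
    waiting_weeks = 1      # benefit is paid only after the waiting period

    def out_of_pocket(sick_weeks):
        """Lost wages net of benefits for an illness of sick_weeks."""
        paid_weeks = max(0, sick_weeks - waiting_weeks)
        return sick_weeks * weekly_wage - paid_weeks * weekly_benefit

    # A four-week illness costs $48 in wages but returns only $15 in
    # benefits: the member bears $33, roughly 69 percent of the loss,
    # which blunts the incentive to malinger.
    print(out_of_pocket(4))  # 33.0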

Problem with Lack of Risk Diversification

On the negative side, the fraternal-lodge system made little provision for risk diversification. In the IOOF, the Knights of Pythias and the Ancient Order of Foresters, each subordinate lodge (or Court) was responsible for the sick claims of its members. Thus, in principle, a high local rate of sick claims in a given year could shock a lodge’s financial condition. Certain commercial practices might have reduced the problem. For example, a grand lodge could have pooled the risks from all lodges in a central fund. Alternatively, it could have initiated a scheme of reinsurance, whereby each lodge assumed a portion of the claims in other lodges. Yet any centralization stood to weaken a friendly society’s management of adverse selection and moral hazard, since the behaviour of lodge members responded to the structure of the benefit system. In 1908, for example, when the IOOF, Manchester Unity, in New South Wales, Australia established central funds for sick and funeral benefits, the effect was to turn the lodges into “mere collection agencies.” Participation in lodge affairs fell off, and members developed a more selfish attitude to claims. “When the lodges administered sick pay,” Green and Cromwell observed, “the members knew who was paying — it was the members themselves. But once ‘head office’ took over, the illusion that someone else was paying made its entry” (Green and Cromwell, 1984, pp. 59-60).
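
The diversification the lodges forwent is the familiar law-of-large-numbers effect, sketched below with an invented claim probability; the sketch quantifies only the risk-pooling gain, not the offsetting loss of monitoring described above.

    import math

    p = 0.10  # assumed chance a member makes a sick claim in a given year

    for members in (100, 1000, 10000):
        # standard deviation of the realized claim rate in a pool this size
        sd = math.sqrt(p * (1 - p) / members)
        print(f"{members:>6} members: claim-rate sd = {sd:.4f}")

    # For a 100-member lodge, a two-standard-deviation year lifts the claim
    # rate from 10 to 16 percent; for a 10,000-member pool, only to about
    # 10.6 percent. Pooling tames the bad years -- at the cost of the
    # lodge-level monitoring that kept claims honest.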

Commercial Insurers Couldn’t Match Friendly Societies in the Working-Class Sickness Insurance Market

On balance, friendly societies provided an efficient delivery of working-class sickness insurance that commercial insurers could not match. Without the intrusive screening methods and low overhead of the decentralized lodge system, commercial insurers could not as easily solve the problems of moral hazard and adverse selection. “The assurance of a stipulated sum during sickness,” the president of the Prudential Insurance Company conceded in 1909, “can only safely be transacted … by fraternal organizations having a perfect knowledge of and complete supervision over the individual members.”2

The Decline of Fraternal Sickness Insurance

By the 1890s, friendly societies in North America were withdrawing from the sickness insurance field. The IOOF imposed limits on the length of time that full sick benefits had to be paid, and one- or two-week waiting periods before the payment of claims began. In 1894, the Knights of Pythias eliminated their constitutional requirement that all subordinate lodges pay stated sick benefits. By the 1920s, the IOOF had followed the Knights of Pythias and eliminated its compulsory requirement for the payment of stipulated sick benefits. In England, where friendly societies had opposed government pension and insurance schemes in the 1890s, they did not stand in the way of the introduction of Old Age Pensions in 1908 and compulsory state health insurance in 1911. Thus, the decline of fraternal sickness insurance pre-dates the Depression of the 1930s and, for many organizations, dates from at least the 1890s.

Unsound Pricing Practices?

Why did sickness insurance provided by friendly societies decline? Perhaps friendly society sickness insurance was a casualty of unsound pricing practices in the presence of ageing memberships. To illustrate this argument, consider the IOOF benefit contract. On the one hand, the incidence and duration of sickness claims increased with a member’s age. On the other hand, most IOOF lodges set quarterly dues at a flat rate, rather than by the member’s age, or the member’s age at joining. As the IOOF lodge benefit arrangement was essentially insurance benefits provided on a pay-as-you-go basis (current revenues are used to meet current expenditures), this posed little problem during a lodge’s early years when its members were young and had low sick-claim rates. Over time, however, the members aged and their claim rates showed a rising trend. When revenues from level dues became insufficient to cover claims, the argument goes, the lodge’s insurance provision collapsed. Thus fraternal-insurance provision was essentially a failed, experimental phase in the development of sickness and health insurance.

Lodges Were Financially Sound Despite Non-Actuarial Pricing

By contrast with the above scenario, evidence for British Columbia showed that the IOOF lodges were financially sound despite their non-actuarial pricing practices (Emery 1996). Typically a lodge accumulated assets during its first years of operation, when its members were young and had below-average sickness risk. In later years, as its membership aged and the cost of claims exceeded income from members’ dues and fees, income from investments made up the difference. Consequently, none of British Columbia’s twenty lodge closures before 1929 resulted from the exhaustion of lodge assets. Similarly, none of the British Columbia lodges faced a significant probability of ruin from high claims in a particular year.

Non-payment of dues also helped lodge finances. A member became ineligible for benefits if he fell behind in his dues. If he fell far enough behind on his dues, his lodge could suspend him from membership or declare him “ceased” (dropped from membership). A member’s unpaid dues continued to accumulate after suspension. Thus a suspended member had to pay the full, accumulated amount (or a maximum sum, if his grand lodge set one), to get reinstated. Lodges did not pay sick claims to members who were in arrears.

Turnover of Membership Explains How They Remained Financially Sound

When suspended members did not pay the dues owing for reinstatement, their exit from membership relieved lodge financial pressures. Most men joined fraternal lodges when they were under age 35, and members who quit typically did so before age 40.3 Thus, a substantial proportion of initiates did not remain in the membership long enough for their rising risk of illness after age 40 to pose a problem for lodge finances. On average, they belonged when they were most likely net payers and quit before they became net recipients. This substantial turnover in fraternal memberships helps to explain how fraternal lodges were actually going concerns even though official actuarial valuations of lodge finances and reserves inevitably showed actuarial deficits at the prevailing levels of dues. These valuations asked whether accumulated reserves, plus the dues revenues expected from current members over their remaining lifetimes, would cover the benefits those members could expect to claim over the remainder of their lives. The assumption that all current members would remain in the membership until death always produced valuations showing that the sick benefits were inadequately, if not hazardously, priced. The fact that many members were not lifetime members meant that the pricing was not so hazardous.
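
A stylized calculation illustrates the point; every figure below is invented for illustration (and investment income, which in practice cushioned ageing lodges, is ignored). It shows why the same flat dues can be sound for a membership that turns over around age 40 yet look hazardous in a valuation that assumes every member stays until death.

    def expected_claim(age):
        """Assumed expected annual claim cost: $1 at age 25, rising by
        15 cents for each year of age thereafter."""
        return 1.0 + 0.15 * (age - 25)

    flat_dues = 4.0  # assumed flat annual dues, the same at every age

    for label, ages in [("turnover (ages 25-39)", range(25, 40)),
                        ("lifetime (ages 25-74)", range(25, 75))]:
        avg = sum(expected_claim(a) for a in ages) / len(ages)
        print(f"{label}: average claim ${avg:.2f} vs dues ${flat_dues:.2f}")

    # turnover: average claim $2.05 < $4.00 dues -> the lodge runs a surplus
    # lifetime: average claim $4.67 > $4.00 dues -> the valuation shows a deficit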

Competition from Other Insurers

If poor finances cannot explain the decline of friendly society sickness benefits, then perhaps increasing competition from government and commercial insurance arrangements can. The trends, however, do not provide strong support for this explanation either. Competition for friendly societies came from commercial group plans, government workmen’s compensation programs, trade unions and industrial unions, company-sponsored mutual benefit societies, and other fraternal orders that provided life insurance or non-stipulated (discretionary) relief.

Group Insurance

Group insurance used the employer’s mass-purchasing power to provide low-cost insurance without a medical examination (Ilse, 1953, chapter 1). Often the employer paid the premium. Otherwise employees paid part of the cost through payroll deductions, a practice that kept the insurer’s overhead costs low. The insurance company made the group-plan contract with the employer, who then issued certificates to individuals in the plan. Group plans compared favourably with IOOF benefits in terms of cost and the amount of the benefit. They also gave a viable commercial solution to the problems of adverse selection and moral hazard.

During the 1920s, however, group plans were available to few workers. In the United States, they missed men who were self-employed or employed in firms with fewer than fifty workers. The employee’s coverage ceased if he left the company. It also stopped if either the insurer or the employer did not renew the contract at the end of its standard one-year term. When coverage ceased, the employee might find himself too old or unhealthy to obtain insurance elsewhere. More importantly, the challenge of commercial group insurance was only beginning during the 1920s. By 1929 the Americans and Canadians in group plans were outnumbered by the Odd Fellows alone.

Government Programs

Government programs such as compulsory sickness insurance dated from 1883 in Germany and 1911 in Britain. Between 1914 and 1920, eight state commissions, two national conferences, and several state legislatures attended to the issue in the United States (see Armstrong, 1932, Beito 2000, Hoffman 2001). Despite these initiatives, no American or Canadian government — national, state, or provincial — adopted compulsory sickness insurance until the 1940s (Osborn, 1958, chapter 4; Ilse, 1953, chapter 8).

Workmen’s compensation was another matter. During the years 1911-25, forty-two of the forty-eight American states and six of Canada’s nine provinces passed workmen’s compensation laws (Weis, 1935; Leacy, 1983). Nevertheless, half of all state laws in 1917, and a fifth of them in 1932, applied only to persons in hazardous occupations. None of the various state laws covered employees of interstate railways. In twenty-four states, the law exempted small businesses; in five it exempted public employees. In some states the law was so hedged with restrictions that the scale of benefits was uncertain. Although comprehensive by American standards, Ontario’s law omitted persons in farming, wholesale and retail establishments, and domestic service (Guest, 1980).

Overall, government programs provided negligible competition for friendly society sick benefits during the 1920s. No state or province provided for compulsory sickness insurance. Workmen’s compensation laws were commonplace, but missed important parts of the workforce. More importantly, industrial accidents accounted for just ten percent of all disability (Armstrong, 1932, pp. 284ff; Osborn, 1958, chapter 1).

Labor Unions

Labor unions traditionally used benefits to attract members and hold the loyalty of existing members. During the 1890s, miners’ unions in the American West and British Columbia reportedly devoted more time to mutual aid than to collective bargaining (Derickson, 1988, chapter 3). By 1907 nineteen unions, accounting for 25 per cent of organized labor in the United States, offered sick benefits (Rubinow, 1913, chapter 18). During the 1920s, however, competition from unions followed a declining trend. After years of steady growth, for example, the membership of American trade unions dropped by 32 per cent between 1920 and 1929.4 Similarly, the membership of Canadian trade unions fell by 23 per cent between 1919 and 1926. In an unprecedented development in 1926, the street railway workers’ union in Newburgh, New York, obtained commercial group-sickness coverage through a collective bargaining agreement with the employer (Ilse, 1953, ch. 13). Although rare during the 1920s, this marked the start of collective bargaining for sick benefits rather than direct union provision.

Company-sponsored Sick-Benefit Societies

Company-sponsored sick-benefit societies, often known as Mutual Benefit Associations, originated in a tradition of corporate paternalism during the 1870s (Brandes, 1976; Brody, 1980; Zahavi, 1988; McCallum, 1990). The United States had more than 500 such societies by 1908. Typically these societies obtained most or all of their funds from employee dues, not company funds, ostensibly to encourage the workers to be self-reliant.

Participation was voluntary in 85 per cent of 461 American societies surveyed on the eve of the First World War. Eligibility for membership commonly required a waiting period (a minimum period of permanent employment). A major disadvantage, compared to fraternal-order sick benefits, was that coverage ceased when the employee left the firm. In the amount and cost of the benefit ($5 to $6 per week for up to thirteen weeks, for annual dues of $2.50 to $6) the societies were similar to fraternal lodges.

The institutions were part of a larger program of corporate welfarism that had developed during the First World War in conditions of labor scarcity, labor unrest, rising union membership, and government management of capital-labor relations. At the war’s end, however, the economy slumped, the supply of labor became abundant, unions became cooperative and were losing members, and wartime government-economic management ended. In the new circumstances, the pressure on businessmen to promote welfare programs abated, and the membership of company-sponsored sick-benefit societies entered a flat trend.5 By 1929 the societies were still a minority phenomenon. They existed in 30 percent of large firms (250 or more employees), but in just 4.5 percent of small firms, which accounted for half the industrial work force (Jacoby, 1985, ch.6).

Competition from Insurance Orders

Friendly societies (orders with sick and funeral benefits) also competed with the insurance orders (orders with life and/or annuity benefits in small amounts) that offered an optional sick benefit. The Maccabees, the Woodmen of the World, the Independent Order of Foresters (IOF), and the Royal Arcanum were the friendly societies’ main rivals in the insurance-order group.

The insurance-order sick benefit had several features of commercial insurance and compared poorly with the friendly-society benefit. In many cases, these orders paid sick claims from a centrally-administered “sick and funeral fund,” not local lodge funds. They financed sick claims by requiring monthly premiums, paid in advance, not quarterly dues. Their central authority could cancel the member’s sickness insurance by giving him notice; in the IOOF, by contrast, the member retained his coverage as long as his dues were paid up. A member could draw benefits for a maximum of twenty-six weeks in the Maccabees and a maximum of twelve weeks in the IOF. During the 1920s, competition from fraternal life insurance orders showed a flat or declining trend. In terms of membership size, the largest friendly society, the IOOF, gained ground on all competitors in the insurance-order group.

Broader Economic and Social Trends in the 1920s

Another popular explanation for the decline of friendly society sick benefits is one of “changing times,” in which friendly societies were an outdated social arrangement. On this view, fraternal orders were multiple-function organizations that offered their members a variety of social and indirect economic benefits, as well as insurance. Thus, in principle, the declining trend for IOOF sickness insurance could have been a by-product of social changes during the 1920s that were undermining the popularity of fraternal lodges (Dumenil, 1984; Brody, 1980; Carnes, 1989; Charles, 1993; Clawson, 1989; Rotundo, 1989; Burley, 1994; Tucker, 1990). For example, the fraternal-lodge meeting faced competition from new forms of entertainment (radio, cinema, automobile travel). The development of installment buying and consumerism undermined fraternal culture and working-class institutional life. Changing relations between the sexes sapped the appeal of all-male social activities and the fraternal ritual of lodge meetings. The rising popularity of luncheon-club organizations (Kiwanis, Lions, Kinsmen) expressed a popular shift to a community-service orientation, as opposed to the fraternal tradition of services to members. The luncheon clubs also exemplified a popular shift to class-specific organizations, at the expense of fraternal orders, which had a cross-class appeal. Finally, with the waning popularity of lodge meetings, lodge nights became less useful occasions for making business contacts.

Rising Health-Care Costs

The decade also gave rise to two important insurance-related developments. The first, described above, was the diffusion of commercial group plans for income-replacement insurance. The second was the emergence of health-care services as the principal cost of sickness (Starr, 1982). In 1914 lost wages had been between two and four times the medical costs of a worker’s sickness, or about equal if one included the worker’s family. During the 1920s, however, medical costs soared: by 20 per cent for families with less than $1,200 income and 85 per cent for families with incomes between $1,200 and $2,500. The medical costs were highly variable as well as rising, and a serious hospitalized illness could consume a third to a half of a family’s annual income.

External Changes and Competition Don’t Explain the Decline of Fraternal Sickness Insurance Well

Changes during the 1920s, however, provide a poor explanation for the declining trend for the friendly-society sick benefit in North America. First, the timing was wrong. On the one hand, the declining trend dated from the 1890s, not the 1920s. On the other hand, key developments during the decade were at an early stage. By 1929 commercial-group insurance was established, but not widespread. Similarly, health insurance scarcely existed, despite the rising trend for the health-care costs. As Starr explains, health insurance presented an extreme problem of moral hazard that insurers did not solve until the 1930s.6 Second, we lack a theory to explain why the waning of interest in lodge meetings would have caused a declining trend for the sick benefit. Finally, the “changing times” explanation, on its own, incorrectly portrays the sick benefit as a static product that became less relevant in an exogenously changing society and economy.

Young Men Value Sickness Insurance

If external pressure did not cause the decline of the friendly-society sick benefit, then why did friendly-society sickness insurance decline? Emery and Emery (1999) argue that the sick benefit was primarily in demand amongst men who lacked alternatives to market insurance. For example, at the start of their working lives, male breadwinners had no older children to earn secondary incomes (family insurance). They also lacked savings to cover the disability risk (self-insurance). Thus men joined the Odd Fellows when they were “young.” They then quit after a few years, as family and self-insurance alternatives to market insurance opened up to them. Further, as the friendly-society sick benefit was a form of precautionary saving, demand for it would have declined as a household accumulated wealth.

Aging Membership and the Declining Demand for Sickness Insurance

Over time, fraternal memberships were ageing, as rates of initiation slowed while suspensions from membership continued at steady rates. Initiates and suspended members were disproportionately drawn from the lower age groups, so slower membership growth in the friendly societies meant ageing memberships. Given the life-cycle pattern of demand for the sick benefit described above, ageing fraternal memberships became less attached to the benefit. Thus, as the memberships aged, their collective preferences changed. Older members had priorities and objectives other than sickness insurance.

Friendly Societies and Compulsory State Insurance

Despite the similarity of the organizations and the high rates of participation in them in the late nineteenth and early twentieth centuries, the role of voluntary self-help organizations like the friendly societies diverged on either side of the Atlantic. In England, the “administrative machinery” of friendly societies was the vehicle for introducing and delivering compulsory government sickness/health insurance under the Approved Societies system, which prevailed from 1911 to 1944, when the government centralized the provision of health insurance (Gosden 1973). In North America, friendly-society sickness insurance declined from at least the 1890s, despite growing memberships in the organizations up to the 1920s. While friendly-society sickness insurance declined, government showed little activity in the health/sickness insurance field. Only from the 1930s did commercial and non-profit group health and hospital insurance plans and government social programs rise to primacy in the sickness and health insurance field.7

Critics of Friendly Societies’ Voluntary Self-Help

Critics of voluntary self-help arrangements for insuring the costs of sickness argue that voluntary self-help was a failed system, and that its obvious shortcomings and financial difficulties were the impetus for government involvement in social insurance arrangements (Smiles 1876; Moffrey 1910; Peebles 1936; Gosden 1961; Gilbert 1965; Hopkins 1995; Horrell and Oxley 2000; Hoffman 2001). Horrell and Oxley (2000) argue that friendly-society benefits were too paltry to offer true relief. Hopkins (1995) argues that for those workers who could afford it, self-help through friendly-society membership worked well, but too much of the working population remained outside the safety net due to low incomes. At best, the critics applaud individuals for taking the initiative to protect themselves, and friendly societies for pioneering the preparation of actuarial data on morbidity and sickness duration that aided commercial insurers in insuring the sickness risk in a financially sound way.

Positive Assessments of Friendly Societies’ Roles

In contrast, Beito (2000) presents a positive assessment of fraternal mutual aid in the United States, and hence of working-class self-help, for dealing with the economic consequences of poor health. Beito argues that fraternal societies in America extended social welfare services, such as insurance, to poor Americans (notably immigrants and blacks) and working-class Americans who otherwise would not have had access to such coverage. Far from being an inadequate form of safety net, fraternal mutual aid sustained needy Americans from cradle to grave and, over time, extended the range of benefits provided to include hospitals and homes for the aged as needs in society arose. Beito suggests that changing cultural attitudes and the expanding scale and scope of a paternalistic welfare state undermined an efficient and viable fraternal social insurance arrangement.

Government’s Role in “Crowding Out” Self-Help

Similarly, Green and Cromwell (1984) argue that state paternalism crowded out efficient fraternal methods of social insurance in Australia. Hopkins (1995) suggests that while friendly societies were effective in aiding a sizable portion of the working class, working-class self-help “had been weighed in the balance and found wanting,” since it failed to provide income protection for the working classes as a whole. Hopkins concludes that compulsory state aid inevitably had to replace voluntary self-help to “spread the net over the abyss” and protect the poorest of the working class. Like Beito, Hopkins suggests that equity considerations were the reason for undermining otherwise efficient voluntary self-help arrangements. Beveridge (1948) expresses dismay over the crowding out of friendly societies as social insurers in England following the centralization of compulsory government health insurance arrangements in 1944.

References:

Applebaum, L. “The Development of Voluntary Health Insurance in the United States.” Journal of Insurance 28 (1961): 25-33.

Armstrong, Barbara N. Insuring the Essentials. New York: MacMillan, 1932.

Beito, David. From Mutual Aid to the Welfare State: Fraternal Societies and Social Services, 1890-1967. Chapel Hill: University of North Carolina Press, 2000.

Berkowitz, Edward. “How to Think About the Welfare State.” Labor History 32 (1991): 489-502.

Berkowitz, Edward and Monroe Berkowitz, “Challenges to Workers’ Compensation: An Historical Analysis.” In Workers’ Compensation Benefits: Adequacy, Equity, and Efficiency, edited by John D. Worrall and David Appel. Ithaca, NY: ILR Press, 1985.

Berkowitz, Edward and Kim McQuaid. “Businessman and Bureaucrat: the Evolution of the American Welfare System, 1900-1940.” Journal of Economic History 38 (1978): 120-41.

Berkowitz, Edward and Kim McQuaid. Creating the Welfare State: The Political Economy of Twentieth Century Reform. New York: Praeger, 1988.

Bernstein, Irving. The Lean Years: A History of the American Worker, 1920-1933. Boston: Houghton Mifflin, 1960.

Bradbury, Bettina. Working Families, Age, Gender, and Daily Survival in Industrializing Montreal. Toronto: McClelland and Stewart, 1993.

Brandes, Stuart D. American Welfare Capitalism 1880-1940. Chicago: University of Chicago Press, 1976.

Brody, David. Workers in Industrial America: Essays on the Twentieth Century Struggle. New York: Oxford University Press, 1980.

Brumberg, Joan Jacobs, and Faye E. Dudden. “Masculinity and Mumbo Jumbo: Nineteenth-Century Fraternalism Revisited.” Reviews in American History 18 (1990): 363-70 [review of Carnes].

Burley, David G. A Particular Condition in Life, Self-Employment and Social Mobility in Mid-Victorian Brantford, Ontario. McGill-Queen’s University Press, 1994.

Burrows, V.A. “On Friendly Societies since the Advent of National Health Insurance.” Journal of the Institute of Actuaries 63 (1932): 307-401.

Carnes, Mark C. Secret Ritual and Manhood in Victorian America. New Haven: Yale University Press, 1989.

Charles, Jeffrey A. Service Clubs in American Society, Rotary, Kiwanis, and Lions. Urbana: University of Illinois Press, 1993.

Clawson, Mary Ann. Constructing Brotherhood: Class, Gender, and Fraternalism. Princeton: Princeton University Press, 1989.

Cordery, Simon. “Fraternal Orders in the United States: A Quest for Protection and Identity.” In Social Security Mutualism: The Comparative history of Mutual Benefit Societies, edited by Marcel Van der Linden, 83-110. Bern: Peter Lang, 1996.

Cordery, Simon. “Friendly Societies and the Discourse of Respectability in Britain, 1825-1875.” Journal of British Studies 34, no. 1 (1995): 35-58.

Costa, Dora. “The Political Economy of State Provided Health Insurance in the Progressive Era: Evidence from California.” National Bureau of Economic Research Working Paper, no. 5328, 1995.

Cumbler, John T. Working-Class Community in Industrial America: Work, Leisure, and Struggle in Two Industrial Cities, 1880-1930. Westport: Greenwood Press, 1979.

Davis, K. “National Health Insurance: A Proposal.” American Economic Review 79, no. 2 (1989): 349-352.

Derickson, Alan. Workers’ Health, Workers’ Democracy: The Western Miners’ Struggle, 1891-1925. Ithaca: Cornell University Press, 1988.

Dumenil, Lynn. Freemasonry and American Culture 1880-1930. Princeton: Princeton University Press, 1984.

Ehrlich, Isaac and Gary S. Becker. “Market Insurance, Self-Insurance, and Self-Protection.” Journal of Political Economy 80, no. 4 (1972): 623-648.

Emery, J.C. Herbert. The Rise and Fall of Fraternal Methods of Social Insurance: A Case Study of the Independent Order of Oddfellows of British Columbia Sickness Insurance, 1874-1951. Ph.D. Dissertation: University of British Columbia, 1993.

Emery, J.C. Herbert. “Risky Business? Nonactuarial Pricing Practices and the Financial Viability of Fraternal Sickness Insurers.” Explorations in Economic History 33 (1996): 195-226.

Emery, George and J.C. Herbert Emery. A Young Man’s Benefit: The Independent Order of Odd Fellows and Sickness Insurance in the United States and Canada, 1860-1929. Montreal: McGill-Queen’s University Press, 1999.

Fischer, Stanley. “A Life Cycle Model of Life Insurance Purchases.” International Economic Review 14, no. 1 (1973): 132-152.

Follmann, J.F. “The Growth of Group Health Insurance.” Journal of Risk and Insurance 32 (1965): 105-112.

Galanter, Marc. Cults, Faith, Healing and Coercion. New York: Oxford University Press, 1989.

Gilbert, B.B. “The Decay of Nineteenth-Century Provident Institutions and the Coming of Old Age Pensions in Great Britain.” Economic History Review, 2nd Series 17 (1965): 551-563.

Gilbert, B.B. The Evolution of National Health Insurance in Great Britain: The Origins of the Welfare State. London: Michael Joseph, 1966.

Gist, Noel P. “Secret Societies: A Cultural Study of Fraternalism in the United States.” University of Missouri Studies XV, no. 4 (1940): 1-176.

Gosden, P. The Friendly Societies in England, 1815-1875. Manchester: Manchester University Press, 1961.

Gosden, P. Self-Help: Voluntary Associations in the 19th Century. London: B.T. Batsford, 1973.

Gourinchas, Pierre-Olivier and Jonathan A. Parker. “The Empirical Importance of Precautionary Savings.” National Bureau of Economic Research Working Paper, no. 8107, 2001.

Gratton, Brian. “The Poverty of Impoverishment Theory: The Economic Well-Being of the Elderly, 1890-1950.” Journal of Economic History 56, no. 1 (1996): 39-61.

Green, D.G. and L.G. Cromwell. Mutual Aid or Welfare State: Australia’s Friendly Societies. Boston: Allen & Unwin, 1984.

Greenberg, Brian. “Worker and Community: Fraternal Orders in Albany, New York, 1845-1885.” Maryland Historian 8 (1977): 38-53.

Guest, D. The Emergence of Social Security in Canada. Vancouver: University of British Columbia Press, 1980.

Haines, Michael R. “Industrial Work and the Family Life Cycle, 1889-1890.” Research in Economic History 4 (1979): 289-356.

Hirschman, Albert O. Exit, Voice, and Loyalty, Responses to Decline in Firms, Organizations, and States. Cambridge: Harvard University Press, 1970.

History of Odd-Fellowship in Canada under the Old Regime. Brantford: Grand Lodge of Ontario, 1879.

History of the Maccabees, Ancient and Modern, 1881 to 1896. Port Huron, 1896.

Hopkins, Eric. Working-Class Self-Help in Nineteenth-Century England: Responses to Industrialization. New York: St. Martin’s Press, 1995.

Hoffman, Beatrix. The Wages of Sickness: The Politics of Health Insurance in Progressive America. Chapel Hill: University of North Carolina Press, 2001.

Horrell, Sara and Deborah Oxley. “Work and Prudence: Household Responses to Income Variation in Nineteenth Century Britain.” European Review of Economic History 4, no. 1 (2000): 27-58.

Ilse, Louise Wolters. Group Insurance and Employee Retirement Plans. New York: Prentice-Hall, 1953.

Jacoby, Sanford M. Employing Bureaucracy, Managers, Unions, and the Transformation of Work in American Industry, 1900-1945. New York: Columbia University Press, 1985.

James, Marquis. The Metropolitan Life: A Study in Business Growth. New York: Viking Press, 1947.

Lubove, Roy. The Struggle for Social Security, 1900-1935. Cambridge: Harvard University Press, 1968.

Lynd, Robert S. and Helen Merrell. Middletown: A Study in Contemporary American Culture. New York: Harcourt, Brace & World, 1929.

MacDonald, Fergus. The Catholic Church and Secret Societies in the United States. New York: U.S. Catholic Historical Society, 1946.

Markey, Raymond. “The History of Mutual Benefit Societies in Australia, 1830-1991.” In Social Security Mutualism: The Comparative History of Mutual Benefit Societies, edited by Marcel Van der Linden, 147-76. Bern: Peter Lang, 1996.

McCallum, Margaret E. “Corporate Welfarism in Canada, 1919-39.” Canadian Historical Review LXXI, no. 1 (1990): 49-79.

Millis, Harry A. Sickness Insurance: A Study of the Sickness Problem and Health Insurance. Chicago: University of Chicago Press, 1937.

Moffrey, R.W. A Century of Odd Fellowship. Manchester: IOOFMU G.M. and Board of Directors, 1910.

Osborn, Grant M. Compulsory Temporary Disability Insurance in the United States. Homewood, IL: Richard D. Irwin, 1958.

Palmer, Bryan D. “Mutuality and the Masking/Making of Difference: The Making of Mutual Benefit Societies in Canada, 1850-1950.” In Social Security Mutualism: The Comparative History of Mutual Benefit Societies, edited by Marcel Van der Linden, 111-46. Bern: Peter Lang, 1996.

Peebles, A. “The State and Medicine.” Canadian Journal of Economics and Political Studies 2 (1936): 464-480.

Preuss, Arthur. Dictionary of Secret and Other Societies. St. Louis: B. Herder Co., 1924.

Quadagno, Jill. “Theories of the Welfare State.” Annual Review of Sociology 13 (1987): 109-28.

Quadagno, Jill. The Transformation of Old Age Security: Class and Politics in the American Welfare State. Chicago: University of Chicago Press, 1988.

Riley, James C. “Ill Health during the English Mortality Decline: The Friendly Societies’ Experience.” Bulletin of the History of Medicine 61 (1987): 563-588.

Riley, James C. Sick, Not Dead: The Health of British Workingmen during the Mortality Decline. Baltimore: Johns Hopkins University Press, 1997.

Rosenzweig, Roy. “Boston Masons, 1900-1935: The Lower Middle Class in a Divided Society.” Journal of Voluntary Action Research 6 (1977): 119-26.

Ross, Theo. A. Odd Fellowship, Its History and Manual. New York: M.W. Hazen, 1890.

Rotundo, E. Anthony. “Romantic Friendship: Male Intimacy and Middle-Class Youth in the Northern United States, 1800-1900.” Journal of Social History 23 no. 1 (1989): 1-25.

Rubinow, Isaac Max. Social Insurance: With Special Reference to American Conditions. New York: Henry Holt & Co., 1913.

Schmidt, A.J. Fraternal Organizations. Westport: Greenwood Press, 1980.

Senior, Hereward. Orangeism: The Canadian Phase. Toronto: McGraw-Hill Ryerson, 1972.

Smiles, Samuel. Thrift. Toronto: Belford Brothers, 1876.

Stalson, J. Owen. Marketing Life Insurance: Its History in America. Cambridge: Harvard University Press, 1942; Homewood: R.D. Irwin, 1969.

Starr, Paul. The Social Transformation of American Medicine: The Rise of a Sovereign Profession and the Making of a Vast Industry. New York: Basic Books, 1982.

Thelen, David. Paths of Resistance: Tradition and Dignity in Industrializing Missouri. New York: Oxford University Press, 1986.

Tishler, Hace Sorel. Self-Reliance and Social Security, 1870-1917. Port Washington, N.Y.: Kennikat, 1971.

Tucker, Eric. Administering Danger in the Workplace: The Law and Politics of Occupational Health and Safety Regulation in Ontario, 1850-1914. Toronto: University of Toronto Press, 1990.

Van der Linden, Marcel. “Introduction.” In Social Security Mutualism: The Comparative History of Mutual Benefit Societies, edited by Marcel Van der Linden, 11-38. Bern: Peter Lang, 1996.

Vondracek, Felix John. “The Rise of Fraternal Organizations in the United States, 1868-1900.” Social Science 47 (1972): 26-33.

Weiss, Harry. “Employers’ Liability and Workmen’s Compensation.” In History of Labor in the United States, 1896-1932, Vol. III, edited by Don D. Lescohier and Elizabeth Brandeis. New York: Macmillan, 1935.

Footnotes

1 See Gosden (1961), Hopkins (1995) and Riley (1997) for excellent discussions of the evolution of friendly societies in England.

2 Cited in Starr (1982, p. 242). British industrial-life companies did not offer sickness insurance until 1911, when the government allowed them to qualify as approved societies under the National Insurance Act. In acting as approved societies, their motive was not to write sickness insurance, but rather to protect their interest in burial insurance. See Beveridge, 1948, p. 81; Gilbert, 1966, p. 323.

3 Emery and Emery (1999). Riley (1997) shows that British men in their twenties were the majority of initiates and members who exited did so within “a few years of joining”.

4 Data for unions are from Wolman, 1936, pp. 16, 239 and Leacy, 1983, series E175. By 1931 just 10 per cent of non-agricultural workers in the United States were unionized, down from 19 per cent in 1919 (Bernstein, 1960, chapter 2). Unions affiliated with the American Federation of Labor accounted for approximately 80 per cent of the total membership of American labor unions (Wolman, 1936, p. 7). The reported AFL membership statistics are overstated: unions paid per capita tax on more than their actual paid-up memberships for prestige and to maintain their voting strength at AFL meetings. In 1929, the United Mine Workers, an extreme case, reported 400,000 members, but probably had just 262,000 members, including 169,000 paid-up members and 93,000 “exonerated” members (kept on the books because they were unemployed or on strike).

5 Brandes (1976, chapter 10) places their membership at 749,000 in 1916 and 825,000 in 1931.

6 The probable costs of health-care claims were hard to predict (Starr, 1982, pp. 290-1). As with income-replacement insurance, sickness was not a well-defined condition. In addition, the treatment costs were within the insured’s control. They also were within the control of the physician and hospital, both of which could profit from additional services and raise prices as the patient’s ability to pay increased.

7 Employer-purchased or employer-provided group plans came to be the most common source of health insurance coverage in the United States (Applebaum, 1961; Follmann, 1965; Davis, 1989). In Canada, provincial government health insurance plans, with universal coverage, replaced the workplace-based arrangements in the 1960s.

Citation: Emery, Herb. “Fraternal Sickness Insurance”. EH.Net Encyclopedia, edited by Robert Whaples. March 26, 2008. URL http://eh.net/encyclopedia/fraternal-sickness-insurance/

The Economic History of the International Film Industry

Gerben Bakker, University of Essex

Introduction

Like other major innovations such as the automobile, electricity, chemicals and the airplane, cinema emerged in most Western countries at the same time. As the first form of industrialized mass-entertainment, it was all-pervasive. From the 1910s onwards, each year billions of cinema-tickets were sold and consumers who did not regularly visit the cinema became a minority. In Italy, today hardly significant in international entertainment, the film industry was the fourth-largest export industry before the First World War. In the depression-struck U.S., film was the tenth most profitable industry, and in 1930s France it was the fastest-growing industry, followed by paper and electricity, while in Britain the number of cinema-tickets sold rose to almost one billion a year (Bakker 2001b). Despite this economic significance, despite its rapid emergence and growth, despite its pronounced effect on the everyday life of consumers, and despite its importance as an early case of the industrialization of services, the economic history of the film industry has hardly been examined.

This article limits itself exclusively to the economic development of the industry. It discusses just a few countries, mainly the U.S., Britain and France, and these only to investigate the economic issues it addresses, not to give complete histories of their film industries. Given the nature of an encyclopedia article, this entry cannot do justice to developments in each and every country. It also limits itself to the evolution of the Western film industry, because that industry has been and still is the largest in the world in revenue terms, although this may well change in the future.

Before Cinema

In the late eighteenth century most consumers enjoyed their entertainment in an informal, haphazard and often non-commercial way. When making a trip they could suddenly meet a roadside entertainer, and their villages were often visited by traveling showmen, clowns and troubadours. Seasonal fairs attracted a large variety of musicians, magicians, dancers, fortune-tellers and sword-swallowers. Only a few large cities harbored legitimate theaters, strictly regulated by the local and national rulers. This world was torn apart in two stages.

First, most Western countries started to deregulate their entertainment industries, enabling many more entrepreneurs to enter the business and make far larger investments, for example in circuits of fixed stone theaters. The U.S. was the first with liberalization in the late eighteenth century. Most European countries followed during the nineteenth century. Britain, for example, deregulated in the mid-1840s, and France in the late 1860s. The result of this was that commercial, formalized and standardized live entertainment emerged that destroyed a fair part of traditional entertainment. The combined effect of liberalization, innovation and changes in business organization, made the industry grow rapidly throughout the nineteenth century, and integrated local and regional entertainment markets into national ones. By the end of the nineteenth century, integrated national entertainment industries and markets maximized productivity attainable through process innovations. Creative inputs, for example, circulated swiftly along the venues – often in dedicated trains – coordinated by centralized booking offices, maximizing capital and labor utilization.

At the end of the nineteenth century, in the era of the second industrial revolution, falling working hours, rising disposable income, increasing urbanization, rapidly expanding transport networks and strong population growth resulted in a sharp rise in the demand for entertainment. The effect of this boom was further rapid growth of live entertainment through process innovations. At the turn of the century, the production possibilities of the existing industry configuration were fully realized and further innovation within the existing live-entertainment industry could only increase productivity incrementally.

At this moment, in a second stage, cinema emerged and in its turn destroyed this world, by industrializing it into the modern world of automated, standardized, tradable mass-entertainment, integrating the national entertainment markets into an international one.

Technological Origins

In the early 1890s, Thomas Edison introduced the Kinetograph camera and the coin-operated Kinetoscope, which together enabled the shooting of films and their playback in peep-show machines for individual viewing. In the mid-1890s, the Lumière brothers added projection to the invention and started to show films in theater-like settings. Cinema combined several technologies that were all available from the late 1880s onwards: photography (1830s), taking negative pictures and printing positives (1880s), roll films (1850s), celluloid (1868), high-sensitivity photographic emulsion (late 1880s), projection (1645) and movement dissection/persistence of vision (1872).

After the preconditions for motion pictures had been established, cinema technology itself was invented. Already in 1860/1861, patents were filed for viewing and projecting motion pictures, but not for the taking of pictures. The scientist Étienne-Jules Marey completed the first working model of a film camera in 1888 in Paris. Edison visited Georges Demeney in 1888 and saw his films. In 1891, he filed an American patent for a film camera, which had a different film-moving mechanism than the Marey camera. In 1890, the Englishman William Friese-Greene presented a working camera to a group of enthusiasts. In 1893 the Frenchman Demeney filed a patent for a camera. Finally, the Lumière brothers filed a patent for their type of camera and for projection in February 1895. In December of that year they gave the first projection for a paying audience. They were followed in February 1896 by the Englishman Robert W. Paul. Paul also invented the ‘Maltese cross,’ a device still used in film equipment today: it produces the intermittent movement of the film, holding each frame stationary behind the lens between exposures (Michaelis 1958; Musser 1990: 65-67; Low and Manvell 1948).

Three characteristics stand out in this innovation process. First, it was an international process of invention, taking place in several countries at the same time, with inventors building upon and improving each other’s work. This connects to Joel Mokyr’s notion that in the nineteenth century communication became increasingly important to innovation, and many innovations depended on international communication between inventors (Mokyr 1990: 123-124). Second, it was what Mokyr calls a typical nineteenth-century invention, in that it was a smart combination of many existing technologies. Many different innovations in the technologies it combined had been necessary to make the innovation of cinema possible. Third, cinema was a major innovation in the sense that it was quickly and universally adopted throughout the Western world, more quickly than the steam engine, the railroad or the steamship.

The Emergence of Cinema

For about the first ten years of its existence, cinema in the United States and elsewhere was mainly a trick and a gadget. Before 1896 the coin-operated Kinetoscope of Edison was present at fairs and in entertainment venues. Spectators had to drop a coin in the machine and peep through a viewer to see the film. The first projections, from 1896 onwards, attracted large audiences. Lumière had a group of operators who traveled around the world with the cinematograph and showed the pictures in theaters. After a few years films became a part of the program in vaudeville and sometimes in theater as well. At the same time traveling cinema emerged: operators who traveled around with a tent or mobile theater and set up shop for a short time in towns and villages. These differed from the Lumière operators and others in that they catered to general, popular audiences, while the latter offered more upscale parts of theater programs, or special programs for the bourgeoisie (Musser 1990: 140, 299, 417-20).

This whole era, which in the U.S. lasted up to about 1905, was a time in which cinema seemed just one of many new fashions, and it was not at all certain whether it would persist or be quickly forgotten and marginalized, as happened to the contemporaneous boom in skating rinks and bowling alleys. This changed when Nickelodeons, fixed cinemas with a few hundred seats, emerged and quickly spread all over the country between 1905 and 1907. From this time onwards cinema changed into an industry in its own right, distinct from other entertainments, since it had its own buildings and its own advertising. The emergence of fixed cinemas coincided with a huge growth phase in the business in general; film production increased greatly, and film distribution developed into a specialized activity, often managed by large film producers. However, until about 1914, besides the cinemas, films also continued to be combined with live entertainment in vaudeville and other theaters (Musser 1990; Allen 1980).

Figure 1 shows the total length of negatives released on the U.S., British and French film markets. In the U.S., the total released negative length increased from 38,000 feet in 1897, to two million feet in 1910, to twenty million feet in 1920. Clearly, the initial U.S. growth between 1893 and 1898 was very strong: the market increased by over three orders of magnitude, but from an infinitesimal initial base. Between 1898 and 1906, far less growth took place, and in this period it may well have looked like the cinematograph would remain a niche product, a gimmick shown at fairs and interspersed with live entertainment. From 1907, however, a new, sharp, sustained growth phase started: the market increased by a further two orders of magnitude – and from a far higher base this time. At the same time, the average film length increased considerably, from eighty feet in 1897 to seven hundred feet in 1910 to three thousand feet in 1920. One reel of film held about 1,500 feet and had a playing time of about fifteen minutes.

Between the mid-1900s and 1914 the British and French markets were growing at roughly the same rates as the U.S. one. World War I constituted a discontinuity: from 1914 onwards European growth rates were far lower than those in the U.S.

The prices the Nickelodeons charged were between five and ten cents, for which spectators could stay as long as they liked. Around 1910, when larger cinemas emerged in hot city-center locations, more closely resembling theaters than the small and shabby Nickelodeons, prices increased. They ranged from one dollar to one dollar and a half for ‘first-run’ cinemas down to five cents for sixth-run neighborhood cinemas (see also Sedgwick 1998).

Figure 1

Total Released Length on the U.S., British and French Film Markets (in Meters), 1893-1922

Note: The length refers to the total length of original negatives that were released commercially.

See Bakker 2005, appendix I for the method of estimation and for a discussion of the sources.

Source: Bakker 2001b; American Film Institute Catalogue, 1893-1910; Motion Picture World, 1907-1920.

The Quality Race

Once Nickelodeons and other types of cinemas were established, the industry entered a new stage with the emergence of the feature film. Before 1915, cinemagoers saw a succession of many different films, each between one and fifteen minutes, of varying genres such as cartoons, newsreels, comedies, travelogues, sports films, ‘gymnastics’ pictures and dramas. After the mid-1910s, going to the cinema meant watching a feature film, a heavily promoted dramatic film with a length that came closer to that of a theater play, based on a famous story and featuring famous stars. Shorts remained only as side dishes.

The feature film emerged when cinema owners discovered that films of far higher quality and length enabled them to ask far higher ticket prices and draw far more people into their cinemas, resulting in far higher profits, even if cinemas needed to pay far more for the film rental. The discovery that consumers would turn their backs on packages of shorts (newsreels, sports, cartoons and the like) as the quality of features increased set in motion a quality race between film producers (Bakker 2005). They all started investing heavily in portfolios of feature films, spending large sums on well-known stars, rights to famous novels and theater plays, extravagant sets, star directors, etc. A contributing factor in the U.S. was the demise of the Motion Picture Patents Company (MPPC), a cartel that tried to monopolize film production and distribution. Between about 1908 and 1912 the Edison-backed MPPC had restricted quality artificially by setting limits on film length and film rental prices. When William Fox and the Department of Justice started legal action in 1912, the power of the MPPC quickly waned and the ‘independents’ came to dominate the industry.

In the U.S., the motion picture industry became the internet of the 1910s. When companies put the words ‘motion pictures’ in their IPO prospectuses, investors would flock to them. Many of these companies went bankrupt, were dissolved or were taken over. A few survived and became the Hollywood studios, most of which we still know today: Paramount, Metro-Goldwyn-Mayer (MGM), Warner Brothers, Universal, Radio-Keith-Orpheum (RKO), Twentieth Century-Fox, Columbia and United Artists.

A necessary condition for the quality race was some form of vertical integration. In the early film industry, films were sold outright. This meant that the cinema owner who bought a film received all the marginal revenues the film generated. In the film industry, these revenues were largely marginal profits, as most costs were fixed, so an additional film ticket sold was pure (gross) profit. Because the producer did not get any of these revenues, at the margin there was little incentive to increase quality. When outright sales made way for the rental of films to cinemas for a fixed fee, producers gained a stronger incentive to increase a film’s quality, because higher quality generated more rentals (Bakker 2005). The incentive increased further when percentage contracts were introduced for large city-center cinemas, and when producer-distributors actually started to buy large cinemas. The changing contractual relationship between cinemas and producers was paralleled between producers and distributors.
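The incentive logic lends itself to a toy calculation. The sketch below (with invented prices, fees and a hypothetical 30 percent producer share – none of these figures come from the sources) compares the producer’s marginal revenue from 1,000 extra admissions under the three contract forms. Within a single cinema, only the percentage contract pays the producer at the margin; under outright sale or flat rental the quality incentive worked only indirectly, through the price or fee the producer could command across many cinemas.

```python
# Hypothetical illustration (all figures invented, not from Bakker):
# how the contract form changes a producer's marginal incentive.

TICKET_PRICE = 0.05  # dollars per admission

def producer_revenue(tickets_sold, contract, terms):
    """Producer's revenue from one cinema under three contract forms."""
    if contract == "outright_sale":   # film sold for a lump sum
        return terms["sale_price"]
    if contract == "flat_rental":     # fixed rental fee per engagement
        return terms["rental_fee"]
    if contract == "percentage":      # share of the box office
        return terms["share"] * TICKET_PRICE * tickets_sold
    raise ValueError(f"unknown contract: {contract}")

terms = {"sale_price": 300.0, "rental_fee": 300.0, "share": 0.30}

for contract in ("outright_sale", "flat_rental", "percentage"):
    base = producer_revenue(20_000, contract, terms)
    more = producer_revenue(21_000, contract, terms)  # 1,000 extra tickets
    print(f"{contract:14s} marginal revenue: ${more - base:,.2f}")
# outright_sale  marginal revenue: $0.00
# flat_rental    marginal revenue: $0.00
# percentage     marginal revenue: $15.00
```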

The Decline and Fall of the European Film Industry

Because the quality race happened when Europe was at war, European companies could not participate in the escalation of quality (and production costs) discussed above. This does not mean all of them were in crisis. Many made high profits during the war from newsreels, other short films, propaganda films and distribution. They also were able to participate in the shift towards the feature film, substantially increasing output in the new genre during the war (Figure 2). However, it was difficult for them to secure the massive amount of venture capital necessary to participate in the quality race while their countries were at war. Even if they had managed it, it might have been difficult to justify such lavish expenditures when people were dying in the trenches.

Yet a few European companies did participate in the escalation phase. The Danish Nordisk company invested heavily in long feature-type films, and bought cinema chains and distributors in Germany, Austria and Switzerland. Its strategy ended when the German government forced it to sell its German assets to the newly founded UFA company, in return for a 33 percent minority stake. The French Pathé company was one of the largest U.S. film producers. It set up its own U.S. distribution network and invested in heavily advertised serials (films in weekly installments) expecting that this would become the industry standard. As it turned out, Pathé bet on the wrong horse and was overtaken by competitors riding high on the feature film. Yet it eventually switched to features and remained a significant company. In the early 1920s, its U.S. assets were sold to Merrill Lynch and eventually became part of RKO.

Figure 2

Number of Feature Films Produced in Britain, France and the U.S., 1911-1925

(semi-logarithmic scale)

Source: Bakker 2005 [American Film Institute Catalogue; British Film Institute; Screen Digest; Globe, World Film Index, Chirat, Longue métrage.]

Because it could not participate in the quality race, the European film industry started to decline in relative terms. Its market share at home and abroad diminished substantially (Figure 3). In the 1900s European companies supplied at least half of the films shown in the U.S. In the early 1910s this dropped to about twenty percent. In the mid-1910s, when the feature film emerged, the European market share declined to nearly undetectable levels.

By the 1920s, most large European companies gave up film production altogether. Pathé and Gaumont sold their U.S. and international businesses, left film making and focused on distribution in France. Éclair, their major competitor, went bankrupt. Nordisk continued as an insignificant Danish film company, and eventually collapsed into receivership. The eleven largest Italian film producers formed a trust, which failed terribly, and one by one they fell into financial disaster. The famous British producer Cecil Hepworth went bankrupt. By late 1924, hardly any films were being made in Britain. American films were shown everywhere.

Figure 3

Market Shares by National Film Industries, U.S., Britain, France, 1893-1930

Note: EU/US is the share of European companies on the U.S. market, EU/UK is the share of European companies on the British market, and so on. For further details see Bakker 2005.

The Rise of Hollywood

Once they had lost out, it was difficult for European companies to catch up. First of all, since the sharply rising film production costs were fixed and sunk, market size became of essential importance, as it affected the amount of money that could be spent on a film. Exactly at this crucial moment, the European film market disintegrated, first because of war, later because of protectionism. Market size was further diminished by heavy taxes on cinema tickets, which sharply increased the price of cinema relative to live entertainment.

Second, the emerging Hollywood studios benefited from first-mover advantages in feature film production: they owned international distribution networks; they could offer cinemas large portfolios of films at a discount (block-booking), sometimes before the films were even made (blind-bidding); the quality gap with European features was so large that it would have been difficult to close in one go; and, finally, the American origin of the feature films of the 1910s had established U.S. films as a kind of brand, leaving consumers with high switching costs to try out films of other national origins. It would be extremely costly for European companies to re-enter international distribution, produce large portfolios, jump-start film quality, and establish a new brand of films – all at the same time (Bakker 2005).

A third factor was the rise of Hollywood as a production location. The large existing American Northeast-coast film industry and the newly emerging film industry in Florida declined as U.S. film companies started to locate in Southern California. First of all, the ‘sharing’ of inputs facilitated knowledge spillovers and allowed higher returns. The studios lowered costs because creative inputs had less down-time, needed to travel less, could participate in many try-outs to achieve optimal casting and could be rented out easily to competitors when not immediately wanted. Hollywood also attracted new creative inputs through non-monetary means: even more than money, creative inputs wanted to maximize fame and professional recognition. For an actress, an offer to work with the world’s best directors, costume designers, lighting specialists and make-up artists was difficult to decline.

Second, a thick market for specialized supply and demand existed. Companies could easily rent out excess studio capacity (for example, during the nighttime B-films were made), and a producer was quite likely to find the highly specific products or services needed somewhere in Hollywood (Christopherson and Storper 1987, 1989). While a European industrial ‘film’ district may have been competitive and even have a lower over-all cost/quality ratio than Hollywood, a first European major would have a substantially higher cost/quality ratio (lacking external economies) and would therefore not easily enter (see, for example, Krugman and Obstfeld 2003, chapter 6). If entry did happen, the Hollywood studios could and would buy successful creative inputs away, since they could realize higher returns on these inputs, which resulted in American films with even a higher perceived quality, thus perpetuating the situation.

Sunlight, climate and the variety of landscape in California were of course favorable to film production, but were not unique. Locations such as Florida, Italy, Spain and Southern France offered similar conditions.

The Coming of Sound

In 1927, sound films were introduced. The main innovator was Warner Brothers, backed by the bank Goldman, Sachs, which actually parachuted a vice-president into Warner. Although many other sound systems had been tried and marketed from the 1900s onwards, the electrical microphone, invented at Bell Labs in the mid-1920s, sharply increased the quality of sound films and made possible the transformation of the industry. Sound increased the interest in the film industry of large industrial companies such as General Electric, Western Electric and RCA, as well as that of banks eager to finance the new innovation, such as the Bank of America and Goldman, Sachs.

In economic terms, sound represented an exogenous jump in sunk costs (and product quality) which did not affect the basic industry structure very much: the industry was already highly concentrated before sound, and the European, New York/New Jersey and Florida film industries were already shattered. What it did do was industrialize away most of the musicians and entertainers who had complemented the silent films with sound and entertainment, especially those working in the smaller cinemas. This led to massive unemployment among musicians (see, for example, Gomery 1975; Kraft 1996).

The effect of sound film in Europe was to increase the domestic revenues of European films, because they became more culture-specific once they were in the local language, but at the same time it decreased the foreign revenues European films received (Bakker 2004b). It is difficult to assess the impact of sound film completely, as it coincided with increased protection; many European countries set quotas for the number of foreign films that could be shown shortly before the coming of sound. In France, for example, where sound became widely adopted from 1930 onwards, the U.S. share of films dropped from eighty to fifty percent between 1926 and 1929, mainly the result of protectionist legislation. During the 1930s, the share temporarily declined to about forty percent, and then hovered between fifty and sixty percent. In short, protectionism decreased the U.S. market share and increased the French market shares of French and other European films, while sound film increased the French market share, mostly at the expense of other European films and less so at the expense of U.S. films.

In Britain, the share of releases of American films declined from eighty percent in 1927 to seventy percent in 1930, while British films increased from five percent to twenty percent, exactly in line with the requirements of the 1927 quota act. After 1930, the American share remained roughly stable. This suggests that sound film did not have a large influence, and that the share of U.S. films was mainly brought down by the introduction of the Cinematograph Films Act in 1927, which set quotas for British films. Nevertheless, revenue data, which are unfortunately lacking, would be needed to give a definitive answer, as little is known about effects on the revenue per film.

The Economics of the Interwar Film Trade

Because film production costs were mainly fixed and sunk, international sales and distribution were important: they generated additional revenue without much additional cost to the producer, since the film itself had already been made. Films had special characteristics that necessitated international sales. Because they essentially were copyrights rather than physical products, theoretically the costs of additional sales were zero. Film production involved high endogenous sunk costs, recouped through renting out the copyright to the film. Marginal foreign revenue equaled marginal net revenue (and marginal profit after the film’s production costs had been fully amortized). All companies, large or small, had to take foreign sales into account when setting film budgets (Bakker 2004b).

Films were intermediate products sold to foreign distributors and cinemas. While the rent paid varied depending on perceived quality and general conditions of supply and demand, the ticket price paid by consumers generally did not vary with the film shown; it varied only by cinema: highest in first-run city-center cinemas and lowest in sixth-run ramshackle neighborhood cinemas. Cinemas used films to produce ‘spectator-hours’: a five-hundred-seat cinema providing one hour of film produced five hundred spectator-hours of entertainment. If it sold three hundred tickets, the other two hundred spectator-hours produced would have perished.

Because film was an intermediate product and a capital good at that, international competition could not be on price alone, just as sales of machines depend on the price/performance ratio. If we consider a film’s ‘capacity to sell spectator-hours’ (hereafter called selling capacity) as proportional to production costs, a low-budget producer could not simply push down a film’s rental price in line with its quality in order to make a sale; even at a price of zero, some low-budget films could not be sold. The reasons were twofold.

First, because cinemas had mostly fixed costs and few variable costs, a film’s selling capacity needed to be at least as large as fixed cinema costs plus its rental price. A seven-hundred-seat cinema, with a production capacity of 39,200 spectator-hours a week, weekly fixed costs of five hundred dollars, and an average admission price of five cents per spectator-hour, needed a film selling at least ten thousand spectator-hours, and would not be prepared to pay for that (marginal) film, because it only recouped fixed costs. Films needed a minimum selling capacity to cover cinema fixed costs. Producers could only price down low-budget films to just above the threshold level. With a lower expected selling capacity, these films could not be sold at any price.
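The break-even threshold in this example follows directly from the cinema’s fixed costs and ticket price; the short sketch below merely restates the paragraph’s arithmetic (the 56 operating hours a week are implied by the stated 39,200 spectator-hour capacity of a seven-hundred-seat house).

```python
# The worked example from the text: fixed costs set a floor on the
# selling capacity a film needed before a cinema would pay anything.

seats = 700
hours_per_week = 56                     # implied: 700 * 56 = 39,200
capacity = seats * hours_per_week       # 39,200 spectator-hours per week
fixed_costs = 500.0                     # dollars per week
price = 0.05                            # dollars per spectator-hour

# A film selling exactly this many spectator-hours only covers the
# cinema's fixed costs, leaving nothing over to pay for the film.
break_even = fixed_costs / price
print(break_even)                       # 10000.0 spectator-hours
```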

This reasoning assumes that we know a film’s selling capacity ex ante. A main feature distinguishing foreign markets from domestic ones was that uncertainty was markedly lower: from a film’s domestic launch the audience appeal was known, and each subsequent country added additional information. While a film’s audience appeal across countries was not perfectly correlated, uncertainty was reduced. For various companies, correlations between foreign and domestic revenues for entire film portfolios fluctuated between 0.60 and 0.95 (Bakker 2004b). Given the riskiness of film production, this reduction in uncertainty undoubtedly was important.

The second reason for limited price competition was the opportunity cost, given cinemas’ production capacities. If the hypothetical cinema obtained a high-capacity film for a weekly rental of twelve hundred dollars, which sold all 39,200 spectator-hours, the cinema made a profit of $260 ($0.05 × 39,200 − $1,200 − $500 = $260). If a film with half the budget and, we assume, half the selling capacity rented for half the price, the cinema owner would lose $120 ($0.05 × 19,600 − $600 − $500 = −$120). Thus, the cinema owner would want to pay no more than $220 for the lower-budget film, given that the high-budget film was available ($0.05 × 19,600 − $220 − $500 = $260). So the low-capacity film, with half the selling capacity of the high-capacity film, would need to sell for under a fifth of the price of the high-capacity film to even enable the possibility of a transaction.
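The same numbers pin down the maximum rental for the low-capacity film; the sketch below simply reproduces the opportunity-cost arithmetic of the preceding paragraph.

```python
# Reproducing the text's opportunity-cost arithmetic: the most a cinema
# would pay for a half-capacity film while the high-capacity film exists.

price, fixed_costs = 0.05, 500.0
high_hours, high_rent = 39_200, 1_200.0
low_hours = high_hours // 2             # 19,600 spectator-hours

profit_high = price * high_hours - high_rent - fixed_costs    # $260
# Renting the low film makes sense only if it leaves the same profit:
max_low_rent = price * low_hours - fixed_costs - profit_high  # $220
print(round(profit_high, 2), round(max_low_rent, 2),
      round(max_low_rent / high_rent, 3))
# 260.0 220.0 0.183  -- under a fifth of the high-capacity film's rent
```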

These sharply increasing returns to selling capacity made the setting of production outlays important, as a right price/capacity ratio was crucial to win foreign markets.

How Films Became Branded Products

To make sure film revenues reached above cinema fixed costs, film companies transformed films into branded products. With the emergence of the feature film, they started to pay large sums to actors, actresses and directors and for rights to famous plays and novels. This is still a major characteristic of the film industry today that fascinates many people. Yet the huge sums paid for stars and stories are not as irrational and haphazard as they sometimes may seem. Actually, they might be just as ‘rational’ and have just as quantifiable a return as direct spending on marketing and promotion (Bakker 2001a).

To secure an audience, film producers borrowed branding techniques from other consumer goods’ industries, but the short product-life-cycle forced them to extend the brand beyond one product – using trademarks or stars – to buy existing ‘brands,’ such as famous plays or novels, and to deepen the product-life-cycle by licensing their brands.

Thus, the main value of stars and stories lay not in their ability to predict successes, but in their services as giant ‘publicity machines’ which optimized advertising effectiveness by rapidly amassing high levels of brand-awareness. After a film’s release, information such as word-of-mouth and reviews would affect its success. The young age at which stars reached their peak, and the disproportionate income distribution even among the superstars, confirm that stars were paid for their ability to generate publicity. Likewise, because ‘stories’ were paid several times as much as original screenplays, they were at least partially bought for their popular appeal (Bakker 2001a).

Stars and stories marked a film’s qualities to some extent, confirming that the film at least contained those stars and stories. Consumer preferences confirm that stars and stories were the main reasons to see a film. Further, the fame of stars was distributed disproportionately, possibly even twice as unequally as income. Film companies, aided by long-term contracts, probably captured part of the rent on their stars’ popularity. Gradually these companies specialized in developing and leasing their ‘instant brands’ to other consumer goods’ industries in the form of merchandising.

Already from the late 1930s onwards, the Hollywood studios used the new scientific market research techniques of George Gallup to continuously track the brand-awareness among the public of their major stars (Bakker 2003). Figure 4 is based on one such graph used by Hollywood. It shows that Lana Turner was a rising star, Gable was consistently a top star, while Stewart’s popularity was high but volatile. James Stewart was eleven percentage points more popular among the richest consumers than among the poorest, while Lana Turner’s popularity differed by only a few percentage points. Additional segmentation by city size seemed to matter, since substantial differences were found: Clark Gable was ten percentage points more popular in small cities than in large ones. Of the richest consumers, 51 percent wanted to see a movie starring Gable, but altogether they constituted just 14 percent of Gable’s market, while the poorest consumers, 57 percent of whom were Gable fans, constituted 34 percent. The increases in Gable’s popularity roughly coincided with his releases, suggesting that while producers used Gable partially for the brand-awareness of his name, each use (film) subsequently increased or maintained that awareness in what seems to have been a self-reinforcing process.

Figure 4

Popularity of Clark Gable, James Stewart and Lana Turner among U.S. respondents

April 1940 – October 1942, in percentage

Source: Audience Research Inc.; Bakker 2003.

The Film Industry’s Contribution to Economic Growth and Welfare

By the late 1930s, cinema had become an important mass entertainment industry. Nearly everyone in the Western world went to the cinema, and many went at least once a week. Cinema had made possible a massive growth in productivity in the entertainment industry, and thereby disproved the notion of some economists that productivity growth in certain service industries is inherently impossible. Between 1900 and 1938, output of the entertainment industry, measured in spectator-hours, grew substantially in the U.S., Britain and France, varying from three to eleven percent per year over a period of nearly forty years (Table 1). Output per worker increased from 2,453 spectator-hours in the U.S. in 1900 to 34,879 in 1938. In Britain it increased from 16,404 to 37,537 spectator-hours, and in France from 1,575 to 8,175 spectator-hours. This phenomenal growth can be explained partially by adding more capital (such as in the form of film technology and film production outlays) and partially by simply producing more efficiently with the existing amounts of capital and labor. The increase in efficiency (‘total factor productivity’) varied from about one percent per year in Britain to over five percent in the U.S., with France somewhere in between. In all countries, this increase in efficiency was at least one and a half times the increase in efficiency at the level of the entire nation. For the U.S. it was as much as five times and for France more than three times the national increase in efficiency (Bakker 2004a).
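As a cross-check, the quoted output-per-worker figures imply average annual labor-productivity growth rates, computed below with a simple compound-growth calculation over the 38 years. These are labor-productivity rates, so they naturally exceed the total-factor-productivity gains just cited.

```python
# Implied average annual labor-productivity growth, 1900-1938, from the
# spectator-hours-per-worker figures quoted in the text.

figures = {
    "U.S.":    (2_453, 34_879),
    "Britain": (16_404, 37_537),
    "France":  (1_575, 8_175),
}
years = 1938 - 1900
for country, (start, end) in figures.items():
    growth = (end / start) ** (1 / years) - 1
    print(f"{country}: {growth:.1%} per year")
# U.S.: 7.2% per year, Britain: 2.2% per year, France: 4.4% per year
```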

Another noteworthy feature is that labor productivity in entertainment varied less across countries in the late 1930s than it did in 1900. Part of the reason is that cinema technology made entertainment partially tradable and therefore forced productivity in similar directions in all countries; the tradable part of the entertainment industry would now exert competitive pressure on the non-tradable part (Bakker 2004a). It is therefore not surprising that cinema caused the lowest efficiency increase in Britain, which already had a well-developed and competitive entertainment industry (with the highest labor and capital productivity both in 1900 and in 1938), and higher efficiency increases in the U.S. and, to a lesser extent, France, which had less well-developed entertainment industries in 1900.

Another way to measure the contribution of film technology to the economy in the late 1930s is by using a social savings methodology. If we assume that cinema did not exist and all demand for entertainment (measured in spectator-hours) would have to be met by live entertainment, we can calculate the extra costs to society and thus the amount saved by film technology. In the U.S., these social savings amounted to as much as 2.2 percent ($2.5 billion) of GDP, in France to just 1.4 percent ($0.16 billion) and in Britain to only 0.3 percent ($0.07 billion) of GDP.
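The social-savings calculation itself is simple once unit costs are known; the sketch below shows its form with purely hypothetical unit costs (Bakker’s actual estimates, which yield the GDP shares above, are not reproduced here).

```python
# A minimal sketch of the social-savings logic, with hypothetical inputs:
# the extra cost to society had live entertainment met all the demand
# actually supplied by cinema.

spectator_hours = 40e9     # hypothetical annual cinema output
unit_cost_cinema = 0.05    # hypothetical cost per spectator-hour, cinema
unit_cost_live = 0.11      # hypothetical cost per spectator-hour, live

social_savings = spectator_hours * (unit_cost_live - unit_cost_cinema)
print(f"${social_savings / 1e9:.1f} billion saved by film technology")
```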

A third and different way to look at the contribution of film technology to the economy is to look at the consumer surplus generated by cinema. Contrary to the TFP and social-savings techniques used above, which assume that cinema was a substitute for live entertainment, this approach assumes that cinema was a wholly new good and that therefore the entire consumer surplus generated by it was ‘new’ and would not have existed without cinema. For an individual consumer, the surplus is the difference between the price she was willing to pay and the ticket price she actually paid. This difference varies from consumer to consumer, but with econometric techniques one can estimate the sum of individual surpluses for an entire country. The resulting national consumer surpluses for entertainment varied from about a fifth of total entertainment expenditure in the U.S., to about half in Britain, and as much as three-quarters in France.
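Under the simplest possible demand assumption, the surplus calculation looks as follows; the linear demand curve and all numbers are hypothetical stand-ins for the econometric estimates the text refers to.

```python
# A stylized consumer-surplus calculation assuming linear demand
# (hypothetical numbers; the actual estimates used econometric methods).

choke_price = 0.10   # hypothetical price at which no tickets would sell
price = 0.05         # hypothetical average ticket price per spectator-hour
quantity = 40e9      # hypothetical spectator-hours sold at that price

# With demand linear from (0, choke_price) to (quantity, price), the
# surplus is the triangle between willingness to pay and price paid.
surplus = 0.5 * (choke_price - price) * quantity
expenditure = price * quantity
print(surplus / expenditure)   # 0.5: surplus is half of expenditure here
```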

All the measures show that by the late 1930s cinema was making an essential contribution in increasing total welfare as well as the entertainment industry’s productivity.

Vertical Disintegration

After the Second World War, the Hollywood film industry disintegrated: production, distribution and exhibition became separate activities that were not always owned by the same organization. Three main causes brought about the vertical disintegration. First, the U.S. Supreme Court forced the studios to divest their cinema chains in 1948. Second, changes in the social-demographic structure in the U.S. brought about a shift towards entertainment within the home: many young couples started to live in the new suburbs and wanted to stay home for entertainment. Initially, they mainly used radio for this purpose and later they switched to television (Gomery 1985). Third, television broadcasting in itself (without the social-demographic changes that increased demand for it) constituted a new distribution channel for audiovisual entertainment and thus decreased the scarcity of distribution capacity. This meant that television took over the focus on the lowest common denominator from radio and cinema, while the latter two differentiated their output and started to focus more on specific market segments.

Figure 5

Real Cinema Box Office Revenue, Real Ticket Price and Number of Screens in the U.S., 1945-2002

Note: The values are in dollars of 2002, using the EH.Net consumer price deflator.

Source: Adapted from Vogel 2004 and Robertson 2001.

The consequence was a sharp fall in real box office revenue in the decade after the war (Figure 5). After the mid-1950s, real revenue stabilized and, with some fluctuations, remained at roughly that level until the mid-1990s. The decline in screens was more limited. After 1963 the number of screens increased again steadily, reaching nearly twice the 1945 level in the 1990s; since then there have been more movie screens in the U.S. than ever before. The proliferation of screens, coinciding with declining capacity per screen, facilitated market segmentation. Revenue per screen nearly halved in the decade after the war, rebounded during the 1960s, and then began a long and steady decline from 1970 onwards. The real price of a cinema ticket was quite stable until the 1960s, when it more than doubled. Since the early 1970s the price has been declining again, and nowadays the real admission price is about what it was in 1965.

It was in this adverse post-war climate that the vertical disintegration unfolded. It took place at three levels. First (obviously), the Hollywood studios divested their cinema chains. Second, they outsourced part of their film production and most of their production factors to independent companies. This meant that the Hollywood studios would produce only part of the films they distributed themselves, that they exchanged the long-term, seven-year contracts with star actors for per-film contracts, and that they sold off part of their studio facilities, renting them back for individual films. Third, the Hollywood studios’ main business became film distribution and financing: they specialized in planning and assembling a portfolio of films, contracting and financing most of them, and marketing and distributing them world-wide.

The developments had three important effects. First, production by a few large companies was replaced by production by many small, flexibly specialized companies. Southern California became an industrial district for the film industry and harbored an intricate network of these businesses, from set design companies and costume makers to special effects firms and equipment rental outfits (Storper and Christopherson 1989). Only at the level of distribution and financing did concentration remain high. Second, films became more differentiated and tailored to specific market segments; they were now aimed at a younger and more affluent audience. Third, the European film market gained in importance: because the social-demographic changes (suburbanization) and the advent of television happened somewhat later in Europe, the drop in cinema attendance also happened later there. The result was that the Hollywood studios off-shored a large chunk – at times over half – of their production to Europe in the 1960s. This was stimulated by lower European production costs, by difficulties in repatriating foreign film revenues and by the vertical disintegration in California, which severed the studios’ ties with their production units and facilitated outside contracting.

European production companies could better adapt to changes in post-war demand because they were already flexibly specialized. The British film production industry, for example, had been fragmented almost from its emergence in the 1890s. In the late 1930s, distribution became concentrated, mainly through the efforts of J. Arthur Rank, while the production sector, a network of flexibly specialized companies in and around London, boomed. After the war, the drop in admissions followed the U.S. with about a ten-year delay (Figure 6). The drop in the number of screens experienced the same lag but was more severe: about two-thirds of British cinema screens disappeared, versus only one-third in the U.S. In France, after the First World War, film production disintegrated rapidly and chaotically into a network of numerous small companies, while a few large firms dominated distribution and production finance. The result was a burgeoning industry, actually one of the fastest-growing French industries in the 1930s.

Figure 6

Admissions and Number of Screens in Britain, 1945-2005

Source: Screen Digest/Screen Finance/British Film Institute and Robertson 2001.

Several European companies attempted to (re-)enter international film distribution: Rank in the 1930s and 1950s, the International Film Finance Corporation in the 1960s, Gaumont in the 1970s, PolyGram in the 1970s and again in the 1990s, and Cannon in the 1980s. All of them failed in terms of long-run survival, even if they made profits in some years. The only postwar entry strategy that was successful in terms of survival was the direct acquisition of a Hollywood studio (Bakker 2000).

The Come-Back of Hollywood

From the mid-1970s onwards, the Hollywood studios revived. The slide in box office revenue was brought to a standstill. Revenues were stabilized by the joint effect of seven factors. First, the blockbuster movie increased cinema attendance. These movies were heavily marketed and supported by intensive television advertising; Jaws was one of the first of this kind and an enormous success. Second, the U.S. film industry received several kinds of tax breaks from the early 1970s onwards, which were kept in force until the mid-1980s, when Hollywood was in good shape again. Third, coinciding with the blockbuster movie and the tax breaks, film budgets increased substantially, resulting in higher perceived quality and a larger quality gap with television, drawing more consumers into the cinema. Fourth, the rise of multiplex cinemas – cinemas with several screens – increased consumer choice and the appeal of cinema by offering more variety within a single venue, thus decreasing the difference with television in this respect. Fifth, one could argue that the process of flexible specialization of the California film industry was completed in the early 1970s, making the industry ready to adapt more flexibly to changes in the market; MGM’s sale of its studio complex in 1970 marked the definitive end of an era. Sixth, new income streams from video sales and rentals and from cable television increased the revenues a high-quality film could generate. Seventh, European broadcasting deregulation substantially increased the demand for films by television stations.

From the 1990s onwards, further growth was driven by newer markets in Eastern Europe and Asia. Film industries from outside the West also grew substantially, such as those of Japan, Hong Kong, India and China. At the same time, the European Union started a large-scale subsidy program for its audiovisual industry, with mixed economic effects. By 1997, ten years after the start of the program, a film made in the European Union cost 500,000 euros on average, was seventy to eighty percent state-financed, and grossed 800,000 euros world-wide, reaching an audience of 150,000 persons. In contrast, the average American film cost fifteen million euros, was nearly one hundred percent privately financed, grossed 58 million euros, and reached 10.5 million persons (Dale 1997). This roughly seventy-fold difference in audience reached (and in revenues) per film is remarkable. Even when measured in gross return on investment or gross margin, the U.S. still had a fivefold and twofold lead over Europe, respectively.[1] In few other industries does such a pronounced difference exist.
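
The relative figures follow directly from the averages just quoted. A quick back-of-the-envelope check, using the gross definitions of footnote [1] (which disregard interest costs and distribution charges), reproduces them up to rounding:

```python
# Reproducing the EU/US comparison from the per-film averages quoted in the
# text (Dale 1997); gross ROI and gross margin as defined in footnote [1].

eu = {"cost": 0.5e6, "gross": 0.8e6, "viewers": 150_000}   # euros, persons
us = {"cost": 15e6, "gross": 58e6, "viewers": 10.5e6}

for name, f in (("EU", eu), ("US", us)):
    roi = (f["gross"] - f["cost"]) / f["cost"]       # gross return on investment
    margin = (f["gross"] - f["cost"]) / f["gross"]   # gross margin
    print(f"{name}: ROI {roi:.0%}, margin {margin:.0%}, "
          f"cost/viewer {f['cost'] / f['viewers']:.2f} euros, "
          f"revenue/viewer {f['gross'] / f['viewers']:.2f} euros")

# The roughly seventy-fold gap in audience reached per film:
print(f"audience ratio: {us['viewers'] / eu['viewers']:.0f}x")
```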

During the 1990s, the film industry moved into television broadcasting. In Europe, broadcasters often co-funded small-scale boutique film production. In the U.S., the Hollywood studios started to merge with broadcasters. In the 1950s the studios had experienced difficulties obtaining broadcasting licenses because their reputation had been compromised by the antitrust actions; they had to wait forty years before they could finally complete what they had intended.[2] Disney, for example, bought the ABC network, Paramount’s owner Viacom bought CBS, and General Electric, owner of NBC, bought Universal. At the same time, the feature film industry was becoming more connected to other entertainment industries, such as videogames, theme parks and musicals. With video game revenues now exceeding films’ box office revenues, it seems likely that feature films will simply be the flagship part of a large entertainment supply system that exploits the intellectual property in feature films in many different formats and markets.

Conclusion

The take-off of the film industry in the early twentieth century had been driven mainly by changes in demand. Cinema industrialized entertainment by standardizing it, automating it and making it tradable. After its early years, the industry experienced a quality race that led to increasing industrial concentration. Only later did geographical concentration take place, in Southern California. Cinema made a substantial contribution to productivity and total welfare, especially before television. After television, the industry experienced vertical disintegration, the flexible specialization of production, and a self-reinforcing process of increasing distribution channels and capacity as well as market growth. Cinema, then, was not only the first in a line of media industries that industrialized entertainment, but also the first in a series of international industries that industrialized services. The evolution of the film industry may thus give insight into technological change and its attendant welfare gains in many service industries to come.

Selected Bibliography

Allen, Robert C. Vaudeville and Film, 1895-1915. New York: Arno Press, 1980.

Bächlin, Peter. Der Film als Ware. Basel: Burg-Verlag, 1945.

Bakker, Gerben. “American Dreams: The European Film Industry from Dominance to Decline.” EUI Review (2000): 28-36.

Bakker, Gerben. “Stars and Stories: How Films Became Branded Products.” Enterprise and Society 2, no. 3 (2001a): 461-502.

Bakker, Gerben. Entertainment Industrialised: The Emergence of the International Film Industry, 1890-1940. Ph.D. dissertation, European University Institute, 2001b.

Bakker, Gerben. “Building Knowledge about the Consumer: The Emergence of Market Research in the Motion Picture Industry.” Business History 45, no. 1 (2003): 101-27.

Bakker, Gerben. “At the Origins of Increased Productivity Growth in Services: Productivity, Social Savings and the Consumer Surplus of the Film Industry, 1900-1938.” Working Papers in Economic History, No. 81, Department of Economic History, London School of Economics, 2004a.

Bakker, Gerben. “Selling French Films on Foreign Markets: The International Strategy of a Medium-Sized Film Company.” Enterprise and Society 5 (2004b): 45-76.

Bakker, Gerben. “The Decline and Fall of the European Film Industry: Sunk Costs, Market Size and Market Structure, 1895-1926.” Economic History Review 58, no. 2 (2005): 311-52.

Caves, Richard E. Creative Industries: Contracts between Art and Commerce. Cambridge, MA: Harvard University Press, 2000.

Christopherson, Susan, and Michael Storper. “Flexible Specialization and Regional Agglomerations: The Case of the U.S. Motion Picture Industry.” Annals of the Association of American Geographers 77, no. 1 (1987).

Christopherson, Susan, and Michael Storper. “The Effects of Flexible Specialization on Industrial Politics and the Labor Market: The Motion Picture Industry.” Industrial and Labor Relations Review 42, no. 3 (1989): 331-47.

Gomery, Douglas. The Coming of Sound to the American Cinema: A History of the Transformation of an Industry. Ph.D. dissertation, University of Wisconsin, 1975.

Gomery, Douglas. “The Coming of Television and the ‘Lost’ Motion Picture Audience.” Journal of Film and Video 37, no. 3 (1985): 5-11.

Gomery, Douglas. The Hollywood Studio System. London: MacMillan/British Film Institute, 1986; reprinted 2005.

Kraft, James P. Stage to Studio: Musicians and the Sound Revolution, 1890-1950. Baltimore: Johns Hopkins University Press, 1996.

Krugman, Paul R., and Maurice Obstfeld. International Economics: Theory and Policy, sixth edition. Reading, MA: Addison-Wesley, 2003.

Low, Rachael, and Roger Manvell. The History of the British Film, 1896-1906. London: George Allen & Unwin, 1948.

Michaelis, Anthony R. “The Photographic Arts: Cinematography.” In A History of Technology, Vol. V: The Late Nineteenth Century, c. 1850 to c. 1900, edited by Charles Singer, 734-51. Oxford: Clarendon Press, 1958; reprinted 1980.

Mokyr, Joel. The Lever of Riches: Technological Creativity and Economic Progress. Oxford: Oxford University Press, 1990.

Musser, Charles. The Emergence of Cinema: The American Screen to 1907. The History of American Cinema, Vol. I. New York: Scribner, 1990.

Sedgwick, John. “Product Differentiation at the Movies: Hollywood, 1946-65.” Journal of Economic History 63 (2002): 676-705.

Sedgwick, John, and Michael Pokorny. “The Film Business in Britain and the United States during the 1930s.” Economic History Review 57, no. 1 (2005): 79-112.

Sedgwick, John, and Mike Pokorny, editors. An Economic History of Film. London: Routledge, 2004.

Thompson, Kristin. Exporting Entertainment: America in the World Film Market, 1907-1934. London: British Film Institute, 1985.

Vogel, Harold L. Entertainment Industry Economics: A Guide for Financial Analysis, sixth edition. Cambridge: Cambridge University Press, 2004.

Gerben Bakker may be contacted at gbakker at essex.ac.uk


[1] Gross return on investment, disregarding interest costs and distribution charges, was 60 percent for European vs. 287 percent for U.S. films. Gross margin was 37 percent for European vs. 74 percent for U.S. films. Costs per viewer were 3.33 vs. 1.43 euros; revenues per viewer were 5.30 vs. 5.52 euros.

[2] The author is indebted to Douglas Gomery for this point.

Citation: Bakker, Gerben. “The Economic History of the International Film Industry”. EH.Net Encyclopedia, edited by Robert Whaples. February 10, 2008. URL http://eh.net/encyclopedia/the-economic-history-of-the-international-film-industry/

The Dust Bowl

Geoff Cunfer, Southwest Minnesota State University

What Was “The Dust Bowl”?

The phrase “Dust Bowl” holds a powerful place in the American imagination. It connotes a confusing mixture of concepts. Is the Dust Bowl a place? Was it an event? An era? American popular culture employs the term in all three ways. Ask most people about the Dust Bowl and they can place it in the Middle West, though in the imagination it wanders widely, from the Rocky Mountains, through the Great Plains, to Illinois and Indiana. Many people can situate the event in the 1930s. Ask what happened then, and a variety of stories emerge. A combination of severe drought and economic depression created destitution among farmers. Millions of desperate people took to the roads, seeking relief in California where they became exploited itinerant farm laborers. Farmers plowed up a pristine wilderness for profit, and suffered ecological collapse because of their recklessness. Dust Bowl stories, like its definitions, are legion, and now approach the mythological.

The words also evoke powerful graphic images taken from art and literature. Consider these lines from the opening chapter of John Steinbeck’s The Grapes of Wrath (1939):

“Now the wind grew strong and hard and it worked at the rain crust in the corn fields. Little by little the sky was darkened by the mixing dust, and carried away. The wind grew stronger. The rain crust broke and the dust lifted up out of the fields and drove gray plumes into the air like sluggish smoke. The corn threshed the wind and made a dry, rushing sound. The finest dust did not settle back to earth now, but disappeared into the darkening sky. … The people came out of their houses and smelled the hot stinging air and covered their noses from it. And the children came out of the houses, but they did not run or shout as they would have done after a rain. Men stood by their fences and looked at the ruined corn, drying fast now, only a little green showing through the film of dust. The men were silent and they did not move often. And the women came out of the houses to stand beside their men – to feel whether this time the men would break.”

When Americans hear the words “Dust Bowl,” grainy black-and-white photographs of devastated landscapes and destitute people leap to mind. Dorothea Lange and Arthur Rothstein classics bring the Dust Bowl vividly to life in our imaginations (Figures 1-4). For the musically inclined, Woody Guthrie’s Dust Bowl ballads define the event with evocative lyrics such as those in “The Great Dust Storm” (Figure 5). Some of America’s most memorable art – literature, photography, music – emerged from the Dust Bowl, and that art helped to define the event and build the myth in American popular culture.

The Dust Bowl was an event defined by artists and by government bureaucrats. It has become part of American mythology, an episode in the nation’s progression from the Pilgrims to Lexington and Concord, through Civil War and frontier settlement, to industrial modernization, Depression, and Dust Bowl. Many of the great themes of American history are tied up in the Dust Bowl story: agricultural settlement and frontier struggle; industrial mechanization with the arrival of tractors; the migration from farm to city, the transformation from rural to urban. Add the Great Depression and the rise of a powerful federal government, and we have covered many of the themes of a standard U.S. history survey course.

Despite the multiple uses of the phrase “Dust Bowl” it was an event which occurred in a specific place and time. The Dust Bowl was a coincidence of drought, severe wind erosion, and economic depression that occurred on the Southern and Central Great Plains during the 1930s. The drought – the longest and deepest in over a century of systematic meteorological observation – began in 1933 and continued through 1940. In 1941 rain poured down on the region, dust storms ceased, crops thrived, economic prosperity returned, and the Dust Bowl was over. But for those eight years crops failed, sandy soils blew and drifted over failed croplands, and rural people, unable to meet cash obligations, suffered through tax delinquency, farm foreclosure, business failure, and out-migration. The Dust Bowl was defined by a combination of:

  • extended severe drought and unusually high temperatures
  • episodic regional dust storms and routine localized wind erosion
  • agricultural failure, including both cropland and livestock operations
  • the collapse of the rural economy, affecting farmers, rural businesses, and local governments
  • an aggressive reform movement by the federal government
  • migration from rural to urban areas and out of the region

The Dust Bowl on the Great Plains coincided with the Great Depression. Though few plainsmen suffered directly from the 1929 stock market crash, they were too intimately connected to national and world markets to be immune from economic repercussions. The farm recession had begun in the 1920s; after the 1918 Armistice transformed Europe from an importer to an exporter of agricultural products, American farmers again faced their constant nemesis: production so high that prices were pushed downward. Farmers grew more cotton, wheat, and corn than the market could consume, and prices fell, fell more, and then hit rock bottom by the early 1930s. Cotton, one of the staple crops of the southern plains, for example, sold for 36 cents per pound in 1919, dropped to 18 cents in 1928, then collapsed to a dismal 6 cents per pound in 1931. One irony of the Dust Bowl is that the world could not really buy all of the crops Great Plains farmers produced. Even the severe drought and crop failures of the 1930s had little impact on the flood of farm commodities inundating the world market.

Routine Dust Storms on the Southern and Central Plains

The location of the drought and the dust storms shifted from place to place between 1934 and 1940 (Figure 6). The core of the Dust Bowl was in the Texas and Oklahoma panhandles, southwestern Kansas and southeastern Colorado. The drought began on the Great Plains, from the Dakotas through Texas and New Mexico, in 1931. The following year was wetter, but 1933 and 1934 set low-rainfall records across the plains. In some places it did not rain at all. Others quickly accumulated a deep deficit. Figure 7 shows the percent difference from average rainfall over five-year periods, with the location of the shifting Dust Bowl overlaid. Only a handful of counties (mapped in blue) had more rain than average between 1932 and 1940, and few counties fall into the 0 to -10 percent range. Most counties were 10 percent drier than average, or more, and more than eighty counties were at least 20 percent drier. Scientists now believe that the 1930s drought coincided with a severe La Niña event in the Pacific Ocean: cool sea surface temperatures reduced the amount of moisture entering the jet stream and directed it south of the continental U.S. The drought was deep and extensive, and it persisted for more than a decade.

Whenever there is drought on the southern and central plains, dust blows. The flat topography and continental climate mean that winds are routinely high. When soil moisture declines, plant cover, whether native plants or crops, diminishes in tandem. Normally dry conditions mean that native plants typically cover less than 60 percent of the ground surface, leaving the other 40-plus percent in bare, exposed soils. During the driest conditions native prairie vegetation sometimes covers less than 20 percent of the ground surface, exposing 80 percent or more of the soil to strong prairie winds. Failed crop fields are completely bare of vegetation. In these circumstances soil blows. Local wind erosion can drift soil from one field into ridges and ripples in a neighboring field (Figure 8). Stronger regional dust storms can move dirt many miles before it drifts down along fence lines and around buildings (Figure 9). In rare instances very large dust storms carry soils high into the air where they can travel for many hundreds of miles. These “black blizzards” are the most spectacular and memorable of dust storms, but happen only infrequently (Figure 10).

When wind erosion and dust storms began in the 1930s, experienced plains residents hardly welcomed the development, but neither did it surprise them. Dust storms were an occasional spring occurrence from Texas and New Mexico through Kansas and Colorado. They did not happen every year, but often enough to be treated casually. This series of excerpts from the Salina, Kansas, Journal and Herald in 1879 indicates that dust storms were a routine part of plains life in dry years:

“For the past few days the gentle winds have enveloped the city with dust decorations. And some of this time it has been intensely hot. Imagine the pleasantness of the situation.”

“During the past few days we have had several exhibitions of what dust can do when propelled by a gale. We had the disagreeable March winds, and saw with ample disgust the evolutions and gyrations of the dust. We have had enough of it, but will undoubtedly get much more of the same kind during this very disagreeable month.”

“Real estate moved considerably this week.”

“Another ‘hardest’ blow ever seen in Kansas … Salina was tantalized with a small sprinkle of rain Thursday afternoon. The wind and dust soon resumed full sway.”

“People have just got through digging from the pores of the skin the dirt driven there by the furious dust storms which for several days since our last issue have been lifting this county ‘clean off its toes.’ Even sinners have stood some chance of being translated with such favoring gales.”

“The wind which held high carnival in this section last Thursday, filled the air with such clouds of dust that darkness of the ‘consistency of twilight’ prevailed. Buildings across the street could not be distinguished. The title of all land about for a while was not worth a cotton hat – it was so ‘unsettled.’ It was of the nature of personal property, because it was not a ‘fixture’ and very moveable. The air was so filled with dust as to be stifling even within houses.”

The Salina newspapers reported dust storms many springs through the late nineteenth century. An item in the Journal in 1885 epitomizes the local attitude: “When the March winds commenced raising dust Monday, the average citizen calmly smiled and whispered ‘so natural!'”

What Made the 1930s Different?

Dust storms were not new to the region in the 1930s, but a number of demographic and cultural factors were. First, there were many more people living in the region in the 1930s than there had been in the 1880s. The population of the Great Plains – 450 counties stretching from Texas and New Mexico to the Dakotas and Montana – stood at only 800,000 in 1880; by 1930 it was seven times that, at 5.6 million. The dust storms thus affected many more people than ever before. And many of those people were relative newcomers, having arrived only in recent years. They had no personal or family memory of life in the plains, and many interpreted the arrival of episodic dust storms as an entirely new phenomenon. An example is the reminiscence of Minnie Zeller Doehring, written in 1981. Having moved with her family to western Kansas in 1906, at age 7, she reported: “I remember the first Dirt storm in Western Kansas. I think it was about 1911. And a drouth that year followed by a severe winter.” Neither she nor her family had experienced any of the nineteenth-century dust storms reported in local newspapers, so when one arrived during a dry spring five years after they arrived, it seemed like a brand-new development.

Second, this drought and sequence of dust storms coincided with an international economic depression, the worst in two centuries of American history. The financial stresses and personal misery of the Depression blended seamlessly into the environmental disasters of drought, crop failure, farm loss, and dust. It was difficult to assign blame. Were farmers failing because of the economic crisis? Bank failures? Landlords squeezing tenants? Drought? Dust storms? In the midst of these concurrent crises emerged an activist and newly powerful federal government. Franklin Roosevelt’s New Deal roared into Washington in 1933 with a landslide mandate from voters to fix all of the ills plaguing the nation: depression, bank failures, unemployment, agricultural overproduction, underconsumption, the list went on and on. And several items quickly added to that list of ills to be fixed were rural poverty, agricultural land use, soil erosion, and dust storms.

The drought and dust storms were certainly hard on farmers. Crop failure was widespread and repeated. In 1935, 46.6 million acres of crops failed on the Great Plains, with over 130 counties losing more than half their planted acreage. Many farmers lived on the edge of financial failure. In debt for land, tractor, automobile, and even for last year’s seed, one or two years of reduced income often meant bankruptcy. Tax delinquency became a serious problem throughout the plains. As landowners fell behind on their local property tax payments, county governments grew desperate. Many counties had delinquency rates over 40 percent for several consecutive years and were faced with laying off teachers, police, and other employees. A few counties considered closing county government altogether and merging with neighboring counties. Their only alternative was to foreclose on now nearly worthless farms which they could neither rent nor sell. Many families behind on mortgage payments and taxes simply packed up and left without notice. The crisis was not restricted to farmers, bankers, and county employees: throughout the plains, sales of tractors, automobiles, and fertilizer declined in the early 1930s, affecting small town merchants across the board.

Consider the example of William and Sallie DeLoach, typical southern plains farmers who moved from farm to farm through the early twentieth century, repeatedly trying to buy land and repeatedly losing it to the bank in the face of drought or low crop prices. After an earlier failed attempt to buy land, the family invested in a 177-acre cotton farm in Lamb County, Texas in 1924, paying 30 dollars per acre. A month later they passed up a chance to sell it for 35 dollars an acre. Within three months of the purchase, late summer rains failed to arrive, the cotton crop bloomed late, and the first freeze of winter killed it. Unable to make the upcoming mortgage payment, the DeLoaches forfeited their land and the 200 dollars they had already paid toward it. One bad season meant default. Through the rest of the 1920s the DeLoaches rented from Sallie’s father and farmed cotton in Lamb County. In September 1929, just weeks before the stock market crashed, William thought the time auspicious to invest in land again, and bought 90 acres. He farmed it, then rented part of it to another farmer. Rain was plentiful in 1931, and by the end of that year DeLoach had repaid back rent to his father-in-law, paid off all outstanding debts except his land mortgage, and started 1932 in good shape. But the 1930s were hard on the southern plains, with the extended drought, dust storms, and widespread poverty. The one bright spot for farmers was the farm subsidies instituted by Franklin Roosevelt’s New Deal. In 1933 DeLoach plowed up 55 acres of already-growing cotton in exchange for a check from the federal government. Lamb County led the state in the cotton reduction program, bringing nearly 1.4 million dollars into the county in 1933. Drought lingered over the Texas panhandle through 1934 and 1935, and by early 1936 DeLoach was beleaguered again. When the Supreme Court declared the Agricultural Adjustment Act (AAA) unconstitutional, it appeared that federal farm subsidies would disappear. A few weeks after that decision DeLoach had a visit from his real estate agent:

Mr. Gholson came by this A.M. and wanted to know what I was going to do about my land notes. I told him I could do nothing, only let them have the land back. … I told him I had payed the school tax for 1934. Owed the state and county for 1935, also the state for 1934. All tole [sic] about $37.50. He said he would pay that and we (wife & I) could deed the land back to the Nugent people. I hate to lose the land and what I have payed on it, but I can’t do any thing else. ‘Big fish eat the little ones.’ The law is take from the poor devil that wants a home, give to the rich. I have lost about $1000.00 on the land.

A week later:

Mr. Gholson came by. Told me about the deed he had drawn in Dallas. … He said if I would pay for the deed and stamps, which would be $5.00, the deal would be closed. I asked him if that meant just as the land stood now. He said yes. He said they would pay the balance of taxes. Well, they ought to. I have payed $800.00 or better on the land, but got behind and could not do any thing else. Any way my mind is at ease. I do not think Gholson or any of the cold blooded land grafters would lose any sleep on account of taking a home away from any poor devil.

For the third time in his career DeLoach defaulted and turned over his farm. Later that month Congress rewrote the AAA legislation to meet Constitutional requirements, and the farm programs have continued ever since. With federal program income again assured, DeLoach purchased yet another 68-acre farm in September 1936, moved the family onto it, and tried again. Other families were not as persistent; when crop failure led to bankruptcy they packed up and left the region. The term popularly applied to such emigrants, “Dust Bowl refugees,” assigned a single cause – dust storms – to what was in fact a complex and multi-causal event (Figure 11).

Like dust storms and agricultural setbacks, high out-migration was not new to the plains. Throughout the settlement period, from about 1870 to 1920, there was very high turnover in population. Many people moved into the region, but many moved out as well. James Malin found that ten-year population turnover on the western Kansas frontier ranged from 41 to 67 percent between 1895 and 1930. Many people were half farmers, half land speculators, buying frontier land cheap (or homesteading it for free), then selling a few years later on a rising market. People moved from farm to farm, always looking for a better opportunity, often following a succession of frontiers over a lifetime, from Ohio to Illinois to Kansas to Colorado. Out-migration from the Great Plains in the 1930s was not considerably higher than it had been over the previous fifty years. What changed in the 1930s was that new immigrants stopped moving in to replace those leaving. Many rural areas of the grassland began a slow population decline that had not yet bottomed out in 2000.

The New Deal Response to Drought and Dust Storms

Emigrants from the Great Plains were not new in the 1930s. Neither was drought, agricultural crisis, or dust storms. This drought and these dust storms were certainly more severe than those that wracked the plains in 1879-1880, in the mid 1890s, and again in 1911. And more people were adversely affected because total population was higher. But what was most different about the 1930s was the response of the federal government. In past crises, when farmers went bankrupt, when grassland counties lost 20 percent of their population, when dust storms descended, the federal government stood aloof. It felt no responsibility for the problems, no popular mandate to solve them. Just the opposite was the case in the 1930s. The New Deal set out to solve the nation’s problems, and in the process contributed to the creation of the Dust Bowl as an historic event of mythological proportions.

The economic and agricultural disaster of the 1930s provided an opening for experimentation with federal land use management. The idea had begun among economists in agricultural colleges in the 1920s who proposed removing “submarginal” land from crop production. “Submarginal” referred to land low in productivity, unsuited for the production of farm crops, or incapable of profitable cultivation. A “land utilization” movement emerged in the 1920s to classify farm land as good, poor, marginal, or submarginal, and to forcibly retire the latter from production. Such rational planning aimed to reduce farm poverty, contract chronic overproduction of farm crops, and protect land vulnerable to damage. M.L. Wilson, of Montana State Agricultural College, focused the academic movement while Lewis C. Gray, at the Bureau of Agricultural Economics (BAE), led the effort within the U.S. Department of Agriculture. The land utilization movement began well before the 1930s, but the drought and dust storms of that decade provided a fortuitous justification for a land use policy already on the table, and newly created agencies like the Soil Conservation Service (SCS), the Resettlement Administration (RA), and the Farm Security Administration (FSA) were the loudest to publicize and deplore the Dust Bowl wracking America’s heartland.

Whereas the land use adjustment movement had begun as an attempt to solve chronic rural poverty, the arrival of dust storms in 1934 provided a second justification for aggressive federal action to change land use practices. Federal bureaucrats created the central narrative of the Dust Bowl, in part because it emphasized the need for these new reform agencies. The FSA launched a sophisticated public relations campaign to publicize the disaster unfolding in the Great Plains. It hired world-class photographers to document the suffering of plains people, giving them specific instructions from Washington to photograph the most eroded landscapes and the most destitute people. Dorothea Lange’s photographs of emigrants on the road to California still stand as some of the most evocative images in American history (Figures 12-13). The Resettlement Administration also hired filmmaker Pare Lorentz to make a series of movies, including “The Plow that Broke the Plains.”

The narrative behind this publicity campaign was this: in the nineteenth and early twentieth centuries farmers had come to the dry western plains, encouraged by a misguided Homestead Act, where they plowed up land unsuited for farming. The grassland should have been left in native grass for grazing, but small farmers, hoping to make profits growing cash crops like wheat, had plowed the land, exposing soils to relentless winds. When serious drought struck in the 1930s, the wounded landscape succumbed to dust storms that devastated farms, farmers, and local economies. The result was a mass exodus of desperately poor people, a social failure caused by misuse of land. The profit motive and private land ownership were behind this failure, and only a scientifically grounded federal bureaucracy could manage land use wisely in the interests of all Americans, rather than for the profit of a few individuals. Federal agents would retire land from cultivation, return it to grassland, and teach remaining farmers how to use their land more carefully to prevent erosion. This effort would, of course, require large budgets and thousands of employees, but it was vital to resolving a rural disaster.

The New Deal government, with Congressional support and appropriations, began to put the reform plan into place. A host of new agencies vied to manage the program, including the FSA, the SCS, the RA, and the Agricultural Adjustment Administration (AAA). Each implemented a variety of reforms. The RA began purchasing “submarginal” land from farmers, eventually acquiring some 10 million acres of former farmland in the Great Plains. (These lands are now mostly managed by the U.S. Forest Service as National Grasslands leased to nearby private ranchers for grazing.) The RA and the FSA worked to relocate destitute farmers on better lands, or to move them out of farming altogether. The SCS established demonstration projects in counties across the nation, where local cooperator farmers implemented recommended soil conservation techniques on their farms, such as fallowing, strip cropping, contour plowing, terracing, growing cover crops, and a variety of cultivation techniques. There were efforts in each county to establish Land Use Planning Committees made up of local farmers and federal agents who would have authority over land use practices on private farms. These committees functioned for several years in the late 1930s but ended in most places by the early 1940s. The most important and expensive measure was the AAA’s development of a comprehensive system of farm subsidies, which paid farmers cash for reducing their acreage of commodity crops. The subsidies, created as an emergency Depression measure, have become routine and persist 70 years later. They brought millions of dollars into nearly every farming county in the U.S. and permanently transformed the economics of agriculture. In a multitude of innovative ways the federal government set out to remake American farming. The Dust Bowl narrative served exceedingly well to justify these massive and revolutionary changes in farming, America’s most common occupation for most of its history.

Conclusion

The Dust Bowl finally ended in 1941 with the arrival of drenching rains on the southern and central plains and with the advent of World War II. The rains restored crops and settled the dust. The war diverted public and government attention from the plains. In a telling move, the FSA photography corps was reconstituted as the Office of War Information, the propaganda wing of the government’s war effort. The narrative of World War II replaced the Dust Bowl narrative in the public’s attention. Congress diverted funding away from the Great Plains and toward mobilization. The Land Utilization Program stopped buying submarginal land, and the county Land Use Planning Committees ceased to function. Some of the New Deal reforms became permanent: the AAA subsidy system has continued to the present, and the Soil Conservation Service (now the Natural Resources Conservation Service) carved out a stable niche promoting wise agricultural land management and soil mapping.

Ironically, overall land use on the Great Plains changed little during the decade. About the same amount of land was devoted to crops in the second half of the twentieth century as in the first half. Farmers grew the same crops in the same mixtures. Many implemented the milder reforms promoted by New Dealers – contour plowing, terracing – but little cropland was converted back to pasture. The “submarginal” regions have continued to grow wheat, sorghum, and other crops in roughly the same quantities. Despite these facts, the public has generally adopted the Dust Bowl narrative. If asked, most people will identify the Dust Bowl as having been caused by misuse of land. The descendants of the federal agencies created in the 1930s still claim to have played a leading role in solving the crisis. Periodic droughts and dust storms have returned to the region since 1941, notably in the early 1950s and again in the 1970s. Towns in the core dust storm region still have dust storms in dry years; Lubbock, Texas, for example, experienced 35 dust storms in 1973-74. Rural depopulation continues in the Great Plains (although cities in the region have grown even faster than rural places have declined). None of these droughts, dust storms, or periods of depopulation has received the concentrated public attention that those of the 1930s did. Nonetheless, environmentalists and critics of modern agricultural systems continue to warn that unless we reform modern farming the Dust Bowl may return.

References and Additional Reading

Bonnifield, Mathew P. The Dust Bowl: Men, Dirt, and Depression. Albuquerque: University of New Mexico Press, 1979.

Cronon, William. “A Place for Stories: Nature, History, and Narrative.” Journal of American History 78 (March 1992): 1347-1376.

Cunfer, Geoff. “Causes of the Dust Bowl.” In Past Time, Past Place: GIS for History, edited by Anne Kelly Knowles, 93-104. Redlands, CA: ESRI Press, 2002.

Cunfer, Geoff. “The New Deal’s Land Utilization Program in the Great Plains.” Great Plains Quarterly 21 (Summer 2001): 193-210.

Cunfer, Geoff. On the Great Plains: Agriculture and Environment. College Station: Texas A&M University Press, 2005.

The Future of the Great Plains: Report of the Great Plains Committee. Washington: Government Printing Office, 1936.

Ganzel, Bill. Dust Bowl Descent. Lincoln: University of Nebraska Press, 1984.

Great Plains Quarterly 6 (Spring 1986), special issue on the Dust Bowl.

Gregory, James N. American Exodus: The Dust Bowl Migration and Okie Culture in California. New York: Oxford University Press, 1989.

Guthrie, Woody. Dust Bowl Ballads. New York: Folkways Records, 1964.

Gutmann, Myron P. and Geoff Cunfer. “A New Look at the Causes of the Dust Bowl.” Charles L. Wood Agricultural History Lecture Series, no. 99-1. Lubbock: International Center for Arid and Semiarid Land Studies, Texas Tech University, 1999.

Hansen, Zeynep K. and Gary D. Libecap. “Small Farms, Externalities, and the Dust Bowl of the 1930s.” Journal of Political Economy 112 (2004): 665-694.

Hurt, R. Douglas. The Dust Bowl: An Agricultural and Social History. Chicago: Nelson-Hall, 1981.

Lookingbill, Brad. Dust Bowl USA: Depression America and the Ecological Imagination, 1929-1941. Athens: Ohio University Press, 2001.

Lorentz, Pare. The Plow that Broke the Plains. Washington: Resettlement Administration, 1936.

Malin, James C. “Dust Storms, 1850-1900.” Kansas Historical Quarterly 14 (May, August, and November 1946): 129-144, 265-296, 391-413.

Malin, James C. Essays on Historiography. Ann Arbor, Michigan: Edwards Brothers, 1946.

Malin, James C. The Grassland of North America: Prolegomena to Its History. Lawrence, Kansas: privately printed, 1961.

Riney-Kehrberg, Pamela. Rooted in Dust: Surviving Drought and Depression in Southwestern Kansas. Lawrence: University Press of Kansas, 1994.

Riney-Kehrberg, Pamela, editor. Waiting on the Bounty: The Dust Bowl Diary of Mary Knackstedt Dyck. Iowa City: University of Iowa Press, 1999.

Svobida, Lawrence. Farming the Dust Bowl: A Firsthand Account from Kansas. Lawrence: University Press of Kansas, 1986.

Wooten, H.H. The Land Utilization Program, 1934 to 1964: Origin, Development, and Present Status. U.S.D.A. Economic Research Service Agricultural Economic Report no. 85. Washington: Government Printing Office, 1965.

Worster, Donald. Dust Bowl: The Southern Plains in the 1930s. New York: Oxford University Press, 1979.

Wunder, John R., Frances W. Kaye, and Vernon Carstensen. Americans View Their Dust Bowl Experience. Niwot: University Press of Colorado, 1999.

Citation: Cunfer, Geoff. “The Dust Bowl”. EH.Net Encyclopedia, edited by Robert Whaples. August 18, 2004. URL http://eh.net/encyclopedia/the-dust-bowl/

The Depression of 1893

David O. Whitten, Auburn University

The Depression of 1893 was one of the worst in American history, with the unemployment rate exceeding ten percent for half a decade. This article describes economic developments in the decades leading up to the depression; the performance of the economy during the 1890s; domestic and international causes of the depression; and political and social responses to the depression.

The Depression of 1893 can be seen as a watershed event in American history. It was accompanied by violent strikes, the climax of the Populist and free silver political crusades, the creation of a new political balance, the continuing transformation of the country’s economy, major changes in national policy, and far-reaching social and intellectual developments. Business contraction shaped the decade that ushered out the nineteenth century.

Unemployment Estimates

One way to measure the severity of the depression is to examine the unemployment rate. Table 1 provides estimates of unemployment, which are derived from data on output — annual unemployment was not directly measured until 1929, so there is no consensus on the precise magnitude of the unemployment rate of the 1890s. Despite the differences in the two series, however, it is obvious that the Depression of 1893 was an important event. The unemployment rate exceeded ten percent for five or six consecutive years. The only other time this occurred in the history of the US economy was during the Great Depression of the 1930s.

Timing and Depth of the Depression

The National Bureau of Economic Research estimates that the economic contraction began in January 1893 and continued until June 1894. The economy then grew until December 1895, when it was hit by a second recession that lasted until June 1897. Estimates of annual real gross national product (which adjust for this period’s deflation) are fairly crude, but they generally suggest that real GNP fell about 4% from 1892 to 1893 and another 6% from 1893 to 1894. By 1895 the economy had grown past its earlier peak, but real GNP fell about 2.5% from 1895 to 1896. During this period population grew at about 2% per year, so real GNP per person did not surpass its 1892 level until 1899. Immigration, which had averaged over 500,000 people per year in the 1880s and which would surpass one million people per year in the first decade of the 1900s, averaged only 270,000 from 1894 to 1898.
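
The per-person arithmetic is simple compounding: with population growing at roughly 2% per year, total real GNP must exceed its 1892 level by a factor of 1.02^n before GNP per person regains its 1892 level n years later. A minimal illustration of that arithmetic, using only the approximate growth rate quoted above:

```python
# With population growing ~2% per year, real GNP per person recovers its
# 1892 level only when total real GNP has grown by the same cumulative
# factor as population.

POP_GROWTH = 0.02

for year in range(1893, 1900):
    n = year - 1892
    required = (1 + POP_GROWTH) ** n
    print(f"{year}: total real GNP must reach {required:.3f}x its 1892 level")

# By 1899 (n = 7) the required factor is about 1.15, which is why GNP per
# person did not surpass its 1892 level until then, even though total GNP
# had passed its earlier peak by 1895.
```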

Table 1
Estimates of Unemployment during the 1890s

Year Lebergott Romer
1890 4.0% 4.0%
1891 5.4 4.8
1892 3.0 3.7
1893 11.7 8.1
1894 18.4 12.3
1895 13.7 11.1
1896 14.5 12.0
1897 14.5 12.4
1898 12.4 11.6
1899 6.5 8.7
1900 5.0 5.0

Source: Romer, 1984

The depression struck an economy that was more like the economy of 1993 than that of 1793. By 1890, the US economy generated one of the highest levels of output per person in the world — below that of Britain, but higher than in the rest of Europe. Agriculture no longer dominated the economy, producing only about 19 percent of GNP, well below the 30 percent produced in manufacturing and mining. Agriculture’s share of the labor force, which had been about 74% in 1800 and 60% in 1860, had fallen to roughly 40% by 1890. As Table 2 shows, only the South remained a predominantly agricultural region. Throughout the country few families were self-sufficient; most relied on selling their output or labor in the market — unlike those living in the country one hundred years earlier.

Table 2
Agriculture’s Share of the Labor Force by Region, 1890

Northeast 15%
Middle Atlantic 17%
Midwest 43%
South Atlantic 63%
South Central 67%
West 29%

Economic Trends Preceding the 1890s

Between 1870 and 1890 the number of farms in the United States rose by nearly 80 percent, to 4.5 million, and increased by another 25 percent by the end of the century. Farm property value grew by 75 percent, to $16.5 billion, and by 1900 had increased by another 25 percent. The advancing checkerboard of tilled fields in the nation’s heartland represented a vast indebtedness. Nationwide, about 29% of farmers were encumbered by mortgages; one contemporary observer estimated that there were 2.3 million farm mortgages nationwide in 1890, worth over $2.2 billion. But farmers in the plains were much more likely to be in debt: Kansas croplands were mortgaged to 45 percent of their true value, those in South Dakota to 46 percent, in Minnesota to 44 percent, in Montana to 41 percent, and in Colorado to 34 percent. Debt covered a comparable proportion of all farmlands in those states. Under favorable conditions the millions of dollars of annual charges on farm mortgages could be borne, but a declining economy brought foreclosures and tax sales.

Railroads opened new areas to agriculture, linking these to rapidly changing national and international markets. Mechanization, the development of improved crops, and the introduction of new techniques increased productivity and fueled a rapid expansion of farming operations. The output of staples skyrocketed. Yields of wheat, corn, and cotton doubled between 1870 and 1890 though the nation’s population rose by only two-thirds. Grain and fiber flooded the domestic market. Moreover, competition in world markets was fierce: Egypt and India emerged as rival sources of cotton; other areas poured out a growing stream of cereals. Farmers in the United States read the disappointing results in falling prices. Over 1870-73, corn and wheat averaged $0.463 and $1.174 per bushel and cotton $0.152 per pound; twenty years later they brought but $0.412 and $0.707 a bushel and $0.078 a pound. In 1889 corn fell to ten cents in Kansas, about half the estimated cost of production. Some farmers in need of cash to meet debts tried to increase income by increasing output of crops whose overproduction had already demoralized prices and cut farm receipts.
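
The percentage declines implied by the prices just quoted can be worked out directly; the short sketch below simply restates the figures from the text:

```python
# Percent declines in staple prices, 1870-73 average versus twenty years
# later, computed from the figures quoted in the text.

prices = {                      # (early 1870s, early 1890s)
    "corn ($/bushel)":  (0.463, 0.412),
    "wheat ($/bushel)": (1.174, 0.707),
    "cotton ($/pound)": (0.152, 0.078),
}

for crop, (early, late) in prices.items():
    decline = 100 * (early - late) / early
    print(f"{crop}: {decline:.0f}% decline")
# Output: corn about 11%, wheat about 40%, cotton about 49%.
```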

Railroad construction was an important spur to economic growth. Expansion peaked between 1879 and 1883, when eight thousand miles of track a year, on average, were built, including the Southern Pacific, Northern Pacific and Santa Fe. An even higher peak was reached in the late 1880s, and the roads provided important markets for lumber, coal, iron, steel, and rolling stock.

The post-Civil War generation saw an enormous growth of manufacturing. Industrial output rose by some 296 percent, reaching in 1890 a value of almost $9.4 billion. In that year the nation’s 350,000 industrial firms employed nearly 4,750,000 workers. Iron and steel paced the progress of manufacturing. Farm and forest continued to provide raw materials for such established enterprises as cotton textiles, food, and lumber production. Heralding the machine age, however, was the growing importance of extractives — raw materials for a lengthening list of consumer goods and for producing and fueling locomotives, railroad cars, industrial machinery and equipment, farm implements, and electrical equipment for commerce and industry. The swift expansion and diversification of manufacturing allowed a growing independence from European imports and was reflected in the prominence of new goods among US exports. Already the value of American manufactures was more than half the value of European manufactures and twice that of Britain.

Onset and Causes of the Depression

The depression, which was signaled by a financial panic in 1893, has been blamed on the deflation dating back to the Civil War; the gold standard and monetary policy; underconsumption (the economy was producing goods and services at a higher rate than society was consuming, and the resulting inventory accumulation led firms to reduce employment and cut back production); a general economic unsoundness (a reference less to tangible economic difficulties and more to a feeling that the economy was not running properly); and government extravagance.

Economic indicators signaling an 1893 business recession in the United States were largely obscured. The economy had improved during the previous year. Business failures had declined, and the average liabilities of failed firms had fallen by 40 percent. The country’s position in international commerce had improved. During the late nineteenth century, the United States ordinarily had a negative net balance of payments: passenger and cargo fares paid to foreign ships that carried most American overseas commerce, insurance charges, tourists’ expenditures abroad, and returns to foreign investors more than offset the effect of a positive merchandise balance. In 1892, however, improved agricultural exports had reduced the previous year’s net negative balance from $89 million to $20 million. Moreover, output of non-agricultural consumer goods had risen by more than 5 percent, and business firms were believed to have an ample backlog of unfilled orders as 1893 opened. The number of checks cleared between banks, both in the nation at large and outside New York, as well as factory employment, wholesale prices, and railroad freight ton mileage, advanced through the early months of the new year.

Yet several monthly series of indicators showed that business was falling off. Building construction had peaked in April 1892, later moving irregularly downward, probably in reaction to overbuilding. The decline continued until the turn of the century, when construction volume finally turned up again. Weakness in building was transmitted to the rest of the economy, dampening general activity through restricted investment opportunities and curtailed demand for construction materials. Meanwhile, a similar uneven downward drift in business activity after spring 1892 was evident from a composite index of cotton takings (cotton turned into yarn, cloth, etc.) and raw silk consumption, rubber imports, tin and tin plate imports, pig iron manufactures, bituminous and anthracite coal production, crude oil output, railroad freight ton mileage, and foreign trade volume. Pig iron production had crested in February 1892, followed by stock prices and business incorporations six months later.

The economy exhibited other weaknesses as the March 1893 date for Grover Cleveland’s inauguration to the presidency drew near. One of the most serious was in agriculture. Storm, drought, and overproduction during the preceding half-dozen years had reversed the remarkable agricultural prosperity and expansion of the early 1880s in the wheat, corn, and cotton belts. Wheat prices tumbled twenty cents per bushel in 1892. Corn held steady, but at a low figure and on a fall of one-eighth in output. Twice as great a decline in production dealt a severe blow to the hopes of cotton growers: the season’s short crop canceled gains anticipated from a recovery of one cent in prices to 8.3 cents per pound, close to the average level of recent years. Midwestern and Southern farming regions seethed with discontent as growers watched staple prices fall by as much as two-thirds after 1870 and all farm prices by two-fifths; meanwhile, the general wholesale index fell by one-fourth. The situation was grave for many. Farmers’ terms of trade had worsened, and dollar debts willingly incurred in good times to permit agricultural expansion were becoming unbearable burdens. Debt payments and low prices restricted agrarian purchasing power and demand for goods and services. Significantly, both output and consumption of farm equipment began to fall as early as 1891, marking a decline in agricultural investment. Moreover, foreclosure of farm mortgages reduced the ability of mortgage companies, banks, and other lenders to convert their earning assets into cash because the willingness of investors to buy mortgage paper was reduced by the declining expectation that they would yield a positive return.

Slowing investment in railroads was an additional deflationary influence. Railroad expansion had long been a potent engine of economic growth, ranging from 15 to 20 percent of total national investment in the 1870s and 1880s. Construction was a rough index of railroad investment. The amount of new track laid yearly peaked at 12,984 miles in 1887, after which it fell off steeply. Capital outlays rose through 1891 to provide needed additions to plant and equipment, but the rate of growth could not be sustained. Unsatisfactory earnings and a low return for investors indicated the system was overbuilt and overcapitalized, and reports of mismanagement were common. In 1892, only 44 percent of rail shares outstanding returned dividends, although twice that proportion of bonds paid interest. In the meantime, the completion of trunk lines dried up local capital sources. Political antagonism toward railroads, spurred by the roads’ immense size and power and by real and imagined discrimination against small shippers, made the industry less attractive to investors. Declining growth reduced investment opportunity even as rail securities became less appealing. Capital outlays fell in 1892 despite easy credit during much of the year. The markets for ancillary industries, like iron and steel, felt the impact of falling railroad investment as well; at times in the 1880s rails had accounted for 90 percent of the country’s rolled steel output. In an industry whose expansion had long played a vital role in creating new markets for suppliers, lagging capital expenditures loomed large in the onset of depression.

European Influences

European depression was a further source of weakness as 1893 began. Recession struck France in 1889, and business slackened in Germany and England the following year. Contemporaries dated the English downturn from a financial panic in November. Monetary stringency was a basic cause of economic hard times. Because specie — gold and silver — was regarded as the only real money, and paper money was available in multiples of the specie supply, when people viewed the future with doubt they stockpiled specie and rejected paper. The availability of specie was limited, so the longer hard times prevailed the more difficult it was for anyone to secure hard money. In addition to monetary stringency, the collapse of extensive speculations in Australian, South African, and Argentine properties and a sharp break in securities prices marked the advent of severe contraction. The great banking house of Baring Brothers, caught with excessive holdings of Argentine securities in a falling market, shocked the financial world by suspending business on November 20, 1890. Within a year of the crisis, commercial stagnation had settled over most of Europe. The contraction was severe and long-lived. In England many indices fell to 80 percent of capacity; wholesale prices overall declined nearly 6 percent in two years and had declined 15 percent by 1894. An index of the prices of principal industrial products declined by almost as much. In Germany, contraction lasted three times as long as the average for the period 1879-1902. Not until mid-1895 did Europe begin to revive. Full prosperity returned a year or more later.

Panic in the United Kingdom and falling trade in Europe brought serious repercussions in the United States. The immediate result was near panic in New York City, the nation’s financial center, as British investors sold their American stocks to obtain funds. Uneasiness spread through the country, fostered by falling stock prices, monetary stringency, and an increase in business failures. Liabilities of failed firms during the last quarter of 1890 were $90 million — twice those in the preceding quarter. Only the normal year’s end grain exports, destined largely for England, averted a gold outflow.

Circumstances moderated during the early months of 1891, although gold flowed to Europe and business failures remained high. Credit eased, if slowly: in response to pleas for relief, the federal treasury began the premature redemption of government bonds to put additional money into circulation, and the end of the harvest trade reduced demand for credit. Commerce quickened in the spring. Perhaps anticipation of brisk trade during the harvest season stimulated the revival of investment and business; in any event, the harvest of 1891 buoyed the economy. A bumper American wheat crop coincided with poor yields in Europe to increase exports and the inflow of specie: US exports in fiscal 1892 were $150 million greater than in the preceding year, a full 1 percent of gross national product. The improved market for American crops was primarily responsible for a brief cycle of prosperity in the United States that Europe did not share. Business thrived until signs of recession began to appear in late 1892 and early 1893.

The business revival of 1891-92 only delayed an inevitable reckoning. While domestic factors led in precipitating a major downturn in the United States, the European contraction operated as a powerful depressant. Commercial stagnation in Europe decisively affected the flow of foreign investment funds to the United States. Although foreign investment in this country and American investment abroad rose overall during the 1890s, changing business conditions temporarily reversed these flows: Americans sold off foreign holdings, and foreigners sold off their holdings of American assets. Initially, contraction abroad forced European investors to sell substantial holdings of American securities; then the rate of new foreign investment fell off. The repatriation of American securities prompted gold exports, deflating the money stock and depressing prices. A reduced inflow of foreign capital slowed expansion and may have exacerbated the declining growth of the railroads; undoubtedly, it dampened aggregate demand.

As foreign investors sold their holdings of American stocks for hard money, specie left the United States. Funds secured through foreign investment in domestic enterprise were important in helping the country meet its usual balance of payments deficit. Reduced foreign investment during the 1890s was one of the factors that, together with a continued negative balance of payments, forced the United States to export gold almost continuously from 1892 to 1896. The impact of depression abroad on the flow of capital to this country can be inferred from the history of new capital issues in Britain, the source of perhaps 75 percent of overseas investment in the United States. British issues varied as shown in Table 3.

Table 3
British New Capital Issues, 1890-1898 (millions of pounds sterling)

1890 142.6
1891 104.6
1892 81.1
1893 49.1
1894 91.8
1895 104.7
1896 152.8
1897 157.3
1898 150.2

Source: Hoffmann, p. 193

Simultaneously, the share of new British investment sent abroad fell from one-fourth in 1891 to one-fifth two years later. Over that same period, British net capital flows abroad declined by about 60 percent; not until 1896 and 1897 did they resume earlier levels.

Thus, the recession that began in 1893 had deep roots. The slowdown in railroad expansion, decline in building construction, and foreign depression had reduced investment opportunities, and, following the brief upturn effected by the bumper wheat crop of 1891, agricultural prices fell, as did exports and commerce in general. By the end of 1893, 15,242 business failures, averaging $22,751 in liabilities, had been reported. Plagued by successive contractions of credit, many essentially sound firms that would have survived under ordinary circumstances failed. Liabilities totaled a staggering $357 million. This was the crisis of 1893.

Response to the Depression

The financial crises of 1893 accelerated the recession that was evident early in the year into a major contraction that spread throughout the economy. Investment, commerce, prices, employment, and wages remained depressed for several years. Changing circumstances and expectations, and a persistent federal deficit, subjected the treasury gold reserve to intense pressure and generated sharp counterflows of gold. The treasury was driven four times between 1894 and 1896 to resort to bond issues totaling $260 million to obtain specie to augment the reserve. Meanwhile, restricted investment, income, and profits spelled low consumption, widespread suffering, and occasionally explosive labor and political struggles. An extensive but incomplete revival occurred in 1895. The Democratic nomination of William Jennings Bryan for the presidency on a free silver platform the following year amid an upsurge of silverite support contributed to a second downturn peculiar to the United States. Europe, just beginning to emerge from depression, was unaffected. Only in mid-1897 did recovery begin in this country; full prosperity returned gradually over the ensuing year and more.

The economy that emerged from the depression differed profoundly from that of 1893. Consolidation and the influence of investment bankers were more advanced. The nation’s international trade position was more advantageous: huge merchandise exports assured a positive net balance of payments despite large tourist expenditures abroad, returns on foreign investments in the United States, and a continued reliance on foreign shipping to carry most of America’s overseas commerce. Moreover, new industries were rapidly moving to ascendancy, and manufactures were coming to replace farm produce as the staple products and exports of the country. The era revealed the outlines of an emerging industrial-urban economic order that portended great changes for the United States.

Hard times intensified social sensitivity to a wide range of problems accompanying industrialization by making those problems more severe. Those whom depression struck hardest, as well as much of the general public and the major Protestant churches, sharpened their civic consciousness about currency and banking reform, regulation of business in the public interest, and labor relations. Although nineteenth-century liberalism and the tradition of administrative nihilism that it favored remained viable, public opinion began slowly to swing toward the governmental activism and interventionism associated with modern industrial societies, erecting in the process the intellectual foundation for the reform impulse that would be called Progressivism in twentieth-century America. Most important of all, these opposed tendencies in thought set the boundaries within which Americans debated the most vital questions of their shared experience for the next century. The depression stood as a reminder of the perils of business slumps and of the claims of commonweal over avarice and of principle over principal.

Government responses to depression during the 1890s exhibited elements of complexity, confusion, and contradiction. Yet they also showed a pattern that confirmed the transitional character of the era and clarified the role of the business crisis in the emergence of modern America. Hard times, intimately related to developments issuing in an industrial economy characterized by increasingly vast business units and concentrations of financial and productive power, were a major influence on society, thought, politics, and thus, unavoidably, government. Awareness of the deep-rooted changes attending industrialization, urbanization, and other dimensions of the transformation of the United States, and proposals for adapting to them, long antedated the economic contraction of the nineties.

Selected Bibliography

*I would like to thank Douglas Steeples, retired dean of the College of Liberal Arts and professor of history, emeritus, Mercer University. Much of this article has been taken from Democracy in Desperation: The Depression of 1893 by Douglas Steeples and David O. Whitten, which was declared an Exceptional Academic Title by Choice. Democracy in Desperation includes the most recent and extensive bibliography for the depression of 1893.

Clanton, Gene. Populism: The Humane Preference in America, 1890-1900. Boston: Twayne, 1991.

Friedman, Milton, and Anna Jacobson Schwartz. A Monetary History of the United States, 1867-1960. Princeton: Princeton University Press, 1963.

Goodwyn, Lawrence. Democratic Promise: The Populist Movement in America. New York: Oxford University Press, 1976.

Grant, H. Roger. Self-Help in the 1890s Depression. Ames: Iowa State University Press, 1983.

Higgs, Robert. The Transformation of the American Economy, 1865-1914. New York: Wiley, 1971.

Himmelberg, Robert F. The Rise of Big Business and the Beginnings of Antitrust and Railroad Regulation, 1870-1900. New York: Garland, 1994.

Hoffmann, Charles. The Depression of the Nineties: An Economic History. Westport, CT: Greenwood Publishing, 1970.

Jones, Stanley L. The Presidential Election of 1896. Madison: University of Wisconsin Press, 1964.

Kindleberger, Charles Poor. Manias, Panics, and Crashes: A History of Financial Crises. Revised Edition. New York: Basic Books, 1989.

Kolko, Gabriel. Railroads and Regulation, 1877-1916. Princeton: Princeton University Press, 1965.

Lamoreaux, Naomi R. The Great Merger Movement in American Business, 1895-1904. New York: Cambridge University Press, 1985.

Rees, Albert. Real Wages in Manufacturing, 1890-1914. Princeton, NJ: Princeton University Press, 1961.

Ritter, Gretchen. Goldbugs and Greenbacks: The Antimonopoly Tradition and the Politics of Finance in America. New York: Cambridge University Press, 1997.

Romer, Christina. “Spurious Volatility in Historical Unemployment Data.” Journal of Political Economy 94, no. 1 (1986): 1-37.

Schwantes, Carlos A. Coxey’s Army: An American Odyssey. Lincoln: University of Nebraska Press, 1985.

Steeples, Douglas, and David Whitten. Democracy in Desperation: The Depression of 1893. Westport, CT: Greenwood Press, 1998.

Timberlake, Richard. “Panic of 1893.” In Business Cycles and Depressions: An Encyclopedia, edited by David Glasner. New York: Garland, 1997.

White, Gerald Taylor. Years of Transition: The United States and the Problems of Recovery after 1893. University, AL: University of Alabama Press, 1982.

Citation: Whitten, David. “Depression of 1893”. EH.Net Encyclopedia, edited by Robert Whaples. August 14, 2001. URL http://eh.net/encyclopedia/the-depression-of-1893/

An Economic History of Denmark

Ingrid Henriksen, University of Copenhagen

Denmark is located in Northern Europe between the North Sea and the Baltic. Today Denmark consists of the Jutland Peninsula bordering Germany and the Danish Isles and covers 43,069 square kilometers (16,629 square miles).1 The present nation is the result of several cessions of territory throughout history. The last of the former Danish territories in southern Sweden were lost to Sweden in 1658, following one of the numerous wars between the two nations, which especially marred the sixteenth and seventeenth centuries. Following defeat in the Napoleonic Wars, Norway was separated from Denmark in 1814. After the last major war, the Second Schleswig War in 1864, Danish territory was further reduced by a third when Schleswig and Holstein were ceded to Germany. After a regional referendum in 1920 only North Schleswig returned to Denmark. Finally, Iceland withdrew from the union with Denmark in 1944. The following will deal with the geographical unit of today’s Denmark.

Prerequisites of Growth

Throughout history a number of advantageous factors have shaped the Danish economy. From this perspective it may not be surprising to find today’s Denmark among the richest societies in the world. According to the OECD, it ranked seventh in 2004, with income of $29,231 per capita (PPP). Although we can identify a number of turning points and breaks, this long-run position has changed little over the period for which we have quantitative evidence. Thus Maddison (2001), in his estimate of GDP per capita around 1600, places Denmark at number six. One interpretation could be that favorable circumstances, rather than ingenious institutions or policies, have determined Danish economic development. Nevertheless, this article also deals with time periods in which the Danish economy was either diverging from or converging towards the leading economies.

Table 1:
Average Annual GDP Growth (at factor costs)
Period Total Per capita
1870-1880 1.9% 0.9%
1880-1890 2.5% 1.5%
1890-1900 2.9% 1.8%
1900-1913 3.2% 2.0%
1913-1929 3.0% 1.6%
1929-1938 2.2% 1.4%
1938-1950 2.4% 1.4%
1950-1960 3.4% 2.6%
1960-1973 4.6% 3.8%
1973-1982 1.5% 1.3%
1982-1993 1.6% 1.5%
1993-2004 2.2% 2.0%

Sources: Johansen (1985) and Statistics Denmark ‘Statistikbanken’ online.

Denmark’s geographical location, in close proximity to the most dynamic nations of sixteenth-century Europe, the Netherlands and the United Kingdom, no doubt exerted a positive influence on the Danish economy and Danish institutions. The North German area influenced Denmark both through long-term economic links and through the Lutheran Protestant Reformation, which the Danes embraced in 1536.

Like most other small and medium-sized European countries, the Danish economy traditionally specialized in agriculture. It is, however, rather unusual to find a rich European country that retained such a strong agrarian bias into the late nineteenth and even the mid-twentieth century. Only in the late 1950s did the workforce of manufacturing industry overtake that of agriculture. An economic history of Denmark must therefore take its point of departure in agricultural development for quite a long stretch of time.

Looking at resource endowments, Denmark enjoyed a relatively high agricultural land-to-labor ratio compared to other European countries, with the exception of the UK. This was significant because, in the Danish case, it was accompanied by a comparatively wealthy peasantry.

Denmark had no mineral resources to speak of until the exploitation of oil and gas in the North Sea began in 1972 and 1984, respectively. From 1991 on, Denmark has been a net exporter of energy, although on a very modest scale compared to neighboring Norway and Britain. The small deposits are currently projected to be depleted by the end of the second decade of the twenty-first century.

Figure 1. Percent of GDP in Selected Sectors

Source: Johansen (1985) and Statistics Denmark ’Nationalregnskaber’

Good logistics can be regarded as a resource in pre-industrial economies. The Danish coastline of 7,314 km, and the fact that no point in the country is more than 50 km from the sea, were advantages in an age in which transport by sea was more economical than transport by land.

Decline and Transformation, 1500-1750

The year of the Lutheran Reformation (1536) conventionally marks the end of the Middle Ages in Danish historiography. Only around 1500 did population growth begin to pick up after the devastating effect of the Black Death. Growth thereafter was modest and at times probably stagnant, with large fluctuations in mortality following major wars, particularly during the seventeenth century, and years of bad harvests. About 80-85 percent of the population lived off subsistence agriculture in small rural communities, and this did not change. Exports are estimated to have been about 5 percent of GDP between 1550 and 1650. The main export products were oxen and grain. The period after 1650 was characterized by a long-lasting slump, with a marked decline in exports to the neighboring countries, the Netherlands in particular.

Institutional development after the Black Death showed a return to more archaic forms. Unlike in other parts of northwestern Europe, the peasantry on the Danish Isles became the victim of a process of re-feudalization during the last decades of the fifteenth century. A likely explanation is the low population density, which encouraged large landowners to hold on to their labor by all means. Freehold tenure among peasants effectively disappeared during the seventeenth century. Institutions like bonded labor, which forced peasants to stay on the estate where they were born, and labor services on the demesne as part of the land rent bring to mind similar arrangements in Europe east of the Elbe River. One exception to the East European model was crucial, however: demesne land, that is, the land worked directly under the estate, never made up more than nine percent of total land by the mid-eighteenth century. Although some estate owners saw an interest in encroaching on peasant land, the state protected peasant holdings as production units and, more importantly, as a tax base. Bonded labor was codified in the all-encompassing Danish Law of Christian V in 1683. It was further intensified by being extended, though under another label, to all of Denmark during 1733-88, as a means for the state to tide the large landlords over an agrarian crisis. One explanation for the long life of such an authoritarian institution could be that the tenants were relatively well off, with 25-50 acres of land on average. Another could be that reality differed from the formal rigor of the institutions.

Following the Protestant Reformation in 1536, the Crown took over all church land, thereby making it the owner of 50 percent of all land. The costs of warfare during most of the sixteenth century could still be covered by the revenue of these substantial possessions. Around 1600, income from taxation and customs – mostly the Sound Toll collected from ships passing the narrow strait between Denmark and today’s Sweden – on the one hand, and Crown land revenues on the other, were equally large. About 50 years later, after a major fiscal crisis had led to the sale of about half of all Crown lands, the revenue from royal demesnes had declined in relative terms to about one-third, and after 1660 the transition from domain state to tax state was complete.

The bulk of the former Crown land had been sold to nobles and to a few commoners who owned estates. Consequently, although the Danish constitution of 1665 was the most stringent version of absolutism found anywhere in Europe at the time, the Crown depended heavily on estate owners to perform a number of important local tasks. Thus conscription of troops for warfare, collection of land taxes and maintenance of law and order were left to the landlords, enhancing their power over their tenants.

Reform and International Market Integration, 1750-1870

The driving force of Danish economic growth, which took off during the late eighteenth century, was population growth at home and abroad, which triggered technological and institutional innovation. Whereas the Danish population had grown by about 0.4 percent per annum during the previous hundred years, growth climbed to about 0.6 percent, accelerating after 1775 and especially from the second decade of the nineteenth century (Johansen 2002). As elsewhere in Northern Europe, accelerating growth can be ascribed to a decline in mortality, mainly child mortality. Probably this development was initiated by fewer spells of epidemic disease, due to fewer wars and to greater inherited immunity against contagious diseases. Vaccination against smallpox and the formal education of midwives from the early nineteenth century may also have played a role (Banggaard 2004). Land reforms that entailed some scattering of the farm population may likewise have had a positive influence. Prices rose from the late eighteenth century in response to the increase in population in Northern Europe, but also following a number of international conflicts. This in turn caused a boom in Danish transit shipping and in grain exports.

Population growth rendered the old institutional setup obsolete. Landlords no longer needed to bind labor to their estates, as a new class of landless laborers, or cottagers with little land, emerged. The work of these day-laborers came to replace the labor services of tenant farmers on the demesnes. The old system of labor services presented an obvious incentive problem, all the more so since the services were often carried out by the live-in servants of the tenant farmers. Thus the labor days on the demesnes represented a loss to both landlords and tenants (Henriksen 2003). Part of the land rent was originally paid in grain. Some of it had been converted to money, which meant that real rents declined during the inflation. The solution to these problems was massive land sales, both from the remaining Crown lands and from private landlords to their tenants. As a result, two-thirds of all Danish farmers became owner-occupiers, compared to only ten percent in the mid-eighteenth century. This development was halted during the next two and a half decades but resumed as the business cycle picked up during the 1840s and 1850s. It was to become of vital importance to the modernization of Danish agriculture towards the end of the nineteenth century that 75 percent of all agricultural land was farmed by owners of middle-sized farms of about 50 acres. Population growth may also have put pressure on common lands in the villages. At any rate, enclosure began in the 1760s, accelerated in the 1790s with the support of legislation, and was almost complete by the third decade of the nineteenth century.

The initiative for the sweeping land reforms from the 1780s is thought to have come from below – that is from the landlords and in some instances also from the peasantry. The absolute monarch and his counselors were, however, strongly supportive of these measures. The desire for peasant land as a tax base weighed heavily and the reforms were believed to enhance the efficiency of peasant farming. Besides, the central government was by now more powerful than in the preceding centuries and less dependent on landlords for local administrative tasks.

Production per capita rose modestly before the 1830s and more markedly thereafter, when a better allocation of labor and land followed the reforms and new crops like clover and potatoes were introduced on a larger scale. Most importantly, the Danes no longer lived at the margin of hunger: we no longer find a correlation between the demographic variables, deaths and births, and bad harvest years (Johansen 2002).

A liberalization of import tariffs in 1797 marked the end of a short spell of late mercantilism. Further liberalizations during the nineteenth and the beginning of the twentieth century established the Danish liberal tradition in international trade that was only to be broken by the protectionism of the 1930s.

Following the loss of the protected Norwegian market for grain in 1814, Danish exports began to target the British market. The great rush forward came when the British Corn Laws were repealed in 1846. The export share of the production value in agriculture rose from roughly 10 to around 30 percent between 1800 and 1870.

In 1849 absolute monarchy was peacefully replaced by a free constitution. The long-term benefits of fundamental principles such as the inviolability of private property rights, the freedom of contracting and the freedom of association were probably essential to future growth though hard to quantify.

Modernization and Convergence, 1870-1914

During this period Danish economic growth outperformed that of most other European countries. A convergence in real wages towards the richest countries, Britain and the U.S., as shown by O’Rourke and Williamson (1999), can only in part be explained by open economy forces. Denmark became a net importer of foreign capital from the 1890s, and foreign debt was well above 40 percent of GDP on the eve of World War I. Overseas emigration reduced the potential workforce, but as mortality declined, population growth stayed around one percent per annum. The increase in foreign trade was substantial, as in many other economies during the heyday of the gold standard. The export share of Danish agriculture thus surged to 60 percent.

The background for the latter development has featured prominently in many international comparative analyses. Part of the explanation for the success, as in other Protestant parts of Northern Europe, was a high rate of literacy that allowed a fast spread of new ideas and new technology.

The driving force of growth was that of a small open economy responding effectively to a change in international product prices, in this instance caused by the invasion of cheap grain into Western Europe from North America and Eastern Europe. Like Britain, the Netherlands and Belgium, Denmark did not impose a tariff on grain, in spite of the strong agrarian dominance in society and politics.

Proposals to impose tariffs on grain, and later on cattle and butter, were turned down by Danish farmers. The majority seems to have realized the advantages accruing from the free import of cheap animal feed during the ongoing transition from vegetable to animal production, at a time when the prices of animal products did not decline as much as grain prices. The dominant middle-sized farm was inefficient for wheat but had its comparative advantage in intensive animal farming with the given technology. O’Rourke (1997) found that the grain invasion lowered Danish rents by only 4-5 percent, while real wages rose (as expected) more than in any other agrarian economy and more than in industrialized Britain.

The move from grain exports to exports of animal products, mainly butter and bacon, was to a great extent facilitated by the spread of agricultural cooperatives. This form of organization allowed the middle-sized and small farms that dominated Danish agriculture to benefit from economies of scale in processing and marketing. The newly invented steam-driven continuous cream separator skimmed more cream from a kilo of milk than conventional methods and had the further advantage of allowing milk brought together from a number of suppliers to be skimmed. From the 1880s the majority of these creameries in Denmark were established as cooperatives, and about 20 years later, in 1903, the owners of 81 percent of all milk cows supplied their milk to a cooperative (Henriksen 1999). The Danish dairy industry captured over a third of the rapidly expanding British butter-import market, establishing a reputation for consistent quality that was reflected in high prices. Furthermore, the cooperatives played an active role in persuading dairy farmers to expand production from summer to year-round dairying. The costs of intensive feeding during the wintertime were more than made up for by a winter price premium (Henriksen and O’Rourke 2005). Year-round dairying resulted in a higher rate of utilization of agrarian capital – that is, of farm animals and of the modern cooperative creameries. Not least, this intensive production meant a higher utilization of hitherto underemployed labor. From the late 1890s in particular, labor productivity in agriculture rose at an unanticipated speed, on par with the productivity increase in the urban trades.

Industrialization in Denmark began modestly in the 1870s, with a temporary acceleration in the late 1890s. It may be a prime example of an industrialization process governed by domestic demand for industrial goods. Industrial exports never exceeded 10 percent of value added before 1914, compared to agriculture’s export share of 60 percent. The export drive of agriculture towards the end of the nineteenth century was a major force in developing other sectors of the economy, not least transport, trade and finance.

Weathering War and Depression, 1914-1950

Denmark, as a neutral nation, escaped the devastating effects of World War I and was even allowed to carry on exporting to both sides in the conflict. The ensuing trade surplus resulted in a trebling of the money supply. As the monetary authorities failed to contain the inflationary effects of this development, the value of the Danish currency slumped to about 60 percent of its pre-war value in 1920. The effects of monetary policy failure were aggravated by a decision to return to the gold standard at the 1913 level. When monetary policy was finally tightened in 1924, it resulted in fierce speculation on an appreciation of the Krone. During 1925-26 the currency quickly returned to its pre-war parity. As this was not counterbalanced by an equal decline in prices, the result was a sharp real appreciation and a subsequent deterioration in Denmark’s competitive position (Klovland 1998).
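The ‘real appreciation’ described here can be made precise with the standard textbook definition of the real exchange rate (a conventional formula, not one given in the original article):

\[
q \;=\; e \cdot \frac{P}{P^{*}},
\]

where \(e\) is the nominal exchange rate (foreign currency per Krone), \(P\) is the Danish price level, and \(P^{*}\) is the foreign price level. When the Krone was pushed back to its pre-war nominal parity in 1925-26 without a matching fall in Danish prices, \(q\) rose sharply: Danish goods became dearer relative to foreign goods, which is the deterioration in competitiveness the paragraph refers to.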

Figure 2. Indices of the Krone Real Exchange Rate and Terms of Trade (1980=100; real rates based on the Wholesale Price Index)

Source: Abildgren (2005)

Note: Trade with Germany is included in the calculation of the real effective exchange rate for the whole period, including 1921-23.

When, in September 1931, Britain decided to leave the gold standard again, Denmark, together with Sweden and Norway, followed only a week later. This move was beneficial, as the large real depreciation led to a long-lasting improvement in Denmark’s competitiveness in the 1930s. It was, no doubt, the single most important policy decision of the depression years. Keynesian demand management, even if it had been fully understood, was barred by the small size of the public sector, only about 13 percent of GDP. As it was, fiscal orthodoxy ruled and policy was slightly procyclical, as taxes were raised to cover the deficit created by crisis and unemployment (Topp 1995).

Structural development during the 1920s, surprisingly for a rich nation at this stage, favored agriculture. The total labor force in Danish agriculture grew by 5 percent from 1920 to 1930. The number of employees in agriculture stagnated, whereas the number of self-employed farmers increased. The development in relative incomes cannot account for this trend; part of the explanation must instead be found in a flawed Danish land policy, which actively supported the further parceling out of land into small holdings and restricted consolidation into larger, more viable farms. It took until the early 1960s before this policy began to be unwound.

When the world depression hit Denmark with a minor time lag, agriculture still employed one-third of the total workforce while its contribution to total GDP was a bit less than one-fifth. Perhaps more importantly, agricultural goods still made up 80 percent of total exports.

Denmark’s terms of trade, as a consequence, declined by 24 percent from 1930 to 1932. In 1933 and 1934 bilateral trade agreements were forced upon Denmark by Britain and Germany. In 1932 Denmark had adopted exchange control, a harsh measure even for its time, to stem the net flow of foreign exchange out of the country. By rationing imports, exchange control also offered some protection to domestic industry. At the end of the decade manufacturing’s contribution to GDP had surpassed that of agriculture. In spite of the protectionist policy, unemployment soared to 13-15 percent of the workforce.

The policy mistakes made during World War I and its immediate aftermath served as a lesson for policymakers during World War II. The German occupation force (April 9, 1940 to May 5, 1945) drew the funds for its sustenance and for exports to Germany on the Danish central bank, whereby the money supply more than doubled. In response, the Danish authorities in 1943 launched a policy of absorbing money through open market operations and, for the first time in history, through a surplus on the state budget.

Economic reconstruction after World War II was swift, as again Denmark had been spared the worst consequences of a major war. In 1946 GDP regained its highest pre-war level. In spite of this, Denmark received relatively generous support through the Marshall Plan of 1948-52, when measured in dollars per capita.

From Riches to Crisis, 1950-1973: Liberalizations and International Integration Once Again

The growth performance during 1950-1957 was markedly lower than the Western European average. The main reason was the high share of agricultural goods in Danish exports, 63 percent in 1950. International trade in agricultural products to a large extent remained regulated. Large deteriorations in the terms of trade, caused by the British devaluation of 1949, when Denmark followed suit, the outbreak of the Korean War in 1950, and the Suez crisis of 1956, made matters worse. The ensuing deficits on the balance of payments led the government to adopt contractionary policy measures that restrained growth.

The liberalization of the flow of goods and capital in Western Europe within the framework of the OEEC (the Organization for European Economic Cooperation) during the 1950s probably dealt a blow to some Danish manufacturing firms, especially in the textile industry, that had been sheltered by exchange control and wartime conditions. Nevertheless, the export share of industrial production doubled from 10 percent to 20 percent before 1957, at the same time as employment in industry surpassed agricultural employment.

On the question of European economic integration Denmark linked up with its largest trading partner, Britain. After the establishment of the European Common Market in 1958, and when the attempts to create a large European free trade area failed, Denmark entered the European Free Trade Association (EFTA), created under British leadership in 1960. When Britain was finally able to join the European Economic Community (EEC) in 1973, Denmark followed, after a referendum on the issue. Long before admission to the EEC, the advantages to Danish agriculture from the Common Agricultural Policy (CAP) had been emphasized. The higher prices within the EEC were capitalized into higher land prices at the same time that investments were increased based on the expected gains from membership. As a result, the most indebted farmers, who had borrowed at fixed interest rates, were hit hard by two developments from the early 1980s. The EEC started to reduce the producers’ benefits of the CAP because of overproduction, and, after 1982, the Danish economy adjusted to a lower level of inflation and, therefore, of nominal interest rates. According to Andersen (2001), Danish farmers were left with the highest interest burden of all European Union (EU) farmers in the 1990s.

Denmark’s relations with the EU, while enthusiastic at the beginning, have since been characterized by a certain amount of reserve. A national referendum in 1992 turned down the treaty on the European Union, the Maastricht Treaty. The Danes then opted out of four areas: common citizenship, a common currency, common foreign and defense policy, and a common policy on police and legal matters. Once more, in 2000, adoption of the common currency, the Euro, was turned down by the Danish electorate. In the debate leading up to the referendum, the possible economic advantages of the Euro in the form of lower transaction costs were considered modest compared to the existing regime of fixed exchange rates vis-à-vis the Euro. All the major political parties, nevertheless, are pro-European, with only the extreme Right and the extreme Left being against. There seems to be a discrepancy between the general public and the politicians on this particular issue.

As far as domestic economic policy is concerned, the heritage from the 1940s was a new commitment to high employment, modified by a balance of payments constraint. Danish policy differed from that of some other parts of Europe in that the remains of the planned economy from the war and reconstruction period, in the form of rationing and price control, were dismantled around 1950, and in that no nationalizations took place.

Instead of direct regulation, economic policy relied on demand management with fiscal policy as its main instrument. Monetary policy remained a bone of contention between politicians and economists. Coordination of policies was the buzzword but within that framework monetary policy was allotted a passive role. The major political parties for a long time were wary of letting the market rate of interest clear the loan market. Instead, some quantitative measures were carried out with the purpose of dampening the demand for loans.

From Agricultural Society to Service Society: The Growth of the Welfare State

Structural problems in foreign trade extended into the high-growth period of 1958-73, as Danish agricultural exports met with constraints both from the then EEC member countries and from most EFTA countries as well. During the same decade, the 1960s, as the importance of agriculture was declining, the share of employment in the public sector grew rapidly, a growth that continued until 1983. Building and construction also took a growing share of the workforce until 1970. These developments left manufacturing industry in a secondary position. Consequently, as pointed out by Pedersen (1995), the sheltered sectors of the economy crowded out the sectors exposed to international competition – mostly industry and agriculture – by putting pressure on labor and other costs during the years of strong expansion.

Perhaps the most conspicuous feature of the Danish economy during the Golden Age was the steep increase in welfare-related costs from the mid 1960s and not least the corresponding increases in the number of public employees. Although the seeds of the modern Scandinavian welfare state were sown at a much earlier date, the 1960s was the time when public expenditure as a share of GDP exceeded that of most other countries.

As in other modern welfare states, important elements in the growth of the public sector during the 1960s were the expansion of public health care and education, both free for all citizens. The background for much of the increase in the number of public employees from the late 1960s was the rise in labor force participation by married women between the late 1960s and about 1990; the expansion of the public sector and this rise in participation reinforced each other. In response, public day care facilities for young children and old people were expanded. Whereas in 1965 only 7 percent of 0-6 year olds were in a day nursery or kindergarten, this share rose to 77 percent in 2000. This again spawned more employment opportunities for women in the public sector. Today labor force participation for women, around 75 percent of 16-66 year olds, is among the highest in the world.

Originally, social welfare programs targeted low income earners, who were encouraged to take out insurance against sickness (1892), unemployment (1907) and disability (1922). The state subsidized these schemes and initiated a program for the poor among old people (1891). The high unemployment of the 1930s inspired some temporary relief and some administrative reform, but little fundamental change.

Welfare policy in the first four decades following World War II is commonly believed to have been strongly influenced by the Social Democrat party, which held around 30 percent of the votes in general elections and was the party in power for long periods of time. One of the distinctive features of the Danish welfare state has been its focus on the needs of the individual person rather than on the family context. Another important characteristic is the universal nature of a number of benefits, starting with a basic old age pension for all in 1956. The compensation rates in a number of schemes are high by international standards, particularly for low income earners. Public transfers gained a larger share of total public outlays both because standards were raised – that is, benefits became higher – and because the number of recipients increased dramatically following the high unemployment regime from the mid 1970s to the mid 1990s. To pay for the high transfers and the large public sector – around 30 percent of the work force – the tax burden is likewise high by international standards. The share of public sector and social expenditure has risen to above 50 percent of GDP, second only to the share in Sweden.

Figure 3. Unemployment, Denmark (percent of total labor force)

Source: Statistics Denmark ‘50 års-oversigten’ and ADAM’s databank

The Danish labor market model has recently attracted favorable international attention (OECD 2005). It has been declared successful in fighting unemployment – especially compared to the policies of countries like Germany and France. The so-called Flexicurity model rests on three pillars: low employment protection, relatively high compensation rates for the unemployed, and a requirement of active participation by the unemployed. Low employment protection has a long tradition in Denmark, and there was no change in this factor between the twenty years of high unemployment – 8-12 percent of the labor force – from the mid 1970s to the mid 1990s, and the past ten years, in which unemployment declined to a mere 4.5 percent in 2006. The rules governing compensation to the unemployed were tightened from 1994, limiting the number of years the unemployed could receive benefits from 7 to 4. Most noticeably, labor market policy in 1994 turned from ‘passive’ measures – besides unemployment benefits, an early retirement scheme and a temporary paid leave scheme – toward ‘active’ measures devoted to getting people back to work by providing training and jobs. It is commonly supposed that the strengthening of economic incentives helped to lower unemployment. However, as Andersen and Svarer (2006) point out, while unemployment has declined substantially, a large and growing share of Danes of employable age receives transfers other than unemployment benefits – that is, benefits related to sickness or social problems of various kinds, early retirement benefits, etc. This makes it hazardous to compare the Danish labor market model with that of many other countries.

Exchange Rates and Macroeconomic Policy

Denmark has traditionally adhered to a fixed exchange rate regime, in the belief that for a small and open economy, a floating exchange rate would be too volatile and would harm foreign trade. After abandoning the gold standard in 1931, the Danish currency (the Krone) was for a while pegged to the British pound, before joining the IMF system of fixed but adjustable exchange rates, the so-called Bretton Woods system, after World War II. The close link with the British economy still manifested itself when the Danish currency was devalued along with the pound in 1949 and, by about half as much, in 1967. The devaluation also reflected the fact that after 1960, Denmark’s international competitiveness had gradually been eroded by rising real wages, corresponding to a 30 percent real appreciation of the currency (Pedersen 1996).

When the Bretton Woods system broke down in the early 1970s, Denmark joined the European exchange rate cooperation, the “Snake” arrangement, set up in 1972, an arrangement that was continued in the form of the Exchange Rate Mechanism within the European Monetary System from 1979. The Deutschmark was effectively the nominal anchor in European currency cooperation until the launch of the Euro in 1999, a fact that put Danish competitiveness under severe pressure because of markedly higher inflation in Denmark than in Germany. In the end the Danish government gave way under the pressure and undertook four discrete devaluations from 1979 to 1982. Since compensatory increases in wages were held back, the balance of trade improved perceptibly.

This improvement could not, however, make up for the soaring costs of old loans at a time when international real rates of interest were high. The Danish devaluation strategy exacerbated this problem: the anticipation of further devaluations was mirrored in a steep increase in the long-term rate of interest. It peaked at 22 percent in nominal terms in 1982, with an interest spread to Germany of 10 percent. Combined with the effects of the second oil crisis on the Danish terms of trade, this drove unemployment up to 10 percent of the labor force. Given the relatively high compensation ratios for the unemployed, the public deficit increased rapidly and public debt grew to about 70 percent of GDP.

Figure 4. Current Account and Foreign Debt (Denmark)

Source: Statistics Denmark Statistical Yearbooks and ADAM’s Databank

In September 1982 the Social Democrat minority government resigned without a general election and was replaced by a Conservative-Liberal minority government. The new government launched a program to improve the competitiveness of the private sector and to rebalance public finances. An important element was a disinflationary economic policy based on fixed exchange rates, pegging the Krone first to the participants of the EMS and, from 1999, to the Euro. Furthermore, automatic wage indexation, which had operated with short interruptions since 1920 (with a short lag and high coverage), was abolished. Fiscal policy was tightened, bringing an end to the real increases in public expenditure that had lasted since the 1960s.

The stabilization policy was successful in bringing down inflation and long-term interest rates. Pedersen (1995) finds that this process was nevertheless slower than might have been expected: in view of former Danish exchange rate policy, it took some time for the market to believe in the commitment to fixed exchange rates. From the late 1990s, however, the interest spread to Germany/Euroland has been negligible.

The initial success of the stabilization policy brought a boom to the Danish economy that, once again, caused overheating in the form of high wage increases (in 1987) and a deterioration of the current account. The solution was a number of reforms in 1986-87 aimed at encouraging private savings, which had by then fallen to a historic low. Most notable was the reform that reduced the tax deductibility of private interest payments on debt. These measures resulted in a hard landing for the economy, caused by the collapse of the housing market.

The period of low growth was further prolonged by the international recession in 1992. In 1993 yet another shift of regime occurred in Danish economic policy. A new Social Democrat government decided to ‘kick start’ the economy by means of a moderate fiscal expansion whereas, in 1994, the same government tightened labor market policies substantially, as we have seen. Mainly as a consequence of these measures the Danish economy from 1994 entered a period of moderate growth with unemployment steadily falling to the level of the 1970s. A new feature that still puzzles Danish economists is that the decline in unemployment over these years has not yet resulted in any increase in wage inflation.

Denmark at the beginning of the twenty-first century in many ways fits the description of a Small Successful European Economy according to Mokyr (2006). Unlike in most of the other small economies, however, Danish exports are broad based and have no “niche” in the world market. As in some other small European countries – Ireland, Finland and Sweden – short-term economic fluctuations have not followed the European business cycle very closely for the past thirty years (Andersen 2001). Domestic demand and domestic economic policy have, after all, played a crucial role even in a very small and very open economy.

References

Abildgren, Kim. “Real Effective Exchange Rates and Purchasing-Power-Parity Convergence: Empirical Evidence for Denmark, 1875-2002.” Scandinavian Economic History Review 53, no. 3 (2005): 58-70.

Andersen, Torben M. et al. The Danish Economy: An international Perspective. Copenhagen: DJØF Publishing, 2001.

Andersen, Torben M. and Michael Svarer. “Flexicurity: den danska arbetsmarknadsmodellen.” Ekonomisk debatt 34, no. 1 (2006): 17-29.

Banggaard, Grethe. Befolkningsfremmende foranstaltninger og faldende børnedødelighed. Danmark, ca. 1750-1850. Odense: Syddansk Universitetsforlag, 2004.

Hansen, Sv. Aage. Økonomisk vækst i Danmark: Volume I: 1720-1914 and Volume II: 1914-1983. København: Akademisk Forlag, 1984.

Henriksen, Ingrid. “Avoiding Lock-in: Cooperative Creameries in Denmark, 1882-1903.” European Review of Economic History 3, no. 1 (1999): 57-78.

Henriksen, Ingrid. “Freehold Tenure in Late Eighteenth-Century Denmark.” Advances in Agricultural Economic History 2 (2003): 21-40.

Henriksen, Ingrid and Kevin H. O’Rourke. “Incentives, Technology and the Shift to Year-round Dairying in Late Nineteenth-century Denmark.” Economic History Review 58, no. 3 (2005): 520-54.

Johansen, Hans Chr. Danish Population History, 1600-1939. Odense: University Press of Southern Denmark, 2002.

Johansen, Hans Chr. Dansk historisk statistik, 1814-1980. København: Gyldendal, 1985.

Klovland, Jan T. “Monetary Policy and Business Cycles in the Interwar Years: The Scandinavian Experience.” European Review of Economic History 2, no. 3 (1998): 309-44.

Maddison, Angus. The World Economy: A Millennial Perspective. Paris: OECD, 2001.

Mokyr, Joel. “Successful Small Open Economies and the Importance of Good Institutions.” In The Road to Prosperity. An Economic History of Finland, edited by Jari Ojala, Jari Eloranta and Jukka Jalava, 8-14. Helsinki: SKS, 2006.

OECD. Employment Outlook. Paris: OECD, 2005.

O’Rourke, Kevin H. “The European Grain Invasion, 1870-1913.” Journal of Economic History 57, no. 4 (1997): 775-99.

O’Rourke, Kevin H. and Jeffrey G. Williamson. Globalization and History: The Evolution of a Nineteenth-century Atlantic Economy. Cambridge, MA: MIT Press, 1999.

Pedersen, Peder J. “Postwar Growth of the Danish Economy.” In Economic Growth in Europe since 1945, edited by Nicholas Crafts and Gianni Toniolo. Cambridge: Cambridge University Press, 1995.

Topp, Niels-Henrik. “Influence of the Public Sector on Activity in Denmark, 1929-39.” Scandinavian Economic History Review 43, no. 3 (1995): 339-56.


Footnotes

1 Denmark also includes the Faeroe Islands, with home rule since 1948, and Greenland, with home rule since 1979, both in the North Atlantic. These territories are left out of this account.

Citation: Henriksen, Ingrid. “An Economic History of Denmark”. EH.Net Encyclopedia, edited by Robert Whaples. October 6, 2006. URL http://eh.net/encyclopedia/an-economic-history-of-denmark/

Credit in the Colonial American Economy

David T Flynn, University of North Dakota

Overview of Credit versus Barter and Cash

Credit was vital to the economy of colonial America, and much of the individual prosperity and success in the colonies was due to it. Networks of credit stretched across the Atlantic from Britain to the major port cities and into the interior of the country, allowing exchange to occur (Bridenbaugh, 1990, 154). Colonists made purchases by credit, cash and barter. Barter and cash were spot exchanges: goods and services were given in exchange for immediate payment. Credit, however, delayed payment until a later date. Understanding the role of credit in the eighteenth century requires a brief discussion of all the payment options, as well as of the nature of the repayment of credit.

Barter

Barter is an exchange of goods and services for other goods and services, and it can be a very difficult method of exchange because of the double coincidence of wants. For exchange to occur in a barter situation, each party must have the good desired by its trading partner. Suppose John Hancock has paper supplies and wants corn while Paul Revere has silver spoons and wants paper products. Even though Revere wants the goods available from Hancock, no exchange occurs because Hancock does not want the good Revere has to offer. The double coincidence of wants can make barter very costly because of the time spent searching for a trading partner. This time could otherwise be used for consumption, production, leisure, or any number of other activities. The principal advantage of any form of money over barter is obvious: money satisfies the double coincidence of wants, that is, money functions as a medium of exchange.

Money’s advantages

Money also has other functions that make it a superior method of exchange to barter, including acting as the unit of account (the unit in which prices are quoted) in the economy (e.g., the dollar in the United States and the pound in England). A barter economy requires a large number of prices because every good must have a price in terms of every other good available in the economy. An economy with n different goods would have n(n-1)/2 prices in total – not an enormous burden for small values of n, but as n grows the number of prices quickly becomes unmanageable. A unit of account reduces the number of prices from the barter situation to n, the number of goods. The colonists had a unit of account, the colonial pound (£), which removed this burden of barter.
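A worked illustration of the formula may help (the figure of 50 goods is chosen arbitrarily for exposition, not taken from the text):

\[
n = 50: \qquad \frac{n(n-1)}{2} = \frac{50 \times 49}{2} = 1{,}225 \text{ barter prices, versus only } n = 50 \text{ money prices.}
\]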

Several forms of money circulated in the colonies over the course of the seventeenth and eighteenth centuries, such as specie, commodity money and paper currency. Specie is gold or silver minted into coins and is a special form of commodity money, a good that has an exchange value separate from the market value of the good. Tobacco, and later tobacco warehouse receipts, acted as a form of money in many of the colonies. Despite multiple money options, some colonists complained of an inability to keep money in circulation, or at least in the hands of those wanting to use it for exchange (Baxter, 1945, 11-17; Bridenbaugh, 153).1

Credit’s advantages

When you acquire goods with credit you delay payment to a later time, be it one day or one year. A basic credit transaction today is essentially the same as in the eighteenth century; only the form is different.2 Extending credit presents risks, most notably default, the failure of the borrower to repay the amount borrowed. Sellers also needed to worry about the total volume of credit they extended because it threatened their solvency in the case of default. Consumers benefited from credit through the ability to consume beyond current financial resources, as well as security from theft and other advantages. Sellers gained from faster sales of goods and from interest charges, often hidden in a higher price for the goods.3

Uncertainty about the scope of credit

The frequency of credit relative to barter and cash is not well quantified because surviving account books and transaction records generally report only cash or goods payments made after the merchant allowed credit, not spot cash or barter transactions (Baxter, 19n). Martin (1939, 150) concurs: “The entries represent transactions with those customers who did not pay at once on purchasing goods for [the seller] either made no record of immediate cash purchases, or else there were almost no such transactions.” Flynn’s (2001) study of merchant account books from Connecticut and Massachusetts likewise found that most recorded purchases were credit purchases (see Table 1 below).4 Scholars are forced to make general statements about credit as a standard tool in transactions in port cities and rural villages without reference to specific numbers (Perkins, 1980, 123-124).

Table 1

Percentage of Purchases by Type

               Purchases by Credit   Purchases by Cash   Purchases by Barter
Connecticut                   98.6                 1.1                   0.3
Massachusetts                 98.5                 1.0                   0.4
Combined                      98.6                 1.0                   0.4

Source: Adapted from Table 3.2 in Flynn (2001), p. 54.

Indications of the importance of credit

In some regions the institution of credit was so accepted that many employers, including merchants, paid their employees by providing them credit at a store on the business’s account (Martin, 94). Probate inventories evidence the frequency of credit through the large amounts of accounts receivable recorded for traders and merchants in Connecticut, sometimes over £1,000 (Main, 1985, 302-303). Accounts receivable are an asset of the business representing amounts owed to it by other parties. Almost 30 percent of the estates of Connecticut “traders” contained £100 or more of receivables (Main, 316). More than this, accounts receivable averaged one-eighth of personal wealth throughout most of the colonial period, and more than one-fifth at the end (Main, 36). While there is no evidence that enables us to determine the relative frequencies of the payment types, the available information supports the idea that the different forms of payment co-existed.

The Different Types of Credit

There are three types of credit to discuss: international credit, book credit, and promissory notes; each facilitated exchange and payments. Colonial importers and wholesalers relied on credit from British suppliers, rural merchants received credit from importers and wholesalers in the port cities and, finally, consumers received credit from retailers. A discussion logically starts with international credit from British suppliers to colonial merchants because it allowed colonial merchants to extend credit to their customers (McCusker and Menard, 1985, 80n; Martin, 1939, 19; Perkins, 1980, 24).

Overseas credit

Research on colonial growth attaches importance to several items, including foreign funds, capital improvements and productivity gains. The majority of foreign funds transferred were in the form of mercantile credit (Egnal, 1998, 12-20). British merchants shipped goods to colonial merchants on credit for between six months and one year before demanding payment or charging interest (Egnal, 55; Perkins, 1994, 65; Shepherd and Walton, 1972, 131-132; Thomson, 1955, 15). Other examples show a minimum of one year’s credit given before suppliers assessed five percent interest charges (Martin, 122-123). Terms such as interest and duration determined how long colonial merchants could extend credit to their own customers and at what markup. Some merchants sold goods on commission, the goods remaining the property of the British merchant until sold; after the sale the colonial merchant remitted the funds, less his fee, to the British merchant.

Relationships between colonial and British merchants exhibited regional differences. Virginia merchants’ system of exchange, known as the consignment system, depended on the credit arrangements between planters and “factors,” middlemen who accepted colonial goods and acquired British or other products desired by colonists (Thomson, 28). A relationship with a British merchant was important for success in business because it provided tobacco growers and factors access to supplies of credit sufficient to maintain business (Thomson, 211). Independent Virginia merchants, those without a British connection, ordered their supplies of goods on credit and paid with locally produced goods (Thomson, 15). Virginia and other Southern colonies could rely on credit because they produced staple crops desired by British merchants. New England merchants such as Thomas Hancock, uncle of the famous patriot John Hancock, could not rely on this to the same extent; they sometimes engaged in additional exchanges with other colonies and countries because they lacked goods desired by British merchants (Baxter, 46-47). Without the willingness of British merchant houses to wait for payment it would have been difficult for many colonial merchants to extend credit to their customers.

Domestic credit: book credit and promissory notes

Domestic credit took two primary forms, book credit and promissory notes. Merchants recorded book credit in the account books of the business: entries were debits for an individual’s account and were set against payments, credits in the merchant’s ledger. Promissory notes detailed a debt, typically including the date of issue, the date of redemption, the amount owed, possibly the form of repayment, and an interest rate. Book credit and promissory notes were both substitutes and complements. Both represented a delay of payment and could be used to acquire goods, but book accounts were also a large source of personal notes: merchants who felt payment was too slow in coming or the risk of default too high could insist the buyer provide a note. The note was a more secure form of credit because it could be exchanged and, despite the likely loss on the note’s face value if the debtor was in financial trouble, would not remain a continuing worry for the merchant (Martin, 158-159).5
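Since the paragraph above enumerates the fields of a promissory note, a small record type makes the structure concrete. This is my sketch only; the field names and the sample values are invented for illustration, not drawn from any colonial source.

```python
# Sketch of a promissory note as a record type; fields follow the list above.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class PromissoryNote:
    issued: date                      # date of issue
    redeemable: date                  # date of redemption
    amount: float                     # amount owed, in colonial pounds
    repay_in: Optional[str] = None    # form of repayment, if specified
    interest: Optional[float] = None  # annual rate; often just "lawful interest"

# Hypothetical example: a one-year note at the Massachusetts lawful rate.
note = PromissoryNote(date(1749, 5, 25), date(1750, 5, 25), 61.75,
                      repay_in="country produce", interest=0.06)
```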

Figure 1

Accounts of Samuell Maxey, Customer, and Jonathan Parker, Massachusetts Merchant

Date         Transaction                                   Debt (£) | Date         Transaction                                   Credit (£)
5/28/1748    To Maxey earthenware by Brock                    62.00 | 5/30/1748    By cash & Leather                                  45.00
10/21/1748   To ditto by Cap’n Long                           13.75 | 8/20/1748    By 2 quintals of fish @ 6-0-0 [per quintal]        12.00
5/25/1749    To ditto                                         61.75 | 11/15/1748   By cash received of Mr. Suttin                      5.00
6/26/1749    To ditto                                         27.35 | 5/26/1749    By sundrys                                         74.75
                                                                    | 10/1749      By cash of Mr. Kettel                               9.75
                                                                    | 12/1749      By ditto                                           18.35

Source: John Parker Account Book. Baker Library, Harvard Business School, Mss: 605 1747-1764 P241, p.7.

The settlement of debt obligations incorporated many forms of payment. Figure 1 details the activity between Samuell Maxey and Jonathan Parker, a Massachusetts merchant. Included are several purchases of earthenware by Maxey and several payments, including some in cash and goods as well as payments from third parties. Baxter (1945, 21) describes similar experiences:

…the accounts over and over again tell of the creditor’s weary efforts to get his dues by accepting a tardy and halting series of odds and ends; and (as prices were often soaring, especially in 1740-64) the longer a debtor could put off payment, the fewer goods might he need to hand over to square a liability for so much money.

Repayment means and examples

The “odds and ends” included goods and commodity money as well as other cash, bills of exchange, and third-party settlements (Baxter, 17-32). Merchants accepted goods such as pork, beef, fish and grains for their store goods (Martin, 94). Flynn (2001) shows several items offered as payment, including goods, cash, notes and others, as shown in Table 2.

Table 2

Percentage of Payments by Category

               Cash   Goods   Note   Reckoning   Third-party note   Bond   Labor
Connecticut    27.5    45.9    3.3         7.5                6.9    0.0     8.9
Massachusetts  24.2    47.6    2.8         7.5               13.7    0.2     2.3
Combined       25.6    46.9    3.0         7.5               10.9    0.1     5.0

Source: Adapted from Table 3.4 in Flynn (2001), p. 54.

Cash, goods and notes require no further explanation, but Table 2 shows other items used in payment as well. Colonists used labor to repay their tabs, working in their creditor’s field or lending the labor services of a child or a yoke of oxen. Some accounts also list “reckoning,” which typically occurred between two merchants or traders who made purchases on credit from each other. Before settling, it was convenient for the two merchants to determine the net position of their accounts with each other. After making the determination, the merchant in debt might make a payment that brought the balance to zero; at other times the merchants proceeded without a payment but with a better sense of the account position. Third parties also made payments employing goods, money and credit. When the merchant did not want the particular goods offered in payment he could hope to pass them on, ideally to his own creditors; such exchange satisfied both the merchant’s debts and the consumer’s (Baxter, 24-25). Figure 1 above and Figure 2 below illustrate this.

Figure 2

Accounts of Mr. Clark, Customer, and Jonathan Parker, Massachusetts Merchant

Date         Transaction            Debt (£) | Date         Transaction                         Credit (£)
9/27/1749    To Clark earthenware      10.85 | 11/30/1749   By cash                                  3.00
                                             | 4/14/1750    By ditto                                 1.00
                                             | ?/1762       By rum in full of Mr. Blanchard          6.35

Source: John Parker Account Book. Baker Library, Harvard Business School, Mss: 605 1747-1764 P241, p.2.

The accounts of Parker and his customer, Mr. Clark, show another purchase of earthenware and three payments. The purchase was clearly on credit, as Parker recorded the first payment over two months after the purchase. Clark provided two cash payments, and then a third person, Mr. Blanchard, settled Clark’s account in full with rum. What do these third-party payments represent? To answer this we need to step back from the specifics of the account and generalize.

Figures 1 and 2 show credits from third parties in cash and goods. If we think in terms of three-way trade the answer becomes obvious. In Figure 1, where a Mr. Suttin pays £5.00 cash to Parker on the account of Samuell Maxey, Suttin is settling a debt he owes Maxey (in part or in full we do not know). To settle the debt he owes Parker, Maxey directs those who owe him money to pay Parker, thus reducing his own debt. Figure 2 displays the same type of activity, except that Blanchard pays with rum. Though not depicted here, private debts between customers could also be settled on the merchant’s books: rather than offering payment in cash or goods, private parties could swap debt on the merchant’s account book, ordering a transfer from one account to another. That the merchant’s approval was required for such an exchange implies something about the added risk: the new debtor presumably did not pose a greater default risk in the creditor’s opinion, otherwise we would suspect the merchant to have refused the exchange.6
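The bookkeeping logic of these settlements and transfers can be sketched in a few lines. This is my illustration, not Flynn's; the names and amounts loosely echo Figures 1 and 2, and the Clark balance ignores his two cash payments.

```python
# A minimal sketch of a merchant's account book acting as a clearinghouse
# for debts. Balances are what each customer owes the merchant.

ledger = {"Maxey": 62.00, "Clark": 10.85}

def pay(ledger, customer, amount, payer=None):
    """Record a payment against a customer's balance. A third party (payer)
    may settle on the customer's behalf, as Suttin and Blanchard do in
    Figures 1 and 2; the customer's debt falls either way."""
    ledger[customer] -= amount

def transfer(ledger, debtor, creditor, amount):
    """Swap private debt on the merchant's books: the private debtor comes
    to owe the merchant more, the private creditor less, settling their
    obligation without cash changing hands."""
    ledger[debtor] += amount
    ledger[creditor] -= amount

pay(ledger, "Maxey", 45.00)                    # Maxey pays cash & leather himself
pay(ledger, "Maxey", 5.00, payer="Suttin")     # third party pays on Maxey's account
pay(ledger, "Clark", 6.35, payer="Blanchard")  # Blanchard's rum credited to Clark
print(ledger)  # {'Maxey': 12.0, 'Clark': 4.5}
```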

Complexity of the credit system

The payment system in the colonies was complex and dynamic, with creditors allowing debtors to settle accounts in several fashions. Goods and money satisfied outstanding debts, and other credit obligations deferred or transferred debts. Debtors and creditors employed the numerous forms of payment in regular and third-party transactions, making merchants’ account books a clearinghouse for debts. Although the lack of technology leaves casual observers thinking payments at this time were primitive, such was clearly not the case. With only pen and paper, eighteenth-century merchants developed a sophisticated payment system, of which book credit and personal notes were an important part.

The Duration of Credit

The length of time credit remains outstanding, its duration, is an important characteristic: it represents the amount of time a creditor awaited payment. Anecdotal and statistical evidence provide some insight into the duration of book credit and promissory notes.

The calculation of the duration of book credit, or any similar type of instrument, is relatively straightforward when the merchant recorded dates in his account book conscientiously. Consider the following example.

Figure 3

Accounts of David Frothingham, Customer, and Jonathan Parker, Massachusetts Merchant

Date         Transaction                  Debt (£) | Date         Transaction       Credit (£)
10/1/1748    To Frothingham earthenware       7.75 | 10/1/1748    By cash                3.00
                                                   | 4/1749       By Indian corn         4.75

Source: John Parker Account Book. Baker Library, Harvard Business School, Mss: 605 1747-1764 P241, p.2.

The exchanges between Frothingham and Jonathan Parker show one purchase and two payments. Frothingham provided a partial payment for the earthenware, in cash, at the time of purchase. However, £4.75 of debt remained outstanding, and it was not repaid until April of 1749. It is possible to calculate a range of values for the final settlement of this account, using the first day of April for a lower-bound estimate and the last day for an upper-bound estimate. Counting the number of days shows that it took at least 182 days and at most 211 days to settle the debt; alternatively, the debt lasted between 6 and 7 months.
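The bounds can be checked mechanically with date arithmetic. A small sketch (mine) using the Figure 3 dates:

```python
from datetime import date

purchase = date(1748, 10, 1)   # earthenware purchase, partial cash payment same day
earliest = date(1749, 4, 1)    # "April 1749" read as early as possible
latest = date(1749, 4, 30)     # ...and as late as possible

print((earliest - purchase).days, (latest - purchase).days)  # 182 211
# Dividing by 30 (note 9) gives roughly 6.1 to 7.0 months. The difference
# in days is unaffected by old-style/new-style calendar questions as long
# as both dates come from the same account book.
```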

Figure 4

Accounts of Joseph Adams, Customer, and Jonathan Parker, Massachusetts Merchant

Date         Transaction           Debt (£) | Date         Transaction   Credit (£)
9/7/1747     To Adams earthenware     30.65 | 11/9/1747    By cash            30.65
7/22/1748    To ditto                 22.40 | 7/22/1748    By ditto           12.40
                                            | No date7     By ditto           10.00

Source: John Parker Account Book. Baker Library, Harvard Business School, Mss: 605 1747-1764 P241, p.4.

Not all merchants were meticulous record keepers, and sometimes they failed to record a particular date with the rest of an account book entry.8 Figure 4 illustrates this problem well and also provides an example of multiple purchases along with multiple payments. The first purchase of earthenware was repaid with one “cash” payment sixty-three days (2.1 months) later.9 Computation of the term of the second loan is more complicated. The last two payments satisfy the purchase amount, so Adams repaid the loan completely. Unfortunately, Parker left out the date of the second payment. The second payment occurred on or after July 22, 1748, so this date is the lower end of the interval. The minimum time between purchase and second payment is thus zero days, but computation of a maximum time, or upper bound, is not possible due to the lack of information.10

With a sufficient number of debts some generalization is possible. If we interpret the data as the length of a debt’s life, we can use demographic methods, in particular the life table.11 For a sample of Connecticut and Massachusetts account books the average durations are as follows.12

Table 3

Expected Duration for Connecticut Debts, Lower and Upper Bound

Size of debt (£)   e0, lower bound (months)   Median, lower bound (interval)   e0, upper bound (months)   Median, upper bound (interval)
All values                          14.79                 6-12                                  15.87                 6-12
0.00-0.25                           15.22                 6-12                                  15.99                 6-12
0.25-0.50                           14.28                 6-12                                  15.51                 6-12
0.50-0.75                           15.24                 6-12                                  18.01                 6-12
0.75-1.00                           14.25                 6-12                                  15.94                 6-12
1.00-10.00                          13.95                 6-12                                  15.07                 6-12
10.00+                               7.95                 0-6                                   10.73                 6-12

Table 4

Expected Duration for Massachusetts Debts, Lower and Upper Bound

Size of debt (£)   e0, lower bound (months)   Median, lower bound (interval)   e0, upper bound (months)   Median, upper bound (interval)
All values                          13.22                 6-12                                  14.87                 6-12
0.00-0.25                           14.74                 6-12                                  17.55                 12-18
0.25-0.50                           12.08                 6-12                                  12.80                 6-12
0.50-0.75                           11.73                 6-12                                  13.08                 6-12
0.75-1.00                           11.01                 6-12                                  12.43                 6-12
1.00-10.00                          13.08                 6-12                                  13.88                 6-12
10.00+                              14.28                 12-18                                 17.02                 12-18

Source: Adapted from Tables 4.1 and 4.2 in Flynn (2001), p. 80.
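For readers unfamiliar with life tables (note 11 points to Newell's introduction), the sketch below shows the basic actuarial logic with invented counts. It is not Flynn's data or procedure, only an illustration of how an e0 figure is produced from debts grouped into intervals.

```python
# Stylized life-table estimate of expected debt duration (e0).
# settled[i] = number of debts repaid during interval i; intervals are
# 6 months wide. Counts are invented for illustration.

WIDTH = 6.0                  # interval width in months
settled = [30, 45, 15, 10]   # repaid in 0-6, 6-12, 12-18, 18-24 months
total = sum(settled)

e0 = 0.0
outstanding = total          # debts still "alive" at each interval's start
for repaid in settled:
    # assume repayments fall, on average, at the interval midpoint
    e0 += WIDTH * (outstanding - repaid / 2)
    outstanding -= repaid
e0 /= total
print(round(e0, 2), "months")   # 9.3 months for these made-up counts
```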

For all debts in the sample from Connecticut, the expected length of time a debt was outstanding from its inception is estimated at between 14.79 and 15.87 months. For Massachusetts the range is somewhat shorter, from 13.22 to 14.87 months. Tables 3 and 4 also break the data into categories based on the value of the credit transaction. An important question is whether this represents long-term or short-term debt. There is no standard yardstick for comparison in this case; the best comparison is likely the international credit granted to colonial merchants, who needed to repay those amounts and had to sell the goods to make remittances. The estimates of that credit duration, listed earlier, center around one year, which means that colonial merchants in New England needed to repay their British suppliers before they could expect to receive full payment from their customers. From the colonial merchants’ perspective, book credit was certainly long-term.

Other estimates of duration of book credit

Other estimates of book credit’s duration vary. Consumers paying for their credit purchases in kind took as little as a few months or as long as several years (Martin, 153). Some accounting records show book credit remaining unsettled for nearly thirty years (Baxter, 161). Thomas Hancock often noted expected payment dates, such as “to pay in 6 months,” along with a purchase, though frequently this was not enough time for the buyer. Thomas blamed the law, which allowed twelve months for people to make repayments, complaining to his suppliers that he often provided credit to country residents of “one two & more years” (Baxter, 192). Surely such situations were the exception and not the rule, though they serve to remind us that many of these arrangements were open-ended, lacking definite endpoints. Some merchants allowed accounts to run as long as two years before examining the position of the account, allowing one year’s book credit without charge and thereafter assessing interest (Martin, 157).

Duration of promissory notes

The duration of promissory notes is also important. Priest (1999) examines a form of duration for these credit instruments, estimating the time between a debtor’s signing of the note and the creditor’s filing of suit to collect payment. Of course, this measures duration only for notes that went into default and required legal recourse. Typically, a suit originated some 6 to 9 months after default (Priest, 2417-18). Results for the period 1724 to 1750 show that 14.5% of suits occurred within 6 months of the initial contraction date, the execution of the debt. Merchants brought suit in more than 60% of the cases between 6 months and 3 years from execution: 21.4% from 6 to 12 months, 27.4% from 1 to 2 years, and 14.1% from 2 to 3 years. Finally, more than 20% of the cases occurred more than three years after the execution of the debt. The median interval between execution and suit was 17.5 months (Priest, 2436, Table 3).

The duration of promissory notes provides an important complement to estimates of book credit’s term. The median estimate of 17.5 months makes promissory notes, more than likely, a long-term credit instrument when balanced against the one-year credit term given colonial importers. The estimates for book credit range from three months to several years in the literature, and from 13 to 16 months in Flynn’s (2001) study. The duration results show that merchants waited significant amounts of time for payment, raising the issue of the time value of money and interest rates.

The Interest Practices of Merchants

In some cases credit was outstanding for a long period of time, yet the accounts make no mention of any interest charges, as in Figures 1 through 4. Such an omission is difficult to reconcile with the fairly sophisticated business practices of the merchants of the day. Accounting research and manuals from the time clearly demonstrate an understanding of the time value of money, and the business community understood the concept of compound interest. Account books allowed merchants to charge higher and variable prices for goods sold on book credit (Martin, 94). While in some cases interest entered the account book as an explicit charge, in many others it was an implicit charge contained in the good’s price.

Advertisements from the time make it clear that merchants charged less for goods purchased by cash, and accounts paid promptly received a discount on the price:

One general pricing policy seems to have been that goods for cash were sold at a lower price than when they were charged. Cabel[sic] Bull advertised beaver hats at 27/ cash and 30/ country produce in hand. Daniel Butler of Northampton offered dyes, and “a few Cwt. of Redwood and Logwood cheaper than ever for ready money.” Many other advertisements carried allusions to the practice but gave no definite data. A daybook of the Ely store contained this entry for October 21, 1757: “William Jones, Dr to 6 yds Towcloth at 1/6—if paid in a month at 1/4.” (Martin, 1939, 144-145)
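The Ely entry lends itself to a quick computation (mine, not Martin's), assuming the usual reading of the prices: “1/6” is 1 shilling 6 pence, or 18 pence, and “1/4” is 1 shilling 4 pence, or 16 pence.

```python
credit_price = 18   # pence per yard on book credit ("1/6" = 1s 6d)
prompt_price = 16   # pence per yard if paid within a month ("1/4" = 1s 4d)

premium = (credit_price - prompt_price) / prompt_price
print(f"{premium:.1%}")   # 12.5% premium for taking credit
# How this maps to an annual rate depends on how long the credit actually
# ran, which the entry does not say.
```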

Other advertisements also evidence price differences; merchants offered special cash prices for certain grains they desired. Connecticut merchants likely offered good prices for products they thought would sell well as they sought remittances for their British creditors. Hartford merchants charged interest rates ranging from four and one-half to six and one-half percent in the 1750s and 1760s (Martin, 158), though Flynn (2001) arrives at different rates from a different sample of New England account books. Many promissory notes in South Carolina specified interest, though not an exact rate, usually just the term “lawful interest” (Woods, 364).

Estimates of interest rates

Simple regression analysis can help determine whether interest was implicit in the price of goods sold on credit, though numerous technical issues, such as borrower characteristics, market conditions and the quality of the good, put a full discussion beyond the scope of this article.13 In general there seems to be a positive correlation, with annual interest rates falling between 3.75% and 7%, consistent with the results from explicit interest entries in account books. There is some tendency for the price of a good to increase with the time waited for repayment, though many other technical matters need resolution.
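A minimal version of the test described in note 13 (my sketch, on synthetic data; it is not Flynn's specification, which would control for the factors just listed) regresses the unit price on the time a debt was outstanding and backs out the implied annual rate:

```python
# Implicit-interest test: regress the unit price of a good sold on credit
# against how long the debt ran. A positive slope is consistent with
# interest hidden in the price. All data here are synthetic.
import numpy as np

rng = np.random.default_rng(0)
months = rng.uniform(0, 24, 200)     # time to repayment, in months
base_price = 16.0                    # hypothetical cash price, in pence
annual_rate = 0.05                   # 5% built into the synthetic data
price = base_price * (1 + annual_rate * months / 12) + rng.normal(0, 0.3, 200)

# OLS of price on months: price = a + b*months, so implied rate = 12*b/a.
X = np.column_stack([np.ones_like(months), months])
(a, b), *_ = np.linalg.lstsq(X, price, rcond=None)
print(f"implied annual rate: {12 * b / a:.1%}")   # close to 5% by construction
```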

Most annual interest rates in Flynn’s (2001) study, explicit and implicit, fall in the range of 4 to 6.5 percent, similar to those Martin found in her examination of accounts and roughly consistent with the Massachusetts lawful rate of 6 percent at the time, though some entries assess interest as high as 10 percent (Martin, 158; Rothenberg, 1992, 124). Even so, the explicit rates are insufficient on their own to support a conclusion about the interest rate charged on book credit; there are too few entries, and many involve promissory notes or third parties, factors expected to alter the rate. Other factors, such as borrower characteristics, likely changed the assessed rate of interest as well, with more prominent and wealthy individuals charged lower rates, whether because of their status and a perceived lower risk or because of longer merchant-buyer relationships. Most account books do not contain sufficient information to judge the effects of these characteristics.

Merchants gained from credit by charging higher prices: credit required a premium over cash sales, so the merchant collected interest while minimizing the necessary amount of payment media (Martin, 94). Interest was distinct from the normal markups for insurance, freight, wharfage, etc., which were often significant additions to the overall price; interest represented an attempt to account for risk and the time value of money (Baxter, 192; Thomson, 239).14

Conclusions

Credit was significant as a form of payment in colonial America. Direct comparisons of the number of credit purchases versus barter or cash purchases are not possible, but an examination of accounting records demonstrates credit’s widespread use. Credit was present in all forms of trade, including international trade between England and her colonies. The domestic forms of credit were relatively long-term instruments that allowed individuals to consume beyond current means. In addition, book credit allowed colonists to economize on cash and other means of payment through transfers of credit, “reckoning,” and practices such as paying workers with store credit. Merchants also understood the time value of money, entering interest charges explicitly in account books and implicitly in prices. The use of credit, the duration of credit instruments, and the methods of incorporating interest show credit to have been an important method of exchange and the economy of colonial America to have been complex and sophisticated.

References

Baxter, W.T. The House of Hancock: Business in Boston, 1724-1775. Cambridge: Harvard University Press, 1945.

Bridenbaugh, Carl. The Colonial Craftsman. New York: Dover Publications, 1990.

Egnal, Marc. New World Economies: The Growth of the Thirteen Colonies and Early Canada. Oxford: Oxford University Press, 1998.

Flynn, David T. “Credit and the Economy of Colonial New England.” Ph.D. dissertation, Indiana University, 2001.

McCusker, John J., and Russell R. Menard. The Economy of British America, 1607-1789. Chapel Hill: University of North Carolina Press, 1985.

Main, Jackson Turner. Society and Economy in Colonial Connecticut. Princeton: Princeton University Press, 1985.

Martin, Margaret. “Merchants and Trade of the Connecticut River Valley, 1750-1820.” Smith College Studies in History. Northampton, Mass.: Department of History, Smith College, 1939.

Parker, Jonathan. Account Book, 1747-1764. Mss: 605 1747-1815. Baker Library Historical Collections, Harvard Business School, Cambridge, Massachusetts.

Perkins, Edwin J. The Economy of Colonial America. New York: Columbia University Press, 1980.

Perkins, Edwin J. American Public Finance and Financial Services, 1700-1815. Columbus: Ohio State University Press, 1994.

Price, Jacob M. Capital and Credit in British Overseas Trade: The View from the Chesapeake, 1700-1776. Cambridge: Harvard University Press, 1980.

Priest, Claire. “Colonial Courts and Secured Credit: Early American Commercial Litigation and Shays’ Rebellion.” Yale Law Journal 108, no. 8 (June 1999): 2412-2450.

Rothenberg, Winifred. From Market-Places to a Market Economy: The Transformation of Rural Massachusetts, 1750-1850. Chicago: University of Chicago Press, 1992.

Shepherd, James F., and Gary Walton. Shipping, Maritime Trade, and the Economic Development of Colonial North America. Cambridge: Cambridge University Press, 1972.

Thomson, Robert Polk. “The Merchant in Virginia, 1700-1775.” Ph.D. dissertation, University of Wisconsin, 1955.

Further Reading:

For a good introduction to credit’s importance across different professions, merchant practices and the development of business practices over time I suggest:

Bailyn, Bernard. The New England Merchants in the Seventeenth Century. Cambridge: Harvard University Press, 1979.

Schlesinger, Arthur. The Colonial Merchants and the American Revolution: 1763-1776. New York: Facsimile Library Inc., 1939.

For an introduction to issues relating to money supply, the unit of account in the economy, and price and exchange rate data I recommend:

Brock, Leslie V. The Currency of the American Colonies, 1700-1764: A Study in Colonial Finance and Imperial Relations. New York: Arno Press, 1975.

McCusker, John J. Money and Exchange in Europe and America, 1600-1775: A Handbook. Chapel Hill: University of North Carolina Press, 1978.

McCusker, John J. How Much Is That in Real Money? A Historical Commodity Price Index for Use as a Deflator of Money Values in the Economy of the United States, Second Edition. Worcester, MA: American Antiquarian Society, 2001.

1 Some authors note a small number of cash purchases, as well as small numbers of cash payments for debts, as evidence of a lack of money (Bridenbaugh, 153; Baxter, 19n).

2 Presently, credit cards are a common form of payment. While such technology did not exist in the past, the merchant’s account book provided a means of recording credit purchases.

3 Price (1980, pp. 16-17) provides an excellent summary of the advantages and risks of credit to different types of consumers and to merchants in both Britain and the colonies.

4 Please note that this table consists of transactions mostly between colonial retail merchants and colonial consumers in New England. Flynn (2001) uses account books that collectively span from approximately 1704 to 1770.

5 In some cases the extension of book credit came with a requirement to provide a note as well. When the solvency of the debtor came into question, the creditor could sell the note and pass the risk of default on to another.

6 I offer a detailed example of such an exchange going sour for the merchant below.

7 “No date” is Flynn’s entry to show that a date is not recorded in the account book.

8 This seems to occur frequently at the end of a list of entries, particularly when the credit fully satisfies an outstanding purchase, as in Figure 4.

9 To calculate months, divide days by 30. The term “cash” is placed in quotation marks as it is woefully nondescript. Some merchants and researchers using account books group several different items under the heading cash.

10 Students interested in historical research of this type should be prepared to encounter many situations of missing information. There are ways to deal with this censoring problem, but a technical discussion is not appropriate here.

11 Colin Newell’s Methods and Models in Demography (Guilford Press, 1988) is an excellent introduction for these techniques.

12 Note that either merchants recorded amounts in the lawful money standard or Flynn (2001) converted amounts into this standard for these purposes.

13 The premise behind the regression is quite simple: we look for a correlation between the amount of time an amount was outstanding and the per-unit price of the good. If credit purchases contained implicit interest charges there would be a positive relationship. Note that this test implies forward-looking merchants, that is, merchants who factored the perceived or agreed-upon time to repayment into the price of the good.

14 The advance varied by colony, good and time period:

In 1783, a Boston correspondent wrote Wadsworth that dry goods in Boston were selling at a twenty to twenty-five percent ‘advance’ from the ‘real Sterling Cost by Wholesale.’ The ‘advances’ occasionally mentioned in John Ely’s Day Book were far higher, seventy to seventy-five per cent on dry goods. Dry goods sold well at one hundred and fifty per cent ‘advance’ in New York in 1750… (Martin, 136).

In the 1720s a typical advance on piece goods in Boston was eighty per cent, seventy-five with cash (Martin, 136n). It should be noted that others find open account balances were commonly kept interest free (Rothenberg, 1992, 123).


Citation: Flynn, David. “Credit in the Colonial American Economy”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/credit-in-the-colonial-american-economy/