
Economic History of Malaysia

John H. Drabble, University of Sydney, Australia

General Background

The Federation of Malaysia (see map), formed in 1963, originally consisted of Malaya, Singapore, Sarawak and Sabah. Due to internal political tensions Singapore was obliged to leave in 1965. Malaya is now known as Peninsular Malaysia, and the two other territories on the island of Borneo as East Malaysia. Prior to 1963 these territories were under British rule for varying periods from the late eighteenth century. Malaya gained independence in 1957, Sarawak and Sabah (the latter known previously as British North Borneo) in 1963, and Singapore full independence in 1965. These territories lie between 2 and 6 degrees north of the equator. The terrain consists of extensive coastal plains backed by mountainous interiors. The soils are not naturally fertile but the humid tropical climate subject to monsoonal weather patterns creates good conditions for plant growth. Historically much of the region was covered in dense rainforest (jungle), though much of this has been removed for commercial purposes over the last century leading to extensive soil erosion and silting of the rivers which run from the interiors to the coast.


The present government is a parliamentary system at the federal level (located in Kuala Lumpur, Peninsular Malaysia) and at the state level, based on periodic general elections. Each Peninsular state (except Penang and Melaka) has a traditional Malay ruler, the Sultan, one of whom is elected as paramount ruler of Malaysia (Yang di-Pertuan Agong) for a five-year term.

The population at the end of the twentieth century approximated 22 million and is ethnically diverse, consisting of 57 percent Malays and other indigenous peoples (collectively known as bumiputera), 24 percent Chinese, 7 percent Indians and the balance “others” (including a high proportion of non-citizen Asians, e.g., Indonesians, Bangladeshis, Filipinos) (Andaya and Andaya, 2001, 3-4).

Significance as a Case Study in Economic Development

Malaysia is generally regarded as one of the most successful non-western countries to have achieved a relatively smooth transition to modern economic growth over the last century or so. Since the late nineteenth century it has been a major supplier of primary products to the industrialized countries: tin, rubber, palm oil, timber, oil, liquefied natural gas, etc.

However, since about 1970 the leading sector in development has been a range of export-oriented manufacturing industries such as textiles, electrical and electronic goods, and rubber products. Government policy has generally accorded a central role to foreign capital, while at the same time working towards more substantial participation for domestic, especially bumiputera, capital and enterprise. By 1990 the country had largely met the criteria for Newly-Industrialized Country (NIC) status (30 percent of exports consisting of manufactured goods). While the Asian economic crisis of 1997-98 slowed growth temporarily, the current plan, titled Vision 2020, aims to achieve “a fully developed industrialized economy by that date. This will require an annual growth rate in real GDP of 7 percent” (Far Eastern Economic Review, Nov. 6, 2003). Malaysia is perhaps the best example of a country in which the economic roles and interests of various racial groups have been pragmatically managed over the long term without significant loss of growth momentum, despite the ongoing presence of inter-ethnic tensions which have occasionally manifested in violence, notably in 1969 (see below).

The Premodern Economy

Malaysia has a long history of internationally valued exports, being known from the early centuries A.D. as a source of gold, tin and exotics such as birds’ feathers, edible birds’ nests, aromatic woods, tree resins etc. The commercial importance of the area was enhanced by its strategic position athwart the seaborne trade routes from the Indian Ocean to East Asia. Merchants from both these regions, Arabs, Indians and Chinese, regularly visited. Some became domiciled in ports such as Melaka [formerly Malacca], the location of one of the earliest local sultanates (c.1402 A.D.) and a focal point for both local and international trade.

From the early sixteenth century the area was increasingly penetrated by European trading interests, first the Portuguese (from 1511), then the Dutch East India Company [VOC] (1602) in competition with the English East India Company [EIC] (1600) for the trade in pepper and various spices. By the late eighteenth century the VOC was dominant in the Indonesian region while the EIC acquired bases in Malaysia, beginning with Penang (1786), Singapore (1819) and Melaka (1824). These were major staging posts in the growing trade with China and also served as footholds from which to expand British control into the Malay Peninsula (from 1870), and northwest Borneo (Sarawak from 1841 and North Borneo from 1882). Over these centuries there was an increasing inflow of migrants from China attracted by the opportunities in trade and as a wage labor force for the burgeoning production of export commodities such as gold and tin. The indigenous people also engaged in commercial production (rice, tin), but remained basically within a subsistence economy and were reluctant to offer themselves as permanent wage labor. Overall, production in the premodern economy was relatively small in volume and technologically undeveloped. The capitalist sector, already foreign dominated, was still in its infancy (Drabble, 2000).

The Transition to Capitalist Production

The nineteenth century witnessed an enormous expansion in world trade which, between 1815 and 1914, grew on average at 4-5 percent a year compared to 1 percent in the preceding hundred years. The driving force came from the Industrial Revolution in the West, which saw the innovation of large-scale factory production of manufactured goods made possible by technological advances, accompanied by more efficient communications (e.g., railways, cars, trucks, steamships, international canals [Suez 1869, Panama 1914], telegraphs) which speeded up and greatly lowered the cost of long-distance trade. Industrializing countries required ever-larger supplies of raw materials as well as foodstuffs for their growing populations. Regions such as Malaysia with ample supplies of virgin land and relative proximity to trade routes were well placed to respond to this demand. What was lacking was an adequate supply of capital and wage labor. In both respects, the deficiency was supplied largely from foreign sources.
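The difference between these growth rates compounds dramatically over a century. A minimal sketch in Python, using only the rates quoted above (4.5 percent is simply the midpoint of the cited range):

    # Cumulative effect of a century of trade growth at the rates cited above.
    # 4.5 percent is taken as the midpoint of the 4-5 percent range.
    slow = 1.01 ** 100    # ~1 percent a year over the preceding century
    fast = 1.045 ** 100   # 4-5 percent a year, 1815-1914
    print(f"at 1.0%: trade multiplies {slow:.1f}-fold")   # ~2.7-fold
    print(f"at 4.5%: trade multiplies {fast:.0f}-fold")   # ~82-fold

On these assumptions the cumulative expansion of world trade was roughly thirty times greater than in the preceding century, which conveys the scale of the demand to which suppliers like Malaysia were responding.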

As expanding British power brought stability to the region, Chinese migrants started to arrive in large numbers with Singapore quickly becoming the major point of entry. Most arrived with few funds but those able to amass profits from trade (including opium) used these to finance ventures in agriculture and mining, especially in the neighboring Malay Peninsula. Crops such as pepper, gambier, tapioca, sugar and coffee were produced for export to markets in Asia (e.g. China), and later to the West after 1850 when Britain moved toward a policy of free trade. These crops were labor, not capital, intensive and in some cases quickly exhausted soil fertility and required periodic movement to virgin land (Jackson, 1968).

Tin

Besides ample land, the Malay Peninsula also contained substantial deposits of tin. International demand for tin rose progressively in the nineteenth century due to the discovery of a more efficient method for producing tinplate (for canned food). At the same time deposits in major suppliers such as Cornwall (England) had been largely worked out, thus opening an opportunity for new producers. Traditionally tin had been mined by Malays from ore deposits close to the surface. Difficulties with flooding limited the depth of mining; furthermore their activity was seasonal. From the 1840s the discovery of large deposits in the Peninsula states of Perak and Selangor attracted large numbers of Chinese migrants who dominated the industry in the nineteenth century bringing new technology which improved ore recovery and water control, facilitating mining to greater depths. By the end of the century Malayan tin exports (at approximately 52,000 metric tons) supplied just over half the world output. Singapore was a major center for smelting (refining) the ore into ingots. Tin mining also attracted attention from European, mainly British, investors who again introduced new technology – such as high-pressure hoses to wash out the ore, the steam pump and, from 1912, the bucket dredge floating in its own pond, which could operate to even deeper levels. These innovations required substantial capital for which the chosen vehicle was the public joint stock company, usually registered in Britain. Since no major new ore deposits were found, the emphasis was on increased efficiency in production. European operators, again employing mostly Chinese wage labor, enjoyed a technical advantage here and by 1929 accounted for 61 percent of Malayan output (Wong Lin Ken, 1965; Yip Yat Hoong, 1969).

Rubber

While tin mining brought considerable prosperity, it was a non-renewable resource. In the early twentieth century it was the agricultural sector which came to the forefront. The crops mentioned previously had boomed briefly but were hard pressed to survive severe price swings and the pests and diseases that were endemic in tropical agriculture. The cultivation of rubber-yielding trees became commercially attractive as a raw material for new industries in the West, notably for tires for the booming automobile industry, especially in the U.S. Previously rubber had come from scattered trees growing wild in the jungles of South America, with production only expandable at rising marginal costs. Cultivation on estates generated economies of scale. In the 1870s the British government organized the transport of specimens of the tree Hevea brasiliensis from Brazil to colonies in the East, notably Ceylon and Singapore. There the trees flourished and, after initial hesitancy over the five years needed for the trees to reach productive age, planters, Chinese and European alike, rushed to invest. The boom reached vast proportions as the rubber price reached record heights in 1910 (see Fig.1). Average values fell thereafter but investors were heavily committed and planting continued (also in the neighboring Netherlands Indies [Indonesia]). By 1921 the rubber acreage in Malaysia (mostly in the Peninsula) had reached 935,000 hectares (about 2.3 million acres), some 55 percent of the total in South and Southeast Asia, while output stood at 50 percent of world production.

Fig.1. Average London Rubber Prices, 1905-41 (current values)

As a result of this boom, rubber quickly surpassed tin as Malaysia’s main export product, a position that it was to hold until 1980. A distinctive feature of the industry was that the technology of extracting the rubber latex from the trees (called tapping) by an incision with a special knife, and its manufacture into various grades of sheet known as raw or plantation rubber, was easily adopted by a wide range of producers. The larger estates, mainly British-owned, were financed (as in the case of tin mining) through British-registered public joint stock companies. For example, between 1903 and 1912 some 260 companies were registered to operate in Malaya. Chinese planters for the most part preferred to form private partnerships to operate estates which were on average smaller. Finally, there were the smallholdings (under 40 hectares or 100 acres) of which those at the lower end of the range (2 hectares/5 acres or less) were predominantly owned by indigenous Malays who found growing and selling rubber more profitable than subsistence (rice) farming. These smallholders did not need much capital since their equipment was rudimentary and labor came either from within their family or in the form of share-tappers who received a proportion (say 50 percent) of the output. In Malaya in 1921 roughly 60 percent of the planted area was estates (75 percent European-owned) and 40 percent smallholdings (Drabble, 1991, 1).

The workforce for the estates consisted of migrants. British estates depended mainly on migrants from India, brought in under government auspices with fares paid and accommodation provided. Chinese business looked to the “coolie trade” from South China, with expenses advanced which migrants subsequently had to pay off. The flow of immigration was directly related to economic conditions in Malaysia. For example, arrivals of Indians averaged 61,000 a year between 1900 and 1920. Substantial numbers also came from the Netherlands Indies.

Thus far, most capitalist enterprise was located in Malaya. Sarawak and British North Borneo had a similar range of mining and agricultural industries in the nineteenth century, but their location slightly away from the main trade route (see map) and their rugged internal terrain, costly for transport, made them less attractive to foreign investment. However, the discovery of oil by a subsidiary of Royal Dutch-Shell, with production starting in 1907, gave Sarawak a more prominent place in the export economy. As in Malaya, the labor force came largely from immigrants from China and to a lesser extent Java.

The growth in production for export in Malaysia was facilitated by development of an infrastructure of roads, railways, ports (e.g. Penang, Singapore) and telecommunications under the auspices of the colonial governments, though again this was considerably more advanced in Malaya (Amarjit Kaur, 1985, 1998).

The Creation of a Plural Society

By the 1920s the large inflows of migrants had created a multi-ethnic population of the type which the British scholar, J.S. Furnivall (1948) described as a plural society in which the different racial groups live side by side under a single political administration but, apart from economic transactions, do not interact with each other either socially or culturally. Though the original intention of many migrants was to come for only a limited period (say 3-5 years), save money and then return home, a growing number were staying longer, having children and becoming permanently domiciled in Malaysia. The economic developments described in the previous section were unevenly located, for example, in Malaya the bulk of the tin mines and rubber estates were located along the west coast of the Peninsula. In the boom-times, such was the size of the immigrant inflows that in certain areas they far outnumbered the indigenous Malays. In social and cultural terms Indians and Chinese recreated the institutions, hierarchies and linguistic usage of their countries of origin. This was particularly so in the case of the Chinese. Not only did they predominate in major commercial centers such as Penang, Singapore, and Kuching, but they controlled local trade in the smaller towns and villages through a network of small shops (kedai) and dealerships that served as a pipeline along which export goods like rubber went out and in return imported manufactured goods were brought in for sale. In addition Chinese owned considerable mining and agricultural land. This created a distribution of wealth and division of labor in which economic power and function were directly related to race. In this situation lay the seeds of growing discontent among bumiputera that they were losing their ancestral inheritance (land) and becoming economically marginalized. As long as British colonial rule continued the various ethnic groups looked primarily to government to protect their interests and maintain peaceable relations. An example of colonial paternalism was the designation from 1913 of certain lands in Malaya as Malay Reservations in which only indigenous people could own and deal in property (Lim Teck Ghee, 1977).

Benefits and Drawbacks of an Export Economy

Prior to World War II the international economy was divided very broadly into the northern and southern hemispheres. The former contained most of the industrialized manufacturing countries and the latter the principal sources of foodstuffs and raw materials. The commodity exchange between the spheres was known as the Old International Division of Labor (OIDL). Malaysia’s place in this system was as a leading exporter of raw materials (tin, rubber, timber, oil, etc.) and an importer of manufactures. Since relatively little processing was done on the former prior to export, most of the value-added component in the final product accrued to foreign manufacturers, e.g. rubber tire manufacturers in the U.S.

It is clear from this situation that Malaysia depended heavily on earnings from exports of primary commodities to maintain the standard of living. Rice had to be imported (mainly from Burma and Thailand) because domestic production supplied on average only 40 percent of total needs. As long as export prices were high (for example during the rubber boom previously mentioned), the volume of imports remained ample. Profits to capital and good smallholder incomes supported an expanding economy. There are no official data for Malaysian national income prior to World War II, but some comparative estimates are given in Table 1 which indicate that Malayan Gross Domestic Product (GDP) per person was easily the leader in the Southeast and East Asian region by the late 1920s.

Table 1
GDP per Capita: Selected Asian Countries, 1900-1990
(in 1985 international dollars)

                      1900     1929     1950     1973     1990
Malaya/Malaysia [1]    600 [2] 1910     1828     3088     5775
Singapore                -        -     2276 [3] 5372    14441
Burma                  523      651      304      446      562
Thailand               594      623      652     1559     3694
Indonesia              617     1009      727     1253     2118
Philippines            735     1106      943     1629     1934
South Korea            568      945      565     1782     6012
Japan                  724     1192     1208     7133    13197

Notes: [1] Malaya up to 1973; [2] guesstimate; [3] figure is for 1960.

Source: van der Eng (1994).

However, the international economy was subject to strong fluctuations. The levels of activity in the industrialized countries, especially the U.S., were the determining factors here. Almost immediately following World War I there was a depression from 1919-22. Strong growth in the mid and late-1920s was followed by the Great Depression (1929-32). As industrial output slumped, primary product prices fell even more heavily. For example, in 1932 rubber sold on the London market for about one one-hundredth of the peak price in 1910 (Fig.1). The effects on export earnings were very severe; in Malaysia’s case between 1929 and 1932 these dropped by 73 percent (Malaya), 60 percent (Sarawak) and 50 percent (North Borneo). The aggregate value of imports fell on average by 60 percent. Estates dismissed labor and since there was no social security, many workers had to return to their country of origin. Smallholder incomes dropped heavily and many who had taken out high-interest secured loans in more prosperous times were unable to service these and faced the loss of their land.

The colonial government attempted to counteract this vulnerability to economic swings by instituting schemes to restore commodity prices to profitable levels. For the rubber industry this involved two periods of mandatory restriction of exports to reduce world stocks and thus exert upward pressure on market prices. The first of these (named the Stevenson scheme after its originator) lasted from 1 October 1922 to 1 November 1928, and the second (the International Rubber Regulation Agreement) from 1 June 1934 to 1941. Tin exports were similarly restricted from 1931 to 1941. While these measures did succeed in raising world prices, the inequitable treatment of Asian as against European producers in both industries has been debated. The protective policy has also been blamed for “freezing” the structure of the Malaysian economy and hindering further development, for instance into manufacturing industry (Lim Teck Ghee, 1977; Drabble, 1991).

Why No Industrialization?

Malaysia had very few secondary industries before World War II. What little industry did appear was connected mainly with the processing of the primary exports, rubber and tin, together with limited production of manufactured goods for the domestic market (e.g. bread, biscuits, beverages, cigarettes and various building materials). Much of this activity was Chinese-owned and located in Singapore (Huff, 1994). Among the reasons advanced are: the small size of the domestic market, the relatively high wage levels in Singapore which made products uncompetitive as exports, and a culture dominated by British trading firms which favored commerce over industry. Overshadowing all these was the dominance of primary production. When commodity prices were high, there was little incentive for investors, European or Asian, to move into other sectors. Conversely, when these prices fell capital and credit dried up, while incomes contracted, thus lessening effective demand for manufactures. W.G. Huff (2002) has argued that, prior to World War II, “there was, in fact, never a good time to embark on industrialization in Malaya.”

War Time 1942-45: The Japanese Occupation

During the Japanese occupation years of World War II, the export of primary products was limited to the relatively small amounts required for the Japanese economy. This led to the abandonment of large areas of rubber and the closure of many mines, the latter progressively affected by a shortage of spare parts for machinery. Businesses, especially those Chinese-owned, were taken over and reassigned to Japanese interests. Rice imports fell heavily and thus the population devoted a large part of their efforts to producing enough food to stay alive. Large numbers of laborers (many of whom died) were conscripted to work on military projects such as construction of the Thai-Burma railroad. Overall the war period saw the dislocation of the export economy, widespread destruction of the infrastructure (roads, bridges etc.) and a decline in standards of public health. It also saw a rise in inter-ethnic tensions due to the harsh treatment meted out by the Japanese to some groups, notably the Chinese, compared to a more favorable attitude towards the indigenous peoples among whom (Malays particularly) there was a growing sense of ethnic nationalism (Drabble, 2000).

Postwar Reconstruction and Independence

The returning British colonial rulers had two priorities after 1945: to rebuild the export economy as it had been under the OIDL (see above), and to rationalize the fragmented administrative structure (see General Background). The first was accomplished by the late 1940s, with estates and mines refurbished, production restarted once the labor force had been brought back, and adequate rice imports regained. The second was a complex and delicate political process which resulted in the formation of the Federation of Malaya (1948) from which Singapore, with its predominantly Chinese population (about 75 percent), was kept separate. In Borneo in 1946 the state of Sarawak, which had been a private kingdom of the English Brooke family (the so-called “White Rajas”) since 1841, and North Borneo, administered by the British North Borneo Company from 1881, were both transferred to direct rule from Britain. However, independence was clearly on the horizon and in Malaya tensions continued with the guerrilla campaign (called the “Emergency”) waged by the Malayan Communist Party (membership largely Chinese) from 1948-60 to force out the British and set up a Malayan Peoples’ Republic. This failed, and in 1957 the Malayan Federation gained independence (Merdeka) under a “bargain” by which the Malays would hold political paramountcy while others, notably Chinese and Indians, were given citizenship and the freedom to pursue their economic interests. The bargain was institutionalized as the Alliance, later renamed the National Front (Barisan Nasional), which remains the dominant political grouping. In 1963 the Federation of Malaysia was formed, in which the bumiputera population was sufficient in total to offset the high proportion of Chinese arising from the short-lived inclusion of Singapore (Andaya and Andaya, 2001).

Towards the Formation of a National Economy

Postwar, two long-term problems came to the forefront. These were (a) the political fragmentation (see above) which had long prevented a centralized approach to economic development, coupled with control from Britain which gave primacy to imperial as opposed to local interests and (b) excessive dependence on a small range of primary products (notably rubber and tin) which prewar experience had shown to be an unstable basis for the economy.

The first of these was addressed partly through the political rearrangements outlined in the previous section, with the economic aspects buttressed by a report from a mission to Malaya from the International Bank for Reconstruction and Development (IBRD) in 1954. The report argued that Malaya “is now a distinct national economy.” A further mission in 1963 urged “closer economic cooperation between the prospective Malaysia[n] territories” (cited in Drabble, 2000, 161, 176). The rationale for the Federation was that Singapore would serve as the initial center of industrialization, with Malaya, Sabah and Sarawak following at a pace determined by local conditions.

The second problem centered on economic diversification. The IBRD reports just noted advocated building up a range of secondary industries to meet a larger portion of the domestic demand for manufactures, i.e. import-substitution industrialization (ISI). In the interim dependence on primary products would perforce continue.

The Adoption of Planning

In the postwar world the development plan (usually a Five-Year Plan) was widely adopted by Less-Developed Countries (LDCs) to set directions, targets and estimated costs. Each of the Malaysian territories had plans during the 1950s. Malaya was the first to get industrialization of the ISI type under way. The Pioneer Industries Ordinance (1958) offered inducements such as five-year tax holidays, guarantees (to foreign investors) of freedom to repatriate profits and capital etc. A modest degree of tariff protection was granted. The main types of goods produced were consumer items such as batteries, paints, tires, and pharmaceuticals. Just over half the capital invested came from abroad, with neighboring Singapore in the lead. When Singapore exited the federation in 1965, Malaysia’s fledgling industrialization plans assumed greater significance although foreign investors complained of stifling bureaucracy retarding their projects.

Primary production, however, was still the major economic activity and here the problem was rejuvenation of the leading industries, rubber in particular. New capital investment in rubber had slowed since the 1920s, and the bulk of the existing trees were nearing the end of their economic life. The best prospect for rejuvenation lay in cutting down the old trees and replanting the land with new varieties capable of raising output per acre/hectare by a factor of three or four. However, the new trees required seven years to mature. Corporately owned estates could replant progressively, but smallholders could not face such a prolonged loss of income without support. To encourage replanting, the government offered grants to owners, financed by a special duty on rubber exports. The process was a lengthy one and it was the 1980s before replanting was substantially complete. Moreover, many estates elected to switch over to a new crop, oil palms (a product used primarily in foodstuffs), which offered quicker returns. Progress was swift and by the 1960s Malaysia was supplying 20 percent of world demand for this commodity.
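The economics of the replanting decision can be made concrete with a simple discounted cash-flow sketch. Only the seven-year maturation period and the roughly threefold yield gain come from the text; the income figures, horizon and discount rates below are hypothetical:

    # Hypothetical appraisal of replanting: give up income for 7 years,
    # then earn roughly 3x the old yield. All money figures are illustrative.
    def npv(cashflows, rate):
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

    old_income = 100                              # annual income, old trees
    years = 30                                    # planning horizon
    keep = [old_income] * years                   # keep tapping the old stand
    replant = [0] * 7 + [3 * old_income] * (years - 7)

    for r in (0.05, 0.20):   # cheap capital (estate) vs. dear capital (smallholder)
        print(f"rate {r:.0%}: keep {npv(keep, r):.0f}, replant {npv(replant, r):.0f}")

At a low discount rate replanting dominates, but at the high effective rate facing a cash-poor smallholder the seven lean years outweigh the later gains, which is why grants financed by the export duty were needed to make replanting viable for smallholders.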

Another priority at this time consisted of programs to improve the standard of living of the indigenous peoples, most of whom lived in the rural areas. The main instrument was land development, with schemes to open up large areas (say 100,000 acres or 40,000 hectares) which were then subdivided into 10 acre/4 hectare blocks for distribution to small farmers from overcrowded regions who were either short of land or had none at all. Financial assistance (repayable) was provided to cover housing and living costs until the holdings became productive. Rubber and oil palms were the main commercial crops planted. Steps were also taken to increase the domestic production of rice to lessen the historical dependence on imports.

In the primary sector Malaysia’s range of products was increased from the 1960s by a rapid increase in the export of hardwood timber, mostly in the form of (unprocessed) saw-logs. The markets were mainly in East Asia and Australasia. Here the largely untapped resources of Sabah and Sarawak came to the fore, but the rapid rate of exploitation led by the late twentieth century to damaging effects on both the environment (extensive deforestation, soil-loss, silting, changed weather patterns), and the traditional hunter-gatherer way of life of forest-dwellers (decrease in wild-life, fish, etc.). Other development projects such as the building of dams for hydroelectric power also had adverse consequences in all these respects (Amarjit Kaur, 1998; Drabble, 2000; Hong, 1987).

A further major addition to primary exports came from the discovery of large deposits of oil and natural gas in East Malaysia, and off the east coast of the Peninsula, from the 1970s. Gas was exported in liquefied form (LNG), and was also used domestically as a substitute for oil. At peak values in 1982, petroleum and LNG provided around 29 percent of Malaysian export earnings, but this had declined to 18 percent by 1988.

Industrialization and the New Economic Policy 1970-90

The program of industrialization aimed primarily at the domestic market (ISI) lost impetus in the late 1960s as foreign investors, particularly from Britain, switched attention elsewhere. An important factor here was the outbreak of civil disturbances in May 1969, following a federal election in which political parties in the Peninsula (largely non-bumiputera in membership) opposed to the Alliance did unexpectedly well. This brought to a head tensions which had been rising during the 1960s over issues such as the use of the national language, Malay (Bahasa Malaysia), as the main instructional medium in education. There was also discontent among Peninsular Malays that the economic fruits since independence had gone mostly to non-Malays, notably the Chinese. The outcome was severe inter-ethnic rioting centered in the federal capital, Kuala Lumpur, which led to the suspension of parliamentary government for two years and the implementation of the New Economic Policy (NEP).

The NEP set out to restructure the Malaysian economy over two decades, 1970-90, with the following aims:

  1. to redistribute corporate equity so that the bumiputera share would rise from around 2 percent to 30 percent. The share of other Malaysians would increase marginally from 35 to 40 percent, while that of foreigners would fall from 63 percent to 30 percent.
  2. to eliminate the close link between race and economic function (a legacy of the colonial era) and restructure employment so that the bumiputera share in each sector would reflect more accurately their proportion of the total population (roughly 55 percent). In 1970 this group had about two-thirds of jobs in the primary sector where incomes were generally lowest, but only 30 percent in the secondary sector. In high-income middle class occupations (e.g. professions, management) the share was only 13 percent.
  3. To eradicate poverty irrespective of race. In 1970 just under half of all households in Peninsular Malaysia had incomes below the official poverty line. Malays accounted for about 75 percent of these.

The principle underlying these aims was that the redistribution would not result in any one group losing in absolute terms. Rather it would be achieved through the process of economic growth, i.e. the economy would get bigger (more investment, more jobs, etc.), so that a group’s relative share could fall while its absolute holdings grew (see the illustrative calculation below). While the primary sector would continue to receive developmental aid under the successive Five Year Plans, the main emphasis was a switch to export-oriented industrialization (EOI) with Malaysia seeking a share in global markets for manufactured goods. Free Trade Zones (FTZs) were set up in places such as Penang where production was carried on with the undertaking that the output would be exported. Firms locating there received concessions such as duty-free imports of raw materials and capital goods, and tax concessions, aimed primarily at foreign investors who were also attracted by Malaysia’s good facilities, relatively low wages and docile trade unions. A range of industries grew up: textiles, rubber and food products, chemicals, telecommunications equipment, electrical and electronic machinery/appliances, car assembly and some heavy industries, iron and steel. As with ISI, much of the capital and technology was foreign; for example, the Japanese firm Mitsubishi was a partner in a venture to set up a plant to assemble a Malaysian national car, the Proton, from mostly imported components (Drabble, 2000).
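The non-zero-sum arithmetic of “redistribution through growth” is easy to verify. The target shares below are those listed above; the 8 percent annual growth rate is an assumed figure for illustration only:

    # With a growing equity pie, a falling share can still be a larger slice.
    # Shares are the NEP targets; the 8% annual growth rate is an assumption.
    total_1970 = 100                         # index of corporate equity, 1970
    total_1990 = total_1970 * 1.08 ** 20     # ~4.7x after 20 years at 8%

    for group, s70, s90 in [("bumiputera", 0.02, 0.30),
                            ("other Malaysian", 0.35, 0.40),
                            ("foreign", 0.63, 0.30)]:
        print(f"{group}: {s70 * total_1970:.0f} -> {s90 * total_1990:.0f}")

On this assumption even the foreign slice, whose share was to halve, more than doubles in absolute terms (from 63 to about 140 on the index), so no group need lose outright.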

Results of the NEP

Table 2 below shows the outcome of the NEP in the categories outlined above.

Table 2
Restructuring under the NEP, 1970-90

                                              1970           1990
Wealth Ownership (% of corporate equity)
  Bumiputera                                   2.0           20.3
  Other Malaysians                            34.6           54.6
  Foreigners                                  63.4           25.1
Employment (% of total workers in each sector)
  Primary sector (agriculture, mineral
  extraction, forest products and fishing)
    Bumiputera                                67.6 [61.0]*   71.2 [36.7]*
    Others                                    32.4           28.8
  Secondary sector (manufacturing and
  construction)
    Bumiputera                                30.8 [14.6]*   48.0 [26.3]*
    Others                                    69.2           52.0
  Tertiary sector (services)
    Bumiputera                                37.9 [24.4]*   51.0 [36.9]*
    Others                                    62.1           49.0

Note: [ ]* is the proportion of the ethnic group thus employed. The “others” category has not been disaggregated by race to avoid undue complexity.
Source: Drabble, 2000, Table 10.9.

The wealth ownership figures in Table 2 show that, overall, foreign ownership fell substantially more than planned, while that of “Other Malaysians” rose well above the target. Bumiputera ownership appears to have stopped well short of the 30 percent mark. However, other evidence suggests that in certain sectors such as agriculture/mining (35.7%) and banking/insurance (49.7%) bumiputera ownership of shares in publicly listed companies had already attained a level well beyond the target. The employment figures indicate that while the bumiputera employment share in primary production increased slightly (due mainly to the land schemes), as a proportion of that ethnic group it declined sharply, while rising markedly in both the secondary and tertiary sectors. In middle-class employment the share rose to 27 percent.

As regards the proportion of households below the poverty line, in broad terms the incidence in Malaysia fell from approximately 49 percent in 1970 to 17 percent in 1990, but with large regional variations between the Peninsula (15%), Sarawak (21%) and Sabah (34%) (Drabble, 2000, Table 13.5). All ethnic groups registered big falls, but on average the non-bumiputera still enjoyed the lowest incidence of poverty. By 2002 the overall level had fallen to only 4 percent.

The restructuring of the Malaysian economy under the NEP is very clear when we look at the changes in composition of the Gross Domestic Product (GDP) in Table 3 below.

Table 3
Structural Change in GDP 1970-90 (% shares)

Year Primary Secondary Tertiary
1970 44.3 18.3 37.4
1990 28.1 30.2 41.7

Source: Malaysian Government, 1991, Table 3-2.

Over these two decades Malaysia accomplished a transition from a primary product-dependent economy to one in which manufacturing industry had emerged as the leading growth sector. Rubber and tin, which accounted for 54.3 percent of Malaysian export value in 1970, declined sharply in relative terms to a mere 4.9 percent in 1990 (Crouch, 1996, 222).

Factors in the structural shift

The post-independence state played a leading role in the transformation. The transition from British rule was smooth. Apart from the disturbances in 1969, the government maintained firm control over the administrative machinery. Malaysia’s Five Year Development Plans were a model for the developing world. Foreign capital was accorded a central role, though subject to the requirements of the NEP. At the same time these requirements discouraged domestic investors, the Chinese especially, to some extent (Jesudason, 1989).

Development was helped by major improvements in education and health. Enrolments at the primary school level reached approximately 90 percent by the 1970s, and at the secondary level 59 percent of potential by 1987. Increased female enrolments, up from 39 percent to 58 percent of potential from 1975 to 1991, were a notable feature, as was the participation of women in the workforce which rose to just over 45 percent of total employment by 1986/7. In the tertiary sector the number of universities increased from one to seven between 1969 and 1990 and numerous technical and vocational colleges opened. Bumiputera enrolments soared as a result of the NEP policy of redistribution (which included ethnic quotas and government scholarships). However, tertiary enrolments totaled only 7 percent of the age group by 1987. There was an “educational-occupation mismatch,” with graduates (bumiputera especially) preferring jobs in government, and consequent shortfalls against strong demand for engineers, research scientists, technicians and the like. Better living conditions (more homes with piped water and more rural clinics, for example) led to substantial falls in infant mortality, improved public health and longer life-expectancy, especially in Peninsular Malaysia (Drabble, 2000, 248, 284-6).

The quality of national leadership was a crucial factor. This was particularly so during the NEP. The leading figure here was Dr Mahathir Mohamad, Malaysian Prime Minister from 1981-2003. While supporting the NEP aim through positive discrimination to give bumiputera an economic stake in the country commensurate with their indigenous status and share in the population, he nevertheless emphasized that this should ultimately lead them to a more modern outlook and ability to compete with the other races in the country, the Chinese especially (see Khoo Boo Teik, 1995). There were, however, some paradoxes here. Mahathir was a meritocrat in principle, but in practice this period saw the spread of “money politics” (another expression for patronage) in Malaysia. In common with many other countries Malaysia embarked on a policy of privatization of public assets, notably in transportation (e.g. Malaysian Airlines), utilities (e.g. electricity supply) and communications (e.g. television). This was done not through an open process of competitive tendering but rather by a “nebulous ‘first come, first served’ principle” (Jomo, 1995, 8) which saw ownership pass directly to politically well-connected businessmen, mainly bumiputera, at relatively low valuations.

The New Development Policy

Positive action to promote bumiputera interests did not end with the NEP in 1990; it was followed in 1991 by the New Development Policy (NDP), which emphasized assistance only to “Bumiputera with potential, commitment and good track records” (Malaysian Government, 1991, 17) rather than the previous blanket measures to redistribute wealth and employment. In turn the NDP was part of a longer-term program known as Vision 2020. The aim here is to turn Malaysia into a fully industrialized country and to quadruple per capita income by the year 2020. This will require the country to continue ascending the technological “ladder” from low- to high-tech types of industrial production, with a corresponding increase in the intensity of capital investment and greater retention of value-added (i.e. the value added to raw materials in the production process) by Malaysian producers.
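The quadrupling target and the 7 percent growth figure quoted earlier can be reconciled with a little compound-interest arithmetic. A minimal sketch, assuming the plan runs from 1990 to 2020 and that population grows at roughly 2.3 percent a year (the population growth figure is an assumption, not from the text):

    # Growth rates implied by quadrupling per capita income over 30 years.
    years = 30
    per_capita = 4 ** (1 / years) - 1      # ~4.7% a year per head
    pop_growth = 0.023                     # assumed annual population growth
    total_gdp = (1 + per_capita) * (1 + pop_growth) - 1
    print(f"per capita: {per_capita:.1%}, total GDP: {total_gdp:.1%}")  # ~4.7%, ~7.1%

On these assumptions the required aggregate growth comes out very close to the 7 percent a year cited from the Far Eastern Economic Review.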

The Malaysian economy continued to boom at historically unprecedented rates of 8-9 percent a year for much of the 1990s (see next section). There was heavy expenditure on infrastructure, for example extensive building in Kuala Lumpur such as the Petronas Twin Towers (at the time the tallest buildings in the world). The volume of manufactured exports, notably electronic goods and components, increased rapidly.

Asian Financial Crisis, 1997-98

The Asian financial crisis originated in heavy international currency speculation leading to major slumps in exchange rates, beginning with the Thai baht in May 1997 and spreading rapidly throughout East and Southeast Asia, severely affecting the banking and finance sectors. The Malaysian ringgit exchange rate fell from RM 2.42 to RM 4.88 to the U.S. dollar by January 1998. There was a heavy outflow of foreign capital. To counter the crisis the International Monetary Fund (IMF) recommended austerity changes to fiscal and monetary policies. Some countries (Thailand, South Korea, and Indonesia) reluctantly adopted these. The Malaysian government refused and implemented independent measures: the ringgit became non-convertible externally and was pegged at RM 3.80 to the US dollar, while foreign capital repatriated within twelve months of entry was subject to substantial levies. Despite international criticism these actions stabilized the domestic situation quite effectively, restoring net growth (see next section), especially compared to neighboring Indonesia.
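The scale of the currency collapse is worth spelling out. Using only the two quoted rates (RM per US dollar):

    # Depreciation implied by the quoted exchange rates.
    before, after = 2.42, 4.88     # RM per US dollar, pre-crisis vs. Jan 1998
    loss = 1 - before / after      # fall in the ringgit's dollar value
    print(f"the ringgit lost {loss:.0%} of its dollar value")   # ~50%

The subsequent peg at RM 3.80 thus re-fixed the currency roughly midway between the pre-crisis rate and the trough.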

Rates of Economic Growth

Malaysia’s economic growth in comparative perspective from 1960-90 is set out in Table 4 below.

Table 4
Asia-Pacific Region: Growth of Real GDP (annual average percent)

                1960-69   1971-80   1981-89
Japan             10.9       5.0       4.0
Asian “Tigers”
  Hong Kong       10.0       9.5       7.2
  South Korea      8.5       8.7       9.3
  Singapore        8.9       9.0       6.9
  Taiwan          11.6       9.7       8.1
ASEAN-4
  Indonesia        3.5       7.9       5.2
  Malaysia         6.5       8.0       5.4
  Philippines      4.9       6.2       1.7
  Thailand         8.3       9.9       7.1

Source: Drabble, 2000, Table 10.2; figures for Japan are for 1960-70, 1971-80, and 1981-90.

The data show that Japan, the dominant Asian economy for much of this period, progressively slowed by the 1990s (see below). The four leading Newly Industrialized Countries (the Asian “Tigers,” as they were called) followed EOI strategies and achieved very high rates of growth. Among the four ASEAN (Association of Southeast Asian Nations, formed 1967) members, again all adopting EOI policies, Thailand stood out, followed closely by Malaysia. Reference to Table 1 above shows that by 1990 Malaysia, while still among the leaders in GDP per head, had slipped relative to the “Tigers.”

These economies, joined by China, continued growing into the 1990s at such high rates (Malaysia averaged around 8 percent a year) that the term “Asian miracle” became a common description. The exception was Japan, which encountered major problems with structural change and an over-extended banking system. Post-crisis, the countries of the region have begun to recover, but at differing rates. The Malaysian economy contracted by nearly 7 percent in 1998, recovered to 8 percent growth in 2000, slipped again to under 1 percent in 2001 and has since stabilized at between 4 and 5 percent growth in 2002-04.

The new Malaysian Prime Minister (since October 2003), Abdullah Ahmad Badawi, plans to shift the emphasis in development to smaller, less-costly infrastructure projects and to break the previous dominance of “money politics.” Foreign direct investment will still be sought but priority will be given to nurturing the domestic manufacturing sector.

Further improvements in education will remain a key factor (Far Eastern Economic Review, Nov.6, 2003).

Overview

Malaysia owes its successful historical economic record to a number of factors. Geographically it lies close to major world trade routes, bringing early exposure to the international economy. The sparse indigenous population and labor force have been supplemented by immigrants, mainly from neighboring Asian countries, many of whom became permanently domiciled. The economy has always been exceptionally open to external influences such as globalization. Foreign capital has played a major role throughout. Governments, colonial and national, have aimed at managing the structure of the economy while maintaining inter-ethnic stability. Since about 1960 the economy has benefited from extensive restructuring with sustained growth of exports from both the primary and secondary sectors, thus gaining a double impetus.

However, on a less positive assessment, the country has so far exchanged dependence on a limited range of primary products (e.g. tin and rubber) for dependence on an equally limited range of manufactured goods, notably electronics and electronic components (59 percent of exports in 2002). These industries are facing increasing competition from lower-wage countries, especially India and China. Within Malaysia the distribution of secondary industry is unbalanced, currently heavily favoring the Peninsula. Sabah and Sarawak are still heavily dependent on primary products (timber, oil, LNG). There is an urgent need to continue the search for new industries in which Malaysia can enjoy a comparative advantage in world markets, not least because inter-ethnic harmony depends heavily on the continuance of economic prosperity.

Select Bibliography

General Studies

Amarjit Kaur. Economic Change in East Malaysia: Sabah and Sarawak since 1850. London: Macmillan, 1998.

Andaya, L.Y. and Andaya, B.W. A History of Malaysia, second edition. Basingstoke: Palgrave, 2001.

Crouch, Harold. Government and Society in Malaysia. Sydney: Allen and Unwin, 1996.

Drabble, J.H. An Economic History of Malaysia, c.1800-1990: The Transition to Modern Economic Growth. Basingstoke: Macmillan and New York: St. Martin’s Press, 2000.

Furnivall, J.S. Colonial Policy and Practice: A Comparative Study of Burma and Netherlands India. Cambridge (UK), 1948.

Huff, W.G. The Economic Growth of Singapore: Trade and Development in the Twentieth Century. Cambridge: Cambridge University Press, 1994.

Jomo, K.S. Growth and Structural Change in the Malaysian Economy. London: Macmillan, 1990.

Industries/Transport

Alavi, Rokiah. Industrialization in Malaysia: Import Substitution and Infant Industry Performance. London: Routledge, 1996.

Amarjit Kaur. Bridge and Barrier: Transport and Communications in Colonial Malaya 1870-1957. Kuala Lumpur: Oxford University Press, 1985.

Drabble, J.H. Rubber in Malaya 1876-1922: The Genesis of the Industry. Kuala Lumpur: Oxford University Press, 1973.

Drabble, J.H. Malayan Rubber: The Interwar Years. London: Macmillan, 1991.

Huff, W.G. “Boom or Bust Commodities and Industrialization in Pre-World War II Malaya.” Journal of Economic History 62, no. 4 (2002): 1074-1115.

Jackson, J.C. Planters and Speculators: European and Chinese Agricultural Enterprise in Malaya 1786-1921. Kuala Lumpur: University of Malaya Press, 1968.

Lim Teck Ghee. Peasants and Their Agricultural Economy in Colonial Malaya, 1874-1941. Kuala Lumpur: Oxford University Press, 1977.

Wong Lin Ken. The Malayan Tin Industry to 1914. Tucson: University of Arizona Press, 1965.

Yip Yat Hoong. The Development of the Tin Mining Industry of Malaya. Kuala Lumpur: University of Malaya Press, 1969.

New Economic Policy

Jesudason, J.V. Ethnicity and the Economy: The State, Chinese Business and Multinationals in Malaysia. Kuala Lumpur: Oxford University Press, 1989.

Jomo, K.S., editor. Privatizing Malaysia: Rents, Rhetoric, Realities. Boulder, CO: Westview Press, 1995.

Khoo Boo Teik. Paradoxes of Mahathirism: An Intellectual Biography of Mahathir Mohamad. Kuala Lumpur: Oxford University Press, 1995.

Vincent, J.R., R.M. Ali and Associates. Environment and Development in a Resource-Rich Economy: Malaysia under the New Economic Policy. Cambridge, MA: Harvard University Press, 1997.

Ethnic Communities

Chew, Daniel. Chinese Pioneers on the Sarawak Frontier, 1841-1941. Kuala Lumpur: Oxford University Press, 1990.

Gullick, J.M. Malay Society in the Late Nineteenth Century. Kuala Lumpur: Oxford University Press, 1989.

Hong, Evelyne. Natives of Sarawak: Survival in Borneo’s Vanishing Forests. Penang: Institut Masyarakat Malaysia, 1987.

Shamsul, A.B. From British to Bumiputera Rule. Singapore: Institute of Southeast Asian Studies, 1986.

Economic Growth

Far Eastern Economic Review. Hong Kong. An excellent weekly overview of current regional affairs.

Malaysian Government. The Second Outline Perspective Plan, 1991-2000. Kuala Lumpur: Government Printer, 1991.

Van der Eng, Pierre. “Assessing Economic Growth and the Standard of Living in Asia 1870-1990.” Paper presented at the Eleventh International Economic History Congress, Milan, 1994.

Citation: Drabble, John. “The Economic History of Malaysia”. EH.Net Encyclopedia, edited by Robert Whaples. July 31, 2004. URL http://eh.net/encyclopedia/economic-history-of-malaysia/

The Economic History of Korea

Myung Soo Cha, Yeungnam University

Three Periods

Two regime shifts divide the economic history of Korea during the past six centuries into three distinct periods: 1) the period of Malthusian stagnation up to 1910, when Japan annexed Korea; 2) the colonial period from 1910-45, when the country embarked upon modern economic growth; and 3) the postcolonial decades, when living standards improved rapidly in South Korea, while North Korea returned to the world of disease and starvation. The dramatic history of living standards in Korea presents one of the most convincing pieces of evidence to show that institutions — particularly the government — matter for economic growth.

Dynastic Degeneration

The founders of the Chosôn dynasty (1392-1910) imposed a tribute system on a little-commercialized peasant economy, collecting taxes in the form of a wide variety of products and mobilizing labor to obtain the handicrafts and services it needed. From the late sixteenth to the early seventeenth century, invading armies from Japan and China shattered the command system and forced a transition to a market economy. The damaged bureaucracy started to receive taxes in money commodities — rice and cotton textiles — and eventually began to mint copper coins and lifted restrictions on trade. The wars also dealt a serious blow to slavery and the pre-war system of forced labor, allowing labor markets to emerge.

Markets were slow to develop: grain markets in agricultural regions of Korea appeared less integrated than those in comparable parts of China and Japan. Population and acreage, however, recovered quickly from the adverse impact of the wars. Population growth came to a halt around 1800, and a century of demographic stagnation followed due to a higher level of mortality. During the nineteenth century, living standards appeared to deteriorate. Both wages and rents fell, tax receipts shrank, and budget deficits expanded, forcing the government to resort to debasement. Peasant rebellions occurred more frequently, and poor peasants left Korea for northern China.

Given that both acreage and population remained stable during the nineteenth century, the worsening living standards imply that aggregate output contracted, because land and labor were being used in an ever more inefficient way. The decline in efficiency appeared to have much to do with the disintegrating system of water control, which included flood control and irrigation.

The water control problem had institutional roots, as in Qing China. Population growth caused rapid deforestation, as peasants were able to readily obtain farmlands by burning off forests, where property rights usually remained ill-defined. (This contrasts with Tokugawa Japan, where conflicts and litigation following competitive exploitation of forests led to forest regulation.) While the deforestation wrought havoc on reservoirs by increasing the incidence and intensity of flooding, private individuals had little incentive to repair the damage, as they expected others to free-ride on the benefits of their efforts. Keeping the system of water control in good condition required public initiatives, which the dynastic government could not undertake. During the nineteenth century, powerful landowning families took turns controlling minor or ailing kings, reducing the state to an instrument serving private interests. Failing to take measures to maintain irrigation, provincial officials accelerated its decay by taking bribes in return for conniving at the practice of farming on the rich soil alongside reservoirs. Peasants responded to the decaying irrigation by developing new rice seed varieties, which could better resist droughts but yielded less. They also tried to counter the increasingly unstable water supply by building waterways linking farmlands with rivers, which frequently met opposition from people farming further downstream. Not only did provincial administrators fail to settle the water disputes, but some of them became central causes of clashes. In 1894 peasants protested against a local administrator’s attempts to generate private income by collecting fees for the use of waterways which the peasants themselves had built. The uprising quickly developed into a nationwide peasant rebellion, which the crumbling government could suppress only by calling in military forces from China and Japan. An unforeseen consequence of the rebellion was the Sino-Japanese War, fought on Korean soil, where Japan defeated China, tipping the balance of power in Korea critically in Japan’s favor.

The water control problem affected primarily rice farming productivity: during the nineteenth century paddy land prices (as measured by the amount of rice) fell, while dry farm prices (as measured by the amount of dry farm products) rose. Peasants and landlords converted paddy lands into dry farms during the nineteenth century, and there occurred an exodus of workers out of agriculture into handicrafts and commerce. Despite this proto-industrialization, late dynastic Korea remained less urbanized than Qing China, not to mention Tokugawa Japan. Seasonal fluctuations in rice prices in the main agricultural regions of Korea were far wider than those observed in Japan during the nineteenth century, implying a significantly higher interest rate, a lower level of capital per person, and therefore lower living standards for Korea. In the mid-nineteenth century paddy land productivity in Korea was about half of that in Japan.
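The inference from seasonal price swings to interest rates rests on a storage argument: whoever carries rice from harvest to the lean season must be compensated at roughly the market rate of interest, so the annualized seasonal price rise proxies the cost of capital. A minimal sketch with purely hypothetical numbers:

    # Seasonal price spread as an implicit annual interest rate.
    # Prices and storage period below are hypothetical, for illustration.
    def implied_annual_rate(harvest_price, lean_price, months_stored):
        gross = lean_price / harvest_price
        return gross ** (12 / months_stored) - 1

    # a wide spread (30% over 8 months) vs. a narrow one (10%)
    print(f"{implied_annual_rate(100, 130, 8):.0%}")   # ~48% a year
    print(f"{implied_annual_rate(100, 110, 8):.0%}")   # ~15% a year

Wider spreads of the kind observed in Korea thus point to dearer capital, and hence less capital per person, than in Japan.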

Colonial Transition to Modern Economic Growth

Less than two decades after having been opened by Commodore Perry, Japan first made its ambitions about Korea known by forcing the country open to trade in 1876. Defeating Russia in the war of 1904-5, Japan virtually annexed Korea, which was made official five years later. What replaced the feeble and predatory bureaucracy of the Chosôn dynasty was a developmental state. Drawing on the Meiji government’s experience, the colonial state introduced a set of expensive policy measures to modernize Korea. One important project was to improve infrastructure: railway lines were extended, and roads, harbors and communication networks were improved, which rapidly integrated goods and factor markets both nationally and internationally. Another project was a vigorous health campaign: the colonial government improved public hygiene, introduced modern medicine, and built hospitals, significantly accelerating the mortality decline set in motion around 1890, apparently by the introduction of the smallpox vaccination. The mortality transition resulted in a population expanding 1.4% per year during the colonial period. The third project was to revamp education. As modern teaching institutions quickly replaced traditional schools teaching Chinese classics, the primary school enrollment ratio rose from 1 percent in 1910 to 47 percent in 1943. Finally, the cadastral survey (1910-18) modernized and legalized property rights to land, which boosted not only the efficiency of land use, but also tax revenue from landowners. These modernization efforts generated sizable public deficits, which the colonial government financed partly by floating bonds in Japan and partly by unilateral transfers from the Japanese government.

The colonial government implemented industrial policy as well. The Rice Production Development Program (1920-1933), a policy response to the Rice Riots in Japan in 1918, was aimed at increasing rice supply within the Japanese empire. In colonial Korea, the program placed particular emphasis upon reversing the decay in water control. The colonial government provided subsidies for irrigation projects, and set up institutions to lower information, negotiation, and enforcement costs in building new waterways and reservoirs. Improved irrigation made it possible for peasants to grow high yielding rice seed varieties. Completion of a chemical fertilizer factory in 1927 increased the use of fertilizer, further boosting the yields from the new type of rice seeds. Rice prices fell rapidly in the late 1920s and early 1930s in the wake of the world agricultural depression, leading to the suspension of the program in 1933.

Despite the Rice Program, the structure of the colonial economy shifted steadily away from agriculture towards manufacturing from the beginning of colonial rule. From 1911 to 1940 the share of manufacturing in GDP increased from 6 percent to 28 percent, while the share of agriculture fell from 76 percent to 41 percent. Major causes of the structural change included the diffusion of modern manufacturing technology, the world agricultural depression shifting the terms of trade in favor of manufacturing, and Japan’s early recovery from the Great Depression, which generated an investment boom in the colony. Korea’s cheap labor and natural resources, together with the introduction of controls on output and investment in Japan to mitigate the impact of the Depression, also helped attract direct investment to the colony. Finally, having subjugated party politicians and pushed Japan into the Second World War with the invasion of China in 1937, the Japanese military began to develop the northern parts of the Korean peninsula as an industrial base producing munitions.

The institutional modernization, technological diffusion, and inflow of Japanese capital put an end to the Malthusian degeneration and pushed Korea onto the path of modern economic growth. Both rents and wages stopped falling and started to rise from the early twentieth century. As the population explosion made labor increasingly abundant vis-à-vis land, rents increased more rapidly than wages, suggesting that income distribution became less equal during the colonial period. Per capita output rose faster than one percent per year from 1911 to 1938.

Per capita grain consumption declined during the colonial period, providing grounds for the traditional criticism that Japanese colonialism exploited Korea. However, per capita real consumption increased, due to rising non-grain and non-food consumption, and Koreans were also getting better education and living longer. In the late 1920s, life expectancy at birth was 37 years, an estimate several years longer than in China and almost ten years shorter than in Japan. Life expectancy increased to 43 years by the end of the colonial period. Male mean stature was slightly above 160 centimeters at the end of the 1920s, not significantly different from Chinese or Japanese heights, and appears to have declined during the latter half of the colonial period.

South Korean Prosperity

With the end of the Second World War in 1945, two separate regimes emerged on the Korean peninsula to replace the colonial government. The U.S. military government took over the southern half, while communist Russia set up a Korean leadership in the northern half. De-colonization and political division meant a sudden disruption of trade both with Japan and within Korea, causing serious economic turmoil. Dealing with the post-colonial chaos with the help of economic aid, the U.S. military government privatized properties previously owned by the Japanese government and Japanese civilians. The first South Korean government, established in 1948, carried out a land reform, making land distribution more egalitarian. Then the Korean War broke out in 1950, killing one and a half million people and destroying about a quarter of the capital stock during its three-year duration.

After the war, South Korean policymakers set about stimulating economic growth by promoting indigenous industrial firms, following the example of many other post-World War II developing countries. The government selected firms in targeted industries and gave them privileges to buy foreign currencies and to borrow funds from banks at preferential rates. It also erected tariff barriers and imposed a prohibition on imports of manufactured goods, hoping that the protection would give domestic firms a chance to improve productivity through learning-by-doing and importing advanced technologies. Under the policy, known as import-substitution industrialization (ISI), however, entrepreneurs seemed more interested in maximizing and perpetuating favors by bribing bureaucrats and politicians. This behavior, dubbed directly unproductive profit-seeking (DUP), caused efficiency to falter and living standards to stagnate, providing the background to the collapse of the First Republic in April 1960.

The military coup led by General Park Chung Hee overthrew the short-lived Second Republic in May 1961 and shifted strategy toward stimulating growth through export promotion (EP hereafter), although ISI was not altogether abandoned. Under EP, policymakers gave various types of favors — low-interest loans being the most important — to exporting firms according to their export performance. As the qualification for the special treatment was quantifiable and objective, the room for DUP became significantly smaller. Another advantage of EP over ISI was that it accelerated productivity advances by placing firms under the discipline of export markets and by widening contact with the developed world: efficiency growth was significantly faster in export industries than in the rest of the economy. In the decade following the shift to EP, per capita output doubled, and South Korea became an industrialized country: from 1960/62 to 1973/75 the share of agriculture in GDP fell from 45 percent to 25 percent, while the share of manufacturing rose from 9 percent to 27 percent. One important factor contributing to this achievement was that the authoritarian government enjoyed relative independence from, and avoided capture by, special interests.
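The pace implied by a doubling of per capita output in roughly a decade can be checked with the standard compound-growth formula (illustrative arithmetic added here, not part of the source):

\[
(1+g)^{10} = 2 \quad\Longrightarrow\quad g = 2^{1/10} - 1 \approx 0.072,
\]

that is, about 7 percent per year, consistent with the long-run growth rate cited below for the quarter century after the policy shift.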

The withdrawal of U.S. troops from Vietnam in the early 1970s and the subsequent communist conquest of the region alarmed the South Korean leadership, which had been coping with the threat of North Korea with the help of the U.S. military presence. Park Chung Hee’s reaction was to reduce reliance on U.S. armed support by expanding the capability to produce munitions, which required a return to ISI to build heavy and chemical industries (HCI). The government intervened heavily in the financial markets, directing banks to provide low-interest loans to chaebols — conglomerates of businesses owned by a single family — selected for the task of developing the different sectors of HCI. While it did expand the capital-intensive industries more rapidly than the rest of the economy, the HCI drive generated multiple symptoms of distortion, including rapidly slowing growth, worsening inflation, and an accumulation of non-performing loans.

Again ISI ended with a regime shift, triggered by Park Chung Hee’s assassination in 1979. In the 1980s, the succeeding leadership made systematic attempts to sort out the unwelcome legacy of the HCI drive by deregulating the trade and financial sectors. In the 1990s, liberalization of the capital account followed, causing a rapid accumulation of short-term external debt. This, together with a highly leveraged corporate sector and a banking sector destabilized by financial repression, provided the background to the contagion of the financial crisis from Southeast Asia in 1997. The crisis provided strong momentum for corporate and financial sector reform.

In the quarter century following the policy shift in the early 1960s, South Korean per capita output grew at an unusually rapid rate of 7 percent per year, a growth performance paralleled only by Taiwan and two city-states, Hong Kong and Singapore. The share of South Koreans enjoying the benefits of growth increased more rapidly from the end of the 1970s, when the rise in the Gini coefficient (which measures the inequality of income distribution) underway since the colonial period was reversed. The growth was attributable far more to increased use of productive inputs — physical capital in particular — than to productivity advances. The rapid capital accumulation was driven by an increasingly high savings rate due to a falling dependency ratio, a lagged outcome of rapidly falling mortality during the colonial period. The high growth was also aided by the accumulation of human capital, which began with the introduction of modern education under Japanese rule. Finally, the South Korean developmental state, as symbolized by Park Chung Hee, a former officer of the Japanese Imperial Army serving in wartime Manchuria, was closely modeled on the colonial system of government. In short, South Korea grew on the shoulders of the colonial achievement, rather than emerging out of the ashes left by the Korean War, as is sometimes asserted.

North Korean Starvation

Nor did the North Korean economy emerge out of a void. The founders of the regime took over the command system set up by the Japanese rulers to invade China. They also benefited from the colonial industrialization concentrated in the north, which had raised the standard of living in the north above that in the south by the end of colonial rule. While this economic advantage made the North Korean leadership confident enough to invade the South in 1950, the lead could not be sustained: North Korea started to lag behind the fast-growing South from the late 1960s, and then suffered a tragic decline in living standards in the 1990s.

After the conclusion of the Korean War, the North Korean power elites adopted a strategy of driving growth through forced saving, which quickly ran into difficulties for several reasons. First, managers and workers in collective farms and state enterprises had little incentive to improve productivity to counter the falling marginal productivity of capital. Second, the country’s self-imposed isolation made it difficult to benefit from the advanced technologies of the developed world through trade and foreign investment. Finally, the despotic and militaristic rule diverted resources to unproductive purposes and disturbed the consistency of planning.

The economic stalemate forced the ruling elites to experiment with the introduction of material incentives and independent accounting in state enterprises. However, they could not push the institutional reform far enough, for fear that it might destabilize their totalitarian rule. Efforts were also made to attract foreign capital, but these too ended in failure. Having spent the funds lent by western banks in the early 1970s largely for military purposes, North Korea defaulted on the loans. Laws introduced in the 1980s to draw foreign direct investment had little effect.

The collapse of centrally planned economies in the late 1980s virtually ended energy and capital goods imports at subsidized prices, dealing a serious blow to the wobbly regime. Desperate efforts to resolve chronic food shortages by expanding acreage through deforestation made the country vulnerable to climatic shocks in the 1990s. The end result was a disastrous subsistence crisis, to which the militarist regime responded by extorting concessions from the rest of the world through brinkmanship diplomacy.

Further Reading

Amsden, Alice. Asia’s Next Giant: South Korea and Late Industrialization. Oxford: Oxford University Press, 1989.

Ban, Sung Hwan. “Agricultural Growth in Korea.” In Agricultural Growth in Japan, Taiwan, Korea, and the Philippines, edited by Yujiro Hayami, Vernon W. Ruttan, and Herman M. Southworth, 96-116. Honolulu: University Press of Hawaii, 1979.

Cha, Myung Soo. “Imperial Policy or World Price Shocks? Explaining Interwar Korean Consumption Trend.” Journal of Economic History 58, no. 3 (1998): 731-754.

Cha, Myung Soo. “The Colonial Origins of Korea’s Market Economy.” In Asia-Pacific Dynamism, 1550-2000, edited by A.J.H. Latham and H. Kawakatsu, 86-103. London: Routledge, 2000.

Cha, Myung Soo. “Facts and Myths about Korea’s Economic Past.” Forthcoming in Australian Economic History Review 44 (2004).

Cole, David C. and Yung Chul Park. Financial Development in Korea, 1945-1978. Cambridge: Harvard University Press, 1983.

Dollar, David and Kenneth Sokoloff. “Patterns of Productivity Growth in South Korean Manufacturing Industries, 1963-1979.” Journal of Development Economics 33, no. 2 (1990): 309-27.

Eckert, Carter J. Offspring of Empire: The Koch’ang Kims and the Colonial Origins of Korean Capitalism, 1876-1945. Seattle: University of Washington Press, 1991.

Gill, Insong. “Stature, Consumption, and the Standard of Living in Colonial Korea.” In The Biological Standard of Living in Comparative Perspective, edited by John Komlos and Joerg Baten, 122-138. Stuttgart: Franz Steiner Verlag, 1998.

Gragert, Edwin H. Landownership under Colonial Rule: Korea’s Japanese Experience, 1900-1935. Honolulu: University Press of Hawaii, 1994.

Haggard, Stephan. The Political Economy of the Asian Financial Crisis. Washington: Institute of International Economics, 2000.

Haggard, Stephan, D. Kang and C. Moon. “Japanese Colonialism and Korean Development: A Critique.” World Development 25 (1997): 867-81.

Haggard, Stephan, Byung-kook Kim and Chung-in Moon. “The Transition to Export-led Growth in South Korea: 1954-1966.” Journal of Asian Studies 50, no. 4 (1991): 850-73.

Kang, Kenneth H. “Why Did Koreans Save So Little and Why Do They Now Save So Much?” International Economic Journal 8 (1994): 99-111.

Kang, Kenneth H., and Vijaya Ramachandran. “Economic Transformation in Korea: Rapid Growth without an Agricultural Revolution?” Economic Development and Cultural Change 47, no. 4 (1999): 783-801.

Kim, Kwang Suk and Michael Roemer. Growth and Structural Transformation. Cambridge, MA: Harvard University Press, 1979.

Kimura, Mitsuhiko. “From Fascism to Communism: Continuity and Development of Collectivist Economic Policy in North Korea.” Economic History Review 52, no.1 (1999): 69-86.

Kimura, Mitsuhiko. “Standards of Living in Colonial Korea: Did the Masses Become Worse Off or Better Off under Japanese Rule?” Journal of Economic History 53, no. 3 (1993): 629-652.

Kohli, Atul. “Where Do High Growth Political Economies Come From? The Japanese Lineage of Korea’s ‘Developmental State’.” World Development 22, no. 9 (1994): 1269-93.

Krueger, Anne. The Developmental Role of the Foreign Sector and Aid. Cambridge: Harvard University Press, 1982.

Kwon, Tai Hwan. Demography of Korea: Population Change and Its Components, 1925-66. Seoul: Seoul National University Press, 1977.

Noland, Marcus. Avoiding the Apocalypse: The Future of the Two Koreas. Washington: Institute for International Economics, 2000.

Palais, James B. Politics and Policy in Traditional Korea. Cambridge: Harvard University Press, 1975.

Stern, Joseph J., Ji-hong Kim, Dwight H. Perkins and Jung-ho Yoo, editors. Industrialization and the State: The Korean Heavy and Chemical Industry Drive. Cambridge: Harvard University Press, 1995.

Woo, Jung-en. Race to the Swift: State and Finance in Korean Industrialization. New York: Columbia University Press, 1991.

Young, Alwyn. “The Tyranny of Numbers: Confronting the Statistical Realities of the East Asian Growth Experience.” Quarterly Journal of Economics 110, no. 3 (1995): 641-80.

Citation: Cha, Myung. “The Economic History of Korea”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/the-economic-history-of-korea/

A Brief Economic History of Modern Israel

Nadav Halevi, Hebrew University

The Pre-state Background

The history of modern Israel begins in the 1880s, when the first Zionist immigrants came to Palestine, then under Ottoman rule, to join the small existing Jewish community, establishing agricultural settlements and some industry, restoring Hebrew as the spoken national language, and creating new economic and social institutions. The ravages of World War I reduced the Jewish population by a third, to 56,000, about what it had been at the beginning of the century.

As a result of the war, Palestine came under the control of Great Britain, whose Balfour Declaration had called for a Jewish National Home in Palestine. Britain’s control was formalized in 1920, when it was given the Mandate for Palestine by the League of Nations. During the Mandatory period, which lasted until May 1948, the social, political and economic structure for the future state of Israel was developed. Though the government of Palestine had a single economic policy, the Jewish and Arab economies developed separately, with relatively little connection.

Two factors were instrumental in fostering rapid economic growth of the Jewish sector: immigration and capital inflows. The Jewish population increased mainly through immigration; by the end of 1947 it had reached 630,000, about 35 percent of the total population. Immigrants came in waves, particularly large ones in the mid-1920s and mid-1930s. They consisted of ideological Zionists and refugees, economic and political, from Central and Eastern Europe. Capital inflows included public funds, collected by Zionist institutions, but were for the most part private funds. National product grew rapidly during periods of large immigration, but both waves of mass immigration were followed by recessions, periods of adjustment and consolidation.

In the period from 1922 to 1947 real net domestic product (NDP) of the Jewish sector grew at an average annual rate of 13.2 percent, and in 1947 accounted for 54 percent of the NDP of the Jewish and Arab economies together. NDP per capita in the Jewish sector grew at an annual rate of 4.8 percent; by the end of the period it was 8.5 times larger than in 1922, and 2.5 times larger than in the Arab sector (Metzer, 1998). Though agricultural development – an ideological objective – was substantial, this sector never accounted for more than 15 percent of the total net domestic product of the Jewish economy. Manufacturing grew slowly for most of the period, but very rapidly during World War II, when Palestine was cut off from foreign competition and became a major provider to the British armed forces in the Middle East. By the end of the period, manufacturing accounted for a quarter of NDP. Housing construction, though a smaller component of NDP, was the most volatile sector and contributed to sharp business cycle movements. A salient feature of the Jewish economy during the Mandatory period, which carried over into later periods, was the dominant size of the services sector – more than half of total NDP. This included a relatively modern educational and health sector, efficient financial and business sectors, and semi-governmental Jewish institutions, which later were ready to take on governmental duties.

The Formative Years: 1948-1965

The state of Israel came into being in mid-May 1948, in the midst of a war with its Arab neighbors. The immediate economic problems were formidable: to finance and wage a war, to take in as many immigrants as possible (first the refugees kept in camps in Europe and on Cyprus), to provide basic commodities to the old and new population, and to create a government bureaucracy to cope with all these challenges. The creation of a government went relatively smoothly, as the semi-governmental Jewish institutions which had developed during the Mandatory period now became government departments.

Cease-fire agreements were signed during 1949. By the end of that year a total of 340,000 immigrants had arrived, and by the end of 1951 an additional 345,000 (the latter including immigrants from Arab countries), thus doubling the Jewish population. Immediate needs were met by a strict austerity program and inflationary government finance, with inflation repressed by price controls and the rationing of basic commodities. However, the problems of providing housing and employment for the new population were solved only gradually. A New Economic Policy was introduced in early 1952. It consisted of exchange rate devaluation, the gradual relaxation of price controls and rationing, and the curbing of monetary expansion, primarily through budgetary restraint. Active encouragement of immigration was curtailed, to await the absorption of the earlier mass immigration.

From 1950 until 1965, Israel achieved a high rate of growth: real GNP (gross national product) grew at an average annual rate of over 11 percent, and per capita GNP by more than 6 percent. What made this possible? Israel was fortunate in receiving large sums of capital inflows: U.S. aid in the form of unilateral transfers and loans, German reparations and restitutions to individuals, sales of State of Israel Bonds abroad, and unilateral transfers to public institutions, mainly the Jewish Agency, which retained responsibility for immigration absorption and agricultural settlement. Thus, Israel had resources available for domestic use – for public and private consumption and investment – about 25 percent more than its own GNP. This made possible a massive investment program, mainly financed through a special government budget. Both the enormous scale of the needs and the socialist philosophy of the main political party in the government coalitions led to extreme government intervention in the economy.
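The arithmetic behind this can be stated loosely with the standard national-accounting identity (added here as an illustration, not part of the source):

\[
C + I + G = Y + F,
\]

where domestic absorption (consumption \(C\), investment \(I\), and government spending \(G\)) equals national product \(Y\) plus the net inflow of foreign resources \(F\); the figure in the text corresponds to \(F \approx 0.25\,Y\).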

Government budgets and strong protectionist measures to foster import substitution enabled the development of new industries, chief among them textiles. Subsidies were also given to promote exports, in addition to the traditional exports of citrus products and cut diamonds.

During the four decades from the mid-1960s until the present, Israel’s economy developed and changed, as did economic policy. A major factor affecting these developments has been the Arab-Israeli conflict. Its influence is discussed first, followed by brief descriptions of economic growth and fluctuations, and the evolution of economic policy.

The Arab-Israel Conflict

The most dramatic event of the 1960s was the Six Day War of 1967, at the end of which Israel controlled the West Bank (of the Jordan River) – the area of Palestine absorbed by Jordan since 1949 – and the Gaza Strip, controlled until then by Egypt.

As a consequence of the occupation of these territories, Israel became responsible for the economic as well as the political life in the areas taken over. The Arab sections of Jerusalem were united with the Jewish section. Jewish settlements were established in parts of the occupied territories. As hostilities intensified, special investments in infrastructure were made to protect Jewish settlers. The allocation of resources to Jewish settlements in the occupied territories has been a political and economic issue ever since.

The economies of Israel and the occupied territories became partially integrated. Trade in goods and services developed, with restrictions placed on exports to Israel of products deemed too competitive, and Palestinian workers were employed in Israel, particularly in construction and agriculture. At its peak, in 1996, Palestinian employment in Israel reached 115,000 to 120,000, about 40 percent of the Palestinian labor force, but never more than 6.5 percent of total Israeli employment. Thus, while employment in Israel was a major contributor to the Palestinian economy, its effects on the Israeli economy, except for construction and agriculture, were not large.

The Palestinian economy developed rapidly – real per capita national income grew at an annual rate of close to 20 percent in 1969-1972 and 5 percent in 1973-1980 – but fluctuated widely thereafter, and actually decreased in times of hostilities. Palestinian per capita income equaled 10.2 percent of Israeli per capita income in 1968, 22.8 percent in 1986, and declined to 9.7 percent in 1998 (Kleiman, 2003).

As part of the peace process between Israel and the Palestinians initiated in the 1990s, an economic agreement was signed between the parties in 1994, which in effect transformed what had been essentially a one-sided customs agreement (which gave Israel full freedom to export to the Territories but put restrictions on Palestinian exports to Israel) into a more equal customs union: the uniform external trade policy was actually Israel’s, but the Palestinians were given limited sovereignty regarding imports of certain commodities.

Arab uprisings (intifadas) in the late 1980s, and especially the more violent one beginning in 2000 and continuing into 2005, led to severe Israeli restrictions on interaction between the two economies, particularly the employment of Palestinians in Israel, and even to military reoccupation of some areas given over earlier to Palestinian control. These measures set the Palestinian economy back many years, wiping out much of the gains in income achieved since 1967 – per capita GNP in 2004 was $932, compared to about $1,500 in 1999. Palestinian workers in Israel were replaced by foreign workers.

An important economic implication of the Arab-Israeli conflict is that Israel must allocate a major part of its budget to defense. The size of the defense budget has varied, rising during wars and armed hostilities. The total defense burden (including expenses not in the budget) reached its maximum relative size during and after the Yom Kippur War of 1973, close to 30 percent of GNP in 1974-1978. In the 2000-2004 period, the defense budget alone reached about 22 to 25 percent of GDP. Israel has been fortunate in receiving generous amounts of U.S. aid. Until 1972 most of this came in the form of grants and loans, primarily for purchases of U.S. agricultural surpluses. But since 1973 U.S. aid has been closely connected to Israel’s defense needs. During 1973-1982 annual loans and grants averaged $1.9 billion, and covered some 60 percent of total defense imports. But even in more tranquil periods, the defense burden, exclusive of U.S. aid, has been much larger than is usual in industrial countries during peacetime.

Growth and Economic Fluctuations

The high rates of growth of income and income per capita which characterized Israel until 1973 were not achieved thereafter. GDP growth fluctuated, generally between 2 and 5 percent, reaching as high as 7.5 percent in 2000, but falling below zero in the recession years from 2001 to mid-2003. By the end of the twentieth century income per capita had reached about $20,000, similar to that of many of the more developed industrialized countries.

Economic fluctuations in Israel have usually been associated with waves of immigration: a large flow of immigrants that abruptly increases the population requires an adjustment period until it is absorbed productively, with the investments needed for its absorption in employment and housing stimulating economic activity. Immigration never again reached the relative size of the first years after statehood, but it regained importance with the loosening of restrictions on emigration from the Soviet Union. The total number of immigrants in 1972-1982 was 325,000, and after the collapse of the Soviet Union immigration totaled 1,050,000 in 1990-1999, mostly from the former Soviet Union. Unlike in the earlier period, these immigrants were gradually absorbed into productive employment (though often not in the same occupation as abroad) without resort to make-work projects. By the end of the century the population of Israel passed 6,300,000, with the Jewish population constituting 78 percent of the total. The immigrants from the former Soviet Union were equal to about one-fifth of the Jewish population, and were a significant and important addition of human capital to the labor force.

As the economy developed, the structure of output changed. Though the service sectors are still relatively large – trade and services contributing 46 percent of the business sector’s product – agriculture has declined in importance, and industry makes up over a quarter of the total. The structure of manufacturing has also changed: both in total production and in exports the share of traditional, low-tech industries has declined, with sophisticated, high-tech products, particularly electronics, achieving primary importance.

Fluctuations in output were marked by periods of inflation and periods of unemployment. After a change in exchange rate policy in the late 1970s (discussed below), an inflationary spiral was unleashed. Inflation accelerated in the early 1980s, reaching about 400 percent per year by the time a drastic stabilization policy was imposed in 1985. Exchange rate stabilization, budgetary and monetary restraint, and wage and price freezes sharply reduced the rate of inflation to less than 20 percent, and then to about 16 percent in the late 1980s. Very tight monetary policy from the late 1990s finally reduced inflation to zero by 2005. However, this policy, combined with external factors such as the bursting of the high-tech bubble, recession abroad, and domestic insecurity resulting from the intifada, led to unemployment levels above 10 percent at the beginning of the new century. The economic improvements since the latter half of 2003 have, as yet (February 2005), not significantly reduced the level of unemployment.

Policy Changes

The Israeli economy was initially subject to extensive government controls. Only gradually was the economy converted into a fairly free (though still not completely so) market economy. This process began in the 1960s. In response to a realization by policy makers that government intervention in the economy was excessive, and to the challenge posed by the creation in Europe of a customs union (which gradually progressed into the present European Union), Israel embarked upon a very gradual process of economic liberalization. This appeared first in foreign trade: quantitative restrictions on imports were replaced by tariff protection, which was slowly reduced, and both import-substitution and exports were encouraged by more realistic exchange rates rather than by protection and subsidies. Several partial trade agreements with the European Economic Community (EEC), starting in 1964, culminated in a free trade area agreement (FTA) in industrial goods in 1975, and an FTA agreement with the U.S. came into force in 1985.

By late 1977 a considerable degree of trade liberalization had taken place. In October of that year, Israel moved from a fixed exchange rate system to a floating rate system, and restrictions on capital movements were considerably liberalized. However, there followed a disastrous inflationary spiral which curbed the capital liberalization process. Capital flows were not completely liberalized until the beginning of the new century.

Throughout the 1980s and 1990s there were additional liberalization measures: in monetary policy, in domestic capital markets, and in various instruments of governmental interference in economic activity. The role of government in the economy was considerably decreased. On the other hand, some governmental economic functions were increased: a national health insurance system was introduced, though private health providers continued to provide health services within the national system. Social welfare payments, such as unemployment benefits, child allowances, old age pensions and minimum income support, were expanded continuously, until they formed a major budgetary expenditure. These transfer payments compensated, to a large extent, for the continuous growth of income inequality, which had moved Israel from among the developed countries with the least income inequality to those with the most. By 2003, 15 percent of the government’s budget went to health services, 15 percent to education, and an additional 20 percent to transfer payments through the National Insurance Agency.

Beginning in 2003, the Ministry of Finance embarked upon a major effort to decrease welfare payments, induce greater participation in the labor force, privatize enterprises still owned by government, and reduce both the relative size of the government deficit and the government sector itself. These activities are the result of an ideological acceptance by the present policy makers of the concept that a truly free market economy is needed to fit into and compete in the modern world of globalization.

An important economic institution is the Histadrut, a federation of labor unions. What made this institution unique was that, in addition to normal labor union functions, it encompassed agricultural and other cooperatives, major construction and industrial enterprises, and social welfare institutions, including the main health care provider. During the Mandatory period, and for many years thereafter, the Histadrut was an important factor in economic development and in influencing economic policy. During the 1990s, the Histadrut was divested of many of its non-union activities, and its influence in the economy has greatly declined. The major unions associated with it still have much say in wage and employment issues.

The Challenges Ahead

As it moves into the new century, the Israeli economy has proven to be prosperous, as it continuously introduces and applies economic innovation, and capable of dealing with economic fluctuations. However, it faces some serious challenges. Some of these are the same as those faced by most industrial economies: how to reconcile the switch from traditional activities that are no longer competitive to more sophisticated, skill-intensive products with the labor dislocation this involves and the income inequality it intensifies. Like other small economies, Israel has to see how it fits into the new global economy, marked by the two major markets of the EU and the U.S., and the emergence of China as a major economic factor.

Special issues relate to the relations of Israel with its Arab neighbors. First are the financial implications of continuous hostilities and military threats. Clearly, if peace can come to the region, resources can be transferred to more productive uses. Furthermore, foreign investment, so important for Israel’s future growth, is very responsive to political security. Other issues depend on the type of relations established: will there be the free movement of goods and workers between Israel and a Palestinian state? Will relatively free economic relations with other Arab countries lead to a greater integration of Israel in the immediate region, or, as is more likely, will Israel’s trade orientation continue to be directed mainly to the present major industrial countries? If the latter proves true, Israel will have to carefully maneuver between the two giants: the U.S. and the EU.

References and Recommended Reading

Ben-Bassat, Avi, editor. The Israeli Economy, 1985-1998: From Government Intervention to Market Economics. Cambridge, MA: MIT Press, 2002.

Ben-Porath, Yoram, editor. The Israeli Economy: Maturing through Crisis. Cambridge, MA: Harvard University Press, 1986.

Fischer, Stanley, Dani Rodrik and Elias Tuma, editors. The Economics of Middle East Peace. Cambridge, MA: MIT Press, 1993.

Halevi, Nadav and Ruth Klinov-Malul. The Economic Development of Israel. New York: Praeger, 1968.

Kleiman, Ephraim. “Palestinian Economic Viability and Vulnerability.” Paper presented at the UCLA Burkle Conference in Athens, August 2003. (Available at www.international.ucla.edu.)

Metz, Helen Chapin, editor. Israel: A Country Study. Washington: Library of Congress Country Studies, 1986.

Metzer, Jacob. The Divided Economy of Mandatory Palestine. Cambridge: Cambridge University Press, 1998.

Patinkin, Don. The Israel Economy: The First Decade. Jerusalem: Maurice Falk Institute for Economic Research in Israel, 1967.

Razin, Assaf and Efraim Sadka. The Economy of Modern Israel: Malaise and Promise. Chicago: University of Chicago Press, 1993.

World Bank. Developing the Occupied Territories: An Investment in Peace. Washington D.C.: The World Bank, September, 1993.

Citation: Halevi, Nadav. “A Brief Economic History of Modern Israel”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/a-brief-economic-history-of-modern-israel/

Islamic Economics: What It Is and How It Developed

M. Umer Chapra, Islamic Research and Training Institute

Islamic economics has been undergoing a revival over the last few decades. However, it is still at a preliminary stage of development. In contrast, conventional economics has become a well-developed and sophisticated discipline after going through a long and rigorous process of development over more than a century. Is a new discipline in economics needed? If so, what is Islamic economics, how does it differ from conventional economics, and what contributions has it made over the centuries? This article tries to answer these questions briefly.

It is universally recognized that resources are scarce compared with the claims on them. However, it is also recognized by practically all civilizations that the well-being of all human beings needs to be ensured. Given the scarcity of resources, the well-being of all may remain an unrealized dream if the scarce resources are not utilized efficiently and equitably. For this purpose, every society needs to develop an effective strategy, which is consciously or unconsciously conditioned by its worldview. If the worldview is flawed, the strategy may not be able to help the society actualize the well-being of all. Prevailing worldviews may be classified for the sake of ease into two broad theoretical constructs: (1) secular and materialist, and (2) spiritual and humanitarian.

The Role of the Worldview

Secular and materialist worldviews attach maximum importance to the material aspect of human well-being and tend generally to ignore the importance of the spiritual aspect. They often argue that maximum material well-being can be best realized if individuals are given unhindered freedom to pursue their self-interest and to maximize their want satisfaction in keeping with their own tastes and preferences.[1] In their extreme form they do not recognize any role for Divine guidance in human life and place full trust in the ability of human beings to chalk out a proper strategy with the help of their reason. In such a worldview there is little role for values or government intervention in the efficient and equitable allocation and distribution of resources. When asked about how social interest would be served when everyone has unlimited freedom to pursue his/her self-interest, the reply is that market forces will themselves ensure this because competition will keep self-interest under check.

In contrast with this, religious worldviews give attention to both the material and the spiritual aspects of human well-being. They do not necessarily reject the role of reason in human development; they do, however, recognize the limitations of reason and wish to complement it by revelation. Nor do they reject the need for individual freedom or the role that the serving of self-interest can play in human development. They, however, emphasize that both freedom and the pursuit of self-interest need to be toned down by moral values and good governance to ensure that everyone’s well-being is realized and that social harmony and family integrity are not hurt in the process of everyone serving his/her self-interest.

Material and Spiritual Needs

Even though none of the major worldviews prevailing around the world is totally materialist and hedonist, there are, nevertheless, significant differences among them in terms of the emphasis they place on material or spiritual goals and the role of moral values and government intervention in ordering human affairs. While material goals concentrate primarily on goods and services that contribute to physical comfort and well-being, spiritual goals include nearness to God, peace of mind, inner happiness, honesty, justice, mutual care and cooperation, family and social harmony, and the absence of crime and anomie. These may not be quantifiable, but are, nevertheless, crucial for realizing human well-being. Resources being limited, excessive emphasis on the material ingredients of well-being may lead to a neglect of spiritual ingredients. The greater the difference in emphasis, the greater may be the difference in the economic disciplines of these societies. Feyerabend (1993) frankly recognized this in the introduction to the Chinese edition of his thought-provoking book, Against Method, by stating that “First world science is only one science among many; by claiming to be more it ceases to be an instrument of research and turns into a (political) pressure group” (p. 3; parentheses in the original).

The Enlightenment Worldview and Conventional Economics

There is a great deal that is common between the worldviews of most major religions, particularly those of Judaism, Christianity and Islam. This is because, according to Islam, there is a continuity and similarity in the value systems of all Revealed religions to the extent to which the Message has not been lost or distorted over the ages. The Qur’an clearly states that: “Nothing has been said to you [Muhammad] that was not said to the Messengers before you” (Al-Qur’an, 41:43). If conventional economics had continued to develop in the image of the Judeo-Christian worldview, as it did before the Enlightenment Movement of the seventeenth and eighteenth centuries, there might not have been any significant difference between conventional and Islamic economics. After the Enlightenment Movement, however, all intellectual disciplines in Europe became influenced by its secular, value-neutral, materialist and social-Darwinist worldview, even though this influence did not succeed fully. Not all economists became materialist or social-Darwinist in their individual lives, and many of them continued to be attached to their religious worldviews. Koopmans (1969) has rightly observed that “scratch an economist and you will find a moralist underneath.” Therefore, while in theory conventional economics adopted the secular and value-neutral orientation of the Enlightenment worldview and failed to recognize the role of value judgments and good governance in the efficient and equitable allocation and distribution of resources, in practice this did not take place fully. The pre-Enlightenment tradition never disappeared completely (see Baeck, 1994, p. 11).

There is no doubt that, in spite of its secular and materialist worldview, the market system led to a long period of prosperity in the Western market-oriented economies. However, this unprecedented prosperity did not lead to the elimination of poverty or the fulfillment of everyone’s needs in conformity with the Judeo-Christian value system, even in the wealthiest countries. Inequalities of income and wealth have continued to persist, and there has also been a substantial degree of economic instability and unemployment, which have added to the miseries of the poor. This indicates that both efficiency and equity have remained elusive in spite of rapid development and a phenomenal rise in wealth.

Consequently there has been persistent criticism of economics by a number of well-meaning scholars, including Thomas Carlyle (Past and Present, 1843), John Ruskin (Unto this Last, 1862) and Charles Dickens (Hard Times, 1854-55) in England, and Henry George (Progress and Poverty, 1879) in America. They ridiculed the dominant doctrine of laissez-faire with its emphasis on self-interest. Thomas Carlyle called economics a “dismal science” and rejected the idea that free and uncontrolled private interests will work in harmony and further the public welfare (see Jay and Jay, 1986). Henry George condemned the resulting contrast between wealth and poverty and wrote: “So long as all the increased wealth which modern progress brings goes but to build great fortunes, to increase luxury and make sharper the contrast between the House of Have and the House of Want, progress is not real and cannot be permanent” (1955, p. 10).

In addition to failing to fulfill the basic needs of a large number of people and increasing inequalities of income and wealth, modern economic development has been associated with the disintegration of the family and a failure to bring peace of mind and inner happiness (Easterlin 2001, 1995 and 1974; Oswald, 1997; Blanchflower and Oswald, 2000; Diener and Oishi, 2000; and Kenny, 1999). Due to these and other problems, the laissez-faire approach lost ground, particularly after the Great Depression of the 1930s, as a result of the Keynesian revolution and the socialist onslaught. However, most observers have concluded that government intervention cannot by itself remove all socio-economic ills. It is also necessary to motivate individuals to do what is right and abstain from doing what is wrong. This is where the moral uplift of society can be helpful. Without it, more and more difficult and costly regulations are needed. The Nobel laureate Amartya Sen has, therefore, rightly argued that “the distancing of economics from ethics has impoverished welfare economics and also weakened the basis of a good deal of descriptive and predictive economics” and that economics “can be made more productive by paying greater and more explicit attention to ethical considerations that shaped human behaviour and judgment” (1987, pp. 78-79). Hausman and McPherson likewise conclude in their survey article, “Economics and Contemporary Moral Philosophy,” that “An economy that is engaged actively and self-critically with the moral aspects of its subject matter cannot help but be more interesting, more illuminating and, ultimately, more useful than the one that tries not to be” (1993, p. 723).

Islamic Economics – and How It Differs from Conventional Economics

While conventional economics is now in the process of returning to its pre-Enlightenment roots, Islamic economics never got entangled in a secular and materialist worldview. It is based on a religious worldview which strikes at the roots of secularism and value neutrality. To ensure the true well-being of all individuals, irrespective of their sex, age, race, religion or wealth, Islamic economics does not seek to abolish private property, as was done by communism, nor does it prevent individuals from serving their self-interest. It recognizes the role of the market in the efficient allocation of resources, but does not find competition to be sufficient to safeguard social interest. It tries to promote human brotherhood, socio-economic justice and the well-being of all through an integrated role of moral values, market mechanism, families, society, and ‘good governance.’ This is because of the great emphasis in Islam on human brotherhood and socio-economic justice.

The Integrated Role of the Market, Families, Society, and Government

The market is not the only institution where people interact in human society. They also interact in the family, the society and the government and their interaction in all these institutions is closely interrelated. There is no doubt that the serving of self-interest does help raise efficiency in the market place. However, if self-interest is overemphasized and there are no moral restraints on individual behavior, other institutions may not work effectively – families may disintegrate, the society may be uncaring, and the government may be corrupt, partisan, and self-centered. Mutual sacrifice is necessary for keeping the families glued together. Since the human being is the most important input of not only the market, but also of the family, the society and the government, and the family is the source of this input, nothing may work if families disintegrate and are unable to provide loving care to children. This is likely to happen if both the husband and wife try to serve just their own self-interest and are not attuned to the making of sacrifices that the proper care and upbringing of children demands. Lack of willingness to make such sacrifice can lead to a decline in the quality of the human input to all other institutions, including the market, the society and the government. It may also lead to a fall in fertility rates below the replacement level, making it difficult for society not only to sustain its development but also its social security system.

The Role of Moral Values

While conventional economics generally takes the behavior and tastes and preferences of individuals as given, Islamic economics does not do so. It places great emphasis on individual and social reform through moral uplift. This is the purpose for which all of God’s messengers, including Abraham, Moses, Jesus, and Muhammad, came to this world. Moral uplift aims at a change in human behavior, tastes and preferences and, thereby, complements the price mechanism in promoting general well-being. Before even entering the marketplace and being exposed to the price filter, consumers are expected to pass their claims through the moral filter. This helps filter out conspicuous consumption and all wasteful and unnecessary claims on resources. The price mechanism can then take over and reduce the claims on resources even further to lead to the market equilibrium. The two filters together can make it possible to have optimum economy in the use of resources, which is necessary to satisfy the material as well as spiritual needs of all human beings, to reduce the concentration of wealth in a few hands, and to raise savings, which are needed to promote greater investment and employment. Without complementing the market system with morally-based value judgments, we may end up perpetuating inequities in spite of our good intentions, through what Solo calls inaction, non-choice and drifting (Solo, 1981, p. 38).
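The two-filter argument can be written schematically (a stylized formalization added here, not part of the source). If \(D_0(p)\) denotes the initial claims on resources at price \(p\), the moral filter first removes wasteful claims, leaving \(D_1(p) \le D_0(p)\); the price mechanism then clears the market at the equilibrium price \(p^*\) satisfying

\[
D_1(p^*) = S(p^*),
\]

where \(S(p)\) is supply. Because \(D_1 \le D_0\), the equilibrium use of resources is no greater than it would be under the price filter alone.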

From the above discussion, one may easily notice the similarities and differences between the two disciplines. While the subject matter of both is the allocation and distribution of resources and both emphasize the fulfillment of material needs, Islamic economics places an equal emphasis on the fulfillment of spiritual needs. While both recognize the important role of the market mechanism in the allocation and distribution of resources, Islamic economics argues that the market may not by itself be able to fulfill even the material needs of all human beings. This is because it can promote excessive use of scarce resources by the rich at the expense of the poor if there is undue emphasis on the serving of self-interest. Sacrifice is involved in fulfilling our obligations towards others, and excessive emphasis on the serving of self-interest does not have the potential of motivating people to make the needed sacrifice. This, however, raises the crucial question of why a rational person would sacrifice his self-interest for the sake of others.

The Importance of the Hereafter

This is where the concepts of the innate goodness of human beings and of the Hereafter come in – concepts which conventional economics ignores but on which Islam and other major religions place a great deal of emphasis. Because of their innate goodness, human beings do not necessarily always try to serve their self-interest. They are also altruistic and are willing to make sacrifices for the well-being of others. In addition, the concept of the Hereafter does not confine self-interest to just this world. It rather extends it beyond this world to life after death. We may be able to serve our self-interest in this world by being selfish, dishonest, uncaring, and negligent of our obligations towards our families, other human beings, animals, and the environment. However, we cannot serve our self-interest in the Hereafter except by fulfilling all these obligations.

Thus, the serving of self-interest receives a long-run perspective in Islam and other religions by taking into account both this world and the next. This serves to provide a motivating mechanism for sacrifice for the well-being of others that conventional economics fails to provide. The innate goodness of human beings along with the long-run perspective given to self-interest has the potential of inducing a person to be not only efficient but also equitable and caring. Consequently, the three crucial concepts of conventional economics – rational economic man, positivism, and laissez-faire – were not able to gain intellectual blessing in their conventional economics sense from any of the outstanding scholars who represent the mainstream of Islamic thought.

Rational Economic Man

While there is hardly anyone opposed to the need for rationality in human behavior, there are differences of opinion in defining rationality (Sen, 1987, pp. 11-14). However, once rationality has been defined in terms of overall individual as well as social well-being, then rational behavior could only be that which helps us realize this goal. Conventional economics does not define rationality in this way. It equates rationality with the serving of self-interest through the maximization of wealth and want satisfaction. The drive of self-interest is considered to be the “moral equivalent of the force of gravity in nature” (Myers, 1983, p. 4). Within this framework society is conceptualized as a mere collection of individuals united through ties of self-interest.

The concept of ‘rational economic man’ in this social-Darwinist, utilitarian, and material sense of serving self-interest could not find a foothold in Islamic economics. ‘Rationality’ in Islamic economics is not confined to the serving of one’s self-interest in this world alone; it extends to the Hereafter through faithful compliance with moral values that help rein in self-interest so as to promote social interest. Al-Mawardi (d. 1058) considered it necessary, like all other Muslim scholars, to rein in individual tastes and preferences through moral values (1955, pp. 118-20). Ibn Khaldun (d. 1406) emphasized that moral orientation helps remove mutual rivalry and envy, strengthens social solidarity, and creates an inclination towards righteousness (n.d., p. 158).

Positivism

Similarly, positivism in the conventional economics sense of being “entirely neutral between ends” (Robbins, 1935, p. 240) or “independent of any particular ethical position or normative judgment” (Friedman, 1953) did not find a place in Muslim intellectual thinking. Since all resources at the disposal of human beings are a trust from God, and human beings are accountable before Him, there is no other option but to use them in keeping with the terms of trust. These terms are defined by beliefs and moral values. Human brotherhood, one of the central objectives of Islam, would be a meaningless jargon if it were not reinforced by justice in the allocation and distribution of resources.

Pareto Optimum

Without justice, it would be difficult to realize even development. Muslim scholars have emphasized this throughout history. Development economics has also started emphasizing its importance, more so in the last few decades.[2] Abu Yusuf (d. 798) argued that: “Rendering justice to those wronged and eradicating injustice raises tax revenue, accelerates development of the country, and brings blessings in addition to reward in the Hereafter” (1933/34, p. 111; see also pp. 3-17). Al-Mawardi argued that comprehensive justice “inculcates mutual love and affection, obedience to the law, development of the country, expansion of wealth, growth of progeny, and security of the sovereign” (1955, p. 27). Ibn Taymiyyah (d. 1328) emphasized that “justice towards everything and everyone is an imperative for everyone, and injustice is prohibited to everything and everyone. Injustice is absolutely not permissible irrespective of whether it is to a Muslim or a non-Muslim or even to an unjust person” (1961-63, Vol. 18, p. 166).

Justice and the well-being of all may be difficult to realize without a sacrifice on the part of the well-to-do. The concept of the Pareto optimum does not, therefore, fit into the paradigm of Islamic economics. This is because a Pareto optimum does not recognize any solution as optimum if it requires a sacrifice on the part of a few (the rich) for raising the well-being of the many (the poor). Such a position is in clear conflict with moral values, the raison d’être of which is the well-being of all. Hence, this concept did not arise in Islamic economics. In fact, Islam makes it a religious obligation for Muslims to make a sacrifice for the poor and the needy by paying Zakat at the rate of 2.5 percent of their net worth. This is in addition to the taxes that they pay to their governments, as in other countries.
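As a simple numerical illustration (the figures are hypothetical; only the 2.5 percent rate comes from the text): a Muslim with a zakatable net worth \(W\) owes

\[
Z = 0.025\,W,
\]

so a net worth of 200,000 in any currency unit implies an annual Zakat payment of 5,000.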

The Role of the State

Moral values may not be effective if they are not observed by all. They need to be enforced. It is the duty of the state to restrain all socially harmful behavior,[3] including injustice, fraud, cheating, transgression against other people’s person, honor and property, and the non-fulfillment of contracts and other obligations, through proper upbringing, incentives and deterrents, appropriate regulations, and an effective and impartial judiciary. The Qur’an can only provide norms. It cannot by itself enforce them. The state has to ensure this. That is why the Prophet Muhammad said: “God restrains through the sovereign more than what He restrains through the Qur’an” (cited by al-Mawardi, 1955, p. 121). This emphasis on the role of the state has been reflected in the writings of all leading Muslim scholars throughout history.[4] Al-Mawardi emphasized that an effective government (Sultan Qahir) is indispensable for preventing injustice and wrongdoing (1960, p. 5). Say’s Law could not, therefore, become a meaningful proposition in Islamic economics.

How far is the state expected to go in the fulfillment of its role? What is it that the state is expected to do? This has been spelled out by a number of scholars in the literature on what has come to be termed “Mirrors for Princes.”[5] None of them visualized regimentation or the owning and operating of a substantial part of the economy by the state. Several classical Muslim scholars, including al-Dimashqi (d. after 1175) and Ibn Khaldun, clearly expressed their disapproval of the state becoming directly involved in the economy (Al-Dimashqi, 1977, pp. 12 and 61; Ibn Khaldun, pp. 281-83). According to Ibn Khaldun, the state should not acquire the character of a monolithic or despotic state resorting to a high degree of regimentation (ibid., p. 188). It should not feel that, because it has authority, it can do anything it likes (ibid., p. 306). It should be welfare-oriented, moderate in its spending, respect the property rights of the people, and avoid onerous taxation (ibid., p. 296). This implies that what these scholars visualized as the role of government is what has now come to be generally referred to as ‘good governance’.

Some of the Contributions Made by Islamic Economics

The above discussion should not lead one to the impression that the two disciplines are entirely different. One reason is that the subject matter of both disciplines is the same: the allocation and distribution of scarce resources. Another is that conventional economists have never, in practice, been value neutral; they have made value judgments in conformity with their beliefs. As indicated earlier, even the paradigm of conventional economics has been changing: the role of good governance has now become well recognized, and the injection of a moral dimension has been emphasized by a number of prominent economists. Moreover, Islamic economists have benefited a great deal from the tools of analysis developed by neoclassical, Keynesian, social, humanistic and institutional economics as well as other social sciences, and will continue to do so in the future.

The Fallacy of the ‘Great Gap’ Theory

A number of economic concepts developed in Islamic economics long before they did in conventional economics. These cover a number of areas, including the interdisciplinary approach; property rights; division of labor and specialization; the importance of saving and investment for development; the roles that both demand and supply play in the determination of prices and the factors that influence demand and supply; the roles of money, exchange, and the market mechanism; the characteristics of money, counterfeiting, currency debasement, and Gresham’s law; the development of checks, letters of credit and banking; labor supply and population; the role of the state, justice, peace, and stability in development; and principles of taxation. It is not possible to provide comprehensive coverage of all the contributions Muslim scholars have made to economics. Only some of their contributions will be highlighted below, to dispel the notion of the “Great Gap” of “over 500 years” that exists in the history of conventional economic thought as a result of the incorrect conclusion by Joseph Schumpeter, in History of Economic Analysis (1954), that the intervening period between the Greeks and the Scholastics was sterile and unproductive.[6] This notion has become well embedded in the conventional economics literature, as may be seen from the reference to it even by the Nobel laureate Douglass North in his December 1993 Nobel lecture (1994, p. 365). Consequently, as Todd Lowry has rightly observed, “the character and sophistication of Arabian writings has been ignored” (see his ‘Foreword’ in Ghazanfar, 2003, p. xi).

The reality, however, is that the Muslim civilization, which benefited greatly from the Chinese, Indian, Sassanian and Byzantine civilizations, itself made rich contributions to intellectual activity, including socio-economic thought, during the ‘Great Gap’ period, and thereby played a part in kindling the flame of the European Enlightenment Movement. Even the Scholastics themselves were greatly influenced by the contributions made by Muslim scholars. The names of Ibn Sina (Avicenna, d. 1037), Ibn Rushd (Averroes, d. 1198) and Maimonides (d. 1204, a Jewish philosopher, scientist, and physician who flourished in Muslim Spain) appear on almost every page of the thirteenth-century summa (treatises written by scholastic philosophers) (Pifer, 1978, p. 356).

Multidisciplinary Approach for Development

One of the most important contributions of Islamic economics, in addition to the paradigm discussed above, was the adoption of a multidisciplinary, dynamic approach. Muslim scholars did not focus their attention primarily on economic variables. They considered overall human well-being to be the end product of interaction, over a long period of time, between a number of economic, moral, social, political, demographic and historical factors, in such a way that none of them is able to make an optimum contribution without the support of the others. Justice occupied a pivotal place in this whole framework because of its crucial importance in the Islamic worldview. There was an acute realization that justice is indispensable for development and that, in the absence of justice, there will be decline and disintegration.

The contributions made by different scholars over the centuries seem to have reached their consummation in Ibn Khaldun’s Muqaddimah, which literally means ‘introduction,’ and constitutes the first volume of a seven-volume history, briefly called Kitab al-‘Ibar or the Book of Lessons [of History].[7] Ibn Khaldun lived at a time (1332-1406) when the Muslim civilization was in the process of decline. He wished to see a reversal of this tide, and, as a social scientist, he was well aware that such a reversal could not be envisaged without first drawing lessons (‘ibar) from history to determine the factors that had led the Muslim civilization to bloom out of humble beginnings and to decline thereafter. He was, therefore, not interested in knowing just what happened. He wanted to know the how and why of what happened. He wanted to introduce a cause-and-effect relationship into the discussion of historical phenomena. The Muqaddimah is the result of this desire. It tries to derive the principles that govern the rise and fall of a ruling dynasty, state (dawlah) or civilization (‘umran).

Since the centre of Ibn Khaldun’s analysis is the human being, he sees the rise and fall of dynasties or civilizations to be closely dependent on the well-being or misery of the people. The well-being of the people is in turn not dependent just on economic variables, as conventional economics has emphasized until recently, but also on the closely interrelated role of moral, psychological, social, economic, political, demographic and historical factors. One of these factors acts as the trigger mechanism. The others may, or may not, react in the same way. If the others do not react in the same direction, then the decay in one sector may not spread to the others, and either the decaying sector may be reformed or the decline of the civilization may be much slower. If, however, the other sectors react in the same direction as the trigger mechanism, the decay will gain momentum through an interrelated chain reaction such that it becomes difficult over time to distinguish the cause from the effect. He thus seems to have had a clear vision of how all the different factors operate in an interrelated and dynamic manner over a long period to promote the development or decline of a society.

He thus did not adopt the neoclassical economist’s simplification of confining himself primarily to short-term static analysis of markets alone by assuming, unrealistically, that all other factors remain constant. Even in the short run, everything may be in a state of flux through a chain reaction to the various changes constantly taking place in human society, even though these may be so small as to be imperceptible. Therefore, even though economists may adopt the ceteris paribus assumption for ease of analysis, Ibn Khaldun’s multidisciplinary dynamics can be more helpful in formulating socio-economic policies that help improve the overall performance of a society. Neoclassical economics is unable to do this because, as North has rightly asked, “How can one prescribe policies when one does not understand how economies develop?” He, therefore, considers neoclassical economics to be “an inappropriate tool to analyze and prescribe policies that will induce development” (North, 1994, p. 549).

However, this is not all that Islamic economics has done. Muslim scholars, including Abu Yusuf (d. 798), al-Mawardi (d. 1058), Ibn Hazm (d. 1064), al-Sarakhsi (d. 1090), al-Tusi (d. 1093), al-Ghazali (d. 1111), al-Dimashqi (d. after 1175), Ibn Rushd (d. 1198), Ibn Taymiyyah (d. 1328), Ibn al-Ukhuwwah (d. 1329), Ibn al-Qayyim (d. 1350), al-Shatibi (d. 1388), Ibn Khaldun (d. 1406), al-Maqrizi (d. 1442), al-Dawwani (d. 1501), and Shah Waliyullah (d. 1762), made a number of valuable contributions to economic theory. Their insight into some economic concepts was so deep that a number of the theories they propounded could undoubtedly be considered the forerunners of some more sophisticated modern formulations of these theories.[8]

Division of Labor, Specialization, Trade, Exchange and Money and Banking

A number of scholars emphasized the necessity of the division of labor for economic development long before it received attention in conventional economics. For example, al-Sarakhsi (d. 1090) said: “the farmer needs the work of the weaver to get clothing for himself, and the weaver needs the work of the farmer to get his food and the cotton from which the cloth is made …, and thus everyone of them helps the other by his work…” (1978, Vol. 30, p. 264). Al-Dimashqi, writing about a century later, elaborated further by saying: “No individual can, because of the shortness of his life span, burden himself with all industries. If he does, he may not be able to master the skills of all of them from the first to the last. Industries are all interdependent. Construction needs the carpenter and the carpenter needs the ironsmith and the ironsmith needs the miner, and all these industries need premises. People are, therefore, necessitated by force of circumstances to be clustered in cities to help each other in fulfilling their mutual needs” (1977, pp. 20-21).

Ibn Khaldun ruled out the feasibility or desirability of self-sufficiency, and emphasized the need for division of labor and specialization by indicating that: “It is well-known and well-established that individual human beings are not by themselves capable of satisfying all their individual economic needs. They must all cooperate for this purpose. The needs that can be satisfied by a group of them through mutual cooperation are many times greater than what individuals are capable of satisfying by themselves” (p. 360). In this respect he was perhaps the forerunner of the theory of comparative advantage, the credit for which is generally given in conventional economics to David Ricardo who formulated it in 1817.

The discussion of the division of labor and specialization, in turn, led to an emphasis on trade and exchange, on well-regulated and properly functioning markets through their effective regulation and supervision (hisbah), and on money as a stable and reliable measure, medium of exchange and store of value. However, because of the bimetallism (gold and silver coins circulating together) which then prevailed, and the different supply and demand conditions that the two metals faced, the rate of exchange between the two full-bodied coins fluctuated. This was further complicated by the debasement of currencies by governments in later centuries to tide over their fiscal problems. According to Ibn Taymiyyah (d. 1328) (1961-63, Vol. 29, p. 649), and later al-Maqrizi (d. 1442) and al-Asadi (d. 1450), this had the effect of bad coins driving good coins out of circulation (al-Misri, 1981, pp. 54 and 66), a phenomenon which was recognized and referred to in the West in the sixteenth century as Gresham’s Law. Since the debasement of currencies is a flagrant violation of the Islamic emphasis on honesty and integrity in all measures of value, fraudulent practices in the issue of coins in the fourteenth century and afterwards elicited a great deal of literature on monetary theory and policy. The Muslims should, therefore, according to Baeck, be considered forerunners and critical incubators of the debasement literature of the fourteenth and fifteenth centuries (Baeck, 1994, p. 114).
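
The mechanism these scholars described can be restated in modern notation (a formalization of ours, not theirs). Suppose two coins circulate at the same face value $F$ but contain metal whose market values differ, $v_g > v_b$. A rational holder spends the bad coin and withholds the good one whenever

$$v_g > F \ge v_b,$$

since the good coin is then worth more as bullion (hoarded, melted, or exported) than in circulation, so only the bad coin continues to change hands. This is the logic behind the observation, attributed above to Ibn Taymiyyah, al-Maqrizi and al-Asadi, that debased coins drive full-bodied ones out of circulation.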

To finance their expanding domestic and international trade, the Muslim world also developed a financial system which was able to mobilize the “entire reservoir of monetary resources of the mediaeval Islamic world” for financing agriculture, crafts, manufacturing and long-distance trade (Udovitch, 1970, pp. 180 and 261). Financiers were known as sarrafs. By the time of the Abbasid Caliph al-Muqtadir (908-32), they had started performing most of the basic functions of modern banks (Fischel, 1992). They had their own markets, something akin to Wall Street in New York and Lombard Street in London, and fulfilled all the banking needs of commerce, agriculture and industry (Duri, 1986, p. 898). This promoted the use of checks (sakk) and letters of credit (hawala). The English word check comes from the Arabic term sakk.

Demand and Supply

A number of Muslim scholars seem to have clearly understood the role of both demand and supply in the determination of prices. For example, Ibn Taymiyyah (d. 1328) wrote: “The rise or fall of prices may not necessarily be due to injustice by some people. They may also be due to the shortage of output or the import of commodities in demand. If the demand for a commodity increases and the supply of what is demanded declines, the price rises. If, however, the demand falls and the supply increases, the price falls” (1961-63, Vol. 8, p. 523).

Al-Jahiz (d. 869), writing nearly five centuries before Ibn Taymiyyah, observed that “Anything available in the market is cheap because of its availability [supply] and dear by its lack of availability if there is need [demand] for it” (1983, p. 13), and that “anything the supply of which increases, becomes cheap except intelligence, which becomes dearer when it increases” (ibid., p. 13).

Ibn Khaldun went even further, emphasizing that an increase in demand or a fall in supply leads to a rise in prices, while a decline in demand or a rise in supply contributes to a fall in prices (pp. 393 and 396). He believed that while the continuation of ‘excessively low’ prices hurts the craftsmen and traders and drives them out of the market, the continuation of ‘excessively high’ prices hurts the consumers. ‘Moderate’ prices between the two extremes were, therefore, desirable, because they would not only allow the traders a socially acceptable level of return but also lead to the clearance of the markets by promoting sales and thereby generating a given turnover and prosperity (ibid., p. 398). Nevertheless, low prices were desirable for necessities because they provide relief to the poor, who constitute the majority of the population (ibid., p. 398). If one were to use modern terminology, one could say that Ibn Khaldun found a stable price level with a relatively low cost of living to be preferable, from the point of view of both growth and equity, to bouts of inflation and deflation. The former hurts equity while the latter reduces incentive and efficiency. Low prices for necessities should not, however, be attained through the fixing of prices by the state; this destroys the incentive for production (ibid., pp. 279-83).

The factors which determined demand were, according to Ibn Khaldun, income, the price level, the size of the population, government spending, the habits and customs of the people, and the general development and prosperity of the society (ibid., pp. 398-404). The factors which determined supply were demand (ibid., pp. 400 and 403), order and stability (pp. 306-08), the relative rate of profit (ibid., pp. 395 and 398), the extent of human effort (p. 381), the size of the labor force as well as its knowledge and skill (pp. 363 and 399-400), peace and security (pp. 394-95 and 396), and the technical background and development of the whole society (pp. 399-403). All these constituted important elements of his theory of production. If the price falls and leads to a loss, capital is eroded and the incentive to supply declines, leading to a recession. Trade and crafts consequently suffer as well (p. 398).

This is highly significant because the role of both demand and supply in the determination of value was not well understood in the West until the late nineteenth and early twentieth centuries. Pre-classical economists like William Petty (1623-87), Richard Cantillon (1680-1734), James Steuart (1712-80), and even Adam Smith (1723-90), the founder of the Classical School, generally stressed only the role of the cost of production, and particularly of labor, in the determination of value. The first use in English writings of the notions of both demand and supply was perhaps in 1767 (Thweatt, 1983). Nevertheless, it was not until the second decade of the nineteenth century that the role of both demand and supply in the determination of market prices began to be fully appreciated (Groenewegen, 1973). While Ibn Khaldun was thus far ahead of conventional economists, he probably did not have any notion of demand and supply schedules, elasticities of demand and supply, and, most important of all, equilibrium price, which plays a crucial role in modern economic discussions.

Public Finance

Taxation

Adam Smith (d. 1790) is famous, among other things, for his canons of taxation (equality, certainty, convenience of payment, and economy in collection) (see Smith, 1937, pp. 777-79). The development of these canons can, however, be traced in the writings of pre-Islamic as well as Muslim scholars long before him, particularly the insistence that the tax system be just and not oppressive. The Caliphs Umar (d. 644), Ali (d. 661) and Umar ibn Abd al-Aziz (d. 720) stressed that taxes should be collected with justice and leniency and should not be beyond the ability of the people to bear. Tax collectors should not under any circumstances deprive the people of the necessities of life (Abu Yusuf, 1933/34, pp. 14, 16 and 86). Abu Yusuf, adviser to the Caliph Harun al-Rashid (786-809), argued that a just tax system would lead not only to an increase in revenues but also to the development of the country (Abu Yusuf, 1933/34, p. 111; see also pp. 14, 16, 60, 85, 105-19 and 125). Al-Mawardi also argued that the tax system should do justice to both the taxpayer and the treasury: “taking more was iniquitous with respect to the rights of the people, while taking less was unfair with respect to the right of the public treasury” (1960, p. 209; see also pp. 142-56 and 215).[9]

Ibn Khaldun stressed the principles of taxation very forcefully in the Muqaddimah. He quoted from a letter written by Tahir ibn al-Husayn, Caliph al-Ma’mun’s general, advising his son, ‘Abdullah ibn Tahir, Governor of al-Raqqah (Syria): “So distribute [taxes] among all people making them general, not exempting anyone because of his nobility or wealth and not exempting even your own officials or courtiers or followers. And do not levy on anyone a tax which is beyond his capacity to pay” (p. 308).[10] In this particular passage, he stressed the principles of equity and neutrality, while in other places he also stressed the principles of convenience and productivity.

The effect of taxation on incentives and productivity was so clearly visualized by Ibn Khaldun that he seems to have grasped the concept of optimum taxation. He anticipated the gist of the Laffer Curve, nearly six hundred years before Arthur Laffer, in two full chapters of the Muqaddimah.[11] At the end of the first chapter, he concluded that “the most important factor making for business prosperity is to lighten as much as possible the burden of taxation on businessmen, in order to encourage enterprise by ensuring greater profits [after taxes]” (p. 280). This he explained by stating that “when taxes and imposts are light, the people have the incentive to be more active. Business therefore expands, bringing greater satisfaction to the people because of low taxes …, and tax revenues also rise, being the sum total of all assessments” (p. 279). He went on to say that as time passes the needs of the state increase and rates of taxation rise to increase the yield. If this rise is gradual people become accustomed to it, but ultimately there is an adverse impact on incentives. Business activity is discouraged and declines, and so does the yield of taxation (pp. 280-81). A prosperous economy at the beginning of the dynasty thus yields higher tax revenue from lower tax rates, while a depressed economy at the end of the dynasty yields smaller tax revenue from higher rates (p. 279). He explained the reasons for this by stating: “Know that acting unjustly with respect to people’s wealth, reduces their will to earn and acquire wealth … and if the will to earn goes, they stop working. The greater the oppression, the greater the effect on their effort to earn … and, if people abstain from earning and stop working, the markets will stagnate and the condition of people will worsen” (pp. 286-87); tax revenues will also decline (p. 362). He, therefore, advocated justice in taxation (p. 308).
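
Ibn Khaldun’s argument is entirely verbal, but its gist can be captured in a simple modern formalization (ours, not his). Let tax revenue be

$$R(t) = t \cdot B(t),$$

where $t$ is the tax rate and $B(t)$ is the tax base, which shrinks as heavier taxation blunts the incentive to produce, so that $B'(t) < 0$. Then

$$\frac{dR}{dt} = B(t) + t\,B'(t),$$

which is positive at low rates, where the base is large and little discouraged, and negative at high rates, where the incentive effect dominates. Revenue therefore first rises and then falls as the rate increases, the hump shape now associated with the Laffer curve.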

Public Expenditure

For Ibn Khaldun the state was also an important factor of production. By its spending it promotes production and by its taxation it discourages production (pp. 279-81). Since the government constitutes the greatest market for goods and services, and is a major source of all development (pp. 286 and 403), a decrease in its spending leads to not only a slackening of business activity and a decline in profits but also a decline in tax revenue (p. 286). The more the government spends, the better it may be for the economy (p. 286).[12] Higher spending enables the government to do the things that are needed to support the population and to ensure law and order and political stability (pp. 306 and 308). Without order and political stability, the producers have no incentive to produce. He stated that “the only reason [for the accelerated development of cities] is that the government is near them and pours its money into them, like the water [of a river] that makes green everything around it, and irrigates the soil adjacent to it, while in the distance everything remains dry” (p. 369).

Ibn Khaldun also analyzed the effect of government expenditure on the economy and is, in this respect, a forerunner of Keynes. He stated: “A decrease in government spending leads to a decline in tax revenues. The reason for this is that the state represents the greatest market for the world and the source of civilization. If the ruler hoards tax revenues, or if these are lost, and he does not spend them as they should be, the amount available with his courtiers and supporters would decrease, as would also the amount that reaches through them to their employees and dependents [the multiplier effect]. Their total spending would, therefore, decline. Since they constitute a significant part of the population and their spending constitutes a substantial part of the market, business will slacken and the profits of businessmen will decline, leading also to a decline in tax revenues … Wealth tends to circulate between the people and the ruler, from him to them and from them to him. Therefore, if the ruler withholds it from spending, the people would become deprived of it” (p. 286).
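
The bracketed gloss in the passage above points to what modern macroeconomics formalizes as the spending multiplier. In today’s notation (not Ibn Khaldun’s), if each recipient spends a fraction $c$ of what he receives, an initial outlay $\Delta G$ by the ruler generates total spending of

$$\Delta G \left(1 + c + c^{2} + \cdots \right) = \frac{\Delta G}{1-c},$$

so a reduction in the ruler’s spending contracts the incomes of courtiers, employees and dependents by a multiple of the original cut, which is exactly the chain Ibn Khaldun traces.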

Economic Mismanagement and Famine

Ibn Khaldun established the causal link between bad government and high grain prices by indicating that in the later stage of the dynasty, when public administration becomes corrupt and inefficient, and resorts to coercion and oppressive taxation, incentive is adversely affected and the farmers refrain from cultivating the land. Grain production and reserves fail to keep pace with the rising population. The absence of reserves causes supply shortages in the event of a famine and leads to price escalation (pp. 301-02).

Al-Maqrizi (d. 1442), who, as muhtasib (market supervisor), had intimate knowledge of the economic conditions of his times, applied Ibn Khaldun’s analysis in his book (1956) to determine the reasons for the economic crisis of Egypt during the period 1403-06. He found that the political administration had become very weak and corrupt during the Circassian period. Public officials were appointed on the basis of bribery rather than ability.[13] To recover the bribes, officials resorted to oppressive taxation. The incentive to work and produce was adversely affected and output declined. The crisis was further intensified by the debasement of the currency through the excessive issue of copper fulus, or fiat money, to cover state budgetary deficits. All of these factors combined with the famine to produce severe inflation, misery for the poor, and the impoverishment of the country.

Hence, al-Maqrizi laid bare the socio-political determinants of the prevailing ‘system crisis’ by taking into account a number of variables like corruption, bad government policies, and weak administration. All of these together played a role in worsening the impact of the famine, which could otherwise have been handled effectively without a significant adverse impact on the population. This is clearly a forerunner of Sen’s entitlement theory, which holds the economic mismanagement of illegitimate governments to be responsible for the poor people’s misery during famines and other natural disasters (Sen, 1981). What al-Maqrizi wrote of the Circassian Mamluks was also true of the later Ottoman period (See Meyer, 1989).

Stages of Development

Ibn Khaldun described the stages of development through which every society passes, moving from the primitive Bedouin stage to the rise of villages, towns and urban centers with an effective government, the development of agriculture, industry and the sciences, and the impact of values and environment on this development (Muqaddimah, pp. 35, 41-44, 87-95, 120-48, 172-76). Waliyullah[14] (d. 1762) later analyzed the development of society through four different stages, from primitive existence to a well-developed community with khilafah (a morally-based welfare state), which tries to ensure the spiritual as well as material well-being of the people. Like Ibn Khaldun, he considered political authority to be indispensable for human well-being. To be able to serve as a source of well-being for all, and not of burden and decay, it must have the characteristics of the khilafah. He applied this analysis in various writings to the conditions prevailing during his lifetime. He found that the luxurious lifestyle of the rulers, along with their exhausting military campaigns, the increasing corruption and inefficiency of the civil service, and huge stipends to a vast retinue of unproductive courtiers, led them to impose oppressive taxes on farmers, traders and craftsmen, who constituted the main productive section of the population. These people had, therefore, lost interest in their occupations, output had slowed down, state financial resources had declined, and the country had become impoverished (Waliyullah, 1992, Vol. I, pp. 119-52). Thus, in step with Ibn Khaldun and other Muslim scholars, al-Maqrizi and Waliyullah combined moral, political, social and economic factors to explain the economic phenomena of their times and the rise and fall of their societies.

Muslim Intellectual Decline

Unfortunately, the rich theoretical contribution made by Muslim scholars up until Ibn Khaldun did not get fertilized and irrigated by later scholars to lead to the development of Islamic economics, except by a few isolated scholars like al-Maqrizi, al-Dawwani (d. 1501), and Waliyullah. Their contributions were, however, only in specific areas and did not lead to a further development of Ibn Khaldun’s model of socio-economic and political dynamics. Islamic economics did not, therefore, develop as a separate intellectual discipline in conformity with the Islamic paradigm along the theoretical foundations and method laid down by Ibn Khaldun and his predecessors. It continued to remain an integral part of the social and moral philosophy of Islam.

One may ask here why the rich intellectual contributions made by Muslim scholars did not continue after Ibn Khaldun. The reason may be that, as indicated earlier, Ibn Khaldun lived at a time when the political and socio-economic decline of the Muslim world was underway.[15] He was perhaps “the sole point of light in his quarter of the firmament” (Toynbee, 1935, Vol. 3, p. 321). According to Ibn Khaldun himself, sciences progress only when a society is itself progressing (p. 434). This theory is clearly upheld by Muslim history. Sciences progressed rapidly in the Muslim world for four centuries from the middle of the eighth century to the middle of the twelfth century and continued to do so at a substantially decelerated pace for at least two more centuries, tapering off gradually thereafter (Sarton 1927, Vol. 1 and Book 1 of Vol. 2). Once in a while there did appear a brilliant star on an otherwise unexciting firmament. Economics was no exception. It also continued to be in a state of limbo in the Muslim world. No worthwhile contributions were made after Ibn Khaldun.

The trigger mechanism for this decline was, according to Ibn Khaldun, the failure of political authority to provide good governance. Political illegitimacy, which started after the end of khilafah in 661, gradually led to increased corruption and the use of state resources for private benefit, to the neglect of education and other nation-building functions of the state. This gradually triggered the decline of all other sectors of the society and economy.[16]

The rapidly rising Western civilization took over the torch of knowledge from the declining Muslim world and has kept it burning with even greater brightness. All sciences, including the social sciences, have made phenomenal progress. Conventional economics became a separate academic discipline after the publication of Alfred Marshall’s great treatise, Principles of Economics, in 1890 (Schumpeter, 1954, p. 21),[17] and has continued to develop since then at a remarkable speed. With such a great achievement to its credit, there is no psychological need to allow the ‘Great Gap’ thesis to persist. It would help promote better understanding of Muslim civilization in the West if textbooks started giving credit to Muslim scholars. They were “the torchbearers of ancient learning during the medieval period” and “it was from them that the Renaissance was sparked and the Enlightenment kindled” (Todd Lowry in his ‘Foreword’ in Ghazanfar, 2003, p. xi). Watt has been frank enough to admit that “the influence of Islam on Western Christendom is greater than is usually realized” and that “an important task for Western Europeans, as we move into the era of the one world, is … to acknowledge fully our debt to the Arab and Islamic world” (Watt, 1972, p. 84).

Conventional economics, however, took a wrong turn after the Enlightenment Movement by stripping itself of the moral basis of society emphasized by Aristotelian and Judeo-Christian philosophies. This deprived it of the role that moral values and good governance can play in helping society raise both efficiency and equity in the allocation and distribution of the scarce resources needed for promoting the well-being of all. However, this has been changing. The role of good governance has already been recognized, and that of moral values is gradually penetrating the economics orthodoxy. Islamic economics is also reviving now, after the independence of Muslim countries from foreign domination. It is likely that the two disciplines will converge and become one after a period of time. This will be in keeping with the teachings of the Qur’an, which clearly states that mankind was created as one but became divided as a result of their differences and transgression against each other (10:19, 2:213 and 3:19). This reunification [globalization, as it is now called], if reinforced by justice and mutual care, should help promote peaceful coexistence and enable mankind to realize the well-being of all, a goal whose realization we all anxiously await.

References

Abu Yusuf, Ya ‘qub ibn Ibrahim. Kitab al-Kharaj. Cairo: al-Matab‘ah al-Salafiyyah, second edition, 1933/34. (This book has been translated into English by A. Ben Shemesh. Taxation in Islam. Leiden: E. J. Brill, 1969.)

Allouche, Adel. Mamluk Economics: A Study and Translation of Al-Maqrizi’s Ighathah. Salt Lake City: University of Utah Press, 1994.

Baeck, Louis. The Mediterranean Tradition in Economic Thought. London: Routledge, 1994.

Blanchflower, David, and Andrew Oswald. “Well-being over Time in Britain and USA.” NBER, Working Paper No. 7487, 2000.

Blaug, Mark. Economic Theory in Retrospect. Cambridge: Cambridge University Press, 1985.

Boulakia, Jean David C. “Ibn Khaldun: A Fourteenth-Century Economist.” Journal of Political Economy 79, no. 5 (1971): 1105-18.

Chapra, M. Umer. The Future of Economics: An Islamic Perspective. Leicester, UK: The Islamic Foundation, 2000.

Cline, William R. Potential Effects of Income Redistribution on Economic Growth. New York: Praeger, 1973.

DeSmogyi, Joseph N. “Economic Theory in Classical Arabic Literature.” Studies in Islam (Delhi) (1965): 1-6.

Diener, E., and Shigehiro Oishi. “Money and Happiness: Income and Subjective Well-being.” In Culture and Subjective Well-being, edited by E. Diener and E. Suh. Cambridge, MA: MIT Press, 2000.

Dimashqi, Abu al-Fadl Ja‘far ibn ‘Ali al-. Al-Isharah ila Mahasin al-Tijarah, Al-Bushra al-Shurbaji, editor. Cairo: Maktabah al-Kulliyat al-Azhar, 1977.

Duri, A.A. “Baghdad.” The Encyclopedia of Islam, 894-99. Leiden: Brill, 1986.

Easterlin, Richard. “Does Economic Growth Improve the Human Lot? Some Empirical Evidence.” In Nations and Households in Economic Growth: Essays in Honor of Moses Abramowitz, edited by Paul David and Melvin Reder. New York: Academic Press, 1974.

Easterlin, Richard. “Will Raising the Income of All Increase the Happiness of All?” Journal of Economic Behavior and Organization 27, no. 1 (1995): 35-48.

Easterlin, Richard. “Income and Happiness: Towards a Unified Theory.” Economic Journal 111, no. 473 (2001).

Essid, M. Yassine. A Critique of the Origins of Islamic Economic Thought. Leiden: Brill, 1995.

Feyerabend, Paul. Against Method: Outline of an Anarchistic Theory of Knowledge. London: Verso, third edition, 1993.

Fischel, W.J. “Djahbadh.” In Encyclopedia of Islam, volume 2, 382-83. Leiden: Brill, 1992.

Friedman, Milton. Essays in Positive Economics. Chicago: University of Chicago Press, 1953.

George, Henry. Progress and Poverty. New York: Robert Schalkenbach Foundation, 1955.

Ghazanfar, S.M. Medieval Islamic Economic Thought: Filling the Great Gap in European Economics. London: Routledge Curzon, 2003.

Groenewegen, P.D. “A Note on the Origin of the Phrase, ‘Supply and Demand.’” Economic Journal 83, no. 330 (1973): 505-09.

Hausman, Daniel, and Michael McPherson. “Taking Ethics Seriously: Economics and Contemporary Moral Philosophy.” Journal of Economic Literature 31, no. 2 (1993): 671-731.

Ibn Khaldun. Muqaddimah. Cairo: Al-Maktabah al-Tijariyyah al-Kubra. See also its translation under Rosenthal (1967) and selections from it under Issawi (1950).

Ibn Taymiyyah. Majmu‘ Fatawa Shaykh al-Islam Ahmad Ibn Taymiyyah. ‘Abd al-Rahman al-‘Asimi, editor. Riyadh: Matabi‘ al-Riyad, 1961-63.

Islahi, A. Azim. History of Economic Thought in Islam. Aligarh, India: Department of Economics, Aligarh Muslim University, 1996.

Issawi, Charles. An Arab Philosophy of History: Selections from the Prolegomena of Ibn Khaldun of Tunis (1332-1406). London: John Murray, 1950.

Jahiz, Amr ibn Bahr al-. Kitab al-Tabassur bi al-Tijarah. Beirut: Dar al-Kitab al-Jadid, 1983.

Jay, Elizabeth, and Richard Jay. Critics of Capitalism: Victorian Reactions to Political Economy. Cambridge: Cambridge University Press, 1986.

Kenny, Charles. “Does Growth Cause Happiness, or Does Happiness Cause Growth?” Kyklos 52, no. 1 (1999): 3-26.

Koopmans, T.C. “Inter-temporal Distribution and ‘Optimal’ Aggregate Economic Growth.” In Fellner et al., Ten Economic Studies in the Tradition of Irving Fisher. New York: John Wiley and Sons, 1969.

Mahdi, Muhsin. Ibn Khaldun’s Philosophy of History. Chicago: University of Chicago Press, 1964.

Maqrizi, Taqi al-Din Ahmad ibn Ali al-. Ighathah al-Ummah bi Kashf al-Ghummah. Hims, Syria: Dar ibn al-Wahid, 1956. (See its English translation by Allouche, 1994).

Mawardi, Abu al-Hasan ‘Ali al-. Adab al-Dunya wa al-Din. Mustafa al Saqqa, editor. Cairo: Mustafa al-Babi al Halabi, 1955.

Mawardi, Abu al-Hasan ‘Ali al-. Al-Ahkam al-Sultaniyyah wa al-Wilayat al-Diniyyah. Cairo: Mustafa al-Babi al-Halabi, 1960. (The English translation of this book by Wafa Wahba has been published under the title The Ordinances of Government. Reading: Garnet, 1996.)

Mirakhor, Abbas. “The Muslim Scholars and the History of Economics: A Need for Consideration.” American Journal of Islamic Social Sciences (1987): 245-76.

Misri, Rafiq Yunus al-. Al-Islam wa al-Nuqud. Jeddah: King Abdulaziz University, 1981.

Meyer, M.S. “Economic Thought in the Ottoman Empire in the 14th – Early 19th Centuries.” Archiv Orientální 57, no. 4 (1989): 305-18.

Myers, Milton L. The Soul of Modern Economic Man: Ideas of Self-Interest, Thomas Hobbes to Adam Smith. Chicago: University of Chicago Press, 1983.

North, Douglass C. Structure and Change in Economic History. New York: W.W. Norton, 1981.

North, Douglass C. “Economic Performance through Time.” American Economic Review 84, no. 3 (1994): 359-68.

Oswald, A.J. “Happiness and Economic Performance,” Economic Journal 107, no. 445 (1997): 1815-31.

Pifer, Josef. “Scholasticism.” Encyclopedia Britannica 16 (1978): 352-57.

Robbins, Lionel. An Essay on the Nature and Significance of Economic Science. London: Macmillan, second edition, 1935.

Rosenthal, Franz. Ibn Khaldun: The Muqaddimah, An Introduction to History. Princeton, NJ: Princeton University Press, 1967.

Sarakhsi, Shams al-Din al-. Kitab al-Mabsut. Beirut: Dar al-Ma‘rifah, third edition, 1978 (particularly “Kitab al-Kasb” of al-Shaybani in Vol. 30: 245-97).

Sarton, George. Introduction to the History of Science. Washington, DC: Carnegie Institution (three volumes issued between 1927 and 1948; each of the second and third volumes has two parts).

Schumpeter, Joseph A. History of Economic Analysis. New York: Oxford University Press, 1954.

Sen, Amartya. Poverty and Famines: An Essay on Entitlement and Deprivation. Oxford: Clarendon Press, 1981.

Sen, Amartya. On Ethics and Economics. Oxford: Basil Blackwell, 1987.

Siddiqi, M. Nejatullah. “History of Islamic Economic Thought.” In Lectures on Islamic Economics, edited by Ausaf Ahmad and K.R. Awan, 69-90. Jeddah: IDB/IRTI, 1992.

Smith, Adam. An Inquiry into the Nature and Causes of the Wealth of Nations. New York: Modern Library, 1937.

Solo, Robert A. “Values and Judgments in the Discourse of the Sciences.” In Value Judgment and Income Distribution, edited by Robert A. Solo and Charles A. Anderson, 9-40. New York: Praeger, 1981.

Spengler, Joseph. “Economic Thought in Islam: Ibn Khaldun.” Comparative Studies in Society and History (1964): 268-306.

Thweatt, W.O. “Origins of the Terminology, Supply and Demand.” Scottish Journal of Political Economy (1983): 287-94.

Toynbee, Arnold J. A Study of History. London: Oxford University Press, second edition, 1935.

Udovitch, Abraham L. Partnership and Profit in Medieval Islam. Princeton, NJ: Princeton University Press, 1970.

Waliyullah, Shah. Hujjatullah al-Balighah. M. Sharif Sukkar, editor. Beirut: Dar Ihya al-Ulum, second edition, two volumes, 1992. (An English translation of this book by Marcia K. Hermansen was published by Brill, Leiden, 1996.)

Watt, W. Montgomery. The Influence of Islam on Medieval Europe. Edinburgh: Edinburgh University Press, 1972.


[1] This is the liberal version of the secular and materialist worldviews. There is also the totalitarian version which does not have faith in the individuals’ ability to manage private property in a way that would ensure social well-being. Hence its prescription is to curb individual freedom and to transfer all means of production and decision making to a totalitarian state. Since this form of the secular and materialist worldview failed to realize human well-being and has been overthrown practically everywhere, it is not discussed in this paper.

[2] The literature on economic development is full of assertions that improvement in income distribution is in direct conflict with economic growth. For a summary of these views, see Cline, 1973, Chapter 2. This has, however, changed and there is hardly any development economist now who argues that injustice can help promote development.

[3] North has used the term ‘nasty’ for all such behavior. See the chapter “Ideology and Free Rider,” in North, 1981.

[4] Some of these scholars include Abu Yusuf (d. 798), al-Mawardi (d. 1058), Abu Ya’la (d. 1065), Nizam al-Mulk (d. 1092), al-Ghazali (d. 1111), Ibn Taymiyyah (d. 1328), Ibn Khaldun (d. 1406), Shah Waliyullah (d. 1762), Jamaluddin al-Afghani (d. 1897), Muhammad ‘Abduh (d. 1905), Muhammad Iqbal (d. 1938), Hasan al-Banna (d. 1949), Sayyid Mawdudi (d. 1979), and Baqir al-Sadr (d. 1980).

[5] Some of these authors include al-Katib (d. 749), Ibn al-Muqaffa (d. 756), al-Nu‘man (d. 974), al-Mawardi (d. 1058), Kai Ka’us (d. 1082), Nizam al-Mulk (d. 1092), al-Ghazali (d. 1111), and al-Turtushi (d. 1127). (For details, see Essid, 1995, pp. 19-41.)

[6] For the fallacy of the Great Gap thesis, see Mirakhor (1987) and Ghazanfar (2003), particularly the “Foreword” by Todd Lowry and the “Introduction” by Ghazanfar.

[7] The full name of the book (given in the bibliography) may be freely translated as “The Book of Lessons and the Record of Cause and Effect in the History of Arabs, Persians and Berbers and their Powerful Contemporaries.” Several different editions of the Muqaddimah are now available in Arabic. The one I have used is that published in Cairo by al-Maktabah al-Tijariyyah al-Kubra without any indication of the year of publication. It has the advantage of showing all vowel marks, which makes it relatively easier to read. The Muqaddimah was translated into English in three volumes by Franz Rosenthal. Its first edition was published in 1958 and the second edition in 1967. Selections from the Muqaddimah by Charles Issawi were published in 1950 under the title, An Arab Philosophy of History: Selections from the Prolegomena of Ibn Khaldun of Tunis (1332-1406).

A considerable volume of literature is now available on Ibn Khaldun. This includes Spengler, 1964; Boulakia, 1971; Mirakhor, 1987; and Chapra, 2000.

[8] For some of these contributions, see Spengler, 1964; DeSmogyi, 1965; Mirakhor, 1987; Siddiqi, 1992; Essid, 1995; Islahi, 1996; Chapra, 2000; and Ghazanfar, 2003.

[9] For a more detailed discussion of taxation by various Muslim scholars, see the section on “Literature on Mirrors for Princes” in Essid, 1995, pp. 19-41.

[10] This letter is a significant development over the letter of Abu Yusuf to Caliph Harun al-Rashid (1933/34, pp. 3-17). It is more comprehensive and covers a larger number of topics.

[11] These are “On tax revenues and the reason for their being low and high” (pp. 279-80) and “Injustice ruins development” (pp. 286-410).

[12] Bear in mind that this was stated at a time when commodity money, which the government cannot simply ‘create,’ was in use, and fiduciary money had not yet become the rule of the day.

[13] This was during the Slave (Mamluk) Dynasty in Egypt, which is divided into two periods. The first period was that of the Bahri (or Turkish) Mamluks (1250-1382), who have generally received praise in the chronicles of their contemporaries. The second period was that of the Burji Mamluks (Circassians, 1382-1517). This period was beset by a series of severe economic crises. (For details see Allouche, 1994.)

[14] Shah Waliyullah al-Dihlawi, popularly known as Waliyullah, was born in 1703, four years before the death of the Mughal Emperor Aurangzeb (1658-1707). Aurangzeb’s rule, spanning a period of forty-nine years, was followed by a great deal of political instability – ten different changes of ruler during Waliyullah’s life-span of 59 years – leading ultimately to the weakening and decline of the Mughal Empire.

[15] For a brief account of the general decline and disintegration of the Muslim world during the fourteenth century, see Muhsin Mahdi, 1964, pp. 17-26.

[16] For a discussion of the causes of Muslim decline, see Chapra, 2000, pp. 173-252.

[17] According to Blaug (1985), economics became an academic discipline in the 1880s (p. 3).

Citation: Chapra, M. “Islamic Economics: What It Is and How It Developed”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/islamic-economics-what-it-is-and-how-it-developed/

Industrial Sickness Funds

John E. Murray, University of Toledo

Overview and Definition

Industrial sickness funds provided an early form of health insurance. They were financial institutions that extended cash payments, and in some cases medical benefits, to members who became unable to work due to sickness or injury. The term “industrial sickness funds” is a later construct describing funds organized by companies (also known as establishment funds) and by labor unions. These funds were widespread geographically in the United States; the 1890 Census of Insurance found 1,259 nationwide, with concentrations in the Northeast, Midwest, California, Texas, and Louisiana (U.S. Department of the Interior, 1895). By the turn of the twentieth century, some industrial sickness funds had accumulated considerable experience at managing sickness benefits. A few predated the Civil War. When the U.S. Commissioner of Labor surveyed a sample of sickness funds in 1908, the survey found 867 non-fraternal funds nationwide that provided temporary disability benefits (U.S. Commissioner of Labor, 1909). By the time of World War I, these funds, together with similar funds sponsored by fraternal societies, covered 30 to 40 percent of non-agricultural wage workers in the more industrialized states, or by extension, eight to nine million workers nationwide (Murray 2007a). Sickness funds were numerous, widespread, and in general carefully operated.

Industrial sickness funds were among the earliest providers of any type of health or medical benefits in the United States. In fact, their earliest product was called “workingman’s insurance” or “sickness insurance,” terms that described their clientele and purpose accurately. In the late Progressive Era, reformers promoted government insurance programs that would supplant the sickness funds. To sound more British, they used the term “health insurance,” and that is the phrase we still use for this kind of insurance contract (Numbers 1978). In the history of health insurance, the funds were contemporary with the benefit operations of fraternal societies (see fraternal sickness insurance) and led into the period of group health insurance (see health insurance, U.S.). They should be distinguished from the sickness benefits provided by some industrial insurance policies, which required weekly premium payments and paid a cash benefit upon death that was intended to cover burial expenses.

Many written histories of health insurance have missed the important role industrial sickness funds played in both relief of worker suffering and in the political process. Recent historians have tended to criticize, patronize, or ignore sickness funds. Lubove (1986) complained that they stood in the way of government insurance for all workers. Klein (2003) claimed that they were inefficient, without making explicit her standard for that judgment. Quadagno (2005) simply asserted that no one had thought of health insurance before the 1920s. Contemporary commentators such as I. M. Rubinow and Irving Fisher criticized workers who preferred “hopelessly inadequate” sickness fund insurance over government insurance as “infantile” (Derickson 2005). But these criticisms stemmed more from their authors’ ideological preconceptions than from close study of these institutions.

Rise and Operations of Industrial Sickness Funds

The period of their greatest extent and importance was from the 1880s to around 1940. The many state labor bureau surveys of individual workers, since digitized by the University of California’s Historical Labor Statistics Project and available for download at EH.net, often asked questions such as “do you belong to a benefit society,” meaning a fraternal sickness benefit fund or an industrial sickness fund. Of the surveys from the early 1890s that included this question, around a quarter of respondents indicated that they belonged to such societies. Later, closer to 1920, several states examined the extent of sickness insurance coverage in response to movements to create governmental health insurance for workers (Table 1). These later studies indicated that in the Northeast, Midwest, and California, between thirty and forty percent of non-agricultural workers were covered. Thus, remarkably, these societies had actually increased their market share over a three-decade period in which the labor force itself grew from 13 to 30 million workers (Murray 2007a). Industrial sickness funds were dynamic institutions, capable of dealing with an ever-expanding labor market.

Table 1: Sources of Insurance in Three States (thousands of workers)

Source/state            Illinois    Ohio    California
Fraternal society            250     200           291
Establishment fund           116     130            50
Union fund                   140      85            38
Other sick fund               12     n/a            35
Commercial insurance         140      85         2 (?)
Total                        660     500           416
Eligible labor force       1,850   1,500           995
Share insured                36%     33%           42%

Sources: Illinois (1919), Ohio (1919), California (1917), Lee et al. (1957).

Industrial sickness funds operated in a relatively simple fashion, but one that enabled them to mitigate the usual information problems that emerge in insurance markets. The process of joining a fund and making a claim typically worked as follows. A newly hired worker in a plant with such a fund explicitly applied to join, often after a probationary period during which fund managers could observe his baseline health and work habits. After admission to the fund, he paid an entrance fee followed by weekly dues. Since the average industrial worker in the 1910s earned about ten dollars a week, the entrance fee of one dollar was a half-day’s pay and the dues of ten cents made the cost to the worker around one percent of his pay packet.

A member who was unable to work contacted his fund, which then sent either a committee of fellow fund members, a physician, or both to check on the member-now-claimant. If they found him as sick as he had said he was, and in their judgment he was unable to work, after a one week waiting period he received around half his weekly pay. The waiting period was intended to let transient, less serious illnesses resolve so that the fund could support members with longer-term medical problems. To continue receiving the sick pay the claimant needed to allow periodic examinations by a physician or visiting committee. In rough terms, the average worker missed two percent of a work year, or about a week every year, a rate that varied by age and industry. The quarter of all workers who missed any work lost on average one month’s pay; thus a typical incapacitated worker received three and a half weeks of benefit per year. Comparing the cost of dues and expected value of benefits shows that the sickness funds were close to an actuarially fair bet: $5.00 in annual dues compared to (0.25 chance of falling ill) x (3.5 weeks of benefits) x ($5.00 weekly benefit), or about four and a half dollars in expected benefits. Thus, sickness funds appear to have been a reasonably fair deal for workers.
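
The expected-value comparison in the preceding paragraph can be written out explicitly, using the article’s round figures:

$$E[\text{benefits}] = \underbrace{0.25}_{\text{prob. of a claim}} \times \underbrace{3.5}_{\text{weeks of benefit}} \times \underbrace{\$5.00}_{\text{weekly benefit}} \approx \$4.40,$$

against annual dues of roughly \$5.00 (fifty-two weeks at ten cents is \$5.20), an implicit loading of some 15 to 20 percent over the actuarially fair premium, consistent with the article’s characterization of the funds as a reasonably fair deal.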

Establishment funds did not invent sickness benefits by any means. Rather, they systematized previous arrangements for supporting sick workers or the survivors of deceased workers. The old way was to pass the hat, a practice characterized by random assessments and arbitrary financial awards. Workers and employers both observed that contributors and beneficiaries alike detested passing the hat. Fellow workers complained about the surprise nature of the hat’s appearance, and beneficiaries faced humiliation on top of grief when the hat contained less money than had been collected for a more popular co-worker. Eventually rules replaced discretion, and benefits were paid according to a published schedule, either as a flat rate per diem or as a percentage of wages. The 1890 Census of Insurance reported that only a few funds extended benefits “at the discretion of the society,” and by the time of the 1908 Commissioner of Labor survey the practice had disappeared (Murray 2007).

Labor union funds began in the early nineteenth century. In the earliest union funds, members of craft unions pledged to complete jobs that ill brothers had contracted to perform but could not finish due to illness. Eventually cash benefit payments replaced the in-kind promises of labor, accompanied by cash premium payments into the union’s kitty. While criticized by many observers as unstable, labor union funds actually operated in transparent fashion. Even funds that offered unemployment benefits survived the depression of the mid-1890s by reducing benefit payments and enacting other conservative measures. Another criticism was that their benefits were too small in amount and too brief in duration, but according to the 1908 Commissioner of Labor survey, labor union funds and establishment funds offered similar levels of benefits. The cost-benefit ratio did favor establishment funds, but establishment fund membership ended with employment at a particular company, while union funds offered the substantial attraction of benefits that were portable from job to job.

The cash payment to sick workers created an incentive to take sick leave that workers without sickness insurance did not face; this is the moral hazard of sick pay. Further, workers who believed that they were more likely to make a sick claim had a stronger incentive to join a sickness fund than workers in relatively good health; this is called adverse selection. Early twentieth-century commentators on government sickness insurance disagreed on the extent and even the existence of moral hazard and adverse selection in sickness insurance. Later statistical studies found evidence for both in establishment funds. However, the funds themselves understood the potential financial damage each could wreak and strategized to mitigate such losses. The magnitude of the sick-pay moral hazard was small, and it affected primarily the worker’s tendency to make a claim in the first place. Many sickness funds limited their liability here by paying the physician who examined the claimant and who was thus responsible for approving extended sickness payments. Physicians appear to have paid attention to the wishes of those who paid them: in funds that paid the examining physician directly, claimants’ illnesses ended significantly earlier on average. By the same token, physicians who were paid by the worker tended to approve longer absences for that worker, a sign that physicians too responded to incentives.

Testing for adverse selection depends on whether membership in a company’s fund was the worker’s choice (that is, it was voluntary) or the company’s choice (that is, it was compulsory). In fact, among establishment funds in which membership was voluntary, claim rates per member were significantly higher than in mandatory-membership funds. This indicates that voluntary funds were especially attractive to sicker workers, which is the essence of adverse selection. To reduce the risks of adverse selection, funds imposed age limits to keep out older applicants, physical examinations to discourage the obviously ill, probationary periods to reveal chronic illness, and pre-existing condition clauses to avoid paying for such conditions (Murray 2007a). Sickness funds thus cleverly managed information problems typical of insurance markets.

Industrial Sickness Funds and Progressive Era Politics

Industrial sickness funds were the linchpin of efforts to promote and to oppose the Progressive campaign for state-level mandatory government sickness insurance. One consistent claim made by government insurance supporters was that workers could neither afford to pay for sickness insurance nor save in advance of financially damaging health problems. The leading advocacy organization, the American Association for Labor Legislation (AALL), reported in its magazine that “Savings of Wage-Earners Are Insufficient to Meet this Loss,” meaning lost income during sickness (American Association for Labor Legislation 1916a). However, worker surveys of savings, income, and insurance holdings revealed that workers rationally strategized according to their varying needs and abilities across the life-cycle. Young workers saved little and were less likely to belong to industrial sickness funds, but they were also less likely to miss work due to illness. Middle-aged workers, married with families to support, were relatively more likely to belong to a sickness fund. Older workers pursued a different strategy, saving more and relying on sickness funds less; among other factors, they wanted greater liquidity in their financial assets (Murray 2007a). Worker strategies reflected varying needs at varying stages of life, some (but not all) of which could be adequately addressed by membership in sickness funds.

Despite claims to the contrary by some historians, there was little popular support for government sickness insurance in early twentieth century America. Lobbying by the AALL led twelve states to charge investigatory commissions with determining the need for and feasibility of government sickness insurance (Moss 1996). The AALL offered a basic bill that could be adjusted to meet a state’s particular needs (American Association for Labor Legislation 1916b). Typically the Association prodded states to adopt a version of German insurance, which would keep the many small industrial sickness funds while forcing new members into some and creating new funds for other workers. However, these bills met consistent defeat in statehouses, earning only a fleeting victory in the New York Senate in 1919, which was followed by the bill’s death in an Assembly committee (Hoffman 2001). In the previous year a California referendum on a constitutional amendment that would allow the government to provide sickness insurance lost by nearly three to one (Costa 1996).

After the Progressive campaign exhausted itself, industrial sickness funds continued to grow through the 1920s, but the Great Depression exposed deep flaws in their structure. Many labor union funds, without a sponsoring firm to act as lender of last resort, dissolved. Establishment funds failed at a surprisingly low rate, but their survival was made possible by the tendency of firms to fire less healthy workers. Federal surveys in Minnesota found that ill health led to earlier job loss in the Depression, and comparisons of self-reported health in later surveys indicated that the unemployed were in fact in poorer health than the employed, a disparity that grew as the Depression deepened. Thus, industrial sickness funds paradoxically enjoyed falling claim rates (and thus reduced expenses) as the economy deteriorated (Murray 2007a).

Decline and Rebirth of Sickness Funds

At the same time, commercial insurers had been engaging in ever more productive research into the actuarial science of group health insurance. Eventually the insurers cut premium rates while offering benefits comparable to those available through sickness funds. As a result, the commercial insurers and Blue Cross/Blue Shield came to dominate the market for health benefits. A federal survey that covered the early 1930s found more firms with group health plans than with mutual benefit societies, but the benefit societies still insured more than twice as many workers (Sayers et al. 1937). By the later 1930s that gap in the number of firms had widened in favor of group health (Figure 1), and the numbers of workers insured were about equal. After the mid-1940s, industrial sickness funds were no longer a significant player in markets for health insurance (Murray 2007a).

Figure 1: Health Benefit Provision and Source
Source: Dobbin (1992) citing National Industrial Conference Board surveys.

More recently, a type of industrial sickness fund has begun to stage a comeback. Voluntary employee beneficiary associations (VEBAs) fall under a 1928 federal law that was created to govern industrial sickness funds. VEBAs are trusts set up to pay employee benefits without earning profits for the company. In late 2007, the Big Three automakers each contracted with the United Auto Workers (UAW) to operate a VEBA that would provide health insurance for UAW members. If the automakers and their workers succeed in establishing VEBAs that stand the test of time, they will have resurrected a once-successful financial institution previously thought relegated to the pre-World War II economy (Murray 2007b).

References

American Association for Labor Legislation. “Brief for Health Insurance.” American Labor Legislation Review 6 (1916a): 155–236.

American Association for Labor Legislation. “Tentative Draft of an Act.” American Labor Legislation Review 6 (1916b): 239–68.

California Social Insurance Commission. Report of the Social Insurance Commission of the State of California, January 25, 1917. Sacramento: California State Printing Office, 1917.

Costa, Dora L. “Demand for Private and State Provided Health Insurance in the 1910s: Evidence from California.” Photocopy, MIT, 1996.

Derickson, Alan. Health Security for All: Dreams of Universal Health Care in America. Baltimore: Johns Hopkins University Press, 2005.

Dobbin, Frank. “The Origins of Private Social Insurance: Public Policy and Fringe Benefits in America, 1920-1950.” American Journal of Sociology 97 (1992): 1416-50.

Hoffman, Beatrix. The Wages of Sickness: The Politics of Health Insurance in Progressive America. Chapel Hill: University of North Carolina Press, 2001.

Klein, Jennifer. For All These Rights: Business, Labor, and the Shaping of America’s Public-Private Welfare State. Princeton: Princeton University Press, 2003.

Lee, Everett S., Ann Ratner Miller, Carol P. Brainerd, and Richard A. Easterlin, under the direction of Simon Kuznets and Dorothy Swaine Thomas. Population Redistribution and Economic Growth, 1870-1950: Volume I, Methodological Considerations and Reference Tables. Philadelphia: Memoirs of the American Philosophical Society 45, 1957.

Lubove, Roy. The Struggle for Social Security, 1900-1930. Second edition. Pittsburgh: University of Pittsburgh Press, 1986.

Moss, David. Socializing Security: Progressive-Era Economists and the Origins of American Social Policy. Cambridge: Harvard University Press, 1996.

Murray, John E. Origins of American Health Insurance: A History of Industrial Sickness Funds. New Haven: Yale University Press, 2007a.

Murray, John E. “UAW Members Must Treat Health Care Money as Their Own.” Detroit Free Press, November 21, 2007b.

Ohio Health and Old Age Insurance Commission. Health, Health Insurance, Old Age Pensions: Report, Recommendations, Dissenting Opinions. Columbus: Heer, 1919.

Quadagno, Jill. One Nation, Uninsured: Why the U. S. Has No National Health Insurance. New York: Oxford University Press, 2005.

Sayers, R. R., Gertrud Kroeger, and W. M. Gafafer. “General Aspects and Functions of the Sick Benefit Organization.” Public Health Reports 52 (November 5, 1937): 1563–80.

State of Illinois. Report of the Health Insurance Commission of the State of Illinois, May 1, 1919. Springfield: State of Illinois, 1919.

U.S. Department of the Interior. Report on Insurance Business in the United States at the Eleventh Census: 1890; pt. 2, “Life Insurance.” Washington, DC: GPO, 1895.

U.S. Commissioner of Labor. Twenty-third Annual Report of the Commissioner of Labor, 1908: Workmen’s Insurance and Benefit Funds in the United States. Washington, DC: GPO, 1909.

Citation: Murray, John. “Industrial Sickness Funds, US”. EH.Net Encyclopedia, edited by Robert Whaples. June 5, 2008. URL http://eh.net/encyclopedia/industrial-sickness-funds/

Women Workers in the British Industrial Revolution

Joyce Burnette, Wabash College

Historians disagree about whether the British Industrial Revolution (1760-1830) was beneficial for women. Frederick Engels, writing in the late nineteenth century, thought that the Industrial Revolution increased women’s participation in labor outside the home, and claimed that this change was emancipating.1 More recent historians dispute the claim that women’s labor force participation rose, and focus more on the disadvantages women experienced during this time period.2 One thing is certain: the Industrial Revolution was a time of important changes in the way that women worked.

The Census

Unfortunately, the historical sources on women’s work are neither as complete nor as reliable as we would like. Aggregate information on the occupations of women is available only from the census, and while census data has the advantage of being comprehensive, it is not a very good measure of work done by women during the Industrial Revolution. For one thing, the census does not provide any information on individual occupations until 1841, which is after the period we wish to study.3 Even then the data on women’s occupations is questionable. For the 1841 census, the directions for enumerators stated that “The professions &c. of wives, or of sons or daughters living with and assisting their parents but not apprenticed or receiving wages, need not be inserted.” Clearly this census would not give us an accurate measure of female labor force participation. Table One illustrates the problem further; it shows the occupations of men and women recorded in the 1851 census, for 20 occupational categories. These numbers suggest that female labor force participation was low, and that 40 percent of occupied women worked in domestic service. However, economic historians have demonstrated that these numbers are misleading. First, many women who were actually employed were not listed as employed in the census. Women who appear in farm wage books have no recorded occupation in the census.4 At the same time, the census over-estimates participation by listing in the “domestic service” category women who were actually family members. In addition, the census exaggerates the extent to which women were concentrated in domestic service occupations because many women listed as “maids”, and included in the domestic servant category in the aggregate tables, were really agricultural workers.5

Table One

Occupational Distribution in the 1851 Census of Great Britain

Occupational Category Males (thousands) Females (thousands) Percent Female
Public Administration 64 3 4.5
Armed Forces 63 0 0.0
Professions 162 103 38.9
Domestic Services 193 1135 85.5
Commercial 91 0 0.0
Transportation & Communications 433 13 2.9
Agriculture 1788 229 11.4
Fishing 36 1 2.7
Mining 383 11 2.8
Metal Manufactures 536 36 6.3
Building & Construction 496 1 0.2
Wood & Furniture 152 8 5.0
Bricks, Cement, Pottery, Glass 75 15 16.7
Chemicals 42 4 8.7
Leather & Skins 55 5 8.3
Paper & Printing 62 16 20.5
Textiles 661 635 49.0
Clothing 418 491 54.0
Food, Drink, Lodging 348 53 13.2
Other 445 75 14.4
Total Occupied 6545 2832 30.2
Total Unoccupied 1060 5294 83.3

Source: B.R. Mitchell, Abstract of British Historical Statistics, Cambridge: Cambridge University Press, 1962, p. 60.

Domestic Service

Domestic work – cooking, cleaning, caring for children and the sick, fetching water, making and mending clothing – took up the bulk of women’s time during the Industrial Revolution period. Most of this work was unpaid. Some families were well-off enough that they could employ other women to do this work, as live-in servants, as charring women, or as service providers. Live-in servants were fairly common; even middle-class families had maids to help with the domestic chores. Charring women did housework on a daily basis. In London women were paid 2s.6d. per day for washing, which was more than three times the 8d. typically paid for agricultural labor in the country. However, a “day’s work” in washing could last 20 hours, more than twice as long as a day’s work in agriculture.6 Other women worked as laundresses, doing the washing in their own homes.
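The washing premium looks smaller on an hourly basis (a rough check; the conversion follows note 6, where 1s. = 12d.):

\[
\frac{2\text{s.}6\text{d.}}{20 \text{ hours}} = \frac{30\text{d.}}{20 \text{ hours}} = 1.5\text{d. per hour},
\]

against roughly 1d. per hour in agriculture, so the hourly premium was closer to 50 percent than to the nearly fourfold gap in the daily rates.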

Cottage Industry

Before factories appeared, most textile manufacture (including the main processes of spinning and weaving) was carried out under the “putting-out” system. Since raw materials were expensive, textile workers rarely had enough capital to be self-employed, but would take raw materials from a merchant, spin or weave the materials in their homes, and then return the finished product and receive a piece-rate wage. This system disappeared during the Industrial Revolution as new machinery requiring water or steam power appeared, and work moved from the home to the factory.

Before the Industrial Revolution, hand spinning had been a widespread female employment. It could take as many as ten spinners to provide one hand-loom weaver with yarn, and men did not spin, so most of the workers in the textile industry were women. The new textile machines of the Industrial Revolution changed that. Wages for hand-spinning fell, and many rural women who had previously spun found themselves unemployed. In a few locations, new cottage industries such as straw-plaiting and lace-making grew and took the place of spinning, but in other locations women remained unemployed.

Another important cottage industry was the pillow-lace industry, so called because women wove the lace on pins stuck in a pillow. In the late-eighteenth century women in Bedford could earn 6s. a week making lace, which was about 50 percent more than women earned in agriculture. However, this industry too disappeared due to mechanization. Following Heathcote’s invention of the bobbinet machine (1809), cheaper lace could be made by embroidering patterns on machine-made lace net. This new type of lace created a new cottage industry, that of “lace-runners” who embroidered patterns on the lace.
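The “50 percent more” comparison is consistent with the daily agricultural rate quoted above, assuming (for illustration only) a full six-day working week at the 8d. daily rate:

\[
8\text{d.} \times 6 = 48\text{d.} = 4\text{s. per week}, \qquad \frac{6\text{s.}}{4\text{s.}} = 1.5 .
\]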

The straw-plaiting industry employed women braiding straw into bands used for making hats and bonnets. The industry prospered around the turn of the century thanks to the invention of a simple tool for splitting the straw, and to the war, which cut off competition from Italy. At this time women could earn 4s. to 6s. per week plaiting straw. This industry also declined, though, following the increase in free trade with the Continent in the 1820s.

Factories

A defining feature of the Industrial Revolution was the rise of factories, particularly textile factories. Work moved out of the home and into a factory, which used a central power source to run its machines. Water power was used in most of the early factories, but improvements in the steam engine made steam power possible as well. The most dramatic productivity growth occurred in the cotton industry. The invention of James Hargreaves’ spinning jenny (1764), Richard Arkwright’s “throstle” or “water frame” (1769), and Samuel Crompton’s spinning mule (1779, so named because it combined features of the two earlier machines) revolutionized spinning. Britain began to manufacture cotton cloth, and declining prices for the cloth encouraged both domestic consumption and export. Machines also appeared for other parts of the cloth-making process, the most important of which was Edmund Cartwright’s powerloom, which was adopted slowly because of imperfections in the early designs, but was widely used by the 1830s. While cotton was the most important textile of the Industrial Revolution, there were advances in machinery for silk, flax, and wool production as well.7

The advent of new machinery changed the gender division of labor in textile production. Before the Industrial Revolution, women spun yarn using a spinning wheel (or occasionally a distaff and spindle). Men did not spin, and this division of labor made sense because women were trained to have more dexterity than men, and because men’s greater strength made them more valuable in other occupations. In contrast to spinning, handloom weaving was done by both sexes, but men outnumbered women. Men monopolized highly skilled preparation and finishing processes such as wool combing and cloth-dressing. With mechanization, the gender division of labor changed. Women used the spinning jenny and water frame, but mule spinning was almost exclusively a male occupation because it required more strength, and because the male mule-spinners actively opposed the employment of female mule-spinners. Women mule-spinners in Glasgow, and their employers, were the victims of violent attacks by male spinners trying to reduce the competition in their occupation.8 While they moved out of spinning, women seem to have increased their employment in weaving (both in handloom weaving and eventually in powerloom factories). Both sexes were employed as powerloom operators.

Table Two

Factory Workers in 1833: Females as a Percent of the Workforce

Industry Ages 12 and under Ages 13-20 Ages 21+ All Ages
Cotton 51.8 65.0 52.2 58.0
Wool 38.6 46.2 37.7 40.9
Flax 54.8 77.3 59.5 67.4
Silk 74.3 84.3 71.3 78.1
Lace 38.7 57.4 16.6 36.5
Potteries 38.1 46.9 27.1 29.4
Dyehouse 0.0 0.0 0.0 0.0
Glass 0.0 0.0 0.0 0.0
Paper 100.0 39.2 53.6
Whole Sample 52.8 66.4 48.0 56.8

Source: “Report from Dr. James Mitchell to the Central Board of Commissioners, respecting the Returns made from the Factories, and the Results obtained from them.” British Parliamentary Papers, 1834 (167) XIX. Mitchell collected data from 82 cotton factories, 65 wool factories, 73 flax factories, 29 silk factories, 7 potteries, 11 lace factories, one dyehouse, one “glass works”, and 2 paper mills throughout Great Britain.

While the highly skilled and highly paid task of mule-spinning was a male occupation, many women and girls were engaged in other tasks in textile factories. For example, the wet-spinning of flax, introduced in Leeds in 1825, employed mainly teenage girls. Girls often worked as assistants to mule-spinners, piecing together broken threads. In fact, females were a majority of the factory labor force. Table Two shows that 57 percent of factory workers were female, most of them under age 20. Women were widely employed in all the textile industries, and constituted the majority of workers in cotton, flax, and silk. Outside of textiles, women were employed in potteries and paper factories, but not in dye or glass manufacture. Of the women who worked in factories, 16 percent were under age 13, 51 percent were between the ages of 13 and 20, and 33 percent were age 21 and over. On average, girls earned the same wages as boys. Children’s wages rose from about 1s.6d. per week at age 7 to about 5s. per week at age 15. Beginning at age 16, a large gap between male and female wages appeared. At age 30, women factory workers earned only one-third as much as men.

Figure One
Distribution of Male and Female Factory Employment by Age, 1833


Source: “Report from Dr. James Mitchell to the Central Board of Commissioners, respecting the Returns made from the Factories, and the Results obtained from them.” British Parliamentary Papers, 1834 (167) XIX.
The y-axis shows the percentage of total employment within each sex that is in that five-year age category.

Figure Two
Wages of Factory Workers in 1833


Source: “Report from Dr. James Mitchell to the Central Board of Commissioners, respecting the Returns made from the Factories, and the Results obtained from them.” British Parliamentary Papers, 1834 (167) XIX.

Agriculture

Wage Workers

Wage-earners in agriculture generally fit into one of two broad categories – servants who were hired annually and received part of their wage in room and board, and day-laborers who lived independently and were paid a daily or weekly wage. Before industrialization servants comprised between one-third and one-half of labor in agriculture.9 For servants the value of room and board was a substantial portion of their compensation, so the ratio of money wages is an under-estimate of the ratio of total wages (see Table Three). Most servants were young and unmarried. Because servants were paid part of their wage in kind, as board, the use of the servant contract tended to fall when food prices were high. During the Industrial Revolution the use of servants seems to have fallen in the South and East.10 The percentage of servants who were female also declined in the first half of the nineteenth century.11

Table Three

Wages of Agricultural Servants (£ per year)

Year Location Male Money Wage Male In-Kind Wage Female Money Wage Female In-Kind Wage Ratio of Money Wages Ratio of Total Wages
1770 Lancashire 7 9 3 6 0.43 0.56
1770 Oxfordshire 10 12 4 8 0.40 0.55
1770 Staffordshire 11 9 4 6 0.36 0.50
1821 Yorkshire 16.5 27 7 18 0.42 0.57

Source: Joyce Burnette, “An Investigation of the Female-Male Wage Gap during the Industrial Revolution in Britain,” Economic History Review 50 (May 1997): 257-281.
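To see how the two ratio columns are constructed, take the 1770 Lancashire row:

\[
\text{ratio of money wages} = \frac{3}{7} \approx 0.43, \qquad
\text{ratio of total wages} = \frac{3 + 6}{7 + 9} = \frac{9}{16} \approx 0.56 .
\]

Because the in-kind component (board and lodging) was distributed more equally between the sexes than money wages were, the total-wage ratio exceeds the money-wage ratio in every row of the table.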

While servants lived with the farmer and received food and lodging as part of their wage, laborers lived independently, received fewer in-kind payments, and were paid a daily or a weekly wage. Though the majority of laborers were male, some were female. Table Four shows the percentage of laborers who were female at various farms in the late-18th and early-19th centuries. These numbers suggest that female employment was widespread, but varied considerably from one location to the next. Compared to men, female laborers generally worked fewer days during the year. The employment of female laborers was concentrated around the harvest, and women rarely worked during the winter. While men commonly worked six days per week, outside of harvest women generally averaged around four days per week.

Table Four

Employment of Women as Laborers in Agriculture:
Percentage of Annual Work-Days Worked by Females

Year Location Percent Female
1772-5 Oakes in Norton, Derbyshire 17
1774-7 Dunster Castle Farm, Somerset 27
1785-92 Dunster Castle Farm, Somerset 40
1794-5 Dunster Castle Farm, Somerset 42
1801-3 Dunster Castle Farm, Somerset 35
1801-4 Nettlecombe Barton, Somerset 10
1814-6 Nettlecombe Barton, Somerset 7
1826-8 Nettlecombe Barton, Somerset 5
1828-39 Shipton Moyne, Gloucestershire 19
1831-45 Oakes in Norton, Derbyshire 6
1836-9 Dunster Castle Farm, Somerset 26
1839-40 Lustead, Norfolk 6
1846-9 Dunster Castle Farm, Somerset 29

Sources: Joyce Burnette, “Labourers at the Oakes: Changes in the Demand for Female Day-Laborers at a Farm near Sheffield During the Agricultural Revolution,” Journal of Economic History 59 (March 1999): 41-67; Helen Speechley, Female and Child Agricultural Day Labourers in Somerset, c. 1685-1870, dissertation, Univ. of Exeter, 1999. Sotheron-Estcourt accounts, G.R.O. D1571; Ketton-Cremer accounts, N.R.O. WKC 5/250

The wages of female day-laborers were fairly uniform; generally a farmer paid the same wage to all the adult women he hired. Women’s daily wages were between one-third and one-half of male wages. Women generally worked shorter days, though, so the gap in hourly wages was not quite this large.12 In the less populous counties of Northumberland and Durham, male laborers were required to provide a “bondager,” a woman (usually a family member) who was available for day-labor whenever the employer wanted her.13

Table Five

Wages of Agricultural Laborers

Year Location Female Wage (d./day) Male Wage (d./day) Ratio
1770 Yorkshire 5 12 0.42
1789 Hertfordshire 6 16 0.38
1797 Warwickshire 6 14 0.43
1807 Oxfordshire 9 23 0.39
1833 Cumberland 12 24 0.50
1833 Essex 10 22 0.45
1838 Worcester 9 18 0.50

Source: Joyce Burnette, “An Investigation of the Female-Male Wage Gap during the Industrial Revolution in Britain,” Economic History Review 50 (May 1997): 257-281.
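The hourly adjustment mentioned above can be illustrated with the 1770 Yorkshire row, using the typical working days reported in note 12 (about twelve hours for men and ten for women):

\[
\text{daily ratio} = \frac{5\text{d.}}{12\text{d.}} \approx 0.42, \qquad
\text{hourly ratio} = \frac{5\text{d.}/10 \text{ hrs}}{12\text{d.}/12 \text{ hrs}} = \frac{0.5}{1.0} = 0.50 .
\]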

Various sources suggest that women’s employment in agriculture declined during the early nineteenth century. Enclosure increased farm size and changed the patterns of animal husbandry, both of which seem to have led to reductions in female employment.14 More women were employed during harvest than during other seasons, but women’s employment during harvest declined as the scythe replaced the sickle as the most popular harvest tool. While women frequently harvested with the sickle, they did not use the heavier scythe.15 Female employment fell the most in the East, where farms increasingly specialized in grain production. Women had more work in the West, which specialized more in livestock and dairy farming.16

Non-Wage-Earners

During the eighteenth century there were many opportunities for women to be productively employed in farm work on their own account, whether they were wives of farmers on large holdings, or wives of landless laborers. In the early nineteenth century, however, many of these opportunities disappeared, and women’s participation in agricultural production fell.

In a village that had a commons, even if the family merely rented a cottage the wife could be self-employed in agriculture because she could keep a cow, or other animals, on the commons. By careful management of her stock, a woman might earn as much during the year as her husband earned as a laborer. Women also gathered fuel from the commons, saving the family considerable expense. The enclosure of the commons, though, eliminated these opportunities. In an enclosure, land was re-assigned so as to eliminate the commons and consolidate holdings. Even when the poor had clear legal rights to use the commons, these rights were not always compensated in the enclosure agreement. While enclosure occurred at different times for different locations, the largest waves of enclosures occurred in the first two decades of the nineteenth century, meaning that, for many, opportunities for self-employment in agriculture declined at the same time as employment in cottage industry declined.17

Only a few opportunities for agricultural production remained for the landless laboring family. In some locations landlords permitted landless laborers to rent small allotments, on which they could still grow some of their own food. The right to glean on fields after harvest seems to have been maintained at least through the middle of the nineteenth century, by which time it had become one of the few agricultural activities available to women in some areas. Gleaning was a valuable right; the value of the grain gleaned was often between 5 and 10 percent of the family’s total annual income.18

In the eighteenth century it was common for farmers’ wives to be actively involved in farm work, particularly in managing the dairy, pigs, and poultry. The dairy was an important source of income for many farms, and its success depended on the skill of the mistress, who usually ran the operation with no help from men. In the nineteenth century, however, farmers’ wives were more likely to withdraw from farm management, leaving the dairy to the management of dairymen who paid a fixed fee for the use of the cows.19 While poor women withdrew from self-employment in agriculture because of lost opportunities, farmers’ wives seem to have withdrawn because greater prosperity allowed them to enjoy more leisure.

It was less common for women to manage their own farms, but not unknown. Commercial directories list numerous women farmers. For example, the 1829 Directory of the County of Derby lists 3354 farmers, of which 162, or 4.8%, were clearly female.20 While the commercial directories themselves do not indicate to what extent these women were actively involved in their farms, other evidence suggests that at least some women farmers were actively involved in the work of the farm.21

Self-Employed

During the Industrial Revolution period women were also active businesswomen in towns. Among business owners listed in commercial directories, about 10 percent were female. Table Seven shows the percentage female in all the trades with at least 25 people listed in the 1788 Manchester commercial directory. Single women, married women, and widows are included in these numbers. Sometimes these women were widows carrying on the businesses of their deceased husbands, but even in this case that does not mean they were simply figureheads. Widows often continued their husband’s businesses because they had been active in management of the business while their husband was alive, and wished to continue.22 Sometimes married women were engaged in trade separately from their husbands. Women most commonly ran shops and taverns, and worked as dressmakers and milliners, but they were not confined to these areas, and appear in most of the trades listed in commercial directories. Manchester, for example, had six female blacksmiths and five female machine makers in 1846. Between 1730 and 1800 there were 121 “rouping women” selling off estates in Edinburgh.23

Table Six

Business Owners Listed in Commercial Directories

Date City Male Female Unknown Gender Percent Female
1788 Manchester 2033 199 321 8.9
1824-5 Manchester 4185 297 1671 6.6
1846 Manchester 11,942 1222 2316 9.3
1850 Birmingham 15,054 2020 1677 11.8
1850 Derby 2415 332 194 12.1

Sources: Lewis’s Manchester Directory for 1788 (reprinted by Neil Richardson, Manchester, 1984); Pigot and Dean’s Directory for Manchester, Salford, &c. for 1824-5 (Manchester 1825); Slater’s National Commercial Directory of Ireland (Manchester, 1846); Slater’s Royal National and Commercial Directory (Manchester, 1850)
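The “Percent Female” figures in Tables Six and Seven are consistent with a denominator that excludes the unknown-gender entries; checking the 1788 Manchester row of Table Six:

\[
\frac{199}{2033 + 199} = \frac{199}{2232} \approx 8.9\text{ percent}.
\]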

Table Seven

Women in Trades in Manchester, 1788

Trade Men Women Gender Unknown Percent Female
Apothecary/ Surgeon / Midwife 29 1 5 3.3
Attorney 39 0 3 0.0
Boot and Shoe makers 87 0 1 0.0
Butcher 33 1 1 2.9
Calenderer 31 4 5 11.4
Corn & Flour Dealer 45 4 5 8.2
Cotton Dealer 23 0 2 0.0
Draper, Mercer, Dealer of Cloth 46 15 19 24.6
Dyer 44 3 18 6.4
Fustian Cutter / Shearer 54 2 0 3.6
Grocers & Tea Dealers 91 16 12 15.0
Hairdresser & Peruke maker 34 1 0 2.9
Hatter 45 3 4 6.3
Joiner 34 0 1 0.0
Liquor dealer 30 4 14 11.8
Manufacturer, cloth 257 4 118 1.5
Merchant 58 1 18 1.7
Publichouse / Inn / Tavern 126 13 2 9.4
School master / mistress 18 10 0 35.7
Shopkeeper 107 16 4 13.0
Tailor 59 0 1 0.0
Warehouse 64 0 14 0.0

Source: Lewis’s Manchester Directory for 1788 (reprinted by Neil Richardson, Manchester, 1984)

Guilds often controlled access to trades, admitting only those who had served an apprenticeship and thus earned the “freedom” of the trade. Women could obtain “freedom” not only by apprenticeship, but also by widowhood. The widow of a tradesman was often considered knowledgeable enough in the trade that she was given the right to carry on the trade even without an apprenticeship. In the eighteenth century women were apprenticed to a wide variety of trades, including butchery, bookbinding, brush making, carpentry, ropemaking and silversmithing.24 Between the eighteenth and nineteenth centuries the number of females apprenticed to trades declined, possibly suggesting reduced participation by women. However, the power of the guilds and the importance of apprenticeship were also declining during this time, so the decline in female apprenticeships may not have been an important barrier to employment.25

Many women worked in the factories of the Industrial Revolution, and a few women actually owned factories. In Keighley, West Yorkshire, Ann Illingworth, Miss Rachael Leach, and Mrs. Betty Hudson built and operated textile mills.26 In 1833 Mrs. Doig owned a powerloom factory in Scotland, which employed 60 workers.27

While many women did successfully enter trades, there were obstacles to women’s employment that kept their numbers low. Women generally received less education than men (though education of the time was of limited practical use). Women may have found it more difficult than men to raise the necessary capital because English law did not consider a married woman to have any legal existence; she could not sue or be sued. A married woman was a feme covert and technically could not make any legally binding contracts, a fact which may have discouraged others from loaning money to or making other contracts with married women. However, this law was not as limiting in practice as it would seem to be in theory because a married woman engaged in trade on her own account was treated by the courts as a feme sole and was responsible for her own debts.28

The professionalization of certain occupations resulted in the exclusion of women from work they had previously done. Women had provided medical care for centuries, but the professionalization of medicine in the early-nineteenth century made it a male occupation. The Royal College of Physicians admitted only graduates of Oxford and Cambridge, schools to which women were not admitted until the twentieth century. Women were even replaced by men in midwifery. The process began in the late-eighteenth century, when we observe the use of the term “man-midwife,” an oxymoronic title suggestive of changing gender roles. In the nineteenth century the “man-midwife” disappeared, and women were replaced by physicians or surgeons for assisting childbirth. Professionalization of the clergy was also effective in excluding women. While the Church of England did not allow women ministers, the Methodist movement had many women preachers during its early years. However, even among the Methodists female preachers disappeared when lay preachers were replaced with a professional clergy in the early nineteenth century.29

In other occupations where professionalization was not as strong, women remained an important part of the workforce. Teaching, particularly in the lower grades, was a common profession for women. Some were governesses, who lived as household servants, but many opened their own schools and took in pupils. The writing profession seems to have been fairly open to women; the leading novelists of the period include Jane Austen, Charlotte and Emily Brontë, Fanny Burney, George Eliot (the pen name of Mary Ann Evans), Elizabeth Gaskell, and Frances Trollope. Female non-fiction writers of the period include Jane Marcet, Hannah More, and Mary Wollstonecraft.

Other Occupations

The occupations listed above are by no means a complete listing of the occupations of women during the Industrial Revolution. Women made buttons, nails, screws, and pins. They worked in the tin plate, silver plate, pottery and Birmingham “toy” trades (which made small articles like snuff boxes). Women worked in the mines until the Mines Act of 1842 prohibited them from working underground, but afterwards women continued to pursue above-ground mining tasks.

Married Women in the Labor Market

While there are no comprehensive sources of information on the labor force participation of married women, household budgets reported by contemporary authors give us some information on women’s participation.30 For the period 1787 to 1815, 66 percent of married women in working-class households had either a recorded occupation or positive earnings. For the period 1816-20 the rate fell to 49 percent, but in 1821-40 it recovered to 62 percent. Table Eight gives participation rates of women by date and occupation of the husband.

Table Eight

Participation Rates of Married Women

Years High-Wage Agriculture Low-Wage Agriculture Mining Factory Outwork Trades All
1787-1815 55 85 40 37 46 63 66
1816-1820 34 NA 28 4 42 30 49
1821-1840 22 85 33 86 54 63 62

Source: Sara Horrell and Jane Humphries, “Women’s Labour Force Participation and the Transition to the Male-Breadwinner Family, 1790-1865,” Economic History Review 48 (February 1995): 89-117.

While many wives worked, the amount of their earnings was small relative to their husbands’ earnings. Annual earnings of married women who did work averaged only about 28 percent of their husbands’ earnings. Because not all women worked, and because children usually contributed more to the family budget than their mothers, for the average family the wife contributed only around seven percent of total family income.

Childcare

Women workers used a variety of methods to care for their children. Sometimes childcare and work were compatible, and women took their children with them to the fields or shops where they worked.31 Sometimes women working at home would give their infants opiates such as “Godfrey’s Cordial” in order to keep the children quiet while their mothers worked.32 The movement of work into factories increased the difficulty of combining work and childcare. In most factory work the hours were rigidly set, and women who took the jobs had to accept twelve- or thirteen-hour days. Work in the factories was very disciplined, so the women could not bring their children to the factory, and could not take breaks at will. However, these difficulties did not prevent women with small children from working.

Nineteenth-century mothers used older siblings, other relatives, neighbors, and dame schools to provide child care while they worked.33 Occasionally mothers would leave young children home alone, but this was dangerous enough that only a few did so.34 Children as young as two might be sent to dame schools, in which women would take children into their home and provide child care, as well as some basic literacy instruction.35 In areas where lace-making or straw-plaiting thrived, children were sent from about age seven to “schools” where they learned the trade.36

Mothers might use a combination of different types of childcare. Elizabeth Wells, who worked in a Leicester worsted factory, had five children, ages 10, 8, 6, 2, and four months. The eldest, a daughter, stayed home to tend the house and care for the infant. The second child worked, and the six-year-old and two-year-old were sent to “an infant school.”37 Mary Wright, an “over-looker” in the rag-cutting room of a Buckinghamshire paper factory, had five children. The eldest worked in the rag-cutting room with her, the youngest was cared for at home, and the middle three were sent to a school; “for taking care of an infant she pays 1s.6d. a-week, and 3d. a-week for the three others. They go to a school, where they are taken care of and taught to read.”38

The cost of childcare was substantial. At the end of the eighteenth century the price of child-care was about 1s. a week, which was about a quarter of a woman’s weekly earnings in agriculture.39 In the 1840s mothers paid anywhere from 9d. to 2s.6d. per week for child care, out of a wage of around 7s. per week.40
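Note 40 works through several of these fractions in detail; the two 1840s endpoints quoted here reduce to (with 1s. = 12d.)

\[
\frac{9\text{d.}}{7\text{s.}} = \frac{9}{84} \approx 11\text{ percent}, \qquad
\frac{2\text{s.}6\text{d.}}{7\text{s.}} = \frac{30}{84} \approx 36\text{ percent}.
\]

The late-eighteenth-century figure is likewise consistent: a 1s. (12d.) weekly fee set against 8d. per day over a six-day week (48d., or 4s.) comes to one quarter of a woman’s weekly agricultural earnings. The six-day week is my illustrative assumption.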

For Further Reading

Burnette, Joyce. “An Investigation of the Female-Male Wage Gap during the Industrial Revolution in Britain.” Economic History Review 50 (1997): 257-281.

Davidoff, Leonore, and Catherine Hall. Family Fortunes: Men and Women of the English Middle Class, 1780-1850. Chicago: University of Chicago Press, 1987.

Honeyman, Katrina. Women, Gender and Industrialisation in England, 1700-1870. New York: St. Martin’s Press, 2000.

Horrell, Sara, and Jane Humphries. “Women’s Labour Force Participation and the Transition to the Male-Breadwinner Family, 1790-1865.” Economic History Review 48 (1995): 89-117.

Humphries, Jane. “Enclosures, Common Rights, and Women: The Proletarianization of Families in the Late Eighteenth and Early Nineteenth Centuries.” Journal of Economic History 50 (1990): 17-42.

King, Peter. “Customary Rights and Women’s Earnings: The Importance of Gleaning to the Rural Labouring Poor, 1750-1850.” Economic History Review 44 (1991): 461-476.

Kussmaul, Ann. Servants in Husbandry in Early Modern England. Cambridge: Cambridge University Press, 1981.

Pinchbeck, Ivy. Women Workers and the Industrial Revolution, 1750-1850. London: Routledge, 1930.

Sanderson, Elizabeth. Women and Work in Eighteenth-Century Edinburgh. New York: St. Martin’s Press, 1996.

Snell, K.D.M. Annals of the Labouring Poor: Social Change and Agrarian England, 1660-1900. Cambridge: Cambridge University Press, 1985.

Valenze, Deborah. Prophetic Sons and Daughters: Female Preaching and Popular Religion in Industrial England. Princeton: Princeton University Press, 1985.

Valenze, Deborah. The First Industrial Woman. Oxford: Oxford University Press, 1995.

1 “Since large-scale industry has transferred the woman from the house to the labour market and the factory, and makes her, often enough, the bread-winner of the family, the last remnants of male domination in the proletarian home have lost all foundation – except, perhaps, for some of that brutality towards women which became firmly rooted with the establishment of monogamy. . . . It will then become evident that the first premise for the emancipation of women is the reintroduction of the entire female sex into public industry.” Frederick Engels, The Origin of the Family, Private Property and the State, in Karl Marx and Frederick Engels: Selected Works, New York: International Publishers, 1986, p. 508, 510.

2 Ivy Pinchbeck (Women Workers and the Industrial Revolution, Routledge, 1930) claimed that higher incomes allowed some women to withdraw from the labor force. While she saw some disadvantages resulting from this withdrawal, particularly the loss of independence, she thought that overall women benefited from having more time to devote to their homes and families. Davidoff and Hall (Family Fortunes: Men and Women of the English Middle Class, 1780-1850, Univ. of Chicago Press, 1987) agree that women withdrew from work, but they see the change as a negative result of gender discrimination. Similarly, Horrell and Humphries (“Women’s Labour Force Participation and the Transition to the Male-Breadwinner Family, 1790-1865,” Economic History Review, Feb. 1995, XLVIII:89-117) do not find that rising incomes caused declining labor force participation, and they believe that declining demand for female workers caused the female exodus from the workplace.

3 While the British census began in 1801, individual enumeration did not begin until 1841. For a detailed description of the British censuses of the nineteenth century, see Edward Higgs, Making Sense of the Census, London: HMSO, 1989.

4 For example, Helen Speechley, in her dissertation, showed that seven women who worked for wages at a Somerset farm had no recorded occupation in the 1851 census. See Helen Speechley, Female and Child Agricultural Day Labourers in Somerset, c. 1685-1870, dissertation, Univ. of Exeter, 1999.

5 Edward Higgs finds that removing family members from the “servants” category reduced the number of servants in Rochdale in 1851. Enumerators did not clearly distinguish between the terms “housekeeper” and “housewife.” See Edward Higgs, “Domestic Service and Household Production” in Angela John, ed., Unequal Opportunities, Oxford: Basil Blackwell, and “Women, Occupations and Work in the Nineteenth Century Censuses,” History Workshop, 1987, 23:59-80. In contrast, the censuses of the early 20th century seem to be fairly accurate; see Tim Hatton and Roy Bailey, “Women’s Work in Census and Survey, 1911-1931,” Economic History Review, Feb. 2001, LIV:87-107.

6 A shilling was equal to 12 pence, so if women earned 2s.6d. for 20 hours, they earned 1.5d. per hour. Women agricultural laborers earned closer to 1d. per hour, so the London wage was higher. See Dorothy George, London Life in the Eighteenth-Century, London: Kegan Paul, Trench, Trubner & Co., 1925, p. 208, and Patricia Malcolmson, English Laundresses, Univ. of Illinois Press, 1986, p. 25.

7 On the technology of the Industrial Revolution, see David Landes, The Unbound Prometheus, Cambridge Univ. Press, 1969, and Joel Mokyr, The Lever of Riches, Oxford Univ. Press, 1990.

8 A petition from Glasgow cotton manufacturers makes the following claim, “In almost every department of the cotton spinning business, the labour of women would be equally efficient with that of men; yet in several of these departments, such measures of violence have been adopted by the combination, that the women who are willing to be employed, and who are anxious by being employed to earn the bread of their families, have been driven from their situations by violence. . . . Messrs. James Dunlop and Sons, some years ago, erected cotton mills in Calton of Glasgow, on which they expended upwards of [£]27,000 forming their spinning machines, (Chiefly with the view of ridding themselves of the combination [the male union],) of such reduced size as could easily be wrought by women. They employed women alone, as not being parties to the combination, and thus more easily managed, and less insubordinate than male spinners. These they paid at the same rate of wages, as were paid at other works to men. But they were waylaid and attacked, in going to, and returning from their work; the houses in which they resided, were broken open in the night. The women themselves were cruelly beaten and abused; and the mother of one of them killed; . . . And these nefarious attempts were persevered in so systematically, and so long, that Messrs. Dunlop and sons, found it necessary to dismiss all female spinners from their works, and to employ only male spinners, most probably the very men who had attempted their ruin.” First Report from the Select Committee on Artizans and Machinery, British Parliamentary Papers, 1824 vol. V, p. 525.

9 Ann Kussmaul, Servants in Husbandry in Early Modern England, Cambridge Univ. Press, 1981, Ch. 1

10 See Ivy Pinchbeck, Women Workers and the Industrial Revolution, Routledge, 1930, Ch. 1, and K.D.M. Snell, Annals of the Labouring Poor, Cambridge Univ. Press, 1985, Ch. 2.

11 For the period 1574 to 1821 about 45 percent of servants were female, but this fell to 32 percent in 1851. See Ann Kussmaul, Servants in Husbandry in Early Modern England, Cambridge Univ. Press, 1981, Ch. 1.

12 Men usually worked 12-hour days, and women averaged closer to 10 hours. See Joyce Burnette, “An Investigation of the Female-Male Wage Gap during the Industrial Revolution in Britain,” Economic History Review, May 1997, 50:257-281.

13 See Ivy Pinchbeck, Women Workers and the Industrial Revolution, Routledge, 1930, p. 65.

14 See Robert Allen, Enclosure and the Yeoman, Clarendon Press, 1992, and Joyce Burnette, “Labourers at the Oakes: Changes in the Demand for Female Day-Laborers at a Farm near Sheffield During the Agricultural Revolution,” Journal of Economic History, March 1999, 59:41-67.

15 While the scythe had been used for mowing grass for hay or cheaper grains for some time, the sickle was used for harvesting wheat until the nineteenth century. Thus adoption of the scythe for harvesting wheat seems to be a response to changing prices rather than invention of a new technology. The scythe required less labor to harvest a given acre, but left more grain on the ground, so as grain prices fell relative to wages, farmers substituted the scythe for the sickle. See E.J.T. Collins, “Harvest Technology and Labour Supply in Britain, 1790-1870,” Economic History Review, Dec. 1969, XXIII:453-473.

16 K.D.M. Snell, Annals of the Labouring Poor, Cambridge, 1985.

17 See Jane Humphries, “Enclosures, Common Rights, and Women: The Proletarianization of Families in the Late Eighteenth and Early Nineteenth Centuries,” Journal of Economic History, March 1990, 50:17-42, and J.M. Neeson, Commoners: Common Rights, Enclosure and Social Change in England, 1700-1820, Cambridge Univ. Press, 1993.

18 See Peter King, “Customary Rights and Women’s Earnings: The Importance of Gleaning to the Rural Labouring Poor, 1750-1850,” Economic History Review, 1991, XLIV:461-476.

19 Pinchbeck, Women Workers and the Industrial Revolution, Routledge, 1930, p. 41-42. See also Deborah Valenze, The First Industrial Woman, Oxford Univ. Press, 1995.

20 Stephen Glover, The Directory of the County of Derby, Derby: Henry Mozley and Son, 1829.

21 Eden gives an example of gentlewomen who, on the death of their father, began to work as farmers. He notes, “not seldom, in one and the same day, they have divided their hours in helping to fill the dung-cart, and receiving company of the highest rank and distinction.” (F.M. Eden, The State of the Poor, vol. i., p. 626.) One woman farmer who was clearly an active manager celebrated her success in a letter sent to the Annals of Agriculture, (quoted by Pinchbeck, Women Workers and the Industrial Revolution, Routledge, 1930, p. 30): “I bought a small estate, and took possession of it in the month of July, 1803. . . . As a woman undertaking to farm is generally a subject of ridicule, I bought the small estate by way of experiment: the gentlemen of the county have now complimented me so much on having set so good and example to the farmers, that I have determined on taking a very large farm into my hands.” The Annals of Agriculture give a number of examples of women farmers cited for their experiments or their prize-winning crops.

22 Tradesmen considered themselves lucky to find a wife who was good at business. In his autobiography James Hopkinson, a cabinetmaker, said of his wife, “I found I had got a good and suitable companion one with whom I could take sweet council and whose love and affections was only equall’d by her ability as a business woman.” Victorian Cabinet Maker: The Memoirs of James Hopkinson, 1819-1894, 1968, p. 96.

23 See Elizabeth Sanderson, Women and Work in Eighteenth-Century Edinburgh, St. Martin’s Press, 1996.

24 See K.D.M. Snell, Annals of the Labouring Poor, Cambridge Univ. Press, 1985, Table 6.1.

25 The law requiring a seven-year apprenticeship before someone could work in a trade was repealed in 1814.

26 See Francois Crouzet, The First Industrialists, Cambridge Univ. Press, 1985, and M.L. Baumber, From Revival to Regency: A History of Keighley and Haworth, 1740-1820, Crabtree Ltd., Keighley, 1983.

27 First Report of the Central Board of His Majesty’s Commissioners for inquiry into the Employment of Children in Factories, with Minutes of Evidence, British Parliamentary Papers, 1833 (450) XX, A1, p. 120.

28 For example, in the case of “LaVie and another Assignees against Philips and another Assignees,” the court upheld the right of a woman to operate as feme sole. In 1764 James Cox and his wife Jane were operating separate businesses, and both went bankrupt within the space of two months. Jane’s creditors sued James’s creditors for the recovery of five fans, goods from her shop that had been taken for James’s debts. The court ruled that, since Jane was trading as a feme sole, her husband did not own the goods in her shop, and thus James’s creditors had no right to seize them. See William Blackstone, Reports of Cases determined in the several Courts of Westminster-Hall, from 1746 to 1779, London, 1781, p. 570-575.

29 See Deborah Valenze, Prophetic Sons and Daughters: Female Preaching and Popular Religion in Industrial England, Princeton Univ. Press, 1985.

30 See Sara Horrell and Jane Humphries, “Women’s Labour Force Participation and the Transition to the Male-Breadwinner Family, 1790-1865,” Economic History Review, Feb. 1995, XLVIII:89-117.

31 In his autobiography James Hopkinson says of his wife, “How she laboured at the press and assisted me in the work of my printing office, with a child in her arms, I have no space to tell, nor in fact have I space to allude to the many ways she contributed to my good fortune.” James Hopkinson, Victorian Cabinet Maker: The Memoirs of James Hopkinson, 1819-1894, J.B. Goodman, ed., Routledge & Kegan Paul, 1968, p. 96. A 1739 poem by Mary Collier suggests that carrying babies into the field was fairly common; it contains these lines:

Our tender Babes into the Field we bear,
And wrap them in our Cloaths to keep them warm,
While round about we gather up the Corn;
. . .
When Night comes on, unto our Home we go,
Our Corn we carry, and our Infant too.

Mary Collier, The Woman’s Labour, Augustan Reprint Society, #230, 1985, p. 10. An 1835 Poor Law report stated that in Sussex, “the custom of the mother of a family carrying her infant with her in its cradle into the field, rather than lose the opportunity of adding her earnings to the general stock, though partially practiced before, is becoming very much more general now.” (Quoted in Pinchbeck, Women Workers and the Industrial Revolution, Routledge, 1930, p. 85.)

32 Sarah Johnson of Nottingham claimed that she ” Knows it is quite a common custom for mothers to give Godfrey’s and the Anodyne cordial to their infants, ‘it is quite too common.’ It is given to infants at the breast; it is not given because the child is ill, but ‘to compose it to rest, to sleep it,’ so that the mother may get to work. ‘Has seen an infant lay asleep on its mother’s lap whilst at the lace-frame for six or eight hours at a time.’ This has been from the effects of the cordial.” [Reports from Assistant Handloom-Weavers’ Commissioners, British Parliamentary Papers, 1840 (43) XXIII, p. 157] Mary Colton, a lace worker from Nottingham, described her use of the drug to parliamentary investigators thus: ‘Was confined of an illegitimate child in November, 1839. When the child was a week old she gave it a half teaspoonful of Godfrey’s twice a-day. She could not afford to pay for the nursing of the child, and so gave it Godfrey’s to keep it quiet, that she might not be interrupted at the lace piece; she gradually increased the quantity by a drop or two at a time until it reached a teaspoonful; when the infant was four months old it was so “wankle” and thin that folks persuaded her to give it laudanum to bring it on, as it did other children. A halfpenny worth, which was about a teaspoonful and three-quarters, was given in two days; continued to give her this quantity since February, 1840, until this last past (1841), and then reduced the quantity. She now buys a halfpenny worth of laudanum and a halfpenny worth of Godfrey’s mixed, which lasts her three days. . . . If it had not been for her having to sit so close to work she would never have given the child Godfrey’s. She has tried to break it off many times but cannot, for if she did, she should not have anything to eat.” [Children’s Employment Commission: Second Report of the Commissioners (Trades and Manufactures), British Parliamentary Papers, 1843 (431) XIV, p. 630].

33 Elizabeth Leadbeater, who worked for a Birmingham brass-founder, worked while she was nursing and had her mother look after the infant. [Children’s Employment Commission: Second Report of the Commissioners (Trades and Manufactures), British Parliamentary Papers, 1843 (431) XIV, p. 710.] Mrs. Smart, an agricultural worker from Calne, Wiltshire, noted, “Sometimes I have had my mother, and sometimes my sister, to take care of the children, or I could not have gone out.” [Reports of Special Assistant Poor Law Commissioners on the Employment of Women and Children in Agriculture, British Parliamentary Papers, 1843 (510) XII, p. 65.] More commonly, though, older siblings provided the childcare. “Older siblings” generally meant children of nine or ten years old, and included boys as well as girls. Mrs. Britton of Calne, Wiltshire, left her children in the care of her eldest boy. [Reports of Special Assistant Poor Law Commissioners on the Employment of Women and Children in Agriculture, British Parliamentary Papers, 1843 (510) XII, p. 66] In a family from Presteign, Wales, containing children aged 9, 7, 5, 3, and 1, we find that “The oldest children nurse the youngest.” [F.M. Eden, State of the Poor, London: Davis, 1797, vol. iii, p. 904] When asked what income a labourer’s wife and children could earn, some respondents to the 1833 “Rural Queries” assumed that the eldest child would take care of the others, leaving the mother free to work. The returns from Bengeworth, Worcester, report that, “If the Mother goes to field work, the eldest Child had need to stay at home, to tend the younger branches of the Family.” Ewhurst, Surrey, reported that “If the Mother were employed, the elder Children at home would probably be required to attend to the younger Children.” [Report of His Majesty’s Commissioners for Inquiry in the Administration and Practical Operation of the Poor Law, Appendix B, “Rural Queries,” British Parliamentary Papers, 1834 (44) XXX, p. 488 and 593]

34 Parents heard of incidents, such as one reported in the Times (Feb. 6, 1819):

A shocking accident occurred at Llandidno, near Conway, on Tuesday night, during the absence of a miner and his wife, who had gone to attend a methodist meeting, and locked the house door, leaving two children within; the house by some means took fire, and was, together with the unfortunate children, consumed to ashes; the eldest only four years old!

Mothers were aware of these dangers. One mother who admitted to leaving her children at home worried greatly about the risks:

I have always left my children to themselves, and, God be praised! nothing has ever happened to them, though I thought it dangerous. I have many a time come home, and have thought it a mercy to find nothing has happened to them. . . . Bad accidents often happen. [Reports of Special Assistant Poor Law Commissioners on the Employment of Women and Children in Agriculture, British Parliamentary Papers, 1843 (510) XII, p. 68.]

Leaving young children at home without child care carried real dangers, and the fact that most working mothers paid for child care suggests that they did not consider leaving young children alone to be an acceptable option.

35 In 1840 an observer of Spitalfields noted, “In this neighborhood, where the women as well as the men are employed in the manufacture of silk, many children are sent to small schools, not for instruction, but to be taken care of whilst their mothers are at work.” [Reports from Assistant Handloom-Weavers’ Commissioners, British Parliamentary Papers, 1840 (43) XXIII, p. 261.] In 1840 the wife of a Gloucester weaver earned 2s. a week from running a school; she had twelve students and charged each 2d. a week. [Reports from Assistant Handloom Weavers’ Commissioners, British Parliamentary Papers, 1840 (220) XXIV, p. 419.] In 1843 the lace-making schools of the midlands generally charged 3d. per week. [Children’s Employment Commission: Second Report of the Commissioners (Trades and Manufactures), British Parliamentary Papers, 1843 (431) XIV, pp. 46, 64, 71, 72.]

36 At one straw-plaiting school in Hertfordshire,

Children commence learning the trade about seven years old: parents pay 3d. a-week for each child, and for this they are taught the trade and taught to read. The mistress employs about from 15 to 20 at work in a room; the parents get the profits of the children’s labour. [Children’s Employment Commission: Second Report of the Commissioners (Trades and Manufactures), British Parliamentary Papers, 1843 (431) XIV, p. 64.]

At these schools there was very little instruction; some time was devoted to teaching the children to read, but they spent most of their time working. One mistress complained that the children worked too much and learned too little: “In my judgment I think the mothers task the children too much; the mistress is obliged to make them perform it, otherwise they would put them to other schools.” Ann Page of Newport Pagnell, Buckinghamshire, had “eleven scholars” and claimed to “teach them all reading once a-day.” [Children’s Employment Commission: Second Report of the Commissioners (Trades and Manufactures), British Parliamentary Papers, 1843 (431) XIV, pp. 66, 71.] The standard rate of 3d. per week seems to have been paid for supervision of the children rather than for instruction.

37 First Report of the Central Board of His Majesty’s Commissioners for Inquiring into the Employment of Children in Factories, British Parliamentary Papers, 1833 (450) XX, C1 p. 33.

38 Children’s Employment Commission: Second Report of the Commissioners (Trades and Manufactures), British Parliamentary Papers, 1843 (431) XIV, p. 46.

39 David Davies, The Case of Labourers in Husbandry Stated and Considered, London: Robinson, 1795, p. 14. Agricultural wages for this time period are found in Eden, State of the Poor, London: Davis, 1797.

40 In 1843 parliamentary investigator Alfred Austin reported, “Where a girl is hired to take care of children, she is paid about 9d. a week, and has her food besides, which is a serious deduction from the wages of the woman at work.” [Reports of Special Assistant Poor Law Commissioners on the Employment of Women and Children in Agriculture, British Parliamentary Papers, 1843 (510) XII, p. 26.] Agricultural wages in the area were 8d. per day, so even without the cost of food, the cost of child care was about one-fifth of a woman’s wage. One Scottish woman earned 7s. per week in a coal mine and paid 2s.6d., or 36 percent of her income, for the care of her children. [B.P.P. 1844 (592) XVI, p. 6.] In 1843 Mary Wright, an “over-looker” at a Buckinghamshire paper factory, paid even more for child care; she told parliamentary investigators that “for taking care of an infant she pays 1s.6d. a-week, and 3d. a-week for three others.” [Children’s Employment Commission: Second Report of the Commissioners (Trades and Manufactures), British Parliamentary Papers, 1843 (431) XIV, p. 46.] She earned 10s.6d. per week, so her total child-care payments were 21 percent of her wage. Engels put the cost of child care at 1s. or 18d. a week. [Engels, [1845] 1926, p. 143.] Factory workers often made 7s. a week, so again these women may have paid around one-fifth of their earnings for child care. Some estimates put the fraction of women’s income spent on child care even higher. The overseer of Wisbech, Cambridge, reported, “The earnings of the Wife we consider comparatively small, in cases where she has a large family to attend to; if she has one or two children, she has to pay half, or perhaps more of her earnings for a person to take care of them.” [Report of His Majesty’s Commissioners for Inquiry in the Administration and Practical Operation of the Poor Law, Appendix B, “Rural Queries,” British Parliamentary Papers, 1834 (44) XXX, p. 76.]
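These shares can be checked with the pre-decimal conversion 12d. = 1s. (reading “3d. a-week for three others” as 3d. per child, the reading consistent with the 21 percent figure quoted above):

\[
\frac{2\text{s. }6\text{d.}}{7\text{s.}} = \frac{30\text{d.}}{84\text{d.}} \approx 36\%, \qquad
\frac{1\text{s. }6\text{d.} + 3 \times 3\text{d.}}{10\text{s. }6\text{d.}} = \frac{27\text{d.}}{126\text{d.}} \approx 21\%.
\]

For the agricultural case, a 9d. weekly fee set against 8d. per day (assuming a six-day working week) gives 9/48, or about one-fifth of the woman’s wage.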

Citation: Burnette, Joyce. “Women Workers in the British Industrial Revolution”. EH.Net Encyclopedia, edited by Robert Whaples. March 26, 2008. URL http://eh.net/encyclopedia/women-workers-in-the-british-industrial-revolution/

Immigration to the United States

Raymond L. Cohn, Illinois State University (Emeritus)

For good reason, it is often said that the United States is a nation of immigrants. Almost every person in the United States is descended from someone who arrived from another country. This article discusses immigration to the United States from colonial times to the present. The focus is on individuals who paid their own way, rather than on slaves and indentured servants. Various issues concerning immigration are discussed: (1) the basic data sources available, (2) the variation in the volume over time, (3) the reasons immigration occurred, (4) nativism and U.S. immigration policy, (5) the characteristics of the immigrant stream, (6) the effects on the United States economy, and (7) the experience of immigrants in the U.S. labor market.

For readers who wish to further investigate immigration, the following works listed in the Reference section of this entry are recommended as general histories of immigration to the United States: Hansen (1940); Jones (1960); Walker (1964); Taylor (1971); Miller (1985); Nugent (1992); Erickson (1994); Hatton and Williamson (1998); and Cohn (2009).

The Available Data Sources

The primary source of data on immigration to the United States is the Passenger Lists, though U.S. and state census materials, Congressional reports, and company records also contain material on immigrants. In addition, the Integrated Public Use Microdata Series (IPUMS) web site at the University of Minnesota (http://www.ipums.umn.edu/usa/) contains data samples drawn from a number of federal censuses. Since the samples are of individuals and families, the site is useful in immigration research. A number of the countries from which the immigrants left also kept records about the individuals. Many of these records were originally summarized in Ferenczi (1970). Although records from other countries are useful for some purposes, the U.S. records are generally viewed as more complete, especially for the period before 1870. It is worth noting that comparisons of the lists between countries often lead to somewhat different results. It is also probable that, during the early years, a few of the U.S. lists were lost or never collected.

Passenger Lists

The U.S. Passenger Lists resulted from an 1819 law requiring every ship carrying passengers that arrived in the United States from a foreign port to file with the port authorities a list of all passengers on the ship. These records are the basis for the vast majority of the historical data on immigration. For example, virtually all of the tables in the chapter on immigration in Carter et al. (2006) are based on these records. The Passenger Lists recorded a great deal of information. Each list indicates the name of the ship, the name of the captain, the port(s) of embarkation, the port of arrival, and the date of arrival. Following this information is a list of the passengers. Each person’s name is listed, along with age, gender, occupation, country of origin, country of destination, and whether or not the person died on the voyage. It is often possible to distinguish family groups since family members were usually grouped together and, to save time, the compilers frequently used ditto marks to indicate the same last name. Various data based on the lists were published in Senate or Congressional Reports at the time. Due to their usefulness in genealogical research, the lists are now widely available on microfilm and are increasingly available on CD-ROM. Even a few public libraries in major cities have full or partial collections of these records. Most of the ship lists are also available online at various web sites.

The Volume of Immigration

Both the total volume of immigration to the United States and the immigrants’ countries of origin varied substantially over time. Table 1 provides the basic data on total immigrant volume by time period broken down by country or area of origin. The column “Average Yearly Total – All Countries” presents the average yearly total immigration to the United States in the time period given. Immigration rates – the average number of immigrants entering per thousand individuals in the U.S. population – are shown in the next column. The columns headed by country or area names show the percentage of immigrants coming from that place. The time periods in Table 1 have been chosen for illustrative purposes. A few things should be noted concerning the figures in Table 1. First, the estimates for much of the period since 1820 are based on the original Passenger Lists and are subject to the caveats discussed above. The estimates for the period before 1820 are the best currently available but are less precise than those after 1820. Second, though it was legal to import slaves into the United States (or the American colonies) before 1808, the estimates presented exclude slaves. Third, though illegal immigration into the United States has occurred, the figures in Table 1 include only legal immigrants. In 2015, the total number of illegal immigrants in the United States was estimated at around 11 million. These individuals were mostly from Mexico, Central America, and Asia.
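The immigration rate in Table 1 is simply average yearly arrivals per thousand U.S. residents. A minimal sketch of the calculation follows; the mid-period population figure is an illustrative approximation supplied here, not a number taken from the table:

    def immigration_rate(avg_yearly_immigrants, us_population):
        """Average yearly immigrants per 1,000 U.S. residents, as in Table 1."""
        return avg_yearly_immigrants / us_population * 1000

    # 1900-1914: about 891,806 arrivals per year; a mid-period U.S. population
    # of roughly 87.5 million is assumed here for illustration.
    print(round(immigration_rate(891_806, 87_500_000), 1))  # about 10.2, as in Table 1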

Trends over Time

From the data presented in Table 1, it is apparent that the volume of immigration and its rate relative to the U.S. population varied over time. Immigration was relatively small until a noticeable increase occurred in the 1830s and a huge jump in the 1840s. The volume passed 200,000 for the first time in 1847, and the period between 1847 and 1854 saw the highest rate of immigration in U.S. history. From the level reached between 1847 and 1854, volume fluctuated through 1930, falling and rising in successive waves. For the period from 1847 through 1930, the average yearly volume was 434,000. During these years, immigrant volume peaked between 1900 and 1914, when an average of almost 900,000 immigrants arrived in the United States each year. This period also ranks second in terms of the rate of immigration relative to the U.S. population. The volume and rate fell to low levels between 1931 and 1946, though by the 1970s the volume had again reached that experienced between 1847 and 1930. The rise in volume continued through the 1980s and 1990s, though the rate per one thousand American residents has remained well below that experienced before 1915. It is notable that since about 1990, the average yearly volume of immigration has surpassed the previous peak experienced between 1900 and 1914. In 2015, reflecting the large volume of immigration, about 15 percent of the U.S. population was foreign-born.

Table 1
Immigration Volume and Rates

Years Average Yearly Total – All Countries Immigration Rates (Per 1000 Population) Percent of Average Yearly Total
Great Britain Ireland Scandinavia and Other NW Europe Germany Central and Eastern Europe Southern Europe Asia Africa Australia and Pacific Islands Mexico Other America
1630-1700 2,200 — — — — — — — — — — — —
1700-1780 4,325 — — — — — — — — — — — —
1780-1819 9,900 — — — — — — — — — — — —
1820-1831 14,538 1.3 22 45 12 8 0 2 0 0 — 4 6
1832-1846 71,916 4.3 16 41 9 27 0 1 0 0 — 1 5
1847-1854 334,506 14.0 13 45 6 32 0 0 1 0 — 0 3
1855-1864 160,427 5.2 25 28 5 33 0 1 3 0 — 0 4
1865-1873 327,464 8.4 24 16 10 34 1 1 3 0 0 0 10
1874-1880 260,754 5.6 18 15 14 24 5 3 5 0 0 0 15
1881-1893 525,102 8.9 14 12 16 26 16 8 1 0 0 0 6
1894-1899 276,547 3.9 7 12 12 11 32 22 3 0 0 0 2
1900-1914 891,806 10.2 6 4 7 4 45 26 3 0 0 1 5
1915-1919 234,536 2.3 5 2 8 1 7 21 6 0 1 8 40
1920-1930 412,474 3.6 8 5 8 9 14 16 3 0 0 11 26
1931-1946 50,507 0.4 10 2 9 15 8 12 3 1 1 6 33
1947-1960 252,210 1.5 7 2 6 8 4 10 8 1 1 15 38
1961-1970 332,168 1.7 6 1 4 6 4 13 13 1 1 14 38
1971-1980 449,331 2.1 3 0 1 2 4 8 35 2 1 14 30
1981-1990 733,806 3.1 2 0 1 1 3 2 37 2 1 23 27
1991-2000 909,264 3.4 2 1 1 1 11 2 38 5 1 30 9
2001-2008 1,040,951 4.4 2 0 1 1 9 1 35 7 1 17 27
2009-2015 1,046,459 4.8 1 0 1 1 5 1 40 10 1 14 27

Sources: Years before 1820: Grabbe (1989). 1820-1970: Historical Statistics (1976). 1971-2001: U.S. Immigration and Naturalization Service (various years). 2002-2015: Department of Homeland Security, Office of Immigration Statistics (various years). Note: Entries with a zero indicate less than one-half of one percent. Entries with dashes indicate no information or no immigrants.

Sources of Immigration

The sources of immigration have changed a number of times over the years. In general, four relatively distinct periods can be identified in Table 1. Before 1881, the vast majority of immigrants, almost 86% of the total, arrived from northwest Europe, principally Great Britain, Ireland, Germany, and Scandinavia. During the colonial period, though the data do not allow an accurate breakdown, most immigrants arrived from Britain, with smaller numbers coming from Ireland and Germany. The years between 1881 and 1893 saw a transition in the sources of U.S. immigrants. After 1881, immigrant volume from central, eastern, and southern Europe began to increase rapidly. Between 1894 and 1914, immigrants from southern, central, and eastern Europe accounted for 69% of the total. With the onset of World War I in 1914, the sources of U.S. immigration again changed. From 1915 to the present day, a major source of immigrants to the United States has been the Western Hemisphere, accounting for 46% of the total. In the period between 1915 and 1960, virtually all of the remaining immigrants came from Europe, though no specific part of Europe was dominant. Beginning in the 1960s, immigration from Europe fell off substantially and was replaced by a much larger percentage of immigrants from Asia. Also noteworthy is the rise in immigration from Africa in the twenty-first century. Thus, over the course of U.S. history, the sources of immigration changed from northwestern Europe to southern, central and eastern Europe to the Americas in combination with Europe to the current situation where most immigrants come from the Americas, Asia and Africa.

Duration of Voyage and Method of Travel

Before the 1840s, immigrants arrived on sailing ships. General information on the length of the voyage is unavailable for the colonial and early national periods. By the 1840s, however, the average voyage length for ships from the British Isles was five to six weeks, with those from the European continent taking a week or so longer. In the 1840s, a few steamships began to cross the Atlantic. Over the course of the 1850s, steamships began to account for a larger, though still minority, percentage of immigrant travel. By 1873, virtually all immigrants arrived on steamships (Cohn 2005). As a result, the voyage time fell initially to about two weeks and it continued to decline into the twentieth century. Steamships remained the primary means of travel until after World War II. As a consequence of the boom in airplane travel over the last few decades, most immigrants now arrive via air.

Place of Arrival

Where immigrants landed in the United States varied, especially in the period before the Civil War. During the colonial and early national periods, immigrants arrived not only at New York City but also at a variety of other ports, especially Philadelphia, Boston, New Orleans, and Baltimore. Over time, and especially when most immigrants began arriving via steamship, New York City became the main arrival port. No formal immigration facilities existed at any of the ports until New York City established Castle Garden as its landing depot in 1855. This facility, located at the tip of Manhattan, was replaced in 1892 with Ellis Island, which in turn operated until 1954.

Death Rates during the Voyage

A final aspect to consider is the mortality experienced by the individuals on board the ships. Information taken from the Passenger Lists for the period of the sailing ship between 1820 and 1860 finds a loss rate of one to two percent of the immigrants who boarded (Cohn, 2009). Given the length of the trip and taking into account the ages of the immigrants, this rate represents mortality approximately four times higher than that experienced by non-migrants. Mortality was mainly due to outbreaks of cholera and typhus on some ships, leading to especially high death rates among children and the elderly. There appears to have been little trend over time in mortality or differences in the loss rate by country of origin, though some evidence suggests the loss rate may have differed by port of embarkation. In addition, the best evidence from the colonial period finds a loss rate only slightly higher than that of the antebellum years. In the period after the Civil War, with the change to steamships and the resulting shorter travel time and improved on-board conditions, mortality on the voyages fell, though exactly how much has not been determined.

The Causes of Immigration

Economic historians generally believe no single factor led to immigration. In fact, different studies have tried to explain immigration by emphasizing different factors, with the first important study being done by Thomas (1954). The most recent attempt to comprehensively explain immigration has been by Hatton and Williamson (1998), who focus on the period between 1860 and 1914. Massey (1999) expresses relatively similar views. Hatton and Williamson view immigration from a country during this time as being caused by up to five different factors: (a) the difference in real wages between the country and the United States; (b) the rate of population growth in the country 20 or 30 years before; (c) the degree of industrialization and urbanization in the home country; (d) the volume of previous immigrants from that country or region; and (e) economic and political conditions in the United States. To this list can be added factors not relevant during the 1860 to 1914 period, such as the potato famine, the movement from sail to steam, and the presence or absence of immigration restrictions. Thus, a total of at least eight factors affected immigration.
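In reduced form, factors (a) through (e) can be summarized in a stylized emigration-rate equation of the kind Hatton and Williamson estimate; the notation here is illustrative rather than their exact specification:

\[
m_{it} = \beta_1 \ln\!\left(\frac{w^{US}_{t}}{w_{it}}\right) + \beta_2\, n_{i,t-20} + \beta_3\, u_{it} + \beta_4\, M_{i,t-1} + \beta_5\, z^{US}_{t} + \varepsilon_{it},
\]

where \(m_{it}\) is the emigration rate from country \(i\) in year \(t\), \(w\) denotes real wages, \(n_{i,t-20}\) is population growth 20 (or 30) years earlier, \(u_{it}\) the degree of industrialization and urbanization at home, \(M_{i,t-1}\) the stock of previous emigrants, and \(z^{US}_{t}\) captures economic and political conditions in the United States.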

Causes of Fluctuations in Immigration Levels over Time

As discussed above, the total volume of immigration trended upward until World War I. The initial increase in immigration during the 1830s and 1840s was caused by improvements in shipping, more rapid population growth in Europe, and the potato famine in the latter part of the 1840s, which affected not only Ireland but also much of northwest Europe. As previously noted, the steamship replaced the sailing ship after the Civil War. By substantially reducing the length of the trip and increasing comfort and safety, the steamship encouraged an increase in the volume of immigration. Part of the reason volume increased was that temporary immigration became more likely. In this situation, an individual came to the United States not planning to stay permanently but instead planning to work for a period of time before returning home. All in all, the period from 1865 through 1914, when immigration was not restricted and steamships were dominant, saw an average yearly immigrant volume of almost 529,000. In contrast, average yearly immigration between 1820 and 1860 via sailing ship was only 123,000, and even between 1847 and 1860 was only 266,000.

Another feature of the data in Table 1 is that the yearly volume of immigration fluctuated considerably in the period before 1914. The fluctuations are mainly due to changes in economic and political conditions in the United States. Essentially, periods of low volume corresponded with U.S. economic depressions or times of widespread opposition to immigrants. In particular, volume declined during the nativist outbreak of the 1850s, the major depressions of the 1870s and 1890s, and the Great Depression of the 1930s. As discussed in the next section, the United States imposed widespread restrictions on immigration beginning in the 1920s. Since then, the volume has been subject to more direct determination by the United States government. Thus, fluctuations in the total volume of immigration over time are due to four of the eight factors discussed in the first paragraph of this section: the potato famine, the movement from sail to steam, economic and political conditions in the United States, and the presence or absence of immigration restrictions.

Factors Influencing Immigration Rates from Particular Countries

The other four factors are primarily used to explain changes in the source countries of immigration. A larger difference in real wages between the country and the United States increased immigration from the country because it meant immigrants had more to gain from the move. Because most immigrants were between 15 and 35 years old, a higher population growth 20 or 30 years earlier meant there were more individuals in the potential immigrant group. In addition, a larger volume of young workers in a country reduced job prospects at home and further encouraged immigration. A greater degree of industrialization and urbanization in the home country typically increased immigration because traditional ties with the land were broken during this period, making laborers in the country more mobile. Finally, the presence of a larger volume of previous immigrants from that country or region encouraged more immigration because potential immigrants now had friends or relatives to stay with who could smooth their transition to living and working in the United States.

Based on these four factors, Hatton and Williamson explain the rise and fall in the volume of immigration from a country to the United States. Immigrant volume initially increased as a consequence of more rapid population growth and industrialization in a country and the existence of a large gap in real wages between the country and the United States. Within a number of years, volume increased further due to the previous immigration that had occurred. Volume remained high until various changes in Europe caused immigration to decline. Population growth slowed. Most of the countries had undergone industrialization. Partly due to the previous immigration, real wages rose at home and became closer to those in the United States. Thus, each source country went through stages where immigration increased, reached a peak, and then declined.

Differences in the timing of these effects then led to changes in the source countries of the immigrants. The countries of northwest Europe were the first to experience rapid population growth and to begin industrializing. By the latter part of the nineteenth century, immigration from these countries was in the stage of decline. At about the same time, countries in central, eastern, and southern Europe were experiencing the beginnings of industrialization and more rapid population growth. This model holds directly only through the 1920s, because U.S. government policy changed. At that point, quotas were established on the number of individuals allowed to immigrate from each country. Even so, many countries, especially those in northwest Europe, had passed the point where a large number of individuals wanted to leave and thus did not fill their quotas. The quotas were binding for many other countries in Europe in which pressures to emigrate were still strong. Even today, the countries providing the majority of immigrants to the United States, those south of the United States and in Asia and Africa, are places where population growth is high, industrialization is breaking traditional ties with the land, and real wage differentials with the United States are large.

Immigration Policy and Nativism

This section summarizes the changes in U.S. immigration policy. Only the most important policy changes are discussed and a number of relatively minor changes have been ignored. Interested readers are referred to Le May (1987) and Briggs (1984) for more complete accounts of U.S. immigration policy.

Few Restrictions before 1882

Immigration into the United States was subject to virtually no legal restrictions before 1882. Essentially, anyone who wanted to enter the United States could and, as discussed earlier, no specified arrival areas existed until 1855. Individuals simply got off the ship and went about their business. Little opposition among U.S. citizens to immigration is apparent until about the 1830s. The growing concern at this time was due to the increasing volume of immigration in both absolute terms and relative to the U.S. population, and the fact that more of the arrivals were Catholic and unskilled. The nativist feeling burst into the open during the 1850s when the Know-Nothing political party achieved a great deal of political success in the 1854 off-year elections. This party did not favor restrictions on the number of immigrants, though it did seek to restrict immigrants’ ability to quickly become voting citizens. For a short period of time, the Know-Nothings had an important presence in Congress and many state legislatures. With the downturn in immigration in 1855 and the nation’s attention turning more to the slavery issue, their influence receded.

Chinese Exclusion Act

The first restrictive immigration laws were directed against Asian countries, beginning with the Chinese Exclusion Act of 1882. This law essentially prohibited the immigration of Chinese citizens and it stayed in effect until it was removed during World War II. In 1907, Japanese immigration was substantially reduced through a Gentlemen’s Agreement between Japan and the United States. It is noteworthy that the Chinese Exclusion Act also prohibited the immigration of “convicts, lunatics, idiots” and those individuals who might need to be supported by government assistance. The latter provision was used to some extent during periods of high unemployment, though as noted above, immigration fell anyway because of the lack of jobs.

Literacy Test Adopted in 1917

The desire to restrict immigration to the United States grew over the latter part of the nineteenth century. This growth was due partly to the high volume and rate of immigration and partly to the changing national origins of the immigrants; more began arriving from southern, central, and eastern Europe. In 1907, Congress set up the Immigration Commission, chaired by Senator William Dillingham, to investigate immigration. This body issued a famous report, now viewed as flawed, concluding that immigrants from the newer parts of Europe did not assimilate easily and, in general, blaming them for various economic ills. Attempts at restricting immigration were initially made by proposing a law requiring a literacy test for admission to the United States, and such a law was finally passed in 1917. This same law also virtually banned immigration from any country in Asia. Restrictionists were no doubt displeased when the volume of immigration from Europe resumed its former level after World War I in spite of the literacy test. The movement then turned to explicitly limiting the number of arrivals.

1920s: Quota Act and National Origins Act

The Quota Act of 1921 laid the framework for a fundamental change in U.S. immigration policy. It limited the number of immigrants from Europe to a total of about 350,000 per year. National quotas were established in direct proportion to each country’s presence in the U.S. population in 1910. In addition, the act assigned Asian countries quotas near zero. Three years later in 1924, the National Origins Act instituted a requirement that visas be obtained from an American consulate abroad before immigrating, reduced the total European quota to about 165,000, and changed how the quotas were determined. Now, the quotas were established in direct proportion to each country’s presence in the U.S. population in 1890, though this aspect of the act was not fully implemented until 1929. Because relatively few individuals immigrated from southern, central, and eastern Europe before 1890, the effect of the 1924 law was to drastically reduce the number of individuals allowed to immigrate to the United States from these countries. Yet total immigration to the United States remained fairly high until the Great Depression because neither the 1921 nor the 1924 law restricted immigration from the Western Hemisphere. Thus, it was the combination of the outbreak of World War I and the subsequent 1920s restrictions that caused the Western Hemisphere to become a more important source of immigrants to the United States after 1915, though it should be recalled the rate of immigration fell to low levels after 1930.
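The quota arithmetic described above amounts to allocating a fixed total in proportion to each country’s share of the U.S. population in the base year. A minimal sketch follows; the country shares below are hypothetical, supplied only for illustration, and are not the statutory figures:

    def national_quotas(total_quota, population_shares):
        """Allocate a total quota in proportion to base-year population shares."""
        return {country: round(total_quota * share)
                for country, share in population_shares.items()}

    # Hypothetical base-year shares for three countries, for illustration only.
    shares_1890 = {"Germany": 0.30, "Ireland": 0.20, "Italy": 0.02}
    print(national_quotas(165_000, shares_1890))
    # {'Germany': 49500, 'Ireland': 33000, 'Italy': 3300}

Moving the base year from 1910 back to 1890 shrinks the share, and hence the quota, of a country whose immigrants arrived mostly after 1890, which is the mechanism by which the 1924 act cut immigration from southern, central, and eastern Europe.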

Immigration and Nationality Act of 1965

The last major change in U.S. immigration policy occurred with the passage of the Immigration and Nationality Act of 1965. This law abolished the quotas based on national origins. Instead, a series of preferences were established to determine who would gain entry. The most important preference was given to relatives of U.S. citizens and permanent resident aliens. By the twenty-first century, about two-thirds of immigrants came through these family channels. Preferences were also given to professionals, scientists, artists, and workers in short supply. The 1965 law kept an overall quota on total immigration for Eastern Hemisphere countries, originally set at 170,000, and no more than 20,000 individuals were allowed to immigrate to the United States from any single country. This law was designed to treat all countries equally. Asian countries were treated the same as any other country, so the virtual prohibition on immigration from Asia disappeared. In addition, for the first time the law also limited the number of immigrants from Western Hemisphere countries, with the original overall quota set at 120,000. It is important to note that neither quota was binding because immediate relatives of U.S. citizens, such as spouses, parents, and minor children, were exempt from the quota. In addition, the United States has admitted large numbers of refugees at different times from Vietnam, Haiti, Cuba, and other countries. Finally, many individuals enter the United States on student visas, enroll in colleges and universities, and eventually get companies to sponsor them for a work visa. Thus, the total number of legal immigrants to the United States since 1965 has always been larger than the combined quotas. This law has led to an increase in the volume of immigration and, by treating all countries the same, has led to Asia recently becoming a more important source of U.S. immigrants.

Though features of the 1965 law have been modified since it was enacted, this law still serves as the basis for U.S. immigration policy today. The most important modifications occurred in 1986 when employer sanctions were adopted for those hiring illegal workers. On the other hand, the same law also gave temporary resident status to individuals who had lived illegally in the United States since before 1982. The latter feature led to very high volumes of legal immigration being recorded in 1989, 1990, and 1991.

The Characteristics of the Immigrants

In this section, various characteristics of the immigrant stream arriving at different points in time are discussed. The following characteristics of immigration are analyzed: gender breakdown, age structure, family vs. individual migration, and occupations listed. Virtually all the information is based on the Passenger Lists, a source discussed above.

Gender and Age

Data are presented in Table 2 on the gender breakdown and age structure of immigration. The gender breakdown and age structure remain fairly consistent in the period before 1930. Generally, about 60% of the immigrants were male. As to age structure, about 20% of immigrants were children, 70% were adults up to age 44, and 10% were older than 44. In most of the period and for most countries, immigrants were typically young single males, young couples, or, especially in the era before the steamship, families. For particular countries, such as Ireland, a large number of the immigrants were single women (Cohn, 1995). The primary exception to this generalization was the 1899-1914 period, when 68% of the immigrants were male and adults under 45 accounted for 82% of the total. This period saw the immigration of a large number of single males who planned to work for a period of months or years and return to their homeland, a development made possible by the steamship shortening the voyage and reducing its cost (Nugent, 1992). The characteristics of the immigrant stream since 1930 have been somewhat different. Males have comprised less than one-half of all immigrants. In addition, the percentage of immigrants over age 45 has increased at the expense of those between the ages of 14 and 44.

Table 2
Immigration by Gender and Age

Years Percent Males Percent under 14 years Percent 14–44 years Percent 45 years and over
1820-1831 70 19 70 11
1832-1846 62 24 67 10
1847-1854 59 23 67 10
1855-1864 58 19 71 10
1865-1873 62 21 66 13
1873-1880 63 19 69 12
1881-1893 61 20 71 10
1894-1898 57 15 77 8
1899-1914 68 12 82 5
1915-1917 59 16 74 10
1918-1930 56 18 73 9
1931-1946 40 15 67 17
1947-1960 45 21 64 15
1961-1970 45 25 61 14
1971-1980 46 24 61 15
1981-1990 52 18 66 16
1991-2000 51 17 65 18
2001-2008 45 15 64 21
2009-2015 45 15 61 24

Notes: From 1918-1970, the age breakdown is “Under 16” and “16-44.” From 1971 to 1998, the age breakdown is “Under 15” and “15-44.” For 2001-2015, it is again “Under 16” and “16-44.”

Sources: 1820-1970: Historical Statistics (1976). Years since 1970: U.S. Immigration and Naturalization Service (various years). 2002-2015: Department of Homeland Security: Office of Immigration Statistics (various years).

Occupations

Table 3 presents data on the percentage of immigrants who did not report an occupation and the percentage breakdown of those reporting an occupation. The percentage not reporting an occupation declined through 1914. The small percentages between 1894 and 1914 are a reflection of the large number of single males who arrived during this period. As is apparent, the classification scheme for occupations has changed over time. Though there is no perfect way to correlate the occupation categories used in the different time periods, skilled workers comprised about one-fourth of the immigrant stream through 1970. The immigration of farmers was important before the Civil War but declined steadily over time. The percentage of laborers has varied over time, though during some time periods they comprised one-half or more of the immigrants. The highest percentages of laborers occurred during good years for the U.S. economy (1847-54, 1865-73, 1881-93, 1899-1914), because laborers possessed the fewest skills and would have an easier time finding a job when the U.S. economy was strong. Commercial workers, mainly merchants, were an important group of immigrants very early when immigrant volume was low, but their percentage fell substantially over time. Professional workers were always a small part of U.S. immigration until the 1930s. Since 1930, these workers have comprised a larger percentage of immigrants reporting an occupation.

Table 3
Immigration by Occupation

Year Percent with no occup. listed Percent of immigrants with an occupation in each category
Professional Commercial Skilled Farmers Servants Laborers Misc.
1820-1831 61 3 28 30 23 2 14
1832-1846 56 1 12 27 33 2 24
1847-1854 54 0 6 18 33 2 41
1855-1864 53 1 12 23 23 4 37 0
1865-1873 54 1 6 24 18 7 44 1
1873-1880 47 2 4 24 18 8 40 5
1881-1893 49 1 3 20 14 9 51 3
1894-1898 38 1 4 25 12 18 37 3
Professional, technical, and kindred workers Farmers and farm managers Managers, officials, and proprietors, exc. farm Clerical, sales, and kindred workers Craftsmen, foremen, operatives, and kindred workers Private HH workers Service workers, exc. private household Farm laborers and foremen Laborers, exc. farm and mine
1899-1914 26 1 2 3 2 18 15 2 26 33
1915-1919 37 5 4 5 5 21 15 7 11 26
1920-1930 39 4 5 4 7 24 17 6 8 25
1931-1946 59 19 4 15 13 21 13 6 2 7
1947-1960 53 16 5 5 17 31 8 6 3 10
1961-1970 56 23 2 5 17 25 9 7 4 9
1971-1980 59 25 — a 8 12 36 — b 15 5 — c
1981-1990 56 14 — a 8 12 37 — b 22 7 — c
1991-2000 61 17 — a 7 9 23 — b 14 30 — c
2001-2008 76 45 — a — d 14 21 — b 18 5 — c
2009-2015 76 46 — a — d 12 19 — b 19 5 — c

a – included with “Farm laborers and foremen”; b – included with “Service workers, etc.”; c – included with “Craftsmen, etc.”; d – included with “Professional.”

Sources: 1820-1970: Historical Statistics (1976). Years since 1970: U.S. Immigration and Naturalization Service (various years). 2002-2015: Department of Homeland Security: Office of Immigration Statistics (various years). From 1970 through 2001, the INS has provided the following occupational categories: Professional, specialty, and technical (listed above under “Professional”); Executive, administrative, and managerial (listed above under “Managers, etc.”); Sales; Administrative support (these two are combined and listed above under “Clerical, etc.”); Precision production, craft, and repair; Operator, fabricator, and laborer (these two are combined and listed above under “Craftsmen, etc.”); Farming, forestry, and fishing (listed above under “Farm laborers and foremen”); and Service (listed above under “Service workers, etc.”). Since 2002, the Department of Homeland Security has combined the Professional and Executive categories. Note: Entries with a zero indicate less than one-half of one percent. Entries with dashes indicate no information or no immigrants.

Skill Levels

The skill level of the immigrant stream is important because it potentially affects the U.S. labor force, an issue considered in the next section. Before turning to this issue, a number of comments can be made concerning the occupational skill level of the U.S. immigration stream. First, skill levels fell substantially in the period before the Civil War. Between 1820 and 1831, only 39% of the immigrants were farmers, servants, or laborers, the least skilled groups. Though the data are not as complete, immigration during the colonial period was almost certainly at least this skilled. By the 1847-54 period, however, the less-skilled percentage had increased to 76%. Second, the less-skilled percentage did not change dramatically late in the nineteenth century when the source of immigration changed from northwest Europe to other parts of Europe. Comparing 1873-80 with 1899-1914, both periods of high immigration, farmers, servants, and laborers accounted for 66% of the immigrants in the former period and 78% in the latter period. The second figure is, however, similar to that during the 1847-54 period. Third, the restrictions on immigration imposed during the 1920s had a sizable effect on the skill level of the immigrant stream. Between 1930 and 1970, only 31-34% of the immigrants were in the least-skilled group.
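The least-skilled shares cited in this paragraph come directly from Table 3, by summing the farmer, servant, and laborer percentages among immigrants reporting an occupation. A quick check:

    # Percentages from Table 3: (farmers, servants, laborers)
    least_skilled = {
        "1820-1831": (23, 2, 14),
        "1847-1854": (33, 2, 41),
        "1873-1880": (18, 8, 40),
    }
    for years, shares in least_skilled.items():
        print(years, sum(shares))  # 39, 76, and 66 percent, as in the text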

Fourth, a deterioration in immigrant skills appears in the numbers in the 1980s and 1990s, and then an improvement appears since 2001. Both changes may be an illusion. In Table 3 for the 1980s and 1990s, the percentage in the “Professional” category falls while the percentages in the “Service” and “Farm workers” categories rise. These changes are, however, due to the amnesty for illegal immigrants resulting from the 1986 law. The amnesty led to the recorded volume of immigration in 1989, 1990, and 1991 being much higher than typical, and most of the “extra” immigrants recorded their occupation as “Service” or “Farm laborer.” If these years are ignored, then little change occurred in the occupational distribution of the immigrant stream during the 1980s and 1990s. Two caveats, however, should be noted. First, the illegal immigrants cannot, of course, be ignored. Second, the skill level of the U.S. labor force was improving over the same period. Thus, relative to the U.S. labor force and including illegal immigration, it is apparent the occupational skill level of the U.S. immigrant stream declined during the 1980s and 1990s. Turning to the twenty-first century, the percentage of the legal immigrant stream in the highest-skilled category appears to have increased. This conclusion is also not certain because the changes that occurred in how occupations were categorized beginning in 2001 make a straightforward comparison potentially inexact. This uncertainty is increased by the growing percentage of immigrants for which no occupation is reported. It is not clear whether a larger percentage of those arriving actually did not work (recall that a growing percentage of legal immigrants are somewhat older) or if more simply did not list an occupation. Overall, detecting changes in the skill level of the legal immigrant stream since about 1930 is fraught with difficulty.

The Effects of Immigration on the United States Economy

Though immigration has effects on the country from which the immigrants leave, this section only examines the effects on the United States, mainly those occurring over longer periods of time. Over short periods of time, sizeable and potentially negative effects can occur in a specific area when there is a huge influx of immigrants. A large number of arrivals in a short period of time in one city can cause school systems to become overcrowded, housing prices and welfare payments to increase, and jobs to become difficult to obtain. Yet most economists believe the effects of immigration over time are much less harmful than commonly supposed and, in many ways, are beneficial. The following longer-term issues are discussed: the effects of immigration on the overall wage rate of U.S. workers; the effects on the wages of particular groups of workers, such as those who are unskilled; and the effects on the rate of economic growth, that is, the standard of living, in the United States. Determining the effects of immigration on the United States is complex and virtually none of the conclusions presented here are without controversy.

Immigration’s Impact on Overall Wage Rates

Immigration is popularly thought to lower the overall wage rate in the United States by increasing the supply of individuals looking for jobs. This effect may occur in an area over a fairly short period of time. Over longer time periods, however, wages will only fall if the amounts of other resources don’t change. Wages will not fall if the immigrants bring sufficient amounts of other resources with them, such as capital, or cause the amount of other resources in the economy to increase sufficiently. For example, historically the large-scale immigration from Europe contributed to rapid westward expansion of the United States during most of the nineteenth century. The westward expansion, however, increased the amounts of land and natural resources that were available, factors that almost certainly kept immigration from lowering wage rates. Immigrants also increase the amounts of other resources in the economy through running their own businesses, which both historically and in recent times has occurred at a greater rate among immigrants than native workers. By the beginning of the twentieth century, the westward frontier had been settled. A number of researchers have estimated that immigration did lower wages at this time (Hatton and Williamson, 1998; Goldin, 1994), though others have criticized these findings (Carter and Sutch, 1999). For the recent time period, most studies have found little effect of immigration on the level of wages, though a few have found an effect (Borjas, 1999).

Even if immigration leads to a fall in the wage rate, it does not follow that individual workers are worse off. Workers typically receive income from sources other than their own labor. If wages fall, then many other resource prices in the economy rise. For example, immigration increases the demand for housing and land and existing owners benefit from an increase in the current value of their property. Whether any individual worker is better off or worse off in this case is not easy to determine. It depends on the amounts of other resources each individual possesses.

Immigration’s Impact on Wages of Unskilled Workers

Consider the second issue, the effects of immigration on the wages of unskilled workers. If the immigrants arriving in the country are primarily unskilled, then the larger number of unskilled workers could cause their wage to fall if the overall demand for these workers doesn’t change. A requirement for this effect to occur is that the immigrants be less skilled than the U.S. labor force they enter. As discussed above, during colonial times immigrant volume was small and the immigrants were probably more skilled than the existing U.S. labor force. During the 1830s and 1840s, the volume and rate of immigration increased substantially and the skill level of the immigrant stream fell to approximately match that of the native labor force. Instead of lowering the wages of unskilled workers relative to those of skilled workers, however, the large inflow apparently led to little change in the wages of unskilled workers, while some skilled workers lost and others gained. The explanation for these results is that the larger number of unskilled workers resulting from immigration was a factor in employers adopting new methods of production that used more unskilled labor. As a result of this technological change, the demand for unskilled workers increased so their wage did not decline. As employers adopted these new machines, however, skilled artisans who had previously done many of these jobs, such as iron casting, suffered losses. Other skilled workers, such as many white-collar workers who were not in direct competition with the immigrants, gained. Some evidence exists to support a differential effect on skilled workers during the antebellum period (Williamson and Lindert, 1980; Margo, 2000). After the Civil War, however, the skill level of the immigrant stream was close to that of the native labor force, so immigration probably did not further affect the wage structure through the 1920s (Carter and Sutch, 1999).

Impact since World War II

The lower volume of immigration in the period from 1930 through 1960 meant immigration had little effect on the relative wages of different workers during these years. With the resumption of higher volumes of immigration after 1965, however, and with the immigrants’ skill levels being low through 2000, an effect on relative wages again became possible. In fact, the relative wages of high-school dropouts in the United States deteriorated during the same period, especially after the mid-1970s. Researchers who have studied the question have concluded that immigration accounted for about one-fourth of the wage deterioration experienced by high-school dropouts during the 1980s, though some researchers find a lower effect and others a higher one (Friedberg and Hunt, 1995; Borjas, 1999). Wages are determined by a number of factors other than immigration. In this case, it is thought the changing nature of the economy, such as the widespread use of computers increasing the benefits to education, bears more of the blame for the decline in the relative wages of high-school dropouts.

Economic Benefits from Immigration

Beyond any effect on wages, there are a number of ways in which immigration might improve the overall standard of living in an economy. First, immigrants may engage in inventive or scientific activity, with the result being a gain to everyone. Evidence exists for both the historical and more recent periods that the United States has attracted individuals with an inventive/scientific nature. The United States has always been a leader in these areas. Individuals are more likely to be successful in such an environment than in one where these activities are not as highly valued. Second, immigrants expand the size of markets for various goods, which may lower firms’ average costs as firm size increases. The result would be a decrease in the price of the goods in question. Third, most individuals immigrate between the ages of 15 and 35, so the expenses of their basic schooling are paid abroad. In the past, most immigrants, being of working age, immediately got a job. Thus, immigration increased the percentage of the population in the United States that worked, a factor that raises the average standard of living in a country. Even in more recent times, most immigrants work, though the increased proportion of older individuals in the immigrant stream means the positive effects from this factor may be lower than in the past. Fourth, while immigrants may place a strain on government services in an area, such as the school system, they also pay taxes. Even illegal immigrants directly pay sales taxes on their purchases of goods and indirectly pay property taxes through their rent. Finally, the fact that immigrants are less likely to immigrate to the United States during periods of high unemployment is also beneficial. By reducing the number of people looking for jobs during these periods, this factor increases the likelihood U.S. citizens will be able to find a job.

The Experience of Immigrants in the U.S. Labor Market

This section examines the labor market experiences of immigrants in the United States. The issue of discrimination against immigrants in jobs is investigated along with the issue of the success immigrants experienced over time. Again, the issues are investigated for the historical period of immigration as well as more recent times. Interested readers are directed to Borjas (1999), Ferrie (1999), Carter and Sutch (1999), Hatton and Williamson (1998), and Friedberg and Hunt (1995) for more technical discussions.

Did Immigrants Face Labor Market Discrimination?

Discrimination can take various forms. The first form is wage discrimination, in which a worker of one group is paid a wage lower than an equally productive worker of another group. Empirical tests of this hypothesis generally find this type of discrimination has not existed. At any point in time, immigrants have been paid the same wage for a specific job as a native worker. If immigrants generally received lower wages than native workers, the differences reflected the lower skills of the immigrants. Historically, as discussed above, the skill level of the immigrant stream was similar to that of the native labor force, so wages did not differ much between the two groups. During more recent years, the immigrant stream has been less skilled than the native labor force, leading to the receipt of lower wages by immigrants. A second form of discrimination is in the jobs an immigrant is able to obtain. For example, in 1910, immigrants accounted for over half of the workers in various jobs; examples are miners, apparel workers, workers in steel manufacturing, meat packers, bakers, and tailors. If a reason for the employment concentration was that immigrants were kept out of alternative higher paying jobs, then the immigrants would suffer. This type of discrimination may have occurred against Catholics during the 1840s and 1850s and against the immigrants from central, southern, and eastern Europe after 1890. In both cases, it is possible the immigrants suffered because they could not obtain higher paying jobs. In more recent years, reports of immigrants trained as doctors, say, in their home country but not allowed to easily practice as such in the United States, may represent a similar situation. Yet the open nature of the U.S. schooling system and economy has been such that this effect usually did not impact the fortunes of the immigrants’ children or did so at a much smaller rate.

Wage Growth, Job Mobility, and Wealth Accumulation

Another aspect of how immigrants fared in the U.S. labor market is their experiences over time with respect to wage growth, job mobility, and wealth accumulation. A study done by Ferrie (1999) for immigrants arriving between 1840 and 1850, the period when the inflow of immigrants relative to the U.S. population was the highest, found immigrants from Britain and Germany generally improved their job status over time. By 1860, over 75% of the individuals reporting a low-skilled job on the Passenger Lists had moved up into a higher-skilled job, while fewer than 25% of those reporting a high-skilled job on the Passenger Lists had moved down into a lower-skilled job. Thus, the job mobility for these individuals was high. For immigrants from Ireland, the experience was quite different; the percentage of immigrants moving up was only 40% and the percentage moving down was over 50%. It isn’t clear if the Irish did worse because they had less education and fewer skills or whether the differences were due to some type of discrimination against them in the labor market. As to wealth, all the immigrant groups succeeded in accumulating larger amounts of wealth the longer they were in the United States, though their wealth levels fell short of those enjoyed by natives. Essentially, the evidence indicates antebellum immigrants were quite successful over time in matching their skills to the available jobs in the U.S. economy.

The extent to which immigrants had success over time in the labor market in the period since the Civil War is not clear. Most researchers have thought that immigrants who arrived before 1915 had a difficult time. For example, Hanes (1996) concludes that immigrants, even those from northwest Europe, had slower earnings growth over time than natives, a finding he argues was due to poor assimilation. Hatton and Williamson (1998), on the other hand, criticize these findings on technical grounds and conclude that immigrants assimilated relatively easily into the U.S. labor market. For the period after World War II, Chiswick (1978) argues that immigrants’ wages have increased relative to those of natives the longer the immigrants have been in the United States. Borjas (1999) has criticized Chiswick’s finding by suggesting it is caused by a decline in the skills possessed by the arriving immigrants between the 1950s and the 1990s. Borjas finds that 25- to 34-year-old male immigrants who arrived in the late 1950s had wages 9% lower than comparable native males, but by 1970 had wages 6% higher. In contrast, those arriving in the late 1970s had wages 22% lower at entry. By the late 1990s, their wages were still 12% lower than comparable natives. Overall, the degree of success experienced by immigrants in the U.S. labor market remains an area of controversy.

References

Borjas, George J. Heaven’s Door: Immigration Policy and the American Economy. Princeton: Princeton University Press, 1999.

Briggs, Vernon M., Jr. Immigration and the American Labor Force. Baltimore: Johns Hopkins University Press, 1984.

Carter, Susan B., and Richard Sutch. “Historical Perspectives on the Economic Consequences of Immigration into the United States.” In The Handbook of International Migration: The American Experience, edited by Charles Hirschman, Philip Kasinitz, and Josh DeWind, 319-341. New York: Russell Sage Foundation, 1999.

Carter, Susan B., et al. Historical Statistics of the United States: Earliest Times to the Present – Millennial Edition. Volume 1: Population. New York: Cambridge University Press, 2006.

Chiswick, Barry R. “The Effect of Americanization on the Earnings of Foreign-Born Men.” Journal of Political Economy 86 (1978): 897-921.

Cohn, Raymond L. “A Comparative Analysis of European Immigrant Streams to the United States during the Early Mass Migration.” Social Science History 19 (1995): 63-89.

Cohn, Raymond L. “The Transition from Sail to Steam in Immigration to the United States.” Journal of Economic History 65 (2005): 479-495.

Cohn, Raymond L. Mass Migration under Sail: European Immigration to the Antebellum United States. New York: Cambridge University Press, 2009.

Erickson, Charlotte J. Leaving England: Essays on British Emigration in the Nineteenth Century. Ithaca: Cornell University Press, 1994.

Ferenczi, Imre. International Migrations. New York: Arno Press, 1970.

Ferrie, Joseph P. Yankeys Now: Immigrants in the Antebellum United States, 1840-1860. New York: Oxford University Press, 1999.

Friedberg, Rachel M., and Jennifer Hunt. “The Impact of Immigrants on Host Country Wages, Employment and Growth.” Journal of Economic Perspectives 9 (1995): 23-44.

Goldin, Claudia. “The Political Economy of Immigration Restrictions in the United States, 1890 to 1921.” In The Regulated Economy: A Historical Approach to Political Economy, edited by Claudia Goldin and Gary D. Libecap, 223-257. Chicago: University of Chicago Press, 1994.

Grabbe, Hans-Jürgen. “European Immigration to the United States in the Early National Period, 1783-1820.” Proceedings of the American Philosophical Society 133 (1989): 190-214.

Hanes, Christopher. “Immigrants’ Relative Rate of Wage Growth in the Late Nineteenth Century.” Explorations in Economic History 33 (1996): 35-64.

Hansen, Marcus L. The Atlantic Migration, 1607-1860. Cambridge, MA: Harvard University Press, 1940.

Hatton, Timothy J., and Jeffrey G. Williamson. The Age of Mass Migration: Causes and Economic Impact. New York: Oxford University Press, 1998.

Jones, Maldwyn Allen. American Immigration. Second edition. Chicago: University of Chicago Press, 1960.

Le May, Michael C. From Open Door to Dutch Door: An Analysis of U.S. Immigration Policy Since 1820. New York: Praeger, 1987.

Margo, Robert A. Wages and Labor Markets in the United States, 1820-1860. Chicago: University of Chicago Press, 2000.

Massey, Douglas S. “Why Does Immigration Occur? A Theoretical Synthesis.” In The Handbook of International Migration: The American Experience, edited by Charles Hirschman, Philip Kasinitz, and Josh DeWind, 34-52. New York: Russell Sage Foundation, 1999.

Miller, Kerby A. Emigrants and Exiles: Ireland and the Irish Exodus to North America. Oxford: Oxford University Press, 1985.

Nugent, Walter. Crossings: The Great Transatlantic Migrations, 1870-1914. Bloomington and Indianapolis: Indiana University Press, 1992.

Taylor, Philip. The Distant Magnet. New York: Harper & Row, 1971.

Thomas, Brinley. Migration and Economic Growth: A Study of Great Britain and the Atlantic Economy. Cambridge, U.K.: Cambridge University Press, 1954.

U.S. Department of Commerce. Historical Statistics of the United States. Washington, DC, 1976.

U.S. Immigration and Naturalization Service. Statistical Yearbook of the Immigration and Naturalization Service. Washington, DC: U.S. Government Printing Office, various years.

Walker, Mack. Germany and the Emigration, 1816-1885. Cambridge, MA: Harvard University Press, 1964.

Williamson, Jeffrey G., and Peter H. Lindert. American Inequality: A Macroeconomic History. New York: Academic Press, 1980.

Citation: Cohn, Raymond L. “Immigration to the United States”. EH.Net Encyclopedia, edited by Robert Whaples. Revised August 2, 2017. URL http://eh.net/encyclopedia/immigration-to-the-united-states/

Smoot-Hawley Tariff

Anthony O’Brien, Lehigh University

The Smoot-Hawley Tariff of 1930 was the subject of enormous controversy at the time of its passage and remains one of the most notorious pieces of legislation in the history of the United States. In the popular press and in political discussions the usual assumption is that the Smoot-Hawley Tariff was a policy disaster that significantly worsened the Great Depression. During the controversy over passage of the North American Free Trade Agreement (NAFTA) in the 1990s, Vice President Al Gore and billionaire former presidential candidate Ross Perot met in a debate on the Larry King Live program. To help make his point that Perot’s opposition to NAFTA was wrong-headed, Gore gave Perot a framed portrait of Sen. Smoot and Rep. Hawley. Gore assumed the audience would consider Smoot and Hawley to have been exemplars of a foolish protectionism. Although the popular consensus on Smoot-Hawley is clear, the verdict among scholars is more mixed, particularly with respect to the question of whether the tariff significantly worsened the Great Depression.

Background to Passage of the Tariff

The Smoot-Hawley Tariff grew out of the campaign promises of Herbert Hoover during the 1928 presidential election. Hoover, the Republican candidate, had pledged to help farmers by raising tariffs on imports of farm products. Although the 1920s were generally a period of prosperity in the United States, this was not true of agriculture; average farm incomes actually declined between 1920 and 1929. During the campaign Hoover had focused on plans to raise tariffs on farm products, but the tariff plank in the 1928 Republican Party platform had actually referred to the potential of more far-reaching increases:

[W]e realize that there are certain industries which cannot now successfully compete with foreign producers because of lower foreign wages and a lower cost of living abroad, and we pledge the next Republican Congress to an examination and where necessary a revision of these schedules to the end that American labor in the industries may again command the home market, may maintain its standard of living, and may count upon steady employment in its accustomed field.

In a longer perspective, the Republican Party had been in favor of a protective tariff since its founding in the 1850s. The party drew significant support from manufacturing interests in the Midwest and Northeast that believed they benefited from high tariff barriers against foreign imports. Although the free trade arguments dear to most economists were espoused by few American politicians during the 1920s, the Democratic Party was generally critical of high tariffs. In the 1920s the Democratic members of Congress tended to represent southern agricultural interests — which saw high tariffs as curtailing foreign markets for their exports, particularly cotton — or unskilled urban workers — who saw the tariff as driving up the cost of living.

The Republicans did well in the 1928 election, picking up 30 seats in the House — giving them a 267 to 167 majority — and seven seats in the Senate — giving them a 56 to 39 majority. Hoover easily defeated the Democratic presidential candidate, New York Governor Al Smith, capturing 58 percent of the popular vote and 444 of 531 votes in the Electoral College. Hoover took office on March 4, 1929 and immediately called a special session of Congress to convene on April 15 for the purpose of raising duties on agricultural products. Once the session began it became clear, however, that the Republican Congressional leadership had in mind much more sweeping tariff increases.

The House concluded its work relatively quickly and passed a bill on May 28 by a vote of 264 to 147. The bill faced a considerably more difficult time in the Senate. A bloc of Progressive Republicans, representing midwestern and western states, held the balance of power in the Senate. Some of these Senators had supported the third-party candidacy of Wisconsin Senator Robert LaFollette during the 1924 presidential election and they were much less protectionist than the Republican Party as a whole. It proved impossible to put together a majority in the Senate to pass the bill and the special session ended in November 1929 without a bill being passed.

By the time Congress reconvened the following spring the Great Depression was well underway. Economists date the onset of the Great Depression to the cyclical peak of August 1929, although the stock market crash of October 1929 is the more traditional beginning. By the spring of 1930 it was already clear that the downturn would be severe. The impact of the Depression helped to secure the final few votes necessary to put together a slim majority in the Senate in favor of passage of the bill. Final passage in the Senate took place on June 13, 1930 by a vote of 44 to 42. Final passage took place in the House the following day by a vote of 245 to 177. The vote was largely on party lines. Republicans in the House voted 230 to 27 in favor of final passage. Ten of the 27 Republicans voting no were Progressives from Wisconsin and Minnesota. Democrats voted 150 to 15 against final passage. Ten of the 15 Democrats voting for final passage were from Louisiana or Florida and represented citrus or sugar interests that received significant new protection under the bill.

President Hoover had expressed reservations about the wide-ranging nature of the bill and had privately expressed fears that the bill might provoke retaliation from America’s trading partners. He received a petition signed by more than 1,000 economists, urging him to veto the bill. Ultimately, he signed the Smoot-Hawley bill into law on June 17, 1930.

Tariff Levels under Smoot-Hawley

Calculating the extent to which Smoot-Hawley raised tariffs is not straightforward. The usual summary measure of tariff protection is the ratio of total tariff duties collected to the value of imports. This measure is misleading when applied to the early 1930s. Most of the tariffs in the Smoot-Hawley bill were specific — such as $1.125 per ton of pig iron — rather than ad valorem — or a percentage of the value of the product. During the early 1930s the prices of many products declined, causing the specific tariff to become an increasing percentage of the value of the product. The chart below shows the ratio of import duties collected to the value of dutiable imports. The increase shown for the early 1930s was partly due to declining prices and, therefore, exaggerates the effects of the Smoot-Hawley rate increases.

Source: U.S. Bureau of the Census, Historical Statistics of the United States, Colonial Times to 1970, Washington, D.C.: USGPO, 1975, Series 212.
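
To see the mechanics behind this distortion, consider a minimal Python sketch. It applies the $1.125-per-ton pig iron duty mentioned above to a falling price; the price levels themselves are hypothetical, chosen only for illustration.

```python
# Why a specific (per-unit) duty becomes a heavier ad valorem burden as
# prices fall. The $1.125/ton pig iron duty is from the text; the prices
# are hypothetical.
def ad_valorem_equivalent(specific_duty: float, price: float) -> float:
    """Express a per-unit duty as a percentage of the product's price."""
    return 100.0 * specific_duty / price

DUTY = 1.125  # dollars per ton of pig iron
for price in (25.0, 20.0, 15.0):  # hypothetical dollars per ton
    pct = ad_valorem_equivalent(DUTY, price)
    print(f"price ${price:.2f}/ton -> duty equivalent to {pct:.1f}% ad valorem")
```

As the hypothetical price falls from $25 to $15 per ton, the same duty rises from 4.5 to 7.5 percent of the product’s value, which is why the duties-to-imports ratio overstates the Smoot-Hawley increases during the deflationary early 1930s.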

A more accurate measure of the increase in tariff rates attributable to Smoot-Hawley can be found in a study carried out by the U.S. Tariff Commission. This study calculated the ad valorem rates that would have prevailed on actual U.S. imports in 1928, had the Smoot-Hawley rates been in effect then. These rates were compared with the rates prevailing under the Tariff Act of 1922, known as the Fordney-McCumber Tariff. The results are reproduced in Table 1 for the broad product categories used in tariff schedules and for total dutiable imports.

Table 1
Tariff Rates under Fordney-McCumber vs. Smoot-Hawley

Equivalent ad valorem rates
Product Fordney-McCumber Smoot-Hawley
Chemicals 29.72% 36.09%
Earthenware and Glass 48.71 53.73
Metals 33.95 35.08
Wood 24.78 11.73
Sugar 67.85 77.21
Tobacco 63.09 64.78
Agricultural Products 22.71 35.07
Spirits and Wines 38.83 47.44
Cotton Manufactures 40.27 46.42
Flax, Hemp, and Jute 18.16 19.14
Wool and Manufactures 49.54 59.83
Silk Manufactures 56.56 59.13
Rayon Manufactures 52.33 53.62
Paper and Books 24.74 26.06
Sundries 36.97 28.45
Total 38.48 41.14

Source: U.S. Tariff Commission, The Tariff Review, July 1930, Table II, p. 196.

By this measure, Smoot-Hawley raised average tariff rates by about 2½ percentage points from the already high rates prevailing under the Fordney-McCumber Tariff of 1922.

The Basic Macroeconomics of the Tariff

Economists are almost uniformly critical of tariffs. One of the bedrock principles of economics is that voluntary trade makes everyone involved better off. For the U.S. government to interfere with trade between Canadian lumber producers and U.S. lumber importers — as it did under Smoot-Hawley by raising the tariff on lumber imports — makes both parties to the trade worse off. In a larger sense, it also hurts the efficiency of the U.S. economy by making it rely on higher priced U.S. lumber rather than less expensive Canadian lumber.

But what is the effect of a tariff on the overall level of employment and production in an economy? The usual answer is that a tariff will leave the overall level of employment and production in an economy largely unaffected. Although the popular view is very different, most economists do not believe that tariffs either create jobs or destroy jobs in aggregate. Economists believe that the overall level of jobs and production in the economy is determined by such things as the capital stock, the population, the state of technology, and so on. These factors are not generally affected by tariffs. So, for instance, a tariff on imports of lumber might drive up housing prices and cause a reduction in the number of houses built. But economists believe that the unemployment in the housing industry will not be long-lived. Economists are somewhat divided on why this is true. Some believe that the economy automatically adjusts rapidly to reallocate labor and machinery that are displaced from one use — such as making houses — into other uses. Other economists believe that this adjustment does not take place automatically, but can be brought about through active monetary or fiscal policy. In either view, the economy is seen as ordinarily being at its so-called full-employment or potential level and deviating from that level only for brief periods of time. Tariffs have the ability to change the mix of production and the mix of jobs available in an economy, but not to change the overall level of production or the overall level of jobs. The macroeconomic impact of tariffs is therefore very limited.

In the case of the Smoot-Hawley Tariff, however, the U.S. economy was in depression in 1930. No active monetary or fiscal policies were carried out and the economy was not making much progress back to full employment. In fact, the cyclical trough was not reached until March 1933 and the economy did not return to full employment until 1941. Under these circumstances is it possible for Smoot-Hawley to have had a significant impact on the level of employment and production and would that impact have been positive or negative?

A simple view of the determination of equilibrium Gross Domestic Product (Y) holds that it is equal to the sum of aggregate expenditures. Aggregate expenditures are divided into four categories: spending by households on consumption goods (C), spending by households and firms on investment goods — such as houses, and machinery and equipment (I), spending by the government on goods and services (G), and net exports, which are the difference between spending on exports by foreign households and firms (EX) and spending on imports by domestic households and firms (IM). So, in the basic algebra of the principles of economics course, at equilibrium, Y = C + I + G + (EX – IM).

The usual story of the Great Depression is that some combination of falling consumption spending and falling investment spending had resulted in the equilibrium level of GDP being far below its full employment level. By raising tariffs on imports, Smoot-Hawley would have reduced the level of imports, but would not have had any direct effect on exports. This simple analysis seems to lead to a surprising conclusion: by reducing imports, Smoot-Hawley would have raised the level of aggregate expenditures in the economy (by increasing net exports or (EX – IM)) and, therefore, increased the level of GDP relative to what it would otherwise have been.

A potential flaw in this argument is that it assumes that Smoot-Hawley did not have a negative impact on U.S. exports. In fact, it may have had a negative impact on exports if foreign governments were led to retaliate against the passage of Smoot-Hawley by raising tariffs on imports of U.S. goods. If net exports fell as a result of Smoot-Hawley, then the tariff would have had a negative macroeconomic impact; it would have made the Depression worse. In 1934 Joseph Jones wrote a very influential book in which he argued that widespread retaliation against Smoot-Hawley had, in fact, taken place. Jones’s book helped to establish the view among the public and among scholars that the passage of Smoot-Hawley had been a policy blunder that had worsened the Great Depression.
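
To make the direction of these effects concrete, here is a minimal sketch of the expenditure identity. The numbers are purely hypothetical, chosen only to show the sign of each effect, not to represent actual U.S. data.

```python
# Equilibrium expenditure: Y = C + I + G + (EX - IM).
# All figures are hypothetical "billions," for illustration only.
def gdp(C: float, I: float, G: float, EX: float, IM: float) -> float:
    return C + I + G + (EX - IM)

base = dict(C=70.0, I=15.0, G=10.0, EX=6.0, IM=6.0)
tariff_no_retaliation = dict(base, IM=5.0)            # imports fall, exports untouched
tariff_with_retaliation = dict(base, IM=5.0, EX=4.5)  # partners curb U.S. exports too

print(gdp(**base))                      # 95.0
print(gdp(**tariff_no_retaliation))     # 96.0: net exports rise, Y rises
print(gdp(**tariff_with_retaliation))   # 94.5: net exports fall, Y falls
```

If retaliation cuts exports by more than the tariff cuts imports, net exports fall and the tariff is contractionary, which is precisely the channel Jones emphasized.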

Did Retaliation Take Place?

This is a simplified analysis and there are other ways in which Smoot-Hawley could have had a macroeconomic impact, such as by increasing the price level in the U.S. relative to foreign price levels. But in recent years there has been significant scholarly interest in the question of whether Smoot-Hawley did provoke significant retaliation and, therefore, made the Depression worse. Clearly it is possible to overstate the extent of retaliation and Jones almost certainly did. For instance, the important decision by Britain to abandon a century-long commitment to free trade and raise tariffs in 1931 was not affected to any significant extent by Smoot-Hawley.

On the other hand, the case for retaliation by Canada is fairly clear. Then, as now, Canada was easily the largest trading partner of the United States. In 1929, 18 percent of U.S. merchandise exports went to Canada and 11 percent of U.S. merchandise imports came from Canada. At the time of the passage of Smoot-Hawley the Canadian Prime Minister was William Lyon Mackenzie King of the Liberal Party. King had been in office for most of the period since 1921 and had several times reduced Canadian tariffs. He held the position that tariffs should be used to raise revenue, but should not be used for protection. In early 1929 he was contemplating pushing for further tariff reductions, but this option was foreclosed by Hoover’s call for a special session of Congress to consider tariff increases.

As Smoot-Hawley neared passage King came under intense pressure from the Canadian Conservative Party and its leader, Richard Bedford Bennett, to retaliate. In May 1930 Canada imposed so-called countervailing duties on 16 products imported from the United States. The duties on these products — which represented about 30 percent of the value of all U.S. merchandise exports to Canada — were raised to the levels charged by the United States. In a speech, King made clear the retaliatory nature of these increases:

[T]he countervailing duties … [are] designed to give a practical illustration to the United States of the desire of Canada to trade at all times on fair and equal terms…. For the present we raise the duties on these selected commodities to the level applied against Canadian exports of the same commodities by other countries, but at the same time we tell our neighbour … we are ready in the future … to consider trade on a reciprocal basis….

In the election campaign the following July, Smoot-Hawley was a key issue. Bennett, the Conservative candidate, was strongly in favor of retaliation. In one campaign speech he declared:

How many thousands of American workmen are living on Canadian money today? They’ve got the jobs and we’ve got the soup kitchens…. I will not beg of any country to buy our goods. I will make [tariffs] fight for you. I will use them to blast a way into markets that have been closed.

Bennett handily won the election and pushed through the Canadian Parliament further tariff increases.

What Was the Impact of the Tariff on the Great Depression?

If there was retaliation for Smoot-Hawley, was this enough to have made the tariff a significant contributor to the severity of the Great Depression? Most economists are skeptical because foreign trade made up a small part of the U.S. economy in 1929 and the magnitude of the decline in GDP between 1929 and 1933 was so large. Table 2 gives values for nominal GDP, for real GDP (in 1929 dollars), for nominal and real net exports, and for nominal and real exports. In real terms, net exports did decline by about $0.7 billion between 1929 and 1933, but this amounts to less than one percent of 1929 real GDP and is dwarfed by the total decline in real GDP between 1929 and 1933.

Table 2
GDP and Exports, 1929-1933 (billions of dollars; real values in 1929 dollars)

Year Nominal GDP Real GDP Nominal Net Exports Real Net Exports Nominal Exports Real Exports
1929 $103.1 $103.1 $0.4 $0.3 $5.9 $5.9
1930 $90.4 $93.3 $0.3 $0.0 $4.4 $4.9
1931 $75.8 $86.1 $0.0 -$0.4 $2.9 $4.1
1932 $58.0 $74.7 $0.0 -$0.3 $2.0 $3.3
1933 $55.6 $73.2 $0.1 -$0.4 $2.0 $3.3

Source: U.S. Department of Commerce, National Income and Product Accounts of the United States, Vol. I, 1929-1958, Washington, D.C.: USGPO, 1993.

If we focus on the decline in exports, we can construct an upper bound for the negative impact of Smoot-Hawley. Between 1929 and 1931, real exports declined by an amount equal to about 1.7% of 1929 real GDP. Declines in aggregate expenditures are usually thought to have a multiplied effect on equilibrium GDP. The best estimates are that the multiplier is roughly two. In that case, real GDP would have declined by about 3.4% between 1929 and 1931 as a result of the decline in real exports. Real GDP actually declined by about 16.5% between 1929 and 1931, so the decline in real exports can account for about 21% of the total decline in real GDP. The decline in real exports, then, may well have played an important, but not crucial, role in the decline in GDP during the first two years of the Depression. Bear in mind, though, that not all — perhaps not even most — of the decline in exports can be attributed to retaliation for Smoot-Hawley. Even if Smoot-Hawley had not been passed, U.S. exports would have fallen as incomes declined in Canada, the United Kingdom, and in other U.S. trading partners and as tariff rates in some of these countries increased for reasons unconnected to Smoot-Hawley.
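
The upper-bound arithmetic in the preceding paragraph can be reproduced directly from Table 2; the only ingredient beyond the table is the multiplier of roughly two cited above.

```python
# Upper bound on Smoot-Hawley's macroeconomic impact, computed from
# Table 2 (real values, billions of 1929 dollars).
REAL_GDP_1929, REAL_GDP_1931 = 103.1, 86.1
REAL_EXPORTS_1929, REAL_EXPORTS_1931 = 5.9, 4.1
MULTIPLIER = 2.0  # rough estimate cited in the text

export_decline = (REAL_EXPORTS_1929 - REAL_EXPORTS_1931) / REAL_GDP_1929
implied_gdp_decline = MULTIPLIER * export_decline
actual_gdp_decline = (REAL_GDP_1929 - REAL_GDP_1931) / REAL_GDP_1929

print(f"export decline: {export_decline:.1%} of 1929 real GDP")  # ~1.7%
print(f"implied GDP decline: {implied_gdp_decline:.1%}")  # ~3.5% (the text rounds to 3.4%)
print(f"share of actual decline: {implied_gdp_decline / actual_gdp_decline:.0%}")  # ~21%
```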

Hawley-Smoot or Smoot-Hawley: A Note on Usage

Congressional legislation is often referred to by the names of the member of the House of Representatives and the member of the Senate who have introduced the bill. Tariff legislation always originates in the House of Representatives and according to convention the name of its House sponsor, in this case Representative Willis Hawley of Oregon, would precede the name of its Senate sponsor, Senator Reed Smoot of Utah — hence, Hawley-Smoot. In this instance, though, Senator Smoot was far better known than Representative Hawley and so the legislation is usually referred to as the Smoot-Hawley Tariff. The more formal name of the legislation was the U.S. Tariff Act of 1930.

Further Reading

The Republican Party platform for 1928 is reprinted as: “Republican Platform [of 1928]” in Arthur M. Schlesinger, Jr., Fred L. Israel, and William P. Hansen, editors, History of American Presidential Elections, 1789-1968, New York: Chelsea House, 1971, Vol. 3. Herbert Hoover’s views on the tariff can be found in Herbert Hoover, The Future of Our Foreign Trade, Washington, D.C.: GPO, 1926 and Herbert Hoover, The Memoirs of Herbert Hoover: The Cabinet and the Presidency, 1920-1933, New York: Macmillan, 1952, Chapter 41. Trade statistics for this period can be found in U.S. Department of Commerce, Economic Analysis of Foreign Trade of the United States in Relation to the Tariff. Washington, D.C.: GPO, 1933 and in the annual supplements to the Survey of Current Business.

A classic account of the political process that resulted in the Smoot-Hawley Tariff is given in E. E. Schattschneider, Politics, Pressures and the Tariff, New York: Prentice-Hall, 1935. The best case for the view that there was extensive foreign retaliation against Smoot-Hawley is given in Joseph Jones, Tariff Retaliation: Repercussions of the Hawley-Smoot Bill, Philadelphia: University of Pennsylvania Press, 1934. The Jones book should be used with care; his argument is generally considered to be overstated. The view that party politics was of supreme importance in passage of the tariff is well argued in Robert Pastor, Congress and the Politics of United States Foreign Economic Policy, 1929-1976, Berkeley: University of California Press, 1980.

A discussion of the potential macroeconomic impact of Smoot-Hawley appears in Rudiger Dornbusch and Stanley Fischer, “The Open Economy: Implications for Monetary and Fiscal Policy.” In The American Business Cycle: Continuity and Change, edited by Robert J. Gordon, NBER Studies in Business Cycles, Volume 25, Chicago: University of Chicago Press, 1986, pp. 466-70. See, also, the article by Barry Eichengreen listed below. An argument that Smoot-Hawley is unlikely to have had a significant macroeconomic effect is given in Peter Temin, Lessons from the Great Depression, Cambridge, MA: MIT Press, 1989, p. 46. For an argument emphasizing the importance of Smoot-Hawley in explaining the Great Depression, see Allan Meltzer, “Monetary and Other Explanations of the Start of the Great Depression,” Journal of Monetary Economics 2 (1976): 455-71.

Recent journal articles that deal with the issues discussed in this entry are:

Callahan, Colleen, Judith A. McDonald and Anthony Patrick O’Brien. “Who Voted for Smoot-Hawley?” Journal of Economic History 54, no. 3 (1994): 683-90.

Crucini, Mario J. and James Kahn. “Tariffs and Aggregate Economic Activity: Lessons from the Great Depression.” Journal of Monetary Economics 38, no. 3 (1996): 427-67.

Eichengreen, Barry. “The Political Economy of the Smoot-Hawley Tariff.” Research in Economic History 12 (1989): 1-43.

Irwin, Douglas. “The Smoot-Hawley Tariff: A Quantitative Assessment.” Review of Economics and Statistics 80, no. 2 (1998): 326-334.

Irwin, Douglas, and Randall S. Kroszner. “Log-Rolling and Economic Interests in the Passage of the Smoot-Hawley Tariff.” Carnegie-Rochester Conference Series on Public Policy 45 (1996): 173-200.

McDonald, Judith, Anthony Patrick O’Brien, and Colleen Callahan. “Trade Wars: Canada’s Reaction to the Smoot-Hawley Tariff.” Journal of Economic History 57, no. 4 (1997): 802-26.

Citation: O’Brien, Anthony. “Smoot-Hawley Tariff”. EH.Net Encyclopedia, edited by Robert Whaples. August 14, 2001. URL http://eh.net/encyclopedia/smoot-hawley-tariff/

A History of Futures Trading in the United States

Joseph Santos, South Dakota State University

Many contemporary [nineteenth century] critics were suspicious of a form of business in which one man sold what he did not own to another who did not want it… Morton Rothstein (1966)

Anatomy of a Futures Market

The Futures Contract

A futures contract is a standardized agreement between a buyer and a seller to exchange an amount and grade of an item at a specific price and future date. The item or underlying asset may be an agricultural commodity, a metal, mineral or energy commodity, a financial instrument or a foreign currency. Because futures contracts are derived from these underlying assets, they belong to a family of financial instruments called derivatives.

Traders buy and sell futures contracts on an exchange – a marketplace that is operated by a voluntary association of members. The exchange provides buyers and sellers the infrastructure (trading pits or their electronic equivalent), legal framework (trading rules, arbitration mechanisms), contract specifications (grades, standards, time and method of delivery, terms of payment) and clearing mechanisms (see section titled The Clearinghouse) necessary to facilitate futures trading. Only exchange members are allowed to trade on the exchange. Nonmembers trade through commission merchants – exchange members who service nonmember trades and accounts for a fee.

The September 2004 light sweet crude oil contract is an example of a petroleum (mineral) future. It trades on the New York Mercantile Exchange (NYM). The contract is standardized – every one is an agreement to trade 1,000 barrels of grade light sweet crude in September, on a day of the seller’s choosing. As of May 25, 2004 the contract sold for $40,120 = $40.12 per barrel × 1,000 barrels.

The Clearinghouse

The clearinghouse is the counterparty to every trade – its members buy every contract that traders sell on the exchange and sell every contract that traders buy on the exchange. Absent a clearinghouse, traders would interact directly, and this would introduce two problems. First, traders’ concerns about their counterparty’s credibility would impede trading. For example, Trader A might refuse to sell to Trader B, who is supposedly untrustworthy.

Second, traders would lose track of their counterparties. This would occur because traders typically settle their contractual obligations by offset – traders buy/sell the contracts that they sold/bought earlier. For example, Trader A sells a contract to Trader B, who sells a contract to Trader C to offset her position, and so on.

The clearinghouse eliminates both of these problems. First, it is a guarantor of all trades. If a trader defaults on a futures contract, the clearinghouse absorbs the loss. Second, clearinghouse members, and not outside traders, reconcile offsets at the end of trading each day. Margin accounts and a process called marking-to-market all but assure the clearinghouse’s solvency.

A margin account is a balance that a trader maintains with a commission merchant in order to offset the trader’s daily unrealized losses in the futures markets. Commission merchants also maintain margins with clearinghouse members, who maintain them with the clearinghouse. The margin account begins as an initial lump sum deposit, or original margin.

To understand the mechanics and merits of marking-to-market, consider that the values of the long and short positions of an existing futures contract change daily, even though futures trading is a zero-sum game – a buyer’s gain/loss equals a seller’s loss/gain. So, the clearinghouse breaks even on every trade, while its individual members’ positions change in value daily.

With this in mind, suppose Trader B buys a 5,000 bushel soybean contract for $9.70 per bushel from Trader S. Technically, Trader B buys the contract from Clearinghouse Member S and Trader S sells the contract to Clearinghouse Member B. Now, suppose that at the end of the day the contract is priced at $9.71 per bushel. That evening the clearinghouse marks-to-market each member’s account. That is to say, the clearinghouse credits Member B’s margin account $50 = ($9.71 − $9.70) × 5,000 and debits Member S’s margin account the same amount.

Member B is now in a position to draw on the clearinghouse $50, while Member S must pay the clearinghouse a $50 variation margin – incremental margin equal to the difference between a contract’s price and its current market value. In turn, clearinghouse members debit and credit accordingly the margin accounts of their commission merchants, who do the same to the margin accounts of their clients (i.e., traders). This iterative process all but assures the clearinghouse a sound financial footing. In the unlikely event that a trader defaults, the clearinghouse closes out the position and loses, at most, the trader’s one day loss.
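
A minimal sketch of this daily settlement process, using the soybean figures above; the settlement prices after the first day are hypothetical.

```python
# Daily marking-to-market for one 5,000 bushel soybean contract bought at
# $9.70 per bushel. Day 1 reproduces the $50 example in the text; the
# later settlement prices are hypothetical.
CONTRACT_SIZE = 5_000  # bushels

def variation_margins(entry_price, settlement_prices, size=CONTRACT_SIZE):
    """Yield the long side's daily cash flow; the short side's is the negative."""
    prev = entry_price
    for settle in settlement_prices:
        yield (settle - prev) * size
        prev = settle  # tomorrow's gain/loss is measured from today's settlement

for day, cash in enumerate(variation_margins(9.70, [9.71, 9.69, 9.74]), start=1):
    side = "credited" if cash >= 0 else "debited"
    print(f"day {day}: long margin account {side} ${abs(cash):,.0f}")
```

Because each day’s gain or loss is measured from the previous settlement price and collected immediately, a defaulting trader can cost the clearinghouse at most one day’s price move.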

Active Futures Markets

Futures exchanges create futures contracts. And, because futures exchanges compete for traders, they must create contracts that appeal to the financial community. For example, the New York Mercantile Exchange created its light sweet crude oil contract in order to fill an unexploited niche in the financial marketplace.

Not all contracts are successful and those that are may, at times, be inactive – the contract exists, but traders are not trading it. For example, of all contracts introduced by U.S. exchanges between 1960 and 1977, only 32% traded in 1980 (Stein 1986, 7). Consequently, entire exchanges can become active – e.g., the New York Futures Exchange opened in 1980 – or inactive – e.g., the New Orleans Exchange closed in 1983 (Leuthold 1989, 18). Government price supports or other such regulation can also render trading inactive (see Carlton 1984, 245).

Futures contracts succeed or fail for many reasons, but successful contracts do share certain basic characteristics (see for example, Baer and Saxon 1949, 110-25; Hieronymus 1977, 19-22). To wit, the underlying asset is homogeneous, reasonably durable, and standardized (easily describable); its supply and demand are ample, its price is unfettered, and all relevant information is available to all traders. For example, futures contracts have never derived from, say, artwork (heterogeneous and not standardized) or rent-controlled housing rights (supply, and hence price is fettered by regulation).

Purposes and Functions

Futures markets have three fundamental purposes. The first is to enable hedgers to shift price risk – asset price volatility – to speculators in return for basis risk – changes in the difference between a futures price and the cash, or current spot price of the underlying asset. Because basis risk is typically less than asset price risk, the financial community views hedging as a form of risk management and speculating as a form of risk taking.

Generally speaking, to hedge is to take opposing positions in the futures and cash markets. Hedgers include (but are not restricted to) farmers, feedlot operators, grain elevator operators, merchants, millers, utilities, export and import firms, refiners, lenders, and hedge fund managers (see Peck 1985, 13-21). Meanwhile, to speculate is to take a position in the futures market with no counter-position in the cash market. Speculators may not be affiliated with the underlying cash markets.

To demonstrate how a hedge works, assume Hedger A buys, or longs, 5,000 bushels of corn, which is currently worth $2.40 per bushel, or $12,000 = $2.40 × 5,000; the date is May 1st and Hedger A wishes to preserve the value of his corn inventory until he sells it on June 1st. To do so, he takes a position in the futures market that is exactly opposite his position in the spot – current cash – market. For example, Hedger A sells, or shorts, a July futures contract for 5,000 bushels of corn at a price of $2.50 per bushel; put differently, Hedger A commits to sell in July 5,000 bushels of corn for $12,500 = $2.50 × 5,000. Recall that to sell (buy) a futures contract means to commit to sell (buy) an amount and grade of an item at a specific price and future date.

Absent basis risk, Hedger A’s spot and futures markets positions will preserve the value of the 5,000 bushels of corn that he owns, because a fall in the spot price of corn will be matched penny for penny by a fall in the futures price of corn. For example, suppose that by June 1st the spot price of corn has fallen five cents to $2.35 per bushel. Absent basis risk, the July futures price of corn has also fallen five cents to $2.45 per bushel.

So, on June 1st, Hedger A sells his 5,000 bushels of corn and loses $250 = ($2.40 − $2.35) × 5,000 in the spot market. At the same time, he buys a July futures contract for 5,000 bushels of corn and gains $250 = ($2.50 − $2.45) × 5,000 in the futures market. Notice, because Hedger A has both sold and bought a July futures contract for 5,000 bushels of corn, he has offset his commitment in the futures market.
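
The profit-and-loss arithmetic of this hedge can be written out as a short sketch; all figures are the ones used above, and zero basis risk is assumed.

```python
# The textbook hedge from the text: long 5,000 bu of cash corn at $2.40,
# short one July futures contract at $2.50, with both prices falling five
# cents by June 1st (zero basis risk by assumption).
BUSHELS = 5_000

def hedge_pnl(spot_buy, spot_sell, futures_sell, futures_buy, qty=BUSHELS):
    spot = (spot_sell - spot_buy) * qty           # loss on the corn inventory
    futures = (futures_sell - futures_buy) * qty  # gain on the short futures
    return spot, futures

spot, futures = hedge_pnl(2.40, 2.35, 2.50, 2.45)
print(f"spot: {spot:+,.0f}  futures: {futures:+,.0f}  net: {spot + futures:,.0f}")
# spot: -250  futures: +250  net: 0 -- the hedge preserves inventory value.
```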

This example of a textbook hedge – one that eliminates price risk entirely – is instructive but it is also a bit misleading because: basis risk exists; hedgers may choose to hedge more or less than 100% of their cash positions; and hedgers may cross hedge – trade futures contracts whose underlying assets are not the same as the assets that the hedger owns. So, in reality hedgers cannot immunize entirely their cash positions from market fluctuations and in some cases they may not wish to do so. Again, the purpose of a hedge is not to avoid risk, but rather to manage or even profit from it.

The second fundamental purpose of a futures market is to facilitate firms’ acquisitions of operating capital – short term loans that finance firms’ purchases of intermediate goods such as inventories of grain or petroleum. For example, lenders are relatively more likely to finance, at or near prime lending rates, hedged (versus non-hedged) inventories. The futures contract is an efficient form of collateral because it costs only a fraction of the inventory’s value, or the margin on a short position in the futures market.

Speculators make the hedge possible because they absorb the inventory’s price risk; for example, the ultimate counterparty to the inventory dealer’s short position is a speculator. In the absence of futures markets, hedgers could only engage in forward contracts – unique agreements between private parties, who operate independently of an exchange or clearinghouse. Hence, the collateral value of a forward contract is less than that of a futures contract.3

The third fundamental purpose of a futures market is to provide information to decision makers regarding the market’s expectations of future economic events. So long as a futures market is efficient – the market forms expectations by taking into proper consideration all available information – its forecasts of future economic events are relatively more reliable than an individual’s. Forecast errors are expensive, and well informed, highly competitive, profit-seeking traders have a relatively greater incentive to minimize them.

The Evolution of Futures Trading in the U.S.

Early Nineteenth Century Grain Production and Marketing

Into the early nineteenth century, the vast majority of American grains – wheat, corn, barley, rye and oats – were produced throughout the hinterlands of the United States by producers who acted primarily as subsistence farmers – agricultural producers whose primary objective was to feed themselves and their families. Although many of these farmers sold their surplus production on the market, most lacked access to large markets, as well as the incentive, affordable labor supply, and myriad technologies necessary to practice commercial agriculture – the large scale production and marketing of surplus agricultural commodities.

At this time, the principal trade route to the Atlantic seaboard was by river through New Orleans4; though the South was also home to terminal markets – markets of final destination – for corn, provisions and flour. Smaller local grain markets existed along the tributaries of the Ohio and Mississippi Rivers and east-west overland routes. The latter were used primarily to transport manufactured (high valued and nonperishable) goods west.

Most farmers, and particularly those in the East North Central States – the region consisting today of Illinois, Indiana, Michigan, Ohio and Wisconsin – could not ship bulk grains to market profitably (Clark 1966, 4, 15).5 Instead, most converted grains into relatively high value flour, livestock, provisions and whiskies or malt liquors and shipped them south or, in the case of livestock, drove them east (14).6 Oats traded locally, if at all; their low value-to-weight ratios made their shipment, in bulk or otherwise, prohibitive (15n).

The Great Lakes provided a natural water route east to Buffalo but, in order to ship grain this way, producers in the interior East North Central region needed local ports to receive their production. Although the Erie Canal connected Lake Erie to the port of New York by 1825, water routes that connected local interior ports throughout northern Ohio to the Canal were not operational prior to the mid-1830s. Indeed, initially the Erie aided the development of the Old Northwest, not because it facilitated eastward grain shipments, but rather because it allowed immigrants and manufactured goods easy access to the West (Clark 1966, 53).

By 1835 the mouths of rivers and streams throughout the East North Central States had become the hubs, or port cities, from which farmers shipped grain east via the Erie. By this time, shippers could also opt to go south on the Ohio River and then upriver to Pittsburgh and ultimately to Philadelphia, or north on the Ohio Canal to Cleveland, Buffalo and ultimately, via the Welland Canal, to Lake Ontario and Montreal (19).

By 1836 shippers carried more grain north on the Great Lakes and through Buffalo, than south on the Mississippi through New Orleans (Odle 1964, 441). Though, as late as 1840 Ohio was the only state/region that participated significantly in the Great Lakes trade. Illinois, Indiana, Michigan, and the region of modern day Wisconsin either produced for their respective local markets or relied upon Southern demand. As of 1837 only 4,107 residents populated the “village” of Chicago, which became an official city in that year (Hieronymus 1977, 72).7

Antebellum Grain Trade Finance in the Old Northwest

Before the mid-1860s, a network of banks, grain dealers, merchants, millers and commission houses – buying and selling agents located in the central commodity markets – employed an acceptance system to finance the U.S. grain trade (see Clark 1966, 119; Odle 1964, 442). For example, a miller who required grain would instruct an agent in, say, New York to establish, on the miller’s behalf, a line of credit with a merchant there. The merchant extended this line of credit in the form of sight drafts, which the merchant made payable, in sixty or ninety days, up to the amount of the line of credit.

With this credit line established, commission agents in the hinterland would arrange with grain dealers to acquire the necessary grain. The commission agent would obtain warehouse receipts – dealer certified negotiable titles to specific lots and quantities of grain in store – from dealers, attach these to drafts that he drew on the merchant’s line of credit, and discount these drafts at his local bank in return for banknotes; the local bank would forward these drafts on to the New York merchant’s bank for redemption. The commission agents would use these banknotes to advance – lend – grain dealers roughly three quarters of the current market value of the grain. The commission agent would pay dealers the remainder (minus finance and commission fees) when the grain was finally sold in the East. That is, commission agents and grain dealers entered into consignment contracts.

Unfortunately, this approach linked banks, grain dealers, merchants, millers and commission agents such that the “entire procedure was attended by considerable risk and speculation, which was assumed by both the consignee and consignor” (Clark 1966, 120). The system was reasonably adequate if grain prices went unchanged between the time the miller procured the credit and the time the grain (bulk or converted) was sold in the East, but this was rarely the case. The fundamental problem with this system of finance was that commission agents were effectively asking banks to lend them money to purchase as yet unsold grain. To be sure, this inadequacy was most apparent during financial panics, when many banks refused to discount these drafts (Odle 1964, 447).

Grain Trade Finance in Transition: Forward Contracts and Commodity Exchanges

In 1848 the Illinois-Michigan Canal connected the Illinois River to Lake Michigan. The canal enabled farmers in the hinterlands along the Illinois River to ship their produce to merchants located along the river. These merchants accumulated, stored and then shipped grain to Chicago, Milwaukee and Racine. At first, shippers tagged deliverables according to producer and region, while purchasers inspected and chose these tagged bundles upon delivery. Commercial activity at the three grain ports grew throughout the 1850s. Chicago emerged as a dominant grain (primarily corn) hub later that decade (Pierce 1957, 66).8

Amidst this growth of Lake Michigan commerce, a confluence of innovations transformed the grain trade and its method of finance. By the 1840s, grain elevators and railroads facilitated high volume grain storage and shipment, respectively. Consequently, country merchants and their Chicago counterparts required greater financing in order to store and ship this higher volume of grain.9 And, high volume grain storage and shipment required that inventoried grains be fungible – of such a nature that one part or quantity could be replaced by another equal part or quantity in the satisfaction of an obligation. For example, because a bushel of grade No. 2 Spring Wheat was fungible, its price did not depend on whether it came from Farmer A, Farmer B, Grain Elevator C, or Train Car D.

Merchants could secure these larger loans more easily and at relatively lower rates if they obtained firm price and quantity commitments from their buyers. So, merchants began to engage in forward (not futures) contracts. According to Hieronymus (1977), the first such “time contract” on record was made on March 13, 1851. It specified that 3,000 bushels of corn were to be delivered to Chicago in June at a price of one cent below the March 13th cash market price (74).10

Meanwhile, commodity exchanges serviced the trade’s need for fungible grain. In the 1840s and 1850s these exchanges emerged as associations for dealing with local issues such as harbor infrastructure and commercial arbitration (e.g., Detroit in 1847, Buffalo, Cleveland and Chicago in 1848 and Milwaukee in 1849) (see Odle 1964). By the 1850s they established a system of staple grades, standards and inspections, all of which rendered inventory grain fungible (Baer and Saxon 1949, 10; Chandler 1977, 211). As collection points for grain, cotton, and provisions, they weighed, inspected and classified commodity shipments that passed from west to east. They also facilitated organized trading in spot and forward markets (Chandler 1977, 211; Odle 1964, 439).11

The largest and most prominent of these exchanges was the Board of Trade of the City of Chicago, a grain and provisions exchange established in 1848 by a State of Illinois corporate charter (Boyle 1920, 38; Lurie 1979, 27); the exchange is known today as the Chicago Board of Trade (CBT). For at least its first decade, the CBT functioned as a meeting place for merchants to resolve contract disputes and discuss commercial matters of mutual concern. Participation was part-time at best. The Board’s first directorate of 25 members included “a druggist, a bookseller, a tanner, a grocer, a coal dealer, a hardware merchant, and a banker” and attendance was often encouraged by free lunches (Lurie 1979, 25).

However, in 1859 the CBT became a state- (of Illinois) chartered private association. As such, the exchange requested and received from the Illinois legislature sanction to establish rules “for the management of their business and the mode in which it shall be transacted, as they may think proper;” to arbitrate over and settle disputes with the authority as “if it were a judgment rendered in the Circuit Court;” and to inspect, weigh and certify grain and grain trades such that these certifications would be binding upon all CBT members (Lurie 1979, 27).

Nineteenth Century Futures Trading

By the 1850s traders sold and resold forward contracts prior to actual delivery (Hieronymus 1977, 75). A trader could not offset, in the futures market sense of the term, a forward contract. Nonetheless, the existence of a secondary market – market for extant, as opposed to newly issued securities – in forward contracts suggests, if nothing else, speculators were active in these early time contracts.

On March 27, 1863, the Chicago Board of Trade adopted its first rules and procedures for trade in forwards on the exchange (Hieronymus 1977, 76). The rules addressed contract settlement, which was (and still is) the fundamental challenge associated with a forward contract – finding a trader who was willing to take a position in a forward contract was relatively easy to do; finding that trader at the time of contract settlement was not.

The CBT began to transform actively traded and reasonably homogeneous forward contracts into futures contracts in May, 1865. At this time, the CBT: restricted trade in time contracts to exchange members; standardized contract specifications; required traders to deposit margins; and specified formally contract settlement, including payments and deliveries, and grievance procedures (Hieronymus 1977, 76).

The inception of organized futures trading is difficult to date. This is due, in part, to semantic ambiguities – e.g., was a “to arrive” contract a forward contract or a futures contract or neither? However, most grain trade historians agree that storage (grain elevators), shipment (railroad), and communication (telegraph) technologies, a system of staple grades and standards, and the impetus to speculation provided by the Crimean and U.S. Civil Wars enabled futures trading to ripen by about 1874, at which time the CBT was the U.S.’s premier organized commodities (grain and provisions) futures exchange (Baer and Saxon 1949, 87; Chandler 1977, 212; CBT 1936, 18; Clark 1966, 120; Dies 1925, 15; Hoffman 1932, 29; Irwin 1954, 77, 82; Rothstein 1966, 67).

Nonetheless, futures exchanges in the mid-1870s lacked modern clearinghouses, with which most exchanges began to experiment only in the mid-1880s. For example, the CBT’s clearinghouse got its start in 1884, and a complete and mandatory clearing system was in place at the CBT by 1925 (Hoffman 1932, 199; Williams 1982, 306). The earliest formal clearing and offset procedures were established by the Minneapolis Grain Exchange in 1891 (Peck 1985, 6).

Even so, rudiments of a clearing system – one that freed traders from dealing directly with one another – were in place by the 1870s (Hoffman 1920, 189). That is to say, brokers assumed the counter-position to every trade, much as clearinghouse members would do decades later. Brokers settled offsets between one another, though in the absence of a formal clearing procedure these settlements were difficult to accomplish.

Direct settlements were simple enough. Here, two brokers would settle in cash their offsetting positions between one another only. Nonetheless, direct settlements were relatively uncommon because offsetting purchases and sales between brokers rarely balanced with respect to quantity. For example, B1 might buy a 5,000 bushel corn future from B2, who then might buy a 6,000 bushel corn future from B1; in this example, 1,000 bushels of corn remain unsettled between B1 and B2. Of course, the two brokers could offset the remaining 1,000 bushel contract if B2 sold a 1,000 bushel corn future to B1. But what if B2 had already sold a 1,000 bushel corn future to B3, who had sold a 1,000 bushel corn future to B1? In this case, each broker’s net futures market position is offset, but all three must meet in order to settle their respective positions. Brokers referred to such a meeting as a ring settlement. Finally, if, in this example, B1 and B3 did not have positions with each other, B2 could settle her position if she transferred her commitment (which she has with B1) to B3. Brokers referred to this method as a transfer settlement. In either ring or transfer settlements, brokers had to find other brokers who held and wished to settle open counter-positions. Often brokers used runners to literally search the offices and corridors for the requisite counterparties (see Hoffman 1932, 185-200).
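
The netting problem that made rings and transfers necessary can be illustrated with a short sketch of the three-broker example above; this is only an illustration of the bookkeeping, not of actual nineteenth-century exchange procedure.

```python
# Net positions in the three-broker example from the text.
from collections import defaultdict

# (buyer, seller, bushels of corn futures)
trades = [
    ("B1", "B2", 5_000),  # B1 buys a 5,000 bu future from B2
    ("B2", "B1", 6_000),  # B2 then buys a 6,000 bu future from B1
    ("B3", "B2", 1_000),  # B2 had already sold 1,000 bu to B3 ...
    ("B1", "B3", 1_000),  # ... who had sold 1,000 bu to B1
]

net = defaultdict(int)
for buyer, seller, qty in trades:
    net[buyer] += qty   # long position
    net[seller] -= qty  # short position

print(dict(net))  # {'B1': 0, 'B2': 0, 'B3': 0}
# Every broker is flat overall, yet no pair of brokers is flat bilaterally,
# so all three must meet -- the ring settlement described above.
```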

Finally, the transformation in Chicago grain markets from forward to futures trading occurred almost simultaneously in New York cotton markets. Forward contracts for cotton traded in New York (and Liverpool, England) by the 1850s. And, like Chicago, organized trading in cotton futures began on the New York Cotton Exchange in about 1870; rules and procedures formalized the practice in 1872. Futures trading on the New Orleans Cotton Exchange began around 1882 (Hieronymus 1977, 77).

Other successful nineteenth century futures exchanges include the New York Produce Exchange, the Milwaukee Chamber of Commerce, the Merchant’s Exchange of St. Louis, the Chicago Open Board of Trade, the Duluth Board of Trade, and the Kansas City Board of Trade (Hoffman 1920, 33; see Peck 1985, 9).

Early Futures Market Performance

Volume

Data on grain futures volume prior to the 1880s are not available (Hoffman 1932, 30). Though in the 1870s “[CBT] officials openly admitted that there was no actual delivery of grain in more than ninety percent of contracts” (Lurie 1979, 59). Indeed, Chart 1 demonstrates that trading was relatively voluminous in the nineteenth century.

An annual average of 23,600 million bushels of grain futures traded between 1884 and 1888, or eight times the annual average amount of crops produced during that period. By comparison, an annual average of 25,803 million bushels of grain futures traded between 1966 and 1970, or four times the annual average amount of crops produced during that period. In 2002, futures volume outnumbered crop production by a factor of eleven.

The comparable data for cotton futures are presented in Chart 2. Again here, trading in the nineteenth century was significant. To wit, by 1879 futures volume had outnumbered production by a factor of five, and by 1896 this factor had reached eight.

Price of Storage

Nineteenth century observers of early U.S. futures markets either credited them for stabilizing food prices, or discredited them for wagering on, and intensifying, the economic hardships of Americans (Baer and Saxon 1949, 12-20, 56; Chandler 1977, 212; Ferris 1988, 88; Hoffman 1932, 5; Lurie 1979, 53, 115). To be sure, the performance of early futures markets remains relatively unexplored. The extant research on the subject has generally examined this performance in the context of two perspectives on the theory of efficiency: the price of storage and futures price efficiency more generally.

Holbrook Working pioneered research into the price of storage – the relationship, at a point in time, between prices (of storable agricultural commodities) applicable to different future dates (Working 1949, 1254).12 For example, what is the relationship between the current spot price of wheat and the current September 2004 futures price of wheat? Or, what is the relationship between the current September 2004 futures price of wheat and the current May 2005 futures price of wheat?

Working reasoned that these prices could not differ because of events that were expected to occur between these dates. For example, if the May 2004 wheat futures price is less than the September 2004 price, this cannot be due to, say, the expectation of a small harvest between May 2004 and September 2004. On the contrary, traders should factor such an expectation into both May and September prices. And, assuming that they do, then this difference can only reflect the cost of carrying – storing – these commodities over time.13 This strict interpretation has since been modified somewhat (see Peck 1985, 44).

So, for example, the September 2004 price equals the May 2004 price plus the cost of storing wheat between May 2004 and September 2004. If the difference between these prices is greater or less than the cost of storage, and the market is efficient, arbitrage will bring the difference back to the cost of storage – e.g., if the difference in prices exceeds the cost of storage, then traders can profit if they buy the May 2004 contract, sell the September 2004 contract, take delivery in May and store the wheat until September. Working (1953) demonstrated empirically that the theory of the price of storage could explain quite satisfactorily these inter-temporal differences in wheat futures prices at the CBT as early as the late 1880s (Working 1953, 556).
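
A minimal sketch of the arbitrage that enforces this relationship; the prices and carrying cost below are hypothetical per-bushel figures, not historical data.

```python
# Cash-and-carry arbitrage behind the price of storage.
# Hypothetical per-bushel wheat prices and carrying cost.
may_price = 3.40        # nearby futures (or spot) price
september_price = 3.70  # deferred futures price
storage_cost = 0.20     # cost of carrying wheat from May to September

spread = september_price - may_price
if spread > storage_cost:
    # Buy May, sell September, take delivery in May and store until September.
    print(f"carry trade profit: ${spread - storage_cost:.2f} per bushel")
elif spread < storage_cost:
    print("spread below carrying cost: no incentive to store")
else:
    print("spread equals the cost of storage: no arbitrage")
# Traders exploiting the first branch bid up the May price and push down
# the September price until the spread returns to the cost of storage.
```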

Futures Price Efficiency

Many contemporary economists tend to focus on futures price efficiency more generally (for example, Beck 1994; Kahl and Tomek 1986; Kofi 1973; McKenzie, et al. 2002; Tomek and Gray, 1970). That is to say, do futures prices shadow consistently (but not necessarily equal) traders’ rational expectations of future spot prices? Here, the research focuses on the relationship between, say, the cash price of wheat in September 2004 and the September 2004 futures price of wheat quoted two months earlier in July 2004.

Figure 1 illustrates the behavior of corn futures prices and their corresponding spot prices between 1877 and 1890. The data consist of the average month t futures price in the last full week of month t-2 and the average cash price in the first full week of month t.

The futures price and its corresponding spot price need not be equal; futures price efficiency does not mean that the futures market is clairvoyant. But, a difference between the two series should exist only because of an unpredictable forecast error and a risk premium – futures prices may be, say, consistently below the expected future spot price if long speculators require an inducement, or premium, to enter the futures market. Recent work finds strong evidence that these early corn (and corresponding wheat) futures prices are, in the long run, efficient estimates of their underlying spot prices (Santos 2002, 35). Although these results and Working’s empirical studies on the price of storage support, to some extent, the notion that early U.S. futures markets were efficient, this question remains largely unexplored by economic historians.
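
As a minimal sketch of what such an efficiency check involves, the snippet below averages the forecast errors between realized spot prices and the futures quotes from two months earlier; the price series are hypothetical, not the data underlying the studies cited above.

```python
# Forecast errors for an efficiency check: the spot price in month t minus
# the month-t futures price quoted in month t-2. Hypothetical data.
futures_quoted_t_minus_2 = [0.42, 0.45, 0.40, 0.47, 0.44]  # $/bu
realized_spot_month_t    = [0.43, 0.44, 0.41, 0.46, 0.45]  # $/bu

errors = [s - f for s, f in zip(realized_spot_month_t, futures_quoted_t_minus_2)]
mean_error = sum(errors) / len(errors)
print(f"mean forecast error: {mean_error:+.4f} $/bu")
# Efficiency does not require each error to be zero, only that errors be
# unpredictable; a persistently nonzero mean would indicate a risk premium.
```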

The Struggle for Legitimacy

Nineteenth century America was both fascinated and appalled by futures trading. This is apparent from the litigation and many public debates surrounding its legitimacy (Baer and Saxon 1949, 55; Buck 1913, 131, 271; Hoffman 1932, 29, 351; Irwin 1954, 80; Lurie 1979, 53, 106). Many agricultural producers, the lay community and, at times, legislatures and the courts believed trading in futures was tantamount to gambling. The difference between gambling and speculating – the latter requiring the purchase or sale of a futures contract but not the shipment or delivery of the commodity – was ostensibly lost on most Americans (Baer and Saxon 1949, 56; Ferris 1988, 88; Hoffman 1932, 5; Lurie 1979, 53, 115).

Many Americans believed that futures traders frequently manipulated prices. From the end of the Civil War until 1879 alone, corners – control of enough of the available supply of a commodity to manipulate its price – allegedly occurred with varying degrees of success in wheat (1868, 1871, 1878/9), corn (1868), oats (1868, 1871, 1874), rye (1868) and pork (1868) (Boyle 1920, 64-65). This manipulation continued throughout the century and culminated in the Three Big Corners – the Hutchinson (1888), the Leiter (1898), and the Patten (1909). The Patten corner was later debunked (Boyle 1920, 67-74), while the Leiter corner was the inspiration for Frank Norris’s classic The Pit: A Story of Chicago (Norris 1903; Rothstein 1982, 60).14 In any case, reports of market corners on America’s early futures exchanges were likely exaggerated (Boyle 1920, 62-74; Hieronymus 1977, 84), as were their long term effects on prices and hence consumer welfare (Rothstein 1982, 60).

By 1892 thousands of petitions to Congress called for the prohibition of “speculative gambling in grain” (Lurie, 1979, 109). And, attacks from state legislatures were seemingly unrelenting: in 1812 a New York act made short sales illegal (the act was repealed in 1858); in 1841 a Pennsylvania law made short sales, where the position was not covered in five days, a misdemeanor (the law was repealed in 1862); in 1882 an Ohio law and a similar one in Illinois tried unsuccessfully to restrict cash settlement of futures contracts; in 1867 the Illinois constitution forbade dealing in futures contracts (this was repealed by 1869); in 1879 California’s constitution invalidated futures contracts (this was effectively repealed in 1908); and, in 1882, 1883 and 1885, Mississippi, Arkansas, and Texas, respectively, passed laws that equated futures trading with gambling, thus making the former a misdemeanor (Peterson 1933, 68-69).

Two nineteenth century challenges to futures trading are particularly noteworthy. The first was the so-called Anti-Option movement. According to Lurie (1979), the movement was fueled by agrarians and their sympathizers in Congress who wanted to end what they perceived as wanton speculative abuses in futures trading (109). Although options are not futures contracts, and although they were already outlawed on most exchanges by the 1890s, the legislation did not distinguish between the two instruments and effectively sought to outlaw both (Lurie 1979, 109).

In 1890 the Butterworth Anti-Option Bill was introduced in Congress but never came to a vote. However, in 1892 the Hatch (and Washburn) Anti-Option bills passed both houses of Congress, and failed only on technicalities during reconciliation between the two houses. Had either bill become law, it would have effectively ended options and futures trading in the United States (Lurie 1979, 110).

A second notable challenge was the bucket shop controversy, which challenged the legitimacy of the CBT in particular. A bucket shop was essentially an association of gamblers who met outside the CBT and wagered on the direction of futures prices. These associations had legitimate-sounding names such as the Christie Grain and Stock Company and the Public Grain Exchange. To most Americans, these “exchanges” were no less legitimate than the CBT. That some CBT members were guilty of “bucket shopping” only made matters worse!

The bucket shop controversy was protracted and colorful (see Lurie 1979, 138-167). Between 1884 and 1887 Illinois, Iowa, Missouri and Ohio passed anti-bucket shop laws (Lurie 1979, 95). The CBT believed these laws entitled it to restrict bucket shops’ access to CBT price quotes, without which the bucket shops could not exist. Bucket shops argued that they were competing exchanges, and hence immune to extant anti-bucket shop laws. As such, they sued the CBT for access to these price quotes.15

The two sides and the telegraph companies fought in the courts for decades over access to these price quotes; the CBT’s very survival hung in the balance. After roughly twenty years of litigation, the Supreme Court of the U.S. effectively ruled in favor of the Chicago Board of Trade and against bucket shops (Board of Trade of the City of Chicago v. Christie Grain & Stock Co., 198 U.S. 236, 25 Sup. Ct. (1905)). Bucket shops disappeared completely by 1915 (Hieronymus 1977, 90).

Regulation

The anti-option movement, the bucket shop controversy and the American public’s discontent with speculation mask an ironic reality of futures trading: it escaped government regulation until after the First World War, though early exchanges did practice self-regulation or administrative law.16 The absence of any formal governmental oversight was due in large part to two factors. First, prior to 1895, the opposition tried unsuccessfully to outlaw rather than regulate futures trading. Second, strong agricultural commodity prices between 1895 and 1920 weakened the opposition, which had blamed futures markets for low agricultural commodity prices (Hieronymus 1977, 313).

Grain prices fell significantly by the end of the First World War, and opposition to futures trading grew once again (Hieronymus 1977, 313). In 1922 the U.S. Congress enacted the Grain Futures Act, which required exchanges to be licensed, limited market manipulation and publicized trading information (Leuthold 1989, 369).17 However, regulators could rarely enforce the act because it enabled them to discipline exchanges, rather than individual traders. To discipline an exchange was essentially to suspend it, a punishment too harsh for most exchange-related infractions.

The Commodity Exchange Act of 1936 enabled the government to deal directly with traders rather than exchanges. It established the Commodity Exchange Authority (CEA), a bureau of the U.S. Department of Agriculture, to monitor and investigate trading activities and prosecute price manipulation as a criminal offense. The act also: limited speculators’ trading activities and the sizes of their positions; regulated futures commission merchants; banned options trading on domestic agricultural commodities; and restricted futures trading – designated which commodities were to be traded on which licensed exchanges (see Hieronymus 1977; Leuthold, et al. 1989).

Although Congress amended the Commodity Exchange Act in 1968 in order to increase the regulatory powers of the Commodity Exchange Authority, the latter was ill-equipped to handle the explosive growth in futures trading in the 1960s and 1970s. So, in 1974 Congress passed the Commodity Futures Trading Act, which created far-reaching federal oversight of U.S. futures trading and established the Commodity Futures Trading Commission (CFTC).

Like the futures legislation before it, the Commodity Futures Trading Act seeks “to ensure proper execution of customer orders and to prevent unlawful manipulation, price distortion, fraud, cheating, fictitious trades, and misuse of customer funds” (Leuthold, et al. 1989, 34). Unlike the CEA, the CFTC was given broad regulatory powers over all futures trading and related exchange activities throughout the U.S. The CFTC oversees and approves modifications to extant contracts and the creation and introduction of new contracts. The CFTC consists of five presidential appointees who are confirmed by the U.S. Senate.

The Futures Trading Act of 1982 amended the Commodity Futures Trading Act of 1974. The 1982 act legalized options trading on agricultural commodities and identified more clearly the jurisdictions of the CFTC and Securities and Exchange Commission (SEC). The regulatory overlap between the two organizations arose because of the explosive popularity during the 1970s of financial futures contracts. Today, the CFTC regulates all futures contracts and options on futures contracts traded on U.S. futures exchanges; the SEC regulates all financial instrument cash markets as well as all other options markets.

Finally, in 2000 Congress passed the Commodity Futures Modernization Act, which reauthorized the Commodity Futures Trading Commission for five years and repealed an 18-year-old ban on trading single stock futures. The bill also sought to increase competition and “reduce systematic risk in markets for futures and over-the-counter derivatives” (H.R. 5660, 106th Congress 2nd Session).

Modern Futures Markets

The growth in futures trading has been explosive in recent years (Chart 3).

Futures trading extended beyond physical commodities in the 1970s and 1980s – currency futures in 1972; interest rate futures in 1975; and stock index futures in 1982 (Silber 1985, 83). The enormous growth of financial futures at this time was likely due to the breakdown of the Bretton Woods exchange rate regime, which had essentially fixed the relative values of industrial economies’ exchange rates to the American dollar (see Bordo and Eichengreen 1993), and to relatively high inflation from the late 1960s to the early 1980s. Flexible exchange rates and inflation introduced, respectively, exchange and interest rate risks, which hedgers sought to mitigate through the use of financial futures. Finally, although futures contracts on agricultural commodities remain popular, financial futures and options dominate trading today. Trading volume in metals, minerals and energy remains relatively small.

Trading volume in agricultural futures contracts first dropped below 50% in 1982. By 1985 this volume had dropped to less than one-fourth of all trading. In the same year the volume of futures trading in the U.S. Treasury bond contract alone exceeded trading volume in all agricultural commodities combined (Leuthold et al. 1989, 2). Today exchanges in the U.S. actively trade contracts on several underlying assets (Table 1). These range from the traditional – e.g., agriculture and metals – to the truly innovative – e.g., the weather. The latter’s payoff varies with the number of degree-days by which the temperature in a particular region deviates from 65 degrees Fahrenheit.
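The degree-day payoff is easy to make concrete. The sketch below computes a heating-degree-day (HDD) index and a cash settlement; the $20 multiplier and the temperature series are hypothetical illustrations, since the text does not give contract specifications, which in practice are set by the listing exchange.

```python
# A minimal sketch of a degree-day settlement of the kind described above.
# The $20 multiplier and the temperatures are hypothetical.

def heating_degree_days(daily_avg_temps_f):
    """HDD index: sum of max(0, 65 - T) over the contract period.
    (A cooling-degree-day index uses max(0, T - 65) instead.)"""
    return sum(max(0.0, 65.0 - t) for t in daily_avg_temps_f)

temps = [52, 48, 60, 67, 41]        # hypothetical daily average temperatures
hdd = heating_degree_days(temps)    # 13 + 17 + 5 + 0 + 24 = 59
print(20 * hdd)                     # cash settlement at $20 per degree-day
```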

Table 1: Select Futures Contracts Traded as of 2002

Agriculture Currencies Equity Indexes Interest Rates Metals & Energy
Corn British pound S&P 500 index Eurodollars Copper
Oats Canadian dollar Dow Jones Industrials Euroyen Aluminum
Soybeans Japanese yen S&P Midcap 400 Euro-denominated bond Gold
Soybean meal Euro Nasdaq 100 Euroswiss Platinum
Soybean oil Swiss franc NYSE index Sterling Palladium
Wheat Australian dollar Russell 2000 index British gov. bond (gilt) Silver
Barley Mexican peso Nikkei 225 German gov. bond Crude oil
Flaxseed Brazilian real FTSE index Italian gov. bond Heating oil
Canola CAC-40 Canadian gov. bond Gas oil
Rye DAX-30 Treasury bonds Natural gas
Cattle All Ordinaries Treasury notes Gasoline
Hogs Toronto 35 Treasury bills Propane
Pork bellies Dow Jones Euro STOXX 50 LIBOR CRB index
Cocoa EURIBOR Electricity
Coffee Municipal bond index Weather
Cotton Federal funds rate
Milk Bankers’ acceptance
Orange juice
Sugar
Lumber
Rice

Source: Bodie, Kane and Marcus (2005), p. 796.

Table 2 provides a list of today’s major futures exchanges.

Table 2: Select Futures Exchanges as of 2002

Exchange Abbreviation Exchange Abbreviation
Chicago Board of Trade CBT Montreal Exchange ME
Chicago Mercantile Exchange CME Minneapolis Grain Exchange MPLS
Coffee, Sugar & Cocoa Exchange, New York CSCE Unit of Euronext.liffe NQLX
COMEX, a division of the NYME CMX New York Cotton Exchange NYCE
European Exchange EUREX New York Futures Exchange NYFE
Financial Exchange, a division of the NYCE FINEX New York Mercantile Exchange NYME
International Petroleum Exchange IPE OneChicago ONE
Kansas City Board of Trade KC Sydney Futures Exchange SFE
London International Financial Futures Exchange LIFFE Singapore Exchange Ltd. SGX
Marche a Terme International de France MATIF

Source: Wall Street Journal, 5/12/2004, C16.

Modern trading differs from its nineteenth century counterpart in other respects as well. First, the popularity of open outcry trading is waning. For example, today the CBT executes roughly half of all trades electronically. And electronic trading is the rule rather than the exception throughout Europe. Second, today roughly 99% of all futures contracts are settled prior to maturity. Third, in 1982 the Commodity Futures Trading Commission approved cash settlement – delivery that takes the form of a cash balance – on financial index and Eurodollar futures, whose underlying assets are not deliverable, as well as on several non-financial contracts including lean hog, feeder cattle and weather (Carlton 1984, 253). And finally, on Dec. 6, 2002, the Chicago Mercantile Exchange became the first publicly traded financial exchange in the U.S.

References and Further Reading

Baer, Julius B. and Olin. G. Saxon. Commodity Exchanges and Futures Trading. New York: Harper & Brothers, 1949.

Bodie, Zvi, Alex Kane and Alan J. Marcus. Investments. New York: McGraw-Hill/Irwin, 2005.

Bordo, Michael D. and Barry Eichengreen, editors. A Retrospective on the Bretton Woods System: Lessons for International Monetary Reform. Chicago: University of Chicago Press, 1993.

Boyle, James E. Speculation and the Chicago Board of Trade. New York: MacMillan Company, 1920.

Buck, Solon J. The Granger Movement: A Study of Agricultural Organization and Its Political, Economic and Social Manifestations, 1870-1880. Cambridge: Harvard University Press, 1913.

Carlton, Dennis W. “Futures Markets: Their Purpose, Their History, Their Growth, Their Successes and Failures.” Journal of Futures Markets 4, no. 3 (1984): 237-271.

Chicago Board of Trade Bulletin. The Development of the Chicago Board of Trade. Chicago: Chicago Board of Trade, 1936.

Chandler, Alfred D. The Visible Hand: The Managerial Revolution in American Business. Cambridge: Harvard University Press, 1977.

Clark, John G. The Grain Trade in the Old Northwest. Urbana: University of Illinois Press, 1966.

Commodity Futures Trading Commission. Annual Report. Washington, D.C. 2003.

Dies, Edward J. The Wheat Pit. Chicago: The Argyle Press, 1925.

Ferris, William G. The Grain Traders: The Story of the Chicago Board of Trade. East Lansing, MI: Michigan State University Press, 1988.

Hieronymus, Thomas A. Economics of Futures Trading for Commercial and Personal Profit. New York: Commodity Research Bureau, Inc., 1977.

Hoffman, George W. Futures Trading upon Organized Commodity Markets in the United States. Philadelphia: University of Pennsylvania Press, 1932.

Irwin, Harold S. Evolution of Futures Trading. Madison, WI: Mimir Publishers, Inc., 1954.

Leuthold, Raymond M., Joan C. Junkus and Jean E. Cordier. The Theory and Practice of Futures Markets. Champaign, IL: Stipes Publishing L.L.C., 1989.

Lurie, Jonathan. The Chicago Board of Trade 1859-1905. Urbana: University of Illinois Press, 1979.

National Agricultural Statistics Service. “Historical Track Records.” Agricultural Statistics Board, U.S. Department of Agriculture, Washington, D.C. April 2004.

Norris, Frank. The Pit: A Story of Chicago. New York, NY: Penguin Group, 1903.

Odle, Thomas. “Entrepreneurial Cooperation on the Great Lakes: The Origin of the Methods of American Grain Marketing.” Business History Review 38, (1964): 439-55.

Peck, Anne E., editor. Futures Markets: Their Economic Role. Washington D.C.: American Enterprise Institute for Public Policy Research, 1985.

Peterson, Arthur G. “Futures Trading with Particular Reference to Agricultural Commodities.” Agricultural History 8, (1933): 68-80.

Pierce, Bessie L. A History of Chicago: Volume III, the Rise of a Modern City. New York: Alfred A. Knopf, 1957.

Rothstein, Morton. “The International Market for Agricultural Commodities, 1850-1873.” In Economic Change in the Civil War Era, edited by David T. Gilchrist and W. David Lewis, 62-71. Greenville, DE: Eleutherian Mills-Hagley Foundation, 1966.

Rothstein, Morton. “Frank Norris and Popular Perceptions of the Market.” Agricultural History 56, (1982): 50-66.

Santos, Joseph. “Did Futures Markets Stabilize U.S. Grain Prices?” Journal of Agricultural Economics 53, no. 1 (2002): 25-36.

Silber, William L. “The Economic Role of Financial Futures.” In Futures Markets: Their Economic Role, edited by Anne E. Peck, 83-114. Washington D.C.: American Enterprise Institute for Public Policy Research, 1985.

Stein, Jerome L. The Economics of Futures Markets. Oxford: Basil Blackwell Ltd, 1986.

Taylor, Charles. H. History of the Board of Trade of the City of Chicago. Chicago: R. O. Law, 1917.

Werner, Walter and Steven T. Smith. Wall Street. New York: Columbia University Press, 1991.

Williams, Jeffrey C. “The Origin of Futures Markets.” Agricultural History 56, (1982): 306-16.

Working, Holbrook. “The Theory of the Price of Storage.” American Economic Review 39, (1949): 1254-62.

Working, Holbrook. “Hedging Reconsidered.” Journal of Farm Economics 35, (1953): 544-61.

1 The clearinghouse is typically a corporation owned by a subset of exchange members. For details regarding the clearing arrangements of a specific exchange, go to www.cftc.gov and click on “Clearing Organizations.”

2 The vast majority of contracts are offset. Outright delivery occurs when the buyer receives from, or the seller “delivers” to the exchange a title of ownership, and not the actual commodity or financial security – the urban legend of the trader who neglected to settle his long position and consequently “woke up one morning to find several car loads of a commodity dumped on his front yard” is indeed apocryphal (Hieronymus 1977, 37)!

3 Nevertheless, forward contracts remain popular today (see Peck 1985, 9-12).

4 The importance of New Orleans as a point of departure for U.S. grain and provisions prior to the Civil War is unquestionable. According to Clark (1966), “New Orleans was the leading export center in the nation in terms of dollar volume of domestic exports, except for 1847 and a few years during the 1850s, when New York’s domestic exports exceeded those of the Crescent City” (36).

5 This area was responsible for roughly half of U.S. wheat production and a third of U.S. corn production just prior to 1860. Southern planters dominated corn output during the early to mid-1800s.

6 Millers milled wheat into flour; pork producers fed corn to pigs, which producers slaughtered for provisions; distillers and brewers converted rye and barley into whiskey and malt liquors, respectively; and ranchers fed grains and grasses to cattle, which were then driven to eastern markets.

7 Significant advances in transportation made the grain trade’s eastward expansion possible, but the strong and growing demand for grain in the East made the trade profitable. The growth in domestic grain demand during the early to mid-nineteenth century reflected the strong growth in eastern urban populations. Between 1820 and 1860, the populations of Baltimore, Boston, New York and Philadelphia increased by over 500% (Clark 1966, 54). Moreover, as the 1840s approached, foreign demand for U.S. grain grew. Between 1845 and 1847, U.S. exports of wheat and flour rose from 6.3 million bushels to 26.3 million bushels and corn exports grew from 840,000 bushels to 16.3 million bushels (Clark 1966, 55).

8 Wheat production was shifting to the trans-Mississippi West, which produced 65% of the nation’s wheat by 1899 and 90% by 1909, and railroads based in the Lake Michigan port cities intercepted the Mississippi River trade that would otherwise have headed to St. Louis (Clark 1966, 95). Lake Michigan port cities also benefited from a growing concentration of corn production in the West North Central region – Iowa, Kansas, Minnesota, Missouri, Nebraska, North Dakota and South Dakota, which by 1899 produced 40 percent of the country’s corn (Clark 1966, 4).

9 Corn had to be dried immediately after it was harvested and could only be shipped profitably by water to Chicago, but only after rivers and lakes had thawed; so, country merchants stored large quantities of corn. On the other hand, wheat was more valuable relative to its weight, and it could be shipped to Chicago by rail or road immediately after it was harvested; so, Chicago merchants stored large quantities of wheat.

10 This is consistent with Odle (1964), who adds that “the creators of the new system of marketing [forward contracts] were the grain merchants of the Great Lakes” (439). However, Williams (1982) presents evidence of such contracts between Buffalo and New York City as early as 1847 (309). To be sure, Williams proffers an intriguing case that forward and, in effect, futures trading was active and quite sophisticated throughout New York by the late 1840s. Moreover, he argues that this trading grew not out of activity in Chicago, whose trading activities were quite primitive at this early date, but rather trading in London and ultimately Amsterdam. Indeed, “time bargains” were common in London and New York securities markets in the mid- and late 1700s, respectively. A time bargain was essentially a cash-settled financial forward contract that was unenforceable by law, and as such “each party was forced to rely on the integrity and credit of the other” (Werner and Smith 1991, 31). According to Werner and Smith, “time bargains prevailed on Wall Street until 1840, and were gradually replaced by margin trading by 1860” (68). They add that, “margin trading … had an advantage over time bargains, in which there was little protection against default beyond the word of another broker. Time bargains also technically violated the law as wagering contracts; margin trading did not” (135). Between 1818 and 1840 these contracts comprised anywhere from 0.7% (49-day average in 1830) to 34.6% (78-day average in 1819) of daily exchange volume on the New York Stock & Exchange Board (Werner and Smith 1991, 174).

11 Of course, forward markets could and indeed did exist in the absence of both grading standards and formal exchanges, though to what extent they existed is unclear (see Williams 1982).

12 In the parlance of modern financial futures, the term cost of carry is used instead of the term storage. For example, the cost of carrying a bond is comprised of the cost of acquiring and holding (or storing) it until delivery minus the return earned during the carry period.

13 More specifically, the price of storage is comprised of three components: (1) physical costs such as warehouse and insurance; (2) financial costs such as borrowing rates of interest; and (3) the convenience yield – the return that the merchant, who stores the commodity, derives from maintaining an inventory in the commodity. The marginal costs of (1) and (2) are increasing functions of the amount stored; the more the merchant stores, the greater the marginal costs of warehouse use, insurance and financing. Whereas the marginal benefit of (3) is a decreasing function of the amount stored; put differently, the smaller the merchant’s inventory, the more valuable each additional unit of inventory becomes. Working used this convenience yield to explain a negative price of storage – the nearby contract is priced higher than the faraway contract; an event that is likely to occur when supplies are exceptionally low. In this instance, there is little for inventory dealers to store. Hence, dealers face extremely low physical and financial storage costs, but extremely high convenience yields. The price of storage turns negative; essentially, inventory dealers are willing to pay to store the commodity.

14 Norris’s protagonist, Curtis Jadwin, is a wheat speculator who is emotionally consumed and ultimately destroyed when a nineteenth century CBT wheat futures corner backfires on him, while the welfare of producers and consumers hangs in the balance.

15 One particularly colorful incident in the controversy came when the Supreme Court of Illinois ruled that the CBT had to either make price quotes public or restrict access to everyone. When the Board opted for the latter, it found it needed to “prevent its members from running (often literally) between the [CBT and a bucket shop next door], but with minimal success. Board officials at first tried to lock the doors to the exchange…However, after one member literally battered down the door to the east side of the building, the directors abandoned this policy as impracticable if not destructive” (Lurie 1979, 140).

16 Administrative law is “a body of rules and doctrines which deals with the powers and actions of administrative agencies” that are organizations other than the judiciary or legislature. These organizations affect the rights of private parties “through either adjudication, rulemaking, investigating, prosecuting, negotiating, settling, or informally acting” (Lurie 1979, 9).

17 In 1921 Congress passed the Futures Trading Act, which was declared unconstitutional.

Citation: Santos, Joseph. “A History of Futures Trading in the United States”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/a-history-of-futures-trading-in-the-united-states/

The Economic History of the Fur Trade: 1670 to 1870

Ann M. Carlos, University of Colorado
Frank D. Lewis, Queen’s University

Introduction

A commercial fur trade in North America grew out of the early contact between Indians and European fishermen who were netting cod on the Grand Banks off Newfoundland and on the Bay of Gaspé near Quebec. Indians would trade the pelts of small animals, such as mink, for knives and other iron-based products, or for textiles. Exchange at first was haphazard, and it was only in the late sixteenth century, when the wearing of beaver hats became fashionable, that firms were established that dealt exclusively in furs. High quality pelts are available only where winters are severe, so the trade took place predominantly in the regions we now know as Canada, although some activity took place further south along the Mississippi River and in the Rocky Mountains. There was also a market in deer skins, which predominated in the Appalachians.

The first firms to participate in the fur trade were French, and under French rule the trade spread along the St. Lawrence and Ottawa Rivers, and down the Mississippi. In the seventeenth century, following the Dutch, the English developed a trade through Albany. Then in 1670, a charter was granted by the British crown to the Hudson’s Bay Company, which began operating from posts along the coast of Hudson Bay (see Figure 1). For roughly the next hundred years, this northern region saw competition of varying intensity between the French and the English. With the conquest of New France in 1763, the French trade shifted to Scottish merchants operating out of Montreal. After the negotiation of Jay’s Treaty (1794), the northern border was defined and trade along the Mississippi passed to the American Fur Company under John Jacob Astor. In 1821, the northern participants merged under the name of the Hudson’s Bay Company, and for many decades this merged company continued to trade in furs. Finally, in the 1990s, under pressure from animal rights groups, the Hudson’s Bay Company, which in the twentieth century had become a large Canadian retailer, ended the fur component of its operation.

Figure 1
Hudson’s Bay Company Hinterlands

Source: Ray (1987, plate 60)

The fur trade was based on pelts destined either for the luxury clothing market or for the felting industries, of which hatting was the most important. This was a transatlantic trade. The animals were trapped and exchanged for goods in North America, and the pelts were transported to Europe for processing and final sale. As a result, forces operating on the demand side of the market in Europe and on the supply side in North America determined prices and volumes; while intermediaries, who linked the two geographically separated areas, determined how the trade was conducted.

The Demand for Fur: Hats, Pelts and Prices

However much hats may be considered an accessory today, they were for centuries a mandatory part of everyday dress, for both men and women. Of course styles changed, and, in response to the vagaries of fashion and politics, hats took on various forms and shapes, from the high-crowned, broad-brimmed hat of the first two Stuarts to the conically-shaped, plainer hat of the Puritans. The Restoration of Charles II of England in 1660 and the Glorious Revolution in 1689 brought their own changes in style (Clarke, 1982, chapter 1). What remained a constant was the material from which hats were made – wool felt. The wool came from various animals, but towards the end of the fifteenth century beaver wool began to predominate. Over time, beaver hats became increasingly popular, eventually dominating the market. Only in the nineteenth century did silk replace beaver in high-fashion men’s hats.

Wool Felt

Furs have long been classified as either fancy or staple. Fancy furs are those demanded for the beauty and luster of their pelt. These furs – mink, fox, otter – are fashioned by furriers into garments or robes. Staple furs are sought for their wool. All staple furs have a double coating of hair with long, stiff, smooth hairs called guard hairs which protect the shorter, softer hair, called wool, that grows next to the animal skin. Only the wool can be felted. Each of the shorter hairs is barbed and once the barbs at the ends of the hair are open, the wool can be compressed into a solid piece of material called felt. The prime staple fur has been beaver, although muskrat and rabbit have also been used.

Wool felt was used for over two centuries to make high-fashion hats. Felt is stronger than a woven material. It will not tear or unravel in a straight line; it is more resistant to water, and it will hold its shape even if it gets wet. These characteristics made felt the prime material for hatters especially when fashion called for hats with large brims. The highest quality hats would be made fully from beaver wool, whereas lower quality hats included inferior wool, such as rabbit.

Felt Making

The transformation of beaver skins into felt and then hats was a highly skilled activity. The process required first that the beaver wool be separated from the guard hairs and the skin, and that some of the wool have open barbs, since felt required some open-barbed wool in the mixture. Felt dates back to the nomads of Central Asia, who are said to have invented the process of felting and made their tents from this light but durable material. Although the art of felting disappeared from much of western Europe during the first millennium, felt-making survived in Russia, Sweden, and Asia Minor. As a result of the Medieval Crusades, felting was reintroduced through the Mediterranean into France (Crean, 1962).

In Russia, the felting industry was based on the European beaver (castor fiber). Given their long tradition of working with beaver pelts, the Russians had perfected the art of combing out the short barbed hairs from among the longer guard hairs, a technology that they safeguarded. As a consequence, the early felting trades in England and France had to rely on beaver wool imported from Russia, although they also used domestic supplies of wool from other animals, such as rabbit, sheep and goat. But by the end of the seventeenth century, Russian supplies were drying up, reflecting the serious depletion of the European beaver population.

Coincident with the decline in European beaver stocks was the emergence of a North American trade. North American beaver (castor canadensis) was imported through agents in the English, French and Dutch colonies. Although many of the pelts were shipped to Russia for initial processing, the growth of the beaver market in England and France led to the development of local technologies, and more knowledge of the art of combing. Separating the beaver wool from the pelt was only the first step in the felting process. It was also necessary that some of the barbs on the short hairs be raised or open. On the animal these hairs were naturally covered with keratin to prevent the barbs from opening; thus, to make felt, the keratin had to be stripped from at least some of the hairs. The process was difficult to refine and entailed considerable experimentation by felt-makers. For instance, one felt maker “bundled [the skins] in a sack of linen and boiled [them] for twelve hours in water containing several fatty substances and nitric acid” (Crean, 1962, p. 381). Although such processes removed the keratin, they did so at the price of a lower quality wool.

The opening of the North American trade not only increased the supply of skins for the felting industry, it also provided a subset of skins whose guard hairs had already been removed and the keratin broken down. Beaver pelts imported from North America were classified as either parchment beaver (castor sec – dry beaver), or coat beaver (castor gras – greasy beaver). Parchment beaver were from freshly caught animals, whose skins were simply dried before being presented for trade. Coat beaver were skins that had been worn by the Indians for a year or more. With wear, the guard hairs fell out and the pelt became oily and more pliable. In addition, the keratin covering the shorter hairs broke down. By the middle of the seventeenth century, hatters and felt-makers came to learn that parchment and coat beaver could be combined to produce a strong, smooth, pliable, top-quality waterproof material.

Until the 1720s, beaver felt was produced with relatively fixed proportions of coat and parchment skins, which led to periodic shortages of one or the other type of pelt. The constraint was relaxed when carotting was developed, a chemical process by which parchment skins were transformed into a type of coat beaver. The original carotting formula consisted of salts of mercury diluted in nitric acid, which was brushed on the pelts. The use of mercury was a big advance, but it also had serious health consequences for hatters and felters, who were forced to breathe the mercury vapor for extended periods. The expression “mad as a hatter” dates from this period, as the vapor attacked the nervous systems of these workers.

The Prices of Parchment and Coat Beaver

Drawn from the accounts of the Hudson’s Bay Company, Table 1 presents some eighteenth century prices of parchment and coat beaver pelts. From 1713 to 1726, before the carotting process had become established, coat beaver generally fetched a higher price than parchment beaver, averaging 6.6 shillings per pelt as compared to 5.5 shillings. Once carotting was widely used, however, the prices were reversed, and from 1730 to 1770 parchment exceeded coat in almost every year. The same general pattern is seen in the Paris data, although there the reversal was delayed, suggesting slower diffusion in France of the carotting technology. As Crean (1962, p. 382) notes, Nollet’s L’Art de faire des chapeaux included the exact formula, but it was not published until 1765.

A weighted average of parchment and coat prices in London reveals three episodes. From 1713 to 1722 prices were quite stable, fluctuating within a narrow band of 5.0 to 5.5 shillings per pelt. During the period 1723 to 1745, prices moved sharply higher and remained in the range of 7 to 9 shillings. The years 1746 to 1763 saw another big increase to over 12 shillings per pelt. There are far fewer prices available for Paris, but we do know that in the period 1739 to 1753 the trend was also sharply higher, with prices more than doubling.

Table 1
Price of Beaver Pelts in Britain: 1713-1763
(shillings per skin)

Year Parchment Coat Average(a) Year Parchment Coat Average(a)
1713 5.21 4.62 5.03 1739 8.51 7.11 8.05
1714 5.24 7.86 5.66 1740 8.44 6.66 7.88
1715 4.88 5.49 1741 8.30 6.83 7.84
1716 4.68 8.81 5.16 1742 7.72 6.41 7.36
1717 5.29 8.37 5.65 1743 8.98 6.74 8.27
1718 4.77 7.81 5.22 1744 9.18 6.61 8.52
1719 5.30 6.86 5.51 1745 9.76 6.08 8.76
1720 5.31 6.05 5.38 1746 12.73 7.18 10.88
1721 5.27 5.79 5.29 1747 10.68 6.99 9.50
1722 4.55 4.97 4.55 1748 9.27 6.22 8.44
1723 8.54 5.56 7.84 1749 11.27 6.49 9.77
1724 7.47 5.97 7.17 1750 17.11 8.42 14.00
1725 5.82 6.62 5.88 1751 14.31 10.42 12.90
1726 5.41 7.49 5.83 1752 12.94 10.18 11.84
1727 7.22 1753 10.71 11.97 10.87
1728 8.13 1754 12.19 12.68 12.08
1729 9.56 1755 12.05 12.04 11.99
1730 8.71 1756 13.46 12.02 12.84
1731 6.27 1757 12.59 11.60 12.17
1732 7.12 1758 13.07 11.32 12.49
1733 8.07 1759 15.99 14.68
1734 7.39 1760 13.37 13.06 13.22
1735 8.33 1761 10.94 13.03 11.36
1736 8.72 7.07 8.38 1762 13.17 16.33 13.83
1737 7.94 6.46 7.50 1763 16.33 17.56 16.34
1738 8.95 6.47 8.32

(a) A weighted average of the prices of parchment, coat and half parchment beaver pelts. Weights are based on the trade in these types of furs at Fort Albany. Prices of the individual types of pelts are not available for the years 1727 to 1735.

Source: Carlos and Lewis, 1999.

The Demand for Beaver Hats

The main cause of the rising beaver pelt prices in England and France was the increasing demand for beaver hats, which included hats made exclusively with beaver wool and referred to as “beaver hats,” and those hats containing a combination of beaver and a lower cost wool, such as rabbit. These were called “felt hats.” Unfortunately, aggregate consumption series for eighteenth century Europe are not available. We do, however, have Gregory King’s contemporary work for England, which provides a good starting point. In a table entitled “Annual Consumption of Apparell, anno 1688,” King calculated that consumption of all types of hats was about 3.3 million, or nearly one hat per person. King also included a second category, caps of all sorts, for which he estimated consumption at 1.6 million (Harte, 1991, p. 293). This means that as early as 1700, the potential market for hats in England alone was nearly 5 million per year. Over the next century, the rising demand for beaver pelts was a result of a number of factors including population growth, a greater export market, a shift toward beaver hats from hats made of other materials, and a shift from caps to hats.

The British export data indicate that demand for beaver hats was growing not just in England, but in Europe as well. In 1700 a modest 69,500 beaver hats were exported from England and almost the same number of felt hats; but by 1760, slightly over 500,000 beaver hats and 370,000 felt hats were shipped from English ports (Lawson, 1943, app. I). In total, over the seventy years to 1770, 21 million beaver and felt hats were exported from England. In addition to the final product, England exported the raw material, beaver pelts. In 1760, £15,000 in beaver pelts were exported along with a range of other furs. The hats and the pelts tended to go to different parts of Europe. Raw pelts were shipped mainly to northern Europe, including Germany, Flanders, Holland and Russia; whereas hats went to the southern European markets of Spain and Portugal. In 1750, Germany imported 16,500 beaver hats, while Spain imported 110,000 and Portugal 175,000 (Lawson, 1943, appendices F & G). Over the first six decades of the eighteenth century, these markets grew dramatically, such that the value of beaver hat sales to Portugal alone was £89,000 in 1756-1760, representing about 300,000 hats or two-thirds of the entire export trade.

European Intermediaries in the Fur Trade

By the eighteenth century, the demand for furs in Europe was being met mainly by exports from North America with intermediaries playing an essential role. The American trade, which moved along the main water systems, was organized largely through chartered companies. At the far north, operating out of Hudson Bay, was the Hudson’s Bay Company, chartered in 1670. The Compagnie d’Occident, founded in 1718, was the most successful of a series of monopoly French companies. It operated through the St. Lawrence River and in the region of the eastern Great Lakes. There was also an English trade through Albany and New York, and a French trade down the Mississippi.

The Hudson’s Bay Company and the Compagnie d’Occident, although similar in title, had very different internal structures. The English trade was organized along hierarchical lines with salaried managers, whereas the French monopoly issued licenses (congés) or leased out the use of its posts. The structure of the English company allowed for more control from the London head office, but required systems that could monitor the managers of the trading posts (Carlos and Nicholas, 1990). The leasing and licensing arrangements of the French made monitoring unnecessary, but led to a system where the center had little influence over the conduct of the trade.

The French and English were distinguished as well by how they interacted with the Natives. The Hudson’s Bay Company established posts around the Bay and waited for the Indians, often middlemen, to come to them. The French, by contrast, moved into the interior, directly trading with the Indians who harvested the furs. The French arrangement was more conducive to expansion, and by the end of the seventeenth century, they had moved beyond the St. Lawrence and Ottawa rivers into the western Great Lakes region (see Figure 1). Later they established posts in the heart of the Hudson Bay hinterland. In addition, the French explored the river systems to the south, setting up a post at the mouth of the Mississippi. As noted earlier, after Jay’s Treaty was signed, the French were replaced in the Mississippi region by U.S. interests which later formed the American Fur Company (Haeger, 1991).

The English takeover of New France at the end of the French and Indian Wars in 1763 did not, at first, fundamentally change the structure of the trade. Rather, French management was replaced by Scottish and English merchants operating in Montreal. But, within a decade, the Montreal trade was reorganized into partnerships between merchants in Montreal and traders who wintered in the interior. The most important of these arrangements led to the formation of the Northwest Company, which for the first two decades of the nineteenth century, competed with the Hudson’s Bay Company (Carlos and Hoffman, 1986). By the early decades of the nineteenth century, the Hudson’s Bay Company, the Northwest Company, and the American Fur Company had, combined, a system of trading posts across North America, including posts in Oregon and British Columbia and on the Mackenzie River. In 1821, the Northwest Company and the Hudson’s Bay Company merged under the name of the Hudson’s Bay Company. The Hudson’s Bay Company then ran the trade as a monopsony until the late 1840s, when it began facing serious competition from trappers to the south. The Company’s role in the northwest changed again with the Canadian Confederation in 1867. Over the next decades treaties were signed with many of the northern tribes, forever changing the old fur trade order in Canada.

The Supply of Furs: The Harvesting of Beaver and Depletion

During the eighteenth century, the changing technology of felt production and the growing demand for felt hats were met by attempts to increase the supply of furs, especially the supply of beaver pelts. Any permanent increase, however, was ultimately dependent on the animal resource base. How that base changed over time must be a matter of speculation since no animal counts exist from that period; nevertheless, the evidence we do have points to a scenario in which over-harvesting, at least in some years, gave rise to serious depletion of the beaver and possibly other animals such as marten that were also being traded. Why the beaver were over-harvested was closely related to the prices Natives were receiving, but important as well was the nature of Native property rights to the resource.

Harvests in the Fort Albany and York Factory Regions

That beaver populations along the Eastern seaboard regions of North America were depleted as the fur trade advanced is widely accepted. In fact the search for new sources of supply further west, including the region of Hudson Bay, has been attributed in part to dwindling beaver stocks in areas where the fur trade had been long established. Although there has been little discussion of the impact that the Hudson’s Bay Company and the French, who traded in the region of Hudson Bay, were having on the beaver stock, the remarkably complete records of the Hudson’s Bay Company provide the basis for reasonable inferences about depletion. From 1700 there is an uninterrupted annual series of fur returns at Fort Albany; the fur returns from York Factory begin in 1716 (see Figure 1).

The beaver returns at Fort Albany and York Factory for the period 1700 to 1770 are described in Figure 2. At Fort Albany the number of beaver skins over the period 1700 to 1720 averaged roughly 19,000, with wide year-to-year fluctuations; the range was about 15,000 to 30,000. After 1720 and until the late 1740s average returns declined by about 5,000 skins, and remained within the somewhat narrower range of roughly 10,000 to 20,000 skins. The period of relative stability was broken in the final years of the 1740s. In 1748 and 1749, returns increased to an average of nearly 23,000. Following these unusually strong years, the trade fell precipitously so that in 1756 fewer than 6,000 beaver pelts were received. There was a brief recovery in the early 1760s, but by the end of the decade trade had fallen below even the mid-1750s levels. In 1770, Fort Albany took in just 3,600 beaver pelts. This pattern – unusually large returns in the late 1740s and low returns thereafter – indicates that the beaver in the Fort Albany region were being seriously depleted.

Figure 2
Beaver Traded at Fort Albany and York Factory 1700 – 1770

Source: Carlos and Lewis, 1993.

The beaver returns at York Factory from 1716 to 1770, also described in Figure 2, have some of the key features of the Fort Albany data. After some low returns early on (from 1716 to 1720), the number of beaver pelts increased to an average of 35,000. There were extraordinary returns in 1730 and 1731, when the average was 55,600 skins, but beaver receipts then stabilized at about 31,000 over the remainder of the decade. The first break in the pattern came in the early 1740s shortly after the French established several trading posts in the area. Surprisingly perhaps, given the increased competition, trade in beaver pelts at the Hudson’s Bay Company post increased to an average of 34,300, this over the period 1740 to 1743. Indeed, the 1742 return of 38,791 skins was the largest since the French had established any posts in the region. The returns in 1745 were also strong, but after that year the trade in beaver pelts began a decline that continued through to 1770. Average returns over the rest of the decade were 25,000; the average during the 1750s was 18,000, and just 15,500 in the 1760s. The pattern of beaver returns at York Factory – high returns in the early 1740s followed by a large decline – strongly suggests that, as in the Fort Albany hinterland, the beaver population had been greatly reduced.

The overall carrying capacity of any region, or the size of the animal stock, depends on the nature of the terrain and the underlying biological determinants such as birth and death rates. A standard relationship between the annual harvest and the animal population is the Lotka-Volterra logistic, commonly used in natural resource models to relate the natural growth of a population to the size of that population:
F(X) = aX – bX², a, b > 0 (1)

where X is the population, F(X) is the natural growth in the population, a is the maximum proportional growth rate of the population, and b = a/X̄, where X̄ is the upper limit to population size. The population dynamics of the species exploited depend on the harvest each period:

ΔX = aX – bX² – H (2)

where ΔX is the annual change in the population and H is the harvest. The choice of the parameter a and the maximum population X̄ is central to the population estimates and has been based largely on estimates from the beaver ecology literature and Ontario provincial field reports of beaver densities (Carlos and Lewis, 1993).
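Equation 2 is straightforward to simulate. The following Python fragment is a minimal sketch of that kind of exercise; the parameter values and the harvest path are hypothetical illustrations, not the calibrated values used by Carlos and Lewis (1993).

```python
# A minimal sketch of the harvest dynamics in equation (2); parameters
# are illustrative, not the calibrated values in Carlos and Lewis (1993).

def simulate_population(x0, a, x_bar, harvests):
    """Iterate X(t+1) = X(t) + a*X(t) - b*X(t)**2 - H(t), with b = a/x_bar."""
    b = a / x_bar
    x, path = x0, [x0]
    for h in harvests:
        x = max(x + a * x - b * x * x - h, 0.0)  # stock cannot go negative
        path.append(x)
    return path

a, x_bar = 0.25, 100_000
msy = a * x_bar / 4                  # maximum sustained yield, at X = x_bar/2
path = simulate_population(x0=x_bar / 2, a=a, x_bar=x_bar,
                           harvests=[1.2 * msy] * 30)  # persistent over-harvest
print(f"stock after 30 years: {path[-1]:,.0f}")
```

Run with a harvest persistently above the sustainable yield, as here, the stock declines at an accelerating rate, which is the mechanism behind the depletion scenario discussed below.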

Simulations based on equation 2 suggest that, until the 1730s, beaver populations remained at levels roughly consistent with maximum sustained yield management, sometimes referred to as the biological optimum. But after the 1730s there was a decline in beaver stocks to about half the maximum sustained yield levels. The cause of the depletion was closely related to what was happening in Europe. There, buoyant demand for felt hats and dwindling local fur supplies resulted in much higher prices for beaver pelts. These higher prices, in conjunction with the resulting competition from the French in the Hudson Bay region, led the Hudson’s Bay Company to offer much better terms to Natives who came to their trading posts (Carlos and Lewis, 1999).

Figure 3 reports a price index for furs at Fort Albany and at York Factory. The index represents a measure of what Natives received in European goods for their furs. At Fort Albany, fur prices were close to 70 from 1713 to 1731, but in 1732, in response to higher European fur prices and the entry of la Vérendrye, an important French trader, the price jumped to 81. After that year, prices continued to rise. The pattern at York Factory was similar. Although prices were high in the early years when the post was being established, beginning in 1724 the price settled down to about 70. At York Factory, the jump in price came in 1738, which was the year la Vérendrye set up a trading post in the York Factory hinterland. Prices then continued to increase. It was these higher fur prices that led to over-harvesting and, ultimately, a decline in beaver stocks.

Figure 3
Price Index for Furs: Fort Albany and York Factory, 1713 – 1770

Source: Carlos and Lewis, 2001.

Property Rights Regimes

An increase in price paid to Native hunters did not have to lead to a decline in the animal stocks, because Indians could have chosen to limit their harvesting. Why they did not was closely related to their system of property rights. One can classify property rights along a spectrum with, at one end, open access, where anyone can hunt or fish, and at the other, complete private property, where a sole owner has full control over the resource. Between these extremes lies a range of property rights regimes with access controlled by a community or a government, and where individual members of the group do not necessarily have private property rights. Open access creates a situation where there is less incentive to conserve, because animals not harvested by a particular hunter will be available to other hunters in the future. Thus the closer a system is to open access, the more likely it is that the resource will be depleted.

Across aboriginal societies in North America, one finds a range of property rights regimes. Native Americans did have a concept of trespass and of property, but individual and family rights to resources were not absolute. Under what has sometimes been called the Good Samaritan principle (McManus, 1972), outsiders were not permitted to harvest furs on another’s territory for trade, but they were allowed to hunt game and even beaver for food. Combined with this limitation on private property was an Ethic of Generosity that included liberal gift-giving, whereby any visitor to one’s encampment was to be supplied with food and shelter.

Why a social norm such as gift-giving or the related Good Samaritan principle emerged had to do with the nature of the aboriginal environment. The primary objective of aboriginal societies was survival. Hunting was risky, and so rules were put in place that would reduce the risk of starvation. As Berkes et al. (1989, p. 153) note of such societies: “all resources are subject to the overriding principle that no one can prevent a person from obtaining what he needs for his family’s survival.” Such actions were reciprocal and, especially in the sub-arctic world, served as an insurance mechanism. These norms, however, also reduced the incentive to conserve the beaver and other animals that were part of the fur trade. The combination of these norms and the increasing price paid to Native traders led to the large harvests in the 1740s and ultimately depletion of the animal stock.

The Trade in European Goods

Indians were the primary agents in the North American commercial fur trade. It was they who hunted the animals, and transported and traded the pelts or skins to European intermediaries. The exchange was voluntary. In return for their furs, Indians obtained both access to an iron technology to improve production and access to a wide range of new consumer goods. It is important to recognize, however, that although the European goods were new to aboriginals, the concept of exchange was not. The archaeological evidence indicates an extensive trade between Native tribes in the north and south of North America prior to European contact.

The extraordinary records of the Hudson’s Bay Company allow us to form a clear picture of what Indians were buying. Table 2 lists the goods received by Natives at York Factory, which was by far the largest of the Hudson’s Bay Company trading posts. As is evident from the table, the commercial trade was more than in beads and baubles or even guns and alcohol; rather Native traders were receiving a wide range of products that improved their ability to meet their subsistence requirements and allowed them to raise their living standards. The items have been grouped by use. The producer goods category was dominated by firearms, including guns, shot and powder, but also includes knives, awls and twine. The Natives traded for guns of different lengths. The 3-foot gun was used mainly for waterfowl and in heavily forested areas where game could be shot at close range. The 4-foot gun was more accurate and suitable for open spaces. In addition, the 4-foot gun could play a role in warfare. Maintaining guns in the harsh sub-arctic environment was a serious problem, and ultimately, the Hudson’s Bay Company was forced to send gunsmiths to its trading posts to assess quality and help with repairs. Kettles and blankets were the main items in the “household goods” category. These goods probably became necessities to the Natives who adopted them. Then there were the luxury goods, which have been divided into two broad categories: “tobacco and alcohol,” and “other luxuries,” dominated by cloth of various kinds (Carlos and Lewis, 2001; 2002).

Table 2
Value of Goods Received at York Factory in 1740 (made beaver)

We have much less information about the French trade. The French are reported to have exchanged similar items, although given their higher transport costs, both the furs received and the goods traded tended to be higher in value relative to weight. The Europeans, it might be noted, supplied no food to the trade in the eighteenth century. In fact, Indians helped provision the posts with fish and fowl. This role of food purveyor grew in the nineteenth century as groups known as the “home guard Cree” came to live around the posts; as well, pemmican, supplied by Natives, became an important source of nourishment for Europeans involved in the buffalo hunts.

The value of the goods listed in Table 2 is expressed in terms of the unit of account, the made beaver, which the Hudson’s Bay Company used to record its transactions and determine the rate of exchange between furs and European goods. The price of a prime beaver pelt was 1 made beaver, and every other type of fur and good was assigned a price based on that unit. For example, a marten (a type of mink) was a third of a made beaver, a blanket was 7 made beaver, a gallon of brandy, 4 made beaver, and a yard of cloth, 3½ made beaver. These were the official prices at York Factory. Thus Indians, who traded at these prices, received, for example, a gallon of brandy for four prime beaver pelts, two yards of cloth for seven beaver pelts, and a blanket for 21 marten pelts. This was barter trade in that no currency was used; and although the official prices implied certain rates of exchange between furs and goods, Hudson’s Bay Company factors were encouraged to trade at rates more favorable to the Company. The actual rates, however, depended on market conditions in Europe and, most importantly, the extent of French competition in Canada. Figure 3 illustrates the rise in the price of furs at York Factory and Fort Albany in response to higher beaver prices in London and Paris, as well as to a greater French presence in the region (Carlos and Lewis, 1999). The increase in price also reflects the bargaining ability of Native traders during periods of direct competition between the English and French and later the Hudson’s Bay Company and the Northwest Company. At such times, the Native traders would play both parties off against each other (Ray and Freeman, 1978).
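The made beaver arithmetic above can be checked mechanically. The sketch below encodes the official York Factory rates quoted in the text; the function and variable names are illustrative and not drawn from the Company records.

```python
# A minimal sketch of the made beaver (MB) unit of account, using the
# official York Factory rates quoted in the text above.

OFFICIAL_PRICES_MB = {
    "prime beaver pelt": 1.0,
    "marten pelt": 1.0 / 3.0,   # 21 marten pelts buy one 7-MB blanket
    "blanket": 7.0,
    "brandy (gallon)": 4.0,
    "cloth (yard)": 3.5,
}

def pelts_needed(good, quantity, pelt="prime beaver pelt"):
    """Pelts a Native trader must deliver for `quantity` units of `good`."""
    cost_mb = OFFICIAL_PRICES_MB[good] * quantity
    return cost_mb / OFFICIAL_PRICES_MB[pelt]

print(pelts_needed("brandy (gallon)", 1))               # 4.0 prime beaver
print(pelts_needed("cloth (yard)", 2))                  # 7.0 prime beaver
print(pelts_needed("blanket", 1, pelt="marten pelt"))   # ~21 marten pelts
```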

The records of the Hudson’s Bay Company provide us with a unique window on the trading process, including the bargaining ability of Native traders, which is evident in the range of commodities received. Natives bought only goods they wanted, and the Company records make clear that it was largely the Natives who determined the nature and quality of those goods. The records also tell us how income from the trade was allocated. The breakdown differed by post and varied over time; but, for example, in 1740 at York Factory, the distribution was: producer goods – 44 percent; household goods – 9 percent; alcohol and tobacco – 24 percent; and other luxuries – 23 percent. An important implication of the trade data is that, like many Europeans and most American colonists, Native Americans were taking part in the consumer revolution of the eighteenth century (de Vries, 1993; Shammas, 1993). In addition to necessities, they were consuming a remarkable variety of luxury products. Cloth, including baize, duffel, flannel, and gartering, was by far the largest class, but they also purchased beads, combs, looking glasses, rings, shirts, and vermillion, among a much longer list. Because these items were heterogeneous in nature, the Hudson’s Bay Company’s head office went to great lengths to satisfy the specific tastes of Native consumers. Attempts were also made, not always successfully, to introduce new products (Carlos and Lewis, 2002).

Perhaps surprising, given the emphasis it has received in the historical literature, was the comparatively small role of alcohol in the trade. At York Factory in 1740, Native traders received a total of 494 gallons of brandy and “strong water,” which had a value of 1,976 made beaver. More than twice this amount was spent on tobacco in that year, nearly five times as much on firearms, twice as much on cloth, and more was spent on blankets and kettles than on alcohol. Thus brandy, although a significant item of trade, was by no means a dominant one. In addition, alcohol could hardly have created serious social problems during this period: the amount received would have allowed no more than ten two-ounce drinks per year for each adult in the Native population living in the region.
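The per-adult figure follows from a straightforward unit conversion (a worked check of the claim, taking a gallon as 128 fluid ounces; the underlying population estimate is not given in the text):

\[
494 \text{ gallons} \times 128 \tfrac{\text{oz}}{\text{gallon}} \approx 63{,}200 \text{ oz} \approx 31{,}600 \text{ two-ounce drinks,}
\]

so a ceiling of ten drinks per adult per year implies an adult population of at least \(31{,}600 / 10 \approx 3{,}200\); any larger population would mean still fewer drinks per person.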

The Labor Supply of Natives

Another important question can be addressed using the trade data. Were Natives “lazy and improvident,” as some contemporaries described them, or were they “industrious,” like the American colonists and many Europeans? Central to answering this question is how Native groups responded to the price of furs, which began rising in the 1730s. Much of the literature argues that Indian trappers reduced their effort in response to higher fur prices; that is, that they had backward-bending supply curves of labor. On this view, Natives had a fixed demand for European goods that, at higher fur prices, could be met with fewer furs, and hence less effort. Although widely cited, the argument does not stand up. Not only were higher fur prices accompanied by larger total harvests of furs in the region, but the pattern of Native expenditure also points to greater effort. From the late 1730s to the 1760s, as the price of furs rose, the share of expenditure on luxury goods increased dramatically (see Figure 4). Natives were not content simply to accept their good fortune by working less; rather, they seized the opportunity offered by the strong fur market, increasing their effort in the commercial sector and thereby dramatically augmenting their purchases of the goods, namely the luxuries, that could raise their living standards.
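The logic of the backward-bending argument, and of its rejection, can be put compactly (a sketch of the reasoning, not a formulation taken from the sources). If Natives had targeted a fixed volume of European goods worth \(\bar{M}\) made beaver, the quantity of pelts supplied at fur price \(p\) would have been

\[
Q(p) = \frac{\bar{M}}{p}, \qquad \frac{dQ}{dp} = -\frac{\bar{M}}{p^{2}} < 0,
\]

so harvests should have fallen as \(p\) rose. Instead, both \(p\) and total harvests rose, so trade income \(pQ\) rose on both counts, and the growing share of that income spent on luxuries is the opposite of what a fixed-demand, backward-bending supply curve predicts.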

Figure 4
Native Expenditure Shares at York Factory, 1716-1770

Source: Carlos and Lewis, 2001.

A Note on the Non-commercial Sector

As important as the fur trade was to Native Americans in the sub-arctic regions of Canada, commerce with the Europeans comprised just one, relatively small, part of their overall economy. Exact figures are not available, but the traditional sectors (hunting, gathering, food preparation and, to some extent, agriculture) must have accounted for at least 75 to 80 percent of Native labor during these decades. Nevertheless, despite the limited time spent in commercial activity, the fur trade had a profound effect on the nature of the Native economy and Native society. The introduction of European producer goods, such as guns, and of household goods, mainly kettles and blankets, changed the way Native Americans achieved subsistence; and the European luxury goods expanded the range of products that allowed them to move beyond subsistence. Most importantly, the fur trade connected Natives to Europeans in ways that affected how and how much they chose to work, where they chose to live, and how they exploited the resources on which the trade and their survival were based.

References

Berkes, Fikret, David Feeny, Bonnie J. McCay, and James M. Acheson. “The Benefits of the Commons.” Nature 340 (July 13, 1989): 91-93.

Braund, Kathryn E. Holland. Deerskins and Duffels: The Creek Indian Trade with Anglo-America, 1685-1815. Lincoln: University of Nebraska Press, 1993.

Carlos, Ann M., and Elizabeth Hoffman. “The North American Fur Trade: Bargaining to a Joint Profit Maximum under Incomplete Information, 1804-1821.” Journal of Economic History 46, no. 4 (1986): 967-86.

Carlos, Ann M., and Frank D. Lewis. “Indians, the Beaver and the Bay: The Economics of Depletion in the Lands of the Hudson’s Bay Company, 1700-1763.” Journal of Economic History 53, no. 3 (1993): 465-94.

Carlos, Ann M., and Frank D. Lewis. “Property Rights, Competition and Depletion in the Eighteenth-Century Canadian Fur Trade: The Role of the European Market.” Canadian Journal of Economics 32, no. 3 (1999): 705-28.

Carlos, Ann M., and Frank D. Lewis. “Property Rights and Competition in the Depletion of the Beaver: Native Americans and the Hudson’s Bay Company.” In The Other Side of the Frontier: Economic Explorations in Native American History, edited by Linda Barrington, 131-149. Boulder, CO: Westview Press, 1999.

Carlos, Ann M., and Frank D. Lewis. “Trade, Consumption, and the Native Economy: Lessons from York Factory, Hudson Bay.” Journal of Economic History 61, no. 4 (2001): 1037-64.

Carlos, Ann M., and Frank D. Lewis. “Marketing in the Land of Hudson Bay: Indian Consumers and the Hudson’s Bay Company, 1670-1770.” Enterprise and Society 2 (2002): 285-317.

Carlos, Ann M., and Stephen Nicholas. “Agency Problems in Early Chartered Companies: The Case of the Hudson’s Bay Company.” Journal of Economic History 50, no. 4 (1990): 853-75.

Clarke, Fiona. Hats. London: Batsford, 1982.

Corner, David. “The Tyranny of Fashion: The Case of the Felt-Hatting Trade in the Late Seventeenth and Eighteenth Centuries.” Textile History 22, no. 2 (1991): 153-78.

Crean, J. F. “Hats and the Fur Trade.” Canadian Journal of Economics and Political Science 28, no. 3 (1962): 373-86.

de Vries, Jan. “Between Purchasing Power and the World of Goods: Understanding the Household Economy in Early Modern Europe.” In Consumption and the World of Goods, edited by John Brewer and Roy Porter, 85-132. London: Routledge, 1993.

Ginsburg, Madeleine. The Hat: Trends and Traditions. London: Studio Editions, 1990.

Haeger, John D. John Jacob Astor: Business and Finance in the Early Republic. Detroit: Wayne State University Press, 1991.

Harte, N. B. “The Economics of Clothing in the Late Seventeenth Century.” Textile History 22, no. 2 (1991): 277-96.

Heidenreich, Conrad E., and Arthur J. Ray. The Early Fur Trade: A Study in Cultural Interaction. Toronto: McClelland and Stewart, 1976.

Helm, June, ed. Handbook of North American Indians, Volume 6: Subarctic. Washington: Smithsonian, 1981.

Innis, Harold. The Fur Trade in Canada. Revised edition. Toronto: University of Toronto Press, 1956.

Krech III, Shepard. The Ecological Indian: Myth and History. New York: Norton, 1999.

Lawson, Murray G. Fur: A Study in English Mercantilism. Toronto: University of Toronto Press, 1943.

McManus, John. “An Economic Analysis of Indian Behavior in the North American Fur Trade.” Journal of Economic History 32, no. 1 (1972): 36-53.

Ray, Arthur J. Indians in the Fur Trade: Their Role as Hunters, Trappers and Middlemen in the Lands Southwest of Hudson Bay, 1660-1870. Toronto: University of Toronto Press, 1974.

Ray, Arthur J. and Donald Freeman. “Give Us Good Measure”: An Economic Analysis of Relations between the Indians and the Hudson’s Bay Company before 1763. Toronto: University of Toronto Press, 1978.

Ray, Arthur J. “Bayside Trade, 1720-1780.” In Historical Atlas of Canada 1, edited by R. Cole Harris, plate 60. Toronto: University of Toronto Press, 1987.

Rich, E. E. Hudson’s Bay Company, 1670-1870. 2 vols. Toronto: McClelland and Stewart, 1960.

Rich, E.E. “Trade Habits and Economic Motivation among the Indians of North America.” Canadian Journal of Economics and Political Science 26, no. 1 (1960): 35-53.

Shammas, Carole. “Changes in English and Anglo-American Consumption from 1550-1800.” In Consumption and the World of Goods, edited by John Brewer and Roy Porter, 177-205. London: Routledge, 1993.

Wien, Thomas. “Selling Beaver Skins in North America and Europe, 1720-1760: The Uses of Fur-Trade Imperialism.” Journal of the Canadian Historical Association, New Series 1 (1990): 293-317.

Citation: Carlos, Ann and Frank Lewis. “Fur Trade (1670-1870)”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/the-economic-history-of-the-fur-trade-1670-to-1870/