The Economic History of Korea

Myung Soo Cha, Yeungnam University

Three Periods

Two regime shifts divide the economic history of Korea during the past six centuries into three distinct periods: 1) the period of Malthusian stagnation up to 1910, when Japan annexed Korea; 2) the colonial period from 1910 to 1945, when the country embarked upon modern economic growth; and 3) the post-colonial decades, when living standards improved rapidly in South Korea, while North Korea returned to the world of disease and starvation. The dramatic history of living standards in Korea presents one of the most convincing pieces of evidence that institutions — the government in particular — matter for economic growth.

Dynastic Degeneration

The founders of the Chosŏn dynasty (1392-1910) imposed a tribute system on a little-commercialized peasant economy, collecting taxes in the form of a wide variety of products and mobilizing labor to obtain the handicrafts and services the state needed. From the late sixteenth to the early seventeenth century, invading armies from Japan and China shattered the command system and forced a transition to a market economy. The damaged bureaucracy started to receive taxes in money commodities — rice and cotton textiles — and eventually minted copper coins and lifted restrictions on trade. The wars also dealt a serious blow to slavery and the pre-war system of forced labor, allowing labor markets to emerge.

Markets were slow to develop: grain markets in agricultural regions of Korea appeared less integrated than those in comparable parts of China and Japan. Population and acreage, however, recovered quickly from the adverse impact of the wars. Population growth came to a halt around 1800, and a century of demographic stagnation followed, owing to higher mortality. During the nineteenth century, living standards appeared to deteriorate. Both wages and rents fell, tax receipts shrank, and budget deficits expanded, forcing the government to resort to debasement. Peasant rebellions occurred more frequently, and poor peasants left Korea for northern China.

Given that both acreage and population remained stable during the nineteenth century, the worsening living standards imply that aggregate output contracted, because land and labor were being used ever more inefficiently. The decline in efficiency appears to have had much to do with the disintegrating system of water control, which included flood control and irrigation.

The water control problem had institutional roots, as in Qing China. Population growth caused rapid deforestation, as peasants could readily obtain farmland by burning off forests, where property rights usually remained ill-defined. (This contrasts with Tokugawa Japan, where conflicts and litigation following competitive exploitation of forests led to forest regulation.) While deforestation wrought havoc on reservoirs by increasing the incidence and intensity of flooding, private individuals had little incentive to repair the damage, as they expected others to free-ride on the benefits of their efforts. Keeping the system of water control in good condition required public initiatives, which the dynastic government could not undertake. During the nineteenth century, powerful landowning families took turns controlling underage or ailing kings, reducing the state to an instrument serving private interests. Failing to take measures to maintain irrigation, provincial officials accelerated its decay by taking bribes in return for conniving at the practice of farming on the rich soil alongside reservoirs. Peasants responded to the decaying irrigation by developing new rice seed varieties, which resisted droughts better but yielded less. They also tried to counter the increasingly unstable water supply by building waterways linking farmlands with rivers, which frequently met opposition from people farming further downstream. Not only did provincial administrators fail to settle the water disputes, but some of them became central causes of clashes. In 1894 peasants protested against a local administrator's attempt to generate private income by collecting fees for the use of waterways that the peasants themselves had built. The protest quickly developed into a nationwide peasant rebellion, which the crumbling government could suppress only by calling in military forces from China and Japan. An unforeseen consequence of the rebellion was the Sino-Japanese War, fought on Korean soil, in which Japan defeated China, tipping the balance of power in Korea critically in Japan's favor.

The water control problem affected primarily rice farming productivity: during the nineteenth century paddy land prices (measured in rice) fell, while dry farm prices (measured in dry farm products) rose. Peasants and landlords converted paddy lands into dry farms during the nineteenth century, and workers moved out of agriculture into handicrafts and commerce. Despite this proto-industrialization, late dynastic Korea remained less urbanized than Qing China, not to mention Tokugawa Japan. Seasonal fluctuations in rice prices in the main agricultural regions of Korea were far wider than those observed in Japan during the nineteenth century, implying a significantly higher interest rate, a lower level of capital per person, and therefore lower living standards for Korea. In the mid-nineteenth century paddy land productivity in Korea was about half of that in Japan.
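
The step from seasonal price swings to interest rates rests on a standard storage-arbitrage argument; the following sketch uses illustrative notation not found in the underlying studies. A merchant who buys rice at the post-harvest price P(harvest) and stores it until the pre-harvest lean season will do so only if the expected lean-season price covers interest and storage costs:

P(lean) ≈ P(harvest) × (1 + r + s)

where r is the interest rate over the storage interval and s is the unit cost of storage. Rearranging, r ≈ [P(lean) - P(harvest)] / P(harvest) - s: the wider the seasonal price gap, the higher the implied interest rate for any given storage cost.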

Colonial Transition to Modern Economic Growth

Less than two decades after having been opened by Commodore Perry, Japan first made its ambitions regarding Korea known by forcing the country open to trade in 1876. Defeating Russia in the war of 1904-05, Japan virtually annexed Korea, a step made official five years later. What replaced the feeble and predatory bureaucracy of the Chosŏn dynasty was a developmental state. Drawing on the Meiji government's experience, the colonial state introduced a set of expensive policy measures to modernize Korea. One important project was to improve infrastructure: railway lines were extended, and roads, harbors, and communication networks were improved, which rapidly integrated goods and factor markets both nationally and internationally. Another project was a vigorous health campaign: the colonial government improved public hygiene, introduced modern medicine, and built hospitals, significantly accelerating the mortality decline set in motion around 1890, apparently by the introduction of smallpox vaccination. The mortality transition resulted in a population expanding by 1.4 percent per year during the colonial period. The third project was to revamp education. As modern teaching institutions quickly replaced traditional schools teaching Chinese classics, the primary school enrollment ratio rose from 1 percent in 1910 to 47 percent in 1943. Finally, the cadastral survey (1910-18) modernized and legalized property rights to land, which boosted not only the efficiency of land use but also tax revenue from landowners. These modernization efforts generated sizable public deficits, which the colonial government financed partly by floating bonds in Japan and partly through unilateral transfers from the Japanese government.

The colonial government implemented industrial policy as well. The Rice Production Development Program (1920-1933), a policy response to the Rice Riots in Japan in 1918, was aimed at increasing rice supply within the Japanese empire. In colonial Korea, the program placed particular emphasis upon reversing the decay in water control. The colonial government provided subsidies for irrigation projects, and set up institutions to lower information, negotiation, and enforcement costs in building new waterways and reservoirs. Improved irrigation made it possible for peasants to grow high yielding rice seed varieties. Completion of a chemical fertilizer factory in 1927 increased the use of fertilizer, further boosting the yields from the new type of rice seeds. Rice prices fell rapidly in the late 1920s and early 1930s in the wake of the world agricultural depression, leading to the suspension of the program in 1933.

Despite the Rice Program, the structure of the colonial economy shifted steadily away from agriculture toward manufacturing from the beginning of colonial rule. From 1911 to 1940 the share of manufacturing in GDP increased from 6 percent to 28 percent, while the share of agriculture fell from 76 percent to 41 percent. Major causes of the structural change included the diffusion of modern manufacturing technology, the world agricultural depression shifting the terms of trade in favor of manufacturing, and Japan's early recovery from the Great Depression generating an investment boom in the colony. Korea's cheap labor and natural resources, together with the introduction of controls on output and investment in Japan to mitigate the impact of the Depression, also helped attract direct investment to the colony. Finally, subjugating party politicians and pushing Japan into the Second World War with the invasion of China in 1937, the Japanese military began to develop the northern parts of the Korean peninsula as an industrial base producing munitions.

The institutional modernization, technological diffusion, and inflow of Japanese capital put an end to the Malthusian degeneration and pushed Korea onto the path of modern economic growth. Both rents and wages stopped falling and began to rise from the early twentieth century. As the population explosion made labor increasingly abundant vis-à-vis land, rents increased more rapidly than wages, suggesting that income distribution became less equal during the colonial period. Per capita output rose faster than one percent per year from 1911 to 1938.

Per capita grain consumption declined during the colonial period, providing grounds for the traditional criticism that Japanese colonialism exploited Korea. However, per capita real consumption increased, due to rising consumption of non-grain foods and services, and Koreans were also getting better education and living longer. In the late 1920s, life expectancy at birth was 37 years, an estimate several years longer than in China and almost ten years shorter than in Japan. Life expectancy increased to 43 years by the end of the colonial period. Mean male stature was slightly above 160 centimeters at the end of the 1920s, not significantly different from Chinese or Japanese heights, and appears to have declined during the latter half of the colonial period.

South Korean Prosperity

With the end of the Second World War in 1945, two separate regimes emerged on the Korean peninsula to replace the colonial government. The U.S. military government took over the southern half, while the Soviet Union set up a communist Korean leadership in the northern half. Decolonization and political division meant the sudden disruption of trade both with Japan and within Korea, causing serious economic turmoil. Dealing with the post-colonial chaos with the help of economic aid, the U.S. military government privatized properties previously owned by the Japanese government and Japanese civilians. The first South Korean government, established in 1948, carried out a land reform, making land distribution more egalitarian. Then the Korean War broke out in 1950, killing one and a half million people and destroying about a quarter of the capital stock over its three-year course.

After the war, South Korean policymakers set about stimulating economic growth by promoting indigenous industrial firms, following the example of many other post-World War II developing countries. The government selected firms in targeted industries and gave them privileges to buy foreign currencies and to borrow funds from banks at preferential rates. It also erected tariff barriers and imposed a prohibition on manufactured imports, hoping that the protection would give domestic firms a chance to improve productivity through learning-by-doing and importing advanced technologies. Under the policy, known as import-substitution industrialization (ISI), however, entrepreneurs seemed more interested in maximizing and perpetuating favors by bribing bureaucrats and politicians. This behavior, dubbed directly unproductive profit-seeking (DUP) activities, caused efficiency to falter and living standards to stagnate, providing a background to the collapse of the First Republic in April 1960.

A military coup led by General Park Chung Hee overthrew the short-lived Second Republic in May 1961, and the new government shifted to a strategy of stimulating growth through export promotion (EP hereafter), although ISI was not altogether abandoned. Under EP, policymakers gave various types of favors — low-interest loans being the most important — to exporting firms according to their export performance. As the qualification for special treatment was quantifiable and objective, the room for DUP became significantly smaller. Another advantage of EP over ISI was that it accelerated productivity advances by placing firms under the discipline of export markets and by widening contact with the developed world: efficiency growth was significantly faster in export industries than in the rest of the economy. In the decade following the shift to EP, per capita output doubled, and South Korea became an industrialized country: from 1960/62 to 1973/75 the share of agriculture in GDP fell from 45 percent to 25 percent, while the share of manufacturing rose from 9 percent to 27 percent. One important factor contributing to this achievement was that the authoritarian government could maintain relative independence from special interests and avoid capture by them.

The withdrawal of U.S. troops from Vietnam in the early 1970s and the subsequent communist conquest of the region alarmed the South Korean leadership, which had been coping with the threat of North Korea with the help of the U.S. military presence. Park Chung Hee's reaction was to reduce reliance on U.S. armed support by expanding the capability to produce munitions, which required returning to ISI to build heavy and chemical industries (HCI). The government intervened heavily in financial markets, directing banks to provide low-interest loans to chaebols — conglomerates of businesses owned by a single family — selected for the task of developing different sectors of HCI. While it succeeded in expanding the capital-intensive industries more rapidly than the rest of the economy, the HCI drive generated multiple symptoms of distortion, including rapidly slowing growth, worsening inflation, and an accumulation of non-performing loans.

Again ISI ended with a regime shift, triggered by Park Chung Hee's assassination in 1979. In the 1980s, the succeeding leadership made systematic attempts to sort out the unwelcome legacy of the HCI drive by deregulating the trade and financial sectors. In the 1990s, liberalization of the capital account followed, causing a rapid accumulation of short-term external debt. This, together with a highly leveraged corporate sector and a banking sector destabilized by financial repression, provided the background to the contagion of the financial crisis from Southeast Asia in 1997. The crisis provided strong momentum for corporate and financial sector reform.

In the quarter century following the policy shift in the early 1960s, South Korean per capita output grew at an unusually rapid rate of 7 percent per year, a growth performance paralleled only by Taiwan and two city-states, Hong Kong and Singapore. The benefits of the growth spread to a wider portion of South Koreans from the end of the 1970s, when the Gini coefficient (which measures the inequality of income distribution), on the rise since the colonial period, began to fall. The growth was attributable far more to increased use of productive inputs — physical capital in particular — than to productivity advances. The rapid capital accumulation was driven by an increasingly high savings rate, the result of a falling dependency ratio, itself a lagged outcome of rapidly falling mortality during the colonial period. The high growth was also aided by the accumulation of human capital, which started with the introduction of modern education under Japanese rule. Finally, the South Korean developmental state, as symbolized by Park Chung Hee, a former officer of the Japanese Imperial Army who served in wartime Manchuria, was closely modeled on the colonial system of government. In short, South Korea grew on the shoulders of the colonial achievement, rather than emerging out of the ashes left by the Korean War, as is sometimes asserted.

North Korean Starvation

Nor did the North Korean economy emerge out of a void. The founders of the regime took over the command system set up by the Japanese rulers to support the invasion of China. They also benefited from the colonial industrialization concentrated in the north, which had raised the standard of living in the north above that in the south by the end of colonial rule. While this economic advantage led the North Korean leadership to feel confident enough to invade the South in 1950, the lead could not be sustained: North Korea started to lag behind the fast-growing South from the late 1960s, and then suffered a tragic decline in living standards in the 1990s.

After the conclusion of the Korean War, the North Korean power elite adopted a strategy of driving growth through forced saving, which quickly proved unsustainable for several reasons. First, managers and workers in collective farms and state enterprises had little incentive to improve productivity so as to counter the falling marginal productivity of capital. Second, the country's self-imposed isolation made it difficult to benefit from the advanced technologies of the developed world through trade and foreign investment. Finally, despotic and militaristic rule diverted resources to unproductive purposes and disturbed the consistency of planning.

The economic stalemate forced the ruling elite to experiment with material incentives and independent accounting for state enterprises. However, they could not push the institutional reform far enough, for fear that it might destabilize their totalitarian rule. Efforts were also made to attract foreign capital, but these too ended in failure. Having spent the funds lent by Western banks in the early 1970s largely for military purposes, North Korea defaulted on the loans. Laws introduced in the 1980s to draw foreign direct investment had little effect.

The collapse of centrally planned economies in the late 1980s virtually ended energy and capital goods imports at subsidized prices, dealing a serious blow to the wobbly regime. Desperate efforts to resolve chronic food shortages by expanding acreage through deforestation made the country vulnerable to climatic shocks in the 1990s. The end result was a disastrous subsistence crisis, to which the militarist regime responded by extorting concessions from the rest of the world through brinkmanship diplomacy.

Further Reading

Amsden, Alice. Asia's Next Giant: South Korea and Late Industrialization. New York: Oxford University Press, 1989.

Ban, Sung Hwan. “Agricultural Growth in Korea.” In Agricultural Growth in Japan, Taiwan, Korea, and the Philippines, edited by Yujiro Hayami, Vernon W. Ruttan, and Herman M. Southworth, 96-116. Honolulu: University Press of Hawaii, 1979.

Cha, Myung Soo. “Imperial Policy or World Price Shocks? Explaining Interwar Korean Consumption Trend.” Journal of Economic History 58, no. 3 (1998): 731-754.

Cha, Myung Soo. “The Colonial Origins of Korea’s Market Economy.” In Asia-Pacific Dynamism, 1550-2000, edited by A.J.H. Latham and H. Kawakatsu, 86-103. London: Routledge, 2000.

Cha, Myung Soo. “Facts and Myths about Korea’s Economic Past.” Forthcoming in Australian Economic History Review 44 (2004).

Cole, David C. and Yung Chul Park. Financial Development in Korea, 1945-1978. Cambridge: Harvard University Press, 1983.

Dollar, David and Kenneth Sokoloff. “Patterns of Productivity Growth in South Korean Manufacturing Industries, 1963-1979.” Journal of Development Economics 33, no. 2 (1990): 309-27.

Eckert, Carter J. Offspring of Empire: The Koch’ang Kims and the Colonial Origins of Korean Capitalism, 1876-1945. Seattle: University of Washington Press, 1991.

Gill, Insong. “Stature, Consumption, and the Standard of Living in Colonial Korea.” In The Biological Standard of Living in Comparative Perspective, edited by John Komlos and Joerg Baten, 122-138. Stuttgart: Franz Steiner Verlag, 1998.

Gragert, Edwin H. Landownership under Colonial Rule: Korea’s Japanese Experience, 1900-1935. Honolulu: University Press of Hawaii, 1994.

Haggard, Stephan. The Political Economy of the Asian Financial Crisis. Washington: Institute for International Economics, 2000.

Haggard, Stephan, David Kang and Chung-in Moon. “Japanese Colonialism and Korean Development: A Critique.” World Development 25 (1997): 867-81.

Haggard, Stephan, Byung-kook Kim and Chung-in Moon. “The Transition to Export-led Growth in South Korea: 1954-1966.” Journal of Asian Studies 50, no. 4 (1991): 850-73.

Kang, Kenneth H. “Why Did Koreans Save So Little and Why Do They Now Save So Much?” International Economic Journal 8 (1994): 99-111.

Kang, Kenneth H., and Vijaya Ramachandran. “Economic Transformation in Korea: Rapid Growth without an Agricultural Revolution?” Economic Development and Cultural Change 47, no. 4 (1999): 783-801.

Kim, Kwang Suk and Michael Roemer. Growth and Structural Transformation. Cambridge, MA: Harvard University Press, 1979.

Kimura, Mitsuhiko. “From Fascism to Communism: Continuity and Development of Collectivist Economic Policy in North Korea.” Economic History Review 52, no.1 (1999): 69-86.

Kimura, Mitsuhiko. “Standards of Living in Colonial Korea: Did the Masses Become Worse Off or Better Off under Japanese Rule?” Journal of Economic History 53, no. 3 (1993): 629-652.

Kohli, Atul. “Where Do High Growth Political Economies Come From? The Japanese Lineage of Korea’s ‘Developmental State’.” World Development 22, no. 9 (1994): 1269-93.

Krueger, Anne. The Developmental Role of the Foreign Sector and Aid. Cambridge: Harvard University Press, 1982.

Kwon, Tai Hwan. Demography of Korea: Population Change and Its Components, 1925-66. Seoul: Seoul National University Press, 1977.

Noland, Marcus. Avoiding the Apocalypse: The Future of the Two Koreas. Washington: Institute for International Economics, 2000.

Palais, James B. Politics and Policy in Traditional Korea. Cambridge: Harvard University Press, 1975.

Stern, Joseph J., Ji-hong Kim, Dwight H. Perkins and Jung-ho Yoo, editors. Industrialization and the State: The Korean Heavy and Chemical Industry Drive. Cambridge: Harvard University Press, 1995.

Woo, Jung-en. Race to the Swift: State and Finance in Korean Industrialization. New York: Columbia University Press, 1991.

Young, Alwyn. “The Tyranny of Numbers: Confronting the Statistical Realities of the East Asian Growth Experience.” Quarterly Journal of Economics 110, no. 3 (1995): 641-80.

Citation: Cha, Myung. “The Economic History of Korea”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/the-economic-history-of-korea/

Japanese Industrialization and Economic Growth

Carl Mosk, University of Victoria

Japan achieved sustained growth in per capita income between the 1880s and 1970 through industrialization. Moving along an income growth trajectory through expansion of manufacturing is hardly unique. Indeed Western Europe, Canada, Australia and the United States all attained high levels of income per capita by shifting from agrarian-based production to manufacturing and technologically sophisticated service sector activity.

Still, there are four distinctive features of Japan’s development through industrialization that merit discussion:

The proto-industrial base

Japan’s agricultural productivity was high enough to sustain substantial craft (proto-industrial) production in both rural and urban areas of the country prior to industrialization.

Investment-led growth

Domestic investment in industry and infrastructure was the driving force behind growth in Japanese output. Both private and public sectors invested in infrastructure, national and local governments serving as coordinating agents for infrastructure build-up.

  • Investment in manufacturing capacity was largely left to the private sector.
  • Rising domestic savings made increasing capital accumulation possible.
  • Japanese growth was investment-led, not export-led.

Total factor productivity growth — achieving more output per unit of input — was rapid.

On the supply side, total factor productivity growth was extremely important. Scale economies — the reduction in per unit costs due to increased levels of output — contributed to total factor productivity growth. Scale economies existed due to geographic concentration, to growth of the national economy, and to growth in the output of individual companies. In addition, companies moved down the “learning curve,” reducing unit costs as their cumulative output rose and demand for their product soared.

The social capacity for importing and adapting foreign technology improved and this contributed to total factor productivity growth:

  • At the household level, investing in education of children improved social capability.
  • At the firm level, creating internalized labor markets that bound firms to workers and workers to firms, thereby giving workers a strong incentive to flexibly adapt to new technology, improved social capability.
  • At the government level, industrial policy that reduced the cost to private firms of securing foreign technology enhanced social capacity.

Shifting out of low-productivity agriculture into high productivity manufacturing, mining, and construction contributed to total factor productivity growth.

Dualism

Sharply segmented labor and capital markets emerged in Japan after the 1910s. The capital-intensive sector, enjoying high ratios of capital to labor, paid relatively high wages; the labor-intensive sector paid relatively low wages.

Dualism contributed to income inequality and therefore to domestic social unrest. After 1945 a series of public policy reforms addressed inequality and erased much of the social bitterness around dualism that ravaged Japan prior to World War II.

The remainder of this article will expand on a number of the themes mentioned above. The appendix reviews quantitative evidence concerning these points. The conclusion of the article lists references that provide a wealth of detailed evidence supporting the points above, which this article can only begin to explore.

The Legacy of Autarky and the Proto-Industrial Economy: Achievements of Tokugawa Japan (1600-1868)

Why Japan?

Given the relatively poor record of countries outside the European cultural area — few achieving the kind of “catch-up” growth Japan managed between 1880 and 1970 — the question naturally arises: why Japan? After all, when the United States forcibly “opened Japan” in the 1850s and Japan was compelled to cede extraterritorial rights to a number of Western nations, as China had been earlier in the 1840s, many Westerners and Japanese alike thought Japan’s prospects seemed dim indeed.

Tokugawa achievements: urbanization, road networks, rice cultivation, craft production

In answering this question, Mosk (2001), Minami (1994) and Ohkawa and Rosovsky (1973) emphasize the achievements of Tokugawa Japan (1600-1868) during a long period of “closed country” autarky between the mid-seventeenth century and the 1850s: a high level of urbanization; well developed road networks; the channeling of river water flow with embankments and the extensive elaboration of irrigation ditches that supported and encouraged the refinement of rice cultivation based upon improving seed varieties, fertilizers and planting methods especially in the Southwest with its relatively long growing season; the development of proto-industrial (craft) production by merchant houses in the major cities like Osaka and Edo (now called Tokyo) and its diffusion to rural areas after 1700; and the promotion of education and population control among both the military elite (the samurai) and the well-to-do peasantry in the eighteenth and early nineteenth centuries.

Tokugawa political economy: daimyo and shogun

These developments were inseparable from the political economy of Japan. The system of confederation government consolidated at the end of the sixteenth century placed certain powers in the hands of feudal warlords, daimyo, and certain powers in the hands of the shogun, the most powerful of the warlords. Each daimyo — and the shogun — was assigned a geographic region, a domain, and given taxation authority over the peasants residing in the villages of the domain. Intercourse with foreign powers was monopolized by the shogun, thereby preventing daimyo from cementing alliances with other countries in an effort to overthrow the central government. The samurai military retainers of the daimyo were forced to abandon rice farming and reside in the castle town headquarters of their daimyo overlord. In exchange, samurai received rice stipends from the rice taxes collected from the villages of their domain. By removing samurai from the countryside — by demilitarizing rural areas — conflicts over local water rights were largely made a thing of the past. As a result irrigation ditches were extended throughout the valleys, and riverbanks were shored up with stone embankments, facilitating transport and preventing flooding.

The sustained growth of proto-industrialization in urban Japan, and its widespread diffusion to villages after 1700, was also inseparable from productivity growth in paddy rice production and in the growing of industrial crops like tea, fruit, mulberry (which sustained the raising of silk cocoons), and cotton. Indeed, Smith (1988) has given pride of place to these “domestic sources” of Japan’s future industrial success.

Readiness to emulate the West

As a result of these domestic advances, Japan was well positioned to take up the Western challenge. It harnessed its infrastructure, its high level of literacy, and its proto-industrial distribution networks to the task of emulating Western organizational forms and Western techniques in energy production, first and foremost enlisting inorganic energy sources like coal and other fossil fuels to generate steam power. Having intensively developed an organic economy depending upon natural energy flows like wind, water and fire, the Japanese were well prepared to master inorganic production after the Black Ships of the Americans forced Japan to jettison its long-standing autarky.

From Balanced to Dualistic Growth, 1887-1938: Infrastructure and Manufacturing Expand

Fukoku Kyohei

After the Tokugawa government collapsed in 1868, a new Meiji government committed to the twin policies of fukoku kyohei (wealthy country/strong military) took up the challenge of renegotiating its treaties with the Western powers. It created infrastructure that facilitated industrialization. It built a modern navy and army that could keep the Western powers at bay and establish a protective buffer zone in North East Asia that eventually formed the basis for a burgeoning Japanese empire in Asia and the Pacific.

Central government reforms in education, finance and transportation

Jettisoning the confederation-style government of the Tokugawa era, the leaders of the new Meiji government fashioned a unitary state with powerful ministries consolidating authority in the capital, Tokyo. The freshly minted Ministry of Education promoted compulsory primary schooling for the masses and elite university education aimed at deepening engineering and scientific knowledge. The Ministry of Finance created the Bank of Japan in 1882, laying the foundations for a private banking system backed up by a lender of last resort. The government began building a steam railroad trunk line girding the four major islands, encouraging private companies to participate in the project. In particular, the national government committed itself to constructing a Tokaido line connecting the Tokyo/Yokohama region to the Osaka/Kobe conurbation along the Pacific coastline of the main island of Honshu, and to creating deepwater harbors at Yokohama and Kobe that could accommodate deep-hulled steamships.

Not surprisingly, the merchants in Osaka, the merchant capital of Tokugawa Japan, already well versed in proto-industrial production, turned to harnessing steam and coal, investing heavily in integrated spinning and weaving steam-driven textile mills during the 1880s.

Diffusion of best-practice agriculture

At the same time, the abolition of the three hundred or so feudal fiefs that were the backbone of confederation-style Tokugawa rule, and their consolidation into politically weak prefectures under a strong national government that virtually monopolized taxation authority, gave a strong push to the diffusion of best-practice agricultural technique. The nationwide diffusion of seed varieties developed in the Southwest fiefs of Tokugawa Japan spearheaded a substantial improvement in agricultural productivity, especially in the Northeast. The result was simultaneous expansion of agriculture using traditional Japanese technology and of manufacturing using imported Western technology.

Balanced growth

Growth at the close of the nineteenth century was balanced in the sense that sectors using traditional and modern technology grew at roughly equal rates, and labor — especially young girls recruited out of farm households to work in the steam-powered textile mills — flowed back and forth between rural and urban Japan at wages that were roughly equal in industrial and agricultural pursuits.

Geographic economies of scale in the Tokaido belt

Concentration of industrial production, first in Osaka and subsequently throughout the Tokaido belt, fostered powerful geographic scale economies (the ability to reduce per unit costs as output levels increase), reducing the costs of securing energy, raw materials and access to global markets for enterprises located in the great harbor metropolises stretching from the massive Osaka/Kobe complex northward to the teeming Tokyo/Yokohama conurbation. Between 1904 and 1911, electrification — mainly due to the proliferation of intercity electric railroads — created economies of scale in the nascent industrial belt facing outward onto the Pacific. The consolidation of two huge hydroelectric power grids during the 1920s — one servicing Tokyo/Yokohama, the other Osaka and Kobe — further solidified the comparative advantage of the Tokaido industrial belt in factory production. Finally, the widening and paving during the 1920s of roads that could handle buses and trucks was also pioneered by the great metropolises of the Tokaido, further bolstering their relative advantage in per capita infrastructure.

Organizational economies of scale — zaibatsu

In addition to geographic scale economies, organizational scale economies became increasingly important in the late nineteenth century. The formation of the zaibatsu (“financial cliques”), which gradually evolved into diversified industrial combines tied together through central holding companies, is a case in point. By the 1910s these had become highly diversified combines, binding together enterprises in banking and insurance, trading companies, mining concerns, textiles, iron and steel plants, and machinery manufactures. By channeling profits from older industries into new lines of activity like electrical machinery manufacturing, the zaibatsu form of organization generated scale economies in finance, trade and manufacturing, drastically reducing information-gathering and transactions costs. And by attracting relatively scarce managerial and entrepreneurial talent, the zaibatsu format economized on human resources.

Electrification

The push into electrical machinery production during the 1920s had a revolutionary impact on manufacturing. Effective exploitation of steam power required the use of large central steam engines simultaneously driving a large number of machines — power looms and mules in a spinning/weaving plant, for instance — throughout a factory. Small enterprises did not mechanize in the steam era. But with electrification the “unit drive” system of mechanization spread: each machine could be powered independently of the others, and mechanization spread rapidly even to the smallest factories.

Emergence of the dualistic economy

With the drive into heavy industries — chemicals, iron and steel, machinery — the demand for skilled labor that would flexibly respond to rapid changes in technique soared. Large firms in these industries began offering premium wages and guarantees of employment in good times and bad as a way of motivating and holding onto valuable workers. A dualistic economy emerged during the 1910s. Small firms, light industry and agriculture offered relatively low wages. Large enterprises in the heavy industries offered much more favorable remuneration, extending paternalistic benefits like company housing and company welfare programs to their “internal labor markets.” As a result a widening gulf opened up between the great metropolitan centers of the Tokaido and rural Japan. Income per head was far higher in the great industrial centers than in the hinterland.

Clashing urban/rural and landlord/tenant interests

The economic strains of emergent dualism were amplified by the slowing of technological progress in the agricultural sector, which had exhausted the gains from the regional diffusion of best-practice Tokugawa rice cultivation from the Southwest to the Northeast. Landlords — around 45% of the cultivable rice paddy land in Japan was held in some form of tenancy at the beginning of the twentieth century — who had played a crucial role in promoting the diffusion of traditional best-practice techniques now lost interest in rural affairs and turned their attention to industrial activities. Tenants also found their interests disregarded by the national authorities in Tokyo, who were increasingly focused on supplying cheap foodstuffs to the burgeoning industrial belt by promoting agricultural production within the empire that Japan was assembling through military victories. Japan secured Taiwan from China in 1895, and formally brought Korea under its imperial rule in 1910 on the heels of its successful war against Russia in 1904-05. Tenant unions reacted to this callous disrespect of their needs through violence. Landlord/tenant disputes broke out in the early 1920s and continued to plague Japan politically throughout the 1930s, calls for land reform and bureaucratic proposals for reform being rejected by a Diet (Japan’s legislature) politically dominated by landlords.

Japan’s military expansion

Japan’s thrust toward imperial expansion was inflamed by the growing instability of the geopolitical and international trade regime of the later 1920s and early 1930s. The relative decline of the United Kingdom as an economic power doomed a gold standard regime tied to the British pound. The United States was emerging as a potential successor to the United Kingdom as the backer of a gold standard regime, but its long history of high tariffs and isolationism deterred it from taking over leadership in promoting global trade openness. Germany and the Soviet Union were increasingly becoming industrial and military giants on the Eurasian land mass, committed to ideologies hostile to the liberal democracy championed by the United Kingdom and the United States. It was against this international backdrop that Japan began aggressively staking out its claim to being the dominant military power in East Asia and the Pacific, thereby bringing it into conflict with the United States and the United Kingdom in the Asian and Pacific theaters after the world slipped into global warfare in 1939.

Reform and Reconstruction in a New International Economic Order, Japan after World War II

Postwar occupation: economic and institutional restructuring

Surrendering to the United States and its allies in 1945, Japan saw its economy and infrastructure revamped under the S.C.A.P. (Supreme Commander for the Allied Powers) Occupation lasting through 1951. As Nakamura (1995) points out, a variety of Occupation-sponsored reforms transformed the institutional environment conditioning economic performance in Japan. The major zaibatsu were liquidated by the Holding Company Liquidation Commission set up under the Occupation (they reemerged as keiretsu corporate groups, mainly tied together through cross-shareholding of stock, in the aftermath of the Occupation); land reform wiped out landlordism and gave a strong push to agricultural productivity through mechanization of rice cultivation; and collective bargaining, largely illegal under the Peace Preservation Act that had been used to suppress union organizing during the interwar period, was given the imprimatur of constitutional legality. Finally, education was opened up, partly by making middle school compulsory, partly through the creation of national universities in each of Japan’s forty-six prefectures.

Improvement in the social capability for economic growth

In short, from a domestic point of view, the social capability for importing and adapting foreign technology was improved by the reforms in education and the fillip to competition given by the dissolution of the zaibatsu. Resolving tension between rural and urban Japan through land reform and the establishment of a rice price support program that guaranteed farmers incomes comparable to those of blue collar industrial workers also contributed to the social capacity to absorb foreign technology, by suppressing the political divisions between metropolitan and hinterland Japan that had plagued the nation during the interwar years.

Japan and the postwar international order

The revamped international economic order contributed to the social capability of importing and adapting foreign technology. The instability of the 1920s and 1930s was replaced with a relatively predictable bipolar world in which the United States and the Soviet Union opposed each other in both geopolitical and ideological arenas. The United States became the architect of a multilateral framework designed to encourage trade through its sponsorship of the United Nations, the World Bank, the International Monetary Fund and the General Agreement on Tariffs and Trade (the predecessor to the World Trade Organization). Under the logic of building military alliances to contain Eurasian Communism, the United States brought Japan under its “nuclear umbrella” with a bilateral security treaty. American companies were encouraged to license technology to Japanese companies in the new international environment. Japan redirected its trade away from the areas that had been incorporated into the Japanese Empire before 1945, and towards the huge and expanding American market.

Miracle Growth: Soaring Domestic Investment and Export Growth, 1953-1970

Its infrastructure revitalized through the Occupation-period reforms, its capacity to import and export enhanced by the new international economic order, and its access to American technology bolstered through its security pact with the United States, Japan experienced dramatic “Miracle Growth” between 1953 and the early 1970s, whose sources have been cogently analyzed by Denison and Chung (1976). Especially striking in the Miracle Growth period was the remarkable increase in the rate of domestic fixed capital formation, the rise in the investment proportion being matched by a rising savings rate whose secular increase — especially that of private household savings — has been well documented and analyzed by Horioka (1991). While Japan continued to close the gap in income per capita between itself and the United States after the early 1970s, most scholars believe that large Japanese manufacturing enterprises had by and large become internationally competitive by the early 1970s. In this sense it can be said that Japan had completed its nine-decade-long convergence to international competitiveness through industrialization by the early 1970s.

MITI

There is little doubt that the social capacity to import and adapt foreign technology was vastly improved in the aftermath of the Pacific War. Creating social consensus with land reform and agricultural subsidies reduced political divisiveness, and extending compulsory education and breaking up the zaibatsu had a positive impact. Fashioning the Ministry of International Trade and Industry (M.I.T.I.), which took responsibility for overseeing industrial policy, is also viewed as facilitating Japan’s social capability. There is no doubt that M.I.T.I. drove down the cost of securing foreign technology. By intervening between Japanese firms and foreign companies, it acted as a single buyer of technology, playing off competing American and European enterprises in order to reduce the royalties Japanese concerns had to pay on technology licenses. By keeping domestic patent periods short, M.I.T.I. encouraged rapid diffusion of technology. And in some cases — the experience of International Business Machines (I.B.M.), which enjoyed a virtual monopoly in global mainframe computer markets during the 1950s and early 1960s, is the classic case — M.I.T.I. made it a condition of entry into the Japanese market (through the creation of a subsidiary, Japan I.B.M., in the case of I.B.M.) that foreign companies share many of their technological secrets with potential Japanese competitors.

How important industrial policy was for Miracle Growth remains controversial, however. The view of Johnson (1982), who hails industrial policy as a pillar of the Japanese developmental state (a government promoting economic growth through state policies), has been criticized and revised by subsequent scholars. The book by Uriu (1996) is a case in point.

Internal labor markets, just-in-time inventory and quality control circles

Furthering the internalization of labor markets — the premium wages and long-term employment guarantees largely restricted to white collar workers were extended to blue collar workers with the legalization of unions and collective bargaining after 1945 — also raised the social capability of adapting foreign technology. Internalizing labor created a highly flexible labor force in post-1950 Japan. As a result, Japanese workers embraced many of the key ideas of Just-in-Time inventory control and Quality Control circles in assembly industries, learning how to do rapid machine setups as part and parcel of an effort to produce components “just-in-time” and without defect. Ironically, the concepts of just-in-time and quality control were originally developed in the United States, just-in-time methods being pioneered by supermarkets and quality control by efficiency experts like W. Edwards Deming. Yet it was in Japan that these concepts were relentlessly pursued to revolutionize assembly line industries during the 1950s and 1960s.

Ultimate causes of the Japanese economic “miracle”

Miracle Growth was the completion of a protracted historical process involving enhancing human capital, massive accumulation of physical capital including infrastructure and private manufacturing capacity, the importation and adaptation of foreign technology, and the creation of scale economies, which took decades and decades to realize. Dubbed a miracle, it is best seen as the reaping of a bountiful harvest whose seeds were painstakingly planted in the six decades between 1880 and 1938. In the course of the nine decades between the 1880s and 1970, Japan amassed and lost a sprawling empire, reorienting its trade and geopolitical stance through the twists and turns of history. While the ultimate sources of growth can be ferreted out through some form of statistical accounting, the specific way these sources were marshaled in practice is inseparable from the history of Japan itself and of the global environment within which it has realized its industrial destiny.

Appendix: Sources of Growth Accounting and Quantitative Aspects of Japan’s Modern Economic Development

One of the attractions of studying Japan’s post-1880 economic development is the abundance of quantitative data documenting Japan’s growth. Estimates of Japanese income and output by sector, capital stock and labor force extend back to the 1880s, a period when Japanese income per capita was low. Consequently statistical probing of Japan’s long-run growth from relative poverty to abundance is possible.

The remainder of this appendix is devoted to introducing the reader to the vast literature on quantitative analysis of Japan’s economic development from the 1880s until 1970, a nine decade period during which Japanese income per capita converged towards income per capita levels in Western Europe. As the reader will see, this discussion confirms the importance of factors discussed at the outset of this article.

Our initial touchstone is the excellent “sources of growth” accounting analysis carried out by Denison and Chung (1976) on Japan’s growth between 1953 and 1971. A standard approach used to approximate the sources of income growth attributes growth in national income to growth of the inputs — the factors of production, capital and labor — and to growth in output per unit of the two inputs combined (total factor productivity):

G(Y) = { a G(K) + [1-a] G(L) } + G(A)

where G(Y) is the (annual) growth rate of national output, G(K) is the growth rate of capital services, G(L) is the growth rate of labor services, a is capital’s share in national income (the share of income accruing to owners of capital), and G(A) is the growth rate of total factor productivity.

Using a variant of this type of decomposition that takes into account improvements in the quality of capital and labor, estimates of scale economies and adjustments for structural change (shifting labor out of agriculture helps explain why total factor productivity grows), Denison and Chung (1976) generate a useful set of estimates for Japan’s Miracle Growth era.

Operating with this “sources of growth” approach and proceeding under a variety of plausible assumptions, Denison and Chung (1976) estimate that of Japan’s average annual real national income growth of 8.77% over 1953-71, input growth accounted for 3.95 percentage points (45% of total growth) and growth in output per unit of input contributed 4.82 percentage points (55% of total growth). To be sure, the precise assumptions and techniques they use can be criticized, and the precise numerical results they arrive at can be argued over. Still, their general point is defensible: Japan’s growth was the result of improvements in the quality of factor inputs — health and education for workers, for instance — and of improvements in the way these inputs are utilized in production, due to technological and organizational change, reallocation of resources from agriculture to non-agriculture, and scale economies.
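
A quick check of the arithmetic of this decomposition, using the figures just quoted:

G(Y) = { a G(K) + [1-a] G(L) } + G(A) = 3.95 + 4.82 = 8.77 (percent per year)
input share = 3.95 / 8.77 ≈ 0.45; total factor productivity share = 4.82 / 8.77 ≈ 0.55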

With this in mind consider Table 1.

Table 1: Industrialization and Economic Growth in Japan, 1880-1970:
Selected Quantitative Characteristics

Panel A: Income and Structure of National Output

Real income per capita [a]; shares of national output (net domestic product); and relative labor productivity (ratio of output per worker in agriculture to output per worker in the N sector) [b]

Years | Real income per capita | Relative to U.S. level | Year | Agriculture | Manufacturing & mining (Ma) | Manufacturing, construction & facilitating sectors (N) [b] | Relative labor productivity (A/N)
1881-90 | 893 | 26.7% | 1887 | 42.5% | 13.6% | 20.0% | 68.3
1891-1900 | 1,049 | 28.5 | 1904 | 37.8 | 17.4 | 25.8 | 44.3
1900-10 | 1,195 | 25.3 | 1911 | 35.5 | 20.3 | 31.1 | 37.6
1911-20 | 1,479 | 27.9 | 1919 | 29.9 | 26.2 | 38.3 | 32.5
1921-30 | 1,812 | 29.1 | 1930 | 20.0 | 25.8 | 43.3 | 27.4
1930-38 | 2,197 | 37.7 | 1938 | 18.5 | 35.3 | 51.7 | 20.8
1951-60 | 2,842 | 26.2 | 1953 | 22.0 | 26.3 | 39.7 | 22.6
1961-70 | 6,434 | 47.3 | 1969 | 8.7 | 30.5 | 45.9 | 19.1

Panel B: Domestic and External Sources of Aggregate Supply and Demand Growth: Manufacturing and Mining (Ma), Gross Domestic Fixed Capital Formation (GDFCF), and Trade (TR)

Percentage contribution to growth due to manufacturing and mining (Ma) and gross domestic fixed capital formation (GDFCF); trade openness and trade growth [c]

Years | Ma contribution to output growth | GDFCF contribution to effective demand growth | Years | Trade openness | Growth in trade
1888-1900 | 19.3% | 17.9% | 1885-89 | 6.9% | 11.4%
1900-10 | 29.2 | 30.5 | 1890-1913 | 16.4 | 8.0
1910-20 | 26.5 | 27.9 | 1919-29 | 32.4 | 4.6
1920-30 | 42.4 | 7.5 | 1930-38 | 43.3 | 8.1
1930-38 | 50.5 | 45.3 | 1954-59 | 19.3 | 12.0
1955-60 | 28.1 | 35.0 | 1960-69 | 18.5 | 10.3
1960-70 | 33.5 | 38.5 | | |

Panel C: Infrastructure and Human Development

Human Development Index (HDI) [d]; electricity generation and NHK (national broadcasting) subscribers per 100 persons [e]

Year | Educational attainment index | Infant mortality rate (IMR) | Overall HDI | Year | Electricity | NHK radio subscribers
1900 | 0.57 | 155 | 0.57 | 1914 | 0.28 | n.a.
1910 | 0.69 | 161 | 0.61 | 1920 | 0.68 | n.a.
1920 | 0.71 | 166 | 0.64 | 1930 | 2.46 | 1.2
1930 | 0.73 | 124 | 0.65 | 1938 | 4.51 | 7.8
1950 | 0.81 | 63 | 0.69 | 1950 | 5.54 | 11.0
1960 | 0.87 | 34 | 0.75 | 1960 | 12.28 | 12.6
1970 | 0.95 | 14 | 0.83 | 1970 | 34.46 | 21.9

Notes: [a] Maddison (2000) provides estimates of real income that take into account the purchasing power of national currencies.

[b] Ohkawa (1979) gives estimates for the “N” sector that is defined as manufacturing and mining (Ma) plus construction plus facilitating industry (transport, communications and utilities). It should be noted that the concept of an “N” sector is not standard in the field of economics.

[c] The estimates of trade are obtained by adding merchandise imports to merchandise exports. Trade openness is estimated by taking the ratio of total (merchandise) trade to national output, the latter defined as Gross Domestic Product (G.D.P.). The trade figures include trade with Japan’s empire (Korea, Taiwan, Manchuria, etc.); the income figures for Japan exclude income generated in the empire.

[d] The Human Development Index is a composite variable formed by adding together indices for educational attainment, for health (using life expectancy that is inversely related to the level of the infant mortality rate, the IMR), and for real per capita income. For a detailed discussion of this index see United Nations Development Programme (2000).

[e] Electrical generation is measured in million kilowatts generated and supplied. For 1970, the figures on NHK subscribers are for television subscribers. The symbol n.a. = not available.

Sources: The figures in this table are taken from various pages and tables in Japan Statistical Association (1987), Maddison (2000), Minami (1994), and Ohkawa (1979).
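
Note [d] describes the HDI as a composite of component indices. As a sketch of how such an index is constructed (following the UNDP's classic formulation, which is assumed here rather than stated in the table's sources), each component is first scaled to lie between 0 and 1, and the scaled components are then averaged:

component index = (actual value - minimum value) / (maximum value - minimum value)
HDI = (education index + health index + income index) / 3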

A number of points flow from this table that bear out the lessons of the Denison and Chung (1976) decomposition. One cluster of points bears upon the timing of Japan’s income per capita growth and the relationship of manufacturing expansion to income growth. Another highlights improvements in the quality of the labor input. Yet another points to the overriding importance of domestic investment in manufacturing and the lesser significance of trade demand. A fourth group suggests that infrastructure has been important to economic growth and industrial expansion in Japan, as exemplified by the figures on electricity generating capacity and the mass diffusion of communications in the form of radio and television broadcasting.

Several parts of Table 1 point to industrialization, defined as an increase in the proportion of output (and labor force) attributable to manufacturing and mining, as the driving force in explaining Japan’s income per capita growth. Notable in Panels A and B of the table is that the gap between Japanese and American income per capita closed most decisively during the 1910s, the 1930s, and the 1960s, precisely the periods when manufacturing expansion was the most vigorous.

Equally noteworthy about the spurts of the 1910s, 1930s and 1960s is the overriding importance of gross domestic fixed capital formation, that is, investment, for growth in demand. By contrast, trade seems much less important to growth in demand during these critical decades, a point emphasized by both Minami (1994) and Ohkawa and Rosovsky (1973). The notion that Japanese growth was “export led” during the nine decades between 1880 and 1970, when Japan caught up technologically with the leading Western nations, is not defensible. Rather, domestic capital investment seems to be the driving force behind aggregate demand expansion. The periods of especially intense capital formation were also the periods when manufacturing production soared. Capital formation in manufacturing, or in infrastructure supporting manufacturing expansion, was the main agent pushing long-run income per capita growth.

Why? As Ohkawa and Rosovsky (1973) argue, spurts in manufacturing capital formation were associated with the import and adaptation of foreign technology, especially from the United States. These investment spurts were also associated with shifts of the labor force out of agriculture and into manufacturing, construction and the facilitating sectors, where labor productivity was far higher than in farming centered on labor-intensive rice cultivation. The logic of productivity gain due to more efficient allocation of labor resources is apparent from the right-hand column of Panel A in Table 1.

Finally, Panel C of Table 1 suggests that infrastructure investment that facilitated health and educational attainment (combined public and private expenditure on sanitation, schools and research laboratories), and public/private investment in physical infrastructure including dams and hydroelectric power grids helped fuel the expansion of manufacturing by improving human capital and by reducing the costs of transportation, communications and energy supply faced by private factories. Mosk (2001) argues that investments in human-capital-enhancing (medicine, public health and education), financial (banking) and physical infrastructure (harbors, roads, power grids, railroads and communications) laid the groundwork for industrial expansions. Indeed, the “social capability for importing and adapting foreign technology” emphasized by Ohkawa and Rosovsky (1973) can be largely explained by an infrastructure-driven growth hypothesis like that given by Mosk (2001).

In sum, Denison and Chung (1976) argue that a combination of input factor improvement and growth in output per combined factor inputs accounts for Japan’s most rapid spurt of economic growth. Table 1 suggests that labor quality improved because health was enhanced and educational attainment increased; that investment in manufacturing was important not only because it increased the capital stock itself but also because it reduced dependence on agriculture and went hand in glove with improvements in knowledge; and that the social capacity to absorb and adapt Western technology that fueled improvements in knowledge was associated with infrastructure investment.
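
In stylized growth-accounting terms – a textbook rendering of the approach, not Denison and Chung’s exact specification – the decomposition reads

\[
\frac{\dot{Y}}{Y} = s_K \frac{\dot{K}}{K} + s_L \frac{\dot{L}}{L} + \frac{\dot{A}}{A},
\]

where \(Y\) is output, \(K\) and \(L\) are the capital and quality-adjusted labor inputs, \(s_K\) and \(s_L\) are their income shares, and \(\dot{A}/A\) is the growth of output per unit of combined factor input (total factor productivity). The “input factor improvement” discussed above corresponds to the first two terms; “growth in output per combined factor inputs” is the residual third term.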

References

Denison, Edward and William Chung. “Economic Growth and Its Sources.” In Asia’s New Giant: How the Japanese Economy Works, edited by Hugh Patrick and Henry Rosovsky, 63-151. Washington, DC: Brookings Institution, 1976.

Horioka, Charles Y. “Future Trends in Japan’s Savings Rate and the Implications Thereof for Japan’s External Imbalance.” Japan and the World Economy 3 (1991): 307-330.

Japan Statistical Association. Historical Statistics of Japan [Five Volumes]. Tokyo: Japan Statistical Association, 1987.

Johnson, Chalmers. MITI and the Japanese Miracle: The Growth of Industrial Policy, 1925-1975. Stanford: Stanford University Press, 1982.

Maddison, Angus. Monitoring the World Economy, 1820-1992. Paris: Organization for Economic Co-operation and Development, 2000.

Minami, Ryoshin. Economic Development of Japan: A Quantitative Study. [Second edition]. Houndmills, Basingstoke, Hampshire: Macmillan Press, 1994.

Mitchell, Brian. International Historical Statistics: Africa and Asia. New York: New York University Press, 1982.

Mosk, Carl. Japanese Industrial History: Technology, Urbanization, and Economic Growth. Armonk, New York: M.E. Sharpe, 2001.

Nakamura, Takafusa. The Postwar Japanese Economy: Its Development and Structure, 1937-1994. Tokyo: University of Tokyo Press, 1995.

Ohkawa, Kazushi. “Production Structure.” In Patterns of Japanese Economic Development: A Quantitative Appraisal, edited by Kazushi Ohkawa and Miyohei Shinohara with Larry Meissner, 34-58. New Haven: Yale University Press, 1979.

Ohkawa, Kazushi and Henry Rosovsky. Japanese Economic Growth: Trend Acceleration in the Twentieth Century. Stanford, CA: Stanford University Press, 1973.

Smith, Thomas. Native Sources of Japanese Industrialization, 1750-1920. Berkeley: University of California Press, 1988.

Uriu, Robert. Troubled Industries: Confronting Economic Challenge in Japan. Ithaca: Cornell University Press, 1996.

United Nations Development Programme. Human Development Report, 2000. New York: Oxford University Press, 2000.

Citation: Mosk, Carl. “Japan, Industrialization and Economic Growth”. EH.Net Encyclopedia, edited by Robert Whaples. January 18, 2004. URL http://eh.net/encyclopedia/japanese-industrialization-and-economic-growth/

A Brief Economic History of Modern Israel

Nadav Halevi, Hebrew University

The Pre-state Background

The history of modern Israel begins in the 1880s, when the first Zionist immigrants came to Palestine, then under Ottoman rule, to join the small existing Jewish community, establishing agricultural settlements and some industry, restoring Hebrew as the spoken national language, and creating new economic and social institutions. The ravages of World War I reduced the Jewish population by a third, to 56,000, about what it had been at the beginning of the century.

As a result of the war, Palestine came under the control of Great Britain, whose Balfour Declaration had called for a Jewish National Home in Palestine. Britain’s control was formalized in 1920, when it was given the Mandate for Palestine by the League of Nations. During the Mandatory period, which lasted until May 1948, the social, political and economic structure for the future state of Israel was developed. Though the government of Palestine had a single economic policy, the Jewish and Arab economies developed separately, with relatively little connection.

Two factors were instrumental in fostering rapid economic growth of the Jewish sector: immigration and capital inflows. The Jewish population increased mainly through immigration; by the end of 1947 it had reached 630,000, about 35 percent of the total population. Immigrants came in waves, particularly large in the mid 1920s and mid 1930s. They consisted of ideological Zionists and refugees, economic and political, from Central and Eastern Europe. Capital inflows included public funds, collected by Zionist institutions, but were for the most part private funds. National product grew rapidly during periods of large immigration, but both waves of mass immigration were followed by recessions, periods of adjustment and consolidation.

In the period from 1922 to 1947 real net domestic product (NDP) of the Jewish sector grew at an average rate of 13.2 percent, and in 1947 accounted for 54 percent of the NDP of the Jewish and Arab economies together. NDP per capita in the Jewish sector grew at a rate of 4.8 percent; by the end of the period it was 8.5 times larger than in 1922, and 2.5 times larger than in the Arab sector (Metzer, 1998). Though agricultural development – an ideological objective – was substantial, this sector never accounted for more than 15 percent of total net domestic product of the Jewish economy. Manufacturing grew slowly for most of the period, but very rapidly during World War II, when Palestine was cut off from foreign competition and was a major provider to the British armed forces in the Middle East. By the end of the period, manufacturing accounted for a quarter of NDP. Housing construction, though a smaller component of NDP, was the most volatile sector, and contributed to sharp business cycle movements. A salient feature of the Jewish economy during the Mandatory period, which carried over into later periods, was the dominant size of the services sector – more than half of total NDP. This included a relatively modern educational and health sector, efficient financial and business sectors, and semi-governmental Jewish institutions, which later were ready to take on governmental duties.

The Formative Years: 1948-1965

The state of Israel came into being, in mid May 1948, in the midst of a war with its Arab neighbors. The immediate economic problems were formidable: to finance and wage a war, to take in as many immigrants as possible (first the refugees kept in camps in Europe and on Cyprus), to provide basic commodities to the old and new population, and to create a government bureaucracy to cope with all these challenges. The creation of a government went relatively smoothly, as semi-governmental Jewish institutions which had developed during the Mandatory period now became government departments.

Cease-fire agreements were signed during 1949. By the end of that year a total of 340,000 immigrants had arrived, and by the end of 1951 an additional 345,000 (the latter including immigrants from Arab countries), thus doubling the Jewish population. Immediate needs were met by a strict austerity program and inflationary government finance, repressed by price controls and rationing of basic commodities. However, the problems of providing housing and employment for the new population were solved only gradually. A New Economic Policy was introduced in early 1952. It consisted of exchange rate devaluation, the gradual relaxation of price controls and rationing, and curbing of monetary expansion, primarily by budgetary restraint. Active immigration encouragement was curtailed, to await the absorption of the earlier mass immigration.

From 1950 until 1965, Israel achieved a high rate of growth: Real GNP (gross national product) grew by an average annual rate of over 11 percent, and per capita GNP by greater than 6 percent. What made this possible? Israel was fortunate in receiving large sums of capital inflows: U.S. aid in the forms of unilateral transfers and loans, German reparations and restitutions to individuals, sale of State of Israel Bonds abroad, and unilateral transfers to public institutions, mainly the Jewish Agency, which retained responsibility for immigration absorption and agricultural settlement. Thus, Israel had resources available for domestic use – for public and private consumption and investment – about 25 percent more than its own GNP. This made possible a massive investment program, mainly financed through a special government budget. Both the enormity of needs and the socialist philosophy of the main political party in the government coalitions led to extreme government intervention in the economy.

Governmental budgets and strong protectionist measures to foster import-substitution enabled the development of new industries, chief among them textiles, and subsidies were given to help the development of exports, in addition to the traditional exports of citrus products and cut diamonds.

During the four decades from the mid 1960s until the present, Israel’s economy developed and changed, as did economic policy. A major factor affecting these developments has been the Arab-Israeli conflict. Its influence is discussed first, and is followed by brief descriptions of economic growth and fluctuations, and evolution of economic policy.

The Arab-Israel Conflict

The most dramatic event of the 1960s was the Six Day War of 1967, at the end of which Israel controlled the West Bank (of the Jordan River) – the area of Palestine absorbed by Jordan since 1949 – and the Gaza Strip, controlled until then by Egypt.

As a consequence of the occupation of these territories Israel was responsible for the economic as well as the political life in the areas taken over. The Arab sections of Jerusalem were united with the Jewish section. Jewish settlements were established in parts of the occupied territories. As hostilities intensified, special investments in infrastructure were made to protect Jewish settlers. The allocation of resources to Jewish settlements in the occupied territories has been a political and economic issue ever since.

The economies of Israel and the occupied territories were partially integrated. Trade in goods and services developed, with restrictions placed on exports to Israel of products deemed too competitive, and Palestinian workers were employed in Israel particularly in construction and agriculture. At its peak, in 1996, Palestinian employment in Israel reached 115,000 to 120,000, about 40 percent of the Palestinian labor force, but never more than 6.5 percent of total Israeli employment. Thus, while employment in Israel was a major contributor to the economy of the Palestinians, its effects on the Israeli economy, except for the sectors of construction and agriculture, were not large.

The Palestinian economy developed rapidly – real per capita national income grew at an annual rate of close to 20 percent in 1969-1972 and 5 percent in 1973-1980 – but fluctuated widely thereafter, and actually decreased in times of hostilities. Palestinian per capita income equaled 10.2 percent of Israeli per capita income in 1968, 22.8 percent in 1986, and declined to 9.7 percent in 1998 (Kleiman, 2003).

As part of the peace process between Israel and the Palestinians initiated in the 1990s, an economic agreement was signed between the parties in 1994, which in effect transformed what had been essentially a one-sided customs agreement (which gave Israel full freedom to export to the Territories but put restrictions on Palestinian exports to Israel) into a more equal customs union: the uniform external trade policy was actually Israel’s, but the Palestinians were given limited sovereignty regarding imports of certain commodities.

Arab uprisings (intifadas), in the 1980s, and especially the more violent one beginning in 2000 and continuing into 2005, led to severe Israeli restrictions on interaction between the two economies, particularly employment of Palestinians in Israel, and even to military reoccupation of some areas given over earlier to Palestinian control. These measures set the Palestinian economy back many years, wiping out much of the gains in income which had been achieved since 1967 – per capita GNP in 2004 was $932, compared to about $1500 in 1999. Palestinian workers in Israel were replaced by foreign workers.

An important economic implication of the Arab-Israel conflict is that Israel must allocate a major part of its budget to defense. The size of the defense budget has varied, rising during wars and armed hostilities. The total defense burden (including expenses not in the budget) reached its maximum relative size during and after the Yom Kippur War of 1973, close to 30 percent of GNP in 1974-1978. In the 2000-2004 period, the defense budget alone reached about 22 to 25 percent of the government budget. Israel has been fortunate in receiving generous amounts of U.S. aid. Until 1972 most of this came in the form of grants and loans, primarily for purchases of U.S. agricultural surpluses. But since 1973 U.S. aid has been closely connected to Israel’s defense needs. During 1973-1982 annual loans and grants averaged $1.9 billion, and covered some 60 percent of total defense imports. But even in more tranquil periods, the defense burden, exclusive of U.S. aid, has been much larger than usual in industrial countries during peace time.

Growth and Economic Fluctuations

The high rates of growth of income and income per capita which characterized Israel until 1973 were not achieved thereafter. GDP growth fluctuated, generally between 2 and 5 percent, reaching as high as 7.5 percent in 2000, but falling below zero in the recession years from 2001 to mid 2003. By the end of the twentieth century income per capita reached about $20,000, similar to many of the more developed industrialized countries.

Economic fluctuations in Israel have usually been associated with waves of immigration: a large flow of immigrants which abruptly increases the population requires an adjustment period until it is absorbed productively, with the investments for its absorption in employment and housing stimulating economic activity. Immigration never again reached the relative size of the first years after statehood, but again gained importance with the loosening of restrictions on emigration from the Soviet Union. The total number of immigrants in 1972-1982 was 325,000, and after the collapse of the Soviet Union immigration totaled 1,050,000 in 1990-1999, mostly from the former Soviet Union. Unlike the earlier period, these immigrants were gradually absorbed in productive employment (though often not in the same activity as abroad) without resort to make-work projects. By the end of the century the population of Israel passed 6,300,000, with the Jewish population being 78 percent of the total. The immigrants from the former Soviet Union were equal to about one-fifth of the Jewish population, and were a significant and important addition of human capital to the labor force.

As the economy developed, the structure of output changed. Though the service sectors are still relatively large – trade and services contributing 46 percent of the business sector’s product – agriculture has declined in importance, and industry makes up over a quarter of the total. The structure of manufacturing has also changed: both in total production and in exports the share of traditional, low-tech industries has declined, with sophisticated, high-tech products, particularly electronics, achieving primary importance.

Fluctuations in output were marked by periods of inflation and periods of unemployment. After a change in exchange rate policy in the late 1970s (discussed below), an inflationary spiral was unleashed. Hyperinflation rates were reached in the early 1980s, about 400 percent per year by the time a drastic stabilization policy was imposed in 1985. Exchange rate stabilization, budgetary and monetary restraint, and wage and price freezes sharply reduced the rate of inflation to less than 20 percent, and then to about 16 percent in the late 1980s. Very drastic monetary policy, from the late 1990s, finally reduced the inflation to zero by 2005. However, this policy, combined with external factors such as the bursting of the high-tech bubble, recession abroad, and domestic insecurity resulting from the intifada, led to unemployment levels above 10 percent at the beginning of the new century. The economic improvements since the latter half of 2003 have, as yet (February 2005), not significantly reduced the level of unemployment.

Policy Changes

The Israeli economy was initially subject to extensive government controls. Only gradually was the economy converted into a fairly free (though still not completely so) market economy. This process began in the 1960s. In response to a realization by policy makers that government intervention in the economy was excessive, and to the challenge posed by the creation in Europe of a customs union (which gradually progressed into the present European Union), Israel embarked upon a very gradual process of economic liberalization. This appeared first in foreign trade: quantitative restrictions on imports were replaced by tariff protection, which was slowly reduced, and both import-substitution and exports were encouraged by more realistic exchange rates rather than by protection and subsidies. Several partial trade agreements with the European Economic Community (EEC), starting in 1964, culminated in a free trade area agreement (FTA) in industrial goods in 1975, and an FTA with the U.S. came into force in 1985.

By late 1977 a considerable degree of trade liberalization had taken place. In October of that year, Israel moved from a fixed exchange rate system to a floating rate system, and restrictions on capital movements were considerably liberalized. However, there followed a disastrous inflationary spiral which curbed the capital liberalization process. Capital flows were not completely liberalized until the beginning of the new century.

Throughout the 1980s and the 1990s there were additional liberalization measures: in monetary policy, in domestic capital markets, and in various instruments of governmental interference in economic activity. The role of government in the economy was considerably decreased. On the other hand, some governmental economic functions were increased: a national health insurance system was introduced, though private health providers continued to provide health services within the national system. Social welfare payments, such as unemployment benefits, child allowances, old age pensions and minimum income support, were expanded continuously, until they formed a major budgetary expenditure. These transfer payments compensated, to a large extent, for the continuous growth of income inequality, which had moved Israel from among the developed countries with the least income inequality to those with the most. By 2003, 15 percent of the government’s budget went to health services, 15 percent to education, and an additional 20 percent went to transfer payments through the National Insurance Agency.

Beginning in 2003, the Ministry of Finance embarked upon a major effort to decrease welfare payments, induce greater participation in the labor force, privatize enterprises still owned by government, and reduce both the relative size of the government deficit and the government sector itself. These activities are the result of an ideological acceptance by the present policy makers of the concept that a truly free market economy is needed to fit into and compete in the modern world of globalization.

An important economic institution is the Histadrut, a federation of labor unions. What made this institution unique was that, in addition to normal labor union functions, it encompassed agricultural and other cooperatives, major construction and industrial enterprises, and social welfare institutions, including the main health care provider. During the Mandatory period, and for many years thereafter, the Histadrut was an important factor in economic development and in influencing economic policy. During the 1990s, the Histadrut was divested of many of its non-union activities, and its influence in the economy has greatly declined. The major unions associated with it still have much say in wage and employment issues.

The Challenges Ahead

As it moves into the new century, the Israeli economy has proven prosperous, as it continuously introduces and applies economic innovation, and capable of dealing with economic fluctuations. However, it faces some serious challenges. Some of these are the same as those faced by most industrial economies: how to reconcile innovation – the switch from traditional activities that are no longer competitive to more sophisticated, skill-intensive products – with the dislocation of labor it involves and the income inequality it intensifies. Like other small economies, Israel has to see how it fits into the new global economy, marked by the two major markets of the EU and the U.S., and the emergence of China as a major economic factor.

Special issues relate to the relations of Israel with its Arab neighbors. First are the financial implications of continuous hostilities and military threats. Clearly, if peace can come to the region, resources can be transferred to more productive uses. Furthermore, foreign investment, so important for Israel’s future growth, is very responsive to political security. Other issues depend on the type of relations established: will there be the free movement of goods and workers between Israel and a Palestinian state? Will relatively free economic relations with other Arab countries lead to a greater integration of Israel in the immediate region, or, as is more likely, will Israel’s trade orientation continue to be directed mainly to the present major industrial countries? If the latter proves true, Israel will have to carefully maneuver between the two giants: the U.S. and the EU.

References and Recommended Reading

Ben-Bassat, Avi, editor. The Israeli Economy, 1985-1998: From Government Intervention to Market Economics. Cambridge, MA: MIT Press, 2002.

Ben-Porath, Yoram, editor. The Israeli Economy: Maturing through Crisis. Cambridge, MA: Harvard University Press, 1986.

Fischer, Stanley, Dani Rodrik and Elias Tuma, editors. The Economics of Middle East Peace. Cambridge, MA: MIT Press, 1993.

Halevi, Nadav and Ruth Klinov-Malul. The Economic Development of Israel. New York: Praeger, 1968.

Kleiman, Ephraim. “Palestinian Economic Viability and Vulnerability.” Paper presented at the UCLA Burkle Conference in Athens, August 2003. (Available at www.international.ucla.edu.)

Metz, Helen Chapin, editor. Israel: A Country Study. Washington: Library of Congress Country Studies, 1986.

Metzer, Jacob. The Divided Economy of Mandatory Palestine. Cambridge: Cambridge University Press, 1998.

Patinkin, Don. The Israel Economy: The First Decade. Jerusalem: Maurice Falk Institute for Economic Research in Israel, 1967.

Razin, Assaf and Efraim Sadka. The Economy of Modern Israel: Malaise and Promise. Chicago: University of Chicago Press, 1993.

World Bank. Developing the Occupied Territories: An Investment in Peace. Washington, D.C.: The World Bank, September 1993.

Citation: Halevi, Nadav. “A Brief Economic History of Modern Israel”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/a-brief-economic-history-of-modern-israel/

Islamic Economics: What It Is and How It Developed

M. Umer Chapra, Islamic Research and Training Institute

Islamic economics has been undergoing a revival over the last few decades. However, it is still in a preliminary stage of development. In contrast, conventional economics has become a well-developed and sophisticated discipline after going through a long and rigorous process of development over more than a century. Is a new discipline in economics needed? If so, what is Islamic economics, how does it differ from conventional economics, and what contributions has it made over the centuries? This article tries to answer these questions briefly.

It is universally recognized that resources are scarce compared with the claims on them. However, it is also simultaneously recognized by practically all civilizations that the well-being of all human beings needs to be ensured. Given the scarcity of resources, the well-being of all may remain an unrealized dream if the scarce resources are not utilized efficiently and equitably. For this purpose, every society needs to develop an effective strategy, which is consciously or unconsciously conditioned by its worldview. If the worldview is flawed, the strategy may not be able to help the society actualize the well-being of all. Prevailing worldviews may be classified for the sake of ease into two broad theoretical constructs: (1) secular and materialist, and (2) spiritual and humanitarian.

The Role of the Worldview

Secular and materialist worldviews attach maximum importance to the material aspect of human well-being and tend generally to ignore the importance of the spiritual aspect. They often argue that maximum material well-being can be best realized if individuals are given unhindered freedom to pursue their self-interest and to maximize their want satisfaction in keeping with their own tastes and preferences.[1] In their extreme form they do not recognize any role for Divine guidance in human life and place full trust in the ability of human beings to chalk out a proper strategy with the help of their reason. In such a worldview there is little role for values or government intervention in the efficient and equitable allocation and distribution of resources. When asked how social interest would be served when everyone has unlimited freedom to pursue his/her self-interest, the reply is that market forces will themselves ensure this, because competition will keep self-interest in check.

In contrast with this, religious worldviews give attention to both the material as well as the spiritual aspects of human well-being. They do not necessarily reject the role of reason in human development. They, however, recognize the limitations of reason and wish to complement it by revelation. Nor do they reject the need for individual freedom or the role that the serving of self-interest can play in human development. They emphasize, however, that both freedom and the pursuit of self-interest need to be toned down by moral values and good governance to ensure that everyone’s well-being is realized and that social harmony and family integrity are not hurt in the process of everyone serving his/her self-interest.

Material and Spiritual Needs

Even though none of the major worldviews prevailing around the world is totally materialist and hedonist, there are, nevertheless, significant differences among them in terms of the emphasis they place on material or spiritual goals and the role of moral values and government intervention in ordering human affairs. While material goals concentrate primarily on goods and services that contribute to physical comfort and well-being, spiritual goals include nearness to God, peace of mind, inner happiness, honesty, justice, mutual care and cooperation, family and social harmony, and the absence of crime and anomie. These may not be quantifiable, but are, nevertheless, crucial for realizing human well-being. Resources being limited, excessive emphasis on the material ingredients of well-being may lead to a neglect of spiritual ingredients. The greater the difference in emphasis, the greater may be the difference in the economic disciplines of these societies. Feyerabend (1993) frankly recognized this in the introduction to the Chinese edition of his thought-provoking book, Against Method, by stating that “First world science is only one science among many; by claiming to be more it ceases to be an instrument of research and turns into a (political) pressure group” (p.3, parentheses are in the original).

The Enlightenment Worldview and Conventional Economics

There is a great deal that is common between the worldviews of most major religions, particularly those of Judaism, Christianity and Islam. This is because, according to Islam, there is a continuity and similarity in the value systems of all Revealed religions to the extent to which the Message has not been lost or distorted over the ages. The Qur’an clearly states that: “Nothing has been said to you [Muhammad] that was not said to the Messengers before you” (Al-Qur’an, 41:43). If conventional economics had continued to develop in the image of the Judeo-Christian worldview, as it did before the Enlightenment Movement of the seventeenth and eighteenth centuries, there may not have been any significant difference between conventional and Islamic economics. However, after the Enlightenment Movement, all intellectual disciplines in Europe became influenced by its secular, value-neutral, materialist and social-Darwinist worldview, even though this did not succeed fully. Not all economists became materialist or social-Darwinist in their individual lives, and many of them continued to be attached to their religious worldviews. Koopmans (1969) has rightly observed that “scratch an economist and you will find a moralist underneath.” Therefore, while theoretically conventional economics adopted the secular and value-neutral orientation of the Enlightenment worldview and failed to recognize the role of value judgments and good governance in the efficient and equitable allocation and distribution of resources, in practice this did not take place fully. The pre-Enlightenment tradition never disappeared completely (see Baeck, 1994, p. 11).

There is no doubt that, in spite of its secular and materialist worldview, the market system led to a long period of prosperity in the Western market-oriented economies. However, this unprecedented prosperity did not lead to the elimination of poverty or the fulfillment of everyone’s needs in conformity with the Judeo-Christian value system even in the wealthiest countries. Inequalities of income and wealth have continued to persist, and there has also been a substantial degree of economic instability and unemployment, which have added to the miseries of the poor. This indicates that both efficiency and equity have remained elusive in spite of rapid development and a phenomenal rise in wealth.

Consequently there has been persistent criticism of economics by a number of well-meaning scholars, including Thomas Carlyle (Past and Present, 1843), John Ruskin (Unto this Last, 1862) and Charles Dickens (Hard Times, 1854-55) in England, and Henry George (Progress and Poverty, 1879) in America. They ridiculed the dominant doctrine of laissez-faire with its emphasis on self-interest. Thomas Carlyle called economics a “dismal science” and rejected the idea that free and uncontrolled private interests will work in harmony and further the public welfare (see Jay and Jay, 1986). Henry George condemned the resulting contrast between wealth and poverty and wrote: “So long as all the increased wealth which modern progress brings goes but to build great fortunes, to increase luxury and make sharper the contrast between the House of Have and the House of Want, progress is not real and cannot be permanent” (1955, p. 10).

In addition to failing to fulfill the basic needs of a large number of people and increasing inequalities of income and wealth, modern economic development has been associated with the disintegration of the family and a failure to bring peace of mind and inner happiness (Easterlin 2001, 1995 and 1974; Oswald, 1997; Blanchflower and Oswald, 2000; Diener and Oishi, 2000; and Kenny, 1999). Due to these problems and others, the laissez-faire approach lost ground, particularly after the Great Depression of the 1930s, as a result of the Keynesian revolution and the socialist onslaught. However, most observers have concluded that government intervention alone cannot by itself remove all socio-economic ills. It is also necessary to motivate individuals to do what is right and abstain from doing what is wrong. This is where the moral uplift of society can be helpful. Without it, more and more difficult and costly regulations are needed. Nobel-laureate Amartya Sen has, therefore, rightly argued that “the distancing of economics from ethics has impoverished welfare economics and also weakened the basis of a good deal of descriptive and predictive economics” and that economics “can be made more productive by paying greater and more explicit attention to ethical considerations that shaped human behaviour and judgment” (1987, pp. 78-79). Hausman and McPherson also conclude in their survey article, “Economics and Contemporary Moral Philosophy,” that “An economy that is engaged actively and self-critically with the moral aspects of its subject matter cannot help but be more interesting, more illuminating and, ultimately, more useful than the one that tries not to be” (1993, p. 723).

Islamic Economics – and How It Differs from Conventional Economics

While conventional economics is now in the process of returning to its pre-Enlightenment roots, Islamic economics never got entangled in a secular and materialist worldview. It is based on a religious worldview which strikes at the roots of secularism and value neutrality. To ensure the true well-being of all individuals, irrespective of their sex, age, race, religion or wealth, Islamic economics does not seek to abolish private property, as was done by communism, nor does it prevent individuals from serving their self-interest. It recognizes the role of the market in the efficient allocation of resources, but does not find competition to be sufficient to safeguard social interest. It tries to promote human brotherhood, socio-economic justice and the well-being of all through an integrated role of moral values, market mechanism, families, society, and ‘good governance.’ This is because of the great emphasis in Islam on human brotherhood and socio-economic justice.

The Integrated Role of the Market, Families, Society, and Government

The market is not the only institution where people interact in human society. They also interact in the family, the society and the government and their interaction in all these institutions is closely interrelated. There is no doubt that the serving of self-interest does help raise efficiency in the market place. However, if self-interest is overemphasized and there are no moral restraints on individual behavior, other institutions may not work effectively – families may disintegrate, the society may be uncaring, and the government may be corrupt, partisan, and self-centered. Mutual sacrifice is necessary for keeping the families glued together. Since the human being is the most important input of not only the market, but also of the family, the society and the government, and the family is the source of this input, nothing may work if families disintegrate and are unable to provide loving care to children. This is likely to happen if both the husband and wife try to serve just their own self-interest and are not attuned to the making of sacrifices that the proper care and upbringing of children demands. Lack of willingness to make such sacrifice can lead to a decline in the quality of the human input to all other institutions, including the market, the society and the government. It may also lead to a fall in fertility rates below the replacement level, making it difficult for society not only to sustain its development but also its social security system.

The Role of Moral Values

While conventional economics generally considers the behavior and tastes and preferences of individuals as given, Islamic economics does not do so. It places great emphasis on individual and social reform through moral uplift. This is the purpose for which all of God’s messengers, including Abraham, Moses, Jesus, and Muhammad, came to this world. Moral uplift aims at the change in human behavior, tastes and preferences and, thereby, complements the price mechanism in promoting general well-being. Before even entering the market place and being exposed to the price filter, consumers are expected to pass their claims through the moral filter. This will help filter out conspicuous consumption and all wasteful and unnecessary claims on resources. The price mechanism can then take over and reduce the claims on resources even further to lead to the market equilibrium. The two filters can together make it possible to have optimum economy in the use of resources, which is necessary to satisfy the material as well as spiritual needs of all human beings, to reduce the concentration of wealth in a few hands, and to raise savings, which are needed to promote greater investment and employment. Without complementing the market system with morally-based value judgments, we may end up perpetuating inequities in spite of our good intentions, through what Solo calls inaction, non-choice and drifting (Solo, 1981, p. 38).
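
The sequence described above – a moral filter applied before the price filter – can be made concrete with a deliberately stylized sketch. All item names, prices, and the budget below are invented for illustration; the point is only the ordering of the two filters.

```python
# Hypothetical claims on resources: each has a price and a judgment about
# whether it represents wasteful or conspicuous consumption.
claims = [
    {"item": "staple food",      "price": 40, "wasteful": False},
    {"item": "schooling",        "price": 30, "wasteful": False},
    {"item": "luxury showpiece", "price": 80, "wasteful": True},
    {"item": "home repair",      "price": 50, "wasteful": False},
]

# Stage 1: the moral filter removes wasteful claims before they reach the market.
admissible = [c for c in claims if not c["wasteful"]]

# Stage 2: the price filter rations the remaining claims against a budget,
# funding them in order until the budget is exhausted.
budget, funded = 100, []
for claim in admissible:
    if claim["price"] <= budget:
        funded.append(claim["item"])
        budget -= claim["price"]

print(funded)  # ['staple food', 'schooling'] with these illustrative numbers
```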

From the above discussion, one may easily notice the similarities and differences between the two disciplines. While the subject matter of both is the allocation and distribution of resources and both emphasize the fulfillment of material needs, there is an equal emphasis in Islamic economics on the fulfillment of spiritual needs. While both recognize the important role of the market mechanism in the allocation and distribution of resources, Islamic economics argues that the market may not by itself be able to fulfill even the material needs of all human beings. This is because it can promote excessive use of scarce resources by the rich at the expense of the poor if there is undue emphasis on the serving of self-interest. Sacrifice is involved in fulfilling our obligations towards others, and excessive emphasis on the serving of self-interest does not have the potential of motivating people to make the needed sacrifice. This, however, raises the crucial question of why a rational person would sacrifice his self-interest for the sake of others.

The Importance of the Hereafter

This is where the concepts of the innate goodness of human beings and of the Hereafter come in – concepts which conventional economics ignores but on which Islam and other major religions place a great deal of emphasis. Because of their innate goodness, human beings do not necessarily always try to serve their self-interest. They are also altruistic and are willing to make sacrifices for the well-being of others. In addition, the concept of the Hereafter does not confine self-interest to just this world. It rather extends it beyond this world to life after death. We may be able to serve our self-interest in this world by being selfish, dishonest, uncaring, and negligent of our obligations towards our families, other human beings, animals, and the environment. However, we cannot serve our self-interest in the Hereafter except by fulfilling all these obligations.

Thus, the serving of self-interest receives a long-run perspective in Islam and other religions by taking into account both this world and the next. This serves to provide a motivating mechanism for sacrifice for the well-being of others that conventional economics fails to provide. The innate goodness of human beings along with the long-run perspective given to self-interest has the potential of inducing a person to be not only efficient but also equitable and caring. Consequently, the three crucial concepts of conventional economics – rational economic man, positivism, and laissez-faire – were not able to gain intellectual blessing in their conventional economics sense from any of the outstanding scholars who represent the mainstream of Islamic thought.

Rational Economic Man

While there is hardly anyone opposed to the need for rationality in human behavior, there are differences of opinion in defining rationality (Sen, 1987, pp. 11-14). However, once rationality has been defined in terms of overall individual as well as social well-being, then rational behavior could only be that which helps us realize this goal. Conventional economics does not define rationality in this way. It equates rationality with the serving of self-interest through the maximization of wealth and want satisfaction. The drive of self-interest is considered to be the “moral equivalent of the force of gravity in nature” (Myers, 1983, p. 4). Within this framework society is conceptualized as a mere collection of individuals united through ties of self-interest.

The concept of ‘rational economic man’ in this social-Darwinist, utilitarian, and material sense of serving self-interest could not find a foothold in Islamic economics. ‘Rationality’ in Islamic economics is not confined to the serving of one’s self-interest in this world alone; it also extends to the Hereafter through faithful compliance with moral values that help rein in self-interest so as to promote social interest. Al-Mawardi (d. 1058) considered it necessary, like all other Muslim scholars, to rein in individual tastes and preferences through moral values (1955, pp. 118-20). Ibn Khaldun (d. 1406) emphasized that moral orientation helps remove mutual rivalry and envy, strengthens social solidarity, and creates an inclination towards righteousness (n.d., p. 158).

Positivism

Similarly, positivism in the conventional economics sense of being “entirely neutral between ends” (Robbins, 1935, p. 240) or “independent of any particular ethical position or normative judgment” (Friedman, 1953) did not find a place in Muslim intellectual thinking. Since all resources at the disposal of human beings are a trust from God, and human beings are accountable before Him, there is no other option but to use them in keeping with the terms of trust. These terms are defined by beliefs and moral values. Human brotherhood, one of the central objectives of Islam, would be a meaningless jargon if it were not reinforced by justice in the allocation and distribution of resources.

Pareto Optimum

Without justice, it would be difficult to realize even development. Muslim scholars have emphasized this throughout history. Development Economics has also started emphasizing its importance, more so in the last few decades.[2] Abu Yusuf (d. 798) argued that: “Rendering justice to those wronged and eradicating injustice, raises tax revenue, accelerates development of the country, and brings blessings in addition to reward in the Hereafter” (1933/34, p. 111; see also pp. 3-17). Al-Mawardi argued that comprehensive justice “inculcates mutual love and affection, obedience to the law, development of the country, expansion of wealth, growth of progeny, and security of the sovereign” (1955, p. 27). Ibn Taymiyyah (d. 1328) emphasized that “justice towards everything and everyone is an imperative for everyone, and injustice is prohibited to everything and everyone. Injustice is absolutely not permissible irrespective of whether it is to a Muslim or a non-Muslim or even to an unjust person” (1961-63, Vol. 18, p. 166).

Justice and the well-being of all may be difficult to realize without a sacrifice on the part of the well-to-do. The concept of Pareto optimum does not, therefore, fit into the paradigm of Islamic economics. This is because Pareto optimum does not recognize any solution as optimum if it requires a sacrifice on the part of a few (rich) for raising the well-being of the many (poor). Such a position is in clear conflict with moral values, the raison d’être of which is the well-being of all. Hence, this concept did not arise in Islamic economics. In fact, Islam makes it a religious obligation of Muslims to make a sacrifice for the poor and the needy, by paying Zakat at the rate of 2.5 percent of their net worth. This is in addition to the taxes that they pay to the governments as in other countries.
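
At the stated rate, the obligation is a simple proportion of net worth:

\[
\text{Zakat} = 0.025 \times \text{net worth},
\]

so that, for example, a net worth of 100,000 in any currency unit carries a Zakat obligation of 2,500. (This worked figure illustrates only the headline rate mentioned above, not the detailed rules of assessment.)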

The Role of the State

Moral values may not be effective if they are not observed by all. They need to be enforced. It is the duty of the state to restrain all socially harmful behavior[3] including injustice, fraud, cheating, transgression against other people’s person, honor and property, and the non-fulfillment of contracts and other obligations through proper upbringing, incentives and deterrents, appropriate regulations, and an effective and impartial judiciary. The Qur’an can only provide norms. It cannot by itself enforce them. The state has to ensure this. That is why the Prophet Muhammad said: “God restrains through the sovereign more than what He restrains through the Qur’an” (cited by al-Mawardi, 1955, p. 121). This emphasis on the role of the state has been reflected in the writings of all leading Muslim scholars throughout history.[4] Al-Mawardi emphasized that an effective government (Sultan Qahir) is indispensable for preventing injustice and wrongdoing (1960, p. 5). Say’s Law could not, therefore, become a meaningful proposition in Islamic economics.

How far is the state expected to go in the fulfillment of its role? What is it that the state is expected to do? This has been spelled out by a number of scholars in the literature on what has come to be termed “Mirrors for Princes.”[5] None of them visualized regimentation or the owning and operating of a substantial part of the economy by the state. Several classical Muslim scholars, including al-Dimashqi (d. after 1175) and Ibn Khaldun, clearly expressed their disapproval of the state becoming directly involved in the economy (Al-Dimashqi, 1977, pp. 12 and 61; Ibn Khaldun, pp. 281-83). According to Ibn Khaldun, the state should not acquire the character of a monolithic or despotic state resorting to a high degree of regimentation (ibid., p. 188). It should not feel that, because it has authority, it can do anything it likes (ibid., p. 306). It should be welfare-oriented, moderate in its spending, respect the property rights of the people, and avoid onerous taxation (ibid., p. 296). This implies that what these scholars visualized as the role of government is what has now been generally referred to as ‘good governance’.

Some of the Contributions Made by Islamic Economics

The above discussion should not lead one to the impression that the two disciplines are entirely different. One reason for this is that the subject matter of both disciplines is the same: the allocation and distribution of scarce resources. Another reason is that conventional economists have never been entirely value neutral. They have made value judgments in conformity with their beliefs. As indicated earlier, even the paradigm of conventional economics has been changing – the role of good governance has now become well recognized, and the injection of a moral dimension has also been emphasized by a number of prominent economists. Moreover, Islamic economists have benefited a great deal from the tools of analysis developed by neoclassical, Keynesian, social, humanistic and institutional economics as well as other social sciences, and will continue to do so in the future.

The Fallacy of the ‘Great Gap’ Theory

A number of economic concepts developed in Islamic economics long before they did in conventional economics. These cover a number of areas, including the interdisciplinary approach; property rights; division of labor and specialization; the importance of saving and investment for development; the role that both demand and supply play in the determination of prices and the factors that influence demand and supply; the roles of money, exchange, and the market mechanism; characteristics of money, counterfeiting, currency debasement, and Gresham’s law; the development of checks, letters of credit and banking; labor supply and population; the role of the state, justice, peace, and stability in development; and principles of taxation. It is not possible to provide comprehensive coverage of all the contributions Muslim scholars have made to economics. Only some of their contributions will be highlighted below, to remove the concept of the “Great Gap” of “over 500 years” that exists in the history of conventional economic thought as a result of the incorrect conclusion by Joseph Schumpeter, in History of Economic Analysis (1954), that the intervening period between the Greeks and the Scholastics was sterile and unproductive.[6] This concept has become well embedded in the conventional economics literature, as may be seen from the reference to it even by the Nobel-laureate Douglass North in his December 1993 Nobel lecture (1994, p. 365). Consequently, as Todd Lowry has rightly observed, “the character and sophistication of Arabian writings has been ignored” (see his ‘Foreword’ in Ghazanfar, 2003, p. xi).

The reality, however, is that the Muslim civilization, which benefited greatly from the Chinese, Indian, Sassanian and Byzantine civilizations, itself made rich contributions to intellectual activity, including socio-economic thought, during the ‘Great Gap’ period, and thereby played a part in kindling the flame of the European Enlightenment Movement. Even the Scholastics themselves were greatly influenced by the contributions made by Muslim scholars. The names of Ibn Sina (Avicenna, d. 1037), Ibn Rushd (Averroes, d. 1198) and Maimonides (d. 1204, a Jewish philosopher, scientist, and physician who flourished in Muslim Spain) appear on almost every page of the thirteenth-century summa (treatises written by scholastic philosophers) (Pifer, 1978, p. 356).

Multidisciplinary Approach for Development

One of the most important contributions of Islamic economics, in addition to the paradigm discussion above, was the adoption of a multidisciplinary dynamic approach. Muslim scholars did not focus their attention primarily on economic variables. They considered overall human well-being to be the end product of interaction over a long period of time between a number of economic, moral, social, political, demographic and historical factors, in such a way that none of them is able to make an optimum contribution without the support of the others. Justice occupied a pivotal place in this whole framework because of its crucial importance in the Islamic worldview. There was an acute realization that justice is indispensable for development and that, in the absence of justice, there will be decline and disintegration.

The contributions made by different scholars over the centuries seem to have reached their consummation in Ibn Khaldun’s Muqaddimah, which literally means ‘introduction,’ and constitutes the first volume of a seven-volume history, briefly called Kitab al-‘Ibar or the Book of Lessons [of History].[7] Ibn Khaldun lived at a time (1332-1406) when the Muslim civilization was in the process of decline. He wished to see a reversal of this tide, and, as a social scientist, he was well aware that such a reversal could not be envisaged without first drawing lessons (‘ibar) from history to determine the factors that had led the Muslim civilization to bloom out of humble beginnings and to decline thereafter. He was, therefore, not interested in knowing just what happened. He wanted to know the how and why of what happened. He wanted to introduce a cause and effect relationship into the discussion of historical phenomena. The Muqaddimah is the result of this desire. It tries to derive the principles that govern the rise and fall of a ruling dynasty, state (dawlah) or civilization (‘umran).

Since the centre of Ibn Khaldun’s analysis is the human being, he sees the rise and fall of dynasties or civilizations to be closely dependent on the well-being or misery of the people. The well-being of the people is in turn not dependent just on economic variables, as conventional economics has emphasized until recently, but also on the closely interrelated role of moral, psychological, social, economic, political, demographic and historical factors. One of these factors acts as the trigger mechanism. The others may, or may not, react in the same way. If the others do not react in the same direction, then the decay in one sector may not spread to the others and either the decaying sector may be reformed or the decline of the civilization may be much slower. If, however, the other sectors react in the same direction as the trigger mechanism, the decay will gain momentum through an interrelated chain reaction such that it becomes difficult over time to identify the cause from the effect. He, thus, seems to have had a clear vision of how all the different factors operate in an interrelated and dynamic manner over a long period to promote the development or decline of a society.

He did not, thus, adopt the neoclassical economist’s simplification of confining himself to primarily short-term static analysis of only markets by assuming unrealistically that all other factors remain constant. Even in the short-run, everything may be in a state of flux through a chain reaction to the various changes constantly taking place in human society, even though these may be so small as to be imperceptible. Therefore, even though economists may adopt the ceteris paribus assumption for ease of analysis, Ibn Khaldun’s multidisciplinary dynamics can be more helpful in formulating socio-economic policies that help improve the overall performance of a society. Neoclassical economics is unable to do this because, as North has rightly asked, “How can one prescribe policies when one does not understand how economies develop?” He, therefore, considers neoclassical economics to be “an inappropriate tool to analyze and prescribe policies that will induce development” (North, 1994, p. 549).

However, this is not all that Islamic economics has done. Muslim scholars, including Abu Yusuf (d. 798), al-Mawardi (d. 1058), Ibn Hazm (d. 1064), al-Sarakhsi (d. 1090), al-Tusi (d. 1093), al-Ghazali (d. 1111), al-Dimashqi (d. after 1175), Ibn Rushd (d. 1187), Ibn Taymiyyah (d.1328), Ibn al-Ukhuwwah (d. 1329), Ibn al-Qayyim (d. 1350), al-Shatibi (d. 1388), Ibn Khaldun (d. 1406), al-Maqrizi (d. 1442), al-Dawwani (d. 1501), and Shah Waliyullah (d. 1762) made a number of valuable contributions to economic theory. Their insight into some economic concepts was so deep that a number of the theories propounded by them could undoubtedly be considered the forerunners of some more sophisticated modern formulations of these theories.[8]

Division of Labor, Specialization, Trade, Exchange and Money and Banking

A number of scholars emphasized the necessity of division of labor for economic development long before this happened in conventional economics. For example, al-Sarakhsi (d. 1090) said: “the farmer needs the work of the weaver to get clothing for himself, and the weaver needs the work of the farmer to get his food and the cotton from which the cloth is made …, and thus everyone of them helps the other by his work…” (1978, Vol. 30, p. 264). Al-Dimashqi, writing about a century later, elaborates further by saying: “No individual can, because of the shortness of his life span, burden himself with all industries. If he does, he may not be able to master the skills of all of them from the first to the last. Industries are all interdependent. Construction needs the carpenter and the carpenter needs the ironsmith and the ironsmith needs the miner, and all these industries need premises. People are, therefore, necessitated by force of circumstances to be clustered in cities to help each other in fulfilling their mutual needs” (1977, p. 20-21).

Ibn Khaldun ruled out the feasibility or desirability of self-sufficiency, and emphasized the need for division of labor and specialization by indicating that: “It is well-known and well-established that individual human beings are not by themselves capable of satisfying all their individual economic needs. They must all cooperate for this purpose. The needs that can be satisfied by a group of them through mutual cooperation are many times greater than what individuals are capable of satisfying by themselves” (p. 360). In this respect he was perhaps the forerunner of the theory of comparative advantage, the credit for which is generally given in conventional economics to David Ricardo who formulated it in 1817.

The discussion of division of labor and specialization, in turn, led to an emphasis on trade and exchange, on well-regulated and properly functioning markets through their effective regulation and supervision (hisbah), and on money as a stable and reliable measure, medium of exchange and store of value. However, because of the bimetallism (gold and silver coins circulating together) which then prevailed, and the different supply and demand conditions that the two metals faced, the rate of exchange between the two full-bodied coins fluctuated. This was further complicated by the debasement of currencies by governments in later centuries to tide over their fiscal problems. This had, according to Ibn Taymiyyah (d. 1328) (1961-63, Vol. 29, p. 649), and later on al-Maqrizi (d. 1442) and al-Asadi (d. 1450), the effect of bad coins driving good coins out of circulation (al-Misri, 1981, pp. 54 and 66), a phenomenon recognized in the West in the sixteenth century and later referred to as Gresham's Law. Since debasement of currencies is a clear violation of the Islamic emphasis on honesty and integrity in all measures of value, fraudulent practices in the issue of coins in the fourteenth century and afterwards elicited a great deal of literature on monetary theory and policy. The Muslims, according to Baeck, should therefore be considered forerunners and critical incubators of the debasement literature of the fourteenth and fifteenth centuries (Baeck, 1994, p. 114).

To finance their expanding domestic and international trade, the Muslim world also developed a financial system, which was able to mobilize the "entire reservoir of monetary resources of the mediaeval Islamic world" for financing agriculture, crafts, manufacturing and long-distance trade (Udovitch, 1970, pp. 180 and 261). Financiers were known as sarrafs. By the time of Abbasid Caliph al-Muqtadir (908-32), they had started performing most of the basic functions of modern banks (Fischel, 1992). They had their own markets, something akin to Wall Street in New York or Lombard Street in London, and fulfilled all the banking needs of commerce, agriculture and industry (Duri, 1986, p. 898). This promoted the use of checks (sakk) and letters of credit (hawala). The English word check comes from the Arabic term sakk.

Demand and Supply

A number of Muslim scholars seem to have clearly understood the role of both demand and supply in the determination of prices. For example, Ibn Taymiyyah (d. 1328) wrote: “The rise or fall of prices may not necessarily be due to injustice by some people. They may also be due to the shortage of output or the import of commodities in demand. If the demand for a commodity increases and the supply of what is demanded declines, the price rises. If, however, the demand falls and the supply increases, the price falls” (1961-3, Vol. 8, p. 523).

Writing nearly five centuries before Ibn Taymiyyah, al-Jahiz (d. 869) had already observed: "Anything available in the market is cheap because of its availability [supply] and dear by its lack of availability if there is need [demand] for it" (1983, p. 13), adding that "anything the supply of which increases, becomes cheap except intelligence, which becomes dearer when it increases" (ibid., p. 13).

Ibn Khaldun went even further, emphasizing that either an increase in demand or a fall in supply leads to a rise in prices, while either a decline in demand or a rise in supply contributes to a fall in prices (pp. 393 and 396). He believed that while the continuation of 'excessively low' prices hurts the craftsmen and traders and drives them out of the market, the continuation of 'excessively high' prices hurts the consumers. 'Moderate' prices in between the two extremes were, therefore, desirable, because they would not only allow the traders a socially-acceptable level of return but also lead to the clearance of the markets by promoting sales and thereby generating turnover and prosperity (ibid, p. 398). Nevertheless, low prices were desirable for necessities because they provide relief to the poor, who constitute the majority of the population (ibid, p. 398). If one were to use modern terminology, one could say that Ibn Khaldun found a stable price level with a relatively low cost of living to be preferable, from the point of view of both growth and equity, to bouts of inflation and deflation. The former hurts equity while the latter reduces incentive and efficiency. Low prices for necessities should not, however, be attained through the fixing of prices by the state; this destroys the incentive for production (ibid, pp. 279-83).

The factors which determined demand were, according to Ibn Khaldun, income, the price level, the size of the population, government spending, the habits and customs of the people, and the general development and prosperity of the society (ibid, pp. 398-404). The factors which determined supply were demand (ibid, pp. 400 and 403), order and stability (pp. 306-08), the relative rate of profit (ibid, pp. 395 and 398), the extent of human effort (p. 381), the size of the labor force as well as its knowledge and skill (pp. 363 and 399-400), peace and security (pp. 394-95 and 396), and the technical background and development of the whole society (pp. 399-403). All of these constituted important elements of his theory of production. If prices fall and lead to losses, capital is eroded and the incentive to supply declines, leading to a recession; trade and crafts consequently suffer (p. 398).

This is highly significant because the role of both demand and supply in the determination of value was not well understood in the West until the late nineteenth and early twentieth centuries. Pre-classical English economists like William Petty (1623-87), Richard Cantillon (1680-1734), James Steuart (1712-80), and even Adam Smith (1723-90), the founder of the Classical School, generally stressed only the role of the cost of production, and particularly of labor, in the determination of value. The first use in English writings of the notions of both demand and supply was perhaps in 1767 (Thweatt, 1983). Nevertheless, it was not until the second decade of the nineteenth century that the role of both demand and supply in the determination of market prices began to be fully appreciated (Groenewegen, 1973). While Ibn Khaldun was far ahead of conventional economists, he probably did not have any idea of demand and supply schedules, elasticities of demand and supply and, most important of all, equilibrium price, which plays a crucial role in modern economic discussions.
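For contrast, the modern apparatus referred to here can be stated in a single line. The following is a textbook formulation offered purely for illustration, not anything found in the Muqaddimah: if Q_d(p) denotes the quantity demanded at price p (decreasing in p) and Q_s(p) the quantity supplied (increasing in p), the equilibrium price p* is defined by

\[ Q_d(p^*) = Q_s(p^*) , \]

and elasticities measure the percentage response of each schedule to a one-percent change in price. It is precisely these schedules, elasticities, and this equilibrium condition that Ibn Khaldun's verbal analysis lacked.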

Public Finance

Taxation

The development of the canons of taxation, particularly the need for the tax system to be just and not oppressive, can be traced in the writings of pre-Islamic as well as Muslim scholars long before Adam Smith (d. 1790), who is famous, among other things, for his canons of taxation (equality, certainty, convenience of payment, and economy in collection) (see Smith, 1937, pp. 777-79). Caliphs Umar (d. 644), Ali (d. 661) and Umar ibn Abd al-Aziz (d. 720) stressed that taxes should be collected with justice and leniency and should not be beyond the ability of the people to bear. Tax collectors should not under any circumstances deprive the people of the necessities of life (Abu Yusuf, 1933/34, pp. 14, 16 and 86). Abu Yusuf, adviser to Caliph Harun al-Rashid (786-809), argued that a just tax system would lead not only to an increase in revenues but also to the development of the country (Abu Yusuf, 1933/34, p. 111; see also pp. 14, 16, 60, 85, 105-19 and 125). Al-Mawardi also argued that the tax system should do justice to both the taxpayer and the treasury: "taking more was iniquitous with respect to the rights of the people, while taking less was unfair with respect to the right of the public treasury" (1960, p. 209; see also pp. 142-56 and 215).[9]

Ibn Khaldun stressed the principles of taxation very forcefully in the Muqaddimah. He quoted from a letter written by Tahir ibn al-Husayn, Caliph al-Ma’mun’s general, advising his son, ‘Abdullah ibn Tahir, Governor of al-Raqqah (Syria): “So distribute [taxes] among all people making them general, not exempting anyone because of his nobility or wealth and not exempting even your own officials or courtiers or followers. And do not levy on anyone a tax which is beyond his capacity to pay” (p. 308).[10] In this particular passage, he stressed the principles of equity and neutrality, while in other places he also stressed the principles of convenience and productivity.

The effect of taxation on incentives and productivity was visualized so clearly by Ibn Khaldun that he seems to have grasped the concept of optimum taxation. He anticipated the gist of the Laffer Curve, nearly six hundred years before Arthur Laffer, in two full chapters of the Muqaddimah.[11] At the end of the first chapter, he concluded that "the most important factor making for business prosperity is to lighten as much as possible the burden of taxation on businessmen, in order to encourage enterprise by ensuring greater profits [after taxes]" (p. 280). This he explained by stating that "when taxes and imposts are light, the people have the incentive to be more active. Business therefore expands, bringing greater satisfaction to the people because of low taxes …, and tax revenues also rise, being the sum total of all assessments" (p. 279). He went on to say that as time passes the needs of the state increase and rates of taxation rise to increase the yield. If this rise is gradual people become accustomed to it, but ultimately there is an adverse impact on incentives. Business activity is discouraged and declines, and so does the yield of taxation (pp. 280-81). A prosperous economy at the beginning of the dynasty thus yields higher tax revenue from lower tax rates, while a depressed economy at the end of the dynasty yields smaller tax revenue from higher rates (p. 279). He explained the reasons for this by stating: "Know that acting unjustly with respect to people's wealth, reduces their will to earn and acquire wealth … and if the will to earn goes, they stop working. The greater the oppression, the greater the effect on their effort to earn … and, if people abstain from earning and stop working, the markets will stagnate and the condition of people will worsen" (pp. 286-87); tax revenues will also decline (p. 362). He, therefore, advocated justice in taxation (p. 308).
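Ibn Khaldun's reasoning can be restated in modern notation. The following is a minimal sketch under an assumed functional form, my own illustration rather than anything stated in the Muqaddimah: let t be the tax rate and B(t) the tax base, with B'(t) < 0 because heavier imposts weaken the will to earn. Tax revenue is then

\[ R(t) = t \, B(t), \qquad R'(t) = B(t) + t \, B'(t) . \]

Since R(0) = 0 and the base shrinks toward zero at confiscatory rates, revenue peaks at some interior rate t* where R'(t*) = 0; beyond t*, higher rates yield lower revenue. This is exactly the late-dynasty situation Ibn Khaldun describes, in which a depressed economy yields smaller tax revenue from higher rates.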

Public Expenditure

For Ibn Khaldun the state was also an important factor of production. By its spending it promotes production and by its taxation it discourages production (pp. 279-81). Since the government constitutes the greatest market for goods and services, and is a major source of all development (pp. 286 and 403), a decrease in its spending leads to not only a slackening of business activity and a decline in profits but also a decline in tax revenue (p. 286). The more the government spends, the better it may be for the economy (p. 286).[12] Higher spending enables the government to do the things that are needed to support the population and to ensure law and order and political stability (pp. 306 and 308). Without order and political stability, the producers have no incentive to produce. He stated that “the only reason [for the accelerated development of cities] is that the government is near them and pours its money into them, like the water [of a river] that makes green everything around it, and irrigates the soil adjacent to it, while in the distance everything remains dry” (p. 369).

Ibn Khaldun also analyzed the effect of government expenditure on the economy and is, in this respect, a forerunner of Keynes. He stated: “A decrease in government spending leads to a decline in tax revenues. The reason for this is that the state represents the greatest market for the world and the source of civilization. If the ruler hoards tax revenues, or if these are lost, and he does not spend them as they should be, the amount available with his courtiers and supporters would decrease, as would also the amount that reaches through them to their employees and dependents [the multiplier effect]. Their total spending would, therefore, decline. Since they constitute a significant part of the population and their spending constitutes a substantial part of the market, business will slacken and the profits of businessmen will decline, leading also to a decline in tax revenues … Wealth tends to circulate between the people and the ruler, from him to them and from them to him. Therefore, if the ruler withholds it from spending, the people would become deprived of it” (p. 286).
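The 'multiplier effect' noted in brackets in this translation can be made precise with the standard Keynesian formula, which is of course a modern construction and not Ibn Khaldun's own: if households spend a fraction c (the marginal propensity to consume) of each additional unit of income, a change ΔG in the ruler's spending changes total income Y by

\[ \Delta Y = \frac{\Delta G}{1 - c} , \]

so a withdrawal of state spending contracts the economy by a multiple of the original cut, as the reduction passes round after round from courtiers to their employees and dependents.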

Economic Mismanagement and Famine

Ibn Khaldun established the causal link between bad government and high grain prices by indicating that in the later stage of the dynasty, when public administration becomes corrupt and inefficient, and resorts to coercion and oppressive taxation, incentive is adversely affected and the farmers refrain from cultivating the land. Grain production and reserves fail to keep pace with the rising population. The absence of reserves causes supply shortages in the event of a famine and leads to price escalation (pp. 301-02).

Al-Maqrizi (d. 1442), who as muhtasib (market supervisor) had intimate knowledge of the economic conditions of his times, applied Ibn Khaldun's analysis in his book (1956) to determine the reasons for the economic crisis of Egypt during the period 1403-06. He found that the political administration had become very weak and corrupt during the Circassian period. Public officials were appointed on the basis of bribery rather than ability.[13] To recover the bribes, officials resorted to oppressive taxation. The incentive to work and produce was adversely affected and output declined. The crisis was further intensified by debasement of the currency through the excessive issue of copper fulus, or fiat money, to cover state budgetary deficits. All these factors combined with the famine to produce a high degree of inflation, misery for the poor, and impoverishment of the country.

Hence, al-Maqrizi laid bare the socio-political determinants of the prevailing ‘system crisis’ by taking into account a number of variables like corruption, bad government policies, and weak administration. All of these together played a role in worsening the impact of the famine, which could otherwise have been handled effectively without a significant adverse impact on the population. This is clearly a forerunner of Sen’s entitlement theory, which holds the economic mismanagement of illegitimate governments to be responsible for the poor people’s misery during famines and other natural disasters (Sen, 1981). What al-Maqrizi wrote of the Circassian Mamluks was also true of the later Ottoman period (See Meyer, 1989).

Stages of Development

Ibn Khaldun described the stages of development through which every society passes, from the primitive Bedouin stage to the rise of villages, towns and urban centers with an effective government, the development of agriculture, industry and sciences, and the impact of values and environment on this development (Muqaddimah, pp. 35, 41-44, 87-95, 120-48, 172-76). Waliyullah[14] (d. 1762) later analyzed the development of society through four different stages, from primitive existence to a well-developed community with khilafah (a morally-based welfare state), which tries to ensure the spiritual as well as material well-being of the people. Like Ibn Khaldun, he considered political authority to be indispensable for human well-being. To serve as a source of well-being for all, rather than of burden and decay, it must have the characteristics of the khilafah. He applied this analysis in various writings to the conditions prevailing during his lifetime. He found that the luxurious lifestyle of the rulers, along with their exhausting military campaigns, the increasing corruption and inefficiency of the civil service, and huge stipends to a vast retinue of unproductive courtiers, led to the imposition of oppressive taxes on farmers, traders and craftsmen, who constituted the main productive section of the population. These people had, therefore, lost interest in their occupations, output had slowed down, state financial resources had declined, and the country had become impoverished (Waliyullah, 1992, Vol. I, pp. 119-52). Thus, in step with Ibn Khaldun and other Muslim scholars, al-Maqrizi and Waliyullah combined moral, political, social and economic factors to explain the economic phenomena of their times and the rise and fall of their societies.

Muslim Intellectual Decline

Unfortunately, the rich theoretical contribution made by Muslim scholars up until Ibn Khaldun did not get fertilized and irrigated by later scholars to lead to the development of Islamic economics, except by a few isolated scholars like al-Maqrizi, al-Dawwani (d. 1501), and Waliyullah. Their contributions were, however, only in specific areas and did not lead to a further development of Ibn Khaldun’s model of socio-economic and political dynamics. Islamic economics did not, therefore, develop as a separate intellectual discipline in conformity with the Islamic paradigm along the theoretical foundations and method laid down by Ibn Khaldun and his predecessors. It continued to remain an integral part of the social and moral philosophy of Islam.

One may ask here why the rich intellectual contributions made by Muslim scholars did not continue after Ibn Khaldun. The reason may be that, as indicated earlier, Ibn Khaldun lived at a time when the political and socio-economic decline of the Muslim world was underway.[15] He was perhaps “the sole point of light in his quarter of the firmament” (Toynbee, 1935, Vol. 3, p. 321). According to Ibn Khaldun himself, sciences progress only when a society is itself progressing (p. 434). This theory is clearly upheld by Muslim history. Sciences progressed rapidly in the Muslim world for four centuries from the middle of the eighth century to the middle of the twelfth century and continued to do so at a substantially decelerated pace for at least two more centuries, tapering off gradually thereafter (Sarton 1927, Vol. 1 and Book 1 of Vol. 2). Once in a while there did appear a brilliant star on an otherwise unexciting firmament. Economics was no exception. It also continued to be in a state of limbo in the Muslim world. No worthwhile contributions were made after Ibn Khaldun.

The trigger mechanism for this decline was, according to Ibn Khaldun, the failure of political authority to provide good governance. Political illegitimacy, which started after the end of the khilafah in 661, gradually led to increased corruption and the use of state resources for private benefit, to the neglect of education and other nation-building functions of the state. This gradually triggered the decline of all other sectors of the society and economy.[16]

The rapidly rising Western civilization took over the torch of knowledge from the declining Muslim world and has kept it burning with even greater brightness. All sciences, including the social sciences, have made phenomenal progress. Conventional economics became a separate academic discipline after the publication of Alfred Marshall's great treatise, Principles of Economics, in 1890 (Schumpeter, 1954, p. 21),[17] and has continued to develop since then at a remarkable speed. With such a great achievement to its credit, there is no psychological need to allow the 'Great Gap' thesis to persist. It would help promote better understanding of Muslim civilization in the West if textbooks started giving credit to Muslim scholars. They were "the torchbearers of ancient learning during the medieval period" and "it was from them that the Renaissance was sparked and the Enlightenment kindled" (Todd Lowry in his 'Foreword' in Ghazanfar, 2003, p. xi). Watt has been frank enough to admit that "the influence of Islam on Western Christendom is greater than is usually realized" and that "an important task for Western Europeans, as we move into the era of the one world, is … to acknowledge fully our debt to the Arab and Islamic world" (Watt, 1972, p. 84).

Conventional economics, however, took a wrong turn after the Enlightenment Movement by stripping itself of the moral basis of society emphasized by Aristotelian and Judeo-Christian philosophies. This deprived it of the role that moral values and good governance can play in helping society raise both efficiency and equity in the allocation and distribution of the scarce resources needed for promoting the well-being of all. However, this has been changing. The role of good governance has already been recognized and that of moral values is gradually penetrating the economics orthodoxy. Islamic economics is also reviving now after the independence of Muslim countries from foreign domination. It is likely that the two disciplines will converge and become one after a period of time. This will be in keeping with the teachings of the Qur'an, which clearly states that mankind was created as one but became divided as a result of their differences and transgression against each other (10:19, 2:213 and 3:19). This reunification [globalization, as it is now called], if reinforced by justice and mutual care, should help promote peaceful coexistence and enable mankind to realize the well-being of all, a goal to which we all anxiously look forward.

References

Abu Yusuf, Ya ‘qub ibn Ibrahim. Kitab al-Kharaj. Cairo: al-Matab‘ah al-Salafiyyah, second edition, 1933/34. (This book has been translated into English by A. Ben Shemesh. Taxation in Islam. Leiden: E. J. Brill, 1969.)

Allouche, Adel. Mamluk Economics: A Study and Translation of Al-Maqrizi’s Ighathah. Salt Lake City: University of Utah Press, 1994.

Baeck, Louis. The Mediterranean Tradition in Economic Thought. London: Routledge, 1994.

Blanchflower, David, and Andrew Oswald. "Well-Being over Time in Britain and the USA." NBER Working Paper No. 7487, 2000.

Blaug, Mark. Economic Theory in Retrospect. Cambridge: Cambridge University Press, 1985.

Boulakia, Jean David C. “Ibn Khaldun: A Fourteenth-Century Economist.” Journal of Political Economy 79, no. 5 (1971): 1105-18.

Chapra, M. Umer. The Future of Economics: An Islamic Perspective. Leicester, UK: The Islamic Foundation, 2000.

Cline, William R. Potential Effects of Income Redistribution on Economic Growth. New York: Praeger, 1973.

DeSmogyi, Joseph N. “Economic Theory in Classical Arabic Literature.” Studies in Islam (Delhi), (1965): 1-6.

Diener, E., and Shigehiro Oishi. "Money and Happiness: Income and Subjective Well-being." In Culture and Subjective Well-being, edited by E. Diener and E. Suh. Cambridge, MA: MIT Press, 2000.

Dimashqi, Abu al-Fadl Ja‘far ibn ‘Ali al-. Al-Isharah ila Mahasin al-Tijarah, Al-Bushra al-Shurbaji, editor. Cairo: Maktabah al-Kulliyat al-Azhar, 1977.

Duri, A.A. “Baghdad.” The Encyclopedia of Islam, 894-99. Leiden: Brill, 1986.

Easterlin, Richard. “Does Economic Growth Improve the Human Lot? Some Empirical Evidence.” In Nations and Households in Economic Growth: Essays in Honor of Moses Abramowitz, edited by Paul David and Melvin Reder. New York: Academic Press, 1974.

Easterlin, Richard. “Will Raising the Income of All Increase the Happiness of All?” Journal of Economic Behavior and Organization 27, no. 1 (1995): 35-48.

Easterlin, Richard. "Income and Happiness: Towards a Unified Theory." Economic Journal 111, no. 473 (2001).

Essid, M. Yassine. A Critique of the Origins of Islamic Economic Thought. Leiden: Brill, 1995.

Feyerabend, Paul. Against Method: Outline of an Anarchistic Theory of Knowledge. London: Verso, third edition, 1993.

Fischel, W.J. “Djahbadh.” In Encyclopedia of Islam, volume 2, 382-83. Leiden: Brill, 1992.

Friedman, Milton. Essays in Positive Economics. Chicago: University of Chicago Press, 1953.

George, Henry. Progress and Poverty. New York: Robert Schalkenback Foundation, 1955.

Ghazanfar, S.M. Medieval Islamic Economic Thought: Filling the Great Gap in European Economics. London: Routledge Curzon, 2003.

Groenewegen, P.D. “A Note on the Origin of the Phrase, ‘Supply and Demand.’” Economic Journal 83, no. 330 (1973): 505-09.

Hausman, Daniel, and Michael McPherson. “Taking Ethics Seriously: Economics and Contemporary Moral Philosophy.” Journal of Economic Literature 31, no. 2 (1993): 671-731.

Ibn Khaldun. Muqaddimah. Cairo: Al-Maktabah al-Tijariyyah al-Kubra. See also its translation under Rosenthal (1967) and selections from it under Issawi (1950).

Ibn Taymiyyah. Majmu‘ Fatawa Shaykh al-Islam Ahmad Ibn Taymiyyah. ‘Abd al-Rahman al-‘Asimi, editor. Riyadh: Matabi‘ al-Riyad, 1961-63.

Islahi, A. Azim. History of Economic Thought in Islam. Aligharh, India: Department of Economics, Aligharh Muslim University, 1996.

Issawi, Charles. An Arab Philosophy of History: Selections from the Prolegomena of Ibn Khaldun of Tunis (1332-1406). London: John Murray, 1950.

Jahiz, Amr ibn Bahr al-. Kitab al-Tabassur bi al-Tijarah. Beirut: Dar al-Kitab al-Jadid, 1983.

Jay, Elizabeth, and Richard Jay. Critics of Capitalism: Victorian Reactions to Political Economy. Cambridge: Cambridge University Press, 1986.

Kenny, Charles. “Does Growth Cause Happiness, or Does Happiness Cause Growth?” Kyklos 52, no. 1 (1999): 3-26.

Koopmans, T.C. "Inter-temporal Distribution and 'Optimal' Aggregate Economic Growth." In Fellner et al., Ten Economic Studies in the Tradition of Irving Fisher. New York: John Wiley and Sons, 1969.

Mahdi, Mohsin. Ibn Khaldun’s Philosophy of History. Chicago: University of Chicago Press, 1964.

Maqrizi, Taqi al-Din Ahmad ibn Ali al-. Ighathah al-Ummah bi Kashf al-Ghummah. Hims, Syria: Dar ibn al-Wahid, 1956. (See its English translation by Allouche, 1994).

Mawardi, Abu al-Hasan ‘Ali al-. Adab al-Dunya wa al-Din. Mustafa al Saqqa, editor. Cairo: Mustafa al-Babi al Halabi, 1955.

Mawardi, Abu al-Hasan ‘Ali al-. Al-Ahkam al-Sultaniyyah wa al-Wilayat al-Diniyyah. Cairo: Mustafa al-Babi al-Halabi, 1960. (The English translation of this book by Wafa Wahba has been published under the title, The Ordinances of Government. Reading: Garnet, 1996.)

Mirakhor, Abbas. “The Muslim Scholars and the History of Economics: A Need for Consideration.” American Journal of Islamic Social Sciences (1987): 245-76.

Meyer, M.S. "Economic Thought in the Ottoman Empire in the 14th – Early 19th Centuries." Archiv Orientali 4, no. 57 (1989): 305-18.

Misri, Rafiq Yunus al-. Al-Islam wa al-Nuqud. Jeddah: King Abdulaziz University, 1981.

Myers, Milton L. The Soul of Modern Economic Man: Ideas of Self-Interest, Thomas Hobbes to Adam Smith. Chicago: University of Chicago Press, 1983.

North, Douglass C. Structure and Change in Economic History. New York: W.W. Norton, 1981.

North, Douglass C. “Economic Performance through Time.” American Economic Review 84, no. 2 (1994): 359-68.

Oswald, A.J. “Happiness and Economic Performance,” Economic Journal 107, no. 445 (1997): 1815-31.

Pifer, Josef. “Scholasticism.” Encyclopedia Britannica 16 (1978): 352-57.

Robbins, Lionel. An Essay on the Nature and Significance of Economic Science. London: Macmillan, second edition, 1935.

Rosenthal, Franz. Ibn Khaldun: The Muqaddimah, An Introduction to History. Princeton, NJ: Princeton University Press, 1967.

Sarakhsi, Shams al-Din al-. Kitab al-Mabsut. Beirut: Dar al-Ma‘rifah, third edition, 1978 (particularly “Kitab al-Kasb” of al-Shaybani in Vol. 30: 245-97).

Sarton, George. Introduction to the History of Science. Washington, DC: Carnegie Institution, three volumes issued between 1927 and 1948 (each of the second and third volumes has two parts).

Schumpeter, Joseph A. History of Economic Analysis. New York: Oxford University Press, 1954.

Sen, Amartya. Poverty and Famines: An Essay on Entitlement and Deprivation. Oxford: Clarendon Press, 1981.

Sen, Amartya. On Ethics and Economics. Oxford: Basil Blackwell, 1987.

Siddiqi, M. Nejatullah. "History of Islamic Economic Thought." In Lectures on Islamic Economics, edited by Ausaf Ahmad and K.R. Awan, 69-90. Jeddah: IDB/IRTI, 1992.

Smith, Adam. An Inquiry into the Nature and Causes of the Wealth of Nations. New York: Modern Library, 1937.

Solo, Robert A. "Values and Judgments in the Discourse of the Sciences." In Value Judgment and Income Distribution, edited by Robert A. Solo and Charles A. Anderson, 9-40. New York: Praeger, 1981.

Spengler, Joseph. “Economic Thought in Islam: Ibn Khaldun.” Comparative Studies in Society and History (1964): 268-306.

Thweatt, W.O. “Origins of the Terminology, Supply and Demand.” Scottish Journal of Political Economy (1983): 287-94.

Toynbee, Arnold J. A Study of History. London: Oxford University Press, second edition, 1935.

Udovitch, Abraham L. Partnership and Profit in Medieval Islam. Princeton, NJ: Princeton University Press, 1970.

Waliyullah, Shah. Hujjatullah al-Balighah. M. Sharif Sukkar, editor. Beirut: Dar Ihya al-Ulum, second edition, two volumes, 1992. (An English translation of this book by Marcia K. Hermansen was published by Brill, Leiden, 1996.)

Watt, W. Montgomery. The Influence of Islam on Medieval Europe. Edinburgh: Edinburgh University Press, 1972.


[1] This is the liberal version of the secular and materialist worldviews. There is also the totalitarian version which does not have faith in the individuals’ ability to manage private property in a way that would ensure social well-being. Hence its prescription is to curb individual freedom and to transfer all means of production and decision making to a totalitarian state. Since this form of the secular and materialist worldview failed to realize human well-being and has been overthrown practically everywhere, it is not discussed in this paper.

[2] The literature on economic development is full of assertions that improvement in income distribution is in direct conflict with economic growth. For a summary of these views, see Cline, 1973, Chapter 2. This has, however, changed and there is hardly any development economist now who argues that injustice can help promote development.

[3] North has used the term ‘nasty’ for all such behavior. See the chapter “Ideology and Free Rider,” in North, 1981.

[4] Some of these scholars include Abu Yusuf (d. 798), al-Mawardi (d. 1058), Abu Ya'la (d. 1065), Nizam al-Mulk (d. 1092), al-Ghazali (d. 1111), Ibn Taymiyyah (d. 1328), Ibn Khaldun (d. 1406), Shah Waliyullah (d. 1762), Jamaluddin al-Afghani (d. 1897), Muhammad 'Abduh (d. 1905), Muhammad Iqbal (d. 1938), Hasan al-Banna (d. 1949), Sayyid Mawdudi (d. 1979), and Baqir al-Sadr (d. 1980).

[5] Some of these authors include al-Katib (d. 749), Ibn al-Muqaffa (d. 756), al-Nu'man (d. 974), al-Mawardi (d. 1058), Kai Ka'us (d. 1082), Nizam al-Mulk (d. 1092), al-Ghazali (d. 1111), and al-Turtushi (d. 1127). (For details, see Essid, 1995, pp. 19-41.)

[6] For the fallacy of the Great Gap thesis, see Mirakhor (1987) and Ghazanfar (2003), particularly the “Foreword” by Todd Lowry and the “Introduction” by Ghazanfar.

[7] The full name of the book (given in the bibliography) may be freely translated as “The Book of Lessons and the Record of Cause and Effect in the History of Arabs, Persians and Berbers and their Powerful Contemporaries.” Several different editions of the Muqaddimah are now available in Arabic. The one I have used is that published in Cairo by al-Maktabah al-Tijarriyah al-Kubra without any indication of the year of publication. It has the advantage of showing all vowel marks, which makes the reading relatively easier. The Muqaddimah was translated into English in three volumes by Franz Rosenthal. Its first edition was published in 1958 and the second edition in 1967. Selections from the Muqaddimah by Charles Issawi were published in 1950 under the title, An Arab Philosophy of History: Selections from the Prolegomena of Ibn Khaldun of Tunis (1332-1406).

A considerable volume of literature is now available on Ibn Khaldun. This includes Spengler, 1964; Boulakia, 1971; Mirakhor, 1987; and Chapra, 2000.

[8] For some of these contributions, see Spengler, 1964; DeSmogyi, 1965; Mirakhor, 1987; Siddiqi, 1992; Essid, 1995; Islahi, 1996; Chapra, 2000; and Ghazanfar, 2003.

[9] For a more detailed discussion of taxation by various Muslim scholars, see the section on “Literature on Mirrors for Princes” in Essid, 1995, pp. 19-41.

[10] This letter is a significant development over the letter of Abu Yusuf to Caliph Harun al-Rashid (1933/34, pp. 3-17). It is more comprehensive and covers a larger number of topics.

[11] These are “On tax revenues and the reason for their being low and high” (pp. 279-80) and “Injustice ruins development” (pp. 286-410).

[12] Bear in mind that this was stated at a time when commodity money, which the government cannot simply 'create,' was in use and fiduciary money had not yet become the rule of the day.

[13] This was during the Slave (Mamluk) Dynasty in Egypt, which is divided into two periods. The first period was that of the Bahri (or Turkish) Mamluks (1250-1382), who have generally received praise in the chronicles of their contemporaries. The second period was that of the Burji Mamluks (Circassians, 1382-1517). This period was beset by a series of severe economic crises. (For details see Allouche, 1994.)

[14] Shah Waliyullah al-Dihlawi, popularly known as Waliyullah, was born in 1703, four years before the death of the Mughal Emperor Aurangzeb (1658-1707). Aurangzeb's rule, spanning a period of forty-nine years, was followed by a great deal of political instability – ten different changes of ruler during Waliyullah's life-span of 59 years – leading ultimately to the weakening and decline of the Mughal Empire.

[15] For a brief account of the general decline and disintegration of the Muslim world during the fourteenth century, see Mahdi, 1964, pp. 17-26.

[16] For a discussion of the causes of Muslim decline, see Chapra, 2000, pp. 173-252.

[17] According to Blaug (1985), economics became an academic discipline in the 1880s (p. 3).

Citation: Chapra, M. “Islamic Economics: What It Is and How It Developed”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/islamic-economics-what-it-is-and-how-it-developed/

The Economic History of Indonesia

Jeroen Touwen, Leiden University, Netherlands

Introduction

In recent decades, Indonesia has been viewed as one of Southeast Asia's successful, high-performing, newly industrializing economies, following the trail of the Asian tigers (Hong Kong, Singapore, South Korea, and Taiwan) (see Table 1). Although Indonesia's economy grew with impressive speed during the 1980s and 1990s, it experienced considerable trouble after the financial crisis of 1997, which led to significant political reforms. Today Indonesia's economy is recovering, but it is difficult to say when all its problems will be solved. Even though Indonesia can still be considered part of the developing world, it has a rich and versatile past, in the economic as well as the cultural and political sense.

Basic Facts

Indonesia is situated in Southeast Asia and consists of a large archipelago between the Indian Ocean and the Pacific Ocean, with more than 13,000 islands. The largest islands are Java, Kalimantan (the southern part of the island of Borneo), Sumatra, Sulawesi, and Papua (formerly Irian Jaya, which is the western part of New Guinea). Indonesia's total land area measures 1.9 million square kilometers (750,000 square miles). This is three times the area of Texas, almost eight times the area of the United Kingdom and roughly fifty times the area of the Netherlands. Indonesia has a tropical climate, but since there are large stretches of lowland and numerous mountainous areas, the climate varies from hot and humid to more moderate in the highlands. Apart from fertile land suitable for agriculture, Indonesia is rich in a range of natural resources, varying from petroleum, natural gas, and coal, to metals such as tin, bauxite, nickel, copper, gold, and silver. The size of Indonesia's population is about 230 million (2002), of which the largest share (roughly 60%) lives on Java.

Table 1

Indonesia’s Gross Domestic Product per Capita

Compared with Several Other Asian Countries (in 1990 dollars)

Year     Indonesia   Philippines   Thailand    Japan
1900         745         1,033          812     1,180
1913         904         1,066          835     1,385
1950         840         1,070          817     1,926
1973       1,504         1,959        1,874    11,439
1990       2,516         2,199        4,645    18,789
2000       3,041         2,385        6,335    20,084

Source: Angus Maddison, The World Economy: A Millennial Perspective, Paris: OECD Development Centre Studies 2001, 206, 214-215. For year 2000: University of Groningen and the Conference Board, GGDC Total Economy Database, 2003, http://www.eco.rug.nl/ggdc.
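As a worked example of what the table implies (my calculation from the figures above, not a number reported by Maddison), Indonesia's average annual growth of GDP per capita over the full century is

\[ \left( \frac{3041}{745} \right)^{1/100} - 1 \approx 0.014 , \]

that is, roughly 1.4 percent per year between 1900 and 2000, against about 2.9 percent per year for Japan computed the same way.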

Important Aspects of Indonesian Economic History

“Missed Opportunities”

Anne Booth has characterized the economic history of Indonesia with the somewhat melancholy phrase "a history of missed opportunities" (Booth 1998). One may compare this with J. Pluvier's history of Southeast Asia in the twentieth century, which is entitled A Century of Unfulfilled Expectations (Breda 1999). The missed opportunities refer to the fact that, despite its rich natural resources and great variety of cultural traditions, the Indonesian economy has been underperforming for large periods of its history. A more cyclical view would lead one to speak of several 'reversals of fortune.' Several times the Indonesian economy seemed to promise a continuation of favorable economic development and ongoing modernization (for example, Java in the late nineteenth century, Indonesia in the late 1930s or in the early 1990s). But for various reasons Indonesia time and again suffered severe setbacks that prevented further expansion. These setbacks often originated in the internal institutional or political spheres (either after independence or in colonial times), although external shocks such as the 1930s Depression also had a damaging impact on the vulnerable export economy.

“Unity in Diversity”

In addition, one often reads about "unity in diversity." This is not only a political slogan repeated at various times by the Indonesian government itself; it can also be applied to the heterogeneity of this very large and diverse country. Logically, the political problems that arise from such a heterogeneous nation state have had their (negative) effects on the development of the national economy. The most striking contrast is between densely populated Java, with its long tradition of political and economic dominance, and the sparsely populated Outer Islands. Yet one also encounters a rich cultural diversity within Java and within the various Outer Islands. Economic differences between the islands persist. Nevertheless, for centuries the flourishing and enterprising interregional trade has benefited regional integration within the archipelago.

Economic Development and State Formation

State formation can be viewed as a condition for an emerging national economy. This process essentially started in Indonesia in the nineteenth century, when the Dutch colonized an area largely similar to present-day Indonesia. Colonial Indonesia was called ‘the Netherlands Indies.’ The term ‘(Dutch) East Indies’ was mainly used in the seventeenth and eighteenth centuries and included trading posts outside the Indonesian archipelago.

Although Indonesian national historiography sometimes refers to a presumed 350 years of colonial domination, it is an exaggeration to interpret the arrival of the Dutch in Bantam in 1596 as the starting point of Dutch colonization. It is more reasonable to say that colonization started in 1830, when the Java War (1825-1830) had ended and the Dutch initiated a bureaucratic, centralizing polity in Java without further restraint. From the mid-nineteenth century onward, Dutch colonization did shape the borders of the Indonesian nation state, even though it also built weaknesses into the state: ethnic segmentation of economic roles, unequal spatial distribution of power, and a political system that was largely based on oppression and violence. This, among other things, repeatedly led to political trouble, before and after independence. Indonesia ceased being a colony on 17 August 1945, when Sukarno and Hatta proclaimed independence, although full independence was acknowledged by the Netherlands only after four years of violent conflict, on 27 December 1949.

The Evolution of Methodological Approaches to Indonesian Economic History

The economic history of Indonesia covers a range of topics, from the dynamic exports of raw materials and the dualist economy, in which both Western and Indonesian entrepreneurs participated, to the strong regional variation in the economy. While in the past Dutch historians traditionally focused on the colonial era (inspired by the rich colonial archives), from the 1960s and 1970s onward an increasing number of scholars (among them many Indonesian, as well as Australian and American, scholars) started to study post-war Indonesian events in connection with the colonial past. In the course of the 1990s attention gradually shifted from the identification and exploration of new research themes towards synthesis and attempts to link economic development with broader historical issues. In 1998 the excellent first book-length survey of Indonesia's modern economic history was published (Booth 1998). The stress on synthesis and lessons is also present in a new textbook on the modern economic history of Indonesia (Dick et al. 2002). This highly recommended textbook aims at a juxtaposition of three themes: globalization, economic integration and state formation. Globalization affected the Indonesian archipelago even before the arrival of the Dutch. The period of the centralized, military-bureaucratic state of Soeharto's New Order (1966-1998) was only the most recent wave of globalization. A national economy emerged gradually from the 1930s as the Outer Islands (a collective name which refers to all islands outside Java and Madura) reoriented towards industrializing Java.

Two research traditions have become especially important in the study of Indonesian economic history during the past decade. One is a highly quantitative approach, culminating in reconstructions of Indonesia’s national income and national accounts over a long period of time, from the late nineteenth century up to today (Van der Eng 1992, 2001). The other research tradition highlights the institutional framework of economic development in Indonesia, both as a colonial legacy and as it has evolved since independence. There is a growing appreciation among scholars that these two approaches complement each other.

A Chronological Survey of Indonesian Economic History

The precolonial economy

There were several influential kingdoms in the Indonesian archipelago during the pre-colonial era (e.g. Srivijaya, Mataram, Majapahit) (see further Reid 1988, 1993; Ricklefs 1993). Much debate centers on whether this heyday of indigenous Asian trade was effectively disrupted by the arrival of western traders in the late fifteenth century.

Sixteenth and seventeenth century

Current research on pre-colonial economic history focuses on the dynamics of early-modern trade and pays specific attention to the role of different ethnic groups, such as the Arabs, the Chinese and the various indigenous groups of traders and entrepreneurs. From the sixteenth to the nineteenth century the western colonizers had only a weak grip on a limited number of spots in the Indonesian archipelago. As a consequence, much of the economic history of these islands escapes the attention of the economic historian. Most data on economic matters were handed down by western observers with their limited view. A large part of the area remained engaged in its own economic activities, including subsistence agriculture (of which the results were not necessarily very meager) and local and regional trade.

An older research literature has extensively covered the role of the Dutch in the Indonesian archipelago, which began in 1596 when the first expedition of Dutch sailing ships arrived in Bantam. In the seventeenth and eighteenth centuries the Dutch overseas trade in the Far East, which focused on high-value goods, was in the hands of the powerful Dutch East India Company (in full: the United East Indies Trading Company, or Vereenigde Oost-Indische Compagnie [VOC], 1602-1795). However, the region was still fragmented and Dutch presence was only concentrated in a limited number of trading posts.

During the eighteenth century, coffee and sugar became the most important products and Java became the most important area. The VOC gradually took over power from the Javanese rulers and held a firm grip on the productive parts of Java. The VOC was also actively engaged in the intra-Asian trade. For example, cotton from Bengal was sold in the pepper-growing areas. The VOC was a successful enterprise and made large dividend payments to its shareholders. Corruption, lack of investment capital, and increasing competition from England led to its decline, and in 1799 the VOC was dissolved (Gaastra 2002, Jacobs 2000).

The nineteenth century

In the nineteenth century a process of more intensive colonization started, predominantly in Java, where the Cultivation System (1830-1870) was based (Elson 1994; Fasseur 1975).

During the Napoleonic era the VOC trading posts in the archipelago had been under British rule, but in 1814 they came under Dutch authority again. During the Java War (1825-1830), Dutch rule on Java was challenged by an uprising led by Javanese prince Diponegoro. To repress this revolt and establish firm rule in Java, colonial expenses increased, which in turn led to a stronger emphasis on economic exploitation of the colony. The Cultivation System, initiated by Johannes van den Bosch, was a state-governed system for the production of agricultural products such as sugar and coffee. In return for a fixed compensation (planting wage), the Javanese were forced to cultivate export crops. Supervisors, such as civil servants and Javanese district heads, were paid generous ‘cultivation percentages’ in order to stimulate production. The exports of the products were consigned to a Dutch state-owned trading firm (the Nederlandsche Handel-Maatschappij, NHM, established in 1824) and sold profitably abroad.

Although the profits (‘batig slot’) for the Dutch state of the period 1830-1870 were considerable, various reasons can be mentioned for the change to a liberal system: (a) the emergence of new liberal political ideology; (b) the gradual demise of the Cultivation System during the 1840s and 1850s because internal reforms were necessary; and (c) growth of private (European) entrepreneurship with know-how and interest in the exploitation of natural resources, which took away the need for government management (Van Zanden and Van Riel 2000: 226).

Table 2

Financial Results of Government Cultivation, 1840-1849 (‘Cultivation System’) (in thousands of guilders in current values)

Product              1840-1844   1845-1849
Coffee                  40,278      24,549
Sugar                    8,218       4,136
Indigo                   7,836       7,726
Pepper, tea                647       1,725
Total net profits       39,341      35,057

Source: Fasseur 1975: 20.

Table 3

Estimates of Total Profits (‘batig slot’) during the Cultivation System,

1831/40 – 1861/70 (in millions of guilders)

                                              1831/40   1841/50   1851/60   1861/70
Gross revenues of sale of colonial products     227.0     473.9     652.7     641.8
Costs of transport etc. (NHM)                    88.0     165.4     138.7     114.7
Sum of expenses                                  59.2     175.1     275.3     276.6
Total net profits*                              150.6     215.6     289.4     276.7

Source: Van Zanden and Van Riel 2000: 223.

* Recalculated by Van Zanden and Van Riel to include subsidies for the NHM and other costs that in fact benefited the Dutch economy.

The heyday of the colonial export economy (1900-1942)

After 1870, private enterprise was promoted but the exports of raw materials gained decisive momentum after 1900. Sugar, coffee, pepper and tobacco, the old export products, were increasingly supplemented with highly profitable exports of petroleum, rubber, copra, palm oil and fibers. The Outer Islands supplied an increasing share in these foreign exports, which were accompanied by an intensifying internal trade within the archipelago and generated an increasing flow of foreign imports. Agricultural exports were cultivated both in large-scale European agricultural plantations (usually called agricultural estates) and by indigenous smallholders. When the exploitation of oil became profitable in the late nineteenth century, petroleum earned a respectable position in the total export package. In the early twentieth century, the production of oil was increasingly concentrated in the hands of the Koninklijke/Shell Group.


Figure 1

Foreign Exports from the Netherlands-Indies, 1870-1940

(in millions of guilders, current values)

Source: Trade statistics

The momentum of profitable exports led to a broad expansion of economic activity in the Indonesian archipelago. Integration with the world market also led to internal economic integration as the road system, railroad system (in Java and Sumatra) and port system were improved. In shipping, an important contribution was made by the KPM (Koninklijke Paketvaart-Maatschappij, Royal Packet Boat Company), which served economic integration as well as imperialist expansion. Subsidized shipping lines into remote corners of the vast archipelago carried off export goods (forest products), supplied import goods, and transported civil servants and military personnel.

The Depression of the 1930s hit the export economy severely. The sugar industry in Java collapsed and never really recovered from the crisis. For some products, such as rubber and copra, production was stepped up to compensate for lower prices; for this reason indigenous rubber producers evaded the international restriction agreements. The Depression precipitated the introduction of protectionist measures, which ended the liberal period that had started in 1870. Various import restrictions were launched, making the economy more self-sufficient (for example in the production of rice) and stimulating domestic integration. Due to the strong Dutch guilder (the Netherlands adhered to the gold standard until 1936), economic recovery took relatively long. The outbreak of World War II disrupted international trade, and the Japanese occupation (1942-1945) seriously disturbed and dislocated the economic order.

Table 4

Annual Average Growth in Economic Key Aggregates 1830-1990

Period                         GDP per capita   Export volume   Export prices   Government expenditure
Cultivation System 1830-1840        n.a.             13.5             5.0                8.5
Cultivation System 1840-1848        n.a.              1.5            -4.5            [very low]
Cultivation System 1849-1873        n.a.              1.5             1.5                2.6
Liberal Period 1874-1900        [very low]            3.1            -1.9                2.3
Ethical Period 1901-1928             1.7              5.8            17.4                4.1
Great Depression 1929-1934          -3.4             -3.9           -19.7                0.4
Prewar Recovery 1934-1940            2.5              2.2             7.8                3.4
Old Order 1950-1965                  1.0              0.8            -2.1                1.8
New Order 1966-1990                  4.4              5.4            11.6               10.6

Source: Booth 1998: 18.

Note: These average annual growth percentages were calculated by Booth by fitting an exponential curve to the data for the years indicated. Up to 1873 data refer only to Java.
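Booth's procedure can be written out explicitly. As a minimal sketch, assuming a series y_t observed in years t = 0, 1, …, T, fit by least squares the log-linear regression

\[ \ln y_t = \alpha + g t + \varepsilon_t , \]

and report the average annual growth rate as e^g − 1, approximately g for small g. Unlike a simple endpoint-to-endpoint calculation, the fitted exponential trend uses all observations in the period, so single unusual years carry less weight.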

The post-1945 period

After independence, the Indonesian economy had to recover from the hardships of the Japanese occupation and the war for independence (1945-1949), on top of the slow recovery from the 1930s Depression. Between 1949 and 1965 there was little economic growth, and most of it occurred in the years 1950 to 1957. In 1958-1965, growth rates dwindled, largely due to political instability and inappropriate economic policy measures. The hesitant start of democracy was characterized by a power struggle between the president, the army, the communist party and other political groups. Exchange rate problems and the absence of foreign capital were detrimental to economic development, after the government had eliminated all foreign economic control in the private sector in 1957/58. Sukarno aimed at self-sufficiency and import substitution and estranged the suppliers of western capital even further when he developed communist sympathies.

After 1966, the second president, general Soeharto, restored the inflow of western capital, brought back political stability with a strong role for the army, and led Indonesia into a period of economic expansion under his authoritarian New Order (Orde Baru) regime, which lasted until 1997 (see below for the three phases of the New Order). In this period industrial output quickly increased, including steel, aluminum, and cement, but also products such as food, textiles and cigarettes. From the 1970s onward the increased oil price on the world market provided Indonesia with a massive income from oil and gas exports. Wood exports shifted from logs to plywood, pulp, and paper, at the cost of large stretches of environmentally valuable rainforest.

Soeharto managed to apply part of these revenues to the development of technologically advanced manufacturing industry. Referring to this period of stable economic growth, the World Bank Report of 1993 speaks of an ‘East Asian Miracle’ emphasizing the macroeconomic stability and the investments in human capital (World Bank 1993: vi).

The financial crisis in 1997 revealed a number of hidden weaknesses in the economy, such as a feeble financial system (with a lack of transparency), unprofitable investments in real estate, and shortcomings in the legal system. The burgeoning corruption at all levels of the government bureaucracy became widely known as KKN (korupsi, kolusi, nepotisme). These practices marked the late years of the 32-year-old, strongly centralized, autocratic Soeharto regime.

From 1998 until present

Today, the Indonesian economy still suffers from severe economic development problems following the financial crisis of 1997 and the subsequent political reforms after Soeharto stepped down in 1998. Secessionist movements and the low level of security in the provincial regions, as well as relatively unstable political policies, form some of its present-day problems. Additional problems include the lack of reliable legal recourse in contract disputes, corruption, weaknesses in the banking system, and strained relations with the International Monetary Fund. The confidence of investors remains low, and in order to achieve future growth, internal reform will be essential to build up confidence of international donors and investors.

An important issue on the reform agenda is regional autonomy, bringing a larger share of export profits to the areas of production instead of to metropolitan Java. However, decentralization policies do not necessarily improve national coherence or increase efficiency in governance.

A strong comeback in the global economy may be at hand, but has not as yet fully taken place by the summer of 2003 when this was written.

Additional Themes in the Indonesian Historiography

Indonesia is such a large and multi-faceted country that many different aspects have been the focus of research (for example, ethnic groups, trade networks, shipping, colonialism and imperialism). One can focus on smaller regions (provinces, islands), as well as on larger regions (the western archipelago, the eastern archipelago, the Outer Islands as a whole, or Indonesia within Southeast Asia). Without trying to be exhaustive, eleven themes which have been subject of debate in Indonesian economic history are examined here (on other debates see also Houben 2002: 53-55; Lindblad 2002b: 145-152; Dick 2002: 191-193; Thee 2002: 242-243).

The indigenous economy and the dualist economy

Although western entrepreneurs had an advantage in technological know-how and the supply of investment capital during the late-colonial period, many regions of Indonesia had a traditionally strong and dynamic class of entrepreneurs (traders and peasants). Resilient in times of economic malaise, cunning in symbiosis with traders of other Asian nationalities (particularly Chinese), the Indonesian entrepreneur has been rehabilitated after the relatively disparaging manner in which he was often pictured in the pre-1945 literature. One of these early writers, J.H. Boeke, initiated a school of thought centering on the idea of 'economic dualism' (referring to a modern western sector and a stagnant eastern sector). As a consequence, the term 'dualism' was often used to indicate western superiority. From the 1960s onward such ideas have been replaced by a more objective analysis of the dualist economy, one less judgmental about the characteristics of economic development in the Asian sector. Some focused on technological dualism (such as B. Higgins), others on ethnic specialization in different branches of production (see also Lindblad 2002b: 148; Touwen 2001: 316-317).

The characteristics of Dutch imperialism

Another vigorous debate concerns the character of, and the motives for, Dutch colonial expansion. Dutch imperialism involved a rather complex mix of political, economic and military motives, which influenced decisions about colonial borders, the establishment of political control to exploit oil and other natural resources, and the prevention of local uprisings. Three imperialist phases can be distinguished (Lindblad 2002a: 95-99). The first phase of imperialist expansion ran from 1825 to 1870; during this phase, interference with economic matters outside Java increased slowly, but military intervention remained occasional. The second phase started with the outbreak of the Aceh War in 1873 and lasted until 1896; initiatives in trade and foreign investment taken by the colonial government and by private businessmen were accompanied by the extension of colonial (military) control in the regions concerned. The third and final phase, characterized by full-scale aggressive imperialism (often known as ‘pacification’), lasted from 1896 until 1907.

The impact of the cultivation system on the indigenous economy

The thesis of ‘agricultural involution’, advocated by Clifford Geertz (1963), states that a process of stagnation characterized the rural economy of Java in the nineteenth century. Extensive subsequent research has led to this view being generally discarded. Colonial economic growth was stimulated first by the Cultivation System and later by the promotion of private enterprise. Non-farm employment and purchasing power increased in the indigenous economy, although there was much regional inequality (Lindblad 2002a: 80; 2002b: 149-150).

Regional diversity in export-led economic expansion

The contrast between densely populated Java, long dominant in economic and political terms, and the large, sparsely populated Outer Islands is obvious. Among the Outer Islands we can distinguish between areas that were propelled forward by export trade, whether of Indonesian or European origin (examples are Palembang, East Sumatra and Southeast Kalimantan), and areas that lagged behind and only slowly reaped the fruits of the modernization taking place elsewhere (for example Benkulu, Timor and Maluku) (Touwen 2001).

The development of the colonial state and the role of Ethical Policy

Well into the second half of the nineteenth century, official Dutch policy was to abstain from interference in local affairs, and the scarce resources of the Dutch colonial administration were to be reserved for Java. When the Aceh War initiated a period of imperialist expansion and consolidation of colonial power, a call for more concern with indigenous affairs was heard in Dutch politics. The result was the Ethical Policy, launched in 1901, which had the threefold aim of improving indigenous welfare, expanding the educational system, and allowing for some indigenous participation in government (resulting in the People’s Council (Volksraad), installed in 1918 but with only an advisory role). The results of the Ethical Policy, as measured for example in improvements in agricultural technology, education, or welfare services, are still subject to debate (Lindblad 2002b: 149).

Living conditions of coolies at the agricultural estates

The plantation economy, which developed in the sparsely populated Outer Islands (predominantly in Sumatra) between 1870 and 1942, was in dire need of labor. The shortage was met by recruiting contract laborers (coolies) in China, and later in Java. The Coolie Ordinance was a government regulation that included the penal clause, which allowed plantation owners to punish their workers. In response to reported abuse, the colonial government established the Labor Inspectorate (1908), which aimed at preventing the abuse of coolies on the estates. The living circumstances and treatment of the coolies have been the subject of debate, particularly the question of whether the government put enough effort into protecting the workers’ interests or allowed abuse to persist (Lindblad 2002b: 150).

Colonial drain

How large a proportion of economic profits was drained away from the colony to the mother country? Both the detrimental effects of this drain of capital, received in return for European entrepreneurial initiative, and the exact methods of measuring it have been debated. There was also a second drain to the home countries of other immigrant ethnic groups, mainly China (Van der Eng 1998; Lindblad 2002b: 151).

The position of the Chinese in the Indonesian economy

In the colonial economy, the Chinese intermediary trader or middleman played a vital role in supplying credit and stimulating the cultivation of export crops such as rattan, rubber and copra. The colonial legal system made an explicit distinction between Europeans, Chinese and Indonesians. This formed the roots of later ethnic problems, since the Chinese minority in Indonesia has gained an important (and sometimes envied) position as capital owners and entrepreneurs. When threatened by political and social turmoil, Chinese business networks may sometimes have channeled capital into overseas deposits.

Economic chaos during the ‘Old Order’

The ‘Old Order’ period, 1945-1965, was characterized by economic (and political) chaos, although some economic growth undeniably took place during these years. Macroeconomic instability, a lack of foreign investment and structural rigidity were economic problems closely connected with the political power struggle. Sukarno, the first president of the Indonesian republic, had an outspoken dislike of colonialism, and his efforts to eliminate foreign economic control were not always supportive of the struggling economy of the new sovereign state. The ‘Old Order’ has long been a ‘lost area’ in Indonesian economic history, but the establishment of the unitary state and the settlement of major political issues, including some degree of territorial consolidation (as well as the consolidation of the role of the army), were essential for the development of a national economy (Dick 2002: 190; Mackie 1967).

Development policy and economic planning during the ‘New Order’ period

The ‘New Order’ (Orde Baru) of Soeharto rejected political mobilization and socialist ideology, and established a tightly controlled regime that discouraged intellectual enquiry but did put Indonesia’s economy back on the rails. New flows of foreign investment and foreign aid were attracted, unbridled population growth was curbed by family planning programs, and a transformation took place from a predominantly agricultural economy to an industrializing one. Thee Kian Wie distinguishes three phases within this period, each of which deserves further study:

(a) 1966-1973: stabilization, rehabilitation, partial liberalization and economic recovery;

(b) 1974-1982: oil booms, rapid economic growth, and increasing government intervention;

(c) 1983-1996: post-oil boom, deregulation, renewed liberalization (in reaction to falling oil prices), and rapid export-led growth. During this last phase, commentators (including academic economists) grew increasingly concerned about the corruption thriving at all levels of the government bureaucracy: KKN (korupsi, kolusi, nepotisme) practices, as they later became known (Thee 2002: 203-215).

Financial, economic and political crisis: KRISMON, KRISTAL

The financial crisis of 1997 started with a crisis of confidence following the depreciation of the Thai baht in July 1997. Core factors causing the ensuing economic crisis in Indonesia were the quasi-fixed exchange rate of the rupiah, quickly rising short-term foreign debt and the weak financial system. Its severity must also be attributed to political factors: the monetary crisis (KRISMON) became a total crisis (KRISTAL) because of the failing policy response of the Soeharto regime. Soeharto had been in power for 32 years, and his government, heavily centralized and corrupt, was unable to cope with the crisis in a credible manner. The origins, economic consequences, and socio-economic impact of the crisis are still under discussion (Thee 2002: 231-237; Arndt and Hill 1999).

(Note: I want to thank Dr. F. Colombijn and Dr. J.Th Lindblad at Leiden University for their useful comments on the draft version of this article.)

Selected Bibliography

In addition to the works cited in the text above, a small selection of recent books is listed here to allow the reader to grasp the most recent insights quickly and to find useful further references.

General textbooks or periodicals on Indonesia’s (economic) history:

Booth, Anne. The Indonesian Economy in the Nineteenth and Twentieth Centuries: A History of Missed Opportunities. London: Macmillan, 1998.

Bulletin of Indonesian Economic Studies.

Dick, H.W., V.J.H. Houben, J.Th. Lindblad and Thee Kian Wie. The Emergence of a National Economy in Indonesia, 1800-2000. Sydney: Allen & Unwin, 2002.

Itinerario “Economic Growth and Institutional Change in Indonesia in the 19th and 20th centuries” [special issue] 26 no. 3-4 (2002).

Reid, Anthony. Southeast Asia in the Age of Commerce, 1450-1680, Vol. I: The Lands below the Winds. New Haven: Yale University Press, 1988.

Reid, Anthony. Southeast Asia in the Age of Commerce, 1450-1680, Vol. II: Expansion and Crisis. New Haven: Yale University Press, 1993.

Ricklefs, M.C. A History of Modern Indonesia since ca. 1300. Basingstoke/London: Macmillan, 1993.

On the VOC:

Gaastra, F.S. De Geschiedenis van de VOC. Zutphen: Walburg Pers, 1991 (1st edition), 2002 (4th edition).

Jacobs, Els M. Koopman in Azië: de Handel van de Verenigde Oost-Indische Compagnie tijdens de 18de Eeuw. Zutphen: Walburg Pers, 2000.

Nagtegaal, Lucas. Riding the Dutch Tiger: The Dutch East Indies Company and the Northeast Coast of Java 1680-1743. Leiden: KITLV Press, 1996.

On the Cultivation System:

Elson, R.E. Village Java under the Cultivation System, 1830-1870. Sydney: Allen and Unwin, 1994.

Fasseur, C. Kultuurstelsel en Koloniale Baten. De Nederlandse Exploitatie van Java, 1840-1860. Leiden: Universitaire Pers, 1975. (Translated as: The Politics of Colonial Exploitation: Java, the Dutch and the Cultivation System. Ithaca, NY: Southeast Asia Program, Cornell University Press, 1992.)

Geertz, Clifford. Agricultural Involution: The Processes of Ecological Change in Indonesia. Berkeley: University of California Press, 1963.

Houben, V.J.H. “Java in the Nineteenth Century: Consolidation of a Territorial State.” In The Emergence of a National Economy in Indonesia, 1800-2000, edited by H.W. Dick, V.J.H. Houben, J.Th. Lindblad and Thee Kian Wie, 56-81. Sydney: Allen & Unwin, 2002.

On the Late-Colonial Period:

Dick, H.W. “Formation of the Nation-state, 1930s-1966.” In The Emergence of a National Economy in Indonesia, 1800-2000, edited by H.W. Dick, V.J.H. Houben, J.Th. Lindblad and Thee Kian Wie, 153-193. Sydney: Allen & Unwin, 2002.

Lembaran Sejarah, “Crisis and Continuity: Indonesian Economy in the Twentieth Century” [special issue] 3 no. 1 (2000).

Lindblad, J.Th., editor. New Challenges in the Modern Economic History of Indonesia. Leiden: PRIS, 1993. Translated as: Sejarah Ekonomi Modern Indonesia. Berbagai Tantangan Baru. Jakarta: LP3ES, 2002.

Lindblad, J.Th., editor. The Historical Foundations of a National Economy in Indonesia, 1890s-1990s. Amsterdam: North-Holland, 1996.

Lindblad, J.Th. “The Outer Islands in the Nineteenth Century: Contest for the Periphery.” In The Emergence of a National Economy in Indonesia, 1800-2000, edited by H.W. Dick, V.J.H. Houben, J.Th. Lindblad and Thee Kian Wie, 82-110. Sydney: Allen & Unwin, 2002a.

Lindblad, J.Th. “The Late Colonial State and Economic Expansion, 1900-1930s.” In The Emergence of a National Economy in Indonesia, 1800-2000, edited by H.W. Dick, V.J.H. Houben, J.Th. Lindblad and Thee Kian Wie, 111-152. Sydney: Allen & Unwin, 2002b.

Touwen, L.J. Extremes in the Archipelago: Trade and Economic Development in the Outer Islands of Indonesia, 1900‑1942. Leiden: KITLV Press, 2001.

Van der Eng, Pierre. “Exploring Exploitation: The Netherlands and Colonial Indonesia, 1870-1940.” Revista de Historia Económica 16 (1998): 291-321.

Zanden, J.L. van, and A. van Riel. Nederland, 1780-1914: Staat, instituties en economische ontwikkeling. Amsterdam: Balans, 2000. (On the Netherlands in the nineteenth century.)

Independent Indonesia:

Arndt, H.W. and Hal Hill, editors. Southeast Asia’s Economic Crisis: Origins, Lessons and the Way forward. Singapore: Institute of Southeast Asian Studies, 1999.

Cribb, R. and C. Brown. Modern Indonesia: A History since 1945. London/New York: Longman, 1995.

Feith, H. The Decline of Constitutional Democracy in Indonesia. Ithaca, New York: Cornell University Press, 1962.

Hill, Hal. The Indonesian Economy. Cambridge: Cambridge University Press, 2000. (This is the extended second edition of Hill, H., The Indonesian Economy since 1966. Southeast Asia’s Emerging Giant. Cambridge: Cambridge University Press, 1996.)

Hill, Hal, editor. Unity and Diversity: Regional Economic Development in Indonesia since 1970. Singapore: Oxford University Press, 1989.

Mackie, J.A.C. “The Indonesian Economy, 1950-1960.” In The Economy of Indonesia: Selected Readings, edited by B. Glassburner, 16-69. Ithaca, NY: Cornell University Press, 1967.

Robison, Richard. Indonesia: The Rise of Capital. Sydney: Allen and Unwin, 1986.

Thee Kian Wie. “The Soeharto Era and After: Stability, Development and Crisis, 1966-2000.” In The Emergence of a National Economy in Indonesia, 1800-2000, edited by H.W. Dick, V.J.H. Houben, J.Th. Lindblad and Thee Kian Wie, 194-243. Sydney: Allen & Unwin, 2002.

World Bank. The East Asian Miracle: Economic Growth and Public Policy. Oxford: World Bank /Oxford University Press, 1993.

On economic growth:

Booth, Anne. The Indonesian Economy in the Nineteenth and Twentieth Centuries: A History of Missed Opportunities. London: Macmillan, 1998.

Van der Eng, Pierre. “The Real Domestic Product of Indonesia, 1880-1989.” Explorations in Economic History 29 (1992): 343-373.

Van der Eng, Pierre. “Indonesia’s Growth Performance in the Twentieth Century.” In The Asian Economies in the Twentieth Century, edited by Angus Maddison, D.S. Prasada Rao and W. Shepherd, 143-179. Cheltenham: Edward Elgar, 2002.

Van der Eng, Pierre. “Indonesia’s Economy and Standard of Living in the Twentieth Century.” In Indonesia Today: Challenges of History, edited by G. Lloyd and S. Smith, 181-199. Singapore: Institute of Southeast Asian Studies, 2001.

Citation: Touwen, Jeroen. “The Economic History of Indonesia”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/the-economic-history-of-indonesia/

Hours of Work in U.S. History

Robert Whaples, Wake Forest University

In the 1800s, many Americans worked seventy hours or more per week and the length of the workweek became an important political issue. Since then the workweek’s length has decreased considerably. This article presents estimates of the length of the historical workweek in the U.S., describes the history of the shorter-hours “movement,” and examines the forces that drove the workweek’s decline over time.

Estimates of the Length of the Workweek

Measuring the length of the workweek (or workday or workyear) is a difficult task, full of ambiguities concerning what constitutes work and who is to be considered a worker. Estimating the length of the historical workweek is even more troublesome. Before the Civil War most Americans were employed in agriculture and most of these were self-employed. Like self-employed workers in other fields, they saw no reason to record the amount of time they spent working. Often the distinction between work time and leisure time was blurry. Therefore, estimates of the length of the typical workweek before the mid-1800s are very imprecise.

The Colonial Period

Based on the amount of work performed — for example, crops raised per worker — Carr (1992) concludes that in the seventeenth-century Chesapeake region, “for at least six months of the year, an eight to ten-hour day of hard labor was necessary.” This does not account for other required tasks, which probably took about three hours per day. This workday was considerably longer than for English laborers, who at the time probably averaged closer to six hours of heavy labor each day.

The Nineteenth Century

Some observers believe that most American workers adopted the practice of working from “first light to dark” — filling all their free hours with work — throughout the colonial period and into the nineteenth century. Others are skeptical of such claims and argue that work hours increased during the nineteenth century — especially its first half. Gallman (1975) calculates “changes in implicit hours of work per agricultural worker” and estimates that hours increased 11 to 18 percent from 1800 to 1850. Fogel and Engerman (1977) argue that agricultural hours in the North increased before the Civil War due to the shift into time-intensive dairy and livestock. Weiss and Craig (1993) find evidence suggesting that agricultural workers also increased their hours of work between 1860 and 1870. Finally, Margo (2000) estimates that “on an economy-wide basis, it is probable that annual hours of work rose over the (nineteenth) century, by around 10 percent.” He credits this rise to the shift out of agriculture, a decline in the seasonality of labor demand and reductions in annual periods of nonemployment. On the other hand, it is clear that working hours declined substantially for one important group. Ransom and Sutch (1977) and Ng and Virts (1989) estimate that annual labor hours per capita fell 26 to 35 percent among African-Americans with the end of slavery.

Manufacturing Hours before 1890

Our most reliable estimates of the workweek come from manufacturing, since most employers required that manufacturing workers remain at work during precisely specified hours. The Census of Manufactures began to collect this information in 1880 but earlier estimates are available. Much of what is known about average work hours in the nineteenth century comes from two surveys of manufacturing hours taken by the federal government. The first survey, known as the Weeks Report, was prepared by Joseph Weeks as part of the Census of 1880. The second was prepared in 1893 by Commissioner of Labor Carroll D. Wright, for the Senate Committee on Finance, chaired by Nelson Aldrich. It is commonly called the Aldrich Report. Both of these sources, however, have been criticized as flawed due to problems such as sample selection bias (firms whose records survived may not have been typical) and unrepresentative regional and industrial coverage. In addition, the two series differ in their estimates of the average length of the workweek by as much as four hours. These estimates are reported in Table 1. Despite the previously mentioned problems, it seems reasonable to accept two important conclusions based on these data — the length of the typical manufacturing workweek in the 1800s was very long by modern standards and it declined significantly between 1830 and 1890.

Table 1
Estimated Average Weekly Hours Worked in Manufacturing, 1830-1890

Year    Weeks Report    Aldrich Report
1830    69.1            n.a.
1840    67.1            68.4
1850    65.5            69.0
1860    62.0            66.0
1870    61.1            63.0
1880    60.7            61.8
1890    n.a.            60.0

Sources: U.S. Department of Interior (1883), U.S. Senate (1893)
Note: Atack and Bateman (1992), using data from census manuscripts, estimate average weekly hours to be 60.1 in 1880 — very close to Weeks’ contemporary estimate. They also find that the summer workweek was about 1.5 hours longer than the winter workweek.

Hours of Work during the Twentieth Century

Because of changing definitions and data sources there does not exist a consistent series of workweek estimates covering the entire twentieth century. Table 2 presents six sets of estimates of weekly hours. Despite differences among the series, there is a fairly consistent pattern, with weekly hours falling considerably during the first third of the century and much more slowly thereafter. In particular, hours fell strongly during the years surrounding World War I, so that by 1919 the eight-hour day (with six workdays per week) had been won. Hours fell sharply at the beginning of the Great Depression, especially in manufacturing, then rebounded somewhat and peaked during World War II. After World War II, the length of the workweek stabilized around forty hours. Owen’s nonstudent-male series shows little trend after World War II, but the other series show a slow, but steady, decline in the length of the average workweek. Greis’s two series are based on the average length of the workyear and adjust for paid vacations, holidays and other time-off. The last column is based on information reported by individuals in the decennial censuses and in the Current Population Survey of 1988. It may be the most accurate and representative series, as it is based entirely on the responses of individuals rather than employers.

Table 2
Estimated Average Weekly Hours Worked, 1900-1988

Year    Census of Manufacturing    Jones Manufacturing    Owen Nonstudent Males    Greis Manufacturing    Greis All Workers    Census/CPS All Workers
1900 59.6* 55.0 58.5
1904 57.9 53.6 57.1
1909 56.8 (57.3) 53.1 55.7
1914 55.1 (55.5) 50.1 54.0
1919 50.8 (51.2) 46.1 50.0
1924 51.1* 48.8 48.8
1929 50.6 48.0 48.7
1934 34.4 40.6
1940 37.6 42.5 43.3
1944 44.2 46.9
1947 39.2 42.4 43.4 44.7
1950 38.7 41.1 42.7
1953 38.6 41.5 43.2 44.0
1958 37.8* 40.9 42.0 43.4
1960 41.0 40.9
1963 41.6 43.2 43.2
1968 41.7 41.2 42.0
1970 41.1 40.3
1973 40.6 41.0
1978 41.3* 39.7 39.1
1980 39.8
1988 39.2

Sources: Whaples (1990a), Jones (1963), Owen (1976, 1988), and Greis (1984). The last column is based on the author’s calculations using Coleman and Pencavel’s data from Table 4 (below).
* = these estimates are from one year earlier than the year listed.
(The figures in parentheses in the first column are unofficial estimates but are probably more precise, as they better estimate the hours of workers in industries with very long workweeks.)

Hours in Other Industrial Sectors

Table 3 compares the length of the workweek in manufacturing to that in other industries for which information is available. (Unfortunately, data from the agricultural and service sectors are unavailable until late in this period.) The figures in Table 3 show that the length of the workweek was generally shorter in the other industries, sometimes considerably shorter. For example, in 1900 anthracite coal miners’ workweeks were about forty percent shorter than the average workweek among manufacturing workers. All of the series show an overall downward trend.

Table 3
Estimated Average Weekly Hours Worked, Other Industries

Year Manufacturing Construction Railroads Bituminous Coal Anthracite Coal
1850s about 66 about 66
1870s about 62 about 60
1890 60.0 51.3
1900 59.6 50.3 52.3 42.8 35.8
1910 57.3 45.2 51.5 38.9 43.3
1920 51.2 43.8 46.8 39.3 43.2
1930 50.6 42.9 33.3 37.0
1940 37.6 42.5 27.8 27.2
1955 38.5 37.1 32.4 31.4

Sources: Douglas (1930), Jones (1963), Licht (1983), and Tables 1 and 2.
Note: The manufacturing figures for the 1850s and 1870s are approximations based on averaging numbers from the Weeks and Aldrich reports from Table 1. The early estimates for the railroad industry are also approximations.

Recent Trends by Race and Gender

Some analysts, such as Schor (1992), have argued that the workweek increased substantially in the last half of the twentieth century. Few economists accept this conclusion, arguing that it is based on the use of faulty data (public opinion surveys) and unexplained methods of “correcting” more reliable sources. Schor’s conclusions are contradicted by numerous studies. Table 4 presents Coleman and Pencavel’s (1993a, 1993b) estimates of the average workweek of employed people, disaggregated by race and gender. For all four groups the average length of the workweek has dropped since 1950. Although median weekly hours were virtually constant for men, the upper tail of the hours distribution fell for those with little schooling and rose for the well-educated. Coleman and Pencavel also find that work hours declined for young and older men (especially black men), but changed little for white men in their prime working years. Women with relatively little schooling were working fewer hours in the 1980s than in 1940, while the reverse is true of well-educated women.

Table 4
Estimated Average Weekly Hours Worked, by Race and Gender, 1940-1988

Year White Men Black Men White Women Black Women
1940 44.1 44.5 40.6 42.2
1950 43.4 42.8 41.0 40.3
1960 43.3 40.4 36.8 34.7
1970 43.1 40.2 36.1 35.9
1980 42.9 39.6 35.9 36.5
1988 42.4 39.6 35.5 37.2

Source: Coleman and Pencavel (1993a, 1993b)

Broader Trends in Time Use, 1880 to 2040

In 1880 a typical male household head had very little leisure time — only about 1.8 hours per day over the course of a year. However, as Fogel’s (2000) estimates in Table 5 show, between 1880 and 1995 the amount of work per day fell nearly in half, allowing leisure time to more than triple. Because of the decline in the length of the workweek and the declining portion of a lifetime that is spent in paid work (due largely to lengthening periods of education and retirement) the fraction of the typical American’s lifetime devoted to work has become remarkably small. Based on these trends Fogel estimates that four decades from now less than one-fourth of our discretionary time (time not needed for sleep, meals, and hygiene) will be devoted to paid work — over three-fourths will be available for doing what we wish.

Table 5
Division of the Day for the Average Male Household Head over the Course of a Year, 1880 and 1995

Activity 1880 1995
Sleep 8 8
Meals and hygiene 2 2
Chores 2 2
Travel to and from work 1 1
Work 8.5 4.7
Illness .7 .5
Left over for leisure activities 1.8 5.8

Source: Fogel (2000)

Table 6
Estimated Trend in the Lifetime Distribution of Discretionary Time, 1880-2040

Activity 1880 1995 2040
Lifetime Discretionary Hours 225,900 298,500 321,900
Lifetime Work Hours 182,100 122,400 75,900
Lifetime Leisure Hours 43,800 176,100 246,000

Source: Fogel (2000)
Notes: Discretionary hours exclude hours used for sleep, meals and hygiene. Work hours include paid work, travel to and from work, and household chores.
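Fogel’s two tables are internally consistent: lifetime discretionary hours equal lifetime work plus lifetime leisure, and the projected 2040 work share comes to just under one-fourth. A minimal Python check, using only the figures in Table 6:

# Consistency check on Fogel's (2000) lifetime estimates in Table 6.
lifetime = {
    1880: {"work": 182_100, "leisure": 43_800},
    1995: {"work": 122_400, "leisure": 176_100},
    2040: {"work": 75_900, "leisure": 246_000},
}

for year, hours in lifetime.items():
    discretionary = hours["work"] + hours["leisure"]  # should match row 1 of Table 6
    share = hours["work"] / discretionary             # work share of discretionary time
    print(f"{year}: {discretionary:,} discretionary hours, work share {share:.1%}")
# The 2040 work share is about 23.6 percent -- "less than one-fourth," as
# Fogel projects.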

Postwar International Comparisons

While hours of work have decreased slowly in the U.S. since the end of World War II, they have decreased more rapidly in Western Europe. Greis (1984) calculates that annual hours worked per employee in the U.S. fell from 1,908 to 1,704 between 1950 and 1979, a 10.7 percent decrease. This compares with a 21.8 percent decrease across a group of twelve Western European countries, where the average fell from 2,170 hours to 1,698 hours over the same period. Perhaps the most precise way of measuring work hours is to have individuals fill out diaries on their day-to-day and hour-to-hour time use. Table 7 presents an international comparison of average work hours both inside and outside of the workplace, by adult men and women, averaging those who are employed with those who are not. (Juster and Stafford (1991) caution, however, that making these comparisons requires a good deal of guesswork.) These numbers show a significant drop in total work per week in the U.S. between 1965 and 1981. They also show that total work by men and women is very similar, although it is divided differently. Total work hours in the U.S. were fairly similar to those in Japan, greater than those in Denmark, and less than those in the USSR.
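The percentage declines quoted above follow directly from Greis’s annual-hours totals. A minimal Python check, using only the figures as cited:

# Percent declines in annual hours worked per employee, 1950-1979,
# from the Greis (1984) totals quoted in the text.
def percent_decline(start_hours, end_hours):
    """Percent decline from start_hours to end_hours."""
    return 100 * (start_hours - end_hours) / start_hours

print(f"U.S.: {percent_decline(1908, 1704):.1f}%")                          # ~10.7%
print(f"Twelve W. European countries: {percent_decline(2170, 1698):.1f}%")  # ~21.8%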

Table 7
Weekly Work Time in Four Countries, Based on Time Diaries, 1960s-1980s

Activity          US                          USSR (Pskov)
                  Men          Women          Men          Women
                  1965   1981  1965   1981    1965   1981  1965   1981
Total Work        63.1   57.8  60.9   54.4    64.4   65.7  75.3   66.3
Market Work       51.6   44.0  18.9   23.9    54.6   53.8  43.8   39.3
Commuting          4.8    3.5   1.6    2.0     4.9    5.2   3.7    3.4
Housework         11.5   13.8  41.8   30.5     9.8   11.9  31.5   27.0

Activity          Japan                       Denmark
                  Men          Women          Men          Women
                  1965   1985  1965   1985    1964   1987  1964   1987
Total Work        60.5   55.5  64.7   55.6    45.4   46.2  43.4   43.9
Market Work       57.7   52.0  33.2   24.6    41.7   33.4  13.3   20.8
Commuting          3.6    4.5   1.0    1.2    n.a.   n.a.  n.a.   n.a.
Housework          2.8    3.5  31.5   31.0     3.7   12.8  30.1   23.1

Source: Juster and Stafford (1991)

The Shorter Hours “Movement” in the U.S.

The Colonial Period

Captain John Smith, after mapping New England’s coast, came away convinced that three days’ work per week would satisfy any settler. Far from becoming a land of leisure, however, the abundant resources of British America and the ideology of its settlers brought forth high levels of work. Many colonial Americans held the opinion that prosperity could be taken as a sign of God’s pleasure with the individual, viewed work as inherently good and saw idleness as the devil’s workshop. Rodgers (1978) argues that this work ethic spread and eventually reigned supreme in colonial America. The ethic was consistent with the American experience, since high returns to effort meant that hard work often yielded significant increases in wealth. In Virginia, authorities also transplanted the Statute of Artificers, which obliged all Englishmen (except the gentry) to engage in productive activity from sunrise to sunset. Likewise, a 1670 Massachusetts law demanded a minimum ten-hour workday, but it is unlikely that these laws had any impact on the behavior of most free workers.

The Revolutionary War Period

Roediger and Foner (1989) contend that the Revolutionary War era brought a series of changes that undermined support for sun-to-sun work. The era’s republican ideology emphasized that workers needed free time, away from work, to participate in democracy. Simultaneously, the development of merchant capitalism meant that there were, for the first time, a significant number of wageworkers. Roediger and Foner argue that reducing labor costs was crucial to the profitability of these workers’ employers, who reduced costs by squeezing more work from their employees — reducing time for meals, drink and rest and sometimes even rigging the workplace’s official clock. Incensed by their employers’ practice of paying a flat daily wage during the long summer shift and resorting to piece rates during short winter days, Philadelphia’s carpenters mounted America’s first ten-hour-day strike in May 1791. (The strike was unsuccessful.)

1820s: The Shorter Hours Movement Begins

Changes in the organization of work, with the continued rise of merchant capitalists, the transition from the artisanal shop to the early factory, and an intensified work pace had become widespread by about 1825. These changes produced the first extensive, aggressive movement among workers for shorter hours, as the ten-hour movement blossomed in New York City, Philadelphia and Boston. Rallying around the ten-hour banner, workers formed the first city-central labor union in the U.S., the first labor newspaper, and the first workingmen’s political party — all in Philadelphia — in the late 1820s.

Early Debates over Shorter Hours

Although the length of the workday is largely an economic decision arrived at by the interaction of the supply and demand for labor, advocates of shorter hours and foes of shorter hours have often argued the issue on moral grounds. In the early 1800s, advocates argued that shorter work hours improved workers’ health, allowed them time for self-improvement and relieved unemployment. Detractors countered that workers would abuse leisure time (especially in saloons) and that long, dedicated hours of work were the path to success, which should not be blocked for the great number of ambitious workers.

1840s: Early Agitation for Government Intervention

When Samuel Slater built the first textile mills in the U.S., “workers labored from sun up to sun down in summer and during the darkness of both morning and evening in the winter. These hours … only attracted attention when they exceeded the common working day of twelve hours,” according to Ware (1931). During the 1830s, an increased work pace, tighter supervision, and the addition of about fifteen minutes to the work day (partly due to the introduction of artificial lighting during winter months), plus the growth of a core of more permanent industrial workers, fueled a campaign for a shorter workweek among mill workers in Lowell, Massachusetts, whose workweek averaged about 74 hours. This agitation was led by Sarah Bagley and the New England Female Labor Reform Association, which, beginning in 1845, petitioned the state legislature to intervene in the determination of hours. The petitions were followed by America’s first-ever examination of labor conditions by a governmental investigating committee. The Massachusetts legislature proved to be very unsympathetic to the workers’ demands, but similar complaints led to the passage of laws in New Hampshire (1847) and Pennsylvania (1848), declaring ten hours to be the legal length of the working day. However, these laws also specified that a contract freely entered into by employee and employer could set any length for the workweek. Hence, these laws had little impact. Legislation passed by the federal government had a more direct, though limited effect. On March 31, 1840, President Martin Van Buren issued an executive order mandating a ten-hour day for all federal employees engaged in manual work.

1860s: Grand Eight Hours Leagues

As the length of the workweek gradually declined, political agitation for shorter hours seems to have waned for the next two decades. However, immediately after the Civil War reductions in the length of the workweek reemerged as an important issue for organized labor. The new goal was an eight-hour day. Roediger (1986) argues that many of the new ideas about shorter hours grew out of the abolitionists’ critique of slavery — that long hours, like slavery, stunted aggregate demand in the economy. The leading proponent of this idea, Ira Steward, argued that decreasing the length of the workweek would raise the standard of living of workers by raising their desired consumption levels as their leisure expanded, and by ending unemployment. The hub of the newly launched movement was Boston, and Grand Eight Hours Leagues sprang up around the country in 1865 and 1866. The leaders of the movement called the meeting of the first national organization to unite workers of different trades, the National Labor Union, which met in Baltimore in 1867. In response to this movement, eight states adopted general eight-hour laws, but again the laws allowed employer and employee to mutually consent to workdays longer than the “legal day.” Many critics saw these laws and this agitation as a hoax, because few workers actually desired to work only eight hours per day at their original hourly pay rate. The passage of the state laws did foment action by workers — especially in Chicago where parades, a general strike, rioting and martial law ensued. In only a few places did work hours fall after the passage of these laws. Many became disillusioned with the idea of using the government to promote shorter hours, and by the late 1860s efforts to push for a universal eight-hour day had been put on the back burner.

The First Enforceable Hours Laws

Despite this lull in shorter-hours agitation, in 1874, Massachusetts passed the nation’s first enforceable ten-hour law. It covered only female workers and became fully effective by 1879. This legislation was fairly late by European standards. Britain had passed its first effective Factory Act, setting maximum hours for almost half of its very young textile workers, in 1833.

1886: Year of Dashed Hopes

In the early 1880s organized labor in the U.S. was fairly weak. In 1884, the short-lived Federation of Organized Trades and Labor Unions (FOTLU) fired a “shot in the dark.” During its final meeting, before dissolving, the Federation “ordained” May 1, 1886 as the date on which workers would cease working beyond eight hours per day. Meanwhile, the Knights of Labor, which had begun as a secret fraternal society and evolved into a labor union, began to gain strength. It appears that many nonunionized workers, especially the unskilled, came to see in the Knights a chance to obtain a better deal from their employers, perhaps even to obtain the eight-hour day. FOTLU’s call for workers to simply walk off the job after eight hours beginning on May 1, plus the activities of socialist and anarchist labor organizers and politicians, and the apparent strength of the Knights combined to attract members in record numbers. The Knights mushroomed and its new membership demanded that their local leaders support them in attaining the eight-hour day. Many smelled victory in the air — the movement to win the eight-hour day became frenzied and the goal became “almost a religious crusade” (Grob, 1961).

The Knights’ leader, Terence Powderly, thought that the push for a May 1 general strike for eight hours was “rash, short-sighted and lacking in system” and “must prove abortive” (Powderly, 1890). He offered no effective alternative plan but instead tried to block the mass action, issuing a “secret circular” condemning the use of strikes. Powderly reasoned that low incomes forced workmen to accept long hours. Workers didn’t want shorter hours unless their daily pay was maintained, but employers were unwilling and/or unable to offer this. Powderly’s rival, labor leader Samuel Gompers, agreed that “the movement of ’86 did not have the advantage of favorable conditions” (Gompers, 1925). Nelson (1986) points to divisions among workers, which probably had much to do with the failure in 1886 of the drive for the eight-hour day. Some insisted on eight hours with ten hours’ pay, but others were willing to accept eight hours with eight hours’ pay.

Haymarket Square Bombing

The eight-hour push of 1886 was, in Norman Ware’s words, “a flop” (Ware, 1929). Lack of will and organization among workers was undoubtedly important, but its collapse was aided by violence that marred strikes and political rallies in Chicago and Milwaukee. The 1886 drive for eight hours literally blew up in organized labor’s face. At Haymarket Square in Chicago an anarchist bomb killed seven policemen during an eight-hour rally, and in Milwaukee’s Bay View suburb nine strikers were killed as police tried to disperse roving pickets. The public backlash and fear of revolution damned the eight-hour organizers along with the radicals and dampened the drive toward eight hours — although it is estimated that the strikes of May 1886 shortened the workweek for about 200,000 industrial workers, especially in New York City and Cincinnati.

The AFL’s Strategy

After the demise of the Knights of Labor, the American Federation of Labor (AFL) became the strongest labor union in the U.S. It held shorter hours as a high priority. The inside cover of its Proceedings carried two slogans in large type: “Eight hours for work, eight hours for rest, eight hours for what we will” and “Whether you work by the piece or work by the day, decreasing the hours increases the pay.” (The latter slogan was coined by Ira Steward’s wife, Mary.) In the aftermath of 1886, the American Federation of Labor adopted a new strategy of selecting each year one industry in which it would attempt to win the eight-hour day, after laying solid plans, organizing, and building up a strike fund war chest by taxing nonstriking unions. The United Brotherhood of Carpenters and Joiners was selected first, and May 1, 1890 was set as a day of national strikes. It is estimated that nearly 100,000 workers gained the eight-hour day as a result of these strikes in 1890. However, other unions turned down the opportunity to follow the carpenters’ example and the tactic was abandoned. Instead, the length of the workweek continued to erode during this period, sometimes as the result of a successful local strike, more often as the result of broader economic forces.

The Spread of Hours Legislation

Massachusetts’ first hours law, passed in 1874, set sixty hours per week as the legal maximum for women; in 1892 this was cut to 58, in 1908 to 56, and in 1911 to 54. By 1900, 26 percent of states had maximum hours laws covering women, children and, in some cases, adult men (generally only those in hazardous industries). The percentage of states with maximum hours laws climbed to 58 percent in 1910, 76 percent in 1920, and 84 percent in 1930. Steinberg (1982) calculates that the percent of employees covered climbed from 4 percent nationally in 1900, to 7 percent in 1910, and 12 percent in 1920 and 1930. In addition, these laws became more restrictive, with the average legal standard falling from a maximum of 59.3 hours per week in 1900 to 56.7 in 1920. According to her calculations, in 1900 about 16 percent of the workers covered by these laws were adult men, 49 percent were adult women and the rest were minors.

Court Rulings

The banner years for maximum hours legislation were right around 1910. This may have been partly a reaction to the Supreme Court’s ruling upholding female-hours legislation in the Muller vs. Oregon case (1908). The Court’s rulings were not always completely consistent during this period, however. In 1898 the Court upheld a maximum eight-hour day for workmen in the hazardous industries of mining and smelting in Utah in Holden vs. Hardy. In Lochner vs. New York (1905), it rejected as unconstitutional New York’s ten-hour day for bakers, which was also adopted (at least nominally) out of concerns for safety. The defendant showed that mortality rates in baking were only slightly above average, and lower than those for many unregulated occupations, arguing that this was special interest legislation, designed to favor unionized bakers. Several state courts, on the other hand, supported laws regulating the hours of men in only marginally hazardous work. By 1917, in Bunting vs. Oregon, the Supreme Court seemingly overturned the logic of the Lochner decision, supporting a state law that required overtime payment for all men working long hours. The general presumption during this period was that the courts would allow regulation of labor concerning women and children, who were thought to be incapable of bargaining on an equal footing with employers and in special need of protection. Men were allowed freedom of contract unless it could be proven that regulating their hours served a higher good for the population at large.

New Arguments about Shorter Hours

During the first decades of the twentieth century, arguments favoring shorter hours moved away from Steward’s line that shorter hours increased pay and reduced unemployment to arguments that shorter hours were good for employers because they made workers more productive. A new cadre of social scientists began to offer evidence that long hours produced health-threatening, productivity-reducing fatigue. This line of reasoning, advanced in the court brief of Louis Brandeis and Josephine Goldmark, was crucial in the Supreme Court’s decision to support state regulation of women’s hours in Muller vs. Oregon. Goldmark’s book, Fatigue and Efficiency (1912), was a landmark. In addition, data relating to hours and output among British and American war workers during World War I helped convince some that long hours could be counterproductive. Businessmen, however, frequently attacked the shorter hours movement as merely a ploy to raise wages, since workers were generally willing to work overtime at higher wage rates.

Federal Legislation in the 1910s

In 1912 the Federal Public Works Act was passed, which provided that every contract to which the U.S. government was a party must contain an eight-hour day clause. Three years later LaFollette’s Bill established maximum hours for maritime workers. These were preludes to the most important shorter-hours law enacted by Congress during this period, the Adamson Act of 1916, which was passed to counter a threatened nationwide strike and granted rail workers the basic eight-hour day. (The law set eight hours as the basic workday and required higher overtime pay for longer hours.)

World War I and Its Aftermath

Labor markets became very tight during World War I as the demand for workers soared and the unemployment rate plunged. These forces put workers in a strong bargaining position, which they used to obtain shorter work schedules. The move to shorter hours was also pushed by the federal government, which gave unprecedented support to unionization. The federal government began to intervene in labor disputes for the first time, and the National War Labor Board “almost invariably awarded the basic eight-hour day when the question of hours was at issue” in labor disputes (Cahill, 1932). At the end of the war everyone wondered if organized labor would maintain its newfound power and the crucial test case was the steel industry. Blast furnace workers generally put in 84-hour workweeks. These abnormally long hours were the subject of much denunciation and a major issue in a strike that began in September 1919. The strike failed (and organized labor’s power receded during the 1920s), but four years later US Steel reduced its workday from twelve to eight hours. The move came after much arm-twisting by President Harding but its timing may be explained by immigration restrictions and the loss of immigrant workers who were willing to accept such long hours (Shiells, 1990).

The Move to a Five-day Workweek

During the 1920s agitation for shorter workdays largely disappeared, now that the workweek had fallen to about 50 hours. However, pressure arose to grant half-holidays on Saturday or Saturdays off, especially in industries whose workers were predominantly Jewish. By 1927 at least 262 large establishments had adopted the five-day week, while only 32 had it by 1920. The most notable action was Henry Ford’s decision to adopt the five-day week in 1926. Ford’s plants accounted for more than half of the nation’s approximately 400,000 workers with five-day weeks. However, Ford’s motives were questioned by many employers, who argued that productivity gains from reducing hours ceased beyond about forty-eight hours per week. Even the reformist American Labor Legislation Review greeted the call for a five-day workweek with lukewarm interest.

Changing Attitudes in the 1920s

Hunnicutt (1988) argues that during the 1920s businessmen and economists began to see shorter hours as a threat to future economic growth. With the development of advertising — the “gospel of consumption” — a new vision of progress was proposed to American workers. It replaced the goal of leisure time with a list of things to buy and business began to persuade workers that more work brought more tangible rewards. Many workers began to oppose further decreases in the length of the workweek. Hunnicutt concludes that a new work ethic arose as Americans threw off the psychology of scarcity for one of abundance.

Hours’ Reduction during the Great Depression

Then the Great Depression hit the American economy. By 1932 about half of American employers had shortened hours. Rather than slash workers’ real wages, employers opted to lay off many workers (the unemployment rate hit 25 percent) and tried to protect the ones they kept on by sharing work among them. President Hoover’s Commission for Work Sharing pushed voluntary hours reductions and estimated that they had saved three to five million jobs. Major employers like Sears, GM, and Standard Oil scaled down their workweeks, and Kellogg’s and the Akron tire industry pioneered the six-hour day. Amid these developments, the AFL called for a federally-mandated thirty-hour workweek.

The Black-Connery 30-Hours Bill and the NIRA

The movement for shorter hours as a depression-fighting work-sharing measure built such a seemingly irresistible momentum that by 1933 observers were predicting that the “30-hour week was within a month of becoming federal law” (Hunnicutt, 1988). During the period after the 1932 election but before Franklin Roosevelt’s inauguration, Congressional hearings on thirty hours began, and less than one month into FDR’s first term, the Senate passed, 53 to 30, a thirty-hour bill authored by Hugo Black. The bill was sponsored in the House by William Connery. Roosevelt originally supported the Black-Connery proposals, but soon backed off, uneasy with a provision forbidding importation of goods produced by workers whose weeks were longer than thirty hours, and convinced by arguments of business that trying to legislate fewer hours might have disastrous results. Instead, FDR backed the National Industrial Recovery Act (NIRA). Hunnicutt argues that an implicit deal was struck in the NIRA. Labor leaders were persuaded by NIRA Section 7a’s provisions — which guaranteed union organization and collective bargaining — to support the NIRA rather than the Black-Connery Thirty-Hour Bill. Business, with the threat of thirty hours hanging over its head, fell raggedly into line. (Most historians cite other factors as the key to the NIRA’s passage. See Barbara Alexander’s article on the NIRA in this encyclopedia.) When specific industry codes were drawn up by the NIRA-created National Recovery Administration (NRA), shorter hours were deemphasized. Despite a plan by NRA Administrator Hugh Johnson to make blanket provisions for a thirty-five hour workweek in all industry codes, by late August 1933, the momentum toward the thirty-hour week had dissipated. About half of employees covered by NRA codes had their hours set at forty per week and nearly 40 percent had workweeks longer than forty hours.

The FLSA: Federal Overtime Law

Hunnicutt argues that the entire New Deal can be seen as an attempt to keep shorter-hours advocates at bay. After the Supreme Court struck down the NRA, Roosevelt responded to continued demands for thirty hours with the Works Progress Administration, the Wagner Act, Social Security, and, finally, the Fair Labor Standards Act, which set a federal minimum wage and decreed that overtime beyond forty hours per week would be paid at one-and-a-half times the base rate in covered industries.

The Demise of the Shorter Hours’ Movement

As the Great Depression ended, average weekly work hours slowly climbed from their low reached in 1934. During World War II hours reached a level almost as high as at the end of World War I. With the postwar return of weekly work hours to the forty-hour level the shorter hours movement effectively ended. Occasionally organized labor’s leaders announced that they would renew the push for shorter hours, but they found that most workers didn’t desire a shorter workweek.

The Case of Kellogg’s

Offsetting isolated examples of hours reductions after World War II, there were noteworthy cases of backsliding. Hunnicutt (1996) has studied the case of Kellogg’s in great detail. In 1946, with the war over, 87% of women and 71% of men working at Kellogg’s voted to return to the six-hour day. Over the course of the next decade, however, the tide turned. By 1957 most departments had opted to switch to 8-hour shifts, so that only about one-quarter of the work force, mostly women, retained a six-hour shift. Finally, in 1985, the last department voted to adopt an 8-hour workday. Workers, especially male workers, began to favor additional money more than the extra two hours per day of free time. In interviews they explained that they needed the extra money to buy a wide range of consumer items and to keep up with the neighbors. Several men told about the friction that resulted when men spent too much time around the house: “The wives didn’t like the men underfoot all day.” “The wife always found something for me to do if I hung around.” “We got into a lot of fights.” During the 1950s, the threat of unemployment evaporated and the moral condemnation of being a “work hog” no longer made sense. In addition, the rise of quasi-fixed employment costs (such as health insurance) induced management to push workers toward a longer workday.

The Current Situation

As the twentieth century ended there was nothing resembling a shorter hours “movement.” The length of the workweek continues to fall for most groups — but at a glacial pace. Some Americans complain about a lack of free time but the vast majority seem content with an average workweek of roughly forty hours — channeling almost all of their growing wages into higher incomes rather than increased leisure time.

Causes of the Decline in the Length of the Workweek

Supply, Demand and Hours of Work

The length of the workweek, like other labor market outcomes, is determined by the interaction of the supply and demand for labor. Employers are torn by conflicting pressures. Holding everything else constant, they would like employees to work long hours because this means that they can utilize their equipment more fully and offset any fixed costs from hiring each worker (such as the cost of health insurance — common today, but not a consideration a century ago). On the other hand, longer hours can bring reduced productivity due to worker fatigue and can bring worker demands for higher hourly wages to compensate for putting in long hours. If employers set the workweek too long, workers may quit, and few will be willing to work for them at a competitive wage rate. Thus, workers implicitly choose among a variety of jobs — some offering shorter hours and lower earnings, others offering longer hours and higher earnings.

Economic Growth and the Long-Term Reduction of Work Hours

Historically employers and employees often agreed on very long workweeks because the economy was not very productive (by today’s standards) and people had to work long hours to earn enough money to feed, clothe and house their families. The long-term decline in the length of the workweek, in this view, has primarily been due to increased economic productivity, which has yielded higher wages for workers. Workers responded to this rise in potential income by “buying” more leisure time, as well as by buying more goods and services. In a recent survey, a sizeable majority of economic historians agreed with this view. Over eighty percent accepted the proposition that “the reduction in the length of the workweek in American manufacturing before the Great Depression was primarily due to economic growth and the increased wages it brought” (Whaples, 1995). Other broad forces probably played only a secondary role. For example, roughly two-thirds of economic historians surveyed rejected the proposition that the efforts of labor unions were the primary cause of the drop in work hours before the Great Depression.
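One way to see the logic is a stylized labor-leisure model with a subsistence consumption requirement; this is an illustrative textbook sketch with hypothetical parameters, not a model taken from the survey. When wages are low, covering necessities absorbs most of the week; as wages rise, the required hours shrink and workers take part of the gain as leisure.

# Stylized labor-leisure choice with subsistence consumption (illustrative only;
# all parameter values are hypothetical).
# Utility: U = (c - c0)^a * (T - h)^(1 - a), with consumption c = w * h.
# The interior optimum has the closed form h* = a*T + (1 - a)*c0/w,
# so chosen hours fall toward a*T as the real wage w rises.

T = 100.0   # waking hours per week available for work or leisure (assumption)
a = 0.38    # relative taste for consumption over leisure (assumption)
c0 = 20.0   # weekly subsistence consumption, in units of goods (assumption)

for w in (1.0, 2.0, 4.0, 8.0):          # rising real hourly wage
    h_star = a * T + (1 - a) * c0 / w   # optimal weekly hours
    print(f"wage {w:4.1f}: chosen workweek of about {h_star:.1f} hours")
# The workweek falls from roughly 50 hours to roughly 40 as the wage rises
# eightfold, echoing the long-run pattern described above.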

Winning the Eight-Hour Day in the Era of World War I

The swift reduction of the workweek in the period around World War I has been extensively analyzed by Whaples (1990b). His findings support the consensus that economic growth was the key to reduced work hours. Whaples links factors such as wages, labor legislation, union power, ethnicity, city size, leisure opportunities, age structure, wealth and homeownership, health, education, alternative employment opportunities, industrial concentration, seasonality of employment, and technological considerations to changes in the average workweek in 274 cities and 118 industries. He finds that the rapid economic expansion of the World War I period, which pushed up real wages by more than 18 percent between 1914 and 1919, explains about half of the drop in the length of the workweek. The reduction of immigration during the war was important, as it deprived employers of a group of workers who were willing to put in long hours, explaining about one-fifth of the hours decline. The rapid electrification of manufacturing seems also to have played an important role in reducing the workweek. Increased unionization explains about one-seventh of the reduction, and federal and state legislation and policies that mandated reduced workweeks also had a noticeable role.
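Treating the quoted shares as point estimates, a rough tally (illustrative arithmetic, not Whaples’s own accounting) suggests how much of the decline the named factors cover:

# Rough tally of the WWI-era workweek decline, using the shares quoted above.
shares = {
    "real wage growth": 1 / 2,        # "about half"
    "reduced immigration": 1 / 5,     # "about one-fifth"
    "increased unionization": 1 / 7,  # "about one-seventh"
}
named = sum(shares.values())
print(f"named factors: {named:.0%}")                                        # about 84%
print(f"residual (electrification, legislation, other): {1 - named:.0%}")   # about 16%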

Cross-sectional Patterns from 1919

In 1919 the average workweek varied tremendously, emphasizing the point that not all workers desired the same workweek. The workweek exceeded 69 hours in the iron blast furnace, cottonseed oil, and sugar beet industries, but fell below 45 hours in industries such as hats and caps, fur goods, and women’s clothing. Cities’ averages also differed dramatically. In a few Midwestern steel mill towns average workweeks exceeded 60 hours. In a wide range of low-wage Southern cities they reached the high 50s, but in high-wage Western ports, like Seattle, the workweek fell below 45 hours.

Whaples (1990a) finds that among the most important city-level determinants of the workweek during this period were the availability of a pool of agricultural workers, the capital-labor ratio, horsepower per worker, and the amount of employment in large establishments. Hours rose as each of these increased. Eastern European immigrants worked significantly longer than others, as did people in industries whose output varied considerably from season to season. High unionization and strike levels reduced hours to a small degree. The average female employee worked about six and a half fewer hours per week in 1919 than did the average male employee. In city-level comparisons, state maximum hours laws appear to have had little effect on average work hours, once the influences of other factors have been taken into account. One possibility is that these laws were passed only after economic forces lowered the length of the workweek. Overall, in cities where wages were one percent higher, hours were about 0.05 to 0.13 percent lower. Again, this suggests that during the era of declining hours, workers were willing to use higher wages to “buy” shorter hours.

Annotated Bibliography

Perhaps the most comprehensive survey of the shorter hours movement in the U.S. is David Roediger and Philip Foner’s Our Own Time: A History of American Labor and the Working Day (1989). It contends that “the length of the working day has been the central issue for the American labor movement during its most vigorous periods of activity, uniting workers along lines of craft, gender, and ethnicity.” Critics argue that its central premise is flawed because workers have often been divided about the optimal length of the workweek. It explains the point of view of organized labor and recounts numerous historically important events and arguments, but does not attempt to examine in detail the broader economic forces that determined the length of the workweek. An earlier useful comprehensive work is Marion Cahill’s Shorter Hours: A Study of the Movement since the Civil War (1932).

Benjamin Hunnicutt’s Work Without End: Abandoning Shorter Hours for the Right to Work (1988) focuses on the period from 1920 to 1940 and traces the political, intellectual, and social “dialogues” that changed the American concept of progress from dreams of more leisure to an “obsession” with the importance of work and wage-earning. This work’s detailed analysis and insights are valuable, but it draws many of its inferences from what intellectuals said about shorter hours, rather than spending time on the actual decision makers — workers and employers. Hunnicutt’s Kellogg’s Six-Hour Day (1996) is important because it does exactly this — interviewing employees and examining the motives and decisions of a prominent employer. Unfortunately, it shows that one must carefully interpret what workers say on the subject, as they are prone to reinterpret their own pasts so that their choices can be more readily rationalized. (See EH.NET’s review: http://eh.net/book_reviews/kelloggs-six-hour-day/.)

Economists have given surprisingly little attention to the determinants of the workweek. The most comprehensive treatment is Robert Whaples’ “The Shortening of the American Work Week” (1990), which surveys estimates of the length of the workweek, the shorter hours movement, and economic theories about the length of the workweek. Its core is an extensive statistical examination of the determinants of the workweek in the period around World War I.

References

Atack, Jeremy and Fred Bateman. “How Long Was the Workday in 1880?” Journal of Economic History 52, no. 1 (1992): 129-160.

Cahill, Marion Cotter. Shorter Hours: A Study of the Movement since the Civil War. New York: Columbia University Press, 1932.

Carr, Lois Green. “Emigration and the Standard of Living: The Seventeenth Century Chesapeake.” Journal of Economic History 52, no. 2 (1992): 271-291.

Coleman, Mary T. and John Pencavel. “Changes in Work Hours of Male Employees, 1940-1988.” Industrial and Labor Relations Review 46, no. 2 (1993a): 262-283.

Coleman, Mary T. and John Pencavel. “Trends in Market Work Behavior of Women since 1940.” Industrial and Labor Relations Review 46, no. 4 (1993b): 653-676.

Douglas, Paul. Real Wages in the United States, 1890-1926. Boston: Houghton, 1930.

Fogel, Robert. The Fourth Great Awakening and the Future of Egalitarianism. Chicago: University of Chicago Press, 2000.

Fogel, Robert and Stanley Engerman. Time on the Cross: The Economics of American Negro Slavery. Boston: Little, Brown, 1974.

Gallman, Robert. “The Agricultural Sector and the Pace of Economic Growth: U.S. Experience in the Nineteenth Century.” In Essays in Nineteenth-Century Economic History: The Old Northwest, edited by David Klingaman and Richard Vedder. Athens, OH: Ohio University Press, 1975.

Goldmark, Josephine. Fatigue and Efficiency. New York: Charities Publication Committee, 1912.

Gompers, Samuel. Seventy Years of Life and Labor: An Autobiography. New York: Dutton, 1925.

Greis, Theresa Diss. The Decline of Annual Hours Worked in the United States, since 1947. Manpower and Human Resources Studies, no. 10, Wharton School, University of Pennsylvania, 1984.

Grob, Gerald. Workers and Utopia: A Study of Ideological Conflict in the American Labor Movement, 1865-1900. Evanston: Northwestern University Press, 1961.

Hunnicutt, Benjamin Kline. Work Without End: Abandoning Shorter Hours for the Right to Work. Philadelphia: Temple University Press, 1988.

Hunnicutt, Benjamin Kline. Kellogg’s Six-Hour Day. Philadelphia: Temple University Press, 1996.

Jones, Ethel. “New Estimates of Hours of Work per Week and Hourly Earnings, 1900-1957.” Review of Economics and Statistics 45, no. 4 (1963): 374-385.

Juster, F. Thomas and Frank P. Stafford. “The Allocation of Time: Empirical Findings, Behavioral Models, and Problems of Measurement.” Journal of Economic Literature 29, no. 2 (1991): 471-522.

Licht, Walter. Working for the Railroad: The Organization of Work in the Nineteenth Century. Princeton: Princeton University Press, 1983.

Margo, Robert. “The Labor Force in the Nineteenth Century.” In The Cambridge Economic History of the United States, Volume II, The Long Nineteenth Century, edited by Stanley Engerman and Robert Gallman, 207-243. New York: Cambridge University Press, 2000.

Nelson, Bruce. “‘We Can’t Get Them to Do Aggressive Work’: Chicago’s Anarchists and the Eight-Hour Movement.” International Labor and Working Class History 29 (1986).

Ng, Kenneth and Nancy Virts. “The Value of Freedom.” Journal of Economic History 49, no. 4 (1989): 958-965.

Owen, John. “Workweeks and Leisure: An Analysis of Trends, 1948-1975.” Monthly Labor Review 99 (1976).

Owen, John. “Work-time Reduction in the United States and Western Europe.” Monthly Labor Review 111 (1988).

Powderly, Terence. Thirty Years of Labor, 1859-1889. Columbus: Excelsior, 1890.

Ransom, Roger and Richard Sutch. One Kind of Freedom: The Economic Consequences of Emancipation. New York: Cambridge University Press, 1977.

Rodgers, Daniel. The Work Ethic in Industrial America, 1850-1920. Chicago: University of Chicago Press, 1978.

Roediger, David. “Ira Steward and the Antislavery Origins of American Eight-Hour Theory.” Labor History 27 (1986).

Roediger, David and Philip Foner. Our Own Time: A History of American Labor and the Working Day. New York: Verso, 1989.

Schor, Juliet B. The Overworked American: The Unexpected Decline in Leisure. New York: Basic Books, 1992.

Shiells, Martha Ellen. “Collective Choice of Working Conditions: Hours in British and U.S. Iron and Steel, 1890-1923.” Journal of Economic History 50, no. 2 (1990): 379-392.

Steinberg, Ronnie. Wages and Hours: Labor and Reform in Twentieth-Century America. New Brunswick, NJ: Rutgers University Press, 1982.

United States, Department of Interior, Census Office. Report on the Statistics of Wages in Manufacturing Industries, by Joseph Weeks, 1880 Census, Vol. 20. Washington: GPO, 1883.

United States Senate. Senate Report 1394, Fifty-Second Congress, Second Session. “Wholesale Prices, Wages, and Transportation.” Washington: GPO, 1893.

Ware, Caroline. The Early New England Cotton Manufacture: A Study of Industrial Beginnings. Boston: Houghton-Mifflin, 1931.

Ware, Norman. The Labor Movement in the United States, 1860-1895. New York: Appleton, 1929.

Weiss, Thomas and Lee Craig. “Agricultural Productivity Growth during the Decade of the Civil War.” Journal of Economic History 53, no. 3 (1993): 527-548.

Whaples, Robert. “The Shortening of the American Work Week: An Economic and Historical Analysis of Its Context, Causes, and Consequences.” Ph.D. dissertation, University of Pennsylvania, 1990a.

Whaples, Robert. “Winning the Eight-Hour Day, 1909-1919.” Journal of Economic History 50, no. 2 (1990b): 393-406.

Whaples, Robert. “Where Is There Consensus Among American Economic Historians? The Results of a Survey on Forty Propositions.” Journal of Economic History 55, no. 1 (1995): 139-154.

Citation: Whaples, Robert. “Hours of Work in U.S. History”. EH.Net Encyclopedia, edited by Robert Whaples. August 14, 2001. URL http://eh.net/encyclopedia/hours-of-work-in-u-s-history/

Economic History of Hong Kong

Catherine R. Schenk, University of Glasgow

Hong Kong’s economic and political history has been primarily determined by its geographical location. The territory of Hong Kong comprises two main islands (Hong Kong Island and Lantau Island) and a mainland hinterland. It thus forms a natural geographic port for Guangdong province in southeast China. In a sense, there is considerable continuity in Hong Kong’s position in the international economy, since its origins were as a commercial entrepot for China’s regional and global trade, and this is still a role it plays today. From a relatively unpopulated territory at the beginning of the nineteenth century, Hong Kong grew to become one of the most important international financial centers in the world. Hong Kong also underwent a rapid and successful process of industrialization from the 1950s that captured the imagination of economists and historians in the 1980s and 1990s.

Hong Kong from 1842 to 1949

After being ceded by China to the British under the Treaty of Nanking in 1842, the colony of Hong Kong quickly became a regional center for financial and commercial services based particularly around the Hongkong and Shanghai Bank and merchant companies such as Jardine Matheson. In 1841 there were only 7,500 Chinese inhabitants of Hong Kong and a handful of foreigners, but by 1859 the Chinese community numbered over 85,000, supplemented by about 1,600 foreigners. The economy was closely linked to commercial activity, dominated by shipping, banking and merchant companies. Gradually there was increasing diversification into services and retail outlets to meet the needs of the local population, as well as shipbuilding and maintenance linked to the presence of British naval and merchant shipping. There was some industrial expansion in the nineteenth century, notably sugar refining and cement and ice factories in the foreign sector, alongside smaller-scale local workshop manufactures. The mainland territory of Hong Kong was ceded to British rule by two further treaties in this period: Kowloon in 1860 and the New Territories in 1898.

Hong Kong was profoundly affected by the disastrous events in Mainland China in the inter-war period. After the overthrow of the dynastic system in 1911, the Kuomintang (KMT) took a decade to pull together a republican nation-state. The Great Depression and fluctuations in the international price of silver then disrupted China’s economic relations with the rest of the world in the 1930s. From 1937, China descended into the Sino-Japanese War. Two years after the end of World War II, the civil war between the KMT and Chinese Communist Party pushed China into a downward economic spiral. During this period, Hong Kong suffered from the slowdown in world trade and in China’s trade in particular. However, problems on the mainland also diverted business and entrepreneurs from Shanghai and other cities to the relative safety and stability of the British colonial port of Hong Kong.

Post-War Industrialization

After the establishment of the People’s Republic of China (PRC) in 1949, the mainland began a process of isolation from the international economy, partly for ideological reasons and partly because of Cold War embargos on trade imposed first by the United States in 1949 and then by the United Nations in 1951. Nevertheless, Hong Kong was vital to the international economic links that the PRC continued in order to pursue industrialization and support grain imports. Even during the period of self-sufficiency in the 1960s, Hong Kong’s imports of food and water from the PRC were a vital source of foreign exchange revenue that ensured Hong Kong’s usefulness to the mainland. In turn, cheap food helped to restrain rises in the cost of living in Hong Kong thus helping to keep wages low during the period of labor-intensive industrialization.

The industrialization of Hong Kong is usually dated from the embargoes of the 1950s. Certainly, Hong Kong’s prosperity could no longer depend on the China trade in this decade. However, as seen above, industry emerged in the nineteenth century and it began to expand in the interwar period. Nevertheless, industrialization accelerated after 1945 with the inflow of refugees, entrepreneurs and capital fleeing the civil war on the mainland. The most prominent example is immigrants from Shanghai who created the cotton spinning industry in the colony. Hong Kong’s industry was founded in the textile sector in the 1950s before gradually diversifying in the 1960s to clothing, electronics, plastics and other labor-intensive production mainly for export.

The economic development of Hong Kong is unusual in a variety of respects. First, industrialization was accompanied by increasing numbers of small and medium-sized enterprises (SME) rather than consolidation. In 1955, 91 percent of manufacturing establishments employed fewer than one hundred workers, a proportion that increased to 96.5 percent by 1975. Factories employing fewer than one hundred workers accounted for 42 percent of Hong Kong’s domestic exports to the U.K. in 1968, amounting to HK$1.2 billion. At the end of 2002, SMEs still amounted to 98 percent of enterprises, providing 60 percent of total private employment.

Second, until the late 1960s, the government did not engage in active industrial planning. This was partly because the government was preoccupied with social spending on housing large flows of immigrants, and partly because of an ideological sympathy for free market forces. This means that Hong Kong fits outside the usual models of Asian economic development based on state-led industrialization (Japan, South Korea, Singapore, Taiwan), domination by foreign firms (Singapore), or large firms with close relations to the state (Japan, South Korea). Low taxes, lax employment laws, absence of government debt, and free trade are all pillars of the Hong Kong experience of economic development.

In fact, of course, the reality was very different from the myth of complete laissez-faire. The government’s programs of public housing, land reclamation, and infrastructure investment were ambitious. New industrial towns were built to house immigrants, provide employment and aid industry. The government subsidized industry indirectly through this public housing, which restrained rises in the cost of living that would have threatened Hong Kong’s labor-cost advantage in manufacturing. The government also pursued an ambitious public education program, creating over 300,000 new primary school places between 1954 and 1961. By 1966, 99.8% of school-age children were attending primary school, although free universal primary school was not provided until 1971. Secondary school provision was expanded in the 1970s, and from 1978 the government offered compulsory free education for all children up to the age of 15. The hand of government was much lighter on international trade and finance. Exchange controls were limited to a few imposed by the U.K., and there were no controls on international flows of capital. Government expenditure even fell from 7.5% of GDP in the 1960s to 6.5% in the 1970s. In the same decades, British government spending as a percent of GDP rose from 17% to 20%.

From the mid-1950s Hong Kong’s rapid success as a textile and garment exporter generated trade friction that resulted in voluntary export restraints in a series of treaties with the U.K. beginning in 1959. Despite these agreements, Hong Kong’s exporters continued to exploit their flexibility and adaptability to increase production and find new markets. Indeed, exports increased from 54% of GDP in the 1960s to 64% in the 1970s. Figure 1 shows the annual percentage change in real GDP per capita. In the period from 1962 until the onset of the oil crisis in 1973, the average growth rate was 6.5% per year. From 1976 to 1996 GDP grew at an average of 5.6% per year. There were negative shocks in 1967-68 as a result of local disturbances from the onset of the Cultural Revolution in the PRC, and again in 1973 to 1975 from the global oil crisis. In the early 1980s there was another negative shock related to politics, as the terms of Hong Kong’s return to PRC control in 1997 were formalized.

Figure 1: Annual percentage change of per capita GDP, 1962-2001

Reintegration with China, 1978-1997

The Open Door Policy of the PRC announced by Deng Xiao-ping at the end of 1978 marked a new era for Hong Kong’s economy. With the newly vigorous engagement of China in international trade and investment, Hong Kong’s integration with the mainland accelerated as it regained its traditional role as that country’s main provider of commercial and financial services. From 1978 to 1997, visible trade between Hong Kong and the PRC grew at an average rate of 28% per annum. At the same time, Hong Kong firms began to move their labor-intensive activities to the mainland to take advantage of cheaper labor. The integration of Hong Kong with the Pearl River delta in Guangdong is the most striking aspect of these trade and investment links. At the end of 1997, the cumulative value of Hong Kong’s direct investment in Guangdong was estimated at US$48 billion, accounting for almost 80% of the total foreign direct investment there. Hong Kong companies and joint ventures in Guangdong province employed about five million people. Most of these businesses were labor-intensive assembly for export, but from 1997 onward there has been increased investment in financial services, tourism and retail trade.

While manufacturing was moved out of the colony during the 1980s and 1990s, there was a surge in the service sector. This transformation of the structure of Hong Kong’s economy from manufacturing to services was dramatic. Most remarkably it was accomplished without faltering growth rates overall, and with an average unemployment rate of only 2.5% from 1982 to 1997. Figure 2 shows that the value of manufacturing peaked in 1992 before beginning an absolute decline. In contrast, the value of commercial and financial services soared. This is reflected in the contribution of services and manufacturing to GDP shown in Figure 3. Employment in the service sector rose from 52% to 80% of the labor force from 1981 to 2000 while manufacturing employment fell from 39% to 10% in the same period.

Figure 2: GDP by economic activity at current prices
Figure 3: Contribution to Hong Kong's GDP at factor prices

Asian Financial Crisis, 1997-2002

The terms for the return of Hong Kong to Chinese rule in July 1997 carefully protected the territory’s separate economic characteristics, which have been so beneficial to the Chinese economy. Under the Basic Law, a “one country-two systems” policy was formulated that left Hong Kong monetarily and economically separate from the mainland, with exchange and trade controls remaining in place as well as restrictions on the movement of people. Hong Kong was hit hard by the Asian Financial Crisis that struck the region in mid-1997, just at the time of the handover of the colony back to Chinese administrative control. The crisis prompted a collapse in share prices and the property market that affected the ability of many borrowers to repay bank loans. Unlike most Asian countries, the Hong Kong Special Administrative Region and mainland China maintained their currencies’ exchange rates with the U.S. dollar rather than devaluing. Along with the Severe Acute Respiratory Syndrome (SARS) threat in 2003, the Asian Financial Crisis pushed Hong Kong into a new era of recession with a rise in unemployment (6% on average from 1998-2003) and absolute declines in output and prices. The longer-term impact of the crisis has been to increase the intensity and importance of Hong Kong’s trade and investment links with the PRC. Since the PRC did not fare as badly from the regional crisis, the economic prospects for Hong Kong have been tied more closely to the increasingly prosperous mainland.

Suggestions for Further Reading

For a general history of Hong Kong from the nineteenth century, see S. Tsang, A Modern History of Hong Kong, London: IB Tauris, 2004. For accounts of Hong Kong’s economic history see, D.R. Meyer, Hong Kong as a Global Metropolis, Cambridge: Cambridge University Press, 2000; C.R. Schenk, Hong Kong as an International Financial Centre: Emergence and Development, 1945-65, London: Routledge, 2001; and Y-P Ho, Trade, Industrial Restructuring and Development in Hong Kong, London: Macmillan, 1992. Useful statistics and summaries of recent developments are available on the website of the Hong Kong Monetary Authority www.info.gov.hk/hkma.

Citation: Schenk, Catherine. “Economic History of Hong Kong”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/economic-history-of-hong-kong/

A History of Futures Trading in the United States

Joseph Santos, South Dakota State University

Many contemporary [nineteenth century] critics were suspicious of a form of business in which one man sold what he did not own to another who did not want it… Morton Rothstein (1966)

Anatomy of a Futures Market

The Futures Contract

A futures contract is a standardized agreement between a buyer and a seller to exchange an amount and grade of an item at a specific price and future date. The item or underlying asset may be an agricultural commodity, a metal, mineral or energy commodity, a financial instrument or a foreign currency. Because futures contracts are derived from these underlying assets, they belong to a family of financial instruments called derivatives.

Traders buy and sell futures contracts on an exchange – a marketplace that is operated by a voluntary association of members. The exchange provides buyers and sellers the infrastructure (trading pits or their electronic equivalent), legal framework (trading rules, arbitration mechanisms), contract specifications (grades, standards, time and method of delivery, terms of payment) and clearing mechanisms (see section titled The Clearinghouse) necessary to facilitate futures trading. Only exchange members are allowed to trade on the exchange. Nonmembers trade through commission merchants – exchange members who service nonmember trades and accounts for a fee.

The September 2004 light sweet crude oil contract is an example of a petroleum (mineral) future. It trades on the New York Mercantile Exchange (NYM). The contract is standardized – every one is an agreement to trade 1,000 barrels of grade light sweet crude in September, on a day of the seller’s choosing. As of May 25, 2004 the contract sold for $40,120 = $40.12 × 1,000 barrels.

The Clearinghouse

The clearinghouse is the counterparty to every trade – its members buy every contract that traders sell on the exchange and sell every contract that traders buy on the exchange. Absent a clearinghouse, traders would interact directly, and this would introduce two problems. First, traders’ concerns about their counterparty’s credibility would impede trading. For example, Trader A might refuse to sell to Trader B, who is supposedly untrustworthy.

Second, traders would lose track of their counterparties. This would occur because traders typically settle their contractual obligations by offset – traders buy/sell the contracts that they sold/bought earlier. For example, Trader A sells a contract to Trader B, who sells a contract to Trader C to offset her position, and so on.

The clearinghouse eliminates both of these problems. First, it is a guarantor of all trades. If a trader defaults on a futures contract, the clearinghouse absorbs the loss. Second, clearinghouse members, and not outside traders, reconcile offsets at the end of trading each day. Margin accounts and a process called marking-to-market all but assure the clearinghouse’s solvency.

A margin account is a balance that a trader maintains with a commission merchant in order to offset the trader’s daily unrealized losses in the futures markets. Commission merchants also maintain margins with clearinghouse members, who maintain them with the clearinghouse. The margin account begins as an initial lump sum deposit, or original margin.

To understand the mechanics and merits of marking-to-market, consider that the values of the long and short positions of an existing futures contract change daily, even though futures trading is a zero-sum game – a buyer’s gain/loss equals a seller’s loss/gain. So, the clearinghouse breaks even on every trade, while its individual members’ positions change in value daily.

With this in mind, suppose Trader B buys a 5,000 bushel soybean contract for $9.70 per bushel from Trader S. Technically, Trader B buys the contract from Clearinghouse Member S and Trader S sells the contract to Clearinghouse Member B. Now, suppose that at the end of the day the contract is priced at $9.71. That evening the clearinghouse marks-to-market each member’s account. That is to say, the clearinghouse credits Member B’s margin account $50 and debits Member S’s margin account the same amount.

Member B is now in a position to draw $50 from the clearinghouse, while Member S must pay the clearinghouse a $50 variation margin – incremental margin equal to the difference between a contract’s price and its current market value. In turn, clearinghouse members debit and credit the margin accounts of their commission merchants accordingly, who do the same to the margin accounts of their clients (i.e., traders). This iterative process all but assures the clearinghouse a sound financial footing. In the unlikely event that a trader defaults, the clearinghouse closes out the position and loses, at most, the trader’s one-day loss.
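
The arithmetic of marking-to-market is easy to sketch in code. The snippet below, a minimal sketch in Python, re-creates the soybean example just given; the settlement prices after the first day and all function and variable names are illustrative assumptions, not anything from the historical record.

```python
# A minimal sketch of daily marking-to-market, using the 5,000-bushel
# soybean example from the text. Everything beyond the first settlement
# price is an illustrative assumption.

CONTRACT_SIZE = 5_000  # bushels per contract

def variation_margin(prev_price, settle_price, size=CONTRACT_SIZE):
    """Daily credit (+) or debit (-) to the long's margin account; the
    short is debited or credited the same amount, so the clearinghouse
    nets to zero on every contract."""
    return (settle_price - prev_price) * size

trade_price = 9.70                 # Trader B longs, Trader S shorts
settlements = [9.71, 9.68, 9.74]   # assumed daily settlement prices

prev = trade_price
for day, price in enumerate(settlements, start=1):
    vm = variation_margin(prev, price)
    print(f"Day {day}: settle ${price:.2f}  long {vm:+.0f}  short {-vm:+.0f}")
    prev = price

# Day 1 reproduces the text: the clearinghouse credits Member B's
# margin account $50 and debits Member S's account the same amount.
```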

Active Futures Markets

Futures exchanges create futures contracts. And, because futures exchanges compete for traders, they must create contracts that appeal to the financial community. For example, the New York Mercantile Exchange created its light sweet crude oil contract in order to fill an unexploited niche in the financial marketplace.

Not all contracts are successful and those that are may, at times, be inactive – the contract exists, but traders are not trading it. For example, of all contracts introduced by U.S. exchanges between 1960 and 1977, only 32% traded in 1980 (Stein 1986, 7). Consequently, entire exchanges can become active – e.g., the New York Futures Exchange opened in 1980 – or inactive – e.g., the New Orleans Exchange closed in 1983 (Leuthold 1989, 18). Government price supports or other such regulation can also render trading inactive (see Carlton 1984, 245).

Futures contracts succeed or fail for many reasons, but successful contracts do share certain basic characteristics (see for example, Baer and Saxon 1949, 110-25; Hieronymus 1977, 19-22). To wit, the underlying asset is homogeneous, reasonably durable, and standardized (easily describable); its supply and demand are ample; its price is unfettered; and all relevant information is available to all traders. For example, futures contracts have never derived from, say, artwork (heterogeneous and not standardized) or rent-controlled housing rights (whose supply, and hence price, is fettered by regulation).

Purposes and Functions

Futures markets have three fundamental purposes. The first is to enable hedgers to shift price risk – asset price volatility – to speculators in return for basis risk – changes in the difference between a futures price and the cash, or current spot price of the underlying asset. Because basis risk is typically less than asset price risk, the financial community views hedging as a form of risk management and speculating as a form of risk taking.

Generally speaking, to hedge is to take opposing positions in the futures and cash markets. Hedgers include (but are not restricted to) farmers, feedlot operators, grain elevator operators, merchants, millers, utilities, export and import firms, refiners, lenders, and hedge fund managers (see Peck 1985, 13-21). Meanwhile, to speculate is to take a position in the futures market with no counter-position in the cash market. Speculators may not be affiliated with the underlying cash markets.

To demonstrate how a hedge works, assume Hedger A buys, or longs, 5,000 bushels of corn, which is currently worth $2.40 per bushel, or $12,000 = $2.40 × 5,000; the date is May 1st and Hedger A wishes to preserve the value of his corn inventory until he sells it on June 1st. To do so, he takes a position in the futures market that is exactly opposite his position in the spot – current cash – market. For example, Hedger A sells, or shorts, a July futures contract for 5,000 bushels of corn at a price of $2.50 per bushel; put differently, Hedger A commits to sell in July 5,000 bushels of corn for $12,500 = $2.50 × 5,000. Recall that to sell (buy) a futures contract means to commit to sell (buy) an amount and grade of an item at a specific price and future date.

Absent basis risk, Hedger A’s spot and futures markets positions will preserve the value of the 5,000 bushels of corn that he owns, because a fall in the spot price of corn will be matched penny for penny by a fall in the futures price of corn. For example, suppose that by June 1st the spot price of corn has fallen five cents to $2.35 per bushel. Absent basis risk, the July futures price of corn has also fallen five cents to $2.45 per bushel.

So, on June 1st, Hedger A sells his 5,000 bushels of corn and loses $250 = ($2.40 - $2.35) × 5,000 in the spot market. At the same time, he buys a July futures contract for 5,000 bushels of corn and gains $250 = ($2.50 - $2.45) × 5,000 in the futures market. Notice, because Hedger A has both sold and bought a July futures contract for 5,000 bushels of corn, he has offset his commitment in the futures market.
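
The offsetting cash flows of this textbook hedge can be verified with a short calculation. The following minimal sketch simply re-computes the corn example under the stated zero-basis-risk assumption; the function and variable names are invented for illustration.

```python
# A minimal sketch of Hedger A's short hedge, re-computing the corn
# example from the text under the assumption of zero basis risk.

BUSHELS = 5_000

def short_hedge_pnl(spot_buy, spot_sell, fut_sell, fut_buy, qty=BUSHELS):
    """Gains (losses) for a hedger who is long the cash commodity and
    short one futures contract against it."""
    spot_gain = (spot_sell - spot_buy) * qty    # cash market outcome
    futures_gain = (fut_sell - fut_buy) * qty   # short futures outcome
    return spot_gain, futures_gain, spot_gain + futures_gain

# May 1: corn worth $2.40/bu; shorts July futures at $2.50/bu.
# June 1: spot falls to $2.35; futures, absent basis risk, to $2.45.
spot, fut, net = short_hedge_pnl(2.40, 2.35, 2.50, 2.45)
print(f"spot {round(spot):+}, futures {round(fut):+}, net {round(net):+}")
# -> spot -250, futures +250, net +0: the inventory's value is preserved.
```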

This example of a textbook hedge – one that eliminates price risk entirely – is instructive but it is also a bit misleading because: basis risk exists; hedgers may choose to hedge more or less than 100% of their cash positions; and hedgers may cross hedge – trade futures contracts whose underlying assets are not the same as the assets that the hedger owns. So, in reality hedgers cannot immunize entirely their cash positions from market fluctuations and in some cases they may not wish to do so. Again, the purpose of a hedge is not to avoid risk, but rather to manage or even profit from it.

The second fundamental purpose of a futures market is to facilitate firms’ acquisitions of operating capital – short term loans that finance firms’ purchases of intermediate goods such as inventories of grain or petroleum. For example, lenders are relatively more likely to finance, at or near prime lending rates, hedged (versus non-hedged) inventories. The futures contract is an efficient form of collateral because it costs only a fraction of the inventory’s value, or the margin on a short position in the futures market.

Speculators make the hedge possible because they absorb the inventory’s price risk; for example, the ultimate counterparty to the inventory dealer’s short position is a speculator. In the absence of futures markets, hedgers could only engage in forward contracts – unique agreements between private parties, who operate independently of an exchange or clearinghouse. Hence, the collateral value of a forward contract is less than that of a futures contract.3

The third fundamental purpose of a futures market is to provide information to decision makers regarding the market’s expectations of future economic events. So long as a futures market is efficient – the market forms expectations by taking into proper consideration all available information – its forecasts of future economic events are relatively more reliable than an individual’s. Forecast errors are expensive, and well informed, highly competitive, profit-seeking traders have a relatively greater incentive to minimize them.

The Evolution of Futures Trading in the U.S.

Early Nineteenth Century Grain Production and Marketing

Into the early nineteenth century, the vast majority of American grains – wheat, corn, barley, rye and oats – were produced throughout the hinterlands of the United States by producers who acted primarily as subsistence farmers – agricultural producers whose primary objective was to feed themselves and their families. Although many of these farmers sold their surplus production on the market, most lacked access to large markets, as well as the incentive, affordable labor supply, and myriad technologies necessary to practice commercial agriculture – the large scale production and marketing of surplus agricultural commodities.

At this time, the principal trade route to the Atlantic seaboard was by river through New Orleans4; though the South was also home to terminal markets – markets of final destination – for corn, provisions and flour. Smaller local grain markets existed along the tributaries of the Ohio and Mississippi Rivers and east-west overland routes. The latter were used primarily to transport manufactured (high valued and nonperishable) goods west.

Most farmers, and particularly those in the East North Central States – the region consisting today of Illinois, Indiana, Michigan, Ohio and Wisconsin – could not ship bulk grains to market profitably (Clark 1966, 4, 15).5 Instead, most converted grains into relatively high value flour, livestock, provisions and whiskies or malt liquors and shipped them south or, in the case of livestock, drove them east (14).6 Oats traded locally, if at all; their low value-to-weight ratios made their shipment, in bulk or otherwise, prohibitive (15n).

The Great Lakes provided a natural water route east to Buffalo but, in order to ship grain this way, producers in the interior East North Central region needed local ports to receive their production. Although the Erie Canal connected Lake Erie to the port of New York by 1825, water routes that connected local interior ports throughout northern Ohio to the Canal were not operational prior to the mid-1830s. Indeed, initially the Erie aided the development of the Old Northwest, not because it facilitated eastward grain shipments, but rather because it allowed immigrants and manufactured goods easy access to the West (Clark 1966, 53).

By 1835 the mouths of rivers and streams throughout the East North Central States had become the hubs, or port cities, from which farmers shipped grain east via the Erie. By this time, shippers could also opt to go south on the Ohio River and then upriver to Pittsburgh and ultimately to Philadelphia, or north on the Ohio Canal to Cleveland, Buffalo and ultimately, via the Welland Canal, to Lake Ontario and Montreal (19).

By 1836 shippers carried more grain north on the Great Lakes and through Buffalo than south on the Mississippi through New Orleans (Odle 1964, 441). Though, as late as 1840 Ohio was the only state/region that participated significantly in the Great Lakes trade. Illinois, Indiana, Michigan, and the region of modern day Wisconsin either produced for their respective local markets or relied upon Southern demand. As of 1837 only 4,107 residents populated the “village” of Chicago, which became an official city in that year (Hieronymus 1977, 72).7

Antebellum Grain Trade Finance in the Old Northwest

Before the mid-1860s, a network of banks, grain dealers, merchants, millers and commission houses – buying and selling agents located in the central commodity markets – employed an acceptance system to finance the U.S. grain trade (see Clark 1966, 119; Odle 1964, 442). For example, a miller who required grain would instruct an agent in, say, New York to establish, on the miller’s behalf, a line of credit with a merchant there. The merchant extended this line of credit in the form of sight drafts, which the merchant made payable, in sixty or ninety days, up to the amount of the line of credit.

With this credit line established, commission agents in the hinterland would arrange with grain dealers to acquire the necessary grain. The commission agent would obtain warehouse receipts – dealer certified negotiable titles to specific lots and quantities of grain in store – from dealers, attach these to drafts that he drew on the merchant’s line of credit, and discount these drafts at his local bank in return for banknotes; the local bank would forward these drafts on to the New York merchant’s bank for redemption. The commission agents would use these banknotes to advance – lend – grain dealers roughly three quarters of the current market value of the grain. The commission agent would pay dealers the remainder (minus finance and commission fees) when the grain was finally sold in the East. That is, commission agents and grain dealers entered into consignment contracts.
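
The cash flows of such a consignment can be laid out schematically. The sketch below is a stylized reconstruction: the advance of roughly three quarters of market value comes from the text, while the dollar figures and the flat fee rate are invented for illustration.

```python
# A minimal, stylized sketch of the consignment cash flows described
# above. The ~3/4 advance rate follows the text; all other numbers are
# illustrative assumptions.

def consignment_flows(market_value, sale_proceeds, advance_rate=0.75,
                      fee_rate=0.02):
    """Return (advance to dealer, final payment) under the acceptance
    system: roughly three quarters advanced up front, with the
    remainder, minus finance and commission fees, paid once the grain
    finally sells in the East."""
    advance = advance_rate * market_value
    final = sale_proceeds - advance - fee_rate * sale_proceeds
    return advance, final

advance, final = consignment_flows(market_value=10_000, sale_proceeds=10_500)
print(f"advance ${advance:,.0f}, final payment ${final:,.0f}")
# -> advance $7,500, final payment $2,790. If prices fell far enough
# before the eastern sale, the final payment could turn negative,
# leaving the consignment parties exposed.
```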

Unfortunately, this approach linked banks, grain dealers, merchants, millers and commission agents such that the “entire procedure was attended by considerable risk and speculation, which was assumed by both the consignee and consignor” (Clark 1966, 120). The system was reasonably adequate if grain prices went unchanged between the time the miller procured the credit and the time the grain (bulk or converted) was sold in the East, but this was rarely the case. The fundamental problem with this system of finance was that commission agents were effectively asking banks to lend them money to purchase as yet unsold grain. To be sure, this inadequacy was most apparent during financial panics, when many banks refused to discount these drafts (Odle 1964, 447).

Grain Trade Finance in Transition: Forward Contracts and Commodity Exchanges

In 1848 the Illinois-Michigan Canal connected the Illinois River to Lake Michigan. The canal enabled farmers in the hinterlands along the Illinois River to ship their produce to merchants located along the river. These merchants accumulated, stored and then shipped grain to Chicago, Milwaukee and Racine. At first, shippers tagged deliverables according to producer and region, while purchasers inspected and chose these tagged bundles upon delivery. Commercial activity at the three grain ports grew throughout the 1850s. Chicago emerged as a dominant grain (primarily corn) hub later that decade (Pierce 1957, 66).8

Amidst this growth of Lake Michigan commerce, a confluence of innovations transformed the grain trade and its method of finance. By the 1840s, grain elevators and railroads facilitated high volume grain storage and shipment, respectively. Consequently, country merchants and their Chicago counterparts required greater financing in order to store and ship this higher volume of grain.9 And, high volume grain storage and shipment required that inventoried grains be fungible – of such a nature that one part or quantity could be replaced by another equal part or quantity in the satisfaction of an obligation. For example, because a bushel of grade No. 2 Spring Wheat was fungible, its price did not depend on whether it came from Farmer A, Farmer B, Grain Elevator C, or Train Car D.

Merchants could secure these larger loans more easily and at relatively lower rates if they obtained firm price and quantity commitments from their buyers. So, merchants began to engage in forward (not futures) contracts. According to Hieronymus (1977), the first such “time contract” on record was made on March 13, 1851. It specified that 3,000 bushels of corn were to be delivered to Chicago in June at a price of one cent below the March 13th cash market price (74).10

Meanwhile, commodity exchanges serviced the trade’s need for fungible grain. In the 1840s and 1850s these exchanges emerged as associations for dealing with local issues such as harbor infrastructure and commercial arbitration (e.g., Detroit in 1847, Buffalo, Cleveland and Chicago in 1848 and Milwaukee in 1849) (see Odle 1964). By the 1850s they established a system of staple grades, standards and inspections, all of which rendered inventory grain fungible (Baer and Saxon 1949, 10; Chandler 1977, 211). As collection points for grain, cotton, and provisions, they weighed, inspected and classified commodity shipments that passed from west to east. They also facilitated organized trading in spot and forward markets (Chandler 1977, 211; Odle 1964, 439).11

The largest and most prominent of these exchanges was the Board of Trade of the City of Chicago, a grain and provisions exchange established in 1848 by a State of Illinois corporate charter (Boyle 1920, 38; Lurie 1979, 27); the exchange is known today as the Chicago Board of Trade (CBT). For at least its first decade, the CBT functioned as a meeting place for merchants to resolve contract disputes and discuss commercial matters of mutual concern. Participation was part-time at best. The Board’s first directorate of 25 members included “a druggist, a bookseller, a tanner, a grocer, a coal dealer, a hardware merchant, and a banker” and attendance was often encouraged by free lunches (Lurie 1979, 25).

However, in 1859 the CBT became a state- (of Illinois) chartered private association. As such, the exchange requested and received from the Illinois legislature sanction to establish rules “for the management of their business and the mode in which it shall be transacted, as they may think proper;” to arbitrate over and settle disputes with the authority as “if it were a judgment rendered in the Circuit Court;” and to inspect, weigh and certify grain and grain trades such that these certifications would be binding upon all CBT members (Lurie 1979, 27).

Nineteenth Century Futures Trading

By the 1850s traders sold and resold forward contracts prior to actual delivery (Hieronymus 1977, 75). A trader could not offset, in the futures market sense of the term, a forward contract. Nonetheless, the existence of a secondary market – a market for extant, as opposed to newly issued, securities – in forward contracts suggests, if nothing else, that speculators were active in these early time contracts.

On March 27, 1863, the Chicago Board of Trade adopted its first rules and procedures for trade in forwards on the exchange (Hieronymus 1977, 76). The rules addressed contract settlement, which was (and still is) the fundamental challenge associated with a forward contract – finding a trader who was willing to take a position in a forward contract was relatively easy to do; finding that trader at the time of contract settlement was not.

The CBT began to transform actively traded and reasonably homogeneous forward contracts into futures contracts in May, 1865. At this time, the CBT: restricted trade in time contracts to exchange members; standardized contract specifications; required traders to deposit margins; and specified formally contract settlement, including payments and deliveries, and grievance procedures (Hieronymus 1977, 76).

The inception of organized futures trading is difficult to date. This is due, in part, to semantic ambiguities – e.g., was a “to arrive” contract a forward contract or a futures contract or neither? However, most grain trade historians agree that storage (grain elevators), shipment (railroad), and communication (telegraph) technologies, a system of staple grades and standards, and the impetus to speculation provided by the Crimean and U.S. Civil Wars enabled futures trading to ripen by about 1874, at which time the CBT was the U.S.’s premier organized commodities (grain and provisions) futures exchange (Baer and Saxon 1949, 87; Chandler 1977, 212; CBT 1936, 18; Clark 1966, 120; Dies 1925, 15; Hoffman 1932, 29; Irwin 1954, 77, 82; Rothstein 1966, 67).

Nonetheless, futures exchanges in the mid-1870s lacked modern clearinghouses, with which most exchanges began to experiment only in the mid-1880s. For example, the CBT’s clearinghouse got its start in 1884, and a complete and mandatory clearing system was in place at the CBT by 1925 (Hoffman 1932, 199; Williams 1982, 306). The earliest formal clearing and offset procedures were established by the Minneapolis Grain Exchange in 1891 (Peck 1985, 6).

Even so, rudiments of a clearing system – one that freed traders from dealing directly with one another – were in place by the 1870s (Hoffman 1920, 189). That is to say, brokers assumed the counter-position to every trade, much as clearinghouse members would do decades later. Brokers settled offsets between one another, though in the absence of a formal clearing procedure these settlements were difficult to accomplish.

Direct settlements were simple enough. Here, two brokers would settle in cash their offsetting positions between one another only. Nonetheless, direct settlements were relatively uncommon because offsetting purchases and sales between brokers rarely balanced with respect to quantity. For example, B1 might buy a 5,000 bushel corn future from B2, who then might buy a 6,000 bushel corn future from B1; in this example, 1,000 bushels of corn remain unsettled between B1 and B2. Of course, the two brokers could offset the remaining 1,000 bushel contract if B2 sold a 1,000 bushel corn future to B1. But what if B2 had already sold a 1,000 bushel corn future to B3, who had sold a 1,000 bushel corn future to B1? In this case, each broker’s net futures market position is offset, but all three must meet in order to settle their respective positions. Brokers referred to such a meeting as a ring settlement. Finally, if, in this example, B1 and B3 did not have positions with each other, B2 could settle her position if she transferred her commitment (which she has with B1) to B3. Brokers referred to this method as a transfer settlement. In either ring or transfer settlements, brokers had to find other brokers who held and wished to settle open counter-positions. Often brokers used runners to search literally the offices and corridors for the requisite counter-parties (see Hoffman 1932, 185-200).
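
The bookkeeping problem that ring and transfer settlements solved can be made concrete with a short netting exercise. The sketch below encodes the three-broker example from the preceding paragraph; the trade list is illustrative and the broker labels follow the text.

```python
# A minimal sketch of the netting problem behind ring settlement,
# encoding the three-broker example from the text. Positive = long.

from collections import defaultdict

# (buyer, seller, bushels): B1 buys 5,000 from B2; B2 buys 6,000 from
# B1; B2 has already sold 1,000 to B3, who has sold 1,000 to B1.
trades = [
    ("B1", "B2", 5_000),
    ("B2", "B1", 6_000),
    ("B3", "B2", 1_000),
    ("B1", "B3", 1_000),
]

net = defaultdict(int)
for buyer, seller, qty in trades:
    net[buyer] += qty   # a purchase adds to the long position
    net[seller] -= qty  # a sale adds to the short position

print(dict(sorted(net.items())))
# -> {'B1': 0, 'B2': 0, 'B3': 0}: each broker is flat overall, yet no
# pair balances bilaterally (B1 and B2 alone are 1,000 bushels apart),
# so all three must meet in a ring, or transfer commitments, to settle.
```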

Finally, the transformation from forward to futures trading that took place in Chicago grain markets occurred almost simultaneously in New York cotton markets. Forward contracts for cotton traded in New York (and Liverpool, England) by the 1850s. And, like Chicago, organized trading in cotton futures began on the New York Cotton Exchange in about 1870; rules and procedures formalized the practice in 1872. Futures trading on the New Orleans Cotton Exchange began around 1882 (Hieronymus 1977, 77).

Other successful nineteenth century futures exchanges include the New York Produce Exchange, the Milwaukee Chamber of Commerce, the Merchant’s Exchange of St. Louis, the Chicago Open Board of Trade, the Duluth Board of Trade, and the Kansas City Board of Trade (Hoffman 1920, 33; see Peck 1985, 9).

Early Futures Market Performance

Volume

Data on grain futures volume prior to the 1880s are not available (Hoffman 1932, 30). Though in the 1870s “[CBT] officials openly admitted that there was no actual delivery of grain in more than ninety percent of contracts” (Lurie 1979, 59). Indeed, Chart 1 demonstrates that trading was relatively voluminous in the nineteenth century.

An annual average of 23,600 million bushels of grain futures traded between 1884 and 1888, or eight times the annual average amount of crops produced during that period. By comparison, an annual average of 25,803 million bushels of grain futures traded between 1966 and 1970, or four times the annual average amount of crops produced during that period. In 2002, futures volume outnumbered crop production by a factor of eleven.

The comparable data for cotton futures are presented in Chart 2. Again here, trading in the nineteenth century was significant. To wit, by 1879 futures volume had outnumbered production by a factor of five, and by 1896 this factor had reached eight.

Price of Storage

Nineteenth century observers of early U.S. futures markets either credited them for stabilizing food prices, or discredited them for wagering on, and intensifying, the economic hardships of Americans (Baer and Saxon 1949, 12-20, 56; Chandler 1977, 212; Ferris 1988, 88; Hoffman 1932, 5; Lurie 1979, 53, 115). To be sure, the performance of early futures markets remains relatively unexplored. The extant research on the subject has generally examined this performance in the context of two perspectives on the theory of efficiency: the price of storage and futures price efficiency more generally.

Holbrook Working pioneered research into the price of storage – the relationship, at a point in time, between prices (of storable agricultural commodities) applicable to different future dates (Working 1949, 1254).12 For example, what is the relationship between the current spot price of wheat and the current September 2004 futures price of wheat? Or, what is the relationship between the current September 2004 futures price of wheat and the current May 2005 futures price of wheat?

Working reasoned that these prices could not differ because of events that were expected to occur between these dates. For example, if the May 2004 wheat futures price is less than the September 2004 price, this cannot be due to, say, the expectation of a small harvest between May 2004 and September 2004. On the contrary, traders should factor such an expectation into both May and September prices. And, assuming that they do, then this difference can only reflect the cost of carrying – storing – these commodities over time.13 Though this strict interpretation has since been modified somewhat (see Peck 1985, 44).

So, for example, the September 2004 price equals the May 2004 price plus the cost of storing wheat between May 2004 and September 2004. If the difference between these prices is greater or less than the cost of storage, and the market is efficient, arbitrage will bring the difference back to the cost of storage – e.g., if the difference in prices exceeds the cost of storage, then traders can profit if they buy the May 2004 contract, sell the September 2004 contract, take delivery in May and store the wheat until September. Working (1953) demonstrated empirically that the theory of the price of storage could explain quite satisfactorily these inter-temporal differences in wheat futures prices at the CBT as early as the late 1880s (Working 1953, 556).
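
Working's arbitrage logic amounts to comparing a calendar spread with the cost of carry. The following stylized sketch performs that comparison; the prices and the storage cost are invented for illustration, and interest and transaction costs are ignored.

```python
# A minimal, stylized sketch of the price-of-storage arbitrage test
# described in the text. All numbers are illustrative assumptions;
# interest and transaction costs are ignored.

def carry_signal(near_price, far_price, storage_cost):
    """Compare the calendar spread to the cost of carrying the grain
    between the two delivery dates, per bushel."""
    spread = far_price - near_price
    if spread > storage_cost:
        # Buy the near contract, sell the far one, take delivery and
        # store: the spread more than covers the storage bill.
        return "buy near / sell far / store", spread - storage_cost
    if spread < storage_cost:
        # Holders of inventory do better selling spot (or near) and
        # buying the deferred contract instead of storing.
        return "inventory holders sell near / buy far", storage_cost - spread
    return "no arbitrage: spread equals cost of carry", 0.0

# Hypothetical May and September wheat prices ($/bu), with four months'
# storage assumed to cost $0.08 per bushel.
action, edge = carry_signal(3.00, 3.15, 0.08)
print(f"{action}: {edge:.2f} per bushel")
# -> the 0.15 spread exceeds 0.08 storage, leaving 0.07 per bushel.
```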

Futures Price Efficiency

Many contemporary economists tend to focus on futures price efficiency more generally (for example, Beck 1994; Kahl and Tomek 1986; Kofi 1973; McKenzie, et al. 2002; Tomek and Gray, 1970). That is to say, do futures prices shadow consistently (but not necessarily equal) traders’ rational expectations of future spot prices? Here, the research focuses on the relationship between, say, the cash price of wheat in September 2004 and the September 2004 futures price of wheat quoted two months earlier in July 2004.

Figure 1 illustrates the behavior of corn futures prices and their corresponding spot prices between 1877 and 1890. The data consist of the average month t futures price in the last full week of month t-2 and the average cash price in the first full week of month t.

The futures price and its corresponding spot price need not be equal; futures price efficiency does not mean that the futures market is clairvoyant. But, a difference between the two series should exist only because of an unpredictable forecast error and a risk premium – futures prices may be, say, consistently below the expected future spot price if long speculators require an inducement, or premium, to enter the futures market. Recent work finds strong evidence that these early corn (and corresponding wheat) futures prices are, in the long run, efficient estimates of their underlying spot prices (Santos 2002, 35). Although these results and Working’s empirical studies on the price of storage support, to some extent, the notion that early U.S. futures markets were efficient, this question remains largely unexplored by economic historians.
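
The efficiency question can be framed as a forecast regression: project the realized spot price on the futures price quoted two months earlier and ask whether the intercept is near zero and the slope near one. The sketch below runs that regression on simulated data, purely to illustrate the test; none of the historical series is used.

```python
# A minimal sketch of the efficiency regression implied in the text:
# spot_t = a + b * futures_{t-2} + error. Efficiency with no risk
# premium implies a ~ 0 and b ~ 1. The data below are simulated purely
# for illustration; no historical series is used.

import numpy as np

rng = np.random.default_rng(0)
n = 120                                          # monthly observations
futures = 2.50 + 0.30 * rng.standard_normal(n)   # price quoted at t-2
forecast_error = 0.05 * rng.standard_normal(n)   # unpredictable component
spot = futures + forecast_error                  # built to be efficient

X = np.column_stack([np.ones(n), futures])       # OLS design matrix
a, b = np.linalg.lstsq(X, spot, rcond=None)[0]
print(f"intercept = {a:.3f}, slope = {b:.3f}")
# A persistent gap from (0, 1), e.g., futures consistently below the
# realized spot price, would instead suggest bias such as a risk
# premium paid to long speculators.
```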

The Struggle for Legitimacy

Nineteenth century America was both fascinated and appalled by futures trading. This is apparent from the litigation and many public debates surrounding its legitimacy (Baer and Saxon 1949, 55; Buck 1913, 131, 271; Hoffman 1932, 29, 351; Irwin 1954, 80; Lurie 1979, 53, 106). Many agricultural producers, the lay community and, at times, legislatures and the courts, believed trading in futures was tantamount to gambling. The difference between the latter and speculating, which required the purchase or sale of a futures contract but not the shipment or delivery of the commodity, was ostensibly lost on most Americans (Baer and Saxon 1949, 56; Ferris 1988, 88; Hoffman 1932, 5; Lurie 1979, 53, 115).

Many Americans believed that futures traders frequently manipulated prices. From the end of the Civil War until 1879 alone, corners – control of enough of the available supply of a commodity to manipulate its price – allegedly occurred with varying degrees of success in wheat (1868, 1871, 1878/9), corn (1868), oats (1868, 1871, 1874), rye (1868) and pork (1868) (Boyle 1920, 64-65). This manipulation continued throughout the century and culminated in the Three Big Corners – the Hutchinson (1888), the Leiter (1898), and the Patten (1909). The Patten corner was later debunked (Boyle 1920, 67-74), while the Leiter corner was the inspiration for Frank Norris’s classic The Pit: A Story of Chicago (Norris 1903; Rothstein 1982, 60).14 In any case, reports of market corners on America’s early futures exchanges were likely exaggerated (Boyle 1920, 62-74; Hieronymus 1977, 84), as were their long term effects on prices and hence consumer welfare (Rothstein 1982, 60).

By 1892 thousands of petitions to Congress called for the prohibition of “speculative gambling in grain” (Lurie, 1979, 109). And, attacks from state legislatures were seemingly unrelenting: in 1812 a New York act made short sales illegal (the act was repealed in 1858); in 1841 a Pennsylvania law made short sales, where the position was not covered in five days, a misdemeanor (the law was repealed in 1862); in 1882 an Ohio law and a similar one in Illinois tried unsuccessfully to restrict cash settlement of futures contracts; in 1867 the Illinois constitution forbade dealing in futures contracts (this was repealed by 1869); in 1879 California’s constitution invalidated futures contracts (this was effectively repealed in 1908); and, in 1882, 1883 and 1885, Mississippi, Arkansas, and Texas, respectively, passed laws that equated futures trading with gambling, thus making the former a misdemeanor (Peterson 1933, 68-69).

Two nineteenth century challenges to futures trading are particularly noteworthy. The first was the so-called Anti-Option movement. According to Lurie (1979), the movement was fueled by agrarians and their sympathizers in Congress who wanted to end what they perceived as wanton speculative abuses in futures trading (109). Although options were (are) not futures contracts, and were nonetheless already outlawed on most exchanges by the 1890s, the legislation did not distinguish between the two instruments and effectively sought to outlaw both (Lurie 1979, 109).

In 1890 the Butterworth Anti-Option Bill was introduced in Congress but never came to a vote. However, in 1892 the Hatch (and Washburn) Anti-Option bills passed both houses of Congress, and failed only on technicalities during reconciliation between the two houses. Had either bill become law, it would have effectively ended options and futures trading in the United States (Lurie 1979, 110).

A second notable challenge was the bucket shop controversy, which challenged the legitimacy of the CBT in particular. A bucket shop was essentially an association of gamblers who met outside the CBT and wagered on the direction of futures prices. These associations had legitimate-sounding names such as the Christie Grain and Stock Company and the Public Grain Exchange. To most Americans, these “exchanges” were no less legitimate than the CBT. That some CBT members were guilty of “bucket shopping” only made matters worse!

The bucket shop controversy was protracted and colorful (see Lurie 1979, 138-167). Between 1884 and 1887 Illinois, Iowa, Missouri and Ohio passed anti-bucket shop laws (Lurie 1979, 95). The CBT believed these laws entitled it to restrict bucket shops’ access to CBT price quotes, without which the bucket shops could not exist. Bucket shops argued that they were competing exchanges, and hence immune to extant anti-bucket shop laws. As such, they sued the CBT for access to these price quotes.15

The two sides and the telegraph companies fought in the courts for decades over access to these price quotes; the CBT’s very survival hung in the balance. After roughly twenty years of litigation, the Supreme Court of the U.S. effectively ruled in favor of the Chicago Board of Trade and against bucket shops (Board of Trade of the City of Chicago v. Christie Grain & Stock Co., 198 U.S. 236, 25 Sup. Ct. (1905)). Bucket shops disappeared completely by 1915 (Hieronymus 1977, 90).

Regulation

The anti-option movement, the bucket shop controversy and the American public’s discontent with speculation mask an ironic reality of futures trading: it escaped government regulation until after the First World War, though early exchanges did practice self-regulation or administrative law.16 The absence of any formal governmental oversight was due in large part to two factors. First, prior to 1895, the opposition tried unsuccessfully to outlaw rather than regulate futures trading. Second, strong agricultural commodity prices between 1895 and 1920 weakened the opposition, which had blamed futures markets for low agricultural commodity prices (Hieronymus 1977, 313).

Grain prices fell significantly by the end of the First World War, and opposition to futures trading grew once again (Hieronymus 1977, 313). In 1922 the U.S. Congress enacted the Grain Futures Act, which required exchanges to be licensed, limited market manipulation and publicized trading information (Leuthold 1989, 369).17 However, regulators could rarely enforce the act because it enabled them to discipline exchanges rather than individual traders. To discipline an exchange was essentially to suspend it, a punishment too harsh for most exchange-related infractions.

The Commodity Exchange Act of 1936 enabled the government to deal directly with traders rather than exchanges. It established the Commodity Exchange Authority (CEA), a bureau of the U.S. Department of Agriculture, to monitor and investigate trading activities and prosecute price manipulation as a criminal offense. The act also limited speculators’ trading activities and the sizes of their positions; regulated futures commission merchants; banned options trading on domestic agricultural commodities; and restricted futures trading by designating which commodities were to be traded on which licensed exchanges (see Hieronymus 1977; Leuthold, et al. 1989).

Although Congress amended the Commodity Exchange Act in 1968 in order to increase the regulatory powers of the Commodity Exchange Authority, the latter was ill-equipped to handle the explosive growth in futures trading in the 1960s and 1970s. So, in 1974 Congress passed the Commodity Futures Trading Act, which created far-reaching federal oversight of U.S. futures trading and established the Commodity Futures Trading Commission (CFTC).

Like the futures legislation before it, the Commodity Futures Trading Act seeks “to ensure proper execution of customer orders and to prevent unlawful manipulation, price distortion, fraud, cheating, fictitious trades, and misuse of customer funds” (Leuthold, et al. 1989, 34). Unlike the CEA, the CFTC was given broad regulatory powers over all futures trading and related exchange activities throughout the U.S. The CFTC oversees and approves modifications to extant contracts and the creation and introduction of new contracts. The CFTC consists of five presidential appointees who are confirmed by the U.S. Senate.

The Futures Trading Act of 1982 amended the Commodity Futures Trading Act of 1974. The 1982 act legalized options trading on agricultural commodities and identified more clearly the jurisdictions of the CFTC and Securities and Exchange Commission (SEC). The regulatory overlap between the two organizations arose because of the explosive popularity during the 1970s of financial futures contracts. Today, the CFTC regulates all futures contracts and options on futures contracts traded on U.S. futures exchanges; the SEC regulates all financial instrument cash markets as well as all other options markets.

Finally, in 2000 Congress passed the Commodity Futures Modernization Act, which reauthorized the Commodity Futures Trading Commission for five years and repealed an 18-year-old ban on trading single-stock futures. The bill also sought to increase competition and “reduce systematic risk in markets for futures and over-the-counter derivatives” (H.R. 5660, 106th Congress 2nd Session).

Modern Futures Markets

The growth in futures trading has been explosive in recent years (Chart 3).

Futures trading extended beyond physical commodities in the 1970s and 1980s – currency futures in 1972; interest rate futures in 1975; and stock index futures in 1982 (Silber 1985, 83). The enormous growth of financial futures at this time was likely due to the breakdown of the Bretton Woods exchange rate regime, which had essentially fixed the relative values of industrial economies’ currencies to the American dollar (see Bordo and Eichengreen 1993), and to the relatively high inflation from the late 1960s to the early 1980s. Flexible exchange rates and inflation introduced, respectively, exchange rate and interest rate risks, which hedgers sought to mitigate through the use of financial futures. Finally, although futures contracts on agricultural commodities remain popular, financial futures and options dominate trading today. Trading volume in metals, minerals and energy remains relatively small.

Trading volume in agricultural futures contracts first dropped below 50% of total futures volume in 1982. By 1985 this volume had dropped to less than one-fourth of all trading. In the same year the volume of futures trading in the U.S. Treasury bond contract alone exceeded trading volume in all agricultural commodities combined (Leuthold et al. 1989, 2). Today exchanges in the U.S. actively trade contracts on several underlying assets (Table 1). These range from the traditional – e.g., agriculture and metals – to the truly innovative – e.g., the weather. The latter’s payoff varies with the number of degree-days by which the temperature in a particular region deviates from 65 degrees Fahrenheit.
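
As a rough numerical sketch of such a degree-day payoff (the $20-per-degree-day multiplier and the temperature series below are illustrative assumptions, not terms of any actual contract; only the 65-degree benchmark comes from the text above):

    # Hypothetical heating-degree-day (HDD) payoff sketch.
    # The $20 multiplier and the temperatures are assumed for illustration.
    BASE_TEMP_F = 65                 # the 65-degree benchmark named above
    DOLLARS_PER_DEGREE_DAY = 20      # assumed contract multiplier

    daily_average_temps_f = [50, 48, 55, 62, 70]   # hypothetical daily averages

    # One heating degree-day accrues for each degree a day's average
    # temperature falls below the 65-degree benchmark.
    hdd_index = sum(max(0, BASE_TEMP_F - t) for t in daily_average_temps_f)

    payoff = DOLLARS_PER_DEGREE_DAY * hdd_index
    print(f"HDD index: {hdd_index} degree-days; cash payoff: ${payoff}")
    # -> HDD index: 45 degree-days; cash payoff: $900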

Table 1: Select Futures Contracts Traded as of 2002

Agriculture: Corn, Oats, Soybeans, Soybean meal, Soybean oil, Wheat, Barley, Flaxseed, Canola, Rye, Cattle, Hogs, Pork bellies, Cocoa, Coffee, Cotton, Milk, Orange juice, Sugar, Lumber, Rice

Currencies: British pound, Canadian dollar, Japanese yen, Euro, Swiss franc, Australian dollar, Mexican peso, Brazilian real

Equity Indexes: S&P 500 index, Dow Jones Industrials, S&P Midcap 400, Nasdaq 100, NYSE index, Russell 2000 index, Nikkei 225, FTSE index, CAC-40, DAX-30, All ordinary, Toronto 35, Dow Jones Euro STOXX 50

Interest Rates: Eurodollars, Euroyen, Euro-denominated bond, Euroswiss, Sterling, British gov. bond (gilt), German gov. bond, Italian gov. bond, Canadian gov. bond, Treasury bonds, Treasury notes, Treasury bills, LIBOR, EURIBOR, Municipal bond index, Federal funds rate, Bankers’ acceptance

Metals & Energy: Copper, Aluminum, Gold, Platinum, Palladium, Silver, Crude oil, Heating oil, Gas oil, Natural gas, Gasoline, Propane, CRB index, Electricity, Weather

Source: Bodie, Kane and Marcus (2005), p. 796.

Table 2 provides a list of today’s major futures exchanges.

Table 2: Select Futures Exchanges as of 2002

Chicago Board of Trade (CBT)
Chicago Mercantile Exchange (CME)
Coffee, Sugar & Cocoa Exchange, New York (CSCE)
COMEX, a division of the NYME (CMX)
European Exchange (EUREX)
Financial Exchange, a division of the NYCE (FINEX)
International Petroleum Exchange (IPE)
Kansas City Board of Trade (KC)
London International Financial Futures Exchange (LIFFE)
Marche a Terme International de France (MATIF)
Montreal Exchange (ME)
Minneapolis Grain Exchange (MPLS)
Unit of Euronext.liffe (NQLX)
New York Cotton Exchange (NYCE)
New York Futures Exchange (NYFE)
New York Mercantile Exchange (NYME)
OneChicago (ONE)
Sydney Futures Exchange (SFE)
Singapore Exchange Ltd. (SGX)

Source: Wall Street Journal, 5/12/2004, C16.

Modern trading differs from its nineteenth-century counterpart in other respects as well. First, the popularity of open outcry trading is waning. For example, today the CBT executes roughly half of all trades electronically, and electronic trading is the rule rather than the exception throughout Europe. Second, today roughly 99% of all futures contracts are settled prior to maturity. Third, in 1982 the Commodity Futures Trading Commission approved cash settlement – delivery that takes the form of a cash balance – on financial index and Eurodollar futures, whose underlying assets are not deliverable, as well as on several non-financial contracts including lean hog, feeder cattle and weather (Carlton 1984, 253). And finally, on December 6, 2002, the Chicago Mercantile Exchange became the first publicly traded financial exchange in the U.S.

References and Further Reading

Baer, Julius B. and Olin G. Saxon. Commodity Exchanges and Futures Trading. New York: Harper & Brothers, 1949.

Bodie, Zvi, Alex Kane and Alan J. Marcus. Investments. New York: McGraw-Hill/Irwin, 2005.

Bordo, Michael D. and Barry Eichengreen, editors. A Retrospective on the Bretton Woods System: Lessons for International Monetary Reform. Chicago: University of Chicago Press, 1993.

Boyle, James E. Speculation and the Chicago Board of Trade. New York: Macmillan Company, 1920.

Buck, Solon J. The Granger Movement: A Study of Agricultural Organization and Its Political, Economic and Social Manifestations, 1870-1880. Cambridge: Harvard University Press, 1913.

Carlton, Dennis W. “Futures Markets: Their Purpose, Their History, Their Growth, Their Successes and Failures.” Journal of Futures Markets 4, no. 3 (1984): 237-271.

Chicago Board of Trade Bulletin. The Development of the Chicago Board of Trade. Chicago: Chicago Board of Trade, 1936.

Chandler, Alfred D. The Visible Hand: The Managerial Revolution in American Business. Cambridge: Harvard University Press, 1977.

Clark, John G. The Grain Trade in the Old Northwest. Urbana: University of Illinois Press, 1966.

Commodity Futures Trading Commission. Annual Report. Washington, D.C. 2003.

Dies, Edward J. The Wheat Pit. Chicago: The Argyle Press, 1925.

Ferris, William G. The Grain Traders: The Story of the Chicago Board of Trade. East Lansing, MI: Michigan State University Press, 1988.

Hieronymus, Thomas A. Economics of Futures Trading for Commercial and Personal Profit. New York: Commodity Research Bureau, Inc., 1977.

Hoffman, George W. Futures Trading upon Organized Commodity Markets in the United States. Philadelphia: University of Pennsylvania Press, 1932.

Irwin, Harold S. Evolution of Futures Trading. Madison, WI: Mimir Publishers, Inc., 1954.

Leuthold, Raymond M., Joan C. Junkus and Jean E. Cordier. The Theory and Practice of Futures Markets. Champaign, IL: Stipes Publishing L.L.C., 1989.

Lurie, Jonathan. The Chicago Board of Trade 1859-1905. Urbana: University of Illinois Press, 1979.

National Agricultural Statistics Service. “Historical Track Records.” Agricultural Statistics Board, U.S. Department of Agriculture, Washington, D.C. April 2004.

Norris, Frank. The Pit: A Story of Chicago. New York, NY: Penguin Group, 1903.

Odle, Thomas. “Entrepreneurial Cooperation on the Great Lakes: The Origin of the Methods of American Grain Marketing.” Business History Review 38, (1964): 439-55.

Peck, Anne E., editor. Futures Markets: Their Economic Role. Washington D.C.: American Enterprise Institute for Public Policy Research, 1985.

Peterson, Arthur G. “Futures Trading with Particular Reference to Agricultural Commodities.” Agricultural History 8, (1933): 68-80.

Pierce, Bessie L. A History of Chicago: Volume III, the Rise of a Modern City. New York: Alfred A. Knopf, 1957.

Rothstein, Morton. “The International Market for Agricultural Commodities, 1850-1873.” In Economic Change in the Civil War Era, edited by David T. Gilchrist and W. David Lewis, 62-71. Greenville, DE: Eleutherian Mills-Hagley Foundation, 1966.

Rothstein, Morton. “Frank Norris and Popular Perceptions of the Market.” Agricultural History 56, (1982): 50-66.

Santos, Joseph. “Did Futures Markets Stabilize U.S. Grain Prices?” Journal of Agricultural Economics 53, no. 1 (2002): 25-36.

Silber, William L. “The Economic Role of Financial Futures.” In Futures Markets: Their Economic Role, edited by Anne E. Peck, 83-114. Washington D.C.: American Enterprise Institute for Public Policy Research, 1985.

Stein, Jerome L. The Economics of Futures Markets. Oxford: Basil Blackwell Ltd, 1986.

Taylor, Charles H. History of the Board of Trade of the City of Chicago. Chicago: R. O. Law, 1917.

Werner, Walter and Steven T. Smith. Wall Street. New York: Columbia University Press, 1991.

Williams, Jeffrey C. “The Origin of Futures Markets.” Agricultural History 56, (1982): 306-16.

Working, Holbrook. “The Theory of the Price of Storage.” American Economic Review 39, (1949): 1254-62.

Working, Holbrook. “Hedging Reconsidered.” Journal of Farm Economics 35, (1953): 544-61.

1 The clearinghouse is typically a corporation owned by a subset of exchange members. For details regarding the clearing arrangements of a specific exchange, go to www.cftc.gov and click on “Clearing Organizations.”

2 The vast majority of contracts are offset. Outright delivery occurs when the buyer receives from, or the seller “delivers” to, the exchange a title of ownership, and not the actual commodity or financial security – the urban legend of the trader who neglected to settle his long position and consequently “woke up one morning to find several car loads of a commodity dumped on his front yard” is indeed apocryphal (Hieronymus 1977, 37)!

3 Nevertheless, forward contracts remain popular today (see Peck 1985, 9-12).

4 The importance of New Orleans as a point of departure for U.S. grain and provisions prior to the Civil War is unquestionable. According to Clark (1966), “New Orleans was the leading export center in the nation in terms of dollar volume of domestic exports, except for 1847 and a few years during the 1850s, when New York’s domestic exports exceeded those of the Crescent City” (36).

5 This area was responsible for roughly half of U.S. wheat production and a third of U.S. corn production just prior to 1860. Southern planters dominated corn output during the early to mid-1800s.

6 Millers milled wheat into flour; pork producers fed corn to pigs, which producers slaughtered for provisions; distillers and brewers converted rye and barley into whiskey and malt liquors, respectively; and ranchers fed grains and grasses to cattle, which were then driven to eastern markets.

7 Significant advances in transportation made the grain trade’s eastward expansion possible, but the strong and growing demand for grain in the East made the trade profitable. The growth in domestic grain demand during the early to mid-nineteenth century reflected the strong growth in eastern urban populations. Between 1820 and 1860, the populations of Baltimore, Boston, New York and Philadelphia increased by over 500% (Clark 1966, 54). Moreover, as the 1840s approached, foreign demand for U.S. grain grew. Between 1845 and 1847, U.S. exports of wheat and flour rose from 6.3 million bushels to 26.3 million bushels and corn exports grew from 840,000 bushels to 16.3 million bushels (Clark 1966, 55).

8 Wheat production was shifting to the trans-Mississippi West, which produced 65% of the nation’s wheat by 1899 and 90% by 1909, and railroads based in the Lake Michigan port cities intercepted the Mississippi River trade that would otherwise have headed to St. Louis (Clark 1966, 95). Lake Michigan port cities also benefited from a growing concentration of corn production in the West North Central region – Iowa, Kansas, Minnesota, Missouri, Nebraska, North Dakota and South Dakota – which by 1899 produced 40 percent of the country’s corn (Clark 1966, 4).

9 Corn had to be dried immediately after it was harvested and could only be shipped profitably by water to Chicago, but only after rivers and lakes had thawed; so, country merchants stored large quantities of corn. On the other hand, wheat was more valuable relative to its weight, and it could be shipped to Chicago by rail or road immediately after it was harvested; so, Chicago merchants stored large quantities of wheat.

10 This is consistent with Odle (1964), who adds that “the creators of the new system of marketing [forward contracts] were the grain merchants of the Great Lakes” (439). However, Williams (1982) presents evidence of such contracts between Buffalo and New York City as early as 1847 (309). To be sure, Williams proffers an intriguing case that forward and, in effect, future trading was active and quite sophisticated throughout New York by the late 1840s. Moreover, he argues that this trading grew not out of activity in Chicago, whose trading activities were quite primitive at this early date, but rather trading in London and ultimately Amsterdam. Indeed, “time bargains” were common in London and New York securities markets in the mid- and late 1700s, respectively. A time bargain was essentially a cash-settled financial forward contract that was unenforceable by law, and as such “each party was forced to rely on the integrity and credit of the other” (Werner and Smith 1991, 31). According to Werner and Smith, “time bargains prevailed on Wall Street until 1840, and were gradually replaced by margin trading by 1860” (68). They add that, “margin trading … had an advantage over time bargains, in which there was little protection against default beyond the word of another broker. Time bargains also technically violated the law as wagering contracts; margin trading did not” (135). Between 1818 and 1840 these contracts comprised anywhere from 0.7% (49-day average in 1830) to 34.6% (78-day average in 1819) of daily exchange volume on the New York Stock & Exchange Board (Werner and Smith 1991, 174).

11 Of course, forward markets could and indeed did exist in the absence of both grading standards and formal exchanges, though to what extent they existed is unclear (see Williams 1982).

12 In the parlance of modern financial futures, the term cost of carry is used instead of the term storage. For example, the cost of carrying a bond comprises the cost of acquiring and holding (or storing) it until delivery minus the return earned during the carry period.
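
As a stylized illustration (the notation here is an assumption for exposition, not drawn from the source), the net carry on a bond held from purchase until delivery at time $T$ can be written as

    \text{cost of carry} \;=\; \underbrace{S\,r\,T}_{\text{financing cost}} \;-\; \underbrace{C}_{\text{coupon income over the carry period}}

so that, abstracting from compounding and day-count conventions, a no-arbitrage futures price is roughly $F \approx S + S\,r\,T - C$, where $S$ is the bond’s spot price and $r$ the financing rate of interest.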

13 More specifically, the price of storage comprises three components: (1) physical costs such as warehouse fees and insurance; (2) financial costs such as borrowing rates of interest; and (3) the convenience yield – the return that the merchant who stores the commodity derives from maintaining an inventory in it. The marginal costs of (1) and (2) are increasing functions of the amount stored: the more the merchant stores, the greater the marginal costs of warehouse use, insurance and financing. The marginal benefit of (3), by contrast, is a decreasing function of the amount stored: the smaller the merchant’s inventory, the more valuable each additional unit of inventory becomes. Working used this convenience yield to explain a negative price of storage – the nearby contract is priced higher than the faraway contract – an event that is likely to occur when supplies are exceptionally low. In this instance, there is little for inventory dealers to store. Hence, dealers face extremely low physical and financial storage costs, but extremely high convenience yields. The price of storage turns negative; essentially, inventory dealers are willing to pay to store the commodity.
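
Working’s argument can be compressed into a single condition (the functional notation is assumed here for exposition; it is not Working’s own). Writing the price of storage as the spread between the deferred futures price $p_{t,T}$ and the nearby price $s_t$, and letting $I$ denote the inventory held:

    p_{t,T} - s_t \;=\; \underbrace{m(I)}_{\text{marginal physical cost}} + \underbrace{f(I)}_{\text{marginal financial cost}} - \underbrace{c(I)}_{\text{marginal convenience yield}}

with $m(I)$ and $f(I)$ increasing in $I$ and $c(I)$ decreasing in $I$. When supplies are exceptionally low, $m(I)$ and $f(I)$ are small while $c(I)$ is large, so the spread, and hence the price of storage, turns negative.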

14 Norris’s protagonist, Curtis Jadwin, is a wheat speculator who is emotionally consumed and ultimately destroyed – while the welfare of producers and consumers hangs in the balance – when a nineteenth-century CBT wheat futures corner backfires on him.

15 One particularly colorful incident in the controversy came when the Supreme Court of Illinois ruled that the CBT had to either make its price quotes public or withhold them from everyone. When the Board opted for the latter, it found it needed to “prevent its members from running (often literally) between the [CBT and a bucket shop next door], but with minimal success. Board officials at first tried to lock the doors to the exchange…However, after one member literally battered down the door to the east side of the building, the directors abandoned this policy as impracticable if not destructive” (Lurie 1979, 140).

16 Administrative law is “a body of rules and doctrines which deals with the powers and actions of administrative agencies” that are organizations other than the judiciary or legislature. These organizations affect the rights of private parties “through either adjudication, rulemaking, investigating, prosecuting, negotiating, settling, or informally acting” (Lurie 1979, 9).

17 In 1921 Congress passed the Futures Trading Act, which was declared unconstitutional.

Citation: Santos, Joseph. “A History of Futures Trading in the United States”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/a-history-of-futures-trading-in-the-united-states/

An Economic History of Finland

Riitta Hjerppe, University of Helsinki

Finland in the early 2000s is a small industrialized country with a standard of living ranked among the top twenty in the world. At the beginning of the twentieth century it was a poor agrarian country with a gross domestic product per capita less than half of that of the United Kingdom and the United States, world leaders at the time in this respect. Finland was part of Sweden until 1809, and a Grand Duchy of Russia from 1809 to 1917, with relatively broad autonomy in its economic and many internal affairs. It became an independent republic in 1917. While not directly involved in the fighting in World War I, the country went through a civil war during the years of early independence in 1918, and fought against the Soviet Union during World War II. Participation in Western trade liberalization and bilateral trade with the Soviet Union required careful balancing of foreign policy, but also enhanced the welfare of the population. Finland has been a member of the European Union since 1995, and has belonged to the European Economic and Monetary Union since 1999, when it adopted the euro as its currency.

Gross Domestic Product per capita in Finland and in EU 15, 1860-2004, index 2004 = 100

Sources: Eurostat (2001–2005)

Finland has large forest areas of coniferous trees, and forests have been and still are an important natural resource in its economic development. Other natural resources are scarce: there is no coal or oil, and there are relatively few minerals. Outokumpu, the biggest copper mine in Europe in its time, was depleted in the 1980s. Even water power is scarce, despite the large number of lakes, because the height differences are small. The country is among the larger ones in Europe in area, but it is sparsely populated, with 44 people per square mile and 5.3 million people altogether. The population is very homogeneous: people of foreign origin make up only about two percent, and for historical reasons there are two official language groups, the Finnish-speaking majority and a Swedish-speaking minority. In recent years the population has grown at about 0.3 percent per year.

The Beginnings of Industrialization and Accelerating Growth

Finland was an agrarian country in the 1800s, despite poor climatic conditions for efficient grain growing. Seventy percent of the population was engaged in agriculture and forestry, and half of the value of production came from these primary industries in 1900. Slash-and-burn cultivation finally gave way to field cultivation during the nineteenth century, even in the eastern parts of the country.

Some iron works were founded in the southwestern part of the country to process Swedish iron ore as early as the seventeenth century. Significant tar burning, sawmilling and fur trading brought cash with which to buy a few imported items such as salt, and some luxuries – coffee, sugar, wines and fine cloths. The small towns in the coastal areas flourished through the shipping of these items, even if restrictive legislation in the eighteenth century required transport via Stockholm. The income from tar and timber shipping accumulated capital for the first industrial plants.

The nineteenth century saw the modest beginnings of industrialization, clearly later than in Western Europe. The first modern cotton factories started up in the 1830s and 1840s, as did the first machine shops. The first steam engines were introduced in the cotton factories in the 1840s, as was the first rag paper machine. The first steam sawmills were allowed to start only in 1860. The first railroad shortened the traveling time from the inland towns to the coast in 1862, and the first telegraphs came at around the same time. Some new inventions, such as electrical power and the telephone, came into use early in the 1880s, but generally the diffusion of new technology into everyday use took a long time.

The export of various industrial and artisan products to Russia from the 1840s on, as well as the opening up of British markets to Finnish sawmill products in the 1860s were important triggers of industrial development. From the 1870s on pulp and paper based on wood fiber became major export items to the Russian market, and before World War I one-third of the demand of the vast Russian empire was satisfied with Finnish paper. Finland became a very open economy after the 1860s and 1870s, with an export share equaling one-fifth of GDP and an import share of one-fourth. A happy coincidence was the considerable improvement in the terms of trade (export prices/import prices) from the late 1860s to 1900, when timber and other export prices improved in relation to the international prices of grain and industrial products.

Openness of the economies (exports+imports of goods/GDP, percent) in Finland and EU 15, 1960-2005

Sources: Heikkinen and van Zanden 2004; Hjerppe 1989.

Finland participated fully in the global economy of the first gold-standard era, importing much of its grain tariff-free, along with many other foodstuffs. Half of the imports consisted of food, beverages and tobacco. Agriculture turned to dairy farming, as in Denmark, but with poorer results. The Finnish currency, the markka from 1865, was tied to gold in 1878, and the Finnish Senate borrowed money from Western banking houses in order to build railways and schools.

GDP grew at a slightly accelerating average rate of 2.6 percent per annum, and GDP per capita rose 1.5 percent per year on average between 1860 and 1913. The population was also growing rapidly, and from two million in the 1860s it reached three million on the eve of World War I. Only about ten percent of the population lived in towns. The investment rate was a little over 10 percent of GDP between the 1860s and 1913 and labor productivity was low compared to the leading nations. Accordingly, economic growth depended mostly on added labor inputs, as well as a growing cultivated area.

Catching up in the Interwar Years

The revolution of 1917 in Russia and Finland’s independence cut off Russian trade, which was devastating for Finland’s economy. The food situation was particularly difficult, as 60 percent of the grain required had been imported.

Postwar reconstruction in Europe and the consequent demand for timber soon put the economy on a swift growth path. The gap between the Finnish economy and Western economies narrowed dramatically in the interwar period, although the distance to the Scandinavian countries, which also experienced fast growth, remained the same: GDP grew by 4.7 percent per annum and GDP per capita by 3.8 percent in 1920–1938. The investment rate rose to new heights, which also improved labor productivity. The 1930s depression was milder than in many other European countries because of the continued demand for pulp and paper. Moreover, Finnish industries went into depression at different times, which made the downturn milder than it would have been had all the industries experienced their troughs simultaneously. The Depression, however, had serious and long-drawn-out consequences for poor people.

The land reform of 1918 secured land for tenant farmers and farm workers. A large number of new, small farms were established, which could only support families if they had extra income from forest work. The country remained largely agrarian. On the eve of World War II, almost half of the labor force and one-third of the production were still in the primary industries. Small-scale agriculture used horses and horse-drawn machines, lumberjacks went into the forest with axes and saws, and logs were transported from the forest by horses or by floating. Tariff protection and other policy measures helped to raise the domestic grain production to 80–90 percent of consumption by 1939.

Soon after the end of World War I, Finnish sawmill products, pulp and paper found old and new markets in the Western world. The structure of exports became more one-sided, however. Textiles and metal products found no markets in the West and had to compete hard with imports on the domestic market. More than four-fifths of exports were based on wood, and one-third of industrial production was in sawmilling, other wood products, pulp and paper. Other growing industries included mining, basic metal industries and machine production, but they operated on the domestic market, protected by the customs barriers that were typical of Europe at that time.

The Postwar Boom until the 1970s

Finland came out of World War II crippled by the loss of a full tenth of its territory, and with 400,000 evacuees from Karelia. Productive units were dilapidated and the raw-material situation was poor. The huge war reparations to the Soviet Union were the priority problem for decision makers. The favorable development of the domestic machinery and shipbuilding industries, based on domestic demand during the interwar period and on arms deliveries to the army during the war, made the war-reparations deliveries possible. They were paid on time and according to the agreements. At the same time, timber exports to the West started again. Gradually the productive capacity was modernized and the whole industry was reformed. Evacuees and soldiers were given land on which to settle, which contributed to the decrease in farm size.

Finland became part of the Western European trade-liberalization movement by joining the World Bank, the International Monetary Fund (IMF) and the Bretton Woods agreement in 1948, becoming a member of the General Agreement on Tariffs and Trade (GATT) two years later, and joining Finnefta (an agreement between the European Free Trade Area (EFTA) and Finland) in 1961. The government chose not to receive Marshall Aid because of the world political situation. Bilateral trade agreements with the Soviet Union started in 1947 and continued until 1991. Tariffs were eased and imports from market economies were liberalized from 1957. Exports and imports, which had stayed at internationally high levels during the interwar years, only slowly returned to their earlier relative levels.

The investment rate climbed to new levels soon after World War II under a government policy favoring investments, and it remained at this very high level until the end of the 1980s. Labor-force growth stopped in the early 1960s, and economic growth has since depended on increases in productivity rather than increased labor inputs. GDP growth was 4.9 percent and GDP per capita growth 4.3 percent in 1950–1973 – matching the rapid pace of many other European countries.

Exports and, accordingly, the structure of the manufacturing industry were diversified by Soviet and, later, Western orders for machinery products, including paper machines, cranes, elevators, and special ships such as icebreakers. The vast Soviet Union provided good markets for clothing and footwear, while Finnish wool and cotton factories slowly disappeared because of competition from low-wage countries. The modern chemical industry started to develop in the early twentieth century, often led by foreign entrepreneurs, and the first small oil refinery was built by the government in the 1950s. The government became actively involved in industrial activities in the early twentieth century, with investments in mining, basic industries, energy production and transmission, and the construction of infrastructure, and this continued in the postwar period.

The new agricultural policy, the aim of which was to secure reasonable incomes and favorable loans to the farmers and the availability of domestic agricultural products for the population, soon led to overproduction in several product groups, and further to government-subsidized dumping on the international markets. The first limitations on agricultural production were introduced at the end of the 1960s.

The population reached four million in 1950, and the postwar baby boom put extra pressure on the educational system. The educational level of the Finnish population was low in Western European terms in the 1950s, even if everybody could read and write. The underdeveloped educational system was expanded and renewed as new universities and vocational schools were founded, and the number of years of basic, compulsory education increased. Education has been government run since the 1960s and 1970s, and is free at all levels. Finland started to follow the so-called Nordic welfare model, and similar improvements in health and social care have been introduced, normally somewhat later than in the other Nordic countries. Public child-health centers, cash allowances for children, and maternity leave were established in the 1940s, and pension plans have covered the whole population since the 1950s. National unemployment programs had their beginnings in the 1930s and were gradually expanded. A public health-care system was introduced in 1970, and national health insurance also covers some of the cost of private health care. During the 1980s the income distribution became one of the most even in the world.

Slower Growth from the 1970s

The oil crises of the 1970s put the Finnish economy under pressure. Although the oil reserves of the main supplier, the Soviet Union, showed no signs of running out, the price increased in line with world market prices. This was a source of devastating inflation in Finland. On the other hand, it was possible to increase exports under the terms of the bilateral trade agreement with the Soviet Union. This boosted export demand and helped Finland to avoid the high and sustained unemployment that plagued Western Europe.

Economic growth in the 1980s was somewhat better than in most Western economies, and at the end of the 1980s Finland caught up with the sluggishly-growing Swedish GDP per capita for the first time. In the early 1990s the collapse of Soviet trade, the Western European recession and problems in adjusting to the new liberal order of international capital movement led the Finnish economy into a depression that was worse than that of the 1930s. GDP fell by over 10 percent in three years, and unemployment rose to 18 percent. The banking crisis triggered a profound structural change in the Finnish financial sector. The economy revived again to a brisk growth rate of 3.6 percent in 1994-2005; over the longer stretch from 1973 to 2005, GDP growth was 2.5 percent and GDP per capita growth 2.1 percent.

Electronics started its spectacular rise in the 1980s and is now the largest single manufacturing industry, with a 25 percent share of all manufacturing. Nokia is the world’s largest producer of mobile phones and a major transmission-station constructor. Connected to this development was the increase in research-and-development outlays to three percent of GDP, one of the highest shares in the world. The Finnish paper companies UPM-Kymmene and M-real and the Finnish-Swedish Stora-Enso are among the largest paper producers in the world, although paper production now accounts for only 10 percent of manufacturing output. Recent discussion about the industry’s future has taken an alarmed tone, however. The position of the Nordic paper industry, which is based on expensive, slowly-growing timber, is threatened by new paper factories founded near the expanding consumption areas in Asia and South America, which use local, fast-growing tropical timber. The formerly significant sawmilling operations now constitute a very small percentage of activities, although production volumes have been growing. The textile and clothing industries have shrunk into insignificance.

What has typified the last couple of decades is globalization, which has spread to all areas. Exports and imports have increased as a result of export-favoring policies. Some 80 percent of the stock of Finnish public companies is now in foreign hands; foreign ownership was limited and controlled until the early 1990s. A quarter of the companies operating in Finland are foreign-owned, and Finnish companies have even bigger investments abroad. Most big companies are truly international nowadays. Migration to Finland has increased, and since the collapse of the eastern bloc Russian immigrants have become the largest single foreign group. The number of foreigners is still lower than in many other countries – there are about 120,000 people with a foreign background out of a population of 5.3 million.

The directions of foreign trade have been changing because trade with the rising Asian economies has been gaining in importance and Russian trade has fluctuated. Otherwise, almost the same country distribution prevails as has been common for over a century. Western Europe has a share of three-fifths, which has been typical. The United Kingdom was for long Finland’s biggest trading partner, with a share of one-third, but this started to diminish in the 1960s. Russia accounted for one-third of Finnish foreign trade in the early 1900s, but the Soviet Union had minimal trade with the West at first, and its share of the Finnish foreign trade was just a few percentage points. After World War II Soviet-Finnish trade increased gradually until it reached 25 percent of Finnish foreign trade in the 1970s and early 1980s. Trade with Russia is now gradually gaining ground again from the low point of the early 1990s, and had risen to about ten percent in 2006. This makes Russia one of Finland’s three biggest trading partners, Sweden and Germany being the other two with a ten percent share each.

The balance of payments was a continuing problem in the Finnish economy until the 1990s. Particularly in the post-World War II period inflation repeatedly eroded the competitive capacity of the economy and led to numerous devaluations of the currency. An economic policy favoring exports helped the country out of the depression of the 1990s and improved the balance of payments.

Agriculture continued its problematic development of overproduction and high subsidies, which finally became very unpopular. The number of farms has shrunk since the 1960s and the average farm size has recently risen to average European levels. The shares of agricultural production and labor are also at Western European levels nowadays. Finnish agriculture is incorporated into the Common Agricultural Policy of the European Union and shares its problems, even if Finnish overproduction has been virtually eliminated.

The share of forestry is equally low, even if it supplies four-fifths of the wood used in Finnish sawmills and paper factories: the remaining fifth is imported mainly from the northwestern parts of Russia. The share of manufacturing is somewhat above Western European levels and, accordingly, that of services is high but slightly lower than in the old industrialized countries.

Recent discussion on the state of the economy mainly focuses on two issues. Finland’s very open economy is strongly influenced by the rather sluggish economic development of the European Union, so high growth rates are not to be expected in Finland either. Since the 1990s depression, the investment rate has remained at a lower level than was common in the postwar period, and this is a cause for concern.

The other issue concerns the prominent role of the public sector in the economy. The Nordic welfare model is basically approved of, but its costs create tensions. High taxation is one consequence, and political parties debate whether or not the high public-sector share slows down economic growth.

The aging population, high unemployment and the decreasing numbers of taxpayers in the rural areas of eastern and central Finland place a burden on the local governments. There is also continuing discussion about tax competition inside the European Union: how does the high taxation in some member countries affect the location decisions of companies?

Development of Finland’s exports by commodity group 1900-2005, percent

Source: Finnish National Board of Customs, Statistics Unit

Note on classification: Metal industry products SITC 28, 67, 68, 7, 87; Chemical products SITC 27, 32, 33, 34, 5, 66; Textiles SITC 26, 61, 65, 84, 85; Wood, paper and printed products SITC 24, 25, 63, 64, 82; Food, beverages, tobacco SITC 0, 1, 4.

Development of Finland’s imports by commodity group 1900-2005, percent

Source: Finnish National Board of Customs, Statistics Unit

Note on classification: Metal industry products SITC 28, 67, 68, 7, 87; Chemical products SITC 27, 32, 33, 34, 5, 66; Textiles SITC 26, 61, 65, 84, 85; Wood, paper and printed products SITC 24, 25, 63, 64, 82; Food, beverages, tobacco SITC 0, 1, 4.

References:

Heikkinen, S. and J.L. van Zanden, eds. Explorations in Economic Growth. Amsterdam: Aksant, 2004.

Heikkinen, S. Labour and the Market: Workers, Wages and Living Standards in Finland, 1850–1913. Commentationes Scientiarum Socialium 51 (1997).

Hjerppe, R. The Finnish Economy 1860–1985: Growth and Structural Change. Studies on Finland’s Economic Growth XIII. Helsinki: Bank of Finland Publications, 1989.

Jalava, J., S. Heikkinen and R. Hjerppe. “Technology and Structural Change: Productivity in the Finnish Manufacturing Industries, 1925-2000.” Transformation, Integration and Globalization Economic Research (TIGER), Working Paper No. 34, December 2002.

Kaukiainen, Yrjö. A History of Finnish Shipping. London: Routledge, 1993.

Myllyntaus, Timo. Electrification of Finland: The Transfer of a New Technology into a Late Industrializing Economy. Worcester, MA: Macmillan, 1991.

Ojala, J., J. Eloranta and J. Jalava, editors. The Road to Prosperity: An Economic History of Finland. Helsinki: Suomalaisen Kirjallisuuden Seura, 2006.

Pekkarinen, J. and J. Vartiainen. Finlands ekonomiska politik: den långa linjen 1918–2000. Stockholm: Stiftelsen Fackföreningsrörelsens institut för ekonomisk forskning FIEF, 2001.

Citation: Hjerppe, Riitta. “An Economic History of Finland”. EH.Net Encyclopedia, edited by Robert Whaples. February 10, 2008. URL http://eh.net/encyclopedia/an-economic-history-of-finland/

The Economic History of the International Film Industry

Gerben Bakker, University of Essex

Introduction

Like other major innovations such as the automobile, electricity, chemicals and the airplane, cinema emerged in most Western countries at the same time. As the first form of industrialized mass-entertainment, it was all-pervasive. From the 1910s onwards, billions of cinema tickets were sold each year, and consumers who did not regularly visit the cinema became a minority. In Italy, today hardly significant in international entertainment, the film industry was the fourth-largest export industry before the First World War. In the depression-struck U.S., film was the tenth most profitable industry, and in 1930s France it was the fastest-growing industry, followed by paper and electricity, while in Britain the number of cinema tickets sold rose to almost one billion a year (Bakker 2001b). Despite this economic significance, despite its rapid emergence and growth, despite its pronounced effect on the everyday life of consumers, and despite its importance as an early case of the industrialization of services, the economic history of the film industry has hardly been examined.

This article limits itself to the economic development of the industry. It discusses just a few countries, mainly the U.S., Britain and France, and these only in order to investigate the economic issues it addresses, not to give complete histories of the film industries in those countries; given the nature of an encyclopedia article, this entry cannot do justice to developments in each and every country. It also limits itself to the evolution of the Western film industry, which has been and still is the largest film industry in the world in revenue terms, although this may well change in the future.

Before Cinema

In the late eighteenth century most consumers enjoyed their entertainment in an informal, haphazard and often non-commercial way. When making a trip they could suddenly meet a roadside entertainer, and their villages were often visited by traveling showmen, clowns and troubadours. Seasonal fairs attracted a large variety of musicians, magicians, dancers, fortune-tellers and sword-swallowers. Only a few large cities harbored legitimate theaters, strictly regulated by the local and national rulers. This world was torn apart in two stages.

First, most Western countries started to deregulate their entertainment industries, enabling many more entrepreneurs to enter the business and make far larger investments, for example in circuits of fixed stone theaters. The U.S. was the first to liberalize, in the late eighteenth century. Most European countries followed during the nineteenth century: Britain, for example, deregulated in the mid-1840s, and France in the late 1860s. As a result, commercial, formalized and standardized live entertainment emerged that destroyed a fair part of traditional entertainment. The combined effect of liberalization, innovation and changes in business organization made the industry grow rapidly throughout the nineteenth century, and integrated local and regional entertainment markets into national ones. By the end of the nineteenth century, integrated national entertainment industries and markets had realized the productivity gains attainable through process innovations. Creative inputs, for example, circulated swiftly along the venues – often in dedicated trains – coordinated by centralized booking offices, maximizing capital and labor utilization.

At the end of the nineteenth century, in the era of the second industrial revolution, falling working hours, rising disposable income, increasing urbanization, rapidly expanding transport networks and strong population growth resulted in a sharp rise in the demand for entertainment. The effect of this boom was further rapid growth of live entertainment through process innovations. At the turn of the century, the production possibilities of the existing industry configuration were fully realized and further innovation within the existing live-entertainment industry could only increase productivity incrementally.

At this moment, in a second stage, cinema emerged and in its turn destroyed this world by industrializing it into the modern world of automated, standardized, tradable mass-entertainment, integrating the national entertainment markets into an international one.

Technological Origins

In the early 1890s, Thomas Edison introduced the Kinetograph camera and the Kinetoscope viewer, which enabled the shooting of films and their play-back in coin-operated machines for individual viewing. In the mid-1890s, the Lumière brothers added projection to the invention and started to show films in theater-like settings. Cinema reconfigured different technologies that all were available from the late 1880s onwards: photography (1830s), taking negative pictures and printing positives (1880s), roll films (1850s), celluloid (1868), high-sensitivity photographic emulsion (late 1880s), projection (1645) and movement dissection/persistence of vision (1872).

After the preconditions for motion pictures had been established, cinema technology itself was invented. Already in 1860/1861 patents were filed for viewing and projecting motion pictures, but not for the taking of pictures. The scientist Étienne-Jules Marey completed the first working model of a film camera in 1888 in Paris. Edison visited Georges Demeney in 1888 and saw his films. In 1891, he filed an American patent for a film camera, which had a different moving mechanism than the Marey camera. In 1890, the Englishman William Friese-Greene presented a working camera to a group of enthusiasts. In 1893 the Frenchman Demeney filed a patent for a camera. Finally, the Lumière brothers filed a patent for their type of camera and for projection in February 1895. In December of that year they gave the first projection for a paying audience. They were followed in February 1896 by the Englishman Robert W. Paul. Paul also invented the ‘Maltese cross,’ a device still used today: it produces the intermittent movement of the film, holding each frame steady behind the lens between exposures (Michaelis 1958; Musser 1990: 65-67; Low and Manvell 1948).

Three characteristics stand out in this innovation process. First, it was an international process of invention, taking place in several countries at the same time, and the inventors building upon and improving upon each other’s inventions. This connects to Joel Mokyr’s notion that in the nineteenth century communication became increasingly important to innovations, and many innovations depended on international communication between inventors (Mokyr 1990: 123-124). Second, it was what Mokyr calls a typical nineteenth century invention, in that it was a smart combination of many existing technologies. Many different innovations in the technologies which it combined had been necessary to make possible the innovation of cinema. Third, cinema was a major innovation in the sense that it was quickly and universally adopted throughout the western world, quicker than the steam engine, the railroad or the steamship.

The Emergence of Cinema

For about the first ten years of its existence, cinema in the United States and elsewhere was mainly a trick and a gadget. Before 1896 the coin-operated Kinetoscope of Edison was present at fairs and in entertainment venues. Spectators had to drop a coin in the machine and peer through a viewer to see the film. The first projections, from 1896 onwards, attracted large audiences. Lumière had a group of operators who traveled around the world with the cinematograph and showed the pictures in theaters. After a few years films became a part of the program in vaudeville and sometimes in theater as well. At the same time traveling cinema emerged: exhibitors who traveled around with a tent or mobile theater and set up shop for a short time in towns and villages. These differed from the Lumière operators and others in that they catered to general, popular audiences, while the former offered more upscale parts of theater programs, or special programs for the bourgeoisie (Musser 1990: 140, 299, 417-20).

This whole era, which in the U.S. lasted up to about 1905, was a time in which cinema seemed just one of many new fashions, and it was not at all certain that it would persist rather than be forgotten or marginalized quickly, as happened to the contemporaneous boom in skating rinks and bowling alleys. This changed when Nickelodeons, fixed cinemas with a few hundred seats, emerged and quickly spread all over the country between 1905 and 1907. From this time onwards cinema changed into an industry in its own right, distinct from other entertainments, since it had its own buildings and its own advertising. The emergence of fixed cinemas coincided with a huge growth phase in the business in general; film production increased greatly, and film distribution developed into a specialized activity, often managed by large film producers. However, until about 1914, besides the cinemas, films also continued to be combined with live entertainment in vaudeville and other theaters (Musser 1990; Allen 1980).

Figure 1 shows the total length of negatives released on the U.S., British and French film markets. In the U.S., the total released negative length increased from 38,000 feet in 1897, to two million feet in 1910, to twenty million feet in 1920. Clearly, the initial U.S. growth between 1893 and 1898 was very strong: the market increased by over three orders of magnitude, but from an infinitesimal initial base. Between 1898 and 1906, far less growth took place, and in this period it may well have looked as if the cinematograph would remain a niche product, a gimmick shown at fairs and interspersed with live entertainment. From 1907, however, a new, sharp, sustained growth phase started: the market increased by a further two orders of magnitude, this time from a far higher base. At the same time, the average film length increased considerably, from eighty feet in 1897 to seven hundred feet in 1910 to three thousand feet in 1920. One reel of film held about 1,500 feet and had a playing time of about fifteen minutes.

Between the mid-1900s and 1914 the British and French markets were growing at roughly the same rates as the U.S. one. World War I constituted a discontinuity: from 1914 onwards European growth rates were far lower than those in the U.S.

The prices the Nickelodeons charged were between five and ten cents, for which spectators could stay as long as they liked. Around 1910, when larger cinemas emerged in hot city-center locations, more closely resembling theaters than the small and shabby Nickelodeons, prices increased. They ranged from one dollar to a dollar and a half for ‘first-run’ cinemas down to five cents for sixth-run neighborhood cinemas (see also Sedgwick 1998).

Figure 1

Total Released Length on the U.S., British and French Film Markets (in Meters), 1893-1922

Note: The length refers to the total length of original negatives that were released commercially.

See Bakker 2005, appendix I for the method of estimation and for a discussion of the sources.

Source: Bakker 2001b; American Film Institute Catalogue, 1893-1910; Motion Picture World, 1907-1920.

The Quality Race

Once Nickelodeons and other types of cinemas were established, the industry entered a new stage with the emergence of the feature film. Before 1915, cinemagoers saw a succession of many different films, each between one and fifteen minutes, of varying genres such as cartoons, newsreels, comedies, travelogues, sports films, ‘gymnastics’ pictures and dramas. After the mid-1910s, going to the cinema meant watching a feature film, a heavily promoted dramatic film with a length that came closer to that of a theater play, based on a famous story and featuring famous stars. Shorts remained only as side dishes.

The feature film emerged when cinema owners discovered that films of far higher quality and length enabled them to charge far higher ticket prices and draw far more people into their cinemas, resulting in far higher profits, even if cinemas needed to pay far more for the film rental. The discovery that consumers would turn their backs on packages of shorts (newsreels, sports, cartoons and the like) as the quality of features increased set in motion a quality race between film producers (Bakker 2005). They all started investing heavily in portfolios of feature films, spending large sums on well-known stars, rights to famous novels and theater plays, extravagant sets, star directors, and so on. A contributing factor in the U.S. was the demise of the Motion Picture Patents Company (MPPC), a cartel that tried to monopolize film production and distribution. Between about 1908 and 1912 the Edison-backed MPPC had restricted quality artificially by setting limits on film length and film rental prices. When William Fox and the Department of Justice started legal action in 1912, the power of the MPPC quickly waned and the ‘independents’ came to dominate the industry.

In the U.S., the motion picture industry became the internet of the 1910s: when companies put the words ‘motion pictures’ in their IPOs, investors would flock to them. Many of these companies went bankrupt, were dissolved or were taken over. A few survived and became the Hollywood studios most of which we still know today: Paramount, Metro-Goldwyn-Mayer (MGM), Warner Brothers, Universal, Radio-Keith-Orpheum (RKO), Twentieth Century-Fox, Columbia and United Artists.

A necessary condition for the quality race was some form of vertical integration. In the early film industry, films were sold outright. This meant that the cinema owner who bought a film received all the marginal revenues the film generated. In the film industry these revenues were largely marginal profits, as most costs were fixed, so an additional ticket sold was pure (gross) profit. Because the producer did not receive any of these revenues, there was little incentive at the margin to increase quality. When outright sales gave way to the rental of films to cinemas for a fixed fee, producers gained a stronger incentive to increase a film’s quality, because a better film would generate more rentals (Bakker 2005). The incentive strengthened further when percentage contracts were introduced for large city center cinemas, and when producer-distributors actually started to buy large cinemas. The changing contractual relationship between cinemas and producers was paralleled between producers and distributors.
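The incentive argument can be made concrete with a back-of-the-envelope sketch. The figures below are hypothetical (the source gives no contract terms); the point is only the direction of the comparison: how much of one extra dollar of box office, brought in by a better film, reaches the producer under each contract form.

```python
# Stylized sketch (all figures hypothetical) of the producer's stake in
# film quality under the three successive contract forms described above.

def producer_share_of_marginal_dollar(contract: str) -> float:
    if contract == "outright_sale":
        # the cinema owner bought the print and keeps every extra ticket sold
        return 0.0
    if contract == "flat_rental":
        # nothing extra within one booking, though a better film earns more
        # bookings, so quality still pays off indirectly
        return 0.0
    if contract == "percentage":
        # hypothetical split: the producer now shares directly in each ticket
        return 0.35
    raise ValueError(contract)

for c in ("outright_sale", "flat_rental", "percentage"):
    print(c, producer_share_of_marginal_dollar(c))
```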

The Decline and Fall of the European Film Industry

Because the quality race happened while Europe was at war, European companies could not participate in the escalation of quality (and production costs) discussed above. This does not mean all of them were in crisis. Many made high profits during the war from newsreels, other short films, propaganda films and distribution. They were also able to participate in the shift towards the feature film, substantially increasing output in the new genre during the war (Figure 2). However, it was difficult for them to secure the massive amounts of venture capital needed to participate in the quality race while their countries were at war. Even if they had managed to raise the money, it might have been difficult to justify such lavish expenditures while people were dying in the trenches.

Yet a few European companies did participate in the escalation phase. The Danish Nordisk company invested heavily in long feature-type films and bought cinema chains and distributors in Germany, Austria and Switzerland. Its strategy ended when the German government forced it to sell its German assets to the newly founded UFA company in return for a 33 percent minority stake. The French Pathé company was one of the largest U.S. film producers. It set up its own U.S. distribution network and invested in heavily advertised serials (films in weekly installments), expecting that these would become the industry standard. As it turned out, Pathé bet on the wrong horse and was overtaken by competitors riding high on the feature film. Yet it eventually switched to features and remained a significant company. In the early 1920s its U.S. assets were sold to Merrill Lynch and eventually became part of RKO.

Figure 2

Number of Feature Films Produced in Britain, France and the U.S., 1911-1925

(semi-logarithmic scale)

Source: Bakker 2005 [American Film Institute Catalogue; British Film Institute; Screen Digest; Globe, World Film Index, Chirat, Longue métrage.]

Because it could not participate in the quality race, the European film industry started to decline in relative terms. Its market share at home and abroad diminished substantially (Figure 3). In the 1900s European companies supplied at least half of the films shown in the U.S. In the early 1910s this dropped to about twenty percent. In the mid-1910s, when the feature film emerged, the European market share declined to nearly undetectable levels.

By the 1920s, most large European companies had given up film production altogether. Pathé and Gaumont sold their U.S. and international businesses, left film making and focused on distribution in France. Éclair, their major competitor, went bankrupt. Nordisk continued as an insignificant Danish film company and eventually collapsed into receivership. The eleven largest Italian film producers formed a trust, which failed dismally, and one by one they fell into financial disaster. The famous British producer Cecil Hepworth went bankrupt. By late 1924 hardly any films were being made in Britain. American films were shown everywhere.

Figure 3

Market Shares by National Film Industries, U.S., Britain, France, 1893-1930

Note: EU/US is the share of European companies on the U.S. market, EU/UK is the share of European companies on the British market, and so on. For further details see Bakker 2005.

The Rise of Hollywood

Once they had lost out, it was difficult for European companies to catch up. First, since the sharply rising film production costs were fixed and sunk, market size became essential, as it determined the amount of money that could be spent on a film. At exactly this crucial moment, the European film market disintegrated, first because of war, later because of protectionism. Market size was further diminished by heavy taxes on cinema tickets, which sharply increased the price of cinema relative to live entertainment.

Second, the emerging Hollywood studios benefited from first-mover advantages in feature film production. They owned international distribution networks; they could offer cinemas large portfolios of films at a discount (block-booking), sometimes before the films were even made (blind-bidding); the quality gap with European features was so large that it would have been difficult to close in one go; and, finally, the American origin of the feature films of the 1910s had established U.S. films as a kind of brand, leaving consumers with high switching costs to try films of other national origins. It would have been extremely costly for European companies to re-enter international distribution, produce large portfolios, jump-start film quality and establish a new brand of films, all at the same time (Bakker 2005).

A third factor was the rise of Hollywood as a production location. The large existing film industry on the American Northeast coast and the newly emerging film industry in Florida declined as U.S. film companies settled in Southern California. First, the ‘sharing’ of inputs facilitated knowledge spillovers and allowed higher returns. The studios lowered costs because creative inputs had less down-time, needed to travel less, could take part in many try-outs to achieve optimal casting, and could easily be rented out to competitors when not immediately needed. Hollywood also attracted new creative inputs through non-monetary means: even more than money, creative inputs wanted to maximize fame and professional recognition. For an actress, an offer to work with the world’s best directors, costume designers, lighting specialists and make-up artists was difficult to decline.

Second, a thick market for specialized supply and demand existed. Companies could easily rent out excess studio capacity (B-films, for example, were made during the night), and a producer was quite likely to find the highly specific products or services needed somewhere in Hollywood (Christopherson and Storper 1987, 1989). While a European industrial ‘film’ district might have been competitive, and might even have had a lower overall cost/quality ratio than Hollywood, the first European major would have faced a substantially higher cost/quality ratio (lacking external economies) and would therefore not easily enter (see, for example, Krugman and Obstfeld 2003, chapter 6). If entry did happen, the Hollywood studios could and would buy successful creative inputs away, since they could realize higher returns on these inputs, resulting in American films of even higher perceived quality and thus perpetuating the situation.

Sunlight, climate and the variety of landscape in California were of course favorable to film production, but were not unique. Locations such as Florida, Italy, Spain and Southern France offered similar conditions.

The Coming of Sound

In 1927, sound films were introduced. The main innovator was Warner Brothers, backed by the bank Goldman, Sachs, which actually parachuted a vice-president into Warner. Although many other sound systems had been tried and marketed from the 1900s onwards, the electrical microphone, invented at Bell Labs in the mid-1920s, sharply increased the quality of sound films and made the transformation of the industry possible. Sound increased the interest in the film industry of large industrial companies such as General Electric, Western Electric and RCA, as well as of banks eager to finance the new innovation, such as the Bank of America and Goldman, Sachs.

In economic terms, sound represented an exogenous jump in sunk costs (and product quality) that did not change the basic industry structure very much: the industry was already highly concentrated before sound, and the European, New York/New Jersey and Florida film industries were already shattered. What sound did do was industrialize away most of the musicians and entertainers who had complemented the silent films with sound and entertainment, especially those working in the smaller cinemas. This led to massive unemployment among musicians (see, for example, Gomery 1975; Kraft 1996).

The effect of sound film in Europe was to increase the domestic revenues of European films, which became more culture-specific now that they were in the local language, but at the same time it decreased the foreign revenues European films received (Bakker 2004b). It is difficult to assess the impact of sound film completely, as it coincided with increased protection; shortly before the coming of sound, many European countries had set quotas on the number of foreign films that could be shown. In France, for example, where sound was widely adopted from 1930 onwards, the U.S. share of films dropped from eighty to fifty percent between 1926 and 1929, mainly as a result of protectionist legislation. During the 1930s the share temporarily declined to about forty percent, and then hovered between fifty and sixty percent. In short, protectionism decreased the U.S. market share and increased the French market shares of French and other European films, while sound film increased the French market share, mostly at the expense of other European films and less so at the expense of U.S. films.

In Britain, the share of releases of American films declined from eighty percent in 1927 to seventy percent in 1930, while British films increased from five percent to twenty percent, exactly in line with the requirements of the 1927 quota act. After 1930, the American share remained roughly stable. This suggests that sound film did not have a large influence, and that the share of U.S. films was mainly brought down by the introduction of the Cinematograph Films Act in 1927, which set quotas for British films. Nevertheless, revenue data, which are unfortunately lacking, would be needed to give a definitive answer, as little is known about effects on the revenue per film.

The Economics of the Interwar Film Trade

Because film production costs were mainly fixed and sunk, international sales and distribution were important: they generated additional revenue without much additional cost to the producer, as the film itself had already been made. Films had special characteristics that necessitated international sales. Because they were essentially copyrights rather than physical products, the costs of additional sales were theoretically zero. Film production involved high endogenous sunk costs, recouped by renting out the copyright to the film. Marginal foreign revenue thus equaled marginal net revenue (and marginal profit, once the film’s production costs had been fully amortized). All companies, large or small, had to take foreign sales into account when setting film budgets (Bakker 2004b).

Films were intermediate products sold to foreign distributors and cinemas. While the rent paid varied with perceived quality and the general conditions of supply and demand, the ticket price paid by consumers generally did not vary by film. It varied only by cinema: highest in first-run city center cinemas and lowest in sixth-run ramshackle neighborhood cinemas. Cinemas used films to produce ‘spectator-hours’: a five-hundred-seat cinema showing one hour of film produced five hundred spectator-hours of entertainment. If it sold three hundred tickets, the other two hundred spectator-hours produced perished.

Because film was an intermediate product, and a capital good at that, international competition could not be on price alone, just as sales of machines depend on the price/performance ratio. If we take a film’s ‘capacity to sell spectator-hours’ (hereafter its selling capacity) as proportional to production costs, a low-budget producer could not simply mark down a film’s rental price in line with its quality in order to make a sale; even at a price of zero, some low-budget films could not be sold. The reasons were twofold.

First, because cinemas had mostly fixed costs and few variable costs, a film’s selling capacity needed to be at least as large as the cinema’s fixed costs plus its rental price. A seven-hundred-seat cinema with a production capacity of 39,200 spectator-hours a week, weekly fixed costs of five hundred dollars, and an average admission price of five cents per spectator-hour needed a film selling at least ten thousand spectator-hours, and would not be prepared to pay anything for that (marginal) film, because it only just recouped fixed costs. Films thus needed a minimum selling capacity to cover cinema fixed costs. Producers could price down low-budget films only to just above this threshold; films with a lower expected selling capacity could not be sold at any price.
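The threshold arithmetic of this example can be laid out in a short sketch. The figures are the ones given above; the 56 operating hours per week are implied by the 39,200 spectator-hour capacity of a 700-seat cinema.

```python
# Breakeven selling capacity for the hypothetical cinema in the text.

SEATS = 700
HOURS_PER_WEEK = 56   # implied: 700 seats x 56 hours = 39,200 spectator-hours
PRICE = 0.05          # dollars per spectator-hour
FIXED_COSTS = 500.0   # dollars per week

capacity = SEATS * HOURS_PER_WEEK   # 39,200 spectator-hours
breakeven = FIXED_COSTS / PRICE     # 10,000 spectator-hours

print(f"capacity: {capacity:,} spectator-hours, breakeven: {breakeven:,.0f}")
# A film expected to sell fewer than 10,000 spectator-hours could not be
# rented at any positive price: it would not even cover fixed costs.
```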

This reasoning assumes that a film’s selling capacity is known ex ante. A main feature distinguishing foreign markets from domestic ones was that uncertainty was markedly lower: a film’s domestic launch revealed its audience appeal, and each subsequent country added further information. A film’s audience appeal was not perfectly correlated across countries, but uncertainty was reduced: for various companies, correlations between foreign and domestic revenues for entire film portfolios fluctuated between 0.60 and 0.95 (Bakker 2004b). Given the riskiness of film production, this reduction in uncertainty was undoubtedly important.

The second reason for limited price competition was opportunity cost, given cinemas’ production capacities. If the hypothetical cinema obtained a high-capacity film for a weekly rental of twelve hundred dollars which sold all 39,200 spectator-hours, the cinema made a profit of $260 (($0.05 × 39,200) − $1,200 − $500 = $260). If a film with half the budget and, we assume, half the selling capacity rented for half the price, the cinema owner would lose $120 (($0.05 × 19,600) − $600 − $500 = −$120). The cinema owner would therefore pay no more than $220 for the lower-budget film, given that the high-budget film was available (($0.05 × 19,600) − $220 − $500 = $260). So a film with half the selling capacity of the high-capacity film needed to sell for under a fifth of the latter’s price even to make a transaction possible.
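The same calculation in code: profit() reproduces the text’s arithmetic, and the last step solves for the rental at which the cinema is indifferent between the two films.

```python
# Opportunity-cost calculation for the hypothetical cinema in the text.

PRICE, FIXED = 0.05, 500.0

def profit(spectator_hours: float, rental: float) -> float:
    return PRICE * spectator_hours - rental - FIXED

profit_high = profit(39_200, 1_200)                    # $260, the high-capacity film
print(profit(19_600, 600))                             # -$120 at half the rental
max_rental_low = PRICE * 19_600 - FIXED - profit_high  # solve profit(19_600, r) = 260
print(max_rental_low, max_rental_low / 1_200)          # $220, about 0.18 of the price
```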

These sharply increasing returns to selling capacity made the setting of production outlays important, as a right price/capacity ratio was crucial to win foreign markets.

How Films Became Branded Products

To make sure film revenues rose above cinema fixed costs, film companies transformed films into branded products. With the emergence of the feature film, they started to pay large sums to actors, actresses and directors, and for the rights to famous plays and novels. This is still a major characteristic of the film industry today, and one that fascinates many people. Yet the huge sums paid for stars and stories are not as irrational and haphazard as they may sometimes seem. They may in fact be just as ‘rational’, and have just as quantifiable a return, as direct spending on marketing and promotion (Bakker 2001a).

To secure an audience, film producers borrowed branding techniques from other consumer goods industries, but the short product life-cycle forced them to extend the brand beyond one product (using trademarks or stars), to buy existing ‘brands’ such as famous plays or novels, and to deepen the product life-cycle by licensing their brands.

Thus, the main value of stars and stories lay not in their ability to predict successes, but in their services as giant ‘publicity machines’ which optimized advertising effectiveness by rapidly amassing high levels of brand-awareness. After a film’s release, information such as word-of-mouth and reviews would affect its success. The young age at which stars reached their peak, and the disproportionate income distribution even among the superstars, confirm that stars were paid for their ability to generate publicity. Likewise, because ‘stories’ were paid several times as much as original screenplays, they were at least partially bought for their popular appeal (Bakker 2001a).

Stars and stories signaled a film’s qualities to some extent. Consumer preferences confirm that stars and stories were the main reasons to see a film. Further, the fame of stars was distributed disproportionately, possibly even twice as unequally as income. Film companies, aided by long-term contracts, probably captured part of the rent on their stars’ popularity. Gradually these companies specialized in developing and leasing their ‘instant brands’ to other consumer goods industries in the form of merchandising.

From the late 1930s onwards, the Hollywood studios used the new scientific market research techniques of George Gallup to track continuously the public’s awareness of their major stars (Bakker 2003). Figure 4 is based on one such graph used by Hollywood. It shows that Lana Turner was a rising star, that Gable was consistently a top star, and that James Stewart’s popularity was high but volatile. Stewart was eleven percentage points more popular among the richest consumers than among the poorest, while Lana Turner’s popularity differed by only a few percentage points. Additional segmentation by city size also seemed to matter, since substantial differences were found: Clark Gable was ten percentage points more popular in small cities than in large ones. Of the richest consumers, 51 percent wanted to see a movie starring Gable, yet they constituted just 14 percent of Gable’s market; of the poorest consumers, 57 percent wanted to see Gable, and they constituted 34 percent of his market. The increases in Gable’s popularity roughly coincided with his releases, suggesting that while producers used Gable partly for the brand-awareness of his name, each use (film) subsequently increased or maintained that awareness in what seems to have been a self-reinforcing process.
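The segmentation arithmetic is worth spelling out: a star can be most popular within a small segment yet draw most of his audience from a large one. In the sketch below, the two popularity rates for Gable are the ones cited above; the segment sizes and the middle segment are hypothetical, chosen only to illustrate the mechanics.

```python
# How a segment's share of a star's market follows from segment size and
# within-segment popularity. Sizes are hypothetical; the 51% and 57%
# popularity rates are from the Gallup-style figures cited in the text.

segments = {                    # (share of all consumers, share wanting a Gable film)
    "richest": (0.10, 0.51),    # size assumed; popularity from the text
    "middle":  (0.55, 0.55),    # both figures assumed for illustration
    "poorest": (0.35, 0.57),    # size assumed; popularity from the text
}

fans = {name: size * pop for name, (size, pop) in segments.items()}
total = sum(fans.values())
for name, f in fans.items():
    print(f"{name}: {f / total:.0%} of Gable's market")
```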

Figure 4

Popularity of Clark Gable, James Stewart and Lana Turner among U.S. respondents

April 1940 – October 1942, in percentage

Source: Audience Research Inc.; Bakker 2003.

The Film Industry’s Contribution to Economic Growth and Welfare

By the late 1930s, cinema had become an important mass entertainment industry. Nearly everyone in the Western world went to the cinema, many at least once a week. Cinema had made possible a massive growth in productivity in the entertainment industry, and thereby disproved the notion of some economists that productivity growth in certain service industries is inherently impossible. Between 1900 and 1938, the output of the entertainment industry, measured in spectator-hours, grew substantially in the U.S., Britain and France, at rates varying from three to eleven percent per year over nearly forty years (Table 1). Output per worker increased from 2,453 spectator-hours in 1900 to 34,879 in 1938 in the U.S.; in Britain it increased from 16,404 to 37,537 spectator-hours, and in France from 1,575 to 8,175. This phenomenal growth can be explained partly by the addition of more capital (in the form of film technology and film production outlays) and partly by producing more efficiently with the existing amounts of capital and labor. The increase in efficiency (‘total factor productivity’) varied from about one percent per year in Britain to over five percent in the U.S., with France somewhere in between. In all three countries, this increase in efficiency was at least one and a half times the efficiency increase of the economy as a whole; for the U.S. it was as much as five times, and for France more than three times, the national increase in efficiency (Bakker 2004a).
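The growth rates implied by these output-per-worker figures can be checked directly. A minimal sketch, assuming steady exponential growth over the 38 years:

```python
# Compound annual growth of labor productivity implied by the
# spectator-hour figures cited in the text for 1900-1938.

figures = {                # output per worker in spectator-hours, 1900 and 1938
    "U.S.":    (2_453, 34_879),
    "Britain": (16_404, 37_537),
    "France":  (1_575, 8_175),
}

YEARS = 38
for country, (start, end) in figures.items():
    cagr = (end / start) ** (1 / YEARS) - 1
    print(f"{country}: {cagr:.1%} per year")
# Prints roughly 7.2% for the U.S., 2.2% for Britain and 4.4% for France,
# consistent with the ranking of efficiency gains described above.
```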

Another noteworthy feature is that labor productivity in entertainment varied less across countries in the late 1930s than in 1900. Part of the reason is that cinema technology made entertainment partially tradable, forcing productivity in similar directions in all countries: the tradable part of the entertainment industry now exerted competitive pressure on the non-tradable part (Bakker 2004a). It is therefore not surprising that cinema produced the lowest efficiency increase in Britain, which already had a well-developed and competitive entertainment industry (with the highest labor and capital productivity in both 1900 and 1938), and higher efficiency increases in the U.S. and, to a lesser extent, France, whose entertainment industries were less well developed in 1900.

Another way to measure the contribution of film technology to the economy in the late 1930s is the social savings methodology. If we assume that cinema did not exist and that all demand for entertainment (measured in spectator-hours) had to be met by live entertainment, we can calculate the extra cost to society, and thus the amount saved by film technology. In the U.S., these social savings amounted to as much as 2.2 percent of GDP ($2.5 billion); in France to just 1.4 percent ($0.16 billion); and in Britain to only 0.3 percent ($0.07 billion).
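In skeleton form, social savings are simply (unit cost of the live substitute minus unit cost of cinema) times the spectator-hours actually consumed. The numbers below are placeholders, not Bakker’s underlying data, chosen only so the result lands on the order of magnitude of the U.S. figure:

```python
# Social-savings logic in one line, with hypothetical inputs.

spectator_hours = 10e9   # annual cinema consumption, hypothetical
price_cinema = 0.05      # dollars per spectator-hour, hypothetical
price_live = 0.30        # unit cost of the live-entertainment substitute, hypothetical

social_savings = (price_live - price_cinema) * spectator_hours
print(f"${social_savings / 1e9:.1f} billion saved by film technology")
# -> $2.5 billion, the order of magnitude of the U.S. estimate cited above.
```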

A third way to look at the contribution of film technology to the economy is the consumer surplus generated by cinema. Unlike the TFP and social savings techniques above, which treat cinema as a substitute for live entertainment, this approach assumes that cinema was a wholly new good, so that the entire consumer surplus it generated is ‘new’ and would not have existed without cinema. For an individual consumer, the surplus is the difference between the price she was willing to pay and the ticket price she actually paid. This difference varies from consumer to consumer, but with econometric techniques one can estimate the sum of individual surpluses for an entire country. The resulting national consumer surpluses for entertainment varied from about a fifth of total entertainment expenditure in the U.S. to about half in Britain and as much as three quarters in France.
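A stylized illustration of the idea, under the simplest possible assumption of a linear demand curve; all three input numbers are hypothetical, and the econometric estimates behind the figures above are of course more elaborate:

```python
# Consumer surplus as the triangle above the price paid, assuming linear demand.

price = 0.05        # actual ticket price per spectator-hour, hypothetical
quantity = 10e9     # spectator-hours bought at that price, hypothetical
choke_price = 0.09  # price at which demand would fall to zero, hypothetical

consumer_surplus = 0.5 * (choke_price - price) * quantity
expenditure = price * quantity
print(consumer_surplus / expenditure)   # 0.4: surplus as a share of spending,
                                        # within the range cited in the text
```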

All these measures show that by the late 1930s cinema was making an essential contribution to total welfare as well as to the entertainment industry’s productivity.

Vertical Disintegration

After the Second World War, the Hollywood film industry disintegrated vertically: production, distribution and exhibition became separate activities that were no longer always owned by the same organization. Three main causes brought about this vertical disintegration. First, the U.S. Supreme Court forced the studios to divest their cinema chains in 1948. Second, changes in the socio-demographic structure of the U.S. brought about a shift towards entertainment within the home: many young couples moved to the new suburbs and wanted to stay home for entertainment. Initially they mainly used radio for this purpose; later they switched to television (Gomery 1985). Third, television broadcasting in itself (even without the socio-demographic changes that increased demand for it) constituted a new distribution channel for audiovisual entertainment and thus decreased the scarcity of distribution capacity. Television took over the focus on the lowest common denominator from radio and cinema, while the latter two differentiated their output and started to focus more on specific market segments.

Figure 5

Real Cinema Box Office Revenue, Real Ticket Price and Number of Screens in the U.S., 1945-2002

Note: The values are in dollars of 2002, using the EH.Net consumer price deflator.

Source: Adapted from Vogel 2004 and Robertson 2001.

The consequence was a sharp fall in real box office revenue in the decade after the war (Figure 5). After the mid-1950s real revenue stabilized and remained roughly constant, with some fluctuations, until the mid-1990s. The decline in the number of screens was more limited, and after 1963 it increased again steadily, reaching nearly twice the 1945 level in the 1990s; since the 1990s there have been more movie screens in the U.S. than ever before. The proliferation of screens, coinciding with declining capacity per screen, facilitated market segmentation. Revenue per screen nearly halved in the decade after the war, rebounded during the 1960s, and then began a long, steady decline from 1970 onwards. The real price of a cinema ticket was quite stable until the 1960s, when it more than doubled; since the early 1970s it has been declining again, and nowadays the real admission price is about what it was in 1965.

It was in this adverse post-war climate that the vertical disintegration unfolded, at three levels. First, and most obviously, the Hollywood studios divested their cinema chains. Second, they outsourced part of their film production and most of their production factors to independent companies: the studios now produced only part of the films they distributed, replaced the long-term, seven-year contracts with star actors with per-film contracts, and sold off part of their studio facilities, renting them back for individual films. Third, the Hollywood studios’ main business became film distribution and financing. They specialized in planning and assembling portfolios of films, contracting and financing most of them, and marketing and distributing them worldwide.

These developments had three important effects. First, production by a few large companies was replaced by production by many small, flexibly specialized companies. Southern California became an industrial district for the film industry, harboring an intricate network of such businesses, from set design companies and costume makers to special effects firms and equipment rental outfits (Storper and Christopherson 1989). Only at the level of distribution and financing did concentration remain high. Second, films became more differentiated and tailored to specific market segments; they were now aimed at a younger and more affluent audience. Third, the European film market gained in importance: because the socio-demographic changes (suburbanization) and the advent of television happened somewhat later in Europe, the drop in cinema attendance also came later there. As a result, the Hollywood studios off-shored a large chunk of their production, at times over half, to Europe in the 1960s. This was stimulated by lower European production costs, by difficulties in repatriating foreign film revenues, and by the vertical disintegration in California, which had severed the studios’ ties with their production units and facilitated outside contracting.

European production companies could adapt better to changes in post-war demand because they were already flexibly specialized. The British film production industry, for example, had been fragmented almost from its emergence in the 1890s. In the late 1930s distribution became concentrated, mainly through the efforts of J. Arthur Rank, while the production sector, a network of flexibly specialized companies in and around London, boomed. After the war, the drop in admissions followed the U.S. pattern with about a ten-year delay (Figure 6). The drop in the number of screens showed the same lag but was more severe: about two-thirds of British cinema screens disappeared, versus only one-third in the U.S. In France, film production had disintegrated rapidly and chaotically after the First World War into a network of numerous small companies, while a few large firms dominated distribution and production finance. The result was a burgeoning industry, actually one of the fastest-growing French industries of the 1930s.

Figure 6

Admissions and Number of Screens in Britain, 1945-2005

Source: Screen Digest/Screen Finance/British Film Institute and Robertson 2001.

Several European companies attempted to (re-)enter international film distribution: Rank in the 1930s and 1950s, the International Film Finance Corporation in the 1960s, Gaumont in the 1970s, PolyGram in the 1970s and again in the 1990s, and Cannon in the 1980s. All of them failed in terms of long-run survival, even if some made profits in some years. The only post-war entry strategy successful in terms of survival was the direct acquisition of a Hollywood studio (Bakker 2000).

The Come-Back of Hollywood

From the mid-1970s onwards, the Hollywood studios revived and the slide in box office revenue was brought to a standstill. Revenues were stabilized by the joint effect of seven factors. First, the blockbuster movie increased cinema attendance; heavily marketed and supported by intensive television advertising, Jaws was one of the first of these movies and an enormous success. Second, the U.S. film industry received several kinds of tax breaks from the early 1970s onwards, which remained in force until the mid-1980s, when Hollywood was in good shape again. Third, coinciding with the blockbuster and the tax breaks, film budgets increased substantially, resulting in higher perceived quality and a larger quality gap with television, drawing more consumers into the cinema. Fourth, the rise of multiplex cinemas, cinemas with several screens, increased consumer choice and the appeal of cinema by offering more variety within a single venue, narrowing the gap with television in this respect. Fifth, one could argue that the flexible specialization of the California film industry was completed in the early 1970s, making the industry able to adapt more flexibly to changes in the market; MGM’s sale of its studio complex in 1970 marked the definitive end of an era. Sixth, new income streams from video sales and rentals and from cable television increased the revenues a high-quality film could generate. Seventh, European broadcasting deregulation substantially increased the demand for films by television stations.

From the 1990s onwards, further growth was driven by newer markets in Eastern Europe and Asia. Film industries outside the West also grew substantially, such as those of Japan, Hong Kong, India and China. At the same time, the European Union started a large-scale subsidy program for its audiovisual industry, with mixed economic effects. By 1997, ten years after the start of the program, a film made in the European Union cost 500,000 euros on average, was seventy to eighty percent state-financed, and grossed 800,000 euros worldwide, reaching an audience of 150,000 persons. In contrast, the average American film cost fifteen million euros, was nearly one hundred percent privately financed, grossed 58 million euros, and reached 10.5 million persons (Dale 1997). This seventy-fold difference in performance is remarkable. Even measured as gross return on investment or gross margin, the U.S. still held a fivefold and a twofold lead over Europe, respectively.[1] In few other industries does such a pronounced difference exist.
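The ratios in footnote 1 follow directly from these per-film averages; a quick recomputation:

```python
# Recompute the footnote's ratios from the per-film averages cited above
# (costs and grosses in euros, audiences in persons; Dale 1997).

eu = {"cost": 500_000,    "gross": 800_000,    "audience": 150_000}
us = {"cost": 15_000_000, "gross": 58_000_000, "audience": 10_500_000}

for name, f in (("EU", eu), ("US", us)):
    roi = f["gross"] / f["cost"] - 1                # gross return on investment
    margin = (f["gross"] - f["cost"]) / f["gross"]  # gross margin
    print(name, f"ROI {roi:.0%}, margin {margin:.0%}, "
                f"cost/viewer {f['cost'] / f['audience']:.2f}, "
                f"revenue/viewer {f['gross'] / f['audience']:.2f}")
# EU: ROI 60%, margin 38%, 3.33 and 5.33 euros per viewer.
# US: ROI 287%, margin 74%, 1.43 and 5.52 euros per viewer.
```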

During the 1990s, the film industry moved into television broadcasting. In Europe, broadcasters often co-funded small-scale boutique film production. In the U.S., the Hollywood studios started to merge with broadcasters. In the 1950s the studios had had difficulty obtaining broadcasting licenses because their reputation had been compromised by the antitrust actions; they had to wait forty years before they could finally complete what they had intended.[2] Disney, for example, bought the ABC network, Paramount’s owner Viacom bought CBS, and General Electric, owner of NBC, bought Universal. At the same time, the feature film industry became more connected to other entertainment industries, such as videogames, theme parks and musicals. With video game revenues now exceeding films’ box office revenues, it seems likely that feature films will become simply the flagship part of a large entertainment supply system exploiting the intellectual property in feature films across many formats and markets.

Conclusion

The take-off of the film industry in the early twentieth century was driven mainly by changes in demand. Cinema industrialized entertainment by standardizing it, automating it and making it tradable. After its early years, the industry experienced a quality race that led to increasing industrial concentration. Only later did geographical concentration take place, in Southern California. Cinema made a substantial contribution to productivity and total welfare, especially before television. After television, the industry experienced vertical disintegration, the flexible specialization of production, and a self-reinforcing process of increasing distribution channels and capacity as well as market growth. Cinema, then, was not only the first in a line of media industries that industrialized entertainment, but also the first in a series of international industries that industrialized services. The evolution of the film industry may thus give insight into technological change and its attendant welfare gains in many service industries to come.

Selected Bibliography

Allen, Robert C. Vaudeville and Film, 1895-1915. New York: Arno Press, 1980.

Bächlin, Peter. Der Film als Ware. Basel: Burg-Verlag, 1945.

Bakker, Gerben. “American Dreams: The European Film Industry from Dominance to Decline.” EUI Review (2000): 28-36.

Bakker, Gerben. “Stars and Stories: How Films Became Branded Products.” Enterprise and Society 2, no. 3 (2001a): 461-502.

Bakker, Gerben. Entertainment Industrialised: The Emergence of the International Film Industry, 1890-1940. Ph.D. dissertation, European University Institute, 2001b.

Bakker, Gerben. “Building Knowledge about the Consumer: The Emergence of Market Research in the Motion Picture Industry.” Business History 45, no. 1 (2003): 101-27.

Bakker, Gerben. “At the Origins of Increased Productivity Growth in Services: Productivity, Social Savings and the Consumer Surplus of the Film Industry, 1900-1938.” Working Papers in Economic History, No. 81, Department of Economic History, London School of Economics, 2004a.

Bakker, Gerben. “Selling French Films on Foreign Markets: The International Strategy of a Medium-Sized Film Company.” Enterprise and Society 5 (2004b): 45-76.

Bakker, Gerben. “The Decline and Fall of the European Film Industry: Sunk Costs, Market Size and Market Structure, 1895-1926.” Economic History Review 58, no. 2 (2005): 311-52.

Caves, Richard E. Creative Industries: Contracts between Art and Commerce. Cambridge, MA: Harvard University Press, 2000.

Christopherson, Susan, and Michael Storper. “Flexible Specialization and Regional Agglomerations: The Case of the U.S. Motion Picture Industry.” Annals of the Association of American Geographers 77, no. 1 (1987).

Christopherson, Susan, and Michael Storper. “The Effects of Flexible Specialization on Industrial Politics and the Labor Market: The Motion Picture Industry.” Industrial and Labor Relations Review 42, no. 3 (1989): 331-47.

Gomery, Douglas. The Coming of Sound to the American Cinema: A History of the Transformation of an Industry. Ph.D. dissertation, University of Wisconsin, 1975.

Gomery, Douglas. “The Coming of Television and the ‘Lost’ Motion Picture Audience.” Journal of Film and Video 37, no. 3 (1985): 5-11.

Gomery, Douglas. The Hollywood Studio System. London: MacMillan/British Film Institute, 1986; reprinted 2005.

Kraft, James P. Stage to Studio: Musicians and the Sound Revolution, 1890-1950. Baltimore: Johns Hopkins University Press, 1996.

Krugman, Paul R., and Maurice Obstfeld. International Economics: Theory and Policy, sixth edition. Reading, MA: Addison-Wesley, 2003.

Low, Rachael, and Roger Manvell. The History of the British Film, 1896-1906. London: George Allen & Unwin, 1948.

Michaelis, Anthony R. “The Photographic Arts: Cinematography.” In A History of Technology, Vol. V: The Late Nineteenth Century, c. 1850 to c. 1900, edited by Charles Singer, 734-51. Oxford: Clarendon Press, 1958; reprint 1980.

Mokyr, Joel. The Lever of Riches: Technological Creativity and Economic Progress. Oxford: Oxford University Press, 1990.

Musser, Charles. The Emergence of Cinema: The American Screen to 1907. The History of American Cinema, Vol. I. New York: Scribner, 1990.

Sedgwick, John. “Product Differentiation at the Movies: Hollywood, 1946-65.” Journal of Economic History 63 (2002): 676-705.

Sedgwick, John, and Michael Pokorny. “The Film Business in Britain and the United States during the 1930s.” Economic History Review 58, no. 1 (2005): 79-112.

Sedgwick, John, and Mike Pokorny, editors. An Economic History of Film. London: Routledge, 2004.

Thompson, Kristin. Exporting Entertainment: America in the World Film Market, 1907-1934. London: British Film Institute, 1985.

Vogel, Harold L. Entertainment Industry Economics: A Guide for Financial Analysis. Cambridge: Cambridge University Press, Sixth Edition, 2004.

Gerben Bakker may be contacted at gbakker at essex.ac.uk


[1] Gross return on investment, disregarding interest costs and distribution charges, was 60 percent for European versus 287 percent for U.S. films. Gross margin was 37 percent for European versus 74 percent for U.S. films. Costs per viewer were 3.33 versus 1.43 euros; revenues per viewer were 5.30 versus 5.52 euros.

[2] The author is indebted to Douglas Gomery for this point.

Citation: Bakker, Gerben. “The Economic History of the International Film Industry”. EH.Net Encyclopedia, edited by Robert Whaples. February 10, 2008. URL http://eh.net/encyclopedia/the-economic-history-of-the-international-film-industry/