
The Economic History of Taiwan

Kelly Olds, National Taiwan University

Geography

Taiwan is a sub-tropical island, roughly 180 miles long, located less than 100 miles off the coast of China’s Fujian province. Most of the island is covered with rugged mountains that rise to over 13,000 feet. These mountains rise directly out of the ocean along the eastern shore facing the Pacific, so this shore and the central parts of the island are sparsely populated. Throughout its history, most of Taiwan’s people have lived on the Western Coastal Plain that faces China. This plain is crossed by east-west rivers that occasionally bring floods of water down from the mountains, creating broad, boulder-strewn flood plains. Until modern times, these rivers made north-south travel costly and limited the island’s economic integration. The most important river is the Chuo Shuei-Hsi (between present-day Changhua and Yunlin counties), which has long been an important economic and cultural divide.

Aboriginal Economy

Little is known about Taiwan prior to the seventeenth century. When the Dutch came to the island in 1622, they found a population of roughly 70,000 Austronesian aborigines, at least 1,000 Chinese and a smaller number of Japanese. Aborigine women practiced subsistence agriculture while aborigine men harvested deer for export. The Chinese and Japanese population was primarily male and transient. Some of the Chinese were fishermen who congregated at the mouths of Taiwanese rivers, but most of the Chinese and Japanese were merchants. Chinese merchants usually lived in aborigine villages and acted as middlemen, exporting deerskins, primarily to Japan, and importing salt and various manufactures. The harbor alongside which the Dutch built their first fort (in present-day Tainan City) was already an established rendezvous for Chinese and Japanese trade when the Dutch arrived.

Taiwan under the Dutch and Koxinga

The Dutch took control of most of Taiwan in a series of campaigns that lasted from the mid-1630s to the mid-1640s. The Dutch taxed the deerskin trade, hired aborigine men as soldiers and tried to introduce new forms of agriculture, but otherwise interfered little with the aborigine economy. The Tainan harbor grew in importance as an international entrepot. The most important change in the economy was an influx of about 35,000 Chinese to the island. These Chinese developed land, mainly in southern Taiwan, and specialized in growing rice and sugar. Sugar became Taiwan’s primary export. One of the most important Chinese investors in the Taiwanese economy was the leader of the Chinese community in Dutch Batavia (on Java) and during this period the Chinese economy on Taiwan bore a marked resemblance to the Batavian economy.

Koxinga, a Chinese-Japanese sea lord, drove the Dutch off the island in 1661. Under the rule of Koxinga and his heirs (1661-1683), Chinese settlement continued to spread in southern Taiwan. On the one hand, Chinese civilians made the crossing to flee the chaos that accompanied the Ming-Qing transition. On the other hand, Koxinga and his heirs brought over soldiers who were required to clear land and farm when they were not being used in wars. The Chinese population probably rose to about 120,000. Taiwan’s exports changed little, but the Tainan harbor lost importance as a center of international trade, as much of this trade now passed through Xiamen (Amoy), a port across the strait in Fujian that was also under the control of Koxinga and his heirs.

Taiwan under Qing Rule

The Qing dynasty defeated Koxinga’s grandson and took control of Taiwan in 1683. Taiwan remained part of the Chinese empire until China ceded the island to Japan in 1895. The Qing government originally saw control of Taiwan as an economic burden that had to be borne in order to keep the island out of the hands of pirates. In the first year of occupation, the Qing government shipped as many Chinese residents as possible back to the mainland. The island lost perhaps one-third of its Chinese population. Travel to Taiwan by all but male migrant workers was illegal until 1732, and this prohibition was reinstated off and on until it was permanently rescinded in 1788. Nevertheless, the island’s Chinese population grew about two percent per year in the century following the Qing takeover, through both illegal immigration and natural increase. The Qing government feared the expense of Chinese-aborigine confrontations and tried futilely to restrain Chinese settlement and keep the populations apart. Chinese pioneers, however, constantly pushed the bounds of Chinese settlement northward and eastward, and the aborigines were forced to adapt. Some groups permanently leased their land to Chinese settlers. Others learned Chinese farming skills and eventually assimilated, or else moved toward the mountains where they continued hunting, learned to raise cattle or served as Qing soldiers. Due to the lack of Chinese women, intermarriage was also common.

Individual entrepreneurs or land companies usually organized Chinese pioneering enterprises. These people obtained land from aborigines or the government, recruited settlers, supplied loans to the settlers and sometimes invested in irrigation projects. Large land developers often lived in the village during the early years but moved to a city after the village was established. They remained responsible for paying the land tax and they received “large rents” from the settlers amounting to 10-15 percent of the expected harvest. However, they did not retain control of land usage or have any say in land sales or rental. The “large rents” were, in effect, a tax paid to a tax farmer who shared this revenue with the government. The payers of the large rents were the true owners who controlled the land. These people often chose to rent out their property to tenants who did the actual farming and paid a “small rent” of about 50 percent of the expected harvest.
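The division of the expected harvest described above can be sketched with illustrative numbers. The 12.5 percent large rent below is an assumed midpoint of the 10-15 percent range given in the text; all figures are purely illustrative.

```python
# Illustrative split of an expected harvest under the Qing-era
# "large rent" / "small rent" system. Shares come from the text;
# the 12.5% large rent is an assumed midpoint of the 10-15% range.
expected_harvest = 100.0   # normalize the expected harvest to 100 units

large_rent = 0.125 * expected_harvest   # paid by the owner to the large-rent holder (tax farmer)
small_rent = 0.50 * expected_harvest    # paid by the tenant to the owner

tenant_keeps = expected_harvest - small_rent   # what the actual farmer retains
owner_net = small_rent - large_rent            # the owner's net after the large rent

print(tenant_keeps, owner_net, large_rent)     # 50.0 37.5 12.5
```

On these assumed shares, the tenant retains about half the expected harvest, the landowner nets about 37.5 percent, and the large-rent holder takes the rest.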

Chinese pioneers made extensive use of written contracts, but government enforcement of contracts was minimal. In the pioneers’ homeland across the strait, protecting property and enforcing agreements were usually functions of the lineage. Being part of a strong lineage was crucial to economic success, and violent struggles among lineages were a problem endemic to south China. Taiwanese settlers had crossed the strait as individuals or in small groups and lacked strong lineages. Like other Chinese immigrants throughout the world, they created numerous voluntary associations based on place of residence, occupation, place of origin, surname and so on. These organizations substituted for lineages in protecting property and enforcing contracts, and violent conflict among these associations over land and water rights was frequent. Because of such property rights problems, land sales contracts often included the signatures not only of the owner, but also of his family and neighbors agreeing to the transfer. The difficulty of seizing collateral led to the common use of “conditional sales” as a means of borrowing money. Under the terms of a conditional sale, the lender immediately took control of the borrower’s property and retained the right to the property’s production in lieu of rent until the borrower paid back the loan. Since the borrower could wait indefinitely before repaying the loan, this created an awkward situation in which the person who controlled the land lacked permanent ownership and so had no incentive to invest in land improvements.

Taiwan prospered during a sugar boom in the early eighteenth century, but afterwards its sugar industry had a difficult time keeping up with advances in foreign production. Until the Japanese occupation in 1895, Taiwan’s sugar farms and sugar mills remained small-scale operations. The sugar industry was centered in the south of the island, and throughout the nineteenth century the southern population showed little growth and may have declined. By the end of the nineteenth century, the south of the island was poorer than the north, and its population was shorter in stature and had a lower life expectancy. The north was better suited to rice production, and the northern economy seems to have grown robustly. As the Chinese population moved into the foothills of the northern mountains in the mid-nineteenth century, it began growing tea, which added to the north’s economic vitality and became the island’s leading export during the last quarter of the nineteenth century. The tea industry’s most successful product was oolong tea, produced primarily for the U.S. market.

During the last years of Qing rule, Taiwan was made a full province of China, and some attempts were made to modernize the island by carrying out a land survey and building infrastructure. Taiwan’s first railroad was constructed, linking several cities in the north.

Taiwan under Japanese Rule

The Japanese gained control of Taiwan in 1895 after the Sino-Japanese War. After several years of suppressing both Chinese resistance and banditry, the Japanese began to modernize the island’s economy. A railroad was constructed running the length of the island and modern roads and bridges were built. A modern land survey was carried out. Large rents were eliminated and those receiving these rents were compensated with bonds. Ownership of approximately twenty percent of the land could not be established to Japanese satisfaction and was confiscated. Much of this land was given to Japanese conglomerates that wanted land for sugarcane. Several banks were established and reorganized irrigation districts began borrowing money to make improvements. Since many Japanese soldiers had died of disease, improving the island’s sanitation and disease environment was also a top priority.

Under the Japanese, Taiwan remained an agricultural economy. Although sugarcane continued to be grown mainly on family farms, sugar processing was modernized and sugar once again became Taiwan’s leading export. During the early years of modernization, native Taiwanese sugar refiners remained important but, largely due to government policy, Japanese refiners holding regional monopsony power came to control the industry. Taiwanese sugar remained uncompetitive on the international market, but was sold duty free within the protected Japanese market. Rice, also bound for the protected Japanese market, displaced tea to become the second major export crop. Altogether, almost half of Taiwan’s agricultural production was being exported in the 1930s. After 1935, the government began encouraging investment in non-agricultural industry on the island. The war that followed was a time of destruction and economic collapse.

Growth in Taiwan’s per-capita economic product during this colonial period roughly kept up with that of Japan. Population also grew quickly as health improved and death rates fell. The native Taiwanese population’s per-capita consumption grew about one percent per year, slower than the growth in consumption in Japan, but greater than the growth in China. Better property rights enforcement, population growth, transportation improvements and protected agricultural markets caused the value of land to increase quickly, but real wage rates increased little. Most Taiwanese farmers did own some land but since the poor were more dependent on wages, income inequality increased.

Taiwan Under Nationalist Rule

Taiwan’s economy recovered from the war more slowly than the Japanese economy. The Chinese Nationalist government took control of Taiwan in 1945 and lost control of its original territory on the mainland in 1949. The Japanese population, which had grown to over five percent of Taiwan’s population (and a much greater proportion of its urban population), was shipped to Japan, and the new government confiscated Japanese property, creating large public corporations. The late 1940s was a period of civil war in China, and Taiwan also experienced violence and hyperinflation. In 1949, soldiers and refugees from the mainland flooded onto the island, increasing Taiwan’s population by about twenty percent. Mainlanders tended to settle in cities and were predominant in the public sector.

In the 1950s, Taiwan was dependent on American aid, which allowed its government to maintain a large military without overburdening the economy. Taiwan’s agricultural economy was left in shambles by the events of the 1940s. It had lost its protected Japanese markets and the low-interest-rate formal-sector loans to which even tenant farmers had access in the 1930s were no longer available. With American help, the government implemented a land reform program. This program (1) sold public land to tenant farmers, (2) limited rent to 37.5% of the expected harvest and (3) severely restricted the size of individual landholdings forcing landlords to sell most of their land to the government in exchange for stocks and bonds valued at 2.5 times the land’s annual expected harvest. This land was then redistributed. The land reform increased equality among the farm population and strengthened government control of the countryside. Its justice and effect on agricultural investment and productivity are still hotly debated.
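The arithmetic of the land reform terms above can be made concrete. Assuming the 37.5% rent ceiling and the 2.5-harvest compensation, the bonds a landlord received were worth roughly six and two-thirds years of ceiling-rate rent (ignoring discounting and risk, which in practice mattered greatly to how landlords valued the securities):

```python
# Back-of-the-envelope arithmetic for the 1950s land reform terms.
annual_expected_harvest = 1.0   # normalize one year's expected harvest to 1

rent_ceiling = 0.375            # rent capped at 37.5% of the expected harvest
compensation = 2.5 * annual_expected_harvest   # landlords paid 2.5 harvests in stocks and bonds

# Years of ceiling-rate rent the compensation represents (no discounting)
years_of_rent = compensation / (rent_ceiling * annual_expected_harvest)
print(round(years_of_rent, 2))  # 6.67
```

This low multiple relative to the income stream being expropriated is one reason the reform’s justice remains debated.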

High-speed growth accompanied by rapid industrialization began in the late 1950s. Taiwan became known for its cheap manufactured exports produced by small enterprises bound together by flexible sub-contracting networks. Taiwan’s postwar industrialization is usually attributed to (1) the decline in land per capita, (2) the change in export markets and (3) government policy. Between 1940 and 1962, Taiwan’s population increased at an annual rate of slightly over three percent, cutting the amount of land per capita in half. Taiwan’s agricultural exports had been sold tariff-free at higher-than-world-market prices in pre-war Japan, while Taiwan’s only important pre-war manufactured export, imitation Panama hats, faced a 25% tariff in the U.S., their primary market. After the war, agricultural products generally faced the greatest trade barriers. As for government policy, Taiwan went through a period of import substitution in the 1950s, followed by promotion of manufactured exports in the 1960s and 1970s. Subsidies were available for certain manufactures under both regimes. During the import substitution regime, domestic manufactures were protected both by tariffs and by multiple overvalued exchange rates. Under the later export promotion regime, export processing zones were set up in which privileges were extended to businesses producing goods solely for export.
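The claim that population growth halved land per capita checks out with compound growth arithmetic. The exact rate is not given, so 3.2 percent a year is assumed here for “slightly over three percent”:

```python
# Verifying that population growth of "slightly over three percent" a year
# between 1940 and 1962 roughly halved land per capita (land area fixed).
growth_rate = 0.032          # assumed value for "slightly over three percent"
years = 1962 - 1940          # 22 years

population_factor = (1 + growth_rate) ** years   # total population growth multiple
land_per_capita_factor = 1 / population_factor   # land per person, relative to 1940

print(round(population_factor, 2), round(land_per_capita_factor, 2))
```

At that rate the population almost exactly doubles over the 22 years, so land per capita falls to about half its 1940 level.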

Historical research into the “Taiwanese miracle” has focused on government policy and its effects, but statistical data for the first few post-war decades is poor and the overall effect of the various government policies is unclear. During the 1960s and 1970s, real GDP grew about 10% (7% per capita) each year. Most of this growth can be explained by increases in factors of production. Savings rates began rising after the currency was stabilized and reached almost 30% by 1970. Meanwhile, primary education, in which 70% of Taiwanese children had participated under the Japanese, became universal, and students in higher education increased many-fold. Although recent research has emphasized the importance of factor growth in the Asian “miracle economies,” studies show that productivity also grew substantially in Taiwan.

Further Reading

Chang, Han-Yu and Ramon Myers. “Japanese Colonial Development Policy in Taiwan, 1895-1906.” Journal of Asian Studies 22, no. 4 (August 1963): 433-450.

Davidson, James. The Island of Formosa: Past and Present. London: Macmillan & Company, 1903.

Fei, John, et al. Growth with Equity: The Taiwan Case. New York: Oxford University Press, 1979.

Gardella, Robert. Harvesting Mountains: Fujian and the China Tea Trade, 1757-1937. Berkeley: University of California Press, 1994.

Ho, Samuel. Economic Development of Taiwan 1860-1970. New Haven: Yale University Press, 1978.

Ho, Yhi-Min. Agricultural Development of Taiwan, 1903-1960. Nashville: Vanderbilt University Press, 1966.

Ka, Chih-Ming. Japanese Colonialism in Taiwan: Land Tenure, Development, and Dependency, 1895-1945. Boulder: Westview Press, 1995.

Knapp, Ronald, editor. China’s Island Frontier: Studies in the Historical Geography of Taiwan. Honolulu: University Press of Hawaii, 1980.

Koo, Hui-Wen and Chun-Chieh Wang. “Indexed Pricing: Sugarcane Price Guarantees in Colonial Taiwan, 1930-1940.” Journal of Economic History 59, no. 4 (December 1999): 912-926.

Li, Kuo-Ting. The Evolution of Policy Behind Taiwan’s Development Success. New Haven: Yale University Press, 1988.

Mazumdar, Sucheta. Sugar and Society in China: Peasants, Technology, and the World Market. Cambridge, MA: Harvard University Asia Center, 1998.

Meskill, Johanna. A Chinese Pioneer Family: The Lins of Wu-feng, Taiwan, 1729-1895. Princeton, NJ: Princeton University Press, 1979.

Ng, Chin-Keong. Trade and Society: The Amoy Network on the China Coast 1683-1735. Singapore: Singapore University Press, 1983.

Olds, Kelly. “The Risk Premium Differential in Japanese-Era Taiwan and Its Effect.” Journal of Institutional and Theoretical Economics 158, no. 3 (September 2002): 441-463.

Olds, Kelly. “The Biological Standard of Living in Taiwan under Japanese Occupation.” Economics and Human Biology, 1 (2003): 1-20.

Olds, Kelly and Ruey-Hua Liu. “Economic Cooperation in Nineteenth-Century Taiwan.” Journal of Institutional and Theoretical Economics 156, no. 2 (June 2000): 404-430.

Rubinstein, Murray, editor. Taiwan: A New History. Armonk, NY: M.E. Sharpe, 1999.

Shepherd, John. Statecraft and Political Economy on the Taiwan Frontier, 1600-1800. Stanford: Stanford University Press, 1993.

Citation: Olds, Kelly. “The Economic History of Taiwan.” EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/the-economic-history-of-taiwan/

Economic History of Portugal

Luciano Amaral, Universidade Nova de Lisboa

Main Geographical Features

Portugal is the southwesternmost country of Europe. With the approximate shape of a vertical rectangle, it has a maximum length (north-south) of 561 km and a maximum width (east-west) of 218 km, and is delimited (in its north-south range) by the parallels 37° and 42° N and (in its east-west range) by the meridians 6° and 9.5° W. To the west, it faces the Atlantic Ocean, which separates it from the American continent by a few thousand kilometers. To the south, it also faces the Atlantic, but the distance to Africa is only a few hundred kilometers. To the north and the east, it shares land frontiers with Spain; together the two countries constitute the Iberian Peninsula, a landmass separated from France, and thus from the rest of the continent, by the Pyrenees. Two Atlantic archipelagos are also part of Portugal: the Azores, nine islands in the same latitudinal range as mainland Portugal but much further west, at longitudes between 25° and 31° W, and Madeira, two islands to the southwest of the mainland, at 16°-17° W and 32.5°-33° N.

The climate of mainland Portugal is temperate. Due to its southern position and proximity to the Mediterranean Sea, the country’s weather still presents some Mediterranean features. Temperature is, on average, higher than in the rest of the continent. Thanks to its elongated form, Portugal displays a significant variety of landscapes, and sometimes brisk climatic changes, for a country of such relatively small size. Following a classical division of the territory, it is possible to identify three main geographical regions: a southern half, with practically no mountains and a very hot and dry climate, and a northern half subdivided into two vertical halves, namely a north-interior region, mountainous, cool but relatively dry, and a north-coastal region, relatively mountainous, cool and wet. Portugal’s population is close to 10,000,000, in an area of about 92,000 square kilometers (35,500 square miles).

The Period before the Creation of Portugal

We can only talk of Portugal as a more or less clearly identified and separate political unit (although still far from a defined nation) from the eleventh or twelfth centuries onwards. The geographical area which constitutes modern Portugal was not, of course, an eventless void before that period. But scarcity of space allows only a brief examination of the earlier period, concentrating on its main legacy to future history.

Roman and Visigothic Roots

That legacy is overwhelmingly marked by the influence of the Roman Empire. Portugal owes to Rome its language (a descendant of Latin) and main religion (Catholicism), as well as its primary juridical and administrative traditions. Interestingly enough, little of the Roman heritage passed directly to the period of existence of Portugal as a proper nation. Momentous events filtered the transition. Romans first arrived in the Iberian Peninsula around the third century B.C., and kept their rule until the fifth century of the Christian era. Then, they succumbed to the so-called “barbarian invasions.” Of the various peoples that then roamed the Peninsula, certainly the most influential were the Visigoths, a people of Germanic origin. The Visigoths may be ranked as the second most important force in the shaping of future Portugal. The country owes them the monarchical institution (which lasted until the twentieth century), as well as the preservation both of Catholicism and (although substantially transformed) parts of Roman law.

Muslim Rule

The most spectacular episode following Visigothic rule was the Muslim invasion of the eighth century. Islam ruled in the Peninsula from then until the fifteenth century, although it occupied an increasingly smaller area from the ninth century onwards, as the Christian Reconquista repelled it with growing efficiency. Muslim rule set the area on a path different from the rest of Western Europe for a few centuries. However, apart from some ethnic traits, a few words in the lexicon, and certain agricultural, manufacturing and sailing techniques and knowledge (the latter of significant importance to the Portuguese naval discoveries), nothing of the magnitude of the Roman heritage was left in the Peninsula by Islam. This is particularly true of Portugal, where Muslim rule was less effective and shorter-lived than in the south of Spain. Perhaps the most important legacy of Muslim rule was, precisely, its tolerance towards the Roman heritage. Representative of that tolerance was the existence during the Muslim period of an ethnic group, the so-called moçárabe or mozarab population, made up of traditional residents who lived within Muslim communities, accepted Muslim rule and mixed with Muslim peoples, but kept their language and religion, i.e. some form of Latin and the Christian creed.

Modern Portugal is a direct result of the Reconquista, the Christian fight against Muslim rule in the Iberian Peninsula. That successful fight was followed by the period when Portugal as a nation came to existence. The process of creation of Portugal was marked by the specific Roman-Germanic institutional synthesis that constituted the framework of most of the country’s history.

Portugal from the Late Eleventh Century to the Late Fourteenth Century

Following the Muslim invasion, a small group of Christians kept their independence, settling in a northern area of the Iberian Peninsula called Asturias. Their resistance to Muslim rule rapidly transformed into an offensive military venture. During the eighth century, a significant part of northern Iberia was recovered for Christianity. This frontier, roughly cutting the peninsula in two halves, held firm until the eleventh century. Then the crusaders came, mostly from France and Germany, inserting the area into the overall European crusade movement. By the eleventh century, the original Asturian unit had been divided into two kingdoms, Leon and Navarra, which in turn were subdivided into three new political units: Castile, Aragon and the Condado Portucalense. The Condado Portucalense (the political unit at the origin of future Portugal) resulted from a donation, made in 1096 by the Leonese king to a crusader coming from Burgundy (France), Count Henry. He did not claim the title of king, a step taken only by his son, Afonso Henriques (generally accepted as the first king of Portugal), in 1139.

Condado Portucalense as the King’s “Private Property”

Political units such as the various peninsular kingdoms of that time must be seen as entities differing in many respects from current nations. Not only did their peoples lack any clear “national consciousness,” but the kings themselves did not rule on the sort of principle we tend to attribute to current rulers (whether democratic, autocratic or any other sort). Both the Condado Portucalense and Portugal were understood by their rulers as something still close to “private property” – the use of quotes here is justified by the fact that private property, in the sense we give to it today, was a non-existent notion then. We must, nevertheless, stress this as the moment when Portuguese rulers started seeing Portugal as a political unit separate from the remaining units in the area.

Portugal as a Military Venture

This novelty was strengthened by the continuing war against Islam, which still occupied most of the center and south of what later became Portugal. This is a crucial fact about Portugal in its infancy, and one that helps explain the most important episode in Portuguese history, the naval discoveries: the country in those days was largely a military venture against Islam. As the kingdom expanded to the south in that fight, it did so separately from the other Christian kingdoms of the peninsula, and these two forces, Islam and the remaining Iberian Christian kingdoms, ended up constituting the main negative forces defining Portugal as an independent country. The country achieved a clear geographical definition quite early in its history, in 1249, when King Afonso III conquered the Algarve from Islam. Remarkably, for a continent marked by so much permanent redrawing of frontiers, Portugal then acquired its current geographical shape.

The military nature of the country’s growth gave rise to two of its most important early characteristics: Portugal was throughout this entire period a frontier country, and one where the central authority was unable to control the territory in its entirety. This latter fact, together with the reception of the Germanic feudal tradition, shaped the nature of the institutions then established in the country, and is particularly important for understanding the land donations made by the crown. These donations were crucial, for they brought a dispersion of central powers, devolved to local entities, as well as a delegation of powers we would today call “public” to entities we would call “private.” Donations were made in favor of three sorts of groups: noble families, religious institutions and the people of particular areas or cities. They resulted mainly from the needs of the process of conquest: noblemen were soldiers, and the crown’s concession of the control of a certain territory was both a reward for their military feats and an expedient way of keeping the territory under control (even if indirectly) in a period when it was virtually impossible to control the full extent of the conquered area directly. Religious institutions were crucial in the Reconquista, since the purpose of the whole military effort was to eradicate the Muslim religion from the country; additionally, priests and monks were full military participants in the process, not limiting their activity to studying or preaching. So, as the Reconquista proceeded, three sorts of territories came into existence: those under direct control of the crown, those under the control of local seigneurs (subdivided into civil and ecclesiastical) and the communities.

Economic Impact of the Military Institutional Framework

This institutional framework had a direct economic impact. The crown’s donations were not comparable to anything we would nowadays call private property. A donation of land carried with it the beneficiary’s right to (a) exact tribute from the population living on it, (b) impose personal services or reduce peasants to serfdom, and (c) administer justice. This phenomenon was typical of Europe until at least the eighteenth century, and is quite representative of the overlap between the private and public spheres then prevalent. The crown felt entitled to give away powers we would nowadays call public, such as taxation and the administration of justice, and beneficiaries of the crown’s donations felt entitled to receive them. As a further limit to full private rights, land was donated under certain conditions restricting the beneficiaries’ power to divide, sell or buy it. Beneficiaries thus managed those lands in a manner entirely dissimilar from a modern enterprise. The same goes for actual farmers, those directly toiling the land: they were sometimes serfs, and even when they were not, they had to give personal services to seigneurs and pay arbitrary tributes.

Unusually Tight Connections between the Crown and High Nobility

Much of the history of Portugal until the nineteenth century revolves around the tension between these three layers of power – the crown, the seigneurs and the communities. The main trend in that relationship was, however, toward an increased weight of central power over the others. This is already visible in the first centuries of the country’s existence. In a process that may look paradoxical, that increased weight was accompanied by an equivalent increase in seigneurial power at the expense of the communities. This gave rise to a uniquely Portuguese institution, which would be of extreme importance for the development of the Portuguese economy (as we will see later): the extremely tight connection between the crown and the high nobility. Very early in the country’s history, the Portuguese nobility and Church became heavily dependent on the redistributive powers of the crown, particularly with regard to land and the tributes associated with it. This led to an apparently contradictory process in which, at the same time as the crown was gaining ascendancy in the ruling of the country, it also gave away to seigneurs some of those powers usually considered public in nature. Such was the connection between the crown and the seigneurs that the intersection between private and public powers proved very resistant in Portugal. That intersection lasted longer in Portugal than in other parts of Europe, and consequently delayed the introduction of the modern notion of property rights in the country. But this is something to be developed later, and to understand it fully we must go through some further episodes of Portuguese history. For now, we must note the novelty brought by these institutions. Although they can be seen as unfriendly to property rights from a nineteenth- and twentieth-century vantage point, they represented in fact a first, although primitive and incomplete, definition of property rights of a certain sort.

Centralization and the Evolution of Property

As the crown’s centralization of power proceeded in the early history of the country, institutions such as serfdom and settling colonies gave way to contracts that granted fuller personal and property rights to farmers. Serfdom was never especially widespread in early Portugal, and tended to disappear from the thirteenth century onwards. More common was the settling colony, an arrangement in which settlers were simple toilers of the land, paying significant tributes to either the king or seigneurs but having no rights to buy and sell the land. From the thirteenth century onwards, as the king and the seigneurs consolidated their hold on the kingdom’s land and the military situation grew calmer, serfdom and settling contracts were increasingly replaced by contracts of the copyhold type. Compared with current concepts of private property, copyhold includes serious restrictions on the full use of private property. Yet it represented an improvement over the prior legal forms of land use. In the end, private property as we understand it today began its dissemination through the country at this time, although in a form we would still consider primitive. This, to a large extent, repeats, with one to two centuries of delay, the evolution that had already occurred in the core of “feudal Europe,” i.e. the Franco-Germanic world and its extension to the British Isles.

Movement toward an Exchange Economy

Precisely as in that core “feudal Europe,” such institutional change brought a first moment of economic growth to the country. There are, of course, no consistent figures for economic activity in this period, so this claim rests entirely on more or less superficial evidence pointing in that direction. The institutional change just noted was accompanied by a change in the way noblemen and the Church understood their possessions. As the national territory became increasingly sheltered from the destruction of war, seigneurs became less interested in military activity and conquest, and more interested in the good management of the land they already owned. Some vague principles of specialization also appeared, and some of those possessions were transformed into agricultural firms devoted, to a certain extent, to selling on the market. One should not, of course, exaggerate the importance acquired by the exchange of goods in this period. Most of the economy continued to be of a non-exchange or (at best) barter character. But the signs of change were important, as a certain part of the economy (small as it was) led the way to more widespread future changes. Not by chance, this is also the period of the first signs of monetization of the economy, certainly a momentous change (even if initially small in scale), corresponding to an entirely new framework for economic relations.

These essential changes are connected with other aspects of the country’s evolution in this period. First, the war at the frontier (rather than within the territory) seems to have had a positive influence on the rest of the economy. The military front consisted of a large number of soldiers who needed a constant supply of various goods, and this demand drove a significant part of the economy. Also, as the conquest enlarged the territory under the Portuguese crown’s control, the king’s court became ever more complex, creating one more pole of demand. Additionally, the enlargement of territory brought into the economy various cities previously under Muslim control (such as the future capital, Lisbon, after 1147). All this was accompanied by a widespread movement of what we might call internal colonization, whose main purpose was to farm previously uncultivated agricultural land. This is also the time of the first contacts of Portuguese merchants with foreign markets, and of foreign merchants with Portuguese markets. There are various signs of the presence of Portuguese merchants in British, French and Flemish ports, and vice versa. Most Portuguese exports were of a typical Mediterranean nature, such as wine, olive oil, salt, fish and fruits, while imports were mainly grain and textiles. The economy thus became more complex, and it is only natural that the notions of property, management and “firm” changed so as to accommodate the new evolution. It has been suggested that the success of the Christian Reconquista depended to a significant extent on the economic success of these innovations.

Role of the Crown in Economic Reforms

Of additional importance for the increasing sophistication of the economy was the role played by the crown as an institution. From the thirteenth century onwards, the rulers of the country showed a growing interest in having a well-organized economy able to grant them an abundant tax base. Kings such as Afonso III (ruling from 1248 until 1279) and D. Dinis (1279-1325) became famous for their economic reforms. Monetary reforms, fiscal reforms, the promotion of foreign trade, and the promotion of local fairs and markets (an extraordinarily important institution for exchange in medieval times) all point to an increased awareness on the part of Portuguese kings of the relevance of promoting a proper environment for economic activity. Again, we should not exaggerate the importance of that awareness. Portuguese kings were still significantly (although not entirely) arbitrary rulers, able with one decision to destroy years of economic hard work. But changes were occurring, and some in a direction positive for economic improvement.

As mentioned above, the definition of Portugal as a separate political entity faced two main adversarial elements: Islam as occupier of the Iberian Peninsula and the centralizing efforts of the other political entities in the same area. The first element faded as the Portuguese Reconquista reached, by the mid-thirteenth century, the southernmost point of the territory of what is today Portugal. The conflict (either latent or open) with the remaining kingdoms of the peninsula persisted far longer. As the early centuries of the second millennium unfolded, a major centripetal force emerged in the peninsula: the kingdom of Castile. Castile progressively became the most successful centralizing political unit in the area. Such success reached a first climactic moment in the second half of the fifteenth century, during the reign of Ferdinand and Isabella, and a second one by the end of the sixteenth century, with the annexation of Portugal by the Spanish king, Phillip II. Much of the effort of Portuguese kings went into keeping Portugal independent of those other kingdoms, particularly Castile. But sometimes they envisaged something different, such as an Iberian union with Portugal as its true political head. It was one of those episodes that led to a major moment both for the centralization of power in the Portuguese crown within Portuguese territory and for the successful separation of Portugal from Castile.

Ascent of John I (1385)

It started during the reign of King Ferdinand of Portugal, in the sixth and seventh decades of the fourteenth century. Through various maneuvers to unite Portugal with Castile (which included war and the promotion of diverse coups), Ferdinand ended up marrying his daughter to the man who would later become king of Castile. Ferdinand was, however, generally unsuccessful in his attempts to unite the crowns under his own leadership, and when he died in 1383 the king of Castile (thanks to his marriage to Ferdinand’s daughter) became the legitimate heir to the Portuguese crown. This was Ferdinand’s dream in reverse: the crowns would unite, but not under Portugal. The prospect of peninsular unity under Castile was not necessarily loathed by a large part of the Portuguese elites, particularly parts of the aristocracy, which viewed Castile as a much more noble-friendly kingdom. This was not, however, a unanimous sentiment, and a strong reaction followed, led by other parts of the same elite, to keep the Portuguese crown in the hands of a Portuguese king, separate from Castile. A war with Castile and intimations of civil war ensued, and in the end Portugal’s independence was kept. The man chosen to succeed Ferdinand, under a new dynasty, was the bastard son of Peter I (Ferdinand’s father), who became John I in 1385.

This was a crucial episode, not simply because of the change of dynasty, imposed against the legitimate heir to the throne, but also because of the consequent centralization of power in the Portuguese crown and the separation of Portugal from Castile. Such separation additionally led Portugal to lose interest in further political adventures concerning Castile and to switch its attention to the Atlantic. It was the exploration of this path that led to the most unique period in Portuguese history, one during which Portugal reached heights of importance in the world unmatched in either its past or its future history. This period is the Discoveries, a process that started during John I’s reign, particularly under the forceful direction of the king’s sons, the most famous among them the mythical Henry the Navigator. The 1383-85 crisis and John’s victory can thus be seen as the founding moment of the Portuguese Discoveries.

The Discoveries and the Apex of Portuguese International Power

The Discoveries are generally presented as the first great moment of world capitalism, with markets all over the world becoming connected under European leadership. Although true, this is a largely post hoc perspective, for the Discoveries became a great commercial adventure only about halfway into the story. Before that, the aims of the Discoveries’ protagonists were mostly of another sort.

The Conquest of Ceuta

An interesting way to get a fuller picture of the Discoveries is to study the Portuguese contribution to them. Portugal was the pioneer of transoceanic navigation, discovering lands and sea routes formerly unknown to Europeans, and starting trades and commercial routes that linked Europe to other continents in a totally unprecedented fashion. But, at the start, the aims of the whole venture were entirely different. The event generally chosen to date the beginning of the Portuguese discoveries is the conquest of Ceuta – a city-state across the Straits of Gibraltar from Spain – in 1415. In itself such a voyage would not differ much from other attempts made in the Mediterranean Sea from the twelfth century onwards by various European travelers. The main purpose of all these attempts was to control navigation in the Mediterranean, in what constitutes a classic struggle between Christianity and Islam. Other objectives of Portuguese travelers were to find the mythical Prester John – a supposed Christian king surrounded by Islam; there are reasons to suppose that the legend of Prester John is associated with the real existence of the Coptic Christians of Ethiopia – and to reach, directly at the source, the gold of Sudan. Despite this latter objective, religious reasons prevailed over others in spurring the first Portuguese efforts at overseas expansion. This should not surprise us, for Portugal had since its birth been precisely an expansionist political unit under a religious banner. The jump across the sea to North Africa was little more than the continuation of that expansionist drive. Here we must understand Portugal’s position as determined by two elements, one general to the whole European continent, the other more specific. The first is that the expansion of Portugal in the Middle Ages coincided with the general expansion of Europe, and Portugal was very much a part of that process. The second is that, being part of the process, Portugal was (by geographical accident) at its forefront: Portugal (and Spain) stood in the first line of attack and defense against Islam. The conquest of Ceuta, by Henry the Navigator, is hence part of that story of confrontation with Islam.

Exploration from West Africa to India

The first efforts of Henry along the West African coast and on the Atlantic high seas can be placed within this same framework. The explorations along the African coast had two main objectives: to gain a keener perception of how far south Islam’s strength extended, and to outflank Morocco, both in order to attack Islam on a wider shore and to find alternative ways to reach Prester John. These objectives rested, of course, on geographical ignorance, as the coastline Portuguese navigators eventually found was much longer than the one Henry expected to find. In these efforts, Portuguese navigators pressed ever further south, but also, mainly due to accidental changes of direction, west. Such westbound deviations led to the discovery, in the first decades of the fifteenth century, of three archipelagos: the Canaries, Madeira (and Porto Santo) and the Azores. But the major navigational feat of this period was the passage of Cape Bojador in 1434, after which the whole western coast of the African continent was opened to exploration and, increasingly (and here lies the novelty), commerce. As Africa revealed its riches, mostly gold and slaves, these ventures began acquiring a more strictly economic meaning. All this spurred the Portuguese to go further south and, when they reached the southernmost tip of the African continent, to round it and go east. And so they did. Bartolomeu Dias rounded the Cape of Good Hope in 1487 and ten years later Vasco da Gama circumnavigated Africa entirely to reach India by sea. By the time of Vasco da Gama’s journey, the autonomous economic importance of intercontinental trade was well established.

Feitorias and Trade with West Africa, the Atlantic Islands and India

As the second half of the fifteenth century unfolded, Portugal created a complex trade structure connecting India and the African coast to Portugal and, from there, to the north of Europe. This rested on a network of trading posts (feitorias) along the African coast, from which goods were shipped to Portugal and then re-exported to Flanders, where a further Portuguese feitoria was opened. This trade was based on such African goods as gold, ivory, red peppers and slaves, along with other less important goods. As various authors have noted, this was in some ways a continuation of the pattern of trade created during the Middle Ages, which Portugal was able to diversify by adding new goods to its traditional exports (wine, olive oil, fruits and salt). The Portuguese maintained a virtual monopoly over these African commercial routes until the early sixteenth century; the only threats to the trade structure came from pirates from Britain, Holland, France and Spain. One further element of this trade structure was the Atlantic islands (Madeira, the Azores and the African archipelagos of Cape Verde and São Tomé), which contributed such goods as wine, wheat and sugar cane. After the sea route to India was discovered and the Portuguese were able to establish regular connections with India, the trading structure of the Portuguese empire became more complex. The Portuguese now began bringing various spices, precious stones, silk and woods from India, again based on a network of feitorias established there. The maritime route to India acquired extreme importance to Europe precisely at this time, since the Ottoman Empire was then able to block the traditional inland-Mediterranean route that had supplied the continent with Indian goods.

Control of Trade by the Crown

One crucial aspect of the Portuguese Discoveries is the high degree of control exerted by the crown over the whole venture. The first episodes in the early fifteenth century, under Henry the Navigator (as well as the first exploratory trips along the African coast), were entirely directed by the crown. Then, as the activity became more profitable, it was first liberalized and then rented out (in toto) to merchants, who were required to pay the crown a significant share of their profits. Finally, when the full Indo-African network was consolidated, the crown directly controlled the largest share of the trade (although never monopolizing it), participated in “public-private” joint ventures, or imposed heavy tributes on traders. The crown’s grip tightened as the empire grew in size and complexity. Until the early sixteenth century, the empire consisted mainly of a network of trading posts. No serious attempt was made by the Portuguese crown to exert a significant degree of territorial control over the various areas constituting the empire.

The Rise of a Territorial Empire

This changed with the growth of trade with India and Brazil. As India was transformed into a platform for trade not only around Africa but also within Asia, a tendency developed (particularly under Afonso de Albuquerque, in the early sixteenth century) to create an administrative structure in the territory. This was not particularly successful: an administrative structure was indeed created, but it remained forever incipient. A relatively more complex administrative structure would only appear in Brazil. Until the middle of the sixteenth century, Brazil was relatively ignored by the crown. But with the success of the sugar cane plantation system in the Atlantic islands, the Portuguese crown decided to transplant it to Brazil. Although political power was initially controlled by a group of seigneurs to whom the crown donated certain areas of the territory, the system became increasingly centralized as time went on. This is clearly visible in the creation, in 1549, of the post of governor-general of Brazil, directly answerable to the crown.

Portugal Loses Its Expansionary Edge

Until the early sixteenth century, Portugal capitalized on being the pioneer of European expansion, monopolizing African and, initially, Indian trade. But by that time changes were taking place, and two significant events mark the change in political tide. First was the increasing assertiveness of the Ottoman Empire in the Eastern Mediterranean, which coincided with a new bout of Islamic expansionism – ultimately bringing the Mughal dynasty to India – as well as the re-opening of the Mediterranean route for Indian goods. This put pressure on Portuguese control over Indian trade: not only was political control over the subcontinent now directly threatened by Islamic rulers, but profits from Indian trade also started declining. This is certainly one of the reasons why Portugal redirected its imperial interests to the south Atlantic, particularly Brazil – the other reasons being the growing demand for sugar in Europe and the success of the sugar cane plantation system in the Atlantic islands. The second event marking the change in tide was the increased assertiveness of imperial Spain, both within Europe and overseas. Spain, under the Habsburgs (mostly Charles V and Phillip II), exerted a dominance over the European continent unprecedented since Roman times. This was complemented by the beginning of the exploration of the American continent (from the Caribbean to Mexico and the Andes), again putting pressure on the Portuguese empire overseas. What is more, this is the period when not only Spain but also Britain, Holland and France acquired navigational and commercial skills equivalent to those of the Portuguese, competing with them in some of their more traditional routes and trades. By the middle of the sixteenth century, Portugal had definitively lost its expansionary edge. This would come to a tragic conclusion with the death of the heirless King Sebastian in North Africa in 1578 and the loss of political independence to Spain, under Phillip II, in 1580.

Empire and the Role, Power and Finances of the Crown

The first century of empire brought significant political consequences for the country. As noted above, the Discoveries were to a very large extent directed by the crown. As such, they constituted one further step in the affirmation of Portugal as a separate political entity in the Iberian Peninsula. Empire created a political and economic sphere in which Portugal could remain independent from the rest of the peninsula, and it thus contributed to the definition of what we might call “national identity.” Additionally, empire significantly enhanced the crown’s redistributive power. To benefit from the profits of transoceanic trade, or to reach a position in the imperial hierarchy or even within the national hierarchy proper, candidates had to turn to the crown. As it controlled imperial activities, the crown became a huge employment agency, capable of attracting the efforts of most of the national elite. The empire was thus transformed into an extremely important instrument for the crown’s centralization of power. It has already been mentioned that much of the political history of Portugal from the Middle Ages to the nineteenth century revolves around the tension between the centripetal power of the crown and the centrifugal powers of the aristocracy, the Church and the local communities. The imperial episode constituted precisely such a major step in the centralization of the crown’s power. The way such centralization occurred was, however, peculiar, and it would bring crucial consequences for the future. Various authors have noted how, despite the growing centralizing power of the crown, the aristocracy was able to keep its local powers, thanks to the significant taxing and judicial autonomy it possessed in the lands under its control. This is largely true, but as other authors have noted, it was done with the crown acting as an intermediary agent.

The Portuguese aristocracy had since early times been much less independent from the crown than in most parts of Western Europe, and this situation was accentuated during the days of empire. As we have seen above, the crown directed the Reconquista in a way that enabled it to control and redistribute (through the famous donations) most of the land that was conquered. In those early medieval days it was thus service to the crown that made noblemen eligible to benefit from land donations. It is undoubtedly true that by donating land the crown was also giving away (at least partially) its monopoly of taxing and judging. But what is crucial here is its significant intermediary power. With empire, that power increased again, and once more a large part of the aristocracy became dependent on the crown to acquire political and economic power. The empire became, furthermore, the crown’s main means of financing. Receipts from trade activities related to the empire (whether profits, tariffs or other taxes) never fell below 40 percent of the crown’s total receipts until the nineteenth century, and even then only briefly, in its worst days. Most of the time, those receipts amounted to 60 or 70 percent of the crown’s total receipts.

Other Economic Consequences of the Empire

Such a role for imperial receipts in the crown’s finances was one of the most important consequences of empire. Thanks to it, tax receipts from internal economic activity became largely unnecessary for the functioning of national government, something that would have deep consequences for that very internal activity. This was not, however, the only economic consequence of empire. One of the most important was, obviously, the enlargement of the country’s trade base. Thanks to empire, the Portuguese (and Europe, through the Portuguese) gained access to vast sources of precious metals, stones, tropical goods (such as fruit, sugar, tobacco, rice, potatoes, maize, and more), raw materials and slaves. Portugal used these goods to enlarge its pattern of comparative advantage, which helped it penetrate European markets, while at the same time enlarging the volume and variety of its imports from Europe. Such a process of specialization along comparative-advantage principles was, however, very incomplete. As noted above, the crown exerted a high degree of control over the trade activity of empire, and as a consequence many institutional factors interfered to prevent Portugal (and its imperial complex) from fully following those principles. In the end, in economic terms, the empire was inefficient – something to be contrasted, for instance, with its Dutch equivalent, which was much more geared to commercial success and based on clearer efficiency-oriented management methods. By so thoroughly controlling imperial trade, the crown became a sort of barrier between the empire’s riches and the national economy. Much of what was earned in imperial activity was spent either on maintaining it or on the crown’s clientele. Consequently, the spreading of the gains from imperial trade to the rest of the economy was highly centralized in the crown. A highly visible effect of this phenomenon was the extraordinary growth and size of the country’s capital, Lisbon.

In the sixteenth century, Lisbon was the fifth largest city in Europe, and from the sixteenth century to the nineteenth century it was always in the top ten, a remarkable feat for a country with as small a population as Portugal’s. It was also the symptom of a much inflated bureaucracy living on the gains of empire, as well as of the limited extent to which those gains spread through the whole of the economy.

Portuguese Industry and Agriculture

The rest of the economy did, indeed, remain largely untouched by this imperial manna. Most of industry was unaffected by it; the only visible impact of empire on the sector was in fostering naval construction and repair, and their ancillary activities. Most of industry kept functioning according to old standards, far from the impact of transoceanic prosperity. Much the same happened with agriculture. Although benefiting from the introduction of new crops (mostly maize, but also potatoes and rice), Portuguese agriculture did not benefit significantly from the income stream arising from imperial trade, particularly where we might expect it to be a source of investment. Maize constituted an important technological innovation with a major impact on Portuguese agricultural productivity, but it remained localized in the northwestern part of the country, leaving the rest of the sector untouched.

Failure of a Modern Land Market to Develop

One very important consequence of empire for agriculture and, hence, for the economy, was the preservation of the property structure inherited from the Middle Ages, namely that resulting from the crown’s donations. The empire again enhanced the crown’s power to attract talent and, consequently, to donate land. Donations were regulated by official documents called Cartas de Foral, in which the tributes due to the beneficiaries were specified. During the time of the empire, the conditions governing donations changed in a way that reveals increased monarchical power: donations were made for long periods (for instance, one life), but the land could be neither sold nor divided (and thus no parts of it could be sold separately), and renewal required confirmation by the crown. By prohibiting the buying, selling and partition of land, the rules of donation were thus a major obstacle not only to the existence of a land market, but also to a clear definition of property rights and to freedom in the management of land use.

Additionally, various tributes were due to the beneficiaries. Some were in kind, some in money; some were fixed, others proportional to the product of the land. This dissociated land ownership from the appropriation of the land’s product, since the land ultimately remained the crown’s. Furthermore, the actual beneficiaries (thanks to the donations’ rules) had little freedom in the management of the donated land. Although beneficiaries were forbidden to sell such land, they were not forbidden to rent it, and several did so, introducing a new dissociation between ownership and appropriation of product. Although in these donations some tributes were paid by freeholders, most were paid by copyholders. Copyhold granted its signatories the use of land in perpetuity or for lives (one to three), but did not allow them to sell it. This introduced yet another dissociation between ownership, appropriation of the land’s product, and its management. Although it could not be sold, land under copyhold could be ceded in “sub-copyhold” contracts – a replication of the original contract under identical conditions – which obviously added a further complication to the system. As should be clear by now, such a “baroque” system created an accumulation of layers of rights over the land: different people could exert different rights over it, and each layer of rights was limited by the others, sometimes conflicting with them in intricate ways. A major consequence of all this was the limited freedom the various holders of rights had in the management of their assets.

High Levels of Taxation in Agriculture

A second direct consequence of the system was the complicated juxtaposition of tributes on agricultural product. The land and its product in Portugal in those days were loaded with tributes (a sort of taxation). This explains one recent historian’s claim (admittedly exaggerated) that, in that period, those who owned the land did not toil it, and those who toiled it did not own it. We must distinguish these tributes from strict rent payments, since rent contracts are freely signed by the two (or more) parties taking part in them. The tributes discussed here represented, in reality, an imposition, which makes the word taxation appropriate to describe them. This is one further result of the already mentioned feature of the institutional framework of the time: the difficulty of distinguishing between the private and the public spheres.

Besides the tributes just described, other tributes also weighed on the land. Some were, again, of a nature we would nowadays call private, others of a more clearly public nature. The former were the tributes due to the Church; the latter were taxes proper, due explicitly as such to the crown. The main tribute due to the Church was the tithe. In theory, the tithe was a tenth of farmers’ production and was to be paid directly to certain religious institutions. In practice, it was not always a tenth of production, nor did the Church always receive it directly, as its collection was in many cases rented out to various other agents. Nevertheless, it was an important tribute to be paid by producers in general. The taxes due to the crown were the sisa (an indirect tax on consumption) and the décima (an income tax). As far as we know, these tributes weighed on average much less than the seigneurial tributes. Still, when added to them, they accentuated the high level of taxation or para-taxation typical of the Portuguese economy of the time.

Portugal under Spanish Rule, Restoration of Independence and the Eighteenth Century

Spanish Rule of Portugal, 1580-1640

The death of King Sebastian in North Africa, during a military expedition in 1578, left the Portuguese throne with no direct heir. There were, however, various indirect candidates in line, thanks to the many kinship links the Portuguese royal family had established with other European royal and aristocratic families. Among them was Phillip II of Spain. He would eventually inherit the Portuguese throne, although only after invading the country in 1580. Between 1578 and 1580 leaders in Portugal tried unsuccessfully to find a “national” solution to the succession problem. In the end, resistance to the establishment of Spanish rule was extremely light.

Initial Lack of Resistance to Spanish Rule

To understand why resistance was so mild, one must bear in mind the nature of such political units as the Portuguese and Spanish kingdoms at the time. These kingdoms were not the equivalent of contemporary nation-states. They had a separate identity, evident in such things as a different language, a different cultural history, and different institutions, but this did not amount to nationhood. The crown itself, seen as an institution, still retained many features of a “private” venture. Of course, to some extent it represented the materialization of the kingdom and its “people,” but (by the standards of current political concepts) it still had a much more ambiguous definition. Furthermore, Phillip II promised to adopt a set of rules allowing for extensive autonomy: the Portuguese crown would be “aggregated” to the Spanish crown, not “absorbed,” “associated” or even “integrated” with it. According to those rules, Portugal was to keep its separate identity as a crown and as a kingdom: all positions in the Portuguese government were to be held by Portuguese, the Portuguese language was the only one allowed in official matters in Portugal, and positions in the Portuguese empire were to be reserved for the Portuguese.

The implementation of such rules depended largely on the willingness of the Portuguese nobility, Church and high-ranking officials to accept them. As there were no major popular revolts that could pressure these groups to decide otherwise, they had little difficulty in accepting them. In reality, they saw the new situation as an opportunity for greater power. After all, Spain was then the largest and most powerful political unit in Europe, with vast extensions throughout the world. To participate in such a venture under conditions of great autonomy was seen as an excellent opportunity.

Resistance to Spanish Rule under Phillip IV

The autonomous status was kept largely untouched until the third decade of the seventeenth century, that is, until Phillip IV’s reign (1621-1640, in Portugal). This reign was marked by an important attempt at centralization of power under the Spanish crown, impelled in large part by Spain’s participation in the Thirty Years War. Simply put, the financial stress caused by the war forced the crown not only to increase fiscal pressure on the various political units under it but also to try to control them more closely. This led to serious efforts at revoking the autonomous status of Portugal (as well as of other European regions of the empire). It was as a reaction to those attempts that many Portuguese aristocrats and important personalities led a movement to recover independence. This movement must, again, be interpreted with care, paying attention to the political concepts of the time. It was not an overtly national reaction, in today’s sense of the word “national,” but mostly a reaction from certain social groups that felt their power threatened by the new plans for increased centralization under Spain. As some historians have noted, the 1640 revolt is best understood as a movement to preserve the constitutional elements of the framework of autonomy established in 1580 against the new centralizing drive, rather than as a national or nationalist movement.

Although that was the original intent of the movement, the new Portuguese dynasty (whose first monarch was John IV, 1640-1656) progressively undertook an unprecedented centralization of power in the hands of the Portuguese crown. Even if the original intent of the instigators of the 1640 revolt was to keep the autonomy that had prevailed both under pre-1580 Portuguese rule and under post-1580 Spanish rule, the final result of their action was to favor centralization in the Portuguese crown, and thus to help define Portugal as a clearly separate country. Again, we should be careful not to interpret this new bout of centralization in the seventeenth and eighteenth centuries as the creation of a national state and of a modern government. Many of the intermediate groups (in particular the Church and the aristocracy) kept their powers largely intact, even powers we would nowadays call public (such as taxation, justice and police). But there is no doubt that the crown significantly increased its redistributive power, and the nobility and the Church increasingly had to rely on service to the crown to keep most of their powers.

Consequences of Spanish Rule for the Portuguese Empire

The period of Spanish rule had significant consequences for the Portuguese empire. Through integration into the Spanish empire, Portuguese colonial territories became a legitimate target for all of Spain’s enemies. The European countries with imperial strategies (in particular Britain, the Netherlands and France) no longer saw Portugal as a countervailing ally in their struggle with Spain, and consequently mounted serious assaults on Portuguese overseas possessions. One further element of the geopolitical landscape of the period increased competitors’ willingness to attack Portugal: Holland’s separation from the Spanish empire. Spain was not only a large overseas empire but also an enormous European one, of which Holland was a part until the 1560s. Holland saw the Portuguese section of the Iberian empire as its weakest link and accordingly attacked it in a fairly systematic way. The Dutch attacks on Portuguese colonial possessions ranged from America (Brazil) to Africa (Sao Tome and Angola) to Asia (India, several points in Southeast Asia, and Indonesia), and in the course of them several Portuguese territories were conquered, mostly in Asia. Portugal, however, managed to keep most of its African and American territories.

The Shift of the Portuguese Empire toward the Atlantic

When it regained independence, Portugal had to re-align its external position in accordance with the new context. Interestingly enough, all the rivals that had attacked the country’s possessions during Spanish rule initially supported its separation. France was the most decisive partner in the first efforts to regain independence. Later (in the 1660s, in the final years of the war with Spain) Britain assumed that role, inaugurating an essential feature of Portuguese external relations: from then on, Britain became Portugal’s most consistent foreign partner. In the 1660s this move was connected to the re-orientation of the Portuguese empire. What had until then been the center of the empire, its eastern part (India and the rest of Asia), lost importance: first because renewed activity on the Mediterranean route threatened the sea route to India, and then because the eastern empire was the part where the Portuguese had ceded the most territory during Spanish rule, in particular to the Netherlands. Portugal kept most of its positions in both Africa and America, and this part of the world was to acquire great importance in the seventeenth and eighteenth centuries. In the last decades of the seventeenth century, Portugal was able to develop numerous trades, mostly centered on Brazil (although some of the Atlantic islands also participated), involving sugar, tobacco and tropical woods, all sent to the growing market for luxury goods in Europe, to which was added a growing and prosperous trade in slaves from West Africa to Brazil.

Debates over the Role of Brazilian Gold and the Methuen Treaty

The range of goods in Atlantic trade acquired an important addition with the discovery of gold in Brazil in the late seventeenth century. The increased importance of gold in Portuguese trade relations helps explain one of the most important diplomatic moments in Portuguese history, the Methuen Treaty (also called the Queen Anne Treaty), signed between Britain and Portugal in 1703. Many Portuguese economists and historians have blamed the treaty for Portugal’s inability to achieve modern economic growth during the eighteenth and nineteenth centuries. The treaty stipulated that Britain would reduce tariffs on imports of Portuguese wine (explicitly favoring it over French wine), while, as a counterpart, Portugal had to eliminate all prohibitions on imports of British wool textiles (even if tariffs were left in place). Some historians and economists have seen this as Portugal abdicating a national industrial sector and specializing instead in agricultural goods for export. As proof, such scholars present figures for the balance of trade between Portugal and Britain after 1703, with the former exporting mainly wine and the latter mainly textiles, and a widening trade deficit. Other authors, however, have shown that what mostly allowed for this trade (and the deficit) was not wine but the newly discovered Brazilian gold. Could gold, then, be the culprit that prevented Portuguese economic growth? Most historians now reject the hypothesis. The problem lay not in a particular treaty signed in the early eighteenth century but in the structural conditions for the economy to grow, a question dealt with further below.

Portuguese historiography currently tends to see the Methuen Treaty mostly in the light of Portuguese diplomatic relations in the seventeenth and eighteenth centuries. On this view, the treaty mostly marks the definitive alignment of Portugal within the British sphere. The treaty was signed during the War of the Spanish Succession, a war that divided Europe in a most dramatic manner. As the Spanish crown was left without a successor in 1700, the countries of Europe were led to support different candidates, and diplomatic choices ended up polarized between Britain, on the one side, and France, on the other. Increasingly, Portugal was led to prefer Britain, as the country that granted more protection to the prosperous Portuguese Atlantic trade. Since Britain also had an interest in this alignment (due to the important Portuguese colonial possessions), the treaty was economically beneficial to Portugal (contrary to what some of the older historiography tended to believe). In fact, in simple trade terms, the treaty was a good bargain for both countries, each having been given preferential treatment for certain of its more typical goods.

Brazilian Gold’s Impact on Industrialization

This sequence of events has led several economists and historians to blame gold for the Portuguese inability to industrialize in the eighteenth and nineteenth centuries. Recent historiography, however, has questioned this interpretation. The manufactures of the period were dedicated to the production of luxury goods and, consequently, directed to a small market that had nothing to do (in either the nature of the market or the technology) with the sectors typical of European industrialization. Even had it continued, it is very doubtful that this activity would ever have become a full industrial spurt of the kind then underway in Britain. The problem lay elsewhere, as we will see below.

Prosperity in the Early 1700s Gives Way to Decline

Be that as it may, the first half of the eighteenth century was a period of unquestionable prosperity for Portugal, mostly thanks to gold, but also to the recovery of the remaining trades (both tropical and from the mainland). Such prosperity is most visible in the reign of King John V (1706-1750), generally seen as the Portuguese equivalent of the reign of France’s Louis XIV. Palaces and monasteries of great dimensions were built, and the king’s court acquired a pomp and grandeur not seen before or since, all financed largely by Brazilian gold. By the mid-eighteenth century, however, it all began to falter. Gold remittances began to decline in the 1750s. A new crisis set in, compounded by the dramatic 1755 earthquake, which destroyed a large part of Lisbon and other cities. This crisis was at the root of a political project aiming at a vast renaissance of the country, the first in a series of such projects, all of them, significantly, following traumatic events related to the empire. The new project is associated with the reign of King Joseph I (1750-1777), and in particular with the policies of his prime minister, the Marquis of Pombal.

Centralization under the Marquis of Pombal

The thread linking the most important political measures taken by the Marquis of Pombal is the reinforcement of state power. A major element in this connection was his confrontation with certain noble and church representatives. The most spectacular episodes in this respect were, first, the execution of an entire noble family and, second, the expulsion of the Jesuits from national soil. This is sometimes taken to represent an outright hostile policy towards both aristocracy and church. However, it is better seen as an attempt to integrate aristocracy and church into the state, thus undermining their autonomous powers. In reality, what the Marquis did was to use the power to confer noble titles, as well as the Inquisition, as means to centralize and increase state power. Indeed, one of the most important instruments of recruitment for state functions during the Marquis’ rule was the promise of noble titles. The Inquisition’s functions also changed, from being mainly a religious court, mostly dedicated to the prosecution of Jews, to becoming a sort of civil political police. The Marquis’ centralizing policy covered a wide range of matters, in particular those most significant to state power. Internal policing was reinforced, with the creation of new police institutions directly coordinated by the central government. The collection of taxes became more efficient, through an institution more similar to a modern Treasury than any of its predecessors. Improved collection also applied to tariffs and profits from colonial trade.

The centralization of government power had significant repercussions for the relationship between state and civil society. Although the Marquis’ rule is frequently pictured as violent, it included measures generally considered “enlightened.” Such is the case of the abolition of the distinction between “New Christians” and Christians (New Christians were Jews converted to Catholicism who suffered a certain degree of segregation, constituting an intermediate category between Jews and Christians proper). Another very important measure was the abolition of slavery in mainland Portugal (even if slavery continued to be used in the colonies and the slave trade continued to prosper, there is no questioning the importance of the measure).

Economic Centralization under the Marquis of Pombal

The Marquis applied his centralizing drive to economic matters as well. This happened first in agriculture, with the creation of a monopoly company for trade in Port wine. It continued in colonial trade, where the method applied was the same, that is, the creation of companies monopolizing trade in certain products or regions of the empire. Later, interventionism extended to manufacturing. Such interventionism was essentially a response to the international trade crisis that affected many colonial goods, the most important among them gold. As the country faced a new international payments crisis, the Marquis resorted to protectionism and the subsidization of various industrial sectors. Yet, as such state support was essentially devoted to traditional, low-tech industries, this policy failed to propel Portugal into the group of countries that first industrialized.

Failure to Industrialize

The country would never be the same after the Marquis’ rule. The “modernization” of state power and his various policies left a profound mark on the Portuguese polity. They were not enough, however, to create the necessary conditions for Portugal to enter a process of industrialization. In reality, most of the structural impediments to modern growth were left untouched or even aggravated by the Marquis’ policies. This is particularly true of the relationship between central power and peripheral (aristocratic) powers. The Marquis continued the tradition, exacerbated during the fifteenth and sixteenth centuries, of liberally conferring noble titles on court members. Again, this accentuated the confusion between the public and the private spheres, with particular consequences (for what concerns us here) for the definition of property and property rights. The grant of a noble title by the crown often implied a donation of land. The beneficiary of the donation was entitled to collect tributes from the population living in the territory but was forbidden to sell it and, sometimes, even to rent it. Such beneficiaries were thus not true owners of the land; it could not exactly be called their property. This lack of private rights was, however, compensated by the granting of such “public” rights as the ability to collect tributes, a sort of tax. Beneficiaries of donations were thus neither true landowners nor true state representatives. And the same went for the crown. By giving away many of the powers we tend to call public today, the crown was acting as if it could dispose of land under its administration in the same manner as private property. But since this was not entirely private property, the crown was also conceding public powers to agents we would today call private. Such confusion did not help the creation of either a true entrepreneurial class or a state dedicated to the protection of private property rights.

This whole property structure was retained even after the reforming efforts of the Marquis of Pombal. The system of donations as a method of payment for positions at the king’s court, as well as the juxtaposition of various sorts of tributes owed either to the crown or to local powers, perpetuated a situation in which the private and the public spheres were not clearly separated. Consequently, property rights were not well defined. If there is a crucial reason for Portugal’s impaired economic development, it is here that we should look. Next, we turn to the nineteenth and twentieth centuries, to see how difficult the dismantling of this institutional structure proved and how it affected the growth potential of the Portuguese economy.

Suggested Reading:

Birmingham, David. A Concise History of Portugal. Cambridge: Cambridge University Press, 1993.

Boxer, C.R. The Portuguese Seaborne Empire, 1415-1825. New York: Alfred A. Knopf, 1969.

Godinho, Vitorino Magalhães. “Portugal and Her Empire, 1680-1720.” The New Cambridge Modern History, Vol. VI. Cambridge: Cambridge University Press, 1970.

Oliveira Marques, A.H. History of Portugal. New York: Columbia University Press, 1972.

Wheeler, Douglas. Historical Dictionary of Portugal. London: Scarecrow Press, 1993.

Citation: Amaral, Luciano. “Economic History of Portugal”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/economic-history-of-portugal/

An Economic History of Patent Institutions

B. Zorina Khan, Bowdoin College

Introduction

Scholars such as Max Weber and Douglass North have suggested that intellectual property systems had an important impact on the course of economic development. Questions from earlier eras remain current today, ranging from whether patents and copyrights constitute optimal policies toward intellectual inventions, and what their philosophical rationale might be, to the growing concerns of international political economy. Throughout their history, patent and copyright regimes have confronted and accommodated technological innovations that were no less significant and contentious for their time than those of the twenty-first century. An economist from the nineteenth century would have been equally familiar with debates about whether uniformity in intellectual property rights across countries harmed or benefited global welfare, and whether piracy might be to the advantage of developing countries. The nineteenth and early twentieth centuries in particular witnessed considerable variation in the intellectual property policies that individual countries implemented, and this allows economic historians to determine the consequences of different rules and standards.

This article outlines crucial developments in the patent policies of Europe, the United States, and follower countries. The final section discusses the harmonization of international patent laws that occurred after the middle of the nineteenth century.

Europe

The British Patent System

The grant of exclusive property rights vested in patents developed from medieval guild practices in Europe. Britain in particular is noted for the establishment of a patent system which has been in continuous operation for a longer period than any other in the world. English monarchs frequently used patents to reward favorites with privileges, such as monopolies over trade that increased the retail prices of commodities. It was not until the seventeenth century that patents were associated entirely with awards to inventors, when Section 6 of the Statute of Monopolies (21 Jac. I. C. 3, 1623, implemented in 1624) repealed the practice of royal monopoly grants to all except patentees of inventions. The Statute of Monopolies allowed patent rights of fourteen years for “the sole making or working of any manner of new manufacture within this realm to the first and true inventor…” Importers of foreign discoveries were allowed to obtain domestic patent protection in their own right.

The British patent system established significant barriers in the form of prohibitively high costs that limited access to property rights in invention to a privileged few. Patent fees for England alone amounted to £100-£120 ($585), or approximately four times per capita income in 1860. The fee for a patent that also covered Scotland and Ireland could cost as much as £350 ($1,680). Adding a co-inventor was likely to increase the costs by another £24. Patents could be extended only by a private Act of Parliament, which required political influence, and extensions could cost as much as £700. These constraints favored an elite of those with wealth, political connections or exceptional technical qualifications, and consciously created disincentives for inventors from humble backgrounds. Patent fees provided an important source of revenue for the Crown and its employees, and created a class of administrators who had strong incentives to block proposed reforms.

In addition to the monetary costs, complicated administrative procedures that inventors had to follow implied that transactions costs were also high. Patent applications for England alone had to pass through seven offices, from the Home Secretary to the Lord Chancellor, and twice required the signature of the Sovereign. If the patent were extended to Scotland and Ireland it was necessary to negotiate another five offices in each country. The cumbersome process of patent applications (variously described as “mediaeval” and “fantastical”) afforded ample material for satire, but obviously imposed severe constraints on the ordinary inventor who wished to obtain protection for his discovery. These features testify to the much higher monetary and transactions costs, in both absolute and relative terms, of obtaining property rights to inventions in England in comparison to the United States. Such costs essentially restricted the use of the patent system to inventions of high value and to applicants who already possessed or could raise sufficient capital to apply for the patent. The complicated system also inhibited the diffusion of information and made it difficult, if not impossible, for inventors outside of London to readily conduct patent searches. Patent specifications were open to public inspection on payment of a fee, but until 1852 they were not officially printed, published or indexed. Since the patent could be filed in any of three offices in Chancery, searches of the prior art involved much time and inconvenience. Potential patentees were well advised to obtain the help of a patent agent to aid in negotiating the numerous steps and offices that were required for pursuit of the application in London.

In the second half of the eighteenth century, nation-wide lobbies of manufacturers and patentees expressed dissatisfaction with the operation of the British patent system. However, it was not until after the Crystal Palace Exhibition in 1851 that their concerns were finally addressed, in an effort to meet the burgeoning competition from the United States. In 1852 the efforts of numerous societies and of individual engineers, inventors and manufacturers over many decades were finally rewarded. Parliament approved the Patent Law Amendment Act, which authorized the first major adjustment of the system in two centuries. The new patent statutes incorporated features that drew on testimonials to the superior functioning of the American patent regime. Significant changes in the direction of the American system included lower fees and costs, and the application procedures were rationalized into a single Office of the Commissioners of Patents for Inventions, or “Great Seal Patent Office.”

The 1852 patent reform bills included calls for a U.S.-style examination system but this was amended in the House of Commons and the measure was not included in the final version. Opponents were reluctant to vest examiners with the necessary discretionary power, and pragmatic observers pointed to the shortage of a cadre of officials with the required expertise. The law established a renewal system that required the payment of fees in installments if the patentee wished to maintain the patent for the full term. Patentees initially paid £25 and later installments of £50 (after three years) and £100 (after seven years) to maintain the patent for a full term of fourteen years. Despite the relatively low number of patents granted in England, between 1852 and 1880 the patent office still made a profit of over £2 million. Provision was made for the printing and publication of the patent records. The 1852 reforms undoubtedly instituted improvements over the former opaque procedures, and the lower fees had an immediate impact. Nevertheless, the system retained many of the former features that had implied that patents were in effect viewed as privileges rather than merited rights, and only temporarily abated expressions of dissatisfaction.

One source of dissatisfaction that endured until the end of the nineteenth century was the state of the common law regarding patents. First, at least partially in reaction to a history of abuse of patent privileges, patents were widely viewed as monopolies that restricted community rights, and thus as something to be carefully monitored and narrowly construed. Second, British patents were granted “by the grace of the Crown” and therefore were subject to any restrictions that the government cared to impose. According to the statutes, as a matter of national expediency, patents were to be granted provided “they be not contrary to the law, nor mischievous to the State, by raising prices of commodities at home, or to the hurt of trade, or generally inconvenient.” The Crown possessed the ability to revoke any patents that were deemed inconvenient or contrary to public policy. After 1855, the government could also appeal to a need for official secrecy to prohibit the publication of patent specifications in order to protect national security and welfare. Moreover, the state could commandeer a patentee’s invention without compensation or consent, although in some cases the patentee was paid a royalty.

Policies towards patent assignments and trade in intellectual property rights also constrained the market for inventions. Ever vigilant to protect an unsuspecting public from fraudulent financial schemes on the scale of the South Sea Bubble, the law limited ownership of patent rights to five investors (later extended to twelve). At the same time, the law offered no relief to the purchaser of an invalid or worthless patent, so potential purchasers were well advised to engage in extensive searches before entering into contracts. When coupled with the lack of assurance inherent in a registration system, the purchase of a patent right involved a substantial amount of risk and high transactions costs, all indicative of a speculative instrument. It is therefore not surprising that the market for assignments and licenses seems to have been quite limited; even in the year after the 1852 reforms only 273 assignments were recorded.

In 1883 new legislation introduced procedures that were somewhat simpler, with fewer steps. The fees fell to £4 for the initial term of four years, and the remaining £150 could be paid in annual increments. For the first time, applications could be forwarded to the Patent Office through the post office. The statute introduced opposition proceedings, which enabled interested parties to contest a proposed patent within two months of the filing of the patent specifications. Compulsory licenses were introduced in 1883 (and strengthened in 1919 as “licenses of right”) for fear that foreign inventors might injure British industry by refusing to grant other manufacturers the right to use their patents. The 1883 act provided for the employment of “examiners,” but their activity was limited to ensuring that the material was patentable and properly described. Indeed, it was not until 1902 that the British system included an examination for novelty, and even then the process was not regarded as being as stringent as in other countries. Many new provisions were designed to thwart foreign competition. Until 1907 patentees who manufactured abroad were also required to make the patented product in Britain. Between 1919 and 1949 chemical products were excluded from patent protection to counter the threat posed by the superior German chemical industry. Licenses of right enabled British manufacturers to compel foreign patentees to permit the use of their patents on pharmaceuticals and food products.

In sum, changes in the British patent system were initially unforthcoming despite numerous calls for change. Ultimately, the realization that England’s early industrial and technological supremacy was threatened by the United States and other nations in Europe led to a slow process of revisions that lasted well into the twentieth century. One commentator summed up the series of developments by declaring that the British patent system at the time of writing (1967) remained essentially “a modified version of a pre-industrial economic institution.”

The French Patent System

Early French policies towards inventions and innovations in the eighteenth century were based on an extensive but somewhat arbitrary array of rewards and incentives. During this period inventors or introducers of inventions could benefit from titles, pensions that sometimes extended to spouses and offspring, loans (some interest-free), lump-sum grants, bounties or subsidies for production, exemptions from taxes, or monopoly grants in the form of exclusive privileges. This complex network of state policies towards inventors and their inventions was revised but not revoked after the outbreak of the French Revolution.

The modern French patent system was established according to the laws of 1791 (amended in 1800) and 1844. Patentees filed through a simple registration system without any need to specify what was new about their claim, and could persist in obtaining the grant even if warned that the patent was likely to be legally invalid. On each patent document the following caveat was printed: “The government, in granting a patent without prior examination, does not in any manner guarantee either the priority, merit or success of an invention.” The inventor decided whether to obtain a patent for a period of five, ten or fifteen years, and the term could only be extended through legislative action. Protection extended to all methods and manufactured articles, but excluded theoretical or scientific discoveries without practical application, financial methods, medicines, and items that could be covered by copyright.

The 1791 statute stipulated patent fees that were costly, ranging from 300 to 1,500 livres, depending on the declared term of the patent. The 1844 statute maintained this policy, with fees set at 500 francs ($100) for a five-year patent, 1,000 francs for a ten-year patent, and 1,500 francs for a fifteen-year patent, payable in annual installments. In an obvious attempt to limit the international diffusion of French discoveries, until 1844 patents were voided if the inventor attempted to obtain a patent overseas on the same invention. On the other hand, the first introducer of an invention covered by a foreign patent enjoyed the same “natural rights” as the patentee of an original invention or improvement. Patentees had to put the invention into practice within two years of the initial grant or face a tribunal with the power to repeal the patent, unless the patentee could point to unforeseen events that had prevented compliance with the provisions of the law. The rights of patentees were also restricted if the invention related to items controlled by the French government, such as printing presses and firearms.

In return for the limited monopoly right, the patentee was expected to describe the invention in such terms that a workman skilled in the arts could replicate it, and this information was expected to be made public. However, no provision was made for the publication or diffusion of these descriptions. At least until the law of April 7, 1902, specifications were available only in manuscript form in the office in which they had originally been lodged, and printed information was limited to brief titles in patent indexes. The attempt to obtain information on the prior art was further inhibited by restrictions placed on access: viewers had to state their motives, foreigners had to be assisted by French attorneys, and no extract from the manuscript could be copied until the patent had expired.

The state remained involved in the discretionary promotion of invention and innovation through policies beyond the granting of patents. First, the patent statutes did not limit the potential appropriation of returns to property rights vested in patents: the inventor of a discovery of proven utility could choose between a patent or making a gift of the invention to the nation in exchange for an award from funds set aside for the encouragement of industry. Second, institutions such as the Société d’encouragement pour l’industrie nationale awarded a number of medals each year to stimulate new discoveries in areas they considered worth pursuing, and also to reward deserving inventors and manufacturers. Third, the award of assistance and pensions to inventors and their families continued well into the nineteenth century. Fourth, at times the Société purchased patent rights and released the invention into the public domain.

The basic principles of the modern French patent system were evident in the early French statutes and were retained in later revisions. Since France during the ancien régime was likely the first country to introduce systematic examinations of applications for privileges, it is somewhat ironic that commentators point to the retention of registration without prior examination as the defining feature of the “French system” until 1978. In 1910 fees remained high, although somewhat lower in real terms, at one hundred francs per year. Working requirements were still in place, and patentees were not allowed to satisfy the requirement by importing the article even if the patentee had manufactured it in another European country. However, the requirement was waived if the patentee could persuade the tribunal that the patent was not worked because of unavoidable circumstances.

Similar problems were evident in the market for patent rights. Contracts for patent assignments were filed in the office of the Prefect for the district, but since there was no central source of information it was difficult to trace the records for specific inventions. The annual fees for the entire term of the patent had to be paid in advance if the patent was assigned to a second party. Like patents themselves, assignments and licenses were issued with a caveat emptor clause. This was partially due to the nature of patent property under a registration system, and partially to the uncertainties of legal jurisprudence in this area. For both buyer and seller, the uncertainties associated with the exchange likely reduced the net expected value of trade.

The Spanish Patent System

France’s patent laws were adopted in its colonies, and also diffused to other countries through their influence on Spain’s system following the Spanish Decree of 1811. The Spanish experience during the nineteenth century is instructive, since the country experienced lower rates and levels of economic development than the early industrializers. Like its European neighbors, Spain’s early rules and institutions were vested in privileges, which had lasting effects that could be detected even in the later period. The per capita rate of patenting in Spain was lower than in other major European countries, and foreigners filed the majority of patented inventions. Between 1759 and 1878, roughly one half of all grants went to citizens of other countries, notably France and (to a lesser extent) Britain. Thus, the transfer of foreign technology was a major concern in the political economy of Spain.

This dependence on foreign technologies was reflected in the structure of the Spanish patent system, which permitted patents of introduction as well as patents for invention. Patents of introduction were granted to entrepreneurs who wished to produce foreign technologies that were new to Spain, with no requirement of any claim to being the true inventor. Thus, the sole objective of these instruments was to enhance innovation and production in Spain. Since the owners of introduction patents could not prevent third parties from importing similar machines from abroad, they also had an incentive to maintain reasonable pricing structures. Introduction patents had a term of only five years and cost 3,000 reales, whereas fees for patents of invention were 1,000 reales for five years, 3,000 reales for ten years, and 6,000 reales for fifteen years. Patentees were required to work the patent within one year, and about a quarter of the patents granted between 1826 and 1878 were actually implemented. Since patents of introduction had a brief term, they encouraged the production of items with high expected profits and a quick payback period, after which the monopoly rights expired and the country could benefit from the invention’s diffusion.

The German Patent System

The German patent system was influenced by developments in the United States, and itself influenced legislation in Argentina, Austria, Brazil, Denmark, Finland, Holland, Norway, Poland, Russia and Sweden. The German Empire was founded in 1871, and in the first six years each state adopted its own policies: Alsace-Lorraine favored a French-style system, whereas others such as Hamburg and Bremen did not offer patent protection. However, after strong lobbying by supporters on both sides of the debate regarding the merits of patent regimes, Germany passed a unified national Patent Act in 1877.

The 1877 statute created a centralized administration for the grant of a federal patent for original inventions. Industrial entrepreneurs succeeded in their objective of creating a “first to file” system, so patents were granted to the first applicant rather than to the “first and true inventor” (although in 1936 the National Socialists introduced a first-to-invent system). Applications were scrutinized by technically expert examiners in the Patent Office. During the eight weeks before the grant, patent applications were open to the public, and an opposition could be filed denying the validity of the patent. German patent fees were deliberately high in order to eliminate protection for trivial inventions, with a renewal system that required payment of 30 marks for the first year, 50 marks for the second year, 100 marks for the third, and 50 marks annually after the third year. In 1923 the patent term was extended from fifteen to eighteen years.

German patent policies encouraged diffusion, innovation and growth in specific industries with a view to fostering economic development. Patents could not be obtained for food products, pharmaceuticals or chemical products, although the processes through which such items were produced could be protected. It has been argued that the lack of restrictions on the use of innovations and the incentive to patent around existing processes spurred productivity and diffusion in these industries. The authorities further ensured the diffusion of patent information by publishing claims and specifications before they were granted. The German patent system also facilitated the use of inventions by firms, through the early application of a “work for hire” doctrine that gave enterprises access to the rights and benefits of their employees’ inventions.

Although the German system was close to the American patent system in many respects, it was more stringent in others, resulting in patent grants that were fewer in number but likely higher in average value. The examination process required that the invention be new, nonobvious, and capable of producing greater efficiency. As in the United States, once a patent was granted, the courts adopted an extremely liberal attitude in interpreting and enforcing existing patent rights. Penalties for willful infringement included not only fines but also the possibility of imprisonment. The grant of a patent could be revoked after the first three years if the patent was not worked, if the owner refused to grant licenses for the use of an invention deemed in the public interest, or if the invention was primarily being exploited outside of Germany. However, in most cases a compulsory license was regarded as adequate.

After 1891 a parallel and weaker version of patent protection could be obtained through a gebrauchsmuster or utility patent (sometimes called a petty patent), which was granted through a registration system. Patent protection was available for inventions that could be represented by drawings or models with only a slight degree of novelty, and for a limited term of three years (renewable once for a total life of six years). About twice as many utility patents as examined patents were granted early in the 1930s. Patent protection based on co-existing systems of registration and examination appears to have served distinct but complementary purposes. Remedies for infringement of utility patents also included fines and imprisonment.

Other European Patent Systems

Very few developed countries would now seriously consider eliminating statutory protection for inventions, but in the second half of the nineteenth century the “patent controversy” in Europe pitted advocates of patent rights against an effective abolitionist movement. For a short period, the abolitionists were strong enough to obtain support for dismantling patent systems in a number of European countries. In 1863 the Congress of German Economists declared that “patents of invention are injurious to common welfare,” and the movement achieved its greatest victory in Holland, which repealed its patent legislation in 1869. The Swiss cantons did not adopt patent protection until 1888, with an extension in the scope of coverage in 1907. The abolitionists based their arguments on the benefits of free trade and competition, and viewed patents as part of an anticompetitive and protectionist strategy analogous to tariffs on imports. Instead of state-sponsored monopoly awards, they argued, inventors could be rewarded by alternative policies, such as stipends from the government, payments from private industry or associations formed for that purpose, or simply through the lead time that the first inventor acquired over competitors by virtue of his prior knowledge.

According to one authority, the Netherlands eventually reinstated its patent system in 1912 and Switzerland introduced patent laws in 1888 largely because of a keen sense of morality, national pride and international pressure to do so. The appeal to “morality” as an explanatory factor, however, cannot explain the timing and nature of these changes in strategy. Nineteenth-century institutions were not exogenous, and their introduction or revision generally reflected the outcome of a self-interested balancing of costs and benefits. The Netherlands and Switzerland were initially able to free-ride on the investments that other countries had made in technological advances. As for the cost of lower incentives for discoveries by domestic inventors, the Netherlands was never vaunted as a leader in technological innovation, and this is reflected in its low per capita patenting rates both before and after the period without patent laws. The Dutch recorded a total of only 4,561 patents in the entire period from 1800 to 1869 and, even after adjusting for population, the Dutch patenting rate in 1869 was a mere 13.4 percent of the U.S. rate. Moreover, between 1851 and 1865, 88.6 percent of patents in the Netherlands had been granted to foreigners. After the patent laws were reintroduced in 1912, the major beneficiaries were again foreign inventors, who obtained 79.3 percent of the patents issued in the Netherlands. Thus, the Netherlands had little reason to adopt patent protection, except for external political pressures and the possibility that some types of foreign investment might otherwise be deterred.

The case was somewhat different for Switzerland, which was noted for being innovative, but in a narrow range of pursuits. Since the scale of output and markets was quite limited, much of Swiss industry generated few incentives for invention. A number of the industries in which the Swiss excelled, such as hand-made watches, chocolates and food products, were less susceptible to the kind of invention that warranted patent protection. For instance, despite the much larger consumer market in the United States, fewer than 300 U.S. patents related to chocolate composition or production during the entire nineteenth century. Improvements in pursuits such as watch-making could be readily protected by trade secrecy as long as the industry remained artisanal. However, with increased mechanization and worker mobility, secrecy would ultimately prove ineffective, and innovators would be unable to appropriate returns without more formal means of exclusion.

According to contemporary observers, the Swiss resolved to introduce patent legislation not because of a sudden newfound sense of morality, but because they feared that American manufacturers were surpassing them as a result of patented innovations in the mass production of products such as boots, shoes and watches. Indeed, before 1890, American inventors obtained more than 2,068 patents on watches, and the U.S. watch-making industry benefited from mechanization and strong economies of scale that led to rapidly falling output prices, making it more competitive internationally. The implications are that the rates of industrial and technical progress in the United States were more rapid, and technological change was rendering artisanal methods obsolete in products with mass markets. Thus, the Swiss endogenously adopted patent laws because of falling competitiveness in their key industrial sectors.

What was the impact of the introduction of patent protection in Switzerland? Foreign inventors could obtain patents in the United States regardless of their domestic legislation, so we can approach this question tangentially by examining the patterns of patenting in the United States by Swiss residents before and after the 1888 reforms. Between 1836 and 1888, Swiss residents obtained a grand total of 585 patents in the United States. Fully a third of these patents were for watches and music boxes, and only six were for textiles or dyeing, industries in which Switzerland was regarded as competitive early on. Swiss patentees were more oriented to the international market, rather than the small and unprotected domestic market where they could not hope to gain as much from their inventions. For instance, in 1872 Jean-Jacques Mullerpack of Basel collaborated with Leon Jarossonl of Lille, France to invent an improvement in dyeing black with aniline colors, which they assigned to William Morgan Brown of London, England. Another Basel inventor, Alfred Kern, assigned his 1883 patent for violet aniline dyes to the Badische Anilin and Soda Fabrik of Mannheim, Germany.

After the patent reforms, the rate of Swiss patenting in the United States immediately increased. Swiss patentees obtained an annual average of 32.8 patents in the United States in the decade before the patent law was enacted in Switzerland. After the Swiss allowed patenting, this figure increased to an average of 111 each year in the following six years, and in the period from 1895 to 1900 a total of 821 Swiss patents were filed in the United States. The decadal rate of patenting per million residents increased from 111.8 for the ten years up to the reforms, to 451 per million residents in the 1890s, 513 in the 1900s, 458 in the 1910s and 684 in the 1920s. U.S. statutes required worldwide novelty, and patents could not be granted for discoveries that had been in prior use, so the increase was not due to a backlog of trade secrets that were now patented.
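The per-million figures above are simple ratios of decadal patent counts to population. As an illustrative sketch (not from the source), the following assumes a Swiss population of roughly 2.93 million in the decade before the reforms; the function name and population figure are hypothetical:

```python
def decadal_rate_per_million(patents_in_decade, population):
    """Patents granted over a ten-year window, per million residents."""
    return patents_in_decade / (population / 1_000_000)

# 32.8 patents per year over the pre-reform decade (figure from the text);
# the population of ~2.93 million is an assumption for illustration only.
pre_reform_rate = decadal_rate_per_million(32.8 * 10, 2_930_000)
```

With these assumed inputs the result comes out close to the 111.8 per million quoted above; the post-reform decades can be computed the same way once a population series is chosen.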

Moreover, the introduction of Swiss patent laws also affected the direction of the inventions that Swiss residents patented in the United States. After the passage of the law, such patents covered a much broader range of inventions, including gas generators, textile machines, explosives, turbines, paints and dyes, and drawing instruments and lamps. The relative importance of watches and music boxes fell immediately, from about a third before the reforms to 6.2 percent and 2.1 percent respectively in the 1890s, and even further to 3.8 percent and 0.3 percent between 1900 and 1909. A further indication that international patenting was linked to domestic Swiss invention can be discerned from the fraction of Swiss patents (filed in the U.S.) that related to process innovations. Before 1888, 21 percent of the patent specifications mentioned a process. Between 1888 and 1907 the Swiss statutes required that patents include mechanical models, which precluded the patenting of pure processes. The fraction of specifications that mentioned a process fell during this period, but returned to 22 percent when the restriction was modified in 1907.

In short, although the Swiss experience is often cited as proof of the redundancy of patent protection, the limitations of this special case should be taken into account. The domestic market was quite small and offered minimal opportunity or inducements for inventors to take advantage of economies of scale or cost-reducing innovations. Manufacturing tended to cluster in a few industries where innovation was largely irrelevant, such as premium chocolates, or in artisanal production that was susceptible to trade secrecy, such as watches and music boxes. In other areas, notably chemicals, dyes and pharmaceuticals, Swiss industries were export-oriented, but even today their output tends to be quite specialized and high-valued rather than mass-produced. Export-oriented inventors were likely to have been more concerned about patent protection in the important overseas markets, rather than in the home market. Thus, between 1888 and 1907, although Swiss laws excluded patents for chemicals, pharmaceuticals and dyes, 20.7 percent of the Swiss patents filed in the United States were for just these types of inventions. The scanty evidence on Switzerland suggests that the introduction of patent rights was accompanied by changes in the rate and direction of inventive activity. In any event, both the Netherlands and Switzerland featured unique circumstances that seem to hold few lessons for developing countries today.

The Patent System in the United States

The United States stands out as having established one of the most successful patent systems in the world. Over six million patents have been issued since 1790, and American industrial supremacy has frequently been credited to its favorable treatment of inventors and the inducements held out for inventive activity. The first Article of the U.S. Constitution includes a clause empowering Congress to “promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries.” Congress complied by passing a patent statute in April 1790. In 1836 the United States created the first modern patent institution in the world, a system whose features differed in significant respects from those of other major countries. The historical record indicates that the legislature’s creation of a uniquely American system was a deliberate and conscious process of promoting open access to the benefits of private property rights in inventions. The laws were enforced by a judiciary willing to grapple with difficult questions, such as the extent to which a democratic and market-oriented political economy was consistent with exclusive rights. Courts explicitly attempted to implement decisions that promoted economic growth and social welfare.

The primary feature of the “American system” is that all applications are subject to an examination for conformity with the laws and for novelty. An examination system was set in place in 1790, when a select committee consisting of the Secretary of State (Thomas Jefferson), the Attorney General and the Secretary of War scrutinized applications. These duties proved too time-consuming for such highly ranked officials, and three years later examination was replaced by a registration system. The validity of patents was left to the district courts, which had the power to set in motion a process that could end in the repeal of a patent. However, by the 1830s this process was viewed as cumbersome, and the statute passed in 1836 put in place the essential structure of the current patent system. In particular, the 1836 Patent Law established the Patent Office, whose trained and technically qualified employees were authorized to examine applications. Employees of the Patent Office were not permitted to obtain patent rights themselves. To constrain the ability of examiners to act arbitrarily, applicants were given the right to file a bill in equity to contest the decisions of the Patent Office, with a further right of appeal to the Supreme Court of the United States.

American patent policy likewise stands out in its insistence on affordable fees. The legislature debated the question of appropriate fees, and the first patent law in 1790 set the rate at the minimal sum of $3.70 plus copy costs. In 1793 the fees were increased to $30, and were maintained at this level until 1861. In that year they were raised to $35, and the term of the patent was changed from fourteen years (with the possibility of an extension) to seventeen years (with no extensions). The 1869 Report of the Commissioner of Patents compared the $35 fee for a U.S. patent to the significantly higher charges in European countries such as Britain, France, Russia ($450), Belgium ($420) and Austria ($350). The Commissioner speculated that both the private and social costs of patenting were lower in a system of impartial specialized examiners than under a system where similar services were performed on a fee-per-service basis by private solicitors. He pointed out that in the U.S. the fees were not intended to exact a price for the patent privilege or to raise revenues for the state – the disclosure of information was the sole price for the patent property right – rather, they were imposed merely to cover the administrative expenses of the Office.

The basic parameters of the U.S. patent system were transparent and predictable, in itself an aid to those who wished to obtain patent rights. In addition, American legislators were concerned with ensuring that information about the stock of patented knowledge was readily available and diffused rapidly. As early as 1805 Congress stipulated that the Secretary of State should publish an annual list of patents granted the preceding year, and after 1832 also required the publication in newspapers of notices regarding expired patents. The Patent Office itself was a source of centralized information on the state of the arts. However, Congress was also concerned with providing decentralized access to patent materials. The Patent Office maintained repositories throughout the country, to which inventors could forward their patent models at the Office’s expense. Rural inventors could apply for patents without significant obstacles, because applications could be submitted by mail free of postage.

American laws employed the language of the English statute in granting patents to “the first and true inventor.” Nevertheless, unlike in England, the phrase was interpreted literally: patents were granted for inventions that were original in the world, not simply within U.S. borders. American patent laws provided strong protection for citizens of the United States, but varied over time in their treatment of foreign inventors. Americans could not obtain patents for imported discoveries, and the earliest statutes, of 1793, 1800 and 1832, restricted patent property to citizens or to residents who declared that they intended to become citizens. As such, while an American could not appropriate patent rights to a foreign invention, he could freely use the idea without bearing the licensing or similar costs that would otherwise have been due had the inventor been able to obtain a patent in this country. In 1836 the stipulations on citizenship or residency were removed, but were replaced with discriminatory patent fees: foreigners could obtain a patent in the U.S. for a fee of three hundred dollars, or five hundred if they were British. After 1861 patent rights (with the exception of caveats) were available to all applicants on the same basis, without regard to nationality.

The American patent system was based on the presumption that social welfare coincided with the individual welfare of inventors. Accordingly, legislators rejected restrictions on the rights of American inventors. However, the 1832 and 1836 laws stipulated that foreigners had to exploit their patented invention within eighteen months. These clauses seem to have been interpreted by the courts in a fairly liberal fashion, since alien patentees “need not prove that they hawked the patented improvement to obtain a market for it, or that they endeavored to sell it to any person, but that it rested upon those who sought to defeat the patent to prove that the plaintiffs neglected or refused to sell the patented invention for reasonable prices when application was made to them to purchase.” Such provisions proved to be temporary aberrations and were not included in subsequent legislation. Working requirements or compulsory licenses were regarded as unwarranted infringements of the rights of “meritorious inventors,” and incompatible with the philosophy of U.S. patent grants. Patentees were not required to pay annuities to maintain their property, there were no opposition proceedings, and once granted a patent could not be revoked unless there was proven evidence of fraud.

One of the advantages of a system that secures property rights is that it facilitates contracts and trade. Assignments provide a straightforward index of the effectiveness of the American system, since trade in inventions would hardly proliferate if patent rights were uncertain or worthless. An extensive national network of licensing and assignments developed early on, aided by legal rulings that overturned contracts for useless or fraudulent patents. In 1845 the Patent Office recorded 2,108 assignments, which can be compared to the cumulative stock of 7,188 patents still in force in that year. By the 1870s assignments averaged over 9,000 per year, and in the next decade over 12,000 transactions were recorded annually. This flourishing market for patented inventions provided an incentive for further inventive activity by inventors who were able to appropriate the returns from their efforts, and also linked patents and productivity growth.

Property rights are worth little unless they can be legally enforced in a consistent, certain, and predictable manner. A significant part of the explanation for the success of the American intellectual property system relates to the efficiency with which the laws were interpreted and implemented. United States federal courts from their inception attempted to establish a store of doctrine that fulfilled the intent of the Constitution to secure the rights of intellectual property owners. The judiciary acknowledged that inventive efforts varied with the extent to which inventors could appropriate the returns on their discoveries, and attempted to ensure that patentees were not unjustly deprived of the benefits of their inventions. Numerous reported decisions of the early courts declared that, far from being unwarranted monopolies, patent rights were “sacred” and to be regarded as the just recompense of inventive ingenuity. Early courts had to grapple with a number of difficult issues, such as the appropriate measure of damages, disputes between owners of conflicting patents, and how to protect the integrity of contracts when the law changed. Changes inevitably occurred as litigants and judiciary alike adapted to a more complex inventive and economic environment. However, the system remained true to the Constitution in the belief that the defense of rights in patented invention was important in fostering industrial and economic development.

Economists such as Joseph Schumpeter have linked market concentration and innovation, and patent rights are often felt to encourage the establishment of monopoly enterprises. Thus, an important aspect of the enforcement of patents and intellectual property in general depends on competition or antitrust policies. The attitudes of the judiciary towards patent conflicts are primarily shaped by its interpretation of the monopoly aspect of the patent grant. The American judiciary in the early nineteenth century did not recognize patents as monopolies, arguing that patentees added to social welfare through innovations that had never existed before, whereas monopolists secured for themselves rights that already belonged to the public. Ultimately, the judiciary came to openly recognize that the enforcement and protection of all property rights involved trade-offs between individual monopoly benefits and social welfare.

The passage of the Sherman Act in 1890 was associated with a populist emphasis on the need to protect the public from corporate monopolies, including those based on patent protection, and raised the prospect of conflicts between patent policies and the promotion of social welfare through industrial competition. Firms have rarely been charged directly with antitrust violations based on patent issues. At the same time, a number of landmark restraint-of-trade lawsuits have involved technological innovators, from innovative enterprises such as John Deere & Co., American Can and International Harvester in the early decades of the twentieth century, through the numerous cases since 1970 against IBM, Xerox and Eastman Kodak, to the more recent cases against Intel and Microsoft. The evidence suggests that, holding other factors constant, more innovative firms and those with larger patent stocks are more likely to be charged with antitrust violations. A growing fraction of cases involve firms jointly charged with antitrust violations that are linked to patent-based market power and to concerns about “innovation markets.”

The Japanese Patent System

Japan emerged from the Meiji era as a follower nation which deliberately designed institutions to try to emulate those of the most advanced industrial countries. Accordingly, in 1886 Takahashi Korekiyo was sent on a mission to examine patent systems in Europe and the United States. The Japanese envoy was not favorably impressed with the European countries in this regard. Instead, he reported: ” … we have looked about us to see what nations are the greatest, so that we could be like them; … and we said, `What is it that makes the United States such a great nation?’ and we investigated and we found it was patents, and we will have patents.” The first national patent statute in Japan was passed in 1888, and copied many features of the U.S. system, including the examination procedures.

However, even in this first statute there were differences that reflected Japanese priorities and the “wise eclecticism of Japanese legislators.” For instance, patents were not granted to foreigners; protection could not be obtained for fashion, food products, or medicines; patents that were not worked within three years could be revoked; and severe remedies, including penal servitude, were imposed for infringement. After Japan became a signatory of the Paris Convention, a new law was passed in 1899 which amended existing legislation to accord with the agreements of the Convention and extended protection to foreigners. The influence of the German laws was evident in subsequent reforms in 1909 (petty or utility patents were protected) and 1921 (protection was removed from chemical products, work-for-hire doctrines were adopted, and an opposition procedure was introduced). The Act of 1921 also permitted the state to revoke a patent grant, on payment of appropriate compensation, if this was deemed in the public interest. Medicines, food and chemical products could not be patented, but protection could be obtained for processes relating to their manufacture.

The modern Japanese patent system is an interesting amalgam of features drawn from the major patent institutions in the world. Patent applications are filed, and the applicants then have seven years within which they can request an examination. Before 1996 examined patents were published prior to the actual grant, and could be opposed before the final grant; but at present, opposition can only occur in the first six months after the initial grant. Patents are also given for utility models or incremental inventions which are required to satisfy a lower standard of novelty and nonobviousness and can be more quickly commercialized. It has been claimed that the Japanese system favors the filing of a plethora of narrowly defined claims for utility models that build on the more substantive contributions of patent grants, leading to the prospect of an anti-commons through “patent flooding.” Others argue that utility models aid diffusion and innovation in the early stages of the patent term, and that the pre-grant publication of patent specifications also promotes diffusion.

Harmonization of International Patent Laws

Today very few developed countries would seriously consider eliminating statutory protection for intellectual property, but in the second half of the nineteenth century the “patent controversy” pitted advocates of patent rights against an effective abolitionist movement. For a short period the latter group was strong enough to obtain support in favor of dismantling the patent systems in countries such as England, and in 1863 the Congress of German Economists declared “patents of invention are injurious to common welfare.” The movement achieved its greatest victory in Holland, which repealed its patent legislation in 1869. The abolitionists based their arguments on the benefits of free trade and competition and viewed patents as part of a protectionist strategy analogous to tariffs. Instead of monopoly awards to inventors, their efforts could be rewarded by alternative policies, such as stipends from the government, payments from private industry or associations formed for that purpose, or simply through the lead time that the first inventor acquired over competitors by virtue of his prior knowledge.

The decisive victory of the patent proponents shifted the focus of interest to the other extreme, and led to efforts to attain uniformity in intellectual property rights regimes across countries. Part of the impetus for change occurred because the costs of discordant national rules became more burdensome as the volume of international trade in industrial products grew over time. Americans were also concerned about the lack of protection accorded to their exhibits in the increasingly prominent World’s Fairs. Indeed, the first international patent convention was held in Austria in 1873, at the suggestion of U.S. policy makers, who wanted to be certain that their inventors would be adequately protected at the International Exposition in Vienna that year. It also yielded an opportunity to protest the provisions in Austrian law which discriminated against foreigners, including a requirement that patents had to be worked within one year or risk invalidation. The Vienna Convention adopted several resolutions, including a recommendation that the United States opposed, in favor of compulsory licenses if they were deemed in the public interest. However, the convention followed the U.S. lead and did not approve compulsory working requirements.

International conventions proliferated in subsequent years, and their tenor tended to reflect the opinions of the conveners. Their objective was not to reach compromise solutions that would reflect the needs and wishes of all participants, but rather to promote preconceived ideas. The overarching goal was to pursue uniform international patent laws, although there was little agreement about the finer points of these laws. It became clear that the goal of complete uniformity was not practicable, given the different objectives, ideologies and economic circumstances of participants. Nevertheless, in 1884 the International Union for the Protection of Industrial Property was signed by Belgium, Portugal, France, Guatemala, Italy, the Netherlands, San Salvador, Serbia, Spain and Switzerland. The United States became a member in 1887, and a significant number of developing countries followed suit, including Brazil, Bulgaria, Cuba, the Dominican Republic, Ceylon, Mexico, Trinidad and Tobago and Indonesia, among others.

The United States was the most prolific patenting nation in the world, many of the major American enterprises owed their success to patents and were expanding into international markets, and the U.S. patent system was recognized as the most successful. It is therefore not surprising that patent harmonization implied convergence towards the American model despite resistance from other nations. Countries such as Germany were initially averse to extending equal protection to foreigners because they feared that their domestic industry would be overwhelmed by American patents. Ironically, because its patent laws were the most liberal towards patentees, the United States found itself with weaker bargaining abilities than nations who could make concessions by changing their provisions. The U.S. pressed for the adoption of reciprocity (which would ensure that American patentees were treated as favorably abroad as in the United States) but this principle was rejected in favor of “national treatment” (American patentees were to be granted the same rights as nationals of the foreign country). This likely influenced the U.S. tendency to use bilateral trade sanctions rather than multilateral conventions to obtain reforms in international patent policies.

It was commonplace in the nineteenth century to rationalize and advocate close links between trade policies, protection, and international laws regarding intellectual property. These links were evident at the most general philosophical level, and at the most specific, especially in terms of compulsory working requirements and provisions to allow imports by the patentee. For instance, the 1880 Paris Convention considered the question of imports of the patented product by the patentee. According to the laws of France, Mexico and Tunisia, such importation would result in the repeal of the patent grant. The Convention inserted an article that explicitly ruled out forfeiture of the patent under these circumstances, which led some French commentators to argue that “the laws on industrial property… will be truly disastrous if they do not have a counterweight in tariff legislation.” The movement to create an international patent system elucidated the fact that intellectual property laws do not exist in a vacuum, but are part of a bundle of rights that are affected by other laws and policies.

Conclusion

Appropriate institutions to promote creations in the material and intellectual sphere are especially critical because ideas and information are public goods that are characterized by nonrivalry and nonexclusion. Once the initial costs are incurred, ideas can be reproduced at zero marginal cost and it may be difficult to exclude others from their use. Thus, in a competitive market, public goods may suffer from underprovision or may never be created because of a lack of incentive on the part of the original provider who bears the initial costs but may not be able to appropriate the benefits. Market failure can be ameliorated in several ways, for instance through government provision, rewards or subsidies to original creators, private patronage, and through the creation of intellectual property rights.

Patents allow the initial producers a limited period during which they are able to benefit from a right of exclusion. If creativity is a function of expected profits, these grants to inventors have the potential to increase social production possibilities at lower cost. Disclosure requirements promote diffusion, and the expiration of the temporary monopoly right ultimately adds to the public domain. Overall welfare is enhanced if the social benefits of diffusion outweigh the deadweight and social costs of temporary exclusion. This period of exclusion may be costly for society, especially if future improvements are deterred, and if rent-seeking such as redistributive litigation results in wasted resources. Much attention has also been accorded to theoretical features of the optimal system, including the breadth, longevity, and height of patent and copyright grants.

However, strongly enforced rights do not always benefit the producers and owners of intellectual property rights, especially if there is a prospect of cumulative invention where follow-on inventors build on the first discovery. Thus, more nuanced models are ambivalent about the net welfare benefits of strong exclusive rights to inventions. Indeed, network models imply that the social welfare of even producers may increase from weak enforcement if more extensive use of the product increases the value to all users. Under these circumstances, the patent owner may benefit from the positive externalities created by piracy. In the absence of royalties, producers may appropriate returns through ancillary means, such as the sale of complementary items or improved reputation. In a variant of the durable-goods monopoly problem, it has been shown that piracy can theoretically increase the demand for products by ensuring that producers can credibly commit to uniform prices over time. Also in this vein, price and/or quality discrimination of non-private goods across pirates and legitimate users can result in net welfare benefits for society and for the individual firm. If the cost of imitation increases with quality, infringement can also benefit society if it causes firms to adopt a strategy of producing higher quality commodities.

Economic theorists who are troubled by the imperfections of intellectual property grants have proposed alternative mechanisms that lead to more satisfactory mathematical solutions. Theoretical analyses have advanced our understanding in this area, but such models by their nature cannot capture many complexities. They tend to overlook such factors as the potential for greater corruption or arbitrariness in the administration of alternatives to patents. Similarly, they fail to appreciate the role of private property rights in conveying information and facilitating markets, and their value in reducing risk and uncertainty for independent inventors with few private resources. The analysis becomes even less satisfactory when producers belong to different countries than consumers. Thus, despite the flurry of academic research on the economics of intellectual property, we have not progressed far beyond Fritz Machlup’s declaration that our state of knowledge does not allow us to either recommend the introduction or the removal of such systems. Existing studies leave a wide area of ambiguity about the causes and consequences of institutional structures in general, and their evolution across time and region.

In the realm of intellectual property, questions from four centuries ago are still current, ranging from its philosophical underpinnings, to whether patents and copyrights constitute optimal policies towards intellectual inventions, to the growing concerns of international political economy. A number of scholars are so impressed with technological advances in the twenty-first century that they argue we have reached a critical juncture where we need completely new institutions. Throughout their history, patent and copyright regimes have confronted and accommodated technological innovations that were no less significant and contentious for their time. An economist from the nineteenth century would have been equally familiar with considerations about whether uniformity in intellectual property rights across countries harmed or benefited global welfare, and whether piracy might be to the advantage of developing countries. Similarly, the link between trade and intellectual property rights that informs the TRIPS (trade-related aspects of intellectual property rights) agreement was quite standard two centuries ago.

Today the majority of patents are filed in developed countries by the residents of developed countries, most notably those of Japan and the United States. The developing countries of the twenty-first century are under significant political pressure to adopt stronger patent laws and enforcement, even though few patents are filed by residents of the developing countries. Critics of intellectual property rights point to costs, such as monopoly rents and higher barriers to entry, administrative costs, outflows of royalty payments to foreign entities, and a lack of indigenous innovation. Other studies, however, have more optimistic findings regarding the role of patents in economic and social development. They suggest that stronger protection can encourage more foreign direct investment, greater access to technology, and increased benefits from trade openness. Moreover, both economic history and modern empirical research indicate that stronger patent rights and more effective markets in invention can, by encouraging and enabling the inventiveness of ordinary citizens of developing countries, help to increase social and economic welfare.

Patent Statistics for France, Britain, the United States and Germany, 1790-1960
(A period indicates that data are not available.)
YEAR FRANCE BRITAIN U.S. GERMANY
1790 . 68 3 .
1791 34 57 33 .
1792 29 85 11 .
1793 4 43 20 .
1794 0 55 22 .
1795 1 51 12 .
1796 8 75 44 .
1797 4 54 51 .
1798 10 77 28 .
1799 22 82 44 .
1800 16 96 41 .
1801 34 104 44 .
1802 29 107 65 .
1803 45 73 97 .
1804 44 60 84 .
1805 63 95 57 .
1806 101 99 63 .
1807 66 94 99 .
1808 61 95 158 .
1809 52 101 203 .
1810 93 108 223 .
1811 66 115 215 0
1812 96 119 238 2
1813 88 142 181 2
1814 53 96 210 1
1815 77 102 173 10
1816 115 118 206 10
1817 162 103 174 16
1818 153 132 222 18
1819 138 101 156 10
1820 151 97 155 10
1821 180 109 168 11
1822 175 113 200 8
1823 187 138 173 22
1824 217 180 228 25
1825 321 250 304 17
1826 281 131 323 67
1827 333 150 331 69
1828 388 154 368 87
1829 452 130 447 59
1830 366 180 544 57
1831 220 150 573 34
1832 287 147 474 46
1833 431 180 586 76
1834 576 207 630 66
1835 556 231 752 73
1836 582 296 702 65
1837 872 256 426 46
1838 1312 394 514 104
1839 730 411 404 125
1840 947 440 458 156
1841 925 440 490 162
1842 1594 371 488 153
1843 1397 420 493 160
1844 1863 450 478 158
1845 2666 572 473 256
1846 2750 493 566 252
1847 2937 493 495 329
1848 1191 388 583 256
1849 1953 514 984 253
1850 2272 523 883 308
1851 2462 455 752 274
1852 3279 1384 885 272
1853 4065 2187 844 287
1854 4563 1878 1755 276
1855 5398 2046 1881 287
1856 5761 1094 2302 393
1857 6110 2028 2674 414
1858 5828 1954 3455 375
1859 5439 1977 4160 384
1860 6122 2063 4357 550
1861 5941 2047 3020 551
1862 5859 2191 3214 630
1863 5890 2094 3773 633
1864 5653 2024 4630 557
1865 5472 2186 6088 609
1866 5671 2124 8863 549
1867 6098 2284 12277 714
1868 6103 2490 12526 828
1869 5906 2407 12931 616
1870 3850 2180 12137 648
1871 2782 2376 11659 458
1872 4875 2771 12180 958
1873 5074 2974 11616 1130
1874 5746 3162 12230 1245
1875 6007 3112 13291 1382
1876 6736 3435 14169 1947
1877 7101 3317 12920 1604
1878 7981 3509 12345 4200
1879 7828 3524 12165 4410
1880 7660 3741 12902 3960
1881 7813 3950 15500 4339
1882 7724 4337 18091 4131
1883 8087 3962 21162 4848
1884 8253 9983 19118 4459
1885 8696 8775 23285 4018
1886 9011 9099 21767 4008
1887 8863 9226 20403 3882
1888 8669 9309 19551 3923
1889 9287 10081 23324 4406
1890 9009 10646 25313 4680
1891 9292 10643 22312 5550
1892 9902 11164 22647 5900
1893 9860 11600 22750 6430
1894 10433 11699 19855 6280
1895 10257 12191 20856 5720
1896 11430 12473 21822 5410
1897 12550 14210 22067 5440
1898 12421 14167 20377 5570
1899 12713 14160 23278 7430
1900 12399 13710 24644 8784
1901 12103 13062 25546 10508
1902 12026 13764 27119 10610
1903 12469 15718 31029 9964
1904 12574 15089 30258 9189
1905 12953 14786 29775 9600
1906 13097 14707 31170 13430
1907 13170 16272 35859 13250
1908 13807 16284 32735 11610
1909 13466 15065 36561 11995
1910 16064 15269 35141 12100
1911 15593 17164 32856 12640
1912 15737 15814 36198 13080
1913 15967 16599 33917 13520
1914 12161 15036 39892 12350
1915 5056 11457 43118 8190
1916 3250 8424 43892 6271
1917 4100 9347 40935 7399
1918 4400 10809 38452 7340
1919 10500 12301 36797 7766
1920 18950 14191 37060 14452
1921 17700 17697 37798 15642
1922 18300 17366 38369 20715
1923 19200 17073 38616 20526
1924 19200 16839 42584 18189
1925 18000 17199 46432 15877
1926 18200 17333 44733 15500
1927 17500 17624 41717 15265
1928 22000 17695 42357 15598
1929 24000 18937 45267 20202
1930 24000 20888 45226 26737
1931 24000 21949 51761 25846
1932 21850 21150 53504 26201
1933 20000 17228 48807 21755
1934 19100 16890 44452 17011
1935 18000 17675 40663 16139
1936 16700 17819 39831 16750
1937 16750 17614 37738 14526
1938 14000 19314 38102 15068
1939 15550 17605 43118 16525
1940 10100 11453 42323 14647
1941 8150 11179 41171 14809
1942 10000 7962 38514 14648
1943 12250 7945 31101 14883
1944 11650 7712 28091 .
1945 7360 7465 25712 .
1946 11050 8971 21859 .
1947 13500 11727 20191 .
1948 13700 15558 24007 .
1949 16700 20703 35224 .
1950 17800 13509 43219 .
1951 25200 13761 44384 27767
1952 20400 21380 43717 37179
1953 43000 17882 40546 37113
1954 34000 17985 33910 19140
1955 23000 20630 30535 14760
1956 21900 19938 46918 18150
1957 23000 25205 42873 20467
1958 24950 18531 48450 19837
1959 41600 18157 52509 22556
1960 35000 26775 47286 19666
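As an illustration of how such a long series can be summarized, the sketch below (plain Python, using only figures copied from the table above) computes the compound annual growth rate of U.S. patents between 1840 and 1900; the endpoint years are chosen arbitrarily for the example.

```python
# Compound annual growth rate (CAGR) implied by two endpoint values.
def cagr(start_value, end_value, years):
    """Average annual growth rate that carries start_value to end_value."""
    return (end_value / start_value) ** (1 / years) - 1

# Values copied from the table above: U.S. patents in 1840 and 1900.
us_patents = {1840: 458, 1900: 24644}
rate = cagr(us_patents[1840], us_patents[1900], 1900 - 1840)
print(f"U.S. patents grew roughly {rate:.1%} per year, 1840-1900")
```

The same function applied to any other pair of years in the table gives the corresponding phase average, though endpoint choice matters when single years are unusually high or low.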

Additional Reading

Khan, B. Zorina. The Democratization of Invention: Patents and Copyrights in American Economic Development. New York: Cambridge University Press, 2005.

Khan, B. Zorina, and Kenneth L. Sokoloff. “Institutions and Technological Innovation during Early Economic Growth, 1790-1930.” NBER Working Paper No. 10966. Cambridge, MA: December 2004. (Available at www.nber.org.)

Bibliography

Besen, Stanley M., and Leo J. Raskind, “Introduction to the Law and Economics of Intellectual Property.” Journal of Economic Perspectives 5, no. 1 (1991): 3-27.

Bugbee, Bruce. The Genesis of American Patent and Copyright Law. Washington, DC: Public Affairs Press, 1967.

Coulter, Moureen. Property in Ideas: The Patent Question in Mid-Victorian England. Kirksville, MO: Thomas Jefferson Press, 1991.

Dutton, H. I. The Patent System and Inventive Activity during the Industrial Revolution, 1750-1852. Manchester, UK: Manchester University Press, 1984.

Epstein, R. “Industrial Inventions: Heroic or Systematic?” Quarterly Journal of Economics 40 (1926): 232-72.

Gallini, Nancy T. “The Economics of Patents: Lessons from Recent U.S. Patent Reform.” Journal of Economic Perspectives 16, no. 2 (2002): 131–54.

Gilbert, Richard and Carl Shapiro. “Optimal Patent Length and Breadth.” Rand Journal of Economics 21 (1990): 106-12.

Gilfillan, S. Colum. The Sociology of Invention. Cambridge, MA: Follett, 1935.

Gomme, A. A. Patents of Invention: Origin and Growth of the Patent System in Britain. London: Longmans Green, 1946.

Harding, Herbert. Patent Office Centenary. London: Her Majesty’s Stationery Office, 1953.

Hilaire-Pérez, Liliane. Inventions et Inventeurs en France et en Angleterre au XVIIIe siècle. Lille: Université de Lille, 1994.

Hilaire-Pérez, Liliane. L’invention technique au siècle des Lumières. Paris: Albin Michel, 2000.

Jeremy, David J., Transatlantic Industrial Revolution: The Diffusion of Textile Technologies between Britain and America, 1790-1830s. Cambridge, MA: MIT Press, 1981.

Khan, B. Zorina. “Property Rights and Patent Litigation in Early Nineteenth-Century America.” Journal of Economic History 55, no. 1 (1995): 58-97.

Khan, B. Zorina. “Married Women’s Property Right Laws and Female Commercial Activity.” Journal of Economic History 56, no. 2 (1996): 356-88.

Khan, B. Zorina. “Federal Antitrust Agencies and Public Policy towards Patents and Innovation.” Cornell Journal of Law and Public Policy 9 (1999): 133-69.

Khan, B. Zorina. “‘Not for Ornament’: Patenting Activity by Women Inventors.” Journal of Interdisciplinary History 33, no. 2 (2000): 159-95.

Khan, B. Zorina. “Technological Innovations and Endogenous Changes in U.S. Legal Institutions, 1790-1920.” NBER Working Paper No. 10346. Cambridge, MA: March 2004. (Available at www.nber.org.)

Khan, B. Zorina, and Kenneth L. Sokoloff. “‘Schemes of Practical Utility’: Entrepreneurship and Innovation among ‘Great Inventors’ in the United States, 1790-1865.” Journal of Economic History 53, no. 2 (1993): 289-307.

Khan, B. Zorina, and Kenneth L. Sokoloff. “Entrepreneurship and Technological Change in Historical Perspective.” Advances in the Study of Entrepreneurship, Innovation, and Economic Growth 6 (1993): 37-66.

Khan, B. Zorina, and Kenneth L. Sokoloff. “Two Paths to Industrial Development and Technological Change.” In Technological Revolutions in Europe, 1760-1860, edited by Maxine Berg and Kristine Bruland. London: Edward Elgar, 1997.

Khan, B. Zorina, and Kenneth L. Sokoloff. “The Early Development of Intellectual Property Institutions in the United States.” Journal of Economic Perspectives 15, no. 3 (2001): 233-46.

Khan, B. Zorina, and Kenneth L. Sokoloff. “Innovation of Patent Systems in the Nineteenth Century: A Comparative Perspective.” Unpublished manuscript (2001).

Khan, B. Zorina, and Kenneth L. Sokoloff. “Institutions and Democratic Invention in Nineteenth-century America.” American Economic Review Papers and Proceedings 94 (2004): 395-401.

Khan, B. Zorina, and Kenneth L. Sokoloff. “Institutions and Technological Innovation during Early Economic Growth: Evidence from the Great Inventors of the United States, 1790-1930.” In Institutions and Economic Growth, edited by Theo Eicher and Cecilia Garcia-Penalosa. Cambridge, MA: MIT Press, 2006.

Lamoreaux, Naomi R. and Kenneth L. Sokoloff. “Long-Term Change in the Organization of Inventive Activity.” Science, Technology and the Economy 93 (1996): 1286-92.

Lamoreaux, Naomi R. and Kenneth L. Sokoloff. “The Geography of Invention in the American Glass Industry, 1870-1925.” Journal of Economic History 60, no. 3 (2000): 700-29.

Lamoreaux, Naomi R. and Kenneth L. Sokoloff. “Market Trade in Patents and the Rise of a Class of Specialized Inventors in the Nineteenth-century United States.” American Economic Review 91, no. 2 (2001): 39-44.

Landes, David S. Unbound Prometheus: Technological Change and Industrial Development in Western Europe from 1750 to the Present. Cambridge: Cambridge University Press, 1969.

Lerner, Josh. “Patent Protection and Innovation over 150 Years.” NBER Working Paper No. 8977. Cambridge, MA: June 2002.

Levin, Richard, A. Klevorick, R. Nelson and S. Winter. “Appropriating the Returns from Industrial Research and Development.” Brookings Papers on Economic Activity 3 (1987): 783-820.

Lo, Shih-Tse. “Strengthening Intellectual Property Rights: Evidence from the 1986 Taiwanese Patent Reforms.” Ph.D. diss., University of California at Los Angeles, 2005.

Machlup, Fritz. An Economic Review of the Patent System. Washington, DC: U.S. Government Printing Office, 1958.

Machlup, Fritz. “The Supply of Inventors and Inventions.” In The Rate and Direction of Inventive Activity, edited by R. Nelson. Princeton: Princeton University Press, 1962.

Machlup, Fritz, and Edith Penrose. “The Patent Controversy in the Nineteenth Century.” Journal of Economic History 10, no. 1 (1950): 1-29.

Macleod, Christine. Inventing the Industrial Revolution. Cambridge: Cambridge University Press, 1988.

McCloy, Shelby T. French Inventions of the Eighteenth Century. Lexington: University of Kentucky Press, 1952.

Mokyr, Joel. The Lever of Riches: Technological Creativity and Economic Growth. New York: Oxford University Press, 1990.

Moser, Petra. “How Do Patent Laws Influence Innovation? Evidence from Nineteenth-century World Fairs.” American Economic Review 95, no. 4 (2005): 1214-36.

O’Dell, T. H. Inventions and Official Secrecy: A History of Secret Patents in the United Kingdom. Oxford: Clarendon Press, 1994.

Penrose, Edith. The Economics of the International Patent System. Baltimore: Johns Hopkins University Press, 1951.

Sáiz González, Patricio. Invención, patentes e innovación en la España contemporánea. Madrid: OEPM, 1999.

Schmookler, Jacob. “Economic Sources of Inventive Activity.” Journal of Economic History 22 (1962): 1-20.

Schmookler, Jacob. Invention and Economic Growth. Cambridge, MA: Harvard University Press, 1966.

Schmookler, Jacob, and Zvi Griliches. “Inventing and Maximizing.” American Economic Review (1963): 725-29.

Schiff, Eric. Industrialization without National Patents: The Netherlands, 1869-1912; Switzerland, 1850-1907. Princeton: Princeton University Press, 1971.

Sokoloff, Kenneth L. “Inventive Activity in Early Industrial America: Evidence from Patent Records, 1790-1846.” Journal of Economic History 48, no. 4 (1988): 813-50.

Sokoloff, Kenneth L. “Invention, Innovation, and Manufacturing Productivity Growth in the Antebellum Northeast.” In American Economic Growth and Standards of Living before the Civil War, edited by Robert E. Gallman and John Joseph Wallis, 345-78. Chicago: University of Chicago Press, 1992.

Sokoloff, Kenneth L., and B. Zorina Khan. “The Democratization of Invention during Early Industrialization: Evidence from the United States, 1790-1846.” Journal of Economic History 50, no. 2 (1990): 363-78.

Sutthiphisal, Dhanoos. “Learning-by-Producing and the Geographic Links between Invention and Production.” Unpublished manuscript, McGill University, 2005.

Takeyama, Lisa N. “The Welfare Implications of Unauthorized Reproduction of Intellectual Property in the Presence of Demand Network Externalities.” Journal of Industrial Economics 42, no. 2 (1994): 155-66.

U.S. Patent Office. Annual Report of the Commissioner of Patents. Washington, DC: various years.

Van Dijk, T. “Patent Height and Competition in Product Improvements.” Journal of Industrial Economics 44, no. 2 (1996): 151-67.

Vojacek, Jan. A Survey of the Principal National Patent Systems. New York: Prentice-Hall, 1936.

Woodcroft, Bennet. Alphabetical Index of Patentees of Inventions [1617-1852]. New York: A. Kelley, 1854, reprinted 1969.

Woodcroft, Bennet. Titles of Patents of Invention: Chronologically Arranged from March 2, 1617 to October 1, 1852. London: Queen’s Printing Office, 1854.

Citation: Khan, B. “An Economic History of Patent Institutions”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/an-economic-history-of-patent-institutions/

The Economic History of Norway

Ola Honningdal Grytten, Norwegian School of Economics and Business Administration

Overview

Norway, with its population of 4.6 million on the northern flank of Europe, is today one of the wealthiest nations in the world, measured both as GDP per capita and in capital stock. On the United Nations Human Development Index, Norway has been among the top three countries for several years, and in some years the very top nation. Huge stocks of natural resources combined with a skilled labor force and the adoption of new technology made Norway a prosperous country during the nineteenth and twentieth centuries.

Table 1 shows rates of growth in the Norwegian economy from 1830 to the present using inflation-adjusted gross domestic product (GDP). This article splits the economic history of Norway into two major phases — before and after the nation gained its independence in 1814.

Table 1
Phases of Growth in the Real Gross Domestic Product of Norway, 1830-2003

(annual growth rates as percentages)

Period GDP GDP per capita
1830-1843 1.91 0.86
1843-1875 2.68 1.59
1875-1914 2.02 1.21
1914-1945 2.28 1.55
1945-1973 4.73 3.81
1973-2003 3.28 2.79
1830-2003 2.83 2.00

Source: Grytten (2004b)
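As a rough consistency check on Table 1 (a sketch only: it treats each phase’s rate as continuously compounded, which is only approximately right for annual percentage rates), the full-period 1830-2003 growth rate should be close to the average of the phase rates weighted by phase length:

```python
# Phase boundaries and annual GDP growth rates (percent), from Table 1.
phases = {
    (1830, 1843): 1.91,
    (1843, 1875): 2.68,
    (1875, 1914): 2.02,
    (1914, 1945): 2.28,
    (1945, 1973): 4.73,
    (1973, 2003): 3.28,
}

# Weight each phase's rate by its length in years and average.
total_years = sum(end - start for start, end in phases)
weighted = sum((end - start) * rate
               for (start, end), rate in phases.items()) / total_years
print(f"Year-weighted average: {weighted:.2f}% vs. reported 2.83% for 1830-2003")
```

The weighted average lands within a few hundredths of a percentage point of the reported full-period figure, which is what one would expect if the phase rates were computed from the same underlying GDP series.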

Before Independence

The Norwegian economy was traditionally based on local farming communities combined with other industries, chiefly fishing, hunting, wood and timber, along with a merchant fleet engaged in domestic and international trade. Due to topography and climatic conditions, the communities in the north and west were more dependent on fishing and foreign trade than the communities in the south and east, which relied mainly on agriculture. Agricultural output, fish catches and wars were decisive for the swings in the economy prior to independence. This is reflected in Figure 1, which reports a consumer price index for Norway from 1516 to the present.

The peaks in this figure mark the sixteenth-century Price Revolution (1530s to 1590s), the Thirty Years War (1618-1648), the Great Nordic War (1700-1721), the Napoleonic Wars (1800-1815), the only period of hyperinflation in Norway — World War I (1914-1918) — and the stagflation period, i.e. high rates of inflation combined with a slowdown in production, in the 1970s and early 1980s.

Figure 1
Consumer Price Index for Norway, 1516-2003 (1850 = 100).

Source: Grytten (2004a)

During the last decades of the eighteenth century the Norwegian economy bloomed in a first era of liberalism. Foreign trade in fish and timber had been important to the Norwegian economy for centuries, and the merchant fleet was now growing rapidly. Bergen, on the west coast, was the major city, with a Hanseatic office and one of the Nordic countries’ largest ports for domestic and foreign trade.

When Norway gained its independence from Denmark in 1814, after a close union lasting 417 years, it was a typical egalitarian country with a high degree of self-sufficiency based on agriculture, fisheries and hunting. According to the population censuses of 1801 and 1815, more than ninety percent of the population of 0.9 million lived in rural areas, mostly on small farms.

After Independence (1814)

Figure 2 shows the annual development of GDP by expenditure (in fixed 2000 prices) from 1830 to 2003. The series reveals fairly steady growth, with few large fluctuations. Economic growth as a more or less continuous process, however, started only in the 1840s, and the growth process slowed during the last three decades of the nineteenth century. The years 1914-1945 were more volatile than any other period in question, while there was an impressive and steady rate of growth from 1945 until the mid-1970s, and slower growth thereafter.

Figure 2
Gross Domestic Product for Norway by Expenditure Category
(in 2000 Norwegian Kroner)

Source: Grytten (2004b)

Stagnation and Institution Building, 1814-1843

The newborn state lacked its own institutions, industrial entrepreneurs and domestic capital. However, due to its huge stocks of natural resources and its geographical closeness to the sea and to the United Kingdom, the new state, linked to Sweden in a loose royal union, seized its opportunities after some decades. By 1870 it had become a relatively wealthy nation. Measured in GDP per capita, Norway was well above the European average, in the middle of the West European countries, and in fact well above Sweden.

During the first decades after its independence from Denmark, the new state struggled with the international recession after the Napoleonic wars, deflationary monetary policy, and protectionism from the UK.

The Central Bank of Norway was founded in 1816, and a national currency, the spesidaler, pegged to silver, was introduced. The daler depreciated heavily during the troubled recession years of the 1820s.

The Great Boom, 1843-1875

After the Norwegian spesidaler regained its par value against silver in 1842, Norway saw a period of significant economic growth up to the mid-1870s, a performance that only a few other countries matched. The growth process was driven largely by high productivity growth in agriculture and the success of the foreign sector. The adoption of new structures and technology, along with substitution from arable to livestock production, raised labor productivity in agriculture by about 150 percent between 1835 and 1910. Exports of timber, fish and, in particular, maritime services achieved high growth rates. In fact, Norway became a major power in shipping services during this period, accounting for about seven percent of the world merchant fleet in 1875. Norwegian sailing vessels carried international cargoes all over the world at low prices.

The success of the Norwegian foreign sector can be explained by a number of factors. Liberalization of world trade and high international demand secured a market for Norwegian goods and services. In addition, Norway had vast stocks of fish and timber along with maritime skills. According to recent calculations, GDP per capita grew at an annual rate of 1.6 percent from 1843 to 1876, well above the European average. At the same time the annual growth rate of Norwegian exports was 4.8 percent. The first modern large-scale manufacturing industry in Norway emerged in the 1840s, when textile plants and mechanized industry were established. A second wave of industrialization took place in the 1860s and 1870s. Following the rapid productivity growth in agriculture, the food processing and dairy industries also showed high growth in this period.

During this great boom, capital was imported mainly from Britain, but also from Sweden, Denmark and Germany, the four most important Norwegian trading partners at the time. In 1536 the King of Denmark and Norway had chosen the Lutheran faith as the state religion. In consequence of the Reformation, reading became compulsory, and Norway thus acquired a generally skilled and independent labor force. The constitution of 1814 also cleared the way for liberalism and democracy. The puritan revivals of the nineteenth century created a business environment that fostered entrepreneurship, domestic capital formation and a productive labor force. In the western and southern parts of the country these puritan movements are still strong, both in daily life and within business.

Relative Stagnation with Industrialization, 1875-1914

Norway’s economy was hit hard by the “depression” from the mid-1870s to the early 1890s. GDP stagnated, particularly during the 1880s, and prices fell until 1896. This stagnation is mirrored in the large-scale emigration from Norway to North America in the 1880s. At its peak in 1882, as many as 28,804 persons, 1.5 percent of the population, left the country. All in all, 250,000 emigrated in the period 1879-1893, equal to 60 percent of the birth surplus. Only Ireland had a higher emigration rate than Norway between 1836 and 1930, when 860,000 Norwegians left the country.

The long slowdown can largely be explained by Norway’s dependence on the international economy, and in particular on the United Kingdom, which experienced slower economic growth than the other major economies of the time. As a result of the international slowdown, Norwegian exports contracted in several years, though they expanded in others. A second reason for the slowdown was the introduction of the international gold standard. Norway adopted gold in January 1874, and due to its trade deficit and lack of gold and capital, the country experienced a huge contraction in gold reserves and in the money stock. The deflationary effect strangled the economy. Going onto the gold standard also caused the appreciation of the Norwegian currency, the krone, as gold became relatively more expensive compared to silver. A third explanation of Norway’s economic problems in the 1880s is the transformation from sailing to steam vessels. By 1875 Norway had the fourth-largest merchant fleet in the world. However, due to a lack of capital and technological skills, the transformation from sail to steam was slow. Norwegian ship owners found a niche in cheap second-hand sailing vessels, but their market was diminishing, and when the Norwegian steam fleet finally surpassed the sailing fleet in 1907, Norway was no longer a major maritime power.

A short boom occurred from the early 1890s to 1899. Then a crash in the Norwegian building industry led to a major financial crash and stagnation in GDP per capita from 1900 to 1905. Thus, from the mid-1870s until 1905, Norway performed relatively poorly. Measured in GDP per capita, Norway, like Britain, experienced a significant stagnation relative to most western economies.

After 1905, when Norway gained full independence from Sweden, a heavy wave of industrialization took place. In the 1890s the fish preserving, cellulose and paper industries had already started to grow rapidly. From 1905, when Norsk Hydro was established, manufacturing industry connected to hydroelectric power took off. It is argued, quite convincingly, that if there was an industrial breakthrough in Norway, it must have taken place during the years 1905-1920. However, the primary sector, with its labor-intensive agriculture and increasingly capital-intensive fisheries, was still the biggest sector.

Crises and Growth, 1914-1945

Officially Norway was neutral during World War I. However, in terms of the economy, the government clearly took the side of the British and their allies. Through several treaties Norway gave privileges to the allied powers, which protected the Norwegian merchant fleet. During the war’s first years, Norwegian ship owners profited from the war, and the economy boomed. From 1917, when Germany declared unrestricted warfare against all non-friendly vessels, Norway took heavy losses. A recession replaced the boom.

Norway suspended gold redemption in August 1914, and due to inflationary monetary policy during the war and in the first couple of years afterward, demand was very high. When the war came to an end this excess demand was met by a positive shift in supply. Thus, Norway, like other Western countries, experienced a significant boom in the economy from the spring of 1919 to the early autumn of 1920. The boom was accompanied by high inflation, trade deficits, currency depreciation and an overheated economy.

The international postwar recession beginning in autumn 1920 hit Norway more severely than most other countries. In 1921 GDP per capita fell by eleven percent, a decline exceeded only by that of the United Kingdom. There are two major reasons for the devastating effect of the post-war recession. In the first place, as a small open economy, Norway was more sensitive to international recessions than most other countries. This was particularly the case because the recession hit the country’s most important trading partners, the United Kingdom and Sweden, so hard. Secondly, the combination of a strong, mostly pro-cyclical inflationary monetary policy from 1914 to 1920 and a hard deflationary policy thereafter made the crisis worse (Figure 3).

Figure 3
Money Aggregates for Norway, 1910-1930

Source: Klovland (2004a)

In fact, Norway pursued a long, though not consistently maintained, deflationary monetary policy aimed at restoring the par value of the krone (NOK), which was achieved in May 1928. In consequence, another recession hit the economy during the middle of the 1920s. Hence, Norway was one of the worst performers in the western world in the 1920s. This can best be seen in the number of bankruptcies, a huge financial crisis and mass unemployment. Bank losses amounted to seven percent of GDP in 1923. Total unemployment rose from about one percent in 1919 to more than eight percent in 1926 and 1927. In manufacturing it reached more than 18 percent in the same years.

Despite a rapid boom and success within the whaling industry and shipping services, the country never saw a convincing recovery before the Great Depression hit Europe in the late summer of 1930. The worst year for Norway was 1931, when GDP per capita fell by 8.4 percent. This, however, was due not only to the international crisis, but also to a massive and violent labor conflict that year. According to the implicit GDP deflator, prices fell by more than 63 percent from 1920 to 1933.

All in all, however, the depression of the 1930s was milder and shorter in Norway than in most western countries. This was partly due to the deflationary monetary policy of the 1920s, which had forced Norwegian companies to become more efficient in order to survive. It was probably more important, however, that Norway left gold as early as September 27, 1931, only a week after the United Kingdom. Those countries that left gold early, and thereby could employ a more inflationary monetary policy, were the best performers in the 1930s. Among them were Norway and its most important trading partners, the United Kingdom and Sweden.

During the recovery period, Norway in particular saw growth in manufacturing output, exports and import substitution. This can to a large extent be explained by currency depreciation. Also, when the international merchant fleet contracted during the drop in international trade, the Norwegian fleet grew rapidly, as Norwegian ship owners were pioneers in the transformation from steam to diesel engines, tramp to line freights and into a new expanding niche: oil tankers.

The primary sector was still the largest in the economy during the interwar years. Both fisheries and agriculture struggled with overproduction problems, however. These were dealt with by introducing market controls and cartels, partly controlled by the industries themselves and partly by the government.

The business cycle reached its bottom in late 1932. Despite relatively rapid recovery and significant growth both in GDP and in employment, unemployment stayed high, reaching 10-11 percent on an annual basis from 1931 to 1933 (Figure 4).

Figure 4
Unemployment Rate and Public Relief Work
as a Percent of the Work Force, 1919-1939

Source: Hodne and Grytten (2002)

The standard of living deteriorated in the primary sector, among those employed in domestic services, and for the underemployed and unemployed and their households. However, due to the strong deflation, which made consumer prices fall by more than 50 percent from autumn 1920 to summer 1933, employees in manufacturing, construction and crafts experienced an increase in real wages. Unemployment stayed persistently high due to huge growth in the labor supply, as a result of the immigration restrictions imposed by North American countries from the 1920s onwards.

Denmark and Norway were both victims of a German surprise attack on April 9, 1940. After two months of fighting, the allied troops in Norway surrendered on June 7 and the Norwegian royal family and government escaped to Britain.

From then until the end of the war there were two Norwegian economies: the domestic, German-controlled economy and the foreign, Norwegian- and Allied-controlled economy. The foreign economy was primarily built on the huge Norwegian merchant fleet, which was again among the biggest in the world, accounting for more than seven percent of total world tonnage. Ninety percent of this floating capital escaped the Germans. The ships were united into one state-controlled company, Nortraship, whose earnings financed the foreign economy. The domestic economy, however, struggled with a significant fall in production, inflationary pressure and the rationing of important goods, which three million Norwegians had to share with the 400,000 Germans occupying the country.

Economic Planning and Growth, 1945-1973

After the war the challenge was to reconstruct the economy and re-establish political and economic order. The Labor Party, in office from 1935, seized the opportunity to establish a strict social democratic rule, with a growing public sector and widespread centralized economic planning. Norway at first declined the U.S. offer of financial aid after the war. However, due to a lack of hard currencies, it accepted the Marshall aid program. By receiving 400 million dollars from 1948 to 1952, Norway was one of the biggest per capita recipients.

As part of the reconstruction efforts Norway joined the Bretton Woods system, GATT, the IMF and the World Bank. Norway also chose to become a member of NATO and the United Nations. In 1960 the country joined the European Free Trade Association (EFTA). In 1958 Norway had made the krone convertible to the U.S. dollar, as many other western countries did with their currencies.

The years from 1950 to 1973 are often called the golden era of the Norwegian economy. GDP per capita grew at an annual rate of 3.3 percent. Foreign trade grew even faster, unemployment barely existed and the inflation rate was stable. This has often been attributed to the large public sector and good economic planning, and the Nordic model, with its huge public sector, has been called a success in this period. A closer look, however, reveals that the Norwegian growth rate in the period was lower than that of most western nations. The same is true for Sweden and Denmark. The Nordic model delivered social security and evenly distributed wealth, but it did not necessarily generate very high economic growth.

Figure 5
Public Sector as a Percent of GDP, 1900-1990

Source: Hodne and Grytten (2002)

Petroleum Economy and Neoliberalism, 1973 to the Present

After the Bretton Woods system fell apart (between August 1971 and March 1973) and the oil price shock of autumn 1973, most developed economies went into a period of prolonged recession and slow growth. In 1969 Phillips Petroleum had discovered petroleum resources at the Ekofisk field, which was defined as part of the Norwegian continental shelf. This enabled Norway to run a countercyclical fiscal policy during the stagflation period of the 1970s. Thus, economic growth was higher and unemployment lower than in most other western countries. However, since the countercyclical policy focused on branch and company subsidies, Norwegian firms soon learned to adapt to policy makers rather than to markets. Hence, neither productivity nor business structure had the incentives to keep pace with changes in international markets.

Norway lost significant competitive power, and large-scale deindustrialization took place despite efforts to save manufacturing industry. Another reason for deindustrialization was the huge growth of the profitable petroleum sector. Persistently high oil prices from autumn 1973 to the end of 1985 pushed labor costs upward through spillover effects from high wages in the petroleum sector. High labor costs made the Norwegian foreign sector less competitive. Thus, Norway saw deindustrialization at a more rapid pace than most of her largest trading partners. Thanks to the petroleum sector, however, Norway experienced high growth rates in the last three decades of the twentieth century, bringing it to the top of the world GDP per capita list at the dawn of the new millennium. Nevertheless, Norway had economic problems both in the eighties and in the nineties.

In 1981 a conservative government replaced Labor, which had been in power for most of the post-war period. Norway had already joined the international wave of credit liberalization, and the new government added fuel to this policy. However, alongside the credit liberalization, the parliament still pursued a policy that prevented market forces from setting interest rates. Instead rates were set by politicians, in contradiction to the credit liberalization policy. The level of interest rates was an important part of the political game for power, and thus rates were set significantly below the market level. In consequence, a substantial credit boom was created in the early 1980s and continued into the late spring of 1986. As a result, Norway had monetary expansion and an artificial boom, which created an overheated economy. When oil prices fell dramatically from December 1985 onwards, the trade surplus suddenly turned into a huge deficit (Figure 6).

Figure 6
North Sea Oil Prices and Norway’s Trade Balance, 1975-2000

Source: Statistics Norway

The conservative-center government was forced to adopt a tighter fiscal policy, which the new Labor government pursued from May 1986. Interest rates were kept persistently high as the government now tried to run a credible fixed-exchange-rate policy. In the summer of 1990 the Norwegian krone was officially pegged to the ECU. When the international wave of currency speculation reached Norway during autumn 1992, the central bank finally had to suspend the fixed exchange rate and later devalue.

In consequence of these years of monetary expansion and subsequent contraction, most western countries experienced financial crises, and the crisis was relatively severe in Norway. Dwelling prices slid, consumers could not pay their bills, and bankruptcies and unemployment reached new heights. The state took over most of the larger commercial banks to avoid a total financial collapse.

After the suspension of the ECU peg and the following devaluation, Norway experienced growth until 1998, due to optimism, an international boom and high petroleum prices. Then the Asian financial crisis rattled the Norwegian stock market, and at the same time petroleum prices fell rapidly, due to internal problems among the OPEC countries. Hence, the krone depreciated. The fixed exchange rate policy had to be abandoned and the government adopted inflation targeting. Along with the changes in monetary policy, the center coalition government was also able to maintain a tighter fiscal policy. At the same time interest rates were high. As a result, Norway escaped the overheating process of 1993-1997 without any devastating effects. Today the country has a strong and sound economy.

The petroleum sector is still very important in Norway. In this respect the historical tradition of raw material dependency has had its renaissance. Unlike many other countries rich in raw materials, natural resources have helped make Norway one of the most prosperous economies in the world. Important factors for Norway’s ability to turn resource abundance into economic prosperity are an educated work force, the adoption of advanced technology used in other leading countries, stable and reliable institutions, and democratic rule.

References

Basberg, Bjørn L. Handelsflåten i krig: Nortraship: Konkurrent og alliert. Oslo: Grøndahl and Dreyer, 1992.

Bergh, Tore Hanisch, Even Lange and Helge Pharo. Growth and Development. Oslo: NUPI, 1979.

Brautaset, Camilla. “Norwegian Exports, 1830-1865: In Perspective of Historical National Accounts.” Ph.D. dissertation. Norwegian School of Economics and Business Administration, 2002.

Bruland, Kristine. British Technology and European Industrialization. Cambridge: Cambridge University Press, 1989.

Danielsen, Rolf, Ståle Dyrvik, Tore Grønlie, Knut Helle and Edgar Hovland. Norway: A History from the Vikings to Our Own Times. Oslo: Scandinavian University Press, 1995.

Eitrheim, Øyvind, Jan T. Klovland and Jan F. Qvigstad, editors. Historical Monetary Statistics for Norway, 1819-2003. Oslo: Norges Banks skriftserie/Occasional Papers, no. 35, 2004.

Hanisch, Tore Jørgen. “Om virkninger av paripolitikken.” Historisk tidsskrift 58, no. 3 (1979): 223-238.

Hanisch, Tore Jørgen, Espen Søilen and Gunhild Ecklund. Norsk økonomisk politikk i det 20. århundre. Verdivalg i en åpen økonomi. Kristiansand: Høyskoleforlaget, 1999.

Grytten, Ola Honningdal. “A Norwegian Consumer Price Index 1819-1913 in a Scandinavian Perspective.” European Review of Economic History 8, no.1 (2004): 61-79.

Grytten, Ola Honningdal. “A Consumer Price Index for Norway, 1516-2003.” Norges Bank: Occasional Papers, no. 1 (2004a): 47-98.

Grytten, Ola Honningdal. “The Gross Domestic Product for Norway, 1830-2003.” Norges Bank: Occasional Papers, no. 1 (2004b): 241-288.

Hodne, Fritz. An Economic History of Norway, 1815-1970. Trondheim: Tapir, 1975.

Hodne, Fritz. The Norwegian Economy, 1920-1980. London: Croom Helm and St. Martin’s, 1983.

Hodne, Fritz and Ola Honningdal Grytten. Norsk økonomi i det 19. århundre. Bergen: Fagbokforlaget, 2000.

Hodne, Fritz and Ola Honningdal Grytten. Norsk økonomi i det 20. århundre. Bergen: Fagbokforlaget, 2002.

Klovland, Jan Tore. “Monetary Policy and Business Cycles in the Interwar Years: The Scandinavian Experience.” European Review of Economic History 2, no. 2 (1998).

Klovland, Jan Tore. “Monetary Aggregates in Norway, 1819-2003.” Norges Bank: Occasional Papers, no. 1 (2004a): 181-240.

Klovland, Jan Tore. “Historical Exchange Rate Data, 1819-2003.” Norges Bank: Occasional Papers, no. 1 (2004b): 289-328.

Lange, Even, editor. Teknologi i virksomhet. Verkstedsindustri i Norge etter 1840. Oslo: Ad Notam Forlag, 1989.

Nordvik, Helge W. “Finanspolitikken og den offentlige sektors rolle i norsk økonomi i mellomkrigstiden.” Historisk tidsskrift 58, no. 3 (1979): 239-268.

Sejersted, Francis. Demokratisk kapitalisme. Oslo: Universitetsforlaget, 1993.

Søilen, Espen. “Fra frischianisme til keynesianisme? En studie av norsk økonomisk politikk i lys av økonomisk teori, 1945-1980.” Ph.D. dissertation. Bergen: Norwegian School of Economics and Business Administration, 1998.

Citation: Grytten, Ola. “The Economic History of Norway”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/the-economic-history-of-norway/

Military Spending Patterns in History

Jari Eloranta, Appalachian State University

Introduction

Determining adequate levels of military spending and sustaining the burden of conflicts have been among the key fiscal problems in history. Ancient societies were usually less complicated in terms of the administrative, fiscal, technological, and material demands of warfare. The most pressing problem was frequently the adequate maintenance of supply routes for the armed forces. At the same time, these societies were by and large subsistence societies, so they could not extract massive resources for such ventures, at least until the arrival of the Roman and Byzantine Empires. The emerging nation states of the early modern period were much better equipped to fight wars. On the one hand, the frequent wars, new gunpowder technologies, and the commercialization of warfare forced them to consolidate resources for the needs of warfare. On the other hand, the rulers had to – slowly but surely – give up some of their sovereignty to be able to secure the required credit both domestically and abroad. The Dutch and the British were masters at this, with the latter amassing an empire that spanned the globe on the eve of the First World War.

The early modern expansion of Western European states started to challenge other regimes all over the world, made possible by their military and naval supremacy as well as later on by their industrial prowess. The age of total war in the nineteenth and twentieth centuries finally pushed these states to adopt more and more efficient fiscal systems and enabled some of them to dedicate more than half of their GDP to the war effort during the world wars. Comparatively, even though military spending was regularly the biggest item in the budget for most states before the twentieth century, it still represented only a modest amount of their GDP. The Cold War period again saw high relative spending levels, due to the enduring rivalry between the West and the Communist Bloc. Finally, the collapse of the Soviet Union alleviated some of these tensions and lowered the aggregate military spending in the world. Newer security challenges such as terrorism and various interstate rivalries have again pushed the world towards growing overall military spending.

This article will first elaborate on some of the research trends in studying military spending and the multitude of theories attempting to explain the importance of warfare and military finance in history. This survey will be followed by a chronological sweep, starting with the military spending of the ancient empires and ending with a discussion of the current behavior of states in the post-Cold War international system. By necessity, this chronological review will be selective at best, given the enormity of the time period in question and the complexity of the topic at hand.

Theoretical Approaches

Military spending is a key phenomenon for understanding various aspects of economic history: the cost, funding, and burden of conflicts; the creation of nation states; and, in general, the increased role of government in everyone’s lives, especially since the nineteenth century. Nonetheless, certain characteristics can be distinguished in the efforts to study this complex topic across different disciplines (mainly history, economics, and political science). Historians, especially diplomatic and military historians, have been keen on studying the origins of the two World Wars and certain other massive conflicts. Nonetheless, many of the historical studies on war and societies have analyzed developments at an elusive macro-level, often without much elaboration on the quantitative evidence behind the assumptions about the effects of military spending. For example, Paul Kennedy argued in his famous The Rise and Fall of the Great Powers: Economic Change and Military Conflict from 1500 to 2000 (1989) that military spending by hegemonic states eventually becomes excessive and a burden on the economy, finally leading to economic ruin. This argument has been criticized by many economists and historians, since it seems to lack the proper quantitative sources to support Kennedy’s notion of the interaction between military spending and economic growth.[2] Quite frequently, as in the classic studies by A.J.P. Taylor and many more recent works, historians tend to be more interested in the impact of foreign policy decision-making and alliances, in addition to resolving the issue of “blame,” on the road towards major conflicts[3], rather than in how reliable quantitative evidence can be mustered to support or disprove the key arguments. Economic historians, in turn, have not been particularly interested in the long-term economic impacts of military spending. Usually their interest has centered on the economics of global conflicts — of which a good example of recent work combining the theoretical aspects of economics with historical case studies is The Economics of World War II, a compilation edited by Mark Harrison — as well as the immediate short-term economic impacts of wartime mobilization.[4]

The study of defense economics and military spending patterns as such is related to the immense expansion of military budgets and military establishments in the Cold War era. It involves the application of the methods and tools of economics to the study of issues arising from such a huge expansion. At least three aspects of defense economics set it apart from other fields of economics: 1) the actors (both private and public, for example in contracting); 2) the theoretical challenges introduced by the interaction of different institutional and organizational arrangements, both in budgeting and in allocation procedures; and 3) the nature of military spending as a tool for destruction as well as for providing security.[5] One of the shortcomings in the study of defense economics has been, at least so far, the lack of interest in periods before the Second World War.[6] For example, how much has the overall military burden (military expenditures as a percentage of GDP) of nation states changed over the last couple of centuries? Or, how big a financial burden did the Thirty Years War (1618-1648) impose on the participating Great Powers?

A “typical” defense economist (see especially Sandler and Hartley (1995)) would attempt, based on public good theories, to model and explain the military spending behavior (essentially the demand for military spending) of states with the following base equation:

ME_it = f(PRICE_it, INCOME_it, SPILLINS_it, THREATS_it, STRATEGY_it)     (1)

In Equation 1, ME represents military expenditures by state i in year t, PRICE the price of military goods (affected by technological changes as well), INCOME most commonly the real GDP of the state in question, SPILLINS the impact of friendly states’ military spending (for example in an alliance), THREATS the impact of hostile states’ or alliances’ military expenditures, and STRATEGY the constraints imposed by changes in the overall strategic parameters of a nation. Most commonly, a higher price for military goods lowers military spending; higher income tends to increase ME (like during the industrial revolutions); alliances often lower ME due to the free riding tendencies of most states; threats usually increase military spending (and sometimes spur on arms races); and changes in the overall defensive strategy of a nation can affect ME in either direction, depending on the strategic framework implemented. While this model may be suitable for the study of, for example, the Cold War period, it fails to capture many other important explanatory factors, such as the influence of various organizations and interest groups in the budgetary processes as well as the impact of elections and policy-makers in general. For example, interest groups can get policy-makers to ignore price increases (on, for instance, domestic military goods), and election years usually alter (or focus) the behavior of elected officials.
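The sign pattern described above can be illustrated with a minimal log-linear sketch of Equation 1. This is not a model from the defense economics literature: the function name, coefficient values, and functional form are all hypothetical, with only the signs of the coefficients following the discussion in the text.

```python
import math

def military_expenditure(price, income, spillins, threats,
                         strategy_shift=0.0,
                         b0=0.0, b_price=-0.5, b_income=1.0,
                         b_spillins=-0.3, b_threats=0.4):
    """Hypothetical log-linear demand for military expenditure (Equation 1).

    Coefficient signs follow the text: a higher price for military goods
    and allied spill-ins lower demand (free riding); higher income and
    threats raise it; strategy_shift can move demand in either direction.
    All magnitudes are invented for illustration.
    """
    log_me = (b0
              + b_price * math.log(price)
              + b_income * math.log(income)
              + b_spillins * math.log(spillins)
              + b_threats * math.log(threats)
              + strategy_shift)
    return math.exp(log_me)

base = military_expenditure(price=1.0, income=100.0, spillins=10.0, threats=10.0)
# Higher threats raise demand; a higher price for military goods lowers it.
assert military_expenditure(1.0, 100.0, 10.0, 20.0) > base
assert military_expenditure(2.0, 100.0, 10.0, 10.0) < base
```

A log-linear form is chosen only because it makes each coefficient an elasticity, so the qualitative statements in the text (direction of each effect) map directly onto coefficient signs.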

Within the peace sciences, in turn, a broader yet overlapping school of thought compared to defense economics, the focus of research has been on finding the causal factors behind the most destructive conflicts. One of the most significant of such interdisciplinary efforts has been the Correlates of War (COW) project, which started in the spring of 1963. This project and the researchers loosely associated with it, not to mention its importance in producing comparative statistics, have had a big impact on the study of conflicts.[7] As Daniel S. Geller and J. David Singer have noted, the number of territorial states in the global system has ranged from fewer than 30 after the Napoleonic Wars to nearly 200 at the end of the twentieth century, and it is essential to test the various indicators collected by peace scientists against the historical record until theoretical premises can be confirmed or rejected.[8] In fact, a typical feature of most studies of this type is that they focus on finding the sets of variables that might predict major wars and other conflicts, in a way similar to the historians’ origins-of-wars approach, whereas studies investigating the military spending behavior of monads (single states), dyads (pairs of states), or systems in particular are quite rare. Moreover, even though some cycle theorists and conflict scientists have been interested in the formation of modern nation states and the respective system of states since 1648, they have not expressed any real interest in pre-modern societies and warfare.[9]

Nevertheless, these contributions have had a lot to offer to the study of the long-run dynamics of military spending, state formation, and warfare. According to Charles Tilly, there are four broad approaches to the study of the relationships between war and power: 1) the statist; 2) the geopolitical; 3) the world system; and 4) the mode of production approach. The statist approach presents war, international relations, and state formation chiefly as a consequence of events within particular states. The geopolitical analysis is centered on the argument that state formation responds strongly to the current system of relations among states. The world system approach, à la Wallerstein, is mainly rooted in the idea that the different paths of state formation are influenced by the division of resources in the world system. In the mode of production framework, the way that production is organized determines the outcome of state formation. None of these approaches, as Tilly has pointed out, is adequate in its purest form to explain state formation, international power relations, and economic growth as a whole.[10] Tilly himself maintains that coercion (rulers’ monopoly of violence and their ability to wield coercion externally) and capital (the means of financing warfare) were the key elements in the European ascendancy to world domination in the early modern era. Warfare, state formation, and technological supremacy were all interrelated fundamentals of the same process.[11]

How can these theories of state behavior at the system level be linked to the analysis of military spending? According to George Modelski and William R. Thompson, proponents of Kondratieff waves and long cycles as explanatory forces in the development of world leadership patterns, the key to a state’s ascendancy to prominence in such models is naval power; i.e., a state’s ability to vie for world political leadership, colonization, and domination in trade.[12] One of the less explored aspects in most studies of hegemonic patterns is the military expenditure component in the competition between states for military and economic leadership in the system. It is often argued, for example, that uneven economic growth levels cause nations to compete for economic and military prowess. The leader nation(s) thus has to dedicate increasing resources to armaments in order to maintain its position, while the other states, the so-called followers, can benefit from greater investments in other areas of economic activity. The follower states therefore act as free-riders in the international system stabilized by the hegemon. A built-in assumption in this hypothesized development pattern is that military spending eventually becomes harmful for economic development, a notion that has often been challenged on the basis of empirical studies.[13]

Overall, the assertion arising from such a framework is that economic development and military spending are closely interdependent, with military spending being the driving force behind economic cycles. Moreover, based on this development pattern, it has been suggested that a country's poor economic performance is linked to the "wasted" economic resources represented by military expenditures. However, as recent studies have shown, economic development is often more significant in explaining military spending than vice versa. The development of the U.S. economy since the Second World War certainly does not display the type of hegemonic decline predicted by Kennedy.[14] The aforementioned development pattern can be paraphrased as the so-called war chest hypothesis. As some of the hegemonic theorists reviewed above suggest, economic prosperity might be a necessary prerequisite for war and expansion. Thus, as Brian M. Pollins and Randall L. Schweller have indicated, economic growth would induce rising government expenditures, which in turn would enable higher military spending — therefore military expenditures would be "caused" by economic growth at a certain time lag.[15] In order for military spending to hinder economic performance, it would have to crowd out all other areas of the economy, as is often the case during wartime.
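The lag structure implied by the war chest hypothesis is, in principle, a testable statement: if military spending is "caused" by growth at a lag, the correlation between the two series should peak at that lag. The following sketch illustrates the idea on purely synthetic data; the series, coefficients, and lag are all invented for illustration and are not drawn from any study cited here.

```python
# Illustration of the war chest hypothesis's lag structure on synthetic data.
# A "spending" series is constructed to echo a "growth" series two periods
# later; a simple lagged-correlation scan should then recover that lag.
import random

random.seed(42)

n, true_lag = 200, 2
growth = [random.gauss(0, 1) for _ in range(n)]
spending = [0.0] * n
for t in range(true_lag, n):
    # Spending responds to growth two periods earlier, plus noise.
    spending[t] = 0.8 * growth[t - true_lag] + random.gauss(0, 0.2)

def corr(x, y):
    """Pearson correlation of two equal-length series."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def lagged_corr(k):
    """Correlation between growth and spending k periods later."""
    return corr(growth[: n - k], spending[k:])

best_lag = max(range(6), key=lagged_corr)
print(best_lag, round(lagged_corr(best_lag), 2))
```

With the strong synthetic signal used here, the scan recovers the built-in lag of two periods; on real fiscal data the same exercise is far noisier, which is one reason the direction of causality remains contested.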

There have been relatively few credible attempts to model the military (or budgetary) spending behavior of states based on their long-run regime characteristics. Here I am going to focus on three in particular: 1) the Webber-Wildavsky model of budgeting; 2) the Richard Bonney model of fiscal systems; and 3) the Niall Ferguson model of interaction between public debts and forms of government. Carolyn Webber and Aaron Wildavsky maintain essentially that each political culture generates its characteristic budgetary objectives; namely, productivity in market regimes, redistribution in sects (specific groups dissenting from an established authority), and more complex procedures in hierarchical regimes.[16] Thus, according to them the respective budgetary consequences arising from the chosen regime can be divided into four categories: despotism, state capitalism, American individualism, and social democracy. Each of these in turn has implications for the respective regime's revenue and spending needs.

This model, however, is essentially a static one. It does not provide clues as to why nations' behavior may change over time. Richard Bonney has addressed this problem in his writings, mainly on the early modern states.[17] He has emphasized that states' revenue and tax collection systems, the backbone of any militarily successful nation state, have evolved over time. For example, in most European states the government became the arbiter of disputes and the defender of certain basic rights in society by the early modern period. During the Middle Ages, the European fiscal systems were relatively backward and autarchic, with mostly predatory rulers (or roving bandits, to use Mancur Olson's term).[18] In Bonney's model this would be the stage of the so-called tribute state. Next in the evolution came, respectively, the domain state (with stationary bandits providing some public goods), the tax state (more reliance on credit and revenue collection), and finally the fiscal state (embodying more complex fiscal and political structures). A superpower like Great Britain in the nineteenth century, in fact, had to be a fiscal state to be able to dominate the world, due to all the burdens that went with an empire.[19]

While both of the models mentioned above have provided important clues as to how and why nations have prepared fiscally for wars, the most complete account of this process (along with Charles Tilly's framework covered earlier) has been provided by Niall Ferguson.[20] He has maintained that wars have shaped all the most relevant institutions of modern economic life: tax-collecting bureaucracies, central banks, bond markets, and stock exchanges. Moreover, he argues that the invention of public debt instruments has gone hand-in-hand with more democratic forms of government and military supremacy – hence, the so-called Dutch or British model. These types of regimes have also been the most efficient economically, which has in turn reinforced the success of this fiscal regime model. In fact, military expenditures may have been the principal cause of fiscal innovation for most of history. Ferguson's model highlights the importance, for a state's survival among its challengers, of adopting the right types of institutions and technology, along with a sufficient helping of external ambitions. All in all, I would summarize the required model, combining elements from the various frameworks, as being evolutionary, with regimes at different stages having different priorities and burdens imposed by military spending, depending also on their position in the international system. A successful ascendancy to a leadership position required higher expenditures, a substantial navy, fiscal and political structures conducive to increasing the availability of credit, and recurring participation in international conflicts.

Military Spending and the Early Empires

For most societies since the ancient river valley civilizations, military exertions and the means by which to finance them have been the crucial problems of governance. A centralized ability to plan and control spending was lacking in most governments until the nineteenth century. In fact, among the ancient civilizations, financial administration and the government were inseparable. Governments were organized on a hierarchical basis, with the rulers having supreme control over military decisions. Taxes were often paid in kind to support the rulers, making it more difficult to monitor and utilize the revenues for military campaigns over great distances. For these agricultural economies, victory in war usually yielded lavish tribute to supplement royal wealth, helped to maintain the army, and aided in controlling the population. Thus, support of large military forces and expeditions, contingent on food and supplies, was the ancient government's principal expense and problem. Dependence on distant, often external suppliers of food limited the expansion of these empires. Fiscal management in turn was usually cumbersome and costly, and all of the ancient governments were internally unstable and vulnerable to external incursions.[21]

Soldiers, however, often supplemented their supplies by looting enemy territory. The optimal size of an ancient empire was determined by the efficiency of tax collection and allocation, resource extraction, and its transportation system. Moreover, the supply of metal and weaponry, though important, was seldom the only critical variable for the military success of an ancient empire. There were, however, important turning points in this respect, such as the introduction of bronze weaponry, beginning in Mesopotamia about 3500 B.C. Likewise, the introduction of iron weaponry about 1200 B.C. in the eastern parts of Asia Minor (although the subsequent spread of this technology was fairly slow, gathering momentum only from about 1000 B.C. onwards) and the advent of chariot warfare opened new phases in warfare, due to the superior efficiency and cheapness of iron armaments as well as the hierarchical structures needed to field chariots.[22]

The river valley civilizations, nonetheless, paled in comparison with the military might and economy of one of the most efficient military behemoths of all time: the Roman Empire. Military spending was the largest item of public spending throughout Roman history. All Roman governments, like Athens in the time of Pericles, had problems in gathering enough revenue. Therefore, for example, in the third century A.D. Roman citizenship was extended to all residents of the empire in order to raise revenue, as only citizens paid taxes. There were also other constraints on spending, such as technological, geographic, and other productivity concerns. Direct taxation was, however, regarded as a dishonor, to be resorted to only in times of crisis. Thus, taxation during most of the empire remained moderate, supplemented by extraordinary taxes (akin to the so-called liturgies of ancient Athens) during such episodes. During the first two centuries of empire, the Roman army had about 150,000 to 160,000 legionnaires, in addition to 150,000 other troops, and soldiers' wages began to increase rapidly to ensure the army's loyalty; in both republican and imperial Rome military wages accounted for more than half of revenue. The demands of the empire became more and more extensive during the third and fourth centuries A.D., as the internal decline of the empire became more evident and Rome's external challengers became stronger. The limited use of direct taxes and the commonness of tax evasion could not meet the fiscal demands of the crumbling empire. Armed forces were in turn used to maintain internal order. Societal unrest, inflation, and external incursions finally brought the Roman Empire, at least in the West, to an end.[23]

Warfare and the Rise of European Supremacy

During the Middle Ages, following the decentralized era of barbarian invasions, a varied system of European feudalism emerged, in which feudal lords often provided protection for communities in return for service or payment. From the Merovingian era onward, soldiers became increasingly specialized professionals, with expensive horses and equipment. By the Carolingian era, military service had become largely the prerogative of an aristocratic elite. Prior to 1000 A.D., the command system was preeminent in mobilizing human and material resources for large-scale military enterprises, mostly on a contingency basis.[24] The isolated European societies, with the exception of the Byzantine Empire, paled in comparison with the splendor and accomplishments of the empires in China and the Muslim world. Also, in terms of science and inventions the Europeans were no match for these empires until the early modern period. Moreover, it was not until the twelfth century and the Crusades that the feudal kings needed to supplement their ordinary revenues to finance large armies. Internal discontent in the Middle Ages often led to an expansionary drive, as the spoils of war helped calm the elite — for example, the French kings had to establish firm taxing power in the fourteenth century out of military necessity. The political ambitions of medieval kings, however, still relied on revenue strategies that catered to short-term deficits, which made long-term credit and prolonged military campaigns difficult.[25]

Innovations in the ways of waging war and in technology invented by the Chinese and Islamic societies reached Europe with a delay, such as the use of pikes in the fourteenth century and the gunpowder revolution of the fifteenth century, which in turn permitted armies to attack and defend larger territories. This also made possible a commercialization of warfare in Europe in the fourteenth and fifteenth centuries, as feudal armies had to give way to professional mercenary forces. Accordingly, medieval states had to increase their taxation levels and improve tax collection to support the growing costs of warfare and the maintenance of larger standing armies. Equally, the age of commercialized warfare was accompanied by the rising importance of sea power as European states began to build their overseas empires (as opposed to, for example, the isolationist turn of Ming China in the fifteenth century). States such as Portugal, the Netherlands, and England, respectively, became the “systemic leaders” due to their extensive fleets and commercial expansion in the period before the Napoleonic Wars. These states were also economically cohesive, thanks to internal waterways and small geographic size. The early winners in the fight for world leadership, such as England, were greatly aided by the availability of inexpensive credit, enabling them to mobilize limited resources effectively to meet military expenses. Their rise was of course preceded by the naval exploration and empire-building of many successful European states, especially Spain, both in Europe and around the globe.[26]

This shift from command to commercialized warfare, from a short-term to a more permanent system of military management, can be seen in the English case. In the period 1535-1547, the English defense share (military expenditures as a percentage of central government expenditures) averaged 29.4 percent, with large fluctuations from year to year. In the period 1685-1813, by contrast, the mean English defense share was 74.6 percent, never dropping below 55 percent. The newly-emerging nation states began to develop more centralized and productive revenue-expenditure systems, the goal of which was to enhance the state's power, especially in the absolutist era. This also reflected the growing cost and scale of warfare: during the Thirty Years' War, between 100,000 and 200,000 men fought under arms, whereas half a century later 450,000 to 500,000 men fought on both sides in the War of the Spanish Succession. The numbers notwithstanding, the Thirty Years' War was a conflict directly comparable to the world wars in terms of destruction. For example, Charles Tilly has estimated the battle deaths to have exceeded two million. Henry Kamen, in turn, has emphasized the mass-scale destruction and economic dislocation the war caused in the German lands, especially among the civilian population.[27]
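The two ratios used throughout this essay, the defense share just defined and the military burden (military spending as a percentage of GDP, used in later sections), are simple quotients. A minimal sketch, with all figures invented purely for illustration:

```python
# The two standard metrics of this essay, computed from hypothetical figures.

def defense_share(military_exp: float, govt_exp: float) -> float:
    """Military spending as a percentage of central government spending."""
    return 100.0 * military_exp / govt_exp

def military_burden(military_exp: float, gdp: float) -> float:
    """Military spending as a percentage of GDP."""
    return 100.0 * military_exp / gdp

# Invented example: a state spending 75 of its 100-unit budget on the
# military, in an economy with a GDP of 2,000 units.
print(defense_share(75.0, 100.0))     # 75.0
print(military_burden(75.0, 2000.0))  # 3.75
```

The distinction matters throughout: a defense share near 75 percent says nothing by itself about the burden on the economy, which depends on the size of the state budget relative to GDP.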

With the increasing scale of armed conflicts in the seventeenth century, the participants became more and more dependent on access to long-term credit, because whichever government ran out of money had to surrender first. For example, even though the causes of Spain's supposed decline in the seventeenth century are still disputed, it can be said that the lack of royal credit and the poor management of government finances resulted in heavy deficit spending as military exertions followed one after another. The Spanish Crown therefore defaulted repeatedly during the sixteenth and seventeenth centuries, and these defaults on several occasions forced Spain to seek an end to its military activities. Spain nonetheless remained one of the most important Great Powers of the period, and was able to keep its massive empire mostly intact until the nineteenth century.[28]

What about other country cases – can they shed further light on the importance of military spending and warfare in early modern economic and political development? A key question for France, for example, was the financing of its military exertions. According to Richard Bonney, the cost of France's armed forces in its era of “national greatness” was stupendous: expenditure on the army in the period 1708-1714 averaged 218 million livres, whereas during the Dutch War of 1672-1678 it had averaged only 99 million in nominal terms. This was due both to growth in the size of the army and the navy and to the decline in the purchasing power of the French livre. The overall burden of war, however, remained roughly similar in this period: war expenditures accounted for roughly 57 percent of total expenditure in 1683 and about 52 percent in 1714. Moreover, as for all the main European monarchies, it was the expenditure on war that brought fiscal change in France, especially after the Napoleonic Wars. Between 1815 and 1913, French public expenditure increased by 444 percent, and the emerging fiscal state was consolidated. This also embodied a change in the French credit market structure.[29]

A success story, in a way a predecessor to the British model, was the Dutch state in this period. As Marjolein 't Hart has noted, domestic investors were instrumental in supporting their newborn state, as the state was able to borrow the money it needed from the credit markets, thus providing stability in public finances even during crises. This financial regime lasted until the end of the eighteenth century. Here again we can observe the intermarriage of military spending and the availability of credit, essentially the basic logic of the Ferguson model. One of the key features of the Dutch success in the seventeenth century was their ability to pay their soldiers relatively promptly. The Dutch case also underlines the primacy of military spending in state budgets and the burden it involved for the early modern states. As we can see in Figure 1, the defense share of the Dutch region of Groningen remained consistently around 80 to 90 percent until the mid-seventeenth century, and then declined, at least temporarily, during periods of peace.[30]

Figure 1

Groningen’s Defense Share (Military Spending as a Percentage of Central Government Expenditures), 1596-1795

Source: L. van der Ent, et al. European State Finance Database. ESFD, 1999 [cited 1.2.2001]. Available from: http://www.le.ac.uk/hi/bon/ESFDB/frameset.html.

In the eighteenth century, with rapid population growth in Europe, armies also grew in size, especially the Russian army. In Western Europe, a mounting intensity of warfare beginning with the Seven Years War (1756-1763) finally culminated in the French Revolution and Napoleon's conquests and defeat (1792-1815). The new style of warfare brought on by the Revolutionary Wars, with conscription and war of attrition as new elements, can be seen in the growth of army sizes. For example, the French army grew more than 3.5-fold from 1789 to 1793, to 650,000 men. Similarly, the British army grew from 57,000 men in 1783 to 255,000 in 1816. The Russian army reached the massive size of 800,000 men in 1816, and Russia kept its armed forces at similar levels through the nineteenth century. However, Great Power wars declined in number (see Table 1), as did their average duration. Yet some of the conflicts of the industrial era became massive and deadly events, drawing most parts of the world into essentially European quarrels.

Table 1

Wars Involving the Great Powers

Century   Number of wars   Average duration of wars (years)   Proportion of years war was underway (%)
16th      34               1.6                                95
17th      29               1.7                                94
18th      17               1.0                                78
19th      20               0.4                                40
20th      15               0.4                                53

Source: Charles Tilly. Coercion, Capital, and European States, AD 990-1990. Cambridge, Mass: Basil Blackwell, 1990.

The Age of Total War and Industrial Revolutions

With the new kind of mobilization, which became more or less a permanent state of affairs in the nineteenth century, centralized governments required new methods of finance. The nineteenth century brought reforms such as centralized public administration, reliance on specific, balanced budgets, innovations in public banking and public debt management, and reliance on direct taxation for revenue. For the first time in history, these reforms were also supported by the spread of industrialization and rising productivity. The nineteenth century was also the century of the industrialization of war, beginning at mid-century and quickly gathering speed. By the 1880s, military engineering began to forge ahead of even civil engineering. A revolution in transportation, with steamships and railroads, made massive, long-distance mobilizations possible, as shown by the Prussian campaign against the French in 1870-1871.[31]

The demands posed by these changes on state finances and economies differed. In the French case, the defense share stayed roughly the same, a little over 30 percent, throughout the nineteenth and early twentieth centuries, whereas the military burden increased by about one percentage point, to 4.2 percent. In the British case, the mean defense share declined by two percentage points to 36.7 percent in 1870-1913, compared to the early nineteenth century. The strength of the British economy, however, meant that the military burden actually declined slightly, to 2.6 percent, a figure similar to that incurred by Germany in the same period. For most countries the period leading up to the First World War meant higher military burdens than that, such as Japan's 6.1 percent. The United States, however, the new economic leader by the closing decades of the century, spent a meager 0.7 percent of its GDP on the military on average, a trend that continued throughout the interwar period as well (a military burden of 1.2 percent). As seen in Figure 2, the military burdens incurred by the Great Powers also varied in timing, suggesting different reactions to external and internal pressures. Nonetheless, the aggregate, systemic real military spending of the period showed a clear upward trend. Moreover, the impact of the Russo-Japanese War was immense for the total (real) spending of the sixteen states represented in the figure below, since both countries were Great Powers and Russian military expenditures alone were massive. The unexpected defeat of the Russians unleashed, along with the arrival of dreadnoughts, an intensive arms race.[32]

Figure 2

Military Burdens of Four Great Powers and Aggregate Real Military Expenditure (ME) for Sixteen Countries on the Aggregate, 1870-1913

Sources: See Jari Eloranta, “Struggle for Leadership? Military Spending Behavior of the Great Powers, 1870-1913,” Appalachian State University, Department of History, unpublished manuscript, 2005b, which also describes the constructed system of states and the methods used to convert the expenditures into a common currency (using exchange rates and purchasing power parities), always a controversial exercise.
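As the source note indicates, aggregating many countries' spending requires conversion into a common currency, via market exchange rates or purchasing power parities, and the choice affects the totals. A minimal sketch of that conversion step, with all country labels, amounts, and rates invented:

```python
# Sketch of converting national military spending into a common unit.
# Exchange-rate and PPP conversion can yield different aggregates, which
# is part of why the source calls this a controversial exercise.
# All figures are invented for illustration.

spending_local = {"A": 500.0, "B": 1200.0}   # local currency units
exchange_rate = {"A": 5.0, "B": 4.0}         # local units per dollar
ppp_rate = {"A": 2.5, "B": 4.0}              # local units per PPP dollar

def aggregate(spending: dict, rates: dict) -> float:
    """Total spending converted to dollars at the given conversion rates."""
    return sum(amount / rates[country] for country, amount in spending.items())

print(aggregate(spending_local, exchange_rate))  # 400.0
print(aggregate(spending_local, ppp_rate))       # 500.0
```

For a country whose PPP rate lies below its market exchange rate (country A here), PPP conversion raises its measured share of the aggregate, so the two methods can tell different stories about relative military effort.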

With the beginning of the First World War in 1914, this military potential was unleashed in Europe with horrible consequences, as most of the nations anticipated a quick victory but ended up fighting a war of attrition in the trenches. Mankind had finally, even officially, entered the age of total war.[33] It has been estimated that about nine million combatants and twelve million civilians died during the so-called Great War, with property damage concentrated especially in France, Belgium, and Poland. According to Rondo Cameron and Larry Neal, the direct financial losses arising from the Great War were about 180-230 billion 1914 U.S. dollars, whereas the indirect losses of property and capital rose to over 150 billion dollars.[34] According to the most recent estimates, the economic losses arising from the war could be as high as 692 billion 1938 U.S. dollars.[35] But how large a share of their resources did the belligerents have to mobilize, and what were the human costs of the war?

As Table 2 displays, the French military burden was fairly high, in addition to the size of its military forces and the number of battle deaths. Therefore, France mobilized the most resources in the war and, subsequently, suffered the greatest losses. The mobilization by Germany was also quite efficient, because almost the entire state budget was used to support the war effort. On the other hand, the United States barely participated in the war, and its personnel losses in the conflict were relatively small, as were its economic burdens. In comparison, the massive population reserves of Russia enabled fairly high personnel losses, quite similar to the Soviet experience in the Second World War.

Table 2

Resource Mobilization by the Great Powers in the First World War

Country (years in war)   Avg. military burden (% of GDP)   Avg. defense share of govt. spending (%)   Military personnel (% of population)   Battle deaths (% of population)
France (1914-1918)       43                                77                                         11                                     3.5
Germany (1914-1918)      ..                                91                                         7.3                                    2.7
Russia (1914-1917)       ..                                ..                                         4.3                                    1.4
UK (1914-1918)           22                                49                                         7.3                                    2.0
US (1917-1918)           7                                 47                                         1.7                                    0.1

Sources: Historical Statistics of the United States, Colonial Times to 1970, Washington, DC: U.S. Bureau of Census, 1975; Louis Fontvieille, Evolution et croissance de l’Etat Français: 1815-1969, Economies et sociétés, Paris: Institut de Sciences Mathematiques et Economiques Appliquees, 1976; B. R. Mitchell, International Historical Statistics: Europe, 1750-1993, fourth edition, Basingstoke: Macmillan Academic and Professional, 1998a; E. V. Morgan, Studies in British Financial Policy, 1914-1925, London: Macmillan, 1952; J. David Singer and Melvin Small, National Material Capabilities Data, 1816-1985, Ann Arbor, MI: Inter-university Consortium for Political and Social Research, 1993. See also Jari Eloranta, “Sotien taakka: Makrotalouden ongelmat ja julkisen talouden kipupisteet maailmansotien jälkeen (The Burden of Wars: The Problems of Macro Economy and Public Sector after the World Wars),” in Kun sota on ohi, edited by Petri Karonen and Kerttu Tarjamo (forthcoming), 2005a.

In the interwar period, pre-existing tendencies to continue social programs and support new bureaucracies made it difficult for the participants to cut their public expenditure, leading to a displacement of government spending to a slightly higher level for many countries. Public spending, particularly in the 1920s, was in turn very static, plagued by budgetary immobility and standoffs, especially in Europe. This meant that although defense shares dropped noticeably in many countries, except in the authoritarian regimes, the respective military burdens stayed at similar levels or even increased — for example, the French military burden rose to a mean level of 7.2 percent in this period. In Great Britain also, the mean defense share dropped to 18.0 percent, although the mean military burden actually increased compared to the pre-war period, despite the military expenditure cuts and the “Ten-Year Rule” of the 1920s. For these countries, the mid-1930s marked the beginning of intense rearmament, whereas some of the authoritarian regimes had begun earlier in the decade. Germany under Hitler increased its military burden from 1.6 percent in 1933 to 18.9 percent in 1938, a rearmament program combining creative financing with promises of both guns and butter for the Germans. Mussolini was not quite as successful in his efforts to realize a new Roman Empire, with a military burden fluctuating between four and five percent in the 1930s (5.0 percent in 1938). The Japanese rearmament drive was perhaps the most impressive, with a military burden as high as 22.7 percent and a defense share of over 50 percent in 1938. For many countries, such as France and Russia, the rapid pace of technological change in the 1930s rendered many of the earlier armaments obsolete only two or three years later.[36]

Figure 3
Military Burdens of Denmark, Finland, France, and the UK, 1920-1938

Source: Jari Eloranta, “External Security by Domestic Choices: Military Spending as an Impure Public Good among Eleven European States, 1920-1938,” Dissertation, European University Institute, 2002.

There were differences between the democracies as well, as seen in Figure 3. Finland's behavior was similar to that of the UK and France, i.e., it belonged to the so-called high-spending group among European democracies, as did most East European states. Denmark was among the low-spending group, perhaps due to the futility of trying to defend its borders amidst probable conflicts involving the giants to its south, France and Germany. Overall, the democracies maintained fairly steady military burdens throughout the period. Their rearmament was, however, much slower than the effort amassed by most autocracies, as is amply displayed in Figure 4.

Figure 4
Military Burdens of Germany, Italy, Japan, and Russia/USSR, 1920-1938

Sources: Eloranta (2002), see especially appendices for the data sources. There are severe limitations and debates related to, for example, the German (see e.g. Werner Abelshauser, “Germany: Guns, Butter, and Economic Miracles,” in The Economics of World War II: Six Great Powers in International Comparison, edited by Mark Harrison, 122-176, Cambridge: Cambridge University Press, 2000) and the Soviet data (see especially R. W. Davies, “Soviet Military Expenditure and the Armaments Industry, 1929-33: A Reconsideration,” Europe-Asia Studies 45, no. 4 (1993): 577-608, as well as R. W. Davies and Mark Harrison. “The Soviet Military-Economic Effort under the Second Five-Year Plan, 1933-1937,” Europe-Asia Studies 49, no. 3 (1997): 369-406).

In the ensuing conflict, the Second World War, the initial phase from 1939 to early 1942 favored the Axis in terms of strategic and economic potential. After that, the war of attrition, with the United States and the USSR joining the Allies, turned the tide in favor of the Allies. For example, in 1943 the Allied total GDP was 2,223 billion international dollars (in 1990 prices), whereas the Axis accounted for only 895 billion. The impact of the Second World War was also much more profound for the participants' economies. For example, Great Britain at the height of the First World War incurred a military burden of about 27 percent, whereas throughout the Second World War its military burden consistently exceeded 50 percent.[37]

Table 3

Resource Mobilization by the Great Powers in the Second World War

Country (years in war)      Avg. military burden (% of GDP)   Avg. defense share of govt. spending (%)   Military personnel (% of population)   Battle deaths (% of population)
France (1939-1945)          ..                                ..                                         4.2                                    0.5
Germany (1939-1945)         50                                ..                                         6.4                                    4.4
Soviet Union (1939-1945)    44                                48                                         3.3                                    4.4
UK (1939-1945)              45                                69                                         6.2                                    0.9
USA (1941-1945)             32                                71                                         5.5                                    0.3

Sources: Singer and Small (1993); Stephen Broadberry and Peter Howlett, “The United Kingdom: ‘Victory at All Costs’,” in The Economics of World War II: Six Great Powers in International Comparisons, edited by Mark Harrison (Cambridge University Press, 1998); Mark Harrison. “The Economics of World War II: An Overview,” in The Economics of World War II: Six Great Powers in International Comparisons, edited by Mark Harrison (Cambridge: Cambridge University Press, 1998a); Mark Harrison, “The Soviet Union: The Defeated Victor,” in The Economics of World War II: Six Great Powers in International Comparison, edited by Mark Harrison, 268-301 (Cambridge: Cambridge University Press, 2000); Mitchell (1998a); B.R. Mitchell. International Historical Statistics: The Americas, 1750-1993, fourth edition, London: Macmillan, 1998b. The Soviet defense share only applies to years 1940-1945, whereas the military burden applies to 1940-1944. These two measures are not directly comparable, since the former is measured in current prices and the latter in constant prices.

As Table 3 shows, the greatest military burden was most likely incurred by Germany, even though the other Great Powers experienced similar levels. Only the massive economic resources of the United States made its lower military burden possible. The UK and the United States also mobilized their central/federal government expenditures efficiently for the war effort. In this sense the Soviet Union fared the worst, and in addition its share of military personnel in the population was relatively small compared to the other Great Powers. On the other hand, the economic and demographic resources that the Soviet Union possessed ultimately ensured its survival during the German onslaught. On the aggregate, the largest personnel losses were incurred by Germany and the Soviet Union, in fact many times those of the other Great Powers.[38] The Second World War was even more destructive and lethal than the First, and the aggregate economic losses from the war exceeded 4,000 billion 1938 U.S. dollars. After the war, European industrial and agricultural production amounted to only half of the 1938 total.[39]

The Atomic Age and Beyond

The Second World War also brought a new role for the United States in world politics: a military-political leadership role warranted by the dominant economic status it had established over fifty years earlier. With the establishment of NATO in 1949, a formidable defense alliance was formed among the capitalist countries. The USSR, risen to new prominence through the war, established the Warsaw Pact in 1955 to counter these efforts. The war also changed the public spending and taxation levels of most Western nations. The introduction of welfare states raised the OECD government expenditure average from just under 30 percent of GDP in the 1950s to over 40 percent in the 1970s. Military spending levels followed suit and peaked during the early Cold War. The American military burden rose above 10 percent in 1952-1954, and the United States has retained a high post-war mean of 6.7 percent. Great Britain and France followed the American example after the Korean War.[40]

The Cold War embodied a relentless armaments race between the two superpowers, with nuclear weapons now as the main investment item (see Figure 5). The USSR, according to some figures, spent about 60 to 70 percent of the American level in the 1950s, and actually outspent the United States in the 1970s. Nonetheless, the United States maintained a massive advantage over the Soviets in terms of nuclear warheads. Figures collected by SIPRI (Stockholm International Peace Research Institute) suggest an enduring, though dwindling, lead for the US even in the 1970s. On the other hand, the same figures point to a 2-to-1 lead in favor of the NATO countries over the Warsaw Pact members in the 1970s and early 1980s. Part of this armaments race was driven by technological advances that raised the cost per soldier: it has been estimated that such advances produced a mean annual increase in real costs of around 5.5 percent in the post-war period. Nonetheless, spending on personnel and their maintenance has remained the biggest budget item for most countries.
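The compounding effect of that estimated 5.5 percent mean annual increase in real cost per soldier is easy to understate. A minimal sketch of the arithmetic follows; the 1945-1990 window chosen here is an assumption for illustration, not a figure from the sources cited above.

```python
# Hypothetical illustration: compound the estimated ~5.5 percent mean annual
# increase in real cost per soldier over an assumed 45-year span (1945-1990).

def cost_multiplier(annual_rate: float, years: int) -> float:
    """Cumulative growth factor for a constant annual rate of increase."""
    return (1 + annual_rate) ** years

if __name__ == "__main__":
    m = cost_multiplier(0.055, 45)
    # Roughly an elevenfold rise in the real cost of fielding one soldier.
    print(f"1945-1990 real-cost multiplier: {m:.1f}x")
```

Even at a seemingly modest annual rate, the cost of equipping a soldier rises by an order of magnitude within two generations, which helps explain why personnel and maintenance dominate most defense budgets.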

Figure 5

Military Burdens (=MILBUR) of the United States and the United Kingdom, and the Soviet Military Spending as a Percentage of the US Military Spending (ME), 1816-1993

Sources: References to the economic data can be found in Jari Eloranta, “National Defense,” in The Oxford Encyclopedia of Economic History, edited by Joel Mokyr, 30-33 (Oxford: Oxford University Press, 2003b). ME (Military Expenditure) data from Singer and Small (1993), supplemented with the SIPRI (available from: http://www.sipri.org/) data for 1985-1993. Details are available from the author upon request. Exchange rates from Global Financial Data (Online databank), 2003. Available from http://www.globalfindata.com/. The same caveats apply to the underlying currency conversion methods as in Figure 2.

One often-cited outcome of this Cold War arms race is the so-called military-industrial complex (MIC), referring usually to the influence that the military and industry exert on each other’s policies. The more nefarious connotation refers to the unduly large influence that military producers might have, in such a collusive relationship, over the public sector’s acquisitions and over foreign policy in particular. In fact, the origins of this type of interaction can be found further back in history. As Paul Koistinen has emphasized, the First World War was a watershed in business-government relationships, since businessmen were often brought into government to make supply decisions during this total conflict. Most governments, as a matter of fact, needed the expertise of the core business elites during the world wars. In the United States some form of an MIC came into existence before 1940. Similar developments can be seen in other countries before the Second World War, for example in the Soviet Union. The Cold War simply reinforced these tendencies.[41] Findings by, for example, Robert Higgs establish that the financial performance of the leading defense contractors was, on average, much better than that of comparable large corporations during the period 1948-1989. Nonetheless, his findings do not support the normative conclusion that the profits of defense contractors were “too high.”[42]

World spending levels began a slow decline from the 1970s onwards, with the Reagan years being an exception for the US. In 1986 the US military burden was 6.5 percent, whereas by 1999 it was down to 3.0 percent. In France the military burden declined from its post-war peak in the 1950s to a mean level of 3.6 percent during 1977-1999. This was mostly the outcome of the reduction in tensions between the rival blocs and the downfall of the USSR and the communist regimes in Eastern Europe. The USSR was spending almost as much on its armed forces as the United States until the mid-1980s, and the Soviet military burden was still 12.3 percent in 1990. Under the Russian Federation, with a declining GDP, this level dropped rapidly, to 3.2 percent in 1998. Other nations have similarly scaled down their military spending since the late 1980s and the 1990s. For example, German military spending in constant US dollars was over 52 billion in 1991 but declined to less than 40 billion in 1999. In the French case, the decline was from a little over 52 billion in 1991 to below 47 billion in 1999, with the military burden decreasing from 3.6 percent to 2.8 percent.[43]

Overall, according to the SIPRI figures, world military spending fell by about one-third in real terms during 1989-1996, with some fluctuation and even a small increase since then. World military expenditure remains highly concentrated in a few countries, with the 15 major spenders accounting for 80 percent of the world total in 1999. The newest estimates (see e.g. http://www.sipri.org/) put world military expenditures on a growth trend once again, due to new threats such as international terrorism and the conflicts related to it. In absolute terms the United States still dominates world military spending, with a 47 percent share of the world total in 2003, although its spending total appears less dominant when purchasing power parities are utilized. Nonetheless, the United States has entered the third millennium as the world’s only real superpower, a role that it sometimes embraces awkwardly. Whereas the United States was an absent hegemon in the late nineteenth and first half of the twentieth century, it now has to maintain its presence in many parts of the world, sometimes despite objections from the other players in the international system.[44]

Conclusions

Warfare has played a crucial role in the evolution of human societies. Ancient societies were usually less complicated in terms of the administrative, fiscal, technological, and material demands of warfare; the most pressing problem was commonly maintaining adequate supply for the armed forces during prolonged campaigns. This also constrained the size and expansion of the early empires, at least until the introduction of iron weaponry. The Roman Empire, for example, was able to sustain a large, geographically diverse empire over a long period. The disjointed Middle Ages splintered European societies into smaller communities, ruled by so-called roving bandits, at least until the arrival of more organized military forces from the tenth century onwards. At the same time, the empires of China and the Muslim world developed into cradles of civilization in terms of scientific discoveries and military technologies.

The geographic and economic expansion of early modern European states started to challenge other regimes all over the world, made possible in part by their military and naval supremacy as well as their industrial prowess later on. The age of total war and revolutions in the nineteenth and twentieth centuries finally pushed these states to adopt more and more efficient fiscal systems and enabled some of them to dedicate more than half of their GDP to the war effort during the world wars. Even though military spending was regularly the biggest item in the budget for most states before the twentieth century, it still represented only a modest amount of their respective GDP. The Cold War period again saw high relative spending levels, due to the enduring rivalry between the West and the Communist bloc. Finally, the collapse of the Soviet Union alleviated some of these tensions and lowered the aggregate military spending in the world, if only temporarily. Newer security challenges such as terrorism and various interstate rivalries have again pushed the world towards a growth path in terms of overall military spending.

The cost of warfare has increased especially since the early modern period. The adoption of new technologies and massive standing armies, in addition to the increase in the “bang for the buck” (namely, the destructive effect of military investments), has kept military expenditures in a central role vis-à-vis modern fiscal regimes. Although the growth of welfare states in the twentieth century has forced some tradeoffs between “guns and butter,” these spending choices have usually been complementary rather than competing, and the size and spending of governments have increased accordingly. Even though the growth in welfare spending has abated somewhat since the 1980s, according to Peter Lindert welfare states will most likely still experience at least modest expansion in the future. Nor is it likely that military spending will be displaced as a major item in national budgets. Various international threats and the lack of international cooperation will ensure that military spending remains the main contender to social expenditures.[45]


[1] I thank several colleagues for their helpful comments, especially Mark Harrison, Scott Jessee, Mary Valante, Ed Behrend, David Reid, as well as an anonymous referee and EH.Net editor Robert Whaples. The remaining errors and interpretations are solely my responsibility.

[2] See Paul Kennedy, The Rise and Fall of the Great Powers: Economic Change and Military Conflict from 1500 to 2000 (London: Fontana, 1989). Kennedy calls this type of approach, following David Landes, “large history.” On criticism of Kennedy’s “theory,” see especially Todd Sandler and Keith Hartley, The Economics of Defense, ed. Mark Perlman, Cambridge Surveys of Economic Literature (Cambridge: Cambridge University Press, 1995) and the studies listed in it. Other examples of long-run explanations can be found in, e.g., Maurice Pearton, The Knowledgeable State: Diplomacy, War, and Technology since 1830 (London: Burnett Books: Distributed by Hutchinson, 1982) and William H. McNeill, The Pursuit of Power: Technology, Armed Force, and Society since A.D. 1000 (Chicago: University of Chicago Press, 1982).

[3] Jari Eloranta, “Kriisien ja konfliktien tutkiminen kvantitatiivisena ilmiönä: Poikkitieteellisyyden haaste suomalaiselle sotahistorian tutkimukselle (The Study of Crises and Conflicts as Quantitative Phenomenon: The Challenge of Interdisciplinary Approaches to Finnish Study of Military History),” in Toivon historia – Toivo Nygårdille omistettu juhlakirja, ed. Kalevi Ahonen, et al. (Jyväskylä: Gummerus Kirjapaino Oy, 2003a).

[4] See Mark Harrison, ed., The Economics of World War II: Six Great Powers in International Comparisons (Cambridge, UK: Cambridge University Press, 1998b). Classic studies of this type are Alan Milward’s works on the European war economies; see e.g. Alan S. Milward, The German Economy at War (London: Athlon Press, 1965) and Alan S. Milward, War, Economy and Society 1939-1945 (London: Allen Lane, 1977).

[5] Sandler and Hartley, The Economics of Defense, xi; Jari Eloranta, “Different Needs, Different Solutions: The Importance of Economic Development and Domestic Power Structures in Explaining Military Spending in Eight Western Democracies during the Interwar Period” (Licentiate Thesis, University of Jyväskylä, 1998).

[6] See Jari Eloranta, “External Security by Domestic Choices: Military Spending as an Impure Public Good among Eleven European States, 1920-1938” (Dissertation, European University Institute, 2002) for details.

[7] Ibid.

[8] Daniel S. Geller and J. David Singer, Nations at War. A Scientific Study of International Conflict, vol. 58, Cambridge Studies in International Relations (Cambridge: Cambridge University Press, 1998), e.g. 1-7.

[9] See e.g. Jack S. Levy, “Theories of General War,” World Politics 37, no. 3 (1985). For an overview, see especially Geller and Singer, Nations at War: A Scientific Study of International Conflict. A classic study of war from the holistic perspective is Quincy Wright, A Study of War (Chicago: University of Chicago Press, 1942). See also Geoffrey Blainey, The Causes of War (New York: Free Press, 1973). On rational explanations of conflicts, see James D. Fearon, “Rationalist Explanations for War,” International Organization 49, no. 3 (1995).

[10] Charles Tilly, Coercion, Capital, and European States, AD 990-1990 (Cambridge, MA: Basil Blackwell, 1990), 6-14.

[11] For more, see especially ibid., Chapters 1 and 2.

[12] George Modelski and William R. Thompson, Leading Sectors and World Powers: The Coevolution of Global Politics and Economics, Studies in International Relations (Columbia, SC: University of South Carolina Press, 1996), 14-40. George Modelski and William R. Thompson, Seapower in Global Politics, 1494-1993 (Houndmills, UK: Macmillan Press, 1988).

[13] Kennedy, The Rise and Fall of the Great Powers: Economic Change and Military Conflict from 1500 to 2000, xiii. On specific criticism, see e.g. Jari Eloranta, “Military Competition between Friends? Hegemonic Development and Military Spending among Eight Western Democracies, 1920-1938,” Essays in Economic and Business History XIX (2001).

[14] Eloranta, “External Security by Domestic Choices: Military Spending as an Impure Public Good among Eleven European States, 1920-1938,” Sandler and Hartley, The Economics of Defense.

[15] Brian M. Pollins and Randall L. Schweller, “Linking the Levels: The Long Wave and Shifts in U.S. Foreign Policy, 1790- 1993,” American Journal of Political Science 43, no. 2 (1999), e.g. 445-446. E.g. Alex Mintz and Chi Huang, “Guns versus Butter: The Indirect Link,” American Journal of Political Science 35, no. 1 (1991) suggest an indirect (negative) growth effect via investment at a lag of at least five years.

[16] Caroly Webber and Aaron Wildavsky, A History of Taxation and Expenditure in the Western World (New York: Simon and Schuster, 1986).

[17] He outlines most of the following in Richard Bonney, “Introduction,” in The Rise of the Fiscal State in Europe c. 1200-1815, ed. Richard Bonney (Oxford: Oxford University Press, 1999b).

[18] Mancur Olson, “Dictatorship, Democracy, and Development,” American Political Science Review 87, no. 3 (1993).

[19] On the British Empire, see especially Niall Ferguson, Empire: The Rise and Demise of the British World Order and the Lessons for Global Power (New York: Basic Books, 2003). Ferguson has also tackled the issue of a possible American empire in a more polemical Niall Ferguson, Colossus: The Price of America’s Empire (New York: Penguin Press, 2004).

[20] Ferguson outlines his analytical framework most concisely in Niall Ferguson, The Cash Nexus: Money and Power in the Modern World, 1700-2000 (New York: Basic Books, 2001), especially Chapter 1.

[21] Webber and Wildavsky, A History of Taxation and Expenditure in the Western World, 39-67. See also McNeill, The Pursuit of Power: Technology, Armed Force, and Society since A.D. 1000.

[22] McNeill, The Pursuit of Power: Technology, Armed Force, and Society since A.D. 1000, 9-12.

[23] Webber and Wildavsky, A History of Taxation and Expenditure in the Western World.

[24] This interpretation of early medieval warfare and societies, including the concept of feudalism, has been challenged in more recent military history literature. See especially John France, “Recent Writing on Medieval Warfare: From the Fall of Rome to c. 1300,” Journal of Military History 65, no. 2 (2001).

[25] Webber and Wildavsky, A History of Taxation and Expenditure in the Western World, McNeill, The Pursuit of Power: Technology, Armed Force, and Society since A.D. 1000. See also Richard Bonney, ed., The Rise of the Fiscal State in Europe c. 1200-1815 (Oxford: Oxford University Press, 1999c).

[26] Ferguson, The Cash Nexus: Money and Power in the Modern World, 1700-2000, Tilly, Coercion, Capital, and European States, AD 990-1990, Jari Eloranta, “National Defense,” in The Oxford Encyclopedia of Economic History, ed. Joel Mokyr (Oxford: Oxford University Press, 2003b). See also Modelski and Thompson, Seapower in Global Politics, 1494-1993.

[27] Tilly, Coercion, Capital, and European States, AD 990-1990, 165, Henry Kamen, “The Economic and Social Consequences of the Thirty Years’ War,” Past and Present April (1968).

[28] Eloranta, “National Defense,” Henry Kamen, Empire: How Spain Became a World Power, 1492-1763, 1st American ed. (New York: HarperCollins, 2003), Douglass C. North, Institutions, Institutional Change, and Economic Performance (New York.: Cambridge University Press, 1990).

[29] Richard Bonney, “France, 1494-1815,” in The Rise of the Fiscal State in Europe c. 1200-1815, ed. Richard Bonney (Oxford: Oxford University Press, 1999a). War expenditure percentages (for the seventeenth and eighteenth centuries) were calculated using the so-called Forbonnais (and Bonney) database(s), available from European State Finance Database: http://www.le.ac.uk/hi/bon/ESFDB/RJB/FORBON/forbon.html and should be considered only illustrative.

[30] Marjolein ’t Hart, “The United Provinces, 1579-1806,” in The Rise of the Fiscal State in Europe c. 1200-1815, ed. Richard Bonney (Oxford: Oxford University Press, 1999). See also Ferguson, The Cash Nexus.

[31] See especially McNeill, The Pursuit of Power.

[32] Eloranta, “External Security by Domestic Choices: Military Spending as an Impure Public Good Among Eleven European States, 1920-1938,” Eloranta, “National Defense.” See also Ferguson, The Cash Nexus. On the military spending patterns of Great Powers in particular, see J. M. Hobson, “The Military-Extraction Gap and the Wary Titan: The Fiscal Sociology of British Defence Policy 1870-1914,” Journal of European Economic History 22, no. 3 (1993).

[33] The practice of total war, of course, is as old as civilizations themselves, ranging from the Punic Wars to the more modern conflicts. Here total war refers to the twentieth-century connotation of this term, embodying the use of all economic, political, and military might of a nation to destroy another in war. Therefore, even though the destruction of Carthage certainly qualifies as an action of total war, it is only in the nineteenth and twentieth centuries that this type of warfare and strategic thinking comes to full fruition. For example, the famous ancient military genius Sun Tzu advocated caution and planning in warfare, rather than using all means possible to win a war: “Thus, those skilled in war subdue the enemy’s army without battle. They capture his cities without assaulting them and overthrow his state without protracted operations.” Sun Tzu, The Art of War (Oxford: Oxford University Press, 1963), 79. With the ideas put forth by Clausewitz (see Carl von Clausewitz, On War (London: Penguin Books, 1982), e.g. Book Five, Chapter II) in the nineteenth century, the French Revolution, and Napoleon, the nature of warfare began to change. Clausewitz’s absolute war did not go as far as prescribing indiscriminate slaughter or other ruthless means to subdue civilian populations, but did contribute to the new understanding of the means of warfare and military strategy in the industrial age. The generals and despots of the twentieth century drew their own conclusions, and thus total war came to include not only subjugating the domestic economy to the needs of the war effort but also propaganda, destruction of civilian (economic) targets, and genocide.

[34] Rondo Cameron and Larry Neal, A Concise Economic History of the World: From Paleolithic Times to the Present, 4th ed. (Oxford: The Oxford University Press, 2003), 339. Thus, the estimate in e.g. Eloranta, “National Defense” is a hypothetical minimum estimate originally expressed in Gerard J. de Groot, The First World War (New York: Palgrave, 2001).

[35] See Table 13 in Stephen Broadberry and Mark Harrison, “The Economics of World War I: An Overview,” in The Economics of World War I, ed. Stephen Broadberry and Mark Harrison ((forthcoming), Cambridge University Press, 2005). The figures are, as the authors point out, only tentative.

[36] Eloranta, “External Security by Domestic Choices: Military Spending as an Impure Public Good Among Eleven European States, 1920-1938,” Eloranta, “National Defense,” Webber and Wildavsky, A History of Taxation and Expenditure in the Western World.

[37] Eloranta, “National Defense”.

[38] Mark Harrison, “The Economics of World War II: An overview,” in The Economics of World War II: Six Great Powers in International Comparisons, ed. Mark Harrison (Cambridge, UK: Cambridge University Press, 1998a), Eloranta, “National Defense.”

[39] Cameron and Neal, A Concise Economic History of the World, Harrison, “The Economics of World War II: An Overview,” Broadberry and Harrison, “The Economics of World War I: An Overview.” Again, the same caveats apply to the Harrison-Broadberry figures as disclaimed earlier.

[40] Eloranta, “National Defense”.

[41] Mark Harrison, “Soviet Industry and the Red Army under Stalin: A Military-Industrial Complex?” Les Cahiers du Monde russe 44, no. 2-3 (2003), Paul A.C. Koistinen, The Military-Industrial Complex: A Historical Perspective (New York: Praeger Publishers, 1980).

[42] Robert Higgs, “The Cold War Economy: Opportunity Costs, Ideology, and the Politics of Crisis,” Explorations in Economic History 31, no. 3 (1994); Ruben Trevino and Robert Higgs. 1992. “Profits of U.S. Defense Contractors,” Defense Economics Vol. 3, no. 3: 211-18.

[43] Eloranta, “National Defense”.

[44] See more Eloranta, “Military Competition between Friends? Hegemonic Development and Military Spending among Eight Western Democracies, 1920-1938.”

[45] For more, see especially Ferguson, The Cash Nexus, Peter H. Lindert, Growing Public. Social Spending and Economic Growth since the Eighteenth Century, 2 Vols., Vol. 1 (Cambridge: Cambridge University Press, 2004). On tradeoffs, see e.g. David R. Davis and Steve Chan, “The Security-Welfare Relationship: Longitudinal Evidence from Taiwan,” Journal of Peace Research 27, no. 1 (1990), Herschel I. Grossman and Juan Mendoza, “Butter and Guns: Complementarity between Economic and Military Competition,” Economics of Governance, no. 2 (2001), Alex Mintz, “Guns Versus Butter: A Disaggregated Analysis,” The American Political Science Review 83, no. 4 (1989), Mintz and Huang, “Guns versus Butter: The Indirect Link,” Kevin Narizny, “Both Guns and Butter, or Neither: Class Interests in the Political Economy of Rearmament,” American Political Science Review 97, no. 2 (2003).

Citation: Eloranta, Jari. “Military Spending Patterns in History”. EH.Net Encyclopedia, edited by Robert Whaples. September 16, 2005. URL http://eh.net/encyclopedia/military-spending-patterns-in-history/

Economic History of Malaysia

John H. Drabble, University of Sydney, Australia

General Background

The Federation of Malaysia (see map), formed in 1963, originally consisted of Malaya, Singapore, Sarawak and Sabah. Due to internal political tensions Singapore was obliged to leave in 1965. Malaya is now known as Peninsular Malaysia, and the two other territories, on the island of Borneo, as East Malaysia. Prior to 1963 these territories were under British rule for varying periods from the late eighteenth century. Malaya gained independence in 1957, Sarawak and Sabah (the latter known previously as British North Borneo) in 1963, and Singapore full independence in 1965. These territories lie between 2 and 6 degrees north of the equator. The terrain consists of extensive coastal plains backed by mountainous interiors. The soils are not naturally fertile, but the humid tropical climate, subject to monsoonal weather patterns, creates good conditions for plant growth. Historically much of the region was covered in dense rainforest (jungle), though much of this has been removed for commercial purposes over the last century, leading to extensive soil erosion and silting of the rivers which run from the interior to the coast.


The present government is a parliamentary system at the federal level (located in Kuala Lumpur, Peninsular Malaysia) and at the state level, based on periodic general elections. Each Peninsular state (except Penang and Melaka) has a traditional Malay ruler, the Sultan, one of whom is elected as paramount ruler of Malaysia (Yang dipertuan Agung) for a five-year term.

The population at the end of the twentieth century approximated 22 million and is ethnically diverse, consisting of 57 percent Malays and other indigenous peoples (collectively known as bumiputera), 24 percent Chinese, 7 percent Indians and the balance “others” (including a high proportion of non-citizen Asians, e.g., Indonesians, Bangladeshis, Filipinos) (Andaya and Andaya, 2001, 3-4).

Significance as a Case Study in Economic Development

Malaysia is generally regarded as one of the most successful non-western countries to have achieved a relatively smooth transition to modern economic growth over the last century or so. Since the late nineteenth century it has been a major supplier of primary products to the industrialized countries: tin, rubber, palm oil, timber, oil, liquefied natural gas, etc.

However, since about 1970 the leading sector in development has been a range of export-oriented manufacturing industries such as textiles, electrical and electronic goods, rubber products etc. Government policy has generally accorded a central role to foreign capital, while at the same time working towards more substantial participation for domestic, especially bumiputera, capital and enterprise. By 1990 the country had largely met the criteria for a Newly-Industrialized Country (NIC) status (30 percent of exports to consist of manufactured goods). While the Asian economic crisis of 1997-98 slowed growth temporarily, the current plan, titled Vision 2020, aims to achieve “a fully developed industrialized economy by that date. This will require an annual growth rate in real GDP of 7 percent” (Far Eastern Economic Review, Nov. 6, 2003). Malaysia is perhaps the best example of a country in which the economic roles and interests of various racial groups have been pragmatically managed in the long-term without significant loss of growth momentum, despite the ongoing presence of inter-ethnic tensions which have occasionally manifested in violence, notably in 1969 (see below).
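The 7 percent real GDP growth target quoted from the Far Eastern Economic Review implies, by ordinary compound-growth arithmetic, a doubling of the economy roughly every decade. A small sketch of that implication follows; reading the target as a steady constant rate is an assumption made here for illustration.

```python
import math

# Doubling time implied by a constant annual real GDP growth rate,
# e.g. the 7 percent target associated with Vision 2020.
def doubling_time(annual_rate: float) -> float:
    """Years needed for output to double at a steady compound rate."""
    return math.log(2) / math.log(1 + annual_rate)

if __name__ == "__main__":
    print(f"Doubling time at 7%: {doubling_time(0.07):.1f} years")
```

At 7 percent, real GDP doubles in a little over ten years, so sustaining the target from 1990 to 2020 would imply an economy several times its starting size.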

The Premodern Economy

Malaysia has a long history of internationally valued exports, being known from the early centuries A.D. as a source of gold, tin and exotics such as birds’ feathers, edible birds’ nests, aromatic woods, tree resins, etc. The commercial importance of the area was enhanced by its strategic position athwart the seaborne trade routes from the Indian Ocean to East Asia. Merchants from both these regions (Arabs, Indians and Chinese) regularly visited, and some became domiciled in ports such as Melaka [formerly Malacca], the location of one of the earliest local sultanates (c. 1402 A.D.) and a focal point for both local and international trade.

From the early sixteenth century the area was increasingly penetrated by European trading interests, first the Portuguese (from 1511), then the Dutch East India Company [VOC] (1602) in competition with the English East India Company [EIC] (1600) for the trade in pepper and various spices. By the late eighteenth century the VOC was dominant in the Indonesian region, while the EIC acquired bases in Malaysia, beginning with Penang (1786), Singapore (1819) and Melaka (1824). These were major staging posts in the growing trade with China and also served as footholds from which to expand British control into the Malay Peninsula (from 1870) and northwest Borneo (Sarawak from 1841 and North Borneo from 1882). Over these centuries there was an increasing inflow of migrants from China, attracted by the opportunities in trade and as a wage labor force for the burgeoning production of export commodities such as gold and tin. The indigenous people also engaged in commercial production (rice, tin), but remained basically within a subsistence economy and were reluctant to offer themselves as permanent wage labor. Overall, production in the premodern economy was relatively small in volume and technologically undeveloped. The capitalist sector, already foreign dominated, was still in its infancy (Drabble, 2000).

The Transition to Capitalist Production

The nineteenth century witnessed an enormous expansion in world trade which, between 1815 and 1914, grew on average at 4-5 percent a year compared to 1 percent in the preceding hundred years. The driving force came from the Industrial Revolution in the West which saw the innovation of large scale factory production of manufactured goods made possible by technological advances, accompanied by more efficient communications (e.g., railways, cars, trucks, steamships, international canals [Suez 1869, Panama 1914], telegraphs) which speeded up and greatly lowered the cost of long distance trade. Industrializing countries required ever-larger supplies of raw materials as well as foodstuffs for their growing populations. Regions such as Malaysia with ample supplies of virgin land and relative proximity to trade routes were well placed to respond to this demand. What was lacking was an adequate supply of capital and wage labor. In both aspects, the deficiency was supplied largely from foreign sources.
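The gap between 1 percent and 4-5 percent annual growth is easy to understate; compounded over a century it separates a roughly threefold expansion of trade from a nearly eighty-fold one. A minimal sketch of the comparison follows; the specific rates picked from within the quoted ranges are assumptions for illustration.

```python
# Compare the cumulative effect of the world trade growth rates cited in the
# text: ~1 percent a year before 1815 versus ~4.5 percent during 1815-1914.

def growth_factor(rate: float, years: int) -> float:
    """Total expansion factor for steady annual growth at the given rate."""
    return (1 + rate) ** years

pre_1815 = growth_factor(0.01, 100)    # the preceding hundred years at 1 percent
long_19th = growth_factor(0.045, 99)   # 1815-1914 at a mid-range 4.5 percent

if __name__ == "__main__":
    print(f"Pre-1815 century: {pre_1815:.1f}x; 1815-1914: {long_19th:.0f}x")
```

This order-of-magnitude difference in cumulative demand is what made land-abundant regions like Malaysia so attractive to foreign capital and migrant labor.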

As expanding British power brought stability to the region, Chinese migrants started to arrive in large numbers with Singapore quickly becoming the major point of entry. Most arrived with few funds but those able to amass profits from trade (including opium) used these to finance ventures in agriculture and mining, especially in the neighboring Malay Peninsula. Crops such as pepper, gambier, tapioca, sugar and coffee were produced for export to markets in Asia (e.g. China), and later to the West after 1850 when Britain moved toward a policy of free trade. These crops were labor, not capital, intensive and in some cases quickly exhausted soil fertility and required periodic movement to virgin land (Jackson, 1968).

Tin

Besides ample land, the Malay Peninsula also contained substantial deposits of tin. International demand for tin rose progressively in the nineteenth century due to the discovery of a more efficient method for producing tinplate (for canned food). At the same time deposits in major suppliers such as Cornwall (England) had been largely worked out, thus opening an opportunity for new producers. Traditionally tin had been mined by Malays from ore deposits close to the surface; difficulties with flooding limited the depth of mining, and the activity was in any case seasonal. From the 1840s the discovery of large deposits in the Peninsula states of Perak and Selangor attracted large numbers of Chinese migrants, who dominated the industry in the nineteenth century, bringing new technology which improved ore recovery and water control and facilitated mining at greater depths. By the end of the century Malayan tin exports (at approximately 52,000 metric tons) supplied just over half of world output. Singapore was a major center for smelting (refining) the ore into ingots. Tin mining also attracted attention from European, mainly British, investors, who introduced further new technology, such as high-pressure hoses to wash out the ore, the steam pump and, from 1912, the bucket dredge floating in its own pond, which could operate at even deeper levels. These innovations required substantial capital, for which the chosen vehicle was the public joint stock company, usually registered in Britain. Since no major new ore deposits were found, the emphasis was on increased efficiency in production. European operators, again employing mostly Chinese wage labor, enjoyed a technical advantage here and by 1929 accounted for 61 percent of Malayan output (Wong Lin Ken, 1965; Yip Yat Hoong, 1969).

Rubber

While tin mining brought considerable prosperity, it was a non-renewable resource. In the early twentieth century it was the agricultural sector which came to the forefront. The crops mentioned previously had boomed briefly but were hard pressed to survive severe price swings and the pests and diseases that were endemic in tropical agriculture. The cultivation of rubber-yielding trees became commercially attractive as a raw material for new industries in the West, notably tires for the booming automobile industry, especially in the U.S. Previously rubber had come from scattered trees growing wild in the jungles of South America, with production only expandable at rising marginal costs. Cultivation on estates generated economies of scale. In the 1870s the British government organized the transport of specimens of the tree Hevea Brasiliensis from Brazil to colonies in the East, notably Ceylon and Singapore. There the trees flourished and, after initial hesitancy over the five years needed for the trees to reach productive age, planters, Chinese and European alike, rushed to invest. The boom reached vast proportions as the rubber price reached record heights in 1910 (see Fig.1). Average values fell thereafter but investors were heavily committed and planting continued (also in the neighboring Netherlands Indies [Indonesia]). By 1921 the rubber acreage in Malaysia (mostly in the Peninsula) had reached 935,000 hectares (about 1.34 million acres), some 55 percent of the total in South and Southeast Asia, while output stood at 50 percent of world production.

Fig.1. Average London Rubber Prices, 1905-41 (current values)

As a result of this boom, rubber quickly surpassed tin as Malaysia’s main export product, a position that it was to hold until 1980. A distinctive feature of the industry was that the technology of extracting the rubber latex from the trees (called tapping) by an incision with a special knife, and its manufacture into various grades of sheet known as raw or plantation rubber, was easily adopted by a wide range of producers. The larger estates, mainly British-owned, were financed (as in the case of tin mining) through British-registered public joint stock companies. For example, between 1903 and 1912 some 260 companies were registered to operate in Malaya. Chinese planters for the most part preferred to form private partnerships to operate estates which were on average smaller. Finally, there were the smallholdings (under 40 hectares or 100 acres) of which those at the lower end of the range (2 hectares/5 acres or less) were predominantly owned by indigenous Malays who found growing and selling rubber more profitable than subsistence (rice) farming. These smallholders did not need much capital since their equipment was rudimentary and labor came either from within their family or in the form of share-tappers who received a proportion (say 50 percent) of the output. In Malaya in 1921 roughly 60 percent of the planted area was estates (75 percent European-owned) and 40 percent smallholdings (Drabble, 1991, 1).

The workforce for the estates consisted of migrants. British estates depended mainly on migrants from India, brought in under government auspices with fares paid and accommodation provided. Chinese business looked to the “coolie trade” from South China, with passage expenses advanced that migrants subsequently had to pay off. The flow of immigration was directly related to economic conditions in Malaysia. For example, arrivals of Indians averaged 61,000 a year between 1900 and 1920. Substantial numbers also came from the Netherlands Indies.

Thus far, most capitalist enterprise was located in Malaya. Sarawak and British North Borneo had a similar range of mining and agricultural industries in the nineteenth century, but their geographical location slightly away from the main trade route (see map) and their rugged internal terrain, which made transport costly, made them less attractive to foreign investment. However, the discovery of oil by a subsidiary of Royal Dutch-Shell, with production starting from 1907, put Sarawak more prominently in the business of exports. As in Malaya, the labor force came largely from immigrants from China and, to a lesser extent, Java.

The growth in production for export in Malaysia was facilitated by development of an infrastructure of roads, railways, ports (e.g. Penang, Singapore) and telecommunications under the auspices of the colonial governments, though again this was considerably more advanced in Malaya (Amarjit Kaur, 1985, 1998).

The Creation of a Plural Society

By the 1920s the large inflows of migrants had created a multi-ethnic population of the type which the British scholar, J.S. Furnivall (1948) described as a plural society in which the different racial groups live side by side under a single political administration but, apart from economic transactions, do not interact with each other either socially or culturally. Though the original intention of many migrants was to come for only a limited period (say 3-5 years), save money and then return home, a growing number were staying longer, having children and becoming permanently domiciled in Malaysia. The economic developments described in the previous section were unevenly located, for example, in Malaya the bulk of the tin mines and rubber estates were located along the west coast of the Peninsula. In the boom-times, such was the size of the immigrant inflows that in certain areas they far outnumbered the indigenous Malays. In social and cultural terms Indians and Chinese recreated the institutions, hierarchies and linguistic usage of their countries of origin. This was particularly so in the case of the Chinese. Not only did they predominate in major commercial centers such as Penang, Singapore, and Kuching, but they controlled local trade in the smaller towns and villages through a network of small shops (kedai) and dealerships that served as a pipeline along which export goods like rubber went out and in return imported manufactured goods were brought in for sale. In addition Chinese owned considerable mining and agricultural land. This created a distribution of wealth and division of labor in which economic power and function were directly related to race. In this situation lay the seeds of growing discontent among bumiputera that they were losing their ancestral inheritance (land) and becoming economically marginalized. 
As long as British colonial rule continued the various ethnic groups looked primarily to government to protect their interests and maintain peaceable relations. An example of colonial paternalism was the designation from 1913 of certain lands in Malaya as Malay Reservations in which only indigenous people could own and deal in property (Lim Teck Ghee, 1977).

Benefits and Drawbacks of an Export Economy

Prior to World War II the international economy was divided very broadly into the northern and southern hemispheres. The former contained most of the industrialized manufacturing countries and the latter the principal sources of foodstuffs and raw materials. The commodity exchange between the spheres was known as the Old International Division of Labor (OIDL). Malaysia’s place in this system was as a leading exporter of raw materials (tin, rubber, timber, oil, etc.) and an importer of manufactures. Since relatively little processing was done on the former prior to export, most of the value-added component in the final product accrued to foreign manufacturers, e.g. rubber tire manufacturers in the U.S.

It is clear from this situation that Malaysia depended heavily on earnings from exports of primary commodities to maintain the standard of living. Rice had to be imported (mainly from Burma and Thailand) because domestic production supplied on average only 40 percent of total needs. As long as export prices were high (for example during the rubber boom previously mentioned), the volume of imports remained ample. Profits to capital and good smallholder incomes supported an expanding economy. There are no official data for Malaysian national income prior to World War II, but some comparative estimates are given in Table 1 which indicate that Malayan Gross Domestic Product (GDP) per person was easily the leader in the Southeast and East Asian region by the late 1920s.

Table 1
GDP per Capita: Selected Asian Countries, 1900-1990
(in 1985 international dollars)

                      1900    1929    1950    1973    1990
Malaya/Malaysia(1)  600(2)    1910    1828    3088    5775
Singapore                -       - 2276(3)    5372   14441
Burma                  523     651     304     446     562
Thailand               594     623     652    1559    3694
Indonesia              617    1009     727    1253    2118
Philippines            735    1106     943    1629    1934
South Korea            568     945     565    1782    6012
Japan                  724    1192    1208    7133   13197

Notes: (1) Malaya until 1973; (2) guesstimate; (3) figure is for 1960.

Source: van der Eng (1994).

However, the international economy was subject to strong fluctuations. The levels of activity in the industrialized countries, especially the U.S., were the determining factors here. Almost immediately following World War I there was a depression from 1919-22. Strong growth in the mid and late-1920s was followed by the Great Depression (1929-32). As industrial output slumped, primary product prices fell even more heavily. For example, in 1932 rubber sold on the London market for about one one-hundredth of the peak price in 1910 (Fig.1). The effects on export earnings were very severe; in Malaysia’s case between 1929 and 1932 these dropped by 73 percent (Malaya), 60 percent (Sarawak) and 50 percent (North Borneo). The aggregate value of imports fell on average by 60 percent. Estates dismissed labor and since there was no social security, many workers had to return to their country of origin. Smallholder incomes dropped heavily and many who had taken out high-interest secured loans in more prosperous times were unable to service these and faced the loss of their land.

The colonial government attempted to counteract this vulnerability to economic swings by instituting schemes to restore commodity prices to profitable levels. For the rubber industry this involved two periods of mandatory restriction of exports to reduce world stocks and thus exert upward pressure on market prices. The first of these (named the Stevenson scheme after its originator) lasted from 1 October 1922 to 1 November 1928, and the second (the International Rubber Regulation Agreement) from 1 June 1934 to 1941. Tin exports were similarly restricted from 1931 to 1941. While these measures did succeed in raising world prices, the inequitable treatment of Asian as against European producers in both industries has been debated. The protective policy has also been blamed for “freezing” the structure of the Malaysian economy and hindering further development, for instance into manufacturing industry (Lim Teck Ghee, 1977; Drabble, 1991).

Why No Industrialization?

Malaysia had very few secondary industries before World War II. The little that did appear was connected mainly with the processing of the primary exports, rubber and tin, together with limited production of manufactured goods for the domestic market (e.g. bread, biscuits, beverages, cigarettes and various building materials). Much of this activity was Chinese-owned and located in Singapore (Huff, 1994). Among the reasons advanced are: the small size of the domestic market, the relatively high wage levels in Singapore which made products uncompetitive as exports, and a culture dominated by British trading firms which favored commerce over industry. Overshadowing all these was the dominance of primary production. When commodity prices were high, there was little incentive for investors, European or Asian, to move into other sectors. Conversely, when these prices fell capital and credit dried up, while incomes contracted, thus lessening effective demand for manufactures. W.G. Huff (2002) has argued that, prior to World War II, “there was, in fact, never a good time to embark on industrialization in Malaya.”

War Time 1942-45: The Japanese Occupation

During the Japanese occupation years of World War II, the export of primary products was limited to the relatively small amounts required for the Japanese economy. This led to the abandonment of large areas of rubber and the closure of many mines, the latter progressively affected by a shortage of spare parts for machinery. Businesses, especially those Chinese-owned, were taken over and reassigned to Japanese interests. Rice imports fell heavily and thus the population devoted a large part of their efforts to producing enough food to stay alive. Large numbers of laborers (many of whom died) were conscripted to work on military projects such as construction of the Thai-Burma railroad. Overall the war period saw the dislocation of the export economy, widespread destruction of the infrastructure (roads, bridges etc.) and a decline in standards of public health. It also saw a rise in inter-ethnic tensions due to the harsh treatment meted out by the Japanese to some groups, notably the Chinese, compared to a more favorable attitude towards the indigenous peoples among whom (Malays particularly) there was a growing sense of ethnic nationalism (Drabble, 2000).

Postwar Reconstruction and Independence

The returning British colonial rulers had two priorities after 1945: to rebuild the export economy as it had been under the OIDL (see above), and to rationalize the fragmented administrative structure (see General Background). The first was accomplished by the late 1940s, with estates and mines refurbished, production restarted once the labor force had been brought back, and adequate rice imports regained. The second was a complex and delicate political process which resulted in the formation of the Federation of Malaya (1948), from which Singapore, with its predominantly Chinese population (about 75%), was kept separate. In Borneo in 1946 the state of Sarawak, which had been a private kingdom of the English Brooke family (the so-called “White Rajas”) since 1841, and North Borneo, administered by the British North Borneo Company from 1881, were both transferred to direct rule from Britain. However, independence was clearly on the horizon and in Malaya tensions continued with the guerrilla campaign (called the “Emergency”) waged by the Malayan Communist Party (membership largely Chinese) from 1948-60 to force out the British and set up a Malayan Peoples’ Republic. This failed and in 1957 the Malayan Federation gained independence (Merdeka) under a “bargain” by which the Malays would hold political paramountcy while others, notably Chinese and Indians, were given citizenship and the freedom to pursue their economic interests. The bargain was institutionalized as the Alliance, later renamed the National Front (Barisan Nasional), which remains the dominant political grouping. In 1963 the Federation of Malaysia was formed, in which the bumiputera population was sufficient in total to offset the high proportion of Chinese arising from the short-lived inclusion of Singapore (Andaya and Andaya, 2001).

Towards the Formation of a National Economy

Postwar, two long-term problems came to the forefront. These were (a) the political fragmentation (see above) which had long prevented a centralized approach to economic development, coupled with control from Britain which gave primacy to imperial as opposed to local interests, and (b) excessive dependence on a small range of primary products (notably rubber and tin) which prewar experience had shown to be an unstable basis for the economy.

The first of these was addressed partly through the political rearrangements outlined in the previous section, with the economic aspects buttressed by a report from a mission to Malaya from the International Bank for Reconstruction and Development (IBRD) in 1954. The report argued that Malaya “is now a distinct national economy.” A further mission in 1963 urged “closer economic cooperation between the prospective Malaysia[n] territories” (cited in Drabble, 2000, 161, 176). The rationale for the Federation was that Singapore would serve as the initial center of industrialization, with Malaya, Sabah and Sarawak following at a pace determined by local conditions.

The second problem centered on economic diversification. The IBRD reports just noted advocated building up a range of secondary industries to meet a larger portion of the domestic demand for manufactures, i.e. import-substitution industrialization (ISI). In the interim dependence on primary products would perforce continue.

The Adoption of Planning

In the postwar world the development plan (usually a Five-Year Plan) was widely adopted by Less-Developed Countries (LDCs) to set directions, targets and estimated costs. Each of the Malaysian territories had plans during the 1950s. Malaya was the first to get industrialization of the ISI type under way. The Pioneer Industries Ordinance (1958) offered inducements such as five-year tax holidays, guarantees (to foreign investors) of freedom to repatriate profits and capital etc. A modest degree of tariff protection was granted. The main types of goods produced were consumer items such as batteries, paints, tires, and pharmaceuticals. Just over half the capital invested came from abroad, with neighboring Singapore in the lead. When Singapore exited the federation in 1965, Malaysia’s fledgling industrialization plans assumed greater significance although foreign investors complained of stifling bureaucracy retarding their projects.

Primary production, however, was still the major economic activity and here the problem was rejuvenation of the leading industries, rubber in particular. New capital investment in rubber had slowed since the 1920s, and the bulk of the existing trees were nearing the end of their economic life. The best prospect for rejuvenation lay in cutting down the old trees and replanting the land with new varieties capable of raising output per acre/hectare by a factor of three or four. However, the new trees required seven years to mature. Corporately owned estates could replant progressively, but smallholders could not face such a prolonged loss of income without support. To encourage replanting, the government offered grants to owners, financed by a special duty on rubber exports. The process was a lengthy one and it was the 1980s before replanting was substantially complete. Moreover, many estates elected to switch over to a new crop, oil palms (a product used primarily in foodstuffs), which offered quicker returns. Progress was swift and by the 1960s Malaysia was supplying 20 percent of world demand for this commodity.

Another priority at this time consisted of programs to improve the standard of living of the indigenous peoples, most of whom lived in the rural areas. The main instrument was land development, with schemes to open up large areas (say 100,000 acres or 40 000 hectares) which were then subdivided into 10 acre/4 hectare blocks for distribution to small farmers from overcrowded regions who were either short of land or had none at all. Financial assistance (repayable) was provided to cover housing and living costs until the holdings became productive. Rubber and oil palms were the main commercial crops planted. Steps were also taken to increase the domestic production of rice to lessen the historical dependence on imports.

In the primary sector Malaysia’s range of products was increased from the 1960s by a rapid increase in the export of hardwood timber, mostly in the form of (unprocessed) saw-logs. The markets were mainly in East Asia and Australasia. Here the largely untapped resources of Sabah and Sarawak came to the fore, but the rapid rate of exploitation led by the late twentieth century to damaging effects on both the environment (extensive deforestation, soil-loss, silting, changed weather patterns), and the traditional hunter-gatherer way of life of forest-dwellers (decrease in wild-life, fish, etc.). Other development projects such as the building of dams for hydroelectric power also had adverse consequences in all these respects (Amarjit Kaur, 1998; Drabble, 2000; Hong, 1987).

A further major addition to primary exports came from the discovery of large deposits of oil and natural gas in East Malaysia, and off the east coast of the Peninsula from the 1970s. Gas was exported in liquified form (LNG), and was also used domestically as a substitute for oil. At peak values in 1982, petroleum and LNG provided around 29 percent of Malaysian export earnings but had declined to 18 percent by 1988.

Industrialization and the New Economic Policy 1970-90

The program of industrialization aimed primarily at the domestic market (ISI) lost impetus in the late 1960s as foreign investors, particularly from Britain, switched attention elsewhere. An important factor here was the outbreak of civil disturbances in May 1969, following a federal election in which political parties in the Peninsula (largely non-bumiputera in membership) opposed to the Alliance did unexpectedly well. This brought to a head tensions which had been rising during the 1960s over issues such as the use of the national language, Malay (Bahasa Malaysia), as the main instructional medium in education. There was also discontent among Peninsular Malays that the economic fruits since independence had gone mostly to non-Malays, notably the Chinese. The outcome was severe inter-ethnic rioting centered in the federal capital, Kuala Lumpur, which led to the suspension of parliamentary government for two years and the implementation of the New Economic Policy (NEP).

The main aim of the NEP was a restructuring of the Malaysian economy over two decades, 1970-90 with the following aims:

  1. to redistribute corporate equity so that the bumiputera share would rise from around 2 percent to 30 percent. The share of other Malaysians would increase marginally from 35 to 40 percent, while that of foreigners would fall from 63 percent to 30 percent.
  2. to eliminate the close link between race and economic function (a legacy of the colonial era) and restructure employment so that the bumiputera share in each sector would reflect more accurately their proportion of the total population (roughly 55 percent). In 1970 this group held about two-thirds of jobs in the primary sector, where incomes were generally lowest, but only 30 percent in the secondary sector. In high-income middle-class occupations (e.g. professions, management) the share was only 13 percent.
  3. to eradicate poverty irrespective of race. In 1970 just under half of all households in Peninsular Malaysia had incomes below the official poverty line; Malays accounted for about 75 percent of these.

The principle underlying these aims was that the redistribution would not result in any one group losing in absolute terms. Rather it would be achieved through the process of economic growth, i.e. the economy would get bigger (more investment, more jobs, etc.). While the primary sector would continue to receive developmental aid under the successive Five Year Plans, the main emphasis was a switch to export-oriented industrialization (EOI), with Malaysia seeking a share in global markets for manufactured goods. Free Trade Zones (FTZs) were set up in places such as Penang, where production was carried on with the undertaking that the output would be exported. Firms locating there received concessions such as duty-free imports of raw materials and capital goods, and tax concessions, aimed primarily at foreign investors who were also attracted by Malaysia’s good facilities, relatively low wages and docile trade unions. A range of industries grew up: textiles, rubber and food products, chemicals, telecommunications equipment, electrical and electronic machinery/appliances, car assembly and some heavy industries, iron and steel. As with ISI, much of the capital and technology was foreign; for example the Japanese firm Mitsubishi was a partner in a venture to set up a plant to assemble a Malaysian national car, the Proton, from mostly imported components (Drabble, 2000).

Results of the NEP

Table 2 below shows the outcome of the NEP in the categories outlined above.

Table 2
Restructuring under the NEP, 1970-90

                                                     1970            1990
(a) Wealth ownership (%)
    Bumiputera                                        2.0            20.3
    Other Malaysians                                 34.6            54.6
    Foreigners                                       63.4            25.1
(b) Employment (% of total workers in each sector)
    Primary sector (agriculture, mineral
    extraction, forest products and fishing)
      Bumiputera                              67.6 [61.0]*    71.2 [36.7]*
      Others                                         32.4            28.8
    Secondary sector
    (manufacturing and construction)
      Bumiputera                              30.8 [14.6]*    48.0 [26.3]*
      Others                                         69.2            52.0
    Tertiary sector (services)
      Bumiputera                              37.9 [24.4]*    51.0 [36.9]*
      Others                                         62.1            49.0

Note: [ ]* is the proportion of the ethnic group thus employed. The “others” category has not been disaggregated by race to avoid undue complexity.
Source: Drabble, 2000, Table 10.9.

Section (a) shows that, overall, foreign ownership fell substantially more than planned, while that of “Other Malaysians” rose well above the target. Bumiputera ownership appears to have stopped well short of the 30 percent mark. However, other evidence suggests that in certain sectors such as agriculture/mining (35.7%) and banking/insurance (49.7%) bumiputera ownership of shares in publicly listed companies had already attained a level well beyond the target. Section (b) indicates that while bumiputera employment share in primary production increased slightly (due mainly to the land schemes), as a proportion of that ethnic group it declined sharply, while rising markedly in both the secondary and tertiary sectors. In middle class employment the share rose to 27 percent.

As regards the proportion of households below the poverty line, in broad terms the incidence in Malaysia fell from approximately 49 percent in 1970 to 17 percent in 1990, but with large regional variations between the Peninsula (15%), Sarawak (21 %) and Sabah (34%) (Drabble, 2000, Table 13.5). All ethnic groups registered big falls, but on average the non-bumiputera still enjoyed the lowest incidence of poverty. By 2002 the overall level had fallen to only 4 percent.

The restructuring of the Malaysian economy under the NEP is very clear when we look at the changes in composition of the Gross Domestic Product (GDP) in Table 3 below.

Table 3
Structural Change in GDP 1970-90 (% shares)

Year Primary Secondary Tertiary
1970 44.3 18.3 37.4
1990 28.1 30.2 41.7

Source: Malaysian Government, 1991, Table 3-2.

Over these two decades Malaysia accomplished a transition from a primary product-dependent economy to one in which manufacturing industry had emerged as the leading growth sector. Rubber and tin, which accounted for 54.3 percent of Malaysian export value in 1970, declined sharply in relative terms to a mere 4.9 percent in 1990 (Crouch, 1996, 222).

Factors in the structural shift

The post-independence state played a leading role in the transformation. The transition from British rule was smooth. Apart from the disturbances in 1969 government maintained a firm control over the administrative machinery. Malaysia’s Five Year Development plans were a model for the developing world. Foreign capital was accorded a central role, though subject to the requirements of the NEP. At the same time these requirements discouraged domestic investors, the Chinese especially, to some extent (Jesudason, 1989).

Development was helped by major improvements in education and health. Enrolments at the primary school level reached approximately 90 percent by the 1970s, and at the secondary level 59 percent of potential by 1987. Increased female enrolments, up from 39 percent to 58 percent of potential from 1975 to 1991, were a notable feature, as was the participation of women in the workforce which rose to just over 45 percent of total employment by 1986/7. In the tertiary sector the number of universities increased from one to seven between 1969 and 1990 and numerous technical and vocational colleges opened. Bumiputera enrolments soared as a result of the NEP policy of redistribution (which included ethnic quotas and government scholarships). However, tertiary enrolments totaled only 7 percent of the age group by 1987. There was an “educational-occupation mismatch,” with graduates (bumiputera especially) preferring jobs in government, and consequent shortfalls against strong demand for engineers, research scientists, technicians and the like. Better living conditions (more homes with piped water and more rural clinics, for example) led to substantial falls in infant mortality, improved public health and longer life-expectancy, especially in Peninsular Malaysia (Drabble, 2000, 248, 284-6).

The quality of national leadership was a crucial factor. This was particularly so during the NEP. The leading figure here was Dr Mahathir Mohamad, Malaysian Prime Minister from 1981 to 2003. While supporting the NEP aim through positive discrimination to give bumiputera an economic stake in the country commensurate with their indigenous status and share in the population, he nevertheless emphasized that this should ultimately lead them to a more modern outlook and an ability to compete with the other races in the country, the Chinese especially (see Khoo Boo Teik, 1995). There were, however, some paradoxes here. Mahathir was a meritocrat in principle, but in practice this period saw the spread of “money politics” (another expression for patronage) in Malaysia. In common with many other countries Malaysia embarked on a policy of privatization of public assets, notably in transportation (e.g. Malaysian Airlines), utilities (e.g. electricity supply) and communications (e.g. television). This was done not through an open process of competitive tendering but rather by a “nebulous ‘first come, first served’ principle” (Jomo, 1995, 8) which saw ownership pass directly to politically well-connected businessmen, mainly bumiputera, at relatively low valuations.

The New Development Policy

Positive action to promote bumiputera interests did not end with the NEP in 1990; it was followed in 1991 by the New Development Policy (NDP), which emphasized assistance only to “Bumiputera with potential, commitment and good track records” (Malaysian Government, 1991, 17) rather than the previous blanket measures to redistribute wealth and employment. In turn the NDP was part of a longer-term program known as Vision 2020. The aim here is to turn Malaysia into a fully industrialized country and to quadruple per capita income by the year 2020. This will require the country to continue ascending the technological “ladder” from low- to high-tech types of industrial production, with a corresponding increase in the intensity of capital investment and greater retention of value-added (i.e. the value added to raw materials in the production process) by Malaysian producers.
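As a rough arithmetic check on the ambition of this target (the thirty-year horizon 1990-2020 is an assumption from the dates given above), quadrupling per capita income over thirty years implies a compound annual growth rate of

```latex
g = 4^{1/30} - 1 \approx 0.047
```

that is, roughly 4.7 percent a year in per capita terms, sustained for three decades, which helps explain why the 8-9 percent aggregate growth of the 1990s (see below) was read as keeping Vision 2020 within reach.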

The Malaysian economy continued to boom at historically unprecedented rates of 8-9 percent a year for much of the 1990s (see next section). There was heavy expenditure on infrastructure, for example extensive building in Kuala Lumpur such as the Petronas Twin Towers (then the tallest buildings in the world). The volume of manufactured exports, notably electronic goods and electronic components, increased rapidly.

Asian Financial Crisis, 1997-98

The Asian financial crisis originated in heavy international currency speculation leading to major slumps in exchange rates, beginning with the Thai baht in May 1997, spreading rapidly throughout East and Southeast Asia and severely affecting the banking and finance sectors. The Malaysian ringgit exchange rate fell from RM 2.42 to 4.88 to the U.S. dollar by January 1998. There was a heavy outflow of foreign capital. To counter the crisis the International Monetary Fund (IMF) recommended austerity changes to fiscal and monetary policies. Some countries (Thailand, South Korea, and Indonesia) reluctantly adopted these. The Malaysian government refused and implemented independent measures: the ringgit became non-convertible externally and was pegged at RM 3.80 to the US dollar, while foreign capital repatriated within twelve months of entry was subject to substantial levies. Despite international criticism these actions stabilized the domestic situation quite effectively, restoring net growth (see next section), especially compared to neighboring Indonesia.

Rates of Economic Growth

Malaysia’s economic growth in comparative perspective from 1960-90 is set out in Table 4 below.

Table 4
Asia-Pacific Region: Growth of Real GDP (annual average percent)

1960-69 1971-80 1981-89
Japan 10.9 5.0 4.0
Asian “Tigers”
Hong Kong 10.0 9.5 7.2
South Korea 8.5 8.7 9.3
Singapore 8.9 9.0 6.9
Taiwan 11.6 9.7 8.1
ASEAN-4
Indonesia 3.5 7.9 5.2
Malaysia 6.5 8.0 5.4
Philippines 4.9 6.2 1.7
Thailand 8.3 9.9 7.1

Source: Drabble, 2000, Table 10.2; figures for Japan are for 1960-70, 1971-80, and 1981-90.

The data show that Japan, the dominant Asian economy for much of this period, progressively slowed by the 1990s (see below). The four leading Newly Industrialized Countries (the Asian “Tigers,” as they were called) followed export-oriented industrialization (EOI) strategies and achieved very high rates of growth. Among the four ASEAN (Association of Southeast Asian Nations, formed 1967) members, again all adopting EOI policies, Thailand stood out, followed closely by Malaysia. Reference to Table 1 above shows that by 1990 Malaysia, while still among the leaders in GDP per head, had slipped relative to the “Tigers.”

These economies, joined by China, continued growth into the 1990s at such high rates (Malaysia averaged around 8 percent a year) that the term “Asian miracle” became a common description. The exception was Japan, which encountered major problems with structural change and an over-extended banking system. Post-crisis, the countries of the region have started to recover, but at differing rates. The Malaysian economy contracted by nearly 7 percent in 1998, recovered to 8 percent growth in 2000, slipped again to under 1 percent in 2001 and has since stabilized at between 4 and 5 percent growth in 2002-04.

The new Malaysian Prime Minister (since October 2003), Abdullah Ahmad Badawi, plans to shift the emphasis in development to smaller, less-costly infrastructure projects and to break the previous dominance of “money politics.” Foreign direct investment will still be sought but priority will be given to nurturing the domestic manufacturing sector.

Further improvements in education will remain a key factor (Far Eastern Economic Review, Nov. 6, 2003).

Overview

Malaysia owes its successful historical economic record to a number of factors. Geographically it lies close to major world trade routes bringing early exposure to the international economy. The sparse indigenous population and labor force has been supplemented by immigrants, mainly from neighboring Asian countries with many becoming permanently domiciled. The economy has always been exceptionally open to external influences such as globalization. Foreign capital has played a major role throughout. Governments, colonial and national, have aimed at managing the structure of the economy while maintaining inter-ethnic stability. Since about 1960 the economy has benefited from extensive restructuring with sustained growth of exports from both the primary and secondary sectors, thus gaining a double impetus.

However, on a less positive assessment, the country has so far exchanged dependence on a limited range of primary products (e.g. tin and rubber) for dependence on an equally limited range of manufactured goods, notably electronics and electronic components (59 percent of exports in 2002). These industries are facing increasing competition from lower-wage countries, especially India and China. Within Malaysia the distribution of secondary industry is unbalanced, currently heavily favoring the Peninsula. Sabah and Sarawak are still heavily dependent on primary products (timber, oil, LNG). There is an urgent need to continue the search for new industries in which Malaysia can enjoy a comparative advantage in world markets, not least because inter-ethnic harmony depends heavily on the continuance of economic prosperity.

Select Bibliography

General Studies

Amarjit Kaur. Economic Change in East Malaysia: Sabah and Sarawak since 1850. London: Macmillan, 1998.

Andaya, L.Y. and Andaya, B.W. A History of Malaysia, second edition. Basingstoke: Palgrave, 2001.

Crouch, Harold. Government and Society in Malaysia. Sydney: Allen and Unwin, 1996.

Drabble, J.H. An Economic History of Malaysia, c.1800-1990: The Transition to Modern Economic Growth. Basingstoke: Macmillan and New York: St. Martin’s Press, 2000.

Furnivall, J.S. Colonial Policy and Practice: A Comparative Study of Burma and Netherlands India. Cambridge (UK), 1948.

Huff, W.G. The Economic Growth of Singapore: Trade and Development in the Twentieth Century. Cambridge: Cambridge University Press, 1994.

Jomo, K.S. Growth and Structural Change in the Malaysian Economy. London: Macmillan, 1990.

Industries/Transport

Alavi, Rokiah. Industrialization in Malaysia: Import Substitution and Infant Industry Performance. London: Routledge, 1996.

Amarjit Kaur. Bridge and Barrier: Transport and Communications in Colonial Malaya 1870-1957. Kuala Lumpur: Oxford University Press, 1985.

Drabble, J.H. Rubber in Malaya 1876-1922: The Genesis of the Industry. Kuala Lumpur: Oxford University Press, 1973.

Drabble, J.H. Malayan Rubber: The Interwar Years. London: Macmillan, 1991.

Huff, W.G. “Boom or Bust Commodities and Industrialization in Pre-World War II Malaya.” Journal of Economic History 62, no. 4 (2002): 1074-1115.

Jackson, J.C. Planters and Speculators: European and Chinese Agricultural Enterprise in Malaya 1786-1921. Kuala Lumpur: University of Malaya Press, 1968.

Lim Teck Ghee. Peasants and Their Agricultural Economy in Colonial Malaya, 1874-1941. Kuala Lumpur: Oxford University Press, 1977.

Wong Lin Ken. The Malayan Tin Industry to 1914. Tucson: University of Arizona Press, 1965.

Yip Yat Hoong. The Development of the Tin Mining Industry of Malaya. Kuala Lumpur: University of Malaya Press, 1969.

New Economic Policy

Jesudason, J.V. Ethnicity and the Economy: The State, Chinese Business and Multinationals in Malaysia. Kuala Lumpur: Oxford University Press, 1989.

Jomo, K.S., editor. Privatizing Malaysia: Rents, Rhetoric, Realities. Boulder, CO: Westview Press, 1995.

Khoo Boo Teik. Paradoxes of Mahathirism: An Intellectual Biography of Mahathir Mohamad. Kuala Lumpur: Oxford University Press, 1995.

Vincent, J.R., R.M. Ali and Associates. Environment and Development in a Resource-Rich Economy: Malaysia under the New Economic Policy. Cambridge, MA: Harvard University Press, 1997.

Ethnic Communities

Chew, Daniel. Chinese Pioneers on the Sarawak Frontier, 1841-1941. Kuala Lumpur: Oxford University Press, 1990.

Gullick, J.M. Malay Society in the Late Nineteenth Century. Kuala Lumpur: Oxford University Press, 1989.

Hong, Evelyne. Natives of Sarawak: Survival in Borneo’s Vanishing Forests. Penang: Institut Masyarakat Malaysia, 1987.

Shamsul, A.B. From British to Bumiputera Rule. Singapore: Institute of Southeast Asian Studies, 1986.

Economic Growth

Far Eastern Economic Review. Hong Kong. An excellent weekly overview of current regional affairs.

Malaysian Government. The Second Outline Perspective Plan, 1991-2000. Kuala Lumpur: Government Printer, 1991.

Van der Eng, Pierre. “Assessing Economic Growth and the Standard of Living in Asia 1870-1990.” Milan, Eleventh International Economic History Congress, 1994.

Citation: Drabble, John. “The Economic History of Malaysia”. EH.Net Encyclopedia, edited by Robert Whaples. July 31, 2004. URL http://eh.net/encyclopedia/economic-history-of-malaysia/

The History of American Labor Market Institutions and Outcomes

Joshua Rosenbloom, University of Kansas

One of the most important implications of modern microeconomic theory is that perfectly competitive markets produce an efficient allocation of resources. Historically, however, most markets have not approached the level of organization of this theoretical ideal. Instead of the costless and instantaneous communication envisioned in theory, market participants must rely on a set of incomplete and often costly channels of communication to learn about conditions of supply and demand; and they may face significant transaction costs to act on the information that they have acquired through these channels.

The economic history of labor market institutions is concerned with identifying the mechanisms that have facilitated the allocation of labor effort in the economy at different times, tracing the historical processes by which they have responded to shifting circumstances, and understanding how these mechanisms affected the allocation of labor as well as the distribution of labor’s products in different epochs.

Labor market institutions include both formal organizations (such as union hiring halls, government labor exchanges, and third party intermediaries such as employment agents), and informal mechanisms of communication such as word-of-mouth about employment opportunities passed between family and friends. The impact of these institutions is broad ranging. It includes the geographic allocation of labor (migration and urbanization), decisions about education and training of workers (investment in human capital), inequality (relative wages), the allocation of time between paid work and other activities such as household production, education, and leisure, and fertility (the allocation of time between production and reproduction).

Because each worker possesses a unique bundle of skills and attributes and each job is different, labor market transactions require the communication of a relatively large amount of information. In other words, the transactions costs involved in the exchange of labor are relatively high. The result is that the barriers separating different labor markets have sometimes been quite high, leaving these markets relatively poorly integrated with one another.

The frictions inherent in the labor market mean that even during macroeconomic expansions there may be both a significant number of unemployed workers and a large number of unfilled vacancies. When viewed from some distance and looked at in the long-run, however, what is most striking is how effective labor market institutions have been in adapting to the shifting patterns of supply and demand in the economy. Over the past two centuries American labor markets have accomplished a massive redistribution of labor out of agriculture into manufacturing, and then from manufacturing into services. At the same time they have accomplished a huge geographic reallocation of labor between the United States and other parts of the world as well as within the United States itself, both across states and regions and from rural locations to urban areas.

This essay is organized topically, beginning with a discussion of the evolution of institutions involved in the allocation of labor across space and then taking up the development of institutions that fostered the allocation of labor across industries and sectors. The third section considers issues related to labor market performance.

The Geographic Distribution of Labor

One of the dominant themes of American history is the process of European settlement (and the concomitant displacement of the native population). This movement of population is in essence a labor market phenomenon. From the beginning of European settlement in what became the United States, labor markets were characterized by the scarcity of labor in relation to abundant land and natural resources. Labor scarcity raised labor productivity and enabled ordinary Americans to enjoy a higher standard of living than comparable Europeans. Counterbalancing these inducements to migration, however, were the high costs of travel across the Atlantic and the significant risks posed by settlement in frontier regions. Over time, technological changes lowered the costs of communication and transportation. But exploiting these advantages required the parallel development of new labor market institutions.

Trans-Atlantic Migration in the Colonial Period

During the seventeenth and eighteenth centuries a variety of labor market institutions developed to facilitate the movement of labor in response to the opportunities created by American factor proportions. While some immigrants migrated on their own, the majority of immigrants were either indentured servants or African slaves.

Because of the cost of passage—which exceeded half a year’s income for a typical British immigrant and a full year’s income for a typical German immigrant—only a small portion of European migrants could afford to pay for their passage to the Americas (Grubb 1985a). Most paid instead with their only viable asset, their labor: they signed contracts, or “indentures,” with British merchants, committing themselves to work for a fixed number of years, and the merchants then sold these contracts to colonists after the ship reached America. Indentured servitude was introduced by the Virginia Company in 1619 and appears to have arisen from a combination of the terms of two other types of labor contract widely used in England at the time: service in husbandry and apprenticeship (Galenson 1981). In other cases, migrants borrowed money for their passage and committed to repay merchants by pledging to sell themselves as servants in America, a practice known as “redemptioner servitude” (Grubb 1986). Redemptioners bore increased risk because they could not predict in advance what terms they might be able to negotiate for their labor, but presumably they accepted this risk because of other benefits, such as the opportunity to choose their own master and to select where they would be employed.

Although data on immigration for the colonial period are scattered and incomplete, a number of scholars have estimated that between half and three-quarters of European immigrants arriving in the colonies came as indentured or redemptioner servants. Using data from the end of the colonial period, Grubb (1985b) found that close to three-quarters of English immigrants to Pennsylvania and nearly 60 percent of German immigrants arrived as servants.

A number of scholars have examined the terms of indenture and redemptioner contracts in some detail (see, e.g., Galenson 1981; Grubb 1985a). They find that consistent with the existence of a well-functioning market, the terms of service varied in response to differences in individual productivity, employment conditions, and the balance of supply and demand in different locations.

The other major source of labor for the colonies was the forced migration of African slaves. Slavery had been introduced in the West Indies at an early date, but it was not until the late seventeenth century that significant numbers of slaves began to be imported into the mainland colonies. From 1700 to 1780 the proportion of blacks in the Chesapeake region grew from 13 percent to around 40 percent. In South Carolina and Georgia, the black share of the population climbed from 18 percent to 41 percent in the same period (McCusker and Menard, 1985, p. 222). Galenson (1984) explains the transition from indentured European to enslaved African labor as the result of shifts in supply and demand conditions in England and the trans-Atlantic slave market. Conditions in Europe improved after 1650, reducing the supply of indentured servants, while at the same time increased competition in the slave trade was lowering the price of slaves (Dunn 1984). In some sense the colonies’ early experience with indentured servants paved the way for the transition to slavery. Like slaves, indentured servants were unfree, and ownership of their labor could be freely transferred from one owner to another. Unlike slaves, however, they could look forward to eventually becoming free (Morgan 1971).

Over time a marked regional division in labor market institutions emerged in colonial America. The use of slaves was concentrated in the Chesapeake and Lower South, where the presence of staple export crops (rice, indigo and tobacco) provided economic rewards for expanding the scale of cultivation beyond the size achievable with family labor. European immigrants (primarily indentured servants) tended to concentrate in the Chesapeake and Middle Colonies, where servants could expect to find the greatest opportunities to enter agriculture once they had completed their term of service. While New England was able to support self-sufficient farmers, its climate and soil were not conducive to the expansion of commercial agriculture, with the result that it attracted relatively few slaves, indentured servants, or free immigrants. These patterns are illustrated in Table 1, which summarizes the composition and destinations of English emigrants in the years 1773 to 1776.

Table 1

English Emigration to the American Colonies, by Destination and Type, 1773-76

Destination Number Percent of Total Percent Listed as Servants
New England 54 1.20 1.85
Middle Colonies 1,162 25.78 61.27
New York 303 6.72 11.55
Pennsylvania 859 19.06 78.81
Chesapeake 2,984 66.21 96.28
Maryland 2,217 49.19 98.33
Virginia 767 17.02 90.35
Lower South 307 6.81 19.54
Carolinas 106 2.35 23.58
Georgia 196 4.35 17.86
Florida 5 0.11 0.00
Total 4,507 100.00 80.90

Source: Grubb (1985b, p. 334).

International Migration in the Nineteenth and Twentieth Centuries

American independence marks a turning point in the development of labor market institutions. In 1808 Congress prohibited the importation of slaves. Meanwhile, the use of indentured servitude to finance the migration of European immigrants fell into disuse. As a result, most subsequent migration was at least nominally free migration.

The high cost of migration and the economic uncertainties of the new nation help to explain the relatively low level of immigration in the early years of the nineteenth century. But as the costs of transportation fell, the volume of immigration rose dramatically over the course of the century. Transportation costs were of course only one of the obstacles to international population movements. At least as important were problems of communication. Potential migrants might know in a general way that the United States offered greater economic opportunities than were available at home, but acting on this information required the development of labor market institutions that could effectively link job-seekers with employers.

For the most part, the labor market institutions that emerged in the nineteenth century to direct international migration were “informal” and thus difficult to document. As Rosenbloom (2002, ch. 2) describes, however, word-of-mouth played an important role in labor markets at this time. Many immigrants were following in the footsteps of friends or relatives already in the United States. Often these initial pioneers provided material assistance—helping to purchase ship and train tickets, providing housing—as well as information. The consequences of this so-called “chain migration” are readily reflected in a variety of kinds of evidence. Numerous studies of specific migration streams have documented the role of a small group of initial migrants in facilitating subsequent migration (for example, Barton 1975; Kamphoefner 1987; Gjerde 1985). At a more aggregate level, settlement patterns confirm the tendency of immigrants from different countries to concentrate in different cities (Ward 1971, p. 77; Galloway, Vedder and Shukla 1974).

Informal word-of-mouth communication was an effective labor market institution because it served both employers and job-seekers. For job-seekers the recommendations of friends and relatives were more reliable than those of third parties and often came with additional assistance. For employers the recommendations of current employees served as a kind of screening mechanism, since their employees were unlikely to encourage the immigration of unreliable workers.

While chain migration can explain a quantitatively large part of the redistribution of labor in the nineteenth century it is still necessary to explain how these chains came into existence in the first place. Chain migration always coexisted with another set of more formal labor market institutions that grew up largely to serve employers who could not rely on their existing labor force to recruit new hires (such as railroad construction companies). Labor agents, often themselves immigrants, acted as intermediaries between these employers and job-seekers, providing labor market information and frequently acting as translators for immigrants who could not speak English. Steamship companies operating between Europe and the United States also employed agents to help recruit potential migrants (Rosenbloom 2002, ch. 3).

By the 1840s networks of labor agents along with boarding houses serving immigrants and other similar support networks were well established in New York, Boston, and other major immigrant destinations. The services of these agents were well documented in published guides and most Europeans considering immigration must have known that they could turn to these commercial intermediaries if they lacked friends and family to guide them. After some time working in America these immigrants, if they were successful, would find steadier employment and begin to direct subsequent migration, thus establishing a new link in the stream of chain migration.

The economic impacts of immigration are theoretically ambiguous. Increased labor supply, by itself, would tend to lower wages—benefiting employers and hurting workers. But because immigrants are also consumers, the resulting increase in demand for goods and services will increase the demand for labor, partially offsetting the depressing effect of immigration on wages. As long as the labor to capital ratio rises, however, immigration will necessarily lower wages. But if, as was true in the late nineteenth century, foreign lending follows foreign labor, then there may be no negative impact on wages (Carter and Sutch 1999). Whatever the theoretical considerations, however, immigration became an increasingly controversial political issue during the late nineteenth and early twentieth centuries. While employers and some immigrant groups supported continued immigration, there was a growing nativist sentiment among other segments of the population. Anti-immigrant sentiments appear to have arisen out of a mix of perceived economic effects and concern about the implications of the ethnic, religious and cultural differences between immigrants and the native born.

In 1882, Congress passed the Chinese Exclusion Act. Subsequent legislative efforts to impose further restrictions on immigration passed Congress but foundered on presidential vetoes. The balance of political forces shifted, however, in the wake of World War I. In 1917 a literacy requirement was imposed for the first time, and in 1921 an Emergency Quota Act was passed (Goldin 1994).

With the passage of the Emergency Quota Act in 1921 and subsequent legislation culminating in the National Origins Act, the volume of immigration dropped sharply. Since this time international migration into the United States has been controlled to varying degrees by legal restrictions. Variations in the rules have produced variations in the volume of legal immigration. Meanwhile the persistence of large wage gaps between the United States and Mexico and other developing countries has encouraged a substantial volume of illegal immigration. It remains the case, however, that most of this migration—both legal and illegal—continues to be directed by chains of friends and relatives.

Recent trends in outsourcing and off-shoring have begun to create a new channel by which lower-wage workers outside the United States can respond to the country’s high wages without physically relocating. Workers in India, China, and elsewhere possessing technical skills can now provide services such as data entry or technical support by phone and over the internet. While the novelty of this phenomenon has attracted considerable attention, the actual volume of jobs moved off-shore remains limited, and there are important obstacles to overcome before more jobs can be carried out remotely (Edwards 2004).

Internal Migration in the Nineteenth and Twentieth Centuries

At the same time that American economic development created international imbalances between labor supply and demand it also created internal disequilibrium. Fertile land and abundant natural resources drew population toward less densely settled regions in the West. Over the course of the century, advances in transportation technologies lowered the cost of shipping goods from interior regions, vastly expanding the area available for settlement. Meanwhile transportation advances and technological innovations encouraged the growth of manufacturing and fueled increased urbanization. The movement of population and economic activity from the Eastern Seaboard into the interior of the continent and from rural to urban areas in response to these incentives is an important element of U.S. economic history in the nineteenth century.

In the pre-Civil War era, the labor market response to frontier expansion differed substantially between North and South, with profound effects on patterns of settlement and regional development. Much of the cost of migration is a result of the need to gather information about opportunities in potential destinations. In the South, plantation owners could spread these costs over a relatively large number of potential migrants—i.e., their slaves. Plantations were also relatively self-sufficient, requiring little urban or commercial infrastructure to make them economically viable. Moreover, the existence of well-established markets for slaves allowed western planters to expand their labor force by purchasing additional labor from eastern plantations.

In the North, on the other hand, migration took place through the relocation of small, family farms. Fixed costs of gathering information and the risks of migration loomed larger in these farmers’ calculations than they did for slaveholders, and northern farmers were more dependent on the presence of urban merchants to supply them with inputs and market their products. Consequently the task of mobilizing labor fell to promoters who bought up large tracts of land at low prices and then subdivided them into individual lots. To increase the value of these lands, promoters offered loans, actively encouraged the development of urban services such as blacksmith shops, grain merchants, wagon builders and general stores, and recruited settlers. With the spread of railroads, railroad construction companies also played a role in encouraging settlement along their routes to speed the development of traffic.

The differences in processes of westward migration in the North and South were reflected in the divergence of rates of urbanization, transportation infrastructure investment, manufacturing employment, and population density, all of which were higher in the North than in the South in 1860 (Wright 1986, pp. 19-29).

The Distribution of Labor among Economic Activities

Over the course of U.S. economic development technological changes and shifting consumption patterns have caused the demand for labor to increase in manufacturing and services and decline in agriculture and other extractive activities. These broad changes are illustrated in Table 2. As technological changes have increased the advantages of specialization and the division of labor, more and more economic activity has moved outside the scope of the household, and the boundaries of the labor market have been enlarged. As a result more and more women have moved into the paid labor force. On the other hand, with the increasing importance of formal education, there has been a decline in the number of children in the labor force (Whaples 2005).

Table 2

Sectoral Distribution of the Labor Force, 1800-1999

Year Total Labor Force (1000s) Agriculture Non-Agriculture Total Manufacturing Services
(sectoral shares are percentages of the total labor force; Manufacturing and Services together make up the non-agricultural total)
1800 1,658 76.2 23.8
1850 8,199 53.6 46.4
1900 29,031 37.5 59.4 35.8 23.6
1950 57,860 11.9 88.1 41.0 47.1
1999 133,489 2.3 97.7 24.7 73.0

Notes and Sources: 1800 and 1850 from Weiss (1986), pp. 646-49; remaining years from Hughes and Cain (2003), 547-48. For 1900-1999 Forestry and Fishing are included in the Agricultural labor force.

As these changes have taken place they have placed strains on existing labor market institutions and encouraged the development of new mechanisms to facilitate the distribution of labor. Over the course of the last century and a half the tendency has been a movement away from something approximating a “spot” market characterized by short-term employment relationships in which wages are equated to the marginal product of labor, and toward a much more complex and rule-bound set of long-term transactions (Goldin 2000, p. 586). While certain segments of the labor market still involve relatively anonymous and short-lived transactions, workers and employers are much more likely today to enter into long-term employment relationships that are expected to last for many years.

The evolution of labor market institutions in response to these shifting demands has been anything but smooth. During the late nineteenth century the expansion of organized labor was accompanied by often violent labor-management conflict (Friedman 2002). Not until the New Deal did unions gain widespread acceptance and a legal right to bargain. Yet even today, union organizing efforts are often met with considerable hostility.

Conflicts over union organizing efforts inevitably involved state and federal governments because the legal environment directly affected the bargaining power of both sides, and shifting legal opinions and legislative changes played an important part in determining the outcome of these contests. State and federal governments were also drawn into labor markets as various groups sought to limit hours of work, set minimum wages, provide support for disabled workers, and respond to other perceived shortcomings of existing arrangements. It would be wrong, however, to see the growth of government regulation as simply a movement from freer to more regulated markets. The ability to exchange goods and services rests ultimately on the legal system, and to this extent there has never been an entirely unregulated market. In addition, labor market transactions are never as simple as the anonymous exchange of other goods or services. Because the identities of individual buyers and sellers matter, and because of the long-term nature of many employment relationships, adjustments can occur along other margins besides wages, and many of these dimensions involve externalities that affect all workers at a particular establishment, or possibly workers in an entire industry or sector.

Government regulations have responded in many cases to needs voiced by participants on both sides of the labor market for assistance to achieve desired ends. That has not, of course, prevented both workers and employers from seeking to use government to alter the way in which the gains from trade are distributed within the market.

The Agricultural Labor Market

At the beginning of the nineteenth century most labor was employed in agriculture, and, with the exception of large slave plantations, most agricultural labor was performed on small, family-run farms. There were markets for temporary and seasonal agricultural laborers to supplement family labor supply, but in most parts of the country outside the South, families remained the dominant institution directing the allocation of farm labor. Reliable estimates of the number of farm workers are not readily available before 1860, when the federal Census first enumerated “farm laborers.” At this time census enumerators found about 800 thousand such workers, implying an average of less than one-half farm worker per farm. Interpretation of this figure is complicated, however, and it may either overstate the amount of hired help—since farm laborers included unpaid family workers—or understate it—since it excluded those who reported their occupation simply as “laborer” and may have spent some of their time working in agriculture (Wright 1988, p. 193). A possibly more reliable indicator is provided by the percentage of gross value of farm output spent on wage labor. This figure fell from 11.4 percent in 1870 to around 8 percent by 1900, indicating that hired labor was on average becoming even less important (Wright 1988, pp. 194-95).

In the South, after the Civil War, arrangements were more complicated. Former plantation owners continued to own large tracts of land that required labor if they were to be made productive. Meanwhile former slaves needed access to land and capital if they were to support themselves. While some land owners turned to wage labor to work their land, most relied heavily on institutions like sharecropping. On the supply side, croppers viewed this form of employment as a rung on the “agricultural ladder” that would lead eventually to tenancy and possibly ownership. Because climbing the agricultural ladder meant establishing one’s credit-worthiness with local lenders, southern farm laborers tended to sort themselves into two categories: locally established (mostly older, married men) croppers and renters on the one hand, and mobile wage laborers (mostly younger and unmarried) on the other. While the labor market for each of these types of workers appears to have been relatively competitive, the barriers between the two markets remained relatively high (Wright 1987, p. 111).

While the predominant pattern in agriculture then was one of small, family-operated units, there was an important countervailing trend toward specialization that both depended on and encouraged the emergence of a more specialized market for farm labor. Because specialization in a single crop increased the seasonality of labor demand, farmers could not afford to employ labor year-round, but had to depend on migrant workers. The use of seasonal gangs of migrant wage laborers developed earliest in California in the 1870s and 1880s, where employers relied heavily on Chinese immigrants. Following restrictions on Chinese entry, they were replaced first by Japanese, and later by Mexican workers (Wright 1988, pp. 201-204).

The Emergence of Internal Labor Markets

Outside of agriculture, at the beginning of the nineteenth century most manufacturing took place in small establishments. Hired labor might consist of a small number of apprentices, or, as in the early New England textile mills, a few child laborers hired from nearby farms (Ware 1931). As a result labor market institutions remained small-scale and informal, and institutions for training and skill acquisition remained correspondingly limited. Workers learned on the job as apprentices or helpers; advancement came through establishing themselves as independent producers rather than through internal promotion.

With the growth of manufacturing, and the spread of factory methods of production, especially in the years after the end of the Civil War, an increasing number of people could expect to spend their working lives as employees. One reflection of this change was the emergence in the 1870s of the problem of unemployment. During the depression of 1873 for the first time cities throughout the country had to contend with large masses of industrial workers thrown out of work and unable to support themselves through, in the language of the time, “no fault of their own” (Keyssar 1986, ch. 2).

The growth of large factories and the creation of new kinds of labor skills specific to a particular employer created returns to sustaining long-term employment relationships. As workers acquired job- and employer-specific skills their productivity increased giving rise to gains that were available only so long as the employment relationship persisted. Employers did little, however, to encourage long-term employment relationships. Instead authority over hiring, promotion and retention was commonly delegated to foremen or inside contractors (Nelson 1975, pp. 34-54). In the latter case, skilled craftsmen operated in effect as their own bosses contracting with the firm to supply components or finished products for an agreed price, and taking responsibility for hiring and managing their own assistants.

These arrangements were well suited to promoting external mobility. Foremen were often drawn from the immigrant community and could easily tap into word-of-mouth channels of recruitment. But these benefits came increasingly into conflict with rising costs of hiring and training workers.

The informality of personnel policies prior to World War I seems likely to have discouraged lasting employment relationships, and it is true that rates of labor turnover at the beginning of the twentieth century were considerably higher than they were to be later (Owen, 2004). Scattered evidence on the duration of employment relationships gathered by various state labor bureaus at the end of the century suggests, however, that at least some workers did establish lasting employment relationships (Carter 1988; Carter and Savoca 1990; Jacoby and Sharma 1992; James 1994).

The growing awareness of the costs of labor-turnover and informal, casual labor relations led reformers to advocate the establishment of more centralized and formal processes of hiring, firing and promotion, along with the establishment of internal job-ladders, and deferred payment plans to help bind workers and employers. The implementation of these reforms did not make significant headway, however, until the 1920s (Slichter 1929). Why employers began to establish internal labor markets in the 1920s remains in dispute. While some scholars emphasize pressure from workers (Jacoby 1984; 1985) others have stressed that it was largely a response to the rising costs of labor turnover (Edwards 1979).

The Government and the Labor Market

The growth of large factories contributed to rising labor tensions in the late nineteenth- and early twentieth-centuries. Issues like hours of work, safety, and working conditions all have a significant public goods aspect. While market forces of entry and exit will force employers to adopt policies that are sufficient to attract the marginal worker (the one just indifferent between staying and leaving), less mobile workers may find that their interests are not adequately represented (Freeman and Medoff 1984). One solution is to establish mechanisms for collective bargaining, and the years after the American Civil War were characterized by significant progress in the growth of organized labor (Friedman 2002). Unionization efforts, however, met strong opposition from employers, and suffered from the obstacles created by the American legal system’s bias toward protecting property and the freedom of contract. Under prevailing legal interpretation, strikes were often found by the courts to be conspiracies in restraint of trade with the result that the apparatus of government was often arrayed against labor.

Although efforts to win significant improvements in working conditions were rarely successful, there were still areas where there was room for mutually beneficial change. One such area involved the provision of disability insurance for workers injured on the job. Traditionally, injured workers had turned to the courts to adjudicate liability for industrial accidents. Legal proceedings were costly and their outcome unpredictable. By the early 1910s it became clear to all sides that a system of disability insurance was preferable to reliance on the courts. Resolution of this problem, however, required the intervention of state legislatures to establish mandatory state workers compensation insurance schemes and remove the issue from the courts. Once introduced workers compensation schemes spread quickly: nine states passed legislation in 1911; 13 more had joined the bandwagon by 1913, and by 1920 44 states had such legislation (Fishback 2001).

Along with workers compensation, state legislatures in the late nineteenth century also considered legislation restricting hours of work. Prevailing legal interpretations limited the effectiveness of such efforts for adult males. But rules restricting hours for women and children were found to be acceptable. The federal government passed legislation restricting the employment of children under 14 in 1916, but this law was found unconstitutional in 1918 (Goldin 2000, pp. 612-13).

The economic crisis of the 1930s triggered a new wave of government interventions in the labor market. During the 1930s the federal government granted unions the right to organize legally, established a system of unemployment, disability and old age insurance, and established minimum wage and overtime pay provisions.

In 1933 the National Industrial Recovery Act included provisions legalizing unions’ right to bargain collectively. Although the NIRA was eventually ruled to be unconstitutional, the key labor provisions of the Act were reinstated in the Wagner Act of 1935. While some of the provisions of the Wagner Act were modified in 1947 by the Taft-Hartley Act, its passage marks the beginning of the golden age of organized labor. Union membership jumped very quickly after 1935 from around 12 percent of the non-agricultural labor force to nearly 30 percent, and by the late 1940s had attained a peak of 35 percent, where it stabilized. Since the 1960s, however, union membership has declined steadily, to the point where it is now back at pre-Wagner Act levels.

The Social Security Act of 1935 introduced a federal unemployment insurance scheme that was operated in partnership with state governments and financed through a tax on employers. It also created government old age and disability insurance. In 1938, the federal Fair Labor Standards Act provided for minimum wages and for overtime pay. At first the coverage of these provisions was limited, but it has been steadily increased in subsequent years to cover most industries today.

In the post-war era, the federal government has expanded its role in managing labor markets both directly—through the establishment of occupational safety regulations, and anti-discrimination laws, for example—and indirectly—through its efforts to manage the macroeconomy to insure maximum employment.

A further expansion of federal involvement in labor markets began in 1964 with passage of the Civil Rights Act, which prohibited employment discrimination against both minorities and women. In 1967 the Age Discrimination in Employment Act was passed prohibiting discrimination against people aged 40 to 70 in regard to hiring, firing, working conditions and pay. The Family and Medical Leave Act of 1993 allows for unpaid leave to care for infants, children and other sick relatives (Goldin 2000, p. 614).

Whether state and federal legislation has significantly affected labor market outcomes remains unclear. Most economists would argue that the majority of labor’s gains in the past century would have occurred even in the absence of government intervention. Rather than shaping market outcomes, many legislative initiatives emerged as a result of underlying changes that were making advances possible. According to Claudia Goldin (2000, p. 553) “government intervention often reinforced existing trends, as in the decline of child labor, the narrowing of the wage structure, and the decrease in hours of work.” In other cases, such as Workers Compensation and pensions, legislation helped to establish the basis for markets.

The Changing Boundaries of the Labor Market

The rise of factories and urban employment had implications that went far beyond the labor market itself. On farms women and children had found ready employment (Craig 1993, ch. 4). But when the male household head worked for wages, employment opportunities for other family members were more limited. Late nineteenth-century convention largely dictated that married women did not work outside the home unless their husband was dead or incapacitated (Goldin 1990, p. 119-20). Children, on the other hand, were often viewed as supplementary earners in blue-collar households at this time.

Since 1900 changes in relative earnings power related to shifts in technology have encouraged women to enter the paid labor market while purchasing more of the goods and services that were previously produced within the home. At the same time, the rising value of formal education has led to the withdrawal of child labor from the market and increased investment in formal education (Whaples 2005). During the first half of the twentieth century high school education became nearly universal. And since World War II, there has been a rapid increase in the number of college educated workers in the U.S. economy (Goldin 2000, pp. 609-12).

Assessing the Efficiency of Labor Market Institutions

The function of labor markets is to match workers and jobs. As this essay has described the mechanisms by which labor markets have accomplished this task have changed considerably as the American economy has developed. A central issue for economic historians is to assess how changing labor market institutions have affected the efficiency of labor markets. This leads to three sets of questions. The first concerns the long-run efficiency of market processes in allocating labor across space and economic activities. The second involves the response of labor markets to short-run macroeconomic fluctuations. The third deals with wage determination and the distribution of income.

Long-Run Efficiency and Wage Gaps

Efforts to evaluate the efficiency of market allocation begin with what is commonly known as the “law of one price,” which states that within an efficient market the wage of similar workers doing similar work under similar circumstances should be equalized. The ideal of complete equalization is, of course, unlikely to be achieved given the high information and transactions costs that characterize labor markets. Thus, conclusions are usually couched in relative terms, comparing the efficiency of one market at one point in time with those of some other markets at other points in time. A further complication in measuring wage equalization is the need to compare homogeneous workers and to control for other differences (such as cost of living and non-pecuniary amenities).
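The cost-of-living adjustment mentioned above can be sketched numerically. The following is a minimal illustration, with entirely invented numbers, of why nominal wage gaps between two markets must be deflated by local price levels before being read as evidence of market inefficiency:

```python
# Hypothetical illustration (all numbers invented): a nominal wage gap
# between two markets can overstate the real gap if the higher-wage
# market also has a higher cost of living.

def real_wage_ratio(wage_a, col_a, wage_b, col_b):
    """Ratio of real wages in market A to market B, deflating each
    nominal wage by its local cost-of-living index."""
    return (wage_a / col_a) / (wage_b / col_b)

# Nominal wages suggest a 25 percent gap between markets A and B...
nominal_gap = 1.25 / 1.00
# ...but if market A's cost of living is 20 percent higher, the
# real gap shrinks to about 4 percent.
real_gap = real_wage_ratio(1.25, 1.20, 1.00, 1.00)
print(round(nominal_gap, 3))  # 1.25
print(round(real_gap, 3))     # 1.042
```

A full comparison would also control for worker characteristics and non-pecuniary amenities, as the text notes; the deflation step shown here is only the first adjustment.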

Falling transportation and communications costs have encouraged a long-run narrowing of wage gaps, but this trend has not been uniform, nor has it applied to all markets in equal measure. That said, what stands out is in fact the relative strength of the forces of market arbitrage that have operated in many contexts to promote wage convergence.

At the beginning of the nineteenth century, the costs of trans-Atlantic migration were still quite high and international wage gaps large. By the 1840s, however, vast improvements in shipping cut the costs of migration, and gave rise to an era of dramatic international wage equalization (O’Rourke and Williamson 1999, ch. 2; Williamson 1995). Figure 1 shows the movement of real wages relative to the United States in a selection of European countries. After the beginning of mass immigration wage differentials began to fall substantially in one country after another. International wage convergence continued up until the 1880s, when it appears that the accelerating growth of the American economy outstripped European labor supply responses and reversed wage convergence briefly. World War I and subsequent immigration restrictions caused a sharper break, and contributed to widening international wage differences during the middle portion of the twentieth century. From World War II until about 1980, European wage levels once again began to converge toward the U.S., but this convergence reflected largely internally-generated improvements in European living standards rather than labor market pressures.

Figure 1

Relative Real Wages of Selected European Countries, 1830-1980 (US = 100)

Source: Williamson (1995), Tables A2.1-A2.3.

Wage convergence also took place within some parts of the United States during the nineteenth century. Figure 2 traces wages in the North Central and Southern regions of the U.S. relative to those in the Northeast across the period from 1820 to the early twentieth century. Within the United States, wages in the North Central region of the country were 30 to 40 percent higher than in the East in the 1820s (Margo 2000a, ch. 5). Thereafter, wage gaps declined substantially, falling to the 10-20 percent range before the Civil War. Despite some temporary divergence during the war, wage gaps had fallen to 5 to 10 percent by the 1880s and 1890s. Much of this decline was made possible by faster and less expensive means of transportation, but it was also dependent on the development of labor market institutions linking the two regions, for while transportation improvements helped to link East and West, there was no corresponding North-South integration. While southern wages hovered near levels in the Northeast prior to the Civil War, they fell substantially below northern levels after the Civil War, as Figure 2 illustrates.

Figure 2

Relative Regional Real Wage Rates in the United States, 1825-1984

(Northeast = 100 in each year)

Notes and sources: Rosenbloom (2002, p. 133); Montgomery (1992). It is not possible to assemble entirely consistent data on regional wage variations over such an extended period. The nature of the wage data, the precise geographic coverage of the data, and the estimates of regional cost-of-living indices are all different. The earliest wage data (Margo 2000; Sundstrom and Rosenbloom 1993; Coelho and Shepherd 1976) are all based on occupational wage rates from payroll records for specific occupations; Rosenbloom (1996) uses average earnings across all manufacturing workers; while Montgomery (1992) uses individual level wage data drawn from the Current Population Survey, and calculates geographic variations using a regression technique to control for individual differences in human capital and industry of employment. I used the relative real wages that Montgomery (1992) reported for workers in manufacturing, and used an unweighted average of wages across the cities in each region to arrive at relative regional real wages. Interested readers should consult the various underlying sources for further details.

Despite the large North-South wage gap, Table 3 shows there was relatively little migration out of the South until large-scale foreign immigration came to an end. Migration from the South during World War I and the 1920s created a basis for future chain migration, but the Great Depression of the 1930s interrupted this process of adjustment. Not until the 1940s did the North-South wage gap begin to decline substantially (Wright 1986, pp. 71-80). By the 1970s the southern wage disadvantage had largely disappeared, and because of the declining fortunes of older manufacturing districts and the rise of Sunbelt cities, wages in the South now exceed those in the Northeast (Coelho and Ghali 1971; Bellante 1979; Sahling and Smith 1983; Montgomery 1992). Despite these shocks, however, the overall variation in wages appears comparable to levels attained by the end of the nineteenth century. Montgomery (1992), for example, finds that from 1974 to 1984 the standard deviation of wages across SMSAs was only about 10 percent of the average wage.

Table 3

Net Migration by Region, and Race, 1870-1950

Period        South            Northeast        North Central    West
              White    Black   White    Black   White    Black   White    Black
Number (in 1,000s)
1870-80          91      -68    -374       26      26       42     257        0
1880-90        -271      -88    -240       61     -43       28     554        0
1890-00         -30     -185     101      136    -445       49     374        0
1900-10         -69     -194    -196      109  -1,110       63   1,375       22
1910-20        -663     -555     -74      242    -145      281     880       32
1920-30        -704     -903    -177      435    -464      426   1,345       42
1930-40        -558     -480      55      273    -747      152   1,250       55
1940-50        -866   -1,581    -659      599  -1,296      626   2,822      356
Rate (migrants/1,000 population)
1870-80          11      -14     -33       55       2      124     274        0
1880-90         -26      -15     -18      107      -3       65     325        0
1890-00          -2      -26       6      200     -23      104     141        0
1900-10          -4      -24     -11      137     -48      122     329      542
1910-20         -33      -66      -3      254      -5      421     143      491
1920-30         -30     -103      -7      328     -15      415     160      421
1930-40         -20      -52       2      157     -22      113     116      378
1940-50         -28     -167     -20      259     -35      344     195      964

Note: Net migration is calculated as the difference between the actual increase in population over each decade and the predicted increase based on age and sex specific mortality rates and the demographic structure of the region’s population at the beginning of the decade. If the actual increase exceeds the predicted increase this implies a net migration into the region; if the actual increase is less than predicted this implies net migration out of the region. The states included in the Southern region are Oklahoma, Texas, Arkansas, Louisiana, Mississippi, Alabama, Tennessee, Kentucky, West Virginia, Virginia, North Carolina, South Carolina, Georgia, and Florida.

Source: Eldridge and Thomas (1964, pp. 90, 99).
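The net-migration accounting described in the table note can be expressed as a short calculation. The following is a minimal sketch, with entirely invented numbers (the actual estimates use age- and sex-specific mortality rates):

```python
# Sketch of the residual method for estimating net migration: the actual
# population change over a decade minus the change predicted from
# survival of the initial population plus births alone. All numbers
# below are invented for illustration.

def net_migration(pop_start, pop_end, survival_rate, births=0):
    """Actual minus predicted end-of-decade population.
    A positive result implies net in-migration; a negative result
    implies net out-migration."""
    predicted_end = pop_start * survival_rate + births
    return pop_end - predicted_end

# Example: a region starts the decade with 1,000 (thousand) people,
# 95 percent survive, and 150 (thousand) surviving births are added,
# predicting 1,100; an observed end population of 1,080 implies a net
# out-migration of 20 (thousand).
print(net_migration(1000, 1080, 0.95, births=150))  # -20.0
```

The migration rates in the lower panel of Table 3 then express this residual per 1,000 of the region's population.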

In addition to geographic wage gaps economists have considered gaps between farm and city, between black and white workers, between men and women, and between different industries. The literature on these topics is quite extensive and this essay can only touch on a few of the more general themes raised here as they relate to U.S. economic history.

Studies of farm-city wage gaps are a variant of the broader literature on geographic wage variation, related to the general movement of labor from farms to urban manufacturing and services. Here comparisons are complicated by the need to adjust for the non-wage perquisites that farm laborers typically received, which could be almost as large as cash wages. The issue of whether such gaps existed in the nineteenth century has important implications for whether the pace of industrialization was impeded by the lack of adequate labor supply responses. By the second half of the nineteenth century at least, it appears that farm-manufacturing wage gaps were small and markets were relatively integrated (Wright 1988, pp. 204-5). Margo (2000, ch. 4) offers evidence of a high degree of equalization within local labor markets between farm and urban wages as early as 1860. Making comparisons within counties and states, he reports that farm wages were within 10 percent of urban wages in eight states. Analyzing data from the late nineteenth century through the 1930s, Hatton and Williamson (1991) find that farm and city wages were nearly equal within U.S. regions by the 1890s. It appears, however, that during the Great Depression farm wages were much more flexible than urban wages, causing a large gap to emerge at this time (Alston and Williamson 1991).

Much attention has been focused on trends in wage gaps by race and sex. The twentieth century has seen a substantial convergence in both of these differentials. Table 4 displays comparisons of earnings of black males relative to white males for full time workers. In 1940, full-time black male workers earned only about 43 percent of what white male full-time workers did. By 1980 the racial pay ratio had risen to nearly 73 percent, but there has been little subsequent progress. Until the mid-1960s these gains can be attributed primarily to migration from the low-wage South to higher paying areas in the North, and to increases in the quantity and quality of black education over time (Margo 1995; Smith and Welch 1990). Since then, however, most gains have been due to shifts in relative pay within regions. Although it is clear that discrimination was a key factor in limiting access to education, the role of discrimination within the labor market in contributing to these differentials has been a more controversial topic (see Wright 1986, pp. 127-34). But the episodic nature of black wage gains, especially after 1964, is compelling evidence that discrimination has played a role historically in earnings differences and that federal anti-discrimination legislation was a crucial factor in reducing its effects (Donohue and Heckman 1991).

Table 4

Black Male Wages as a Percentage of White Male Wages, 1940-2004

Date Black Relative Wage
1940 43.4
1950 55.2
1960 57.5
1970 64.4
1980 72.6
1990 70.0
2004 77.0

Notes and Sources: Data for 1940 through 1980 are based on Census data as reported in Smith and Welch (1989, Table 8). Data for 1990 are from Ehrenberg and Smith (2000, Table 12.4) and refer to earnings of full time, full year workers. Data from 2004 are for median weekly earnings of full-time wage and salary workers derived from data in the Current Population Survey accessed on-line from the Bureau of Labor Statistics on 13 December 2005; URL ftp://ftp.bls.gov/pub/special.requests/lf/aat37.txt.

Male-Female wage gaps have also narrowed substantially over time. In the 1820s women’s earnings in manufacturing were a little less than 40 percent of those of men, but this ratio rose over time reaching about 55 percent by the 1920s. Across all sectors women’s relative pay rose during the first half of the twentieth century, but gains in female wages stalled during the 1950s and 1960s at the time when female labor force participation began to increase rapidly. Beginning in the late 1970s or early 1980s, relative female pay began to rise again, and today women earn about 80 percent what men do (Goldin 1990, table 3.2; Goldin 2000, pp. 606-8). Part of this remaining difference is explained by differences in the occupational distribution of men and women, with women tending to be concentrated in lower paying jobs. Whether these differences are the result of persistent discrimination or arise because of differences in productivity or a choice by women to trade off greater flexibility in terms of labor market commitment for lower pay remains controversial.

In addition to locational, sectoral, racial and gender wage differentials, economists have also documented and analyzed differences by industry. Krueger and Summers (1987) find that there are pronounced differences in wages by industry within well-specified occupational classes, and that these differentials have remained relatively stable over several decades. One interpretation of this phenomenon is that in industries with substantial market power workers are able to extract some of the monopoly rents as higher pay. An alternative view is that workers are in fact heterogeneous, and differences in wages reflect a process of sorting in which higher paying industries attract more able workers.

The Response to Short-run Macroeconomic Fluctuations

The existence of unemployment is one of the clearest indications of the persistent frictions that characterize labor markets. As described earlier, the concept of unemployment first entered common discussion with the growth of the factory labor force in the 1870s. Unemployment was not a visible social phenomenon in an agricultural economy, although there was undoubtedly a great deal of hidden underemployment.

Although one might have expected that the shift from spot toward more contractual labor markets would have increased rigidities in the employment relationship, and thus raised the level of unemployment, there is in fact no evidence of any long-run increase in the level of unemployment.

Contemporaneous measurements of the rate of unemployment only began in 1940. Prior to this date, economic historians have had to estimate unemployment levels from a variety of other sources. Decennial censuses provide benchmark levels, but it is necessary to interpolate between these benchmarks based on other series. Conclusions about long-run changes in unemployment behavior depend to a large extent on the method used to interpolate between benchmark dates. Estimates prepared by Stanley Lebergott (1964) suggest that the average level of unemployment and its volatility have declined between the pre-1930 and post-World War II periods. Christina Romer (1986a, 1986b), however, has argued that there was no decline in volatility. Rather, she argues that the apparent change in behavior is the result of Lebergott’s interpolation procedure.

While the aggregate behavior of unemployment has changed surprisingly little over the past century, the changing nature of employment relationships has been reflected much more clearly in changes in the distribution of the burden of unemployment (Goldin 2000, pp. 591-97). At the beginning of the twentieth century, unemployment was relatively widespread, and largely unrelated to personal characteristics. Thus many employees faced great uncertainty about the permanence of their employment relationship. Today, on the other hand, unemployment is highly concentrated: falling heavily on the least skilled, the youngest, and the non-white segments of the labor force. Thus, the movement away from spot markets has tended to create a two-tier labor market in which some workers are highly vulnerable to economic fluctuations, while others remain largely insulated from economic shocks.

Wage Determination and Distributional Issues

American economic growth has generated vast increases in the material standard of living. Real gross domestic product per capita, for example, has increased more than twenty-fold since 1820 (Steckel 2002). This growth in total output has in large part been passed on to labor in the form of higher wages. Although labor’s share of national output has fluctuated somewhat, in the long-run it has remained surprisingly stable. According to Abramovitz and David (2000, p. 20), labor received 65 percent of national income in the years 1800-1855. Labor’s share dropped in the late nineteenth and early twentieth centuries, falling to a low of 54 percent of national income between 1890 and 1927, but has since risen, reaching 65 percent again in 1966-1989. Thus, over the long term, labor income has grown at the same rate as total output in the economy.

The distribution of labor’s gains across different groups in the labor force has also varied over time. I have already discussed patterns of wage variation by race and gender, but another important issue revolves around the overall level of inequality of pay, and differences in pay between groups of skilled and unskilled workers. Careful research by Piketty and Saez (2003) using individual income tax returns has documented changes in the overall distribution of income in the United States since 1913. They find that inequality has followed a U-shaped pattern over the course of the twentieth century. Inequality was relatively high at the beginning of the period they consider, fell sharply during World War II, held steady until the early 1970s and then began to increase, reaching levels comparable to those in the early twentieth century by the 1990s.

An important factor in the rising inequality of income since 1970 has been growing dispersion in wage rates. The wage differential between workers in the 90th percentile of the wage distribution and those in the 10th percentile increased by 49 percent between 1969 and 1995 (Plotnick et al 2000, pp. 357-58). These shifts are mirrored in increased premiums earned by college graduates relative to high school graduates. Two primary explanations have been advanced for these trends. First, there is evidence that technological changes—especially those associated with the increased use of information technology—have increased relative demand for more educated workers (Murnane, Willett and Levy 1995). Second, increased global integration has allowed low-wage manufacturing industries overseas to compete more effectively with U.S. manufacturers, thus depressing wages in what have traditionally been high-paying blue collar jobs.
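The 90-10 differential mentioned above is a simple summary statistic of wage dispersion. A minimal sketch using a synthetic wage sample (nearest-rank percentiles; all numbers invented):

```python
# Illustrative sketch (synthetic data): the 90-10 wage differential is
# the ratio of the wage at the 90th percentile of the distribution to
# the wage at the 10th percentile.

def percentile(sorted_vals, p):
    """Nearest-rank percentile on a sorted list (0 < p <= 100)."""
    k = max(0, int(round(p / 100 * len(sorted_vals))) - 1)
    return sorted_vals[k]

# Hypothetical hourly wages for ten workers.
wages = sorted([8, 9, 10, 11, 12, 14, 16, 20, 28, 40])

w90 = percentile(wages, 90)  # wage at the 90th percentile
w10 = percentile(wages, 10)  # wage at the 10th percentile
print(w90 / w10)  # 3.5 -- the 90-10 ratio for this sample
```

A 49 percent increase in this ratio, as reported for 1969-1995, means the top of the distribution pulled that much further away from the bottom.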

Efforts to expand the scope of analysis over a longer run encounter problems with more limited data. Based on selected wage ratios of skilled and unskilled workers, Williamson and Lindert (1980) have argued that there was an increase in wage inequality over the course of the nineteenth century. But other scholars have argued that the wage series that Williamson and Lindert used are unreliable (Margo 2000b, pp. 224-28).

Conclusions

The history of labor market institutions in the United States illustrates the point that real world economies are substantially more complex than the simplest textbook models. Instead of a disinterested and omniscient auctioneer, the process of matching buyers and sellers takes place through the actions of self-interested market participants. The resulting labor market institutions do not respond immediately and precisely to shifting patterns of incentives. Rather they are subject to historical forces of increasing-returns and lock-in that cause them to change gradually and along path-dependent trajectories.

For all of these departures from the theoretically ideal market, however, the history of labor markets in the United States can also be seen as a confirmation of the remarkable power of market processes of allocation. From the beginning of European settlement in mainland North America, labor markets have done a remarkable job of responding to shifting patterns of demand and supply. Not only have they accomplished the massive geographic shifts associated with the settlement of the United States, but they have also dealt with huge structural changes induced by the sustained pace of technological change.

References

Abramovitz, Moses and Paul A. David. “American Macroeconomic Growth in the Era of Knowledge-Based Progress: The Long-Run Perspective.” In The Cambridge Economic History of the United States, Volume 3: The Twentieth Century, edited by Stanley L. Engerman and Robert Gallman. New York: Cambridge University Press, 2000.

Alston, Lee J. and Jeffery G. Williamson. “The Earnings Gap between Agricultural and Manufacturing Laborers, 1925-1941.” Journal of Economic History 51, no. 1 (1991): 83-99.

Barton, Josef J. Peasants and Strangers: Italians, Rumanians, and Slovaks in an American City, 1890-1950. Cambridge, MA: Harvard University Press, 1975.

Bellante, Don. “The North-South Differential and the Migration of Heterogeneous Labor.” American Economic Review 69, no. 1 (1979): 166-75.

Carter, Susan B. “The Changing Importance of Lifetime Jobs in the U.S. Economy, 1892-1978.” Industrial Relations 27 (1988): 287-300.

Carter, Susan B. and Elizabeth Savoca. “Labor Mobility and Lengthy Jobs in Nineteenth-Century America.” Journal of Economic History 50, no. 1 (1990): 1-16.

Carter, Susan B. and Richard Sutch. “Historical Perspectives on the Economic Consequences of Immigration into the United States.” In The Handbook of International Migration: The American Experience, edited by Charles Hirschman, Philip Kasinitz and Josh DeWind. New York: Russell Sage Foundation, 1999.

Coelho, Philip R.P. and Moheb A. Ghali. “The End of the North-South Wage Differential.” American Economic Review 61, no. 5 (1971): 932-37.

Coelho, Philip R.P. and James F. Shepherd. “Regional Differences in Real Wages: The United States in 1851-1880.” Explorations in Economic History 13 (1976): 203-30.

Craig, Lee A. To Sow One Acre More: Childbearing and Farm Productivity in the Antebellum North. Baltimore: Johns Hopkins University Press, 1993.

Donohue, John J., III and James J. Heckman. “Continuous versus Episodic Change: The Impact of Civil Rights Policy on the Economic Status of Blacks.” Journal of Economic Literature 29, no. 4 (1991): 1603-43.

Dunn, Richard S. “Servants and Slaves: The Recruitment and Employment of Labor.” In Colonial British America: Essays in the New History of the Early Modern Era, edited by Jack P. Greene and J.R. Pole. Baltimore: Johns Hopkins University Press, 1984.

Edwards, B. “A World of Work: A Survey of Outsourcing.” Economist 13 November 2004.

Edwards, Richard. Contested Terrain: The Transformation of the Workplace in the Twentieth Century. New York: Basic Books, 1979.

Ehrenberg, Ronald G. and Robert S. Smith. Modern Labor Economics: Theory and Public Policy, seventh edition. Reading, MA: Addison-Wesley, 2000.

Eldridge, Hope T. and Dorothy Swaine Thomas. Population Redistribution and Economic Growth, United States 1870-1950, vol. 3: Demographic Analyses and Interrelations. Philadelphia: American Philosophical Society, 1964.

Fishback, Price V. “Workers’ Compensation.” EH.Net Encyclopedia, edited by Robert Whaples. August 15, 2001. URL http://www.eh.net/encyclopedia/articles/fishback.workers.compensation.

Freeman, Richard and James Medoff. What Do Unions Do? New York: Basic Books, 1984.

Friedman, Gerald. “Labor Unions in the United States.” EH.Net Encyclopedia, edited by Robert Whaples. May 8, 2002. URL http://www.eh.net/encyclopedia/articles/friedman.unions.us.

Galenson, David W. White Servitude in Colonial America. New York: Cambridge University Press, 1981.

Galenson, David W. “The Rise and Fall of Indentured Servitude in the Americas: An Economic Analysis.” Journal of Economic History 44, no. 1 (1984): 1-26.

Gallaway, Lowell E., Richard K. Vedder and Vishwa Shukla. “The Distribution of the Immigrant Population in the United States: An Econometric Analysis.” Explorations in Economic History 11 (1974): 213-26.

Gjerde, John. From Peasants to Farmers: Migration from Balestrand, Norway to the Upper Middle West. New York: Cambridge University Press, 1985.

Goldin, Claudia. “The Political Economy of Immigration Restriction in the United States, 1890 to 1921.” In The Regulated Economy: A Historical Approach to Political Economy, edited by Claudia Goldin and Gary Libecap. Chicago: University of Chicago Press, 1994.

Goldin, Claudia. “Labor Markets in the Twentieth Century.” In The Cambridge Economic History of the United States, Volume 3: The Twentieth Century, edited by Stanley L. Engerman and Robert Gallman. Cambridge: Cambridge University Press, 2000.

Grubb, Farley. “The Market for Indentured Immigrants: Evidence on the Efficiency of Forward Labor Contracting in Philadelphia, 1745-1773.” Journal of Economic History 45, no. 4 (1985a): 855-68.

Grubb, Farley. “The Incidence of Servitude in Trans-Atlantic Migration, 1771-1804.” Explorations in Economic History 22 (1985b): 316-39.

Grubb, Farley. “Redemptioner Immigration to Pennsylvania: Evidence on Contract Choice and Profitability.” Journal of Economic History 46, no. 2 (1986): 407-18.

Hatton, Timothy J. and Jeffrey G. Williamson. “Integrated and Segmented Labor Markets: Thinking in Two Sectors.” Journal of Economic History 51, no. 2 (1991): 413-25.

Hughes, Jonathan and Louis Cain. American Economic History, sixth edition. Boston: Addison-Wesley, 2003.

Jacoby, Sanford M. “The Development of Internal Labor Markets in American Manufacturing Firms.” In Internal Labor Markets, edited by Paul Osterman, 23-69. Cambridge, MA: MIT Press, 1984.

Jacoby, Sanford M. Employing Bureaucracy: Managers, Unions, and the Transformation of Work in American Industry, 1900-1945. New York: Columbia University Press, 1985.

Jacoby, Sanford M. and Sunil Sharma. “Employment Duration and Industrial Labor Mobility in the United States, 1880-1980.” Journal of Economic History 52, no. 1 (1992): 161-79.

James, John A. “Job Tenure in the Gilded Age.” In Labour Market Evolution: The Economic History of Market Integration, Wage Flexibility, and the Employment Relation, edited by George Grantham and Mary MacKinnon. New York: Routledge, 1994.

Kamphoefner, Walter D. The Westfalians: From Germany to Missouri. Princeton, NJ: Princeton University Press, 1987.

Keyssar, Alexander. Out of Work: The First Century of Unemployment in Massachusetts. New York: Cambridge University Press, 1986.

Krueger, Alan B. and Lawrence H. Summers. “Reflections on the Inter-Industry Wage Structure.” In Unemployment and the Structure of Labor Markets, edited by Kevin Lang and Jonathan Leonard, 17-47. Oxford: Blackwell, 1987.

Lebergott, Stanley. Manpower in Economic Growth: The American Record since 1800. New York: McGraw-Hill, 1964.

Margo, Robert. “Explaining Black-White Wage Convergence, 1940-1950: The Role of the Great Compression.” Industrial and Labor Relations Review 48 (1995): 470-81.

Margo, Robert. Wages and Labor Markets in the United States, 1820-1860. Chicago: University of Chicago Press, 2000a.

Margo, Robert. “The Labor Force in the Nineteenth Century.” In The Cambridge Economic History of the United States, Volume 2: The Long Nineteenth Century, edited by Stanley L. Engerman and Robert E. Gallman, 207-44. New York: Cambridge University Press, 2000b.

McCusker, John J. and Russell R. Menard. The Economy of British America: 1607-1789. Chapel Hill: University of North Carolina Press, 1985.

Montgomery, Edward. “Evidence on Metropolitan Wage Differences across Industries and over Time.” Journal of Urban Economics 31 (1992): 69-83.

Morgan, Edmund S. “The Labor Problem at Jamestown, 1607-18.” American Historical Review 76 (1971): 595-611.

Murnane, Richard J., John B. Willett and Frank Levy. “The Growing Importance of Cognitive Skills in Wage Determination.” Review of Economics and Statistics 77 (1995): 251-66.

Nelson, Daniel. Managers and Workers: Origins of the New Factory System in the United States, 1880-1920. Madison: University of Wisconsin Press, 1975.

O’Rourke, Kevin H. and Jeffrey G. Williamson. Globalization and History: The Evolution of a Nineteenth-Century Atlantic Economy. Cambridge, MA: MIT Press, 1999.

Owen, Laura. “History of Labor Turnover in the U.S.” EH.Net Encyclopedia, edited by Robert Whaples. April 30, 2004. URL http://www.eh.net/encyclopedia/articles/owen.turnover.

Piketty, Thomas and Emmanuel Saez. “Income Inequality in the United States, 1913-1998.” Quarterly Journal of Economics 118 (2003): 1-39.

Plotnick, Robert D. et al. “The Twentieth-Century Record of Inequality and Poverty in the United States.” In The Cambridge Economic History of the United States, Volume 3: The Twentieth Century, edited by Stanley L. Engerman and Robert Gallman. New York: Cambridge University Press, 2000.

Romer, Christina. “New Estimates of Prewar Gross National Product and Unemployment.” Journal of Economic History 46, no. 2 (1986a): 341-52.

Romer, Christina. “Spurious Volatility in Historical Unemployment Data.” Journal of Political Economy 94 (1986b): 1-37.

Rosenbloom, Joshua L. “Was There a National Labor Market at the End of the Nineteenth Century? New Evidence on Earnings in Manufacturing.” Journal of Economic History 56, no. 3 (1996): 626-56.

Rosenbloom, Joshua L. Looking for Work, Searching for Workers: American Labor Markets during Industrialization. New York: Cambridge University Press, 2002.

Sahling, Leonard G. and Sharon P. Smith. “Regional Wage Differentials: Has the South Risen Again?” Review of Economics and Statistics 65 (1983): 131-35.

Slichter, Sumner H. “The Current Labor Policies of American Industries.” Quarterly Journal of Economics 43 (1929): 393-435.

Smith, James P. and Finis R. Welch. “Black Economic Progress after Myrdal.” Journal of Economic Literature 27 (1989): 519-64.

Steckel, Richard. “A History of the Standard of Living in the United States.” EH.Net Encyclopedia, edited by Robert Whaples. July 22, 2002. URL http://eh.net/encyclopedia/article/steckel.standard.living.us.

Sundstrom, William A. and Joshua L. Rosenbloom. “Occupational Differences in the Dispersion of Wages and Working Hours: Labor Market Integration in the United States, 1890-1903.” Explorations in Economic History 30 (1993): 379-408.

Ward, David. Cities and Immigrants: A Geography of Change in Nineteenth-Century America. New York: Oxford University Press, 1971.

Ware, Caroline F. The Early New England Cotton Manufacture: A Study in Industrial Beginnings. Boston: Houghton Mifflin, 1931.

Weiss, Thomas. “Revised Estimates of the United States Workforce, 1800-1860.” In Long Term Factors in American Economic Growth, edited by Stanley L. Engerman and Robert E. Gallman, 641-78. Chicago: University of Chicago, 1986.

Whaples, Robert. “Child Labor in the United States.” EH.Net Encyclopedia, edited by Robert Whaples. October 8, 2005. URL http://eh.net/encyclopedia/article/whaples.childlabor.

Williamson, Jeffrey G. “The Evolution of Global Labor Markets since 1830: Background Evidence and Hypotheses.” Explorations in Economic History 32 (1995): 141-96.

Williamson, Jeffrey G. and Peter H. Lindert. American Inequality: A Macroeconomic History. New York: Academic Press, 1980.

Wright, Gavin. Old South, New South: Revolutions in the Southern Economy since the Civil War. New York: Basic Books, 1986.

Wright, Gavin. “Postbellum Southern Labor Markets.” In Quantity and Quiddity: Essays in U.S. Economic History, edited by Peter Kilby. Middletown, CT: Wesleyan University Press, 1987.

Wright, Gavin. “American Agriculture and the Labor Market: What Happened to Proletarianization?” Agricultural History 62 (1988): 182-209.

Citation: Rosenbloom, Joshua. “The History of American Labor Market Institutions and Outcomes”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/the-history-of-american-labor-market-institutions-and-outcomes/

A Brief Economic History of Modern Israel

Nadav Halevi, Hebrew University

The Pre-state Background

The history of modern Israel begins in the 1880s, when the first Zionist immigrants came to Palestine, then under Ottoman rule, to join the small existing Jewish community, establishing agricultural settlements and some industry, restoring Hebrew as the spoken national language, and creating new economic and social institutions. The ravages of World War I reduced the Jewish population by a third, to 56,000, about what it had been at the beginning of the century.

As a result of the war, Palestine came under the control of Great Britain, whose Balfour Declaration had called for a Jewish National Home in Palestine. Britain’s control was formalized in 1920, when it was given the Mandate for Palestine by the League of Nations. During the Mandatory period, which lasted until May 1948, the social, political and economic structure for the future state of Israel was developed. Though the government of Palestine had a single economic policy, the Jewish and Arab economies developed separately, with relatively little connection.

Two factors were instrumental in fostering rapid economic growth of the Jewish sector: immigration and capital inflows. The Jewish population increased mainly through immigration; by the end of 1947 it had reached 630,000, about 35 percent of the total population. Immigrants came in waves, particularly large in the mid 1920s and mid 1930s. They consisted of ideological Zionists and refugees, economic and political, from Central and Eastern Europe. Capital inflows included public funds, collected by Zionist institutions, but were for the most part private funds. National product grew rapidly during periods of large immigration, but both waves of mass immigration were followed by recessions, periods of adjustment and consolidation.

In the period from 1922 to 1947 real net domestic product (NDP) of the Jewish sector grew at an average rate of 13.2 percent, and in 1947 accounted for 54 percent of the NDP of the Jewish and Arab economies together. NDP per capita in the Jewish sector grew at a rate of 4.8 percent; by the end of the period it was 8.5 times larger than in 1922, and 2.5 times larger than in the Arab sector (Metzer, 1998). Though agricultural development – an ideological objective – was substantial, this sector never accounted for more than 15 percent of total net domestic product of the Jewish economy. Manufacturing grew slowly for most of the period, but very rapidly during World War II, when Palestine was cut off from foreign competition and was a major provider to the British armed forces in the Middle East. By the end of the period, manufacturing accounted for a quarter of NDP. Housing construction, though a smaller component of NDP, was the most volatile sector, and contributed to sharp business cycle movements. A salient feature of the Jewish economy during the Mandatory period, which carried over into later periods, was the dominant size of the services sector – more than half of total NDP. This included a relatively modern educational and health sector, efficient financial and business sectors, and semi-governmental Jewish institutions, which later were ready to take on governmental duties.

The Formative Years: 1948-1965

The state of Israel came into being in mid-May 1948, in the midst of a war with its Arab neighbors. The immediate economic problems were formidable: to finance and wage a war, to take in as many immigrants as possible (first the refugees kept in camps in Europe and on Cyprus), to provide basic commodities to the old and new population, and to create a government bureaucracy to cope with all these challenges. The creation of a government went relatively smoothly, as semi-governmental Jewish institutions which had developed during the Mandatory period now became government departments.

Cease-fire agreements were signed during 1949. By the end of that year a total of 340,000 immigrants had arrived, and by the end of 1951 an additional 345,000 (the latter including immigrants from Arab countries), thus doubling the Jewish population. Immediate needs were met by a strict austerity program and inflationary government finance, repressed by price controls and rationing of basic commodities. However, the problems of providing housing and employment for the new population were solved only gradually. A New Economic Policy was introduced in early 1952. It consisted of exchange rate devaluation, the gradual relaxation of price controls and rationing, and curbing of monetary expansion, primarily by budgetary restraint. Active immigration encouragement was curtailed, to await the absorption of the earlier mass immigration.

From 1950 until 1965, Israel achieved a high rate of growth: Real GNP (gross national product) grew by an average annual rate of over 11 percent, and per capita GNP by greater than 6 percent. What made this possible? Israel was fortunate in receiving large sums of capital inflows: U.S. aid in the forms of unilateral transfers and loans, German reparations and restitutions to individuals, sale of State of Israel Bonds abroad, and unilateral transfers to public institutions, mainly the Jewish Agency, which retained responsibility for immigration absorption and agricultural settlement. Thus, Israel had resources available for domestic use – for public and private consumption and investment – about 25 percent more than its own GNP. This made possible a massive investment program, mainly financed through a special government budget. Both the enormity of needs and the socialist philosophy of the main political party in the government coalitions led to extreme government intervention in the economy.

Governmental budgets and strong protectionist measures to foster import-substitution enabled the development of new industries, chief among them textiles, and subsidies were given to help the development of exports, additional to the traditional exports of citrus products and cut diamonds.

During the four decades from the mid 1960s until the present, Israel’s economy developed and changed, as did economic policy. A major factor affecting these developments has been the Arab-Israeli conflict. Its influence is discussed first, and is followed by brief descriptions of economic growth and fluctuations, and evolution of economic policy.

The Arab-Israel Conflict

The most dramatic event of the 1960s was the Six Day War of 1967, at the end of which Israel controlled the West Bank (of the Jordan River) – the area of Palestine held by Jordan since 1949 – and the Gaza Strip, controlled until then by Egypt.

As a consequence of the occupation of these territories Israel was responsible for the economic as well as the political life in the areas taken over. The Arab sections of Jerusalem were united with the Jewish section. Jewish settlements were established in parts of the occupied territories. As hostilities intensified, special investments in infrastructure were made to protect Jewish settlers. The allocation of resources to Jewish settlements in the occupied territories has been a political and economic issue ever since.

The economies of Israel and the occupied territories were partially integrated. Trade in goods and services developed, with restrictions placed on exports to Israel of products deemed too competitive, and Palestinian workers were employed in Israel, particularly in construction and agriculture. At its peak, in 1996, Palestinian employment in Israel reached 115,000 to 120,000, about 40 percent of the Palestinian labor force, but never more than 6.5 percent of total Israeli employment. Thus, while employment in Israel was a major contributor to the economy of the Palestinians, its effects on the Israeli economy, except for the sectors of construction and agriculture, were not large.

The Palestinian economy developed rapidly – real per capita national income grew at an annual rate of close to 20 percent in 1969-1972 and 5 percent in 1973-1980 – but fluctuated widely thereafter, and actually decreased in times of hostilities. Palestinian per capita income equaled 10.2 percent of Israeli per capita income in 1968, 22.8 percent in 1986, and declined to 9.7 percent in 1998 (Kleiman, 2003).

As part of the peace process between Israel and the Palestinians initiated in the 1990s, an economic agreement was signed between the parties in 1994, which in effect transformed what had been essentially a one-sided customs agreement (which gave Israel full freedom to export to the Territories but put restrictions on Palestinian exports to Israel) into a more equal customs union: the uniform external trade policy was actually Israel’s, but the Palestinians were given limited sovereignty regarding imports of certain commodities.

Arab uprisings (intifadas), in the 1980s, and especially the more violent one beginning in 2000 and continuing into 2005, led to severe Israeli restrictions on interaction between the two economies, particularly employment of Palestinians in Israel, and even to military reoccupation of some areas given over earlier to Palestinian control. These measures set the Palestinian economy back many years, wiping out much of the gains in income which had been achieved since 1967 – per capita GNP in 2004 was $932, compared to about $1500 in 1999. Palestinian workers in Israel were replaced by foreign workers.

An important economic implication of the Arab-Israel conflict is that Israel must allocate a major part of its budget to defense. The size of the defense budget has varied, rising during wars and armed hostilities. The total defense burden (including expenses not in the budget) reached its maximum relative size during and after the Yom Kippur War of 1973, close to 30 percent of GNP in 1974-1978. In the 2000-2004 period, the defense budget alone reached about 22 to 25 percent of GDP. Israel has been fortunate in receiving generous amounts of U.S. aid. Until 1972 most of this came in the form of grants and loans, primarily for purchases of U.S. agricultural surpluses. But since 1973 U.S. aid has been closely connected to Israel’s defense needs. During 1973-1982 annual loans and grants averaged $1.9 billion, and covered some 60 percent of total defense imports. But even in more tranquil periods, the defense burden, exclusive of U.S. aid, has been much larger than usual in industrial countries during peace time.

Growth and Economic Fluctuations

The high rates of growth of income and income per capita which characterized Israel until 1973 were not achieved thereafter. GDP growth fluctuated, generally between 2 and 5 percent, reaching as high as 7.5 percent in 2000, but falling below zero in the recession years from 2001 to mid 2003. By the end of the twentieth century income per capita reached about $20,000, similar to many of the more developed industrialized countries.

Economic fluctuations in Israel have usually been associated with waves of immigration: a large flow of immigrants that abruptly increases the population requires an adjustment period until it is absorbed productively, while the investments made to absorb it into employment and housing stimulate economic activity. Immigration never again reached the relative size of the first years after statehood, but again gained importance with the loosening of restrictions on emigration from the Soviet Union. The total number of immigrants in 1972-1982 was 325,000, and after the collapse of the Soviet Union immigration totaled 1,050,000 in 1990-1999, mostly from the former Soviet Union. Unlike the earlier period, these immigrants were gradually absorbed into productive employment (though often not in the same activity as abroad) without resort to make-work projects. By the end of the century the population of Israel passed 6,300,000, with the Jewish population being 78 percent of the total. The immigrants from the former Soviet Union were equal to about one-fifth of the Jewish population and were a significant addition of human capital to the labor force.

As the economy developed, the structure of output changed. Though the service sectors are still relatively large – trade and services contributing 46 percent of the business sector’s product – agriculture has declined in importance, and industry makes up over a quarter of the total. The structure of manufacturing has also changed: both in total production and in exports the share of traditional, low-tech industries has declined, with sophisticated, high-tech products, particularly electronics, achieving primary importance.

Fluctuations in output were marked by periods of inflation and periods of unemployment. After a change in exchange rate policy in the late 1970s (discussed below), an inflationary spiral was unleashed. Hyperinflation rates were reached in the early 1980s, about 400 percent per year by the time a drastic stabilization policy was imposed in 1985. Exchange rate stabilization, budgetary and monetary restraint, and wage and price freezes sharply reduced the rate of inflation to less than 20 percent, and then to about 16 percent in the late 1980s. Very drastic monetary policy, from the late 1990s, finally reduced the inflation to zero by 2005. However, this policy, combined with external factors such as the bursting of the high-tech bubble, recession abroad, and domestic insecurity resulting from the intifada, led to unemployment levels above 10 percent at the beginning of the new century. The economic improvements since the latter half of 2003 have, as yet (February 2005), not significantly reduced the level of unemployment.

Policy Changes

The Israeli economy was initially subject to extensive government controls. Only gradually was the economy converted into a fairly free (though still not completely so) market economy. This process began in the 1960s. In response to a realization by policy makers that government intervention in the economy was excessive, and to the challenge posed by the creation in Europe of a customs union (which gradually progressed into the present European Union), Israel embarked upon a very gradual process of economic liberalization. This appeared first in foreign trade: quantitative restrictions on imports were replaced by tariff protection, which was slowly reduced, and both import-substitution and exports were encouraged by more realistic exchange rates rather than by protection and subsidies. Several partial trade agreements with the European Economic Community (EEC), starting in 1964, culminated in a free trade area agreement (FTA) in industrial goods in 1975, and an FTA agreement with the U.S. came into force in 1985.

By late 1977 a considerable degree of trade liberalization had taken place. In October of that year, Israel moved from a fixed exchange rate system to a floating rate system, and restrictions on capital movements were considerably liberalized. However, there followed a disastrous inflationary spiral which curbed the capital liberalization process. Capital flows were not completely liberalized until the beginning of the new century.

Throughout the 1980s and the 1990s there were additional liberalization measures: in monetary policy, in domestic capital markets, and in various instruments of governmental interference in economic activity. The role of government in the economy was considerably decreased. On the other hand, some governmental economic functions were increased: a national health insurance system was introduced, though private health providers continued to provide health services within the national system. Social welfare payments, such as unemployment benefits, child allowances, old age pensions and minimum income support, were expanded continuously, until they formed a major budgetary expenditure. These transfer payments compensated, to a large extent, for the continuous growth of income inequality, which had moved Israel from among the developed countries with the least income inequality to those with the most. By 2003, 15 percent of the government’s budget went to health services, 15 percent to education, and an additional 20 percent went to transfer payments through the National Insurance Agency.

Beginning in 2003, the Ministry of Finance embarked upon a major effort to decrease welfare payments, induce greater participation in the labor force, privatize enterprises still owned by government, and reduce both the relative size of the government deficit and the government sector itself. These activities are the result of an ideological acceptance by the present policy makers of the concept that a truly free market economy is needed to fit into and compete in the modern world of globalization.

An important economic institution is the Histadrut, a federation of labor unions. What had made this institution unique is that, in addition to normal labor union functions, it encompassed agricultural and other cooperatives, major construction and industrial enterprises, and social welfare institutions, including the main health care provider. During the Mandatory period, and for many years thereafter, the Histadrut was an important factor in economic development and in influencing economic policy. During the 1990s, the Histadrut was divested of many of its non-union activities, and its influence in the economy has greatly declined. The major unions associated with it still have much say in wage and employment issues.

The Challenges Ahead

As it moves into the new century, the Israeli economy has proven prosperous, continuously introducing and applying economic innovation, and capable of dealing with economic fluctuations. However, it faces some serious challenges. Some of these are the same as those faced by most industrial economies: how to reconcile innovation – the switch from traditional activities that are no longer competitive to more sophisticated, skill-intensive products – with the dislocation of labor it involves and the income inequality it intensifies. Like other small economies, Israel has to see how it fits into the new global economy, marked by the two major markets of the EU and the U.S., and the emergence of China as a major economic factor.

Special issues relate to the relations of Israel with its Arab neighbors. First are the financial implications of continuous hostilities and military threats. Clearly, if peace can come to the region, resources can be transferred to more productive uses. Furthermore, foreign investment, so important for Israel’s future growth, is very responsive to political security. Other issues depend on the type of relations established: will there be the free movement of goods and workers between Israel and a Palestinian state? Will relatively free economic relations with other Arab countries lead to a greater integration of Israel in the immediate region, or, as is more likely, will Israel’s trade orientation continue to be directed mainly to the present major industrial countries? If the latter proves true, Israel will have to carefully maneuver between the two giants: the U.S. and the EU.

References and Recommended Reading

Ben-Bassat, Avi, editor. The Israeli Economy, 1985-1998: From Government Intervention to Market Economics. Cambridge, MA: MIT Press, 2002.

Ben-Porath, Yoram, editor. The Israeli Economy: Maturing through Crisis. Cambridge, MA: Harvard University Press, 1986.

Fischer, Stanley, Dani Rodrik and Elias Tuma, editors. The Economics of Middle East Peace. Cambridge, MA: MIT Press, 1993.

Halevi, Nadav and Ruth Klinov-Malul. The Economic Development of Israel. New York: Praeger, 1968.

Kleiman, Ephraim. “Palestinian Economic Viability and Vulnerability.” Paper presented at the UCLA Burkle Conference in Athens, August 2003. (Available at www.international.ucla.edu.)

Metz, Helen Chapin, editor. Israel: A Country Study. Washington: Library of Congress Country Studies, 1986.

Metzer, Jacob. The Divided Economy of Mandatory Palestine. Cambridge: Cambridge University Press, 1998.

Patinkin, Don. The Israel Economy: The First Decade. Jerusalem: Maurice Falk Institute for Economic Research in Israel, 1967.

Razin, Assaf and Efraim Sadka. The Economy of Modern Israel: Malaise and Promise. Chicago: University of Chicago Press, 1993.

World Bank. Developing the Occupied Territories: An Investment in Peace. Washington D.C.: The World Bank, September, 1993.

Citation: Halevi, Nadav. “A Brief Economic History of Modern Israel”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/a-brief-economic-history-of-modern-israel/

The Economic History of Indonesia

Jeroen Touwen, Leiden University, Netherlands

Introduction

In recent decades, Indonesia has been viewed as one of Southeast Asia’s successful, high-performing, newly industrializing economies, following the trail of the Asian tigers (Hong Kong, Singapore, South Korea, and Taiwan) (see Table 1). Although Indonesia’s economy grew with impressive speed during the 1980s and 1990s, it ran into considerable trouble after the financial crisis of 1997, which led to significant political reforms. Today Indonesia’s economy is recovering, but it is difficult to say when all its problems will be solved. Even though Indonesia can still be considered part of the developing world, it has a rich and versatile past, in the economic as well as the cultural and political sense.

Basic Facts

Indonesia is situated in Southeast Asia and consists of a large archipelago of more than 13,000 islands between the Indian Ocean and the Pacific Ocean. The largest islands are Java, Kalimantan (the southern part of the island of Borneo), Sumatra, Sulawesi, and Papua (formerly Irian Jaya, the western part of New Guinea). Indonesia’s total land area measures 1.9 million square kilometers (750,000 square miles). This is three times the area of Texas, almost eight times the area of the United Kingdom, and roughly fifty times the area of the Netherlands. Indonesia has a tropical climate, but since there are large stretches of lowland and numerous mountainous areas, the climate varies from hot and humid to more moderate in the highlands. Apart from fertile land suitable for agriculture, Indonesia is rich in a range of natural resources, from petroleum, natural gas, and coal to metals such as tin, bauxite, nickel, copper, gold, and silver. Indonesia’s population is about 230 million (2002), of which the largest share (roughly 60%) lives on Java.

Table 1

Indonesia’s Gross Domestic Product per Capita

Compared with Several Other Asian Countries (in 1990 dollars)

Year   Indonesia   Philippines   Thailand   Japan
1900     745         1,033          812      1,180
1913     904         1,066          835      1,385
1950     840         1,070          817      1,926
1973   1,504         1,959        1,874     11,439
1990   2,516         2,199        4,645     18,789
2000   3,041         2,385        6,335     20,084

Source: Angus Maddison, The World Economy: A Millennial Perspective, Paris: OECD Development Centre Studies 2001, 206, 214-215. For year 2000: University of Groningen and the Conference Board, GGDC Total Economy Database, 2003, http://www.eco.rug.nl/ggdc.

Important Aspects of Indonesian Economic History

“Missed Opportunities”

Anne Booth has characterized the economic history of Indonesia with the somewhat melancholy phrase “a history of missed opportunities” (Booth 1998). One may compare this with J. Pluvier’s history of Southeast Asia in the twentieth century, which is entitled A Century of Unfulfilled Expectations (Breda 1999). The missed opportunities refer to the fact that despite its rich natural resources and great variety of cultural traditions, the Indonesian economy has been underperforming for large periods of its history. A more cyclical view would lead one to speak of several ‘reversals of fortune.’ Several times the Indonesian economy seemed to promise a continuation of favorable economic development and ongoing modernization (for example, Java in the late nineteenth century, Indonesia in the late 1930s or in the early 1990s). But for various reasons Indonesia time and again suffered severe setbacks that prevented further expansion. These setbacks often originated in the internal institutional or political sphere (either after independence or in colonial times), although external influences, such as the Depression of the 1930s, also took their toll on the vulnerable export economy.

“Unity in Diversity”

In addition, one often reads about “unity in diversity.” This is not only a political slogan repeated at various times by the Indonesian government itself; it can also be applied to the heterogeneity of this very large and diverse country. Logically, the political problems that arise from such a heterogeneous nation state have had their (negative) effects on the development of the national economy. The most striking contrast is that between densely populated Java, which has a long tradition of dominating the sparsely populated Outer Islands politically and economically, and those Outer Islands themselves. But within Java and within the various Outer Islands as well, one encounters a rich cultural diversity. Economic differences between the islands persist. Nevertheless, for centuries, flourishing and enterprising interregional trade has promoted regional integration within the archipelago.

Economic Development and State Formation

State formation can be viewed as a condition for an emerging national economy. This process essentially started in Indonesia in the nineteenth century, when the Dutch colonized an area largely similar to present-day Indonesia. Colonial Indonesia was called ‘the Netherlands Indies.’ The term ‘(Dutch) East Indies’ was mainly used in the seventeenth and eighteenth centuries and included trading posts outside the Indonesian archipelago.

Although Indonesian national historiography sometimes refers to a presumed 350 years of colonial domination, it is an exaggeration to interpret the arrival of the Dutch in Bantam in 1596 as the starting point of Dutch colonization. It is more reasonable to say that colonization started in 1830, when the Java War (1825-1830) had ended and the Dutch initiated a bureaucratic, centralizing polity in Java without further restraint. From the mid-nineteenth century onward, Dutch colonization did shape the borders of the Indonesian nation state, even though it also built weaknesses into the state: ethnic segmentation of economic roles, unequal spatial distribution of power, and a political system largely based on oppression and violence. This, among other things, repeatedly led to political trouble, before and after independence. Indonesia ceased being a colony on 17 August 1945, when Sukarno and Hatta proclaimed independence, although full independence was acknowledged by the Netherlands only after four years of violent conflict, on 27 December 1949.

The Evolution of Methodological Approaches to Indonesian Economic History

The economic history of Indonesia covers a range of topics, from the dynamic exports of raw materials and the dualist economy in which both Western and Indonesian entrepreneurs participated to the strong degree of regional variation in the economy. While in the past Dutch historians traditionally focused on the colonial era (inspired by the rich colonial archives), from the 1960s and 1970s onward an increasing number of scholars (including many Indonesian, Australian, and American scholars) started to study post-war Indonesian events in connection with the colonial past. In the course of the 1990s attention gradually shifted from the identification and exploration of new research themes towards synthesis and attempts to link economic development with broader historical issues. In 1998 the excellent first book-length survey of Indonesia’s modern economic history was published (Booth 1998). The stress on synthesis and lessons is also present in a newer textbook on the modern economic history of Indonesia (Dick et al. 2002). This highly recommended textbook juxtaposes three themes: globalization, economic integration, and state formation. Globalization affected the Indonesian archipelago even before the arrival of the Dutch; the period of the centralized, military-bureaucratic state of Soeharto’s New Order (1966-1998) was only the most recent wave of it. A national economy emerged gradually from the 1930s as the Outer Islands (a collective name for all islands outside Java and Madura) reoriented towards industrializing Java.

Two research traditions have become especially important in the study of Indonesian economic history during the past decade. One is a highly quantitative approach, culminating in reconstructions of Indonesia’s national income and national accounts over a long period of time, from the late nineteenth century up to today (Van der Eng 1992, 2001). The other research tradition highlights the institutional framework of economic development in Indonesia, both as a colonial legacy and as it has evolved since independence. There is a growing appreciation among scholars that these two approaches complement each other.

A Chronological Survey of Indonesian Economic History

The precolonial economy

There were several influential kingdoms in the Indonesian archipelago during the pre-colonial era (e.g. Srivijaya, Mataram, Majapahit) (see further Reid 1988, 1993; Ricklefs 1993). Much debate centers on whether this heyday of indigenous Asian trade was effectively disrupted by the arrival of western traders in the late fifteenth century.

Sixteenth and seventeenth century

Current research on pre-colonial economic history focuses on the dynamics of early-modern trade and pays specific attention to the role of different ethnic groups, such as the Arabs, the Chinese, and the various indigenous groups of traders and entrepreneurs. From the sixteenth to the nineteenth century, the western colonizers had only a weak grip on a limited number of spots in the Indonesian archipelago. As a consequence, much of the economic history of these islands escapes the attention of the economic historian. Most data on economic matters were handed down by western observers with a limited view. A large part of the area remained engaged in its own economic activities, including subsistence agriculture (of which the results were not necessarily very meager) and local and regional trade.

An older research literature has extensively covered the role of the Dutch in the Indonesian archipelago, which began in 1596 when the first expedition of Dutch sailing ships arrived in Bantam. In the seventeenth and eighteenth centuries the Dutch overseas trade in the Far East, which focused on high-value goods, was in the hands of the powerful Dutch East India Company (in full: the United East Indies Trading Company, or Vereenigde Oost-Indische Compagnie [VOC], 1602-1795). However, the region was still fragmented and Dutch presence was only concentrated in a limited number of trading posts.

During the eighteenth century, coffee and sugar became the most important products and Java became the most important area. The VOC gradually took over power from the Javanese rulers and held a firm grip on the productive parts of Java. The VOC was also actively engaged in the intra-Asian trade. For example, cotton from Bengal was sold in the pepper growing areas. The VOC was a successful enterprise and made large dividend payments to its shareholders. Corruption, lack of investment capital, and increasing competition from England led to its demise and in 1799 the VOC came to an end (Gaastra 2002, Jacobs 2000).

The nineteenth century

In the nineteenth century a process of more intensive colonization started, predominantly in Java, where the Cultivation System (1830-1870) was based (Elson 1994; Fasseur 1975).

During the Napoleonic era the VOC trading posts in the archipelago had been under British rule, but in 1814 they came under Dutch authority again. During the Java War (1825-1830), Dutch rule on Java was challenged by an uprising led by Javanese prince Diponegoro. To repress this revolt and establish firm rule in Java, colonial expenses increased, which in turn led to a stronger emphasis on economic exploitation of the colony. The Cultivation System, initiated by Johannes van den Bosch, was a state-governed system for the production of agricultural products such as sugar and coffee. In return for a fixed compensation (planting wage), the Javanese were forced to cultivate export crops. Supervisors, such as civil servants and Javanese district heads, were paid generous ‘cultivation percentages’ in order to stimulate production. The exports of the products were consigned to a Dutch state-owned trading firm (the Nederlandsche Handel-Maatschappij, NHM, established in 1824) and sold profitably abroad.

Although the profits (‘batig slot’) for the Dutch state of the period 1830-1870 were considerable, various reasons can be mentioned for the change to a liberal system: (a) the emergence of new liberal political ideology; (b) the gradual demise of the Cultivation System during the 1840s and 1850s because internal reforms were necessary; and (c) growth of private (European) entrepreneurship with know-how and interest in the exploitation of natural resources, which took away the need for government management (Van Zanden and Van Riel 2000: 226).

Table 2

Financial Results of Government Cultivation, 1840-1849 (‘Cultivation System’) (in thousands of guilders in current values)

                     1840-1844   1845-1849
Coffee                 40,278      24,549
Sugar                   8,218       4,136
Indigo                  7,836       7,726
Pepper, tea               647       1,725
Total net profits      39,341      35,057

Source: Fasseur 1975: 20.

Table 3

Estimates of Total Profits (‘batig slot’) during the Cultivation System,

1831/40 – 1861/70 (in millions of guilders)

                                             1831/40   1841/50   1851/60   1861/70
Gross revenues of sale of colonial products    227.0     473.9     652.7     641.8
Costs of transport etc. (NHM)                   88.0     165.4     138.7     114.7
Sum of expenses                                 59.2     175.1     275.3     276.6
Total net profits*                             150.6     215.6     289.4     276.7

Source: Van Zanden and Van Riel 2000: 223.

* Recalculated by Van Zanden and Van Riel to include subsidies for the NHM and other costs that in fact benefited the Dutch economy.

The heyday of the colonial export economy (1900-1942)

After 1870, private enterprise was promoted but the exports of raw materials gained decisive momentum after 1900. Sugar, coffee, pepper and tobacco, the old export products, were increasingly supplemented with highly profitable exports of petroleum, rubber, copra, palm oil and fibers. The Outer Islands supplied an increasing share in these foreign exports, which were accompanied by an intensifying internal trade within the archipelago and generated an increasing flow of foreign imports. Agricultural exports were cultivated both in large-scale European agricultural plantations (usually called agricultural estates) and by indigenous smallholders. When the exploitation of oil became profitable in the late nineteenth century, petroleum earned a respectable position in the total export package. In the early twentieth century, the production of oil was increasingly concentrated in the hands of the Koninklijke/Shell Group.


Figure 1

Foreign Exports from the Netherlands-Indies, 1870-1940

(in millions of guilders, current values)

Source: Trade statistics

The momentum of profitable exports led to a broad expansion of economic activity in the Indonesian archipelago. Integration with the world market also fostered internal economic integration as the road system, the railroad system (in Java and Sumatra) and the port system were improved. In shipping, an important contribution was made by the KPM (Koninklijke Paketvaart-Maatschappij, Royal Packet Boat Company), which served economic integration as well as imperialist expansion. Subsidized shipping lines into remote corners of the vast archipelago carried off export goods (forest products), supplied import goods, and transported civil servants and military personnel.

The Depression of the 1930s hit the export economy severely. The sugar industry in Java collapsed and never really recovered from the crisis. For some products, such as rubber and copra, production was stepped up to compensate for lower prices; for this reason, indigenous rubber producers evaded the international restriction agreements. The Depression precipitated the introduction of protectionist measures, which ended the liberal period that had started in 1870. Various import restrictions were launched, making the economy more self-sufficient (for example, in the production of rice) and stimulating domestic integration. Due to the strong Dutch guilder (the Netherlands adhered to the gold standard until 1936), economic recovery took relatively long. The outbreak of World War II disrupted international trade, and the Japanese occupation (1942-1945) seriously disturbed and dislocated the economic order.

Table 4

Annual Average Growth in Economic Key Aggregates 1830-1990

Period                            GDP per capita   Export volume   Export prices   Government expenditure
Cultivation System 1830-1840        n.a.               13.5             5.0              8.5
Cultivation System 1840-1848        n.a.                1.5            -4.5             [very low]
Cultivation System 1849-1873        n.a.                1.5             1.5              2.6
Liberal Period 1874-1900           [very low]           3.1            -1.9              2.3
Ethical Period 1901-1928             1.7                5.8            17.4              4.1
Great Depression 1929-1934          -3.4               -3.9           -19.7              0.4
Prewar Recovery 1934-1940            2.5                2.2             7.8              3.4
Old Order 1950-1965                  1.0                0.8            -2.1              1.8
New Order 1966-1990                  4.4                5.4            11.6             10.6

Source: Booth 1998: 18.

Note: These average annual growth percentages were calculated by Booth by fitting an exponential curve to the data for the years indicated. Up to 1873 data refer only to Java.

The post-1945 period

After independence, the Indonesian economy had to recover from the hardships of the Japanese occupation and the war for independence (1945-1949), on top of the slow recovery from the 1930s Depression. During the period 1949-1965 there was little economic growth, and what growth there was occurred predominantly in the years 1950 to 1957. In 1958-1965 growth rates dwindled, largely due to political instability and inappropriate economic policy measures. The hesitant start of democracy was characterized by a power struggle between the president, the army, the communist party and other political groups. Exchange rate problems and the absence of foreign capital were detrimental to economic development after the government eliminated all foreign economic control in the private sector in 1957/58. Sukarno aimed at self-sufficiency and import substitution and estranged the suppliers of western capital even further when he developed communist sympathies.

After 1966, the second president, General Soeharto, restored the inflow of western capital, brought back political stability with a strong role for the army, and led Indonesia into a period of economic expansion under his authoritarian New Order (Orde Baru) regime, which lasted until 1997 (see below for the three phases of the New Order). In this period industrial output quickly increased, including steel, aluminum and cement, but also products such as food, textiles and cigarettes. From the 1970s onward the increased oil price on the world market provided Indonesia with massive income from oil and gas exports. Wood exports shifted from logs to plywood, pulp and paper, at the cost of large stretches of environmentally valuable rainforest.

Soeharto managed to apply part of these revenues to the development of technologically advanced manufacturing industry. Referring to this period of stable economic growth, the World Bank Report of 1993 speaks of an ‘East Asian Miracle’ emphasizing the macroeconomic stability and the investments in human capital (World Bank 1993: vi).

The financial crisis of 1997 revealed a number of hidden weaknesses in the economy, such as a feeble financial system (with a lack of transparency), unprofitable investments in real estate, and shortcomings in the legal system. The burgeoning corruption at all levels of the government bureaucracy became widely known as KKN (korupsi, kolusi, nepotisme). These practices were characteristic of the late years of the strongly centralized, autocratic Soeharto regime, by then 32 years old.

From 1998 until present

Today, the Indonesian economy still suffers from severe economic development problems following the financial crisis of 1997 and the subsequent political reforms after Soeharto stepped down in 1998. Secessionist movements and the low level of security in the provincial regions, as well as relatively unstable political policies, are among its present-day problems. Additional problems include the lack of reliable legal recourse in contract disputes, corruption, weaknesses in the banking system, and strained relations with the International Monetary Fund. The confidence of investors remains low, and in order to achieve future growth, internal reform will be essential to rebuild the confidence of international donors and investors.

An important issue on the reform agenda is regional autonomy, bringing a larger share of export profits to the areas of production instead of to metropolitan Java. However, decentralization policies do not necessarily improve national coherence or increase efficiency in governance.

A strong comeback in the global economy may be at hand, but has not as yet fully taken place by the summer of 2003 when this was written.

Additional Themes in the Indonesian Historiography

Indonesia is such a large and multi-faceted country that many different aspects have been the focus of research (for example, ethnic groups, trade networks, shipping, colonialism and imperialism). One can focus on smaller regions (provinces, islands), as well as on larger regions (the western archipelago, the eastern archipelago, the Outer Islands as a whole, or Indonesia within Southeast Asia). Without trying to be exhaustive, eleven themes which have been the subject of debate in Indonesian economic history are examined here (on other debates see also Houben 2002: 53-55; Lindblad 2002b: 145-152; Dick 2002: 191-193; Thee 2002: 242-243).

The indigenous economy and the dualist economy

Although western entrepreneurs had an advantage in technological know-how and the supply of investment capital during the late-colonial period, many regions of Indonesia had a traditionally strong and dynamic class of indigenous entrepreneurs (traders and peasants). Resilient in times of economic malaise and cunning in symbiosis with traders of other Asian nationalities (particularly Chinese), the Indonesian entrepreneur has been rehabilitated after the relatively disparaging manner in which he was often pictured in the pre-1945 literature. One of these early writers, J.H. Boeke, initiated a school of thought centered on the idea of ‘economic dualism’ (referring to a modern western sector and a stagnant eastern sector). As a consequence, the term ‘dualism’ was often used to indicate western superiority. From the 1960s onward such ideas have been replaced by a more objective analysis of the dualist economy, one less judgmental about the characteristics of economic development in the Asian sector. Some scholars focused on technological dualism (such as B. Higgins), others on ethnic specialization in different branches of production (see also Lindblad 2002b: 148; Touwen 2001: 316-317).

The characteristics of Dutch imperialism

Another vigorous debate concerns the character of, and the motives for, Dutch colonial expansion. Dutch imperialism can be viewed as a rather complex mix of political, economic and military motives, which influenced decisions about colonial borders, the establishment of political control in order to exploit oil and other natural resources, and the prevention of local uprisings. Three imperialist phases can be distinguished (Lindblad 2002a: 95-99). The first phase of imperialist expansion lasted from 1825 to 1870; during this phase interference with economic matters outside Java increased slowly, but military intervention was only occasional. The second phase started with the outbreak of the Aceh War in 1873 and lasted until 1896; during this phase, initiatives in trade and foreign investment taken by the colonial government and by private businessmen were accompanied by an extension of colonial (military) control in the regions concerned. The third and final phase, characterized by full-scale aggressive imperialism (often known as ‘pacification’), lasted from 1896 until 1907.

The impact of the cultivation system on the indigenous economy

The thesis of ‘agricultural involution’ was advocated by Clifford Geertz (1963) and states that a process of stagnation characterized the rural economy of Java in the nineteenth century. After extensive research, this view has generally been discarded. Colonial economic growth was stimulated first by the Cultivation System, later by the promotion of private enterprise. Non-farm employment and purchasing power increased in the indigenous economy, although there was much regional inequality (Lindblad 2002a: 80; 2002b:149-150).

Regional diversity in export-led economic expansion

The contrast between densely populated Java, which was long economically and politically dominant, and the Outer Islands, a large, sparsely populated area, is obvious. Among the Outer Islands we can distinguish between areas that were propelled forward by export trade, of either Indonesian or European origin (examples are Palembang, East Sumatra, and Southeast Kalimantan), and areas that stayed behind and only slowly reaped the fruits of the modernization taking place elsewhere (for example Benkulu, Timor, and Maluku) (Touwen 2001).

The development of the colonial state and the role of Ethical Policy

Well into the second half of the nineteenth century, the official Dutch policy was to abstain from interference in local affairs; the scarce resources of the Dutch colonial administrators were to be reserved for Java. When the Aceh War initiated a period of imperialist expansion and consolidation of colonial power, a call for more concern with indigenous affairs was heard in Dutch politics. This resulted in the official Ethical Policy, launched in 1901, which had the threefold aim of improving indigenous welfare, expanding the educational system, and allowing for some indigenous participation in government (resulting in the People’s Council (Volksraad), installed in 1918 but with only an advisory role). The results of the Ethical Policy, as measured for example by improvements in agricultural technology, education, or welfare services, are still subject to debate (Lindblad 2002b: 149).

Living conditions of coolies at the agricultural estates

The plantation economy, which developed in the sparsely populated Outer Islands (predominantly in Sumatra) between 1870 and 1942, was in dire need of labor. The labor shortage was solved by recruiting contract laborers (coolies) in China, and later in Java. The Coolie Ordinance was a government regulation that included a penal clause allowing plantation owners to punish their laborers. In response to reported abuse, the colonial government established the Labor Inspectorate (1908), which aimed at preventing the abuse of coolies on the estates. The living circumstances and treatment of the coolies have been the subject of debate, particularly regarding whether the government put enough effort into protecting the interests of the workers or allowed abuse to persist (Lindblad 2002b: 150).

Colonial drain

How large a proportion of economic profits was drained away from the colony to the mother country? The detrimental effects of this drain of capital, for which the colony received European entrepreneurial initiative in return, have been debated, as have the exact methods of measuring it. There was also a second drain to the home countries of other immigrant ethnic groups, mainly to China (Van der Eng 1998; Lindblad 2002b: 151).

The position of the Chinese in the Indonesian economy

In the colonial economy, the Chinese intermediary trader, or middleman, played a vital role in supplying credit and stimulating the cultivation of export crops such as rattan, rubber and copra. The colonial legal system made an explicit distinction between Europeans, Chinese and Indonesians. This formed the roots of later ethnic problems, since the Chinese minority in Indonesia has gained an important (and sometimes envied) position as capital owners and entrepreneurs. When threatened by political and social turmoil, Chinese business networks may sometimes have channeled capital to overseas deposits.

Economic chaos during the ‘Old Order’

The ‘Old Order’ period, 1945-1965, was characterized by economic (and political) chaos, although some economic growth undeniably did take place during these years. However, macroeconomic instability, lack of foreign investment and structural rigidity were economic problems closely connected with the political power struggle. Sukarno, the first president of the Indonesian republic, had an outspoken dislike of colonialism, and his efforts to eliminate foreign economic control were not always supportive of the struggling economy of the new sovereign state. The ‘Old Order’ has long been a ‘lost area’ in Indonesian economic history, but the establishment of the unitary state and the settlement of major political issues, including some degree of territorial consolidation (as well as the consolidation of the role of the army), were essential for the development of a national economy (Dick 2002: 190; Mackie 1967).

Development policy and economic planning during the ‘New Order’ period

The ‘New Order’ (Orde Baru) of Soeharto rejected political mobilization and socialist ideology, and established a tightly controlled regime that discouraged intellectual enquiry but did put Indonesia’s economy back on the rails. New flows of foreign investment and foreign aid programs were attracted, unbridled population growth was curbed by family planning programs, and a transformation took place from a predominantly agricultural economy to an industrializing one. Thee Kian Wie distinguishes three phases within this period, each of which deserves further study:

(a) 1966-1973: stabilization, rehabilitation, partial liberalization and economic recovery;

(b) 1974-1982: oil booms, rapid economic growth, and increasing government intervention;

(c) 1983-1996: post-oil boom, deregulation, renewed liberalization (in reaction to falling oil-prices), and rapid export-led growth. During this last phase, commentators (including academic economists) were increasingly concerned about the thriving corruption at all levels of the government bureaucracy: KKN (korupsi, kolusi, nepotisme) practices, as they later became known (Thee 2002: 203-215).

Financial, economic and political crisis: KRISMON, KRISTAL

The financial crisis of 1997 started with a crisis of confidence following the depreciation of the Thai baht in July 1997. Core factors causing the ensuing economic crisis in Indonesia were the quasi-fixed exchange rate of the rupiah, quickly rising short-term foreign debt, and the weak financial system. Its severity, however, must also be attributed to political factors: the monetary crisis (KRISMON) became a total crisis (KRISTAL) because of the failing policy response of the Soeharto regime. Soeharto had been in power for 32 years, and his government had become heavily centralized and corrupt and was unable to cope with the crisis in a credible manner. The origins, economic consequences, and socio-economic impact of the crisis are still under discussion (Thee 2003: 231-237; Arndt and Hill 1999).

(Note: I want to thank Dr. F. Colombijn and Dr. J.Th Lindblad at Leiden University for their useful comments on the draft version of this article.)

Selected Bibliography

In addition to the works cited in the text above, a small selection of recent books is listed here to allow the reader to quickly grasp the most recent insights and find useful further references.

General textbooks or periodicals on Indonesia’s (economic) history:

Booth, Anne. The Indonesian Economy in the Nineteenth and Twentieth Centuries: A History of Missed Opportunities. London: Macmillan, 1998.

Bulletin of Indonesian Economic Studies.

Dick, H.W., V.J.H. Houben, J.Th. Lindblad and Thee Kian Wie. The Emergence of a National Economy in Indonesia, 1800-2000. Sydney: Allen & Unwin, 2002.

Itinerario. “Economic Growth and Institutional Change in Indonesia in the 19th and 20th Centuries” [special issue] 26, no. 3-4 (2002).

Reid, Anthony. Southeast Asia in the Age of Commerce, 1450-1680, Vol. I: The Lands below the Winds. New Haven: Yale University Press, 1988.

Reid, Anthony. Southeast Asia in the Age of Commerce, 1450-1680, Vol. II: Expansion and Crisis. New Haven: Yale University Press, 1993.

Ricklefs, M.C. A History of Modern Indonesia since ca. 1300. Basingstoke/London: Macmillan, 1993.

On the VOC:

Gaastra, F.S. De Geschiedenis van de VOC. Zutphen: Walburg Pers, 1991 (1st edition), 2002 (4th edition).

Jacobs, Els M. Koopman in Azië: de Handel van de Verenigde Oost-Indische Compagnie tijdens de 18de Eeuw. Zutphen: Walburg Pers, 2000.

Nagtegaal, Lucas. Riding the Dutch Tiger: The Dutch East Indies Company and the Northeast Coast of Java 1680-1743. Leiden: KITLV Press, 1996.

On the Cultivation System:

Elson, R.E. Village Java under the Cultivation System, 1830-1870. Sydney: Allen and Unwin, 1994.

Fasseur, C. Kultuurstelsel en Koloniale Baten. De Nederlandse Exploitatie van Java, 1840-1860. Leiden: Universitaire Pers, 1975. (Translated as: The Politics of Colonial Exploitation: Java, the Dutch and the Cultivation System. Ithaca, NY: Southeast Asia Program, Cornell University Press, 1992.)

Geertz, Clifford. Agricultural Involution: The Processes of Ecological Change in Indonesia. Berkeley: University of California Press, 1963.

Houben, V.J.H. “Java in the Nineteenth Century: Consolidation of a Territorial State.” In The Emergence of a National Economy in Indonesia, 1800-2000, edited by H.W. Dick, V.J.H. Houben, J.Th. Lindblad and Thee Kian Wie, 56-81. Sydney: Allen & Unwin, 2002.

On the Late-Colonial Period:

Dick, H.W. “Formation of the Nation-state, 1930s-1966.” In The Emergence of a National Economy in Indonesia, 1800-2000, edited by H.W. Dick, V.J.H. Houben, J.Th. Lindblad and Thee Kian Wie, 153-193. Sydney: Allen & Unwin, 2002.

Lembaran Sejarah. “Crisis and Continuity: Indonesian Economy in the Twentieth Century” [special issue] 3, no. 1 (2000).

Lindblad, J.Th., editor. New Challenges in the Modern Economic History of Indonesia. Leiden: PRIS, 1993. Translated as: Sejarah Ekonomi Modern Indonesia. Berbagai Tantangan Baru. Jakarta: LP3ES, 2002.

Lindblad, J.Th., editor. The Historical Foundations of a National Economy in Indonesia, 1890s-1990s. Amsterdam: North-Holland, 1996.

Lindblad, J.Th. “The Outer Islands in the Nineteenth Century: Contest for the Periphery.” In The Emergence of a National Economy in Indonesia, 1800-2000, edited by H.W. Dick, V.J.H. Houben, J.Th. Lindblad and Thee Kian Wie, 82-110. Sydney: Allen & Unwin, 2002a.

Lindblad, J.Th. “The Late Colonial State and Economic Expansion, 1900-1930s.” In The Emergence of a National Economy in Indonesia, 1800-2000, edited by H.W. Dick, V.J.H. Houben, J.Th. Lindblad and Thee Kian Wie, 111-152. Sydney: Allen & Unwin, 2002b.

Touwen, L.J. Extremes in the Archipelago: Trade and Economic Development in the Outer Islands of Indonesia, 1900-1942. Leiden: KITLV Press, 2001.

Van der Eng, Pierre. “Exploring Exploitation: The Netherlands and Colonial Indonesia, 1870-1940.” Revista de Historia Económica 16 (1998): 291-321.

Zanden, J.L. van, and A. van Riel. Nederland, 1780-1914: Staat, instituties en economische ontwikkeling. Amsterdam: Balans, 2000. (On the Netherlands in the nineteenth century.)

Independent Indonesia:

Arndt, H.W. and Hal Hill, editors. Southeast Asia’s Economic Crisis: Origins, Lessons and the Way forward. Singapore: Institute of Southeast Asian Studies, 1999.

Cribb, R. and C. Brown. Modern Indonesia: A History since 1945. London/New York: Longman, 1995.

Feith, H. The Decline of Constitutional Democracy in Indonesia. Ithaca, New York: Cornell University Press, 1962.

Hill, Hal. The Indonesian Economy. Cambridge: Cambridge University Press, 2000. (This is the extended second edition of Hill, H., The Indonesian Economy since 1966. Southeast Asia’s Emerging Giant. Cambridge: Cambridge University Press, 1996.)

Hill, Hal, editor. Unity and Diversity: Regional Economic Development in Indonesia since 1970. Singapore: Oxford University Press, 1989.

Mackie, J.A.C. “The Indonesian Economy, 1950-1960.” In The Economy of Indonesia: Selected Readings, edited by B. Glassburner, 16-69. Ithaca, NY: Cornell University Press, 1967.

Robison, Richard. Indonesia: The Rise of Capital. Sydney: Allen and Unwin, 1986.

Thee Kian Wie. “The Soeharto Era and After: Stability, Development and Crisis, 1966-2000.” In The Emergence of a National Economy in Indonesia, 1800-2000, edited by H.W. Dick, V.J.H. Houben, J.Th. Lindblad and Thee Kian Wie, 194-243. Sydney: Allen & Unwin, 2002.

World Bank. The East Asian Miracle: Economic Growth and Public Policy. Oxford: World Bank /Oxford University Press, 1993.

On economic growth:

Booth, Anne. The Indonesian Economy in the Nineteenth and Twentieth Centuries: A History of Missed Opportunities. London: Macmillan, 1998.

Van der Eng, Pierre. “The Real Domestic Product of Indonesia, 1880-1989.” Explorations in Economic History 29 (1992): 343-373.

Van der Eng, Pierre. “Indonesia’s Growth Performance in the Twentieth Century.” In The Asian Economies in the Twentieth Century, edited by Angus Maddison, D.S. Prasada Rao and W. Shepherd, 143-179. Cheltenham: Edward Elgar, 2002.

Van der Eng, Pierre. “Indonesia’s Economy and Standard of Living in the Twentieth Century.” In Indonesia Today: Challenges of History, edited by G. Lloyd and S. Smith, 181-199. Singapore: Institute of Southeast Asian Studies, 2001.

Citation: Touwen, Jeroen. “The Economic History of Indonesia”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/the-economic-history-of-indonesia/

Economic History of Hong Kong

Catherine R. Schenk, University of Glasgow

Hong Kong’s economic and political history has been primarily determined by its geographical location. The territory of Hong Kong comprises two main islands (Hong Kong Island and Lantau Island) and a mainland hinterland. It thus forms a natural geographic port for Guangdong province in southeastern China. In a sense, there is considerable continuity in Hong Kong’s position in the international economy: it originated as a commercial entrepot for China’s regional and global trade, and it still plays this role today. From a relatively unpopulated territory at the beginning of the nineteenth century, Hong Kong grew to become one of the most important international financial centers in the world. Hong Kong also underwent a rapid and successful process of industrialization from the 1950s that captured the imagination of economists and historians in the 1980s and 1990s.

Hong Kong from 1842 to 1949

After being ceded by China to the British under the Treaty of Nanking in 1842, the colony of Hong Kong quickly became a regional center for financial and commercial services, based particularly around the Hongkong and Shanghai Bank and merchant companies such as Jardine Matheson. In 1841 there were only 7,500 Chinese inhabitants of Hong Kong and a handful of foreigners, but by 1859 the Chinese community numbered over 85,000, supplemented by about 1,600 foreigners. The economy was closely linked to commercial activity, dominated by shipping, banking and merchant companies. Gradually there was increasing diversification into services and retail outlets to meet the needs of the local population, and also shipbuilding and maintenance linked to the presence of British naval and merchant shipping. There was some industrial expansion in the nineteenth century, notably sugar refining, cement and ice factories in the foreign sector, alongside smaller-scale local workshop manufactures. The mainland territory of Hong Kong came under British rule through two further treaties in this period: Kowloon was ceded in 1860, and the New Territories were leased for 99 years in 1898.

Hong Kong was profoundly affected by the disastrous events in Mainland China in the inter-war period. After the overthrow of the dynastic system in 1911, the Kuomintang (KMT) took a decade to pull together a republican nation-state. The Great Depression and fluctuations in the international price of silver then disrupted China’s economic relations with the rest of the world in the 1930s. From 1937, China descended into the Sino-Japanese War. Two years after the end of World War II, the civil war between the KMT and the Chinese Communist Party pushed China into a downward economic spiral. During this period, Hong Kong suffered from the slowdown in world trade, and in China’s trade in particular. However, problems on the mainland also diverted business and entrepreneurs from Shanghai and other cities to the relative safety and stability of the British colonial port of Hong Kong.

Post-War Industrialization

After the establishment of the People’s Republic of China (PRC) in 1949, the mainland began a process of isolation from the international economy, partly for ideological reasons and partly because of Cold War embargoes on trade imposed first by the United States in 1949 and then by the United Nations in 1951. Nevertheless, Hong Kong was vital to the international economic links that the PRC maintained in order to pursue industrialization and support grain imports. Even during the period of self-sufficiency in the 1960s, Hong Kong’s imports of food and water from the PRC were a vital source of foreign exchange revenue for the mainland, ensuring Hong Kong’s usefulness to it. In turn, cheap food helped to restrain rises in the cost of living in Hong Kong, thus helping to keep wages low during the period of labor-intensive industrialization.

The industrialization of Hong Kong is usually dated from the embargoes of the 1950s. Certainly, Hong Kong’s prosperity could no longer depend on the China trade in this decade. However, as seen above, industry had emerged in the nineteenth century and begun to expand in the interwar period. Nevertheless, industrialization accelerated after 1945 with the inflow of refugees, entrepreneurs and capital fleeing the civil war on the mainland. The most prominent example is the immigrants from Shanghai who created the cotton spinning industry in the colony. Hong Kong’s industry was founded in the textile sector in the 1950s before gradually diversifying in the 1960s into clothing, electronics, plastics and other labor-intensive production, mainly for export.

The economic development of Hong Kong is unusual in a variety of respects. First, industrialization was accompanied by increasing numbers of small and medium-sized enterprises (SME) rather than consolidation. In 1955, 91 percent of manufacturing establishments employed fewer than one hundred workers, a proportion that increased to 96.5 percent by 1975. Factories employing fewer than one hundred workers accounted for 42 percent of Hong Kong’s domestic exports to the U.K. in 1968, amounting to HK$1.2 billion. At the end of 2002, SMEs still amounted to 98 percent of enterprises, providing 60 percent of total private employment.

Second, until the late 1960s, the government did not engage in active industrial planning. This was partly because the government was preoccupied with social spending on housing large flows of immigrants, and partly because of an ideological sympathy for free market forces. Hong Kong therefore fits none of the usual models of Asian economic development, whether based on state-led industrialization (Japan, South Korea, Singapore, Taiwan), domination by foreign firms (Singapore), or large firms with close relations to the state (Japan, South Korea). Low taxes, lax employment laws, absence of government debt, and free trade are all pillars of the Hong Kong experience of economic development.

In fact, of course, the reality was very different from the myth of complete laissez-faire. The government’s programs of public housing, land reclamation, and infrastructure investment were ambitious. New industrial towns were built to house immigrants, provide employment and aid industry. The government subsidized industry indirectly through this public housing, which restrained rises in the cost of living that would have threatened Hong Kong’s labor-cost advantage in manufacturing. The government also pursued an ambitious public education program, creating over 300,000 new primary school places between 1954 and 1961. By 1966, 99.8% of school-age children were attending primary school, although free universal primary school was not provided until 1971. Secondary school provision was expanded in the 1970s, and from 1978 the government offered compulsory free education for all children up to the age of 15. The hand of government was much lighter on international trade and finance. Exchange controls were limited to a few imposed by the U.K., and there were no controls on international flows of capital. Government expenditure even fell from 7.5% of GDP in the 1960s to 6.5% in the 1970s. In the same decades, British government spending as a percent of GDP rose from 17% to 20%.

From the mid-1950s, Hong Kong’s rapid success as a textile and garment exporter generated trade friction that resulted in voluntary export restraints under a series of agreements with the U.K. beginning in 1959. Despite these agreements, Hong Kong’s exporters continued to exploit their flexibility and adaptability to increase production and find new markets. Indeed, exports increased from 54% of GDP in the 1960s to 64% in the 1970s. Figure 1 shows the annual changes in the growth of real GDP per capita. In the period from 1962 until the onset of the oil crisis in 1973, the average growth rate was 6.5% per year. From 1976 to 1996 GDP grew at an average of 5.6% per year. There were negative shocks in 1967-68, as a result of local disturbances at the onset of the Cultural Revolution in the PRC, and again in 1973-75 from the global oil crisis. In the early 1980s there was another negative shock related to politics, as the terms of Hong Kong’s return to PRC control in 1997 were formalized.

Figure 1: Annual percentage change of per capita GDP, 1962-2001

Reintegration with China, 1978-1997

The Open Door Policy of the PRC announced by Deng Xiao-ping at the end of 1978 marked a new era for Hong Kong’s economy. With the newly vigorous engagement of China in international trade and investment, Hong Kong’s integration with the mainland accelerated as it regained its traditional role as that country’s main provider of commercial and financial services. From 1978 to 1997, visible trade between Hong Kong and the PRC grew at an average rate of 28% per annum. At the same time, Hong Kong firms began to move their labor-intensive activities to the mainland to take advantage of cheaper labor. The integration of Hong Kong with the Pearl River delta in Guangdong is the most striking aspect of these trade and investment links. At the end of 1997, the cumulative value of Hong Kong’s direct investment in Guangdong was estimated at US$48 billion, accounting for almost 80% of the total foreign direct investment there. Hong Kong companies and joint ventures in Guangdong province employed about five million people. Most of these businesses were labor-intensive assembly for export, but from 1997 onward there has been increased investment in financial services, tourism and retail trade.

While manufacturing moved out of the colony during the 1980s and 1990s, there was a surge in the service sector. This transformation of the structure of Hong Kong’s economy from manufacturing to services was dramatic. Most remarkably, it was accomplished without faltering overall growth rates, and with an average unemployment rate of only 2.5% from 1982 to 1997. Figure 2 shows that the value of manufacturing peaked in 1992 before beginning an absolute decline. In contrast, the value of commercial and financial services soared. This is reflected in the contribution of services and manufacturing to GDP shown in Figure 3. Employment in the service sector rose from 52% to 80% of the labor force between 1981 and 2000, while manufacturing employment fell from 39% to 10% over the same period.

Figure 2: GDP by economic activity at current prices

Figure 3: Contribution to Hong Kong’s GDP at factor prices

Asian Financial Crisis, 1997-2002

The terms for the return of Hong Kong to Chinese rule in July 1997 carefully protected the territory’s separate economic characteristics, which have been so beneficial to the Chinese economy. Under the Basic Law, a “one country, two systems” policy was formulated which left Hong Kong monetarily and economically separate from the mainland, with exchange and trade controls remaining in place as well as restrictions on the movement of people. Hong Kong was hit hard by the Asian Financial Crisis that struck the region in mid-1997, just at the time of the handover of the colony back to Chinese administrative control. The crisis prompted a collapse in share prices and the property market that affected the ability of many borrowers to repay bank loans. Unlike most Asian countries, the Hong Kong Special Administrative Region and mainland China maintained their currencies’ exchange rates with the U.S. dollar rather than devaluing. Along with the Severe Acute Respiratory Syndrome (SARS) outbreak of 2002-03, the Asian Financial Crisis pushed Hong Kong into a period of recession, with a rise in unemployment (6% on average from 1998 to 2003) and absolute declines in output and prices. The longer-term impact of the crisis has been to increase the intensity and importance of Hong Kong’s trade and investment links with the PRC. Since the PRC did not fare as badly in the regional crisis, Hong Kong’s economic prospects have become tied more closely to the increasingly prosperous mainland.

Suggestions for Further Reading

For a general history of Hong Kong from the nineteenth century, see S. Tsang, A Modern History of Hong Kong, London: I.B. Tauris, 2004. For accounts of Hong Kong’s economic history see D.R. Meyer, Hong Kong as a Global Metropolis, Cambridge: Cambridge University Press, 2000; C.R. Schenk, Hong Kong as an International Financial Centre: Emergence and Development, 1945-65, London: Routledge, 2001; and Y-P Ho, Trade, Industrial Restructuring and Development in Hong Kong, London: Macmillan, 1992. Useful statistics and summaries of recent developments are available on the website of the Hong Kong Monetary Authority, www.info.gov.hk/hkma.

Citation: Schenk, Catherine. “Economic History of Hong Kong”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/economic-history-of-hong-kong/