
The International Natural Rubber Market, 1870-1930

Zephyr Frank, Stanford University and Aldo Musacchio, Ibmec São Paulo

Overview of the Rubber Market, 1870-1930

Natural rubber was first used by the indigenous peoples of the Amazon basin for a variety of purposes. By the middle of the eighteenth century, Europeans had begun to experiment with rubber as a waterproofing agent. In the early nineteenth century, rubber was used to make waterproof shoes (Dean, 1987). The best source of latex, the milky fluid from which natural rubber products were made, was Hevea brasiliensis, which grew predominantly in the Brazilian Amazon (but also in the Amazonian regions of Bolivia and Peru). Thus, by geographical accident, the first period of rubber’s commercial history, from the late 1700s through 1900, was centered in Brazil; the second period, from roughly 1910 on, was increasingly centered in Southeast Asia as the result of plantation development. The first century of rubber was typified by relatively low levels of production, high wages, and very high prices; the period following 1910 was one of rapidly increasing production, low wages, and falling prices.

Uses of Rubber

The early uses of the material were quite limited. The initial problem with natural rubber was its sensitivity to temperature changes, which altered its shape and consistency. In 1839 Charles Goodyear developed the process of vulcanization, which modified rubber so that it could withstand extreme temperatures. It was then that natural rubber became suitable for producing hoses, tires, industrial bands, sheets, shoes, shoe soles, and other products. What initially set off the “Rubber Boom,” however, was the popularization of the bicycle. The boom was then accentuated after 1900 by the development of the automobile industry and the expansion of the tire industry to produce car tires (Weinstein, 1983; Dean, 1987).

Brazil’s Initial Advantage and High-Wage Cost Structure

Until the turn of the twentieth century, Brazil and the countries that share the Amazon basin (i.e., Bolivia, Venezuela, and Peru) were the only exporters of natural rubber. Brazil sold almost ninety percent of the total rubber traded in the world. The fundamental fact that explains Brazil’s entry into and domination of natural rubber production between 1870 and roughly 1913 is that most of the world’s rubber trees grew naturally in the Amazon region of Brazil. The Brazilian rubber industry developed a high-wage cost structure as the result of labor scarcity and lack of competition in the early years of rubber production. Since there were no credit markets to finance the journeys of workers from other parts of Brazil to the Amazon, workers paid for their passage with loans from their future employers. Much like indentured servitude in the colonial United States, these loans were repaid with work once the laborers were established in the Amazon basin. Another factor that increased the costs of producing rubber was that most provisions for tappers in the field had to be shipped in from outside the region at great expense (Barham and Coomes, 1994). This made Brazilian production very expensive compared to the future plantations in Asia. Nevertheless, Brazil’s system of production worked well as long as two conditions were met: first, that the demand for rubber did not grow too quickly, for wild rubber production could not expand rapidly owing to labor and environmental constraints; and second, that no competition based on a more efficient arrangement of the factors of production existed. As can be seen in Figure 1, Brazil dominated the natural rubber market until the first decade of the twentieth century.

Between 1900 and 1913, these conditions ceased to hold. First, the demand for rubber skyrocketed [see Figure 2], providing a huge incentive for other producers to enter the market. Prices had been high before, but Brazilian supply had been quite capable of meeting demand; now, prices were high and demand appeared insatiable. Plantations, which had been possible since the 1880s, now became a reality mainly in the colonies of Southeast Asia. Because Brazil was committed to a high-wage, labor-scarce production regime, it was unable to counter the entry of Asian plantations into the market it had dominated for half a century.

Southeast Asian Plantations Develop a Low-Cost, Labor-Intensive Alternative

In Asia, the British and Dutch drew upon their superior stocks of capital and vast pools of cheap colonial labor to transform rubber collection into a low-cost, labor-intensive industry. Investment per tapper in Brazil was reportedly 337 pounds sterling circa 1910; in the low-cost Asian plantations, investment was estimated at just 210 pounds per worker (Dean, 1987). Not only were Southeast Asian tappers cheaper, they were potentially eighty percent more productive (Dean, 1987).

Ironically, the new plantation system proved equally susceptible to uncertainty and competition. Unexpected sources of uncertainty arose from the technological development of automobile tires. In spite of colonialism, the British and Dutch were unable to collude to control production, and prices plummeted after 1910. When the British did attempt to restrict production in the 1920s, the United States attempted to set up plantations in Brazil and the Dutch were happy to take market share. Yet it was too late for Brazil: the cost structure of the Southeast Asian plantations could not be matched. In a sense, then, the game was no longer worth the candle: in order to compete in rubber production, Brazil would have needed significantly lower wages, which would only have been possible with a vastly expanded transport network and domestic agricultural sector in the hinterland of the Amazon basin. Such an expensive solution made no economic sense in the 1910s and 1920s, when coffee and nascent industrialization in São Paulo offered much more promising prospects.

Natural Rubber Extraction and Commercialization: Brazil

Rubber Tapping in the Amazon Rainforest

One disadvantage Brazilian rubber producers suffered was that the organization of production depended on the distribution of Hevea brasiliensis trees in the forest. The owner (or, often, lease concessionaire) of a large plot of land would hire tappers to gather rubber by gouging the tree trunk with an axe. In Brazil, the usual practice was to make a deep cut in the tree and place a small bowl beneath it to collect the latex that seeped from the trunk. Typically, tappers worked two “rows” of trees, alternating one row per day. Each “row” consisted of several circular trails through the forest containing more than 100 trees each. Rubber could only be collected during the tapping season (August to January), and the living conditions of tappers were hard. As the need for rubber expanded, tappers had to be sent deeper into the Amazon rainforest to look for unexplored land with more productive trees. Tappers established their shacks close to the river because rubber, once smoked, was sent by boat to Manaus (capital of the state of Amazonas) or to Belém (capital of the state of Pará), both entrepôts for rubber exporting to Europe and the United States.[1]

Competition or Exploitation? Tappers and Seringalistas

After collecting the rubber, tappers would return to their shacks and smoke the resin in order to make balls of partially filtered and purified rough rubber that could be sold at the ports. There is much discussion about the commercialization of the product. Weinstein (1983) argues that the seringalista, the employer of the rubber tapper, controlled the transportation of rubber to the ports, where he sold it, often in exchange for goods that could be sold back to the tapper at a large markup. In this economy money was scarce, and the “wages” of tappers, or seringueiros, depended on the current price of rubber; the usual agreement was for tappers to split the gross profits with their patrons. These wages were most commonly paid in goods such as cigarettes, food, and tools. According to Weinstein (1983), the goods were overpriced by the seringalistas to extract larger profits from the seringueiros’ work. Barham and Coomes (1994), on the other hand, argue that the structure of the market in the Amazon was less closed and that independent traders traveled around the basin in small boats, willing to exchange goods for rubber. Poor monitoring by employers and an absent state facilitated these under-the-counter transactions, which allowed tappers to obtain better pay for their work.

Exporting Rubber

From the ports, rubber passed into the hands of mainly Brazilian, British, and American exporters. Contrary to what Weinstein (1983) argued, Brazilian producers or local merchants from the interior could choose to send the rubber on consignment to a New York commission house rather than sell it to an exporter in the Amazon (Shelley, 1918). Rubber was shipped, like other commodities, to ports in Europe and the United States to be distributed to the industries that bought large amounts of the product on the London or New York commodity exchanges. A large part of the rubber produced was traded on these exchanges, but tire manufacturers and other large consumers also made direct purchases from distributors in the country of origin.[2]

Rubber Production in Southeast Asia

Seeds Smuggled from Brazil to Britain

Hevea brasiliensis, the most important type of rubber tree, was an Amazonian species, which is why the countries of the Amazon basin were the main producers of rubber at the beginning of the international rubber trade. How, then, did British and Dutch colonies in Southeast Asia end up dominating the market? Brazil tried to prevent Hevea brasiliensis seeds from being exported, since the Brazilian government understood that its position as the main producer of rubber ensured its profits from the rubber trade. Protecting property rights in seeds proved a futile exercise. In 1876 the Englishman Henry Wickham, an aspiring author and rubber expert, smuggled 70,000 seeds to London, a feat for which he earned Brazil’s eternal opprobrium and an English knighthood. From these seeds, 2,800 plants were raised at the Royal Botanical Gardens in London (Kew Gardens) and then shipped to the Peradeniya Gardens in Ceylon. In 1877 a case of 22 plants reached Singapore and was planted at the Singapore Botanical Garden; the same year the first plant arrived in the Malay States. Since rubber trees needed six to eight years to mature enough to yield good rubber, tapping began in the 1880s.

Scientific Research to Maximize Yields

In order to develop rubber extraction in the Malay States, more scientific intervention was needed. In 1888, H. N. Ridley was appointed director of the Singapore Botanical Garden and began experimenting with tapping methods. The final result of all this experimentation with different tapping methods in Southeast Asia was the discovery of how to extract rubber in such a way that the tree would maintain a high yield for a long period of time. Rather than making a deep gouge in the rubber tree with an axe, as in Brazil, Southeast Asian tappers scraped the trunk of the tree by making a series of overlapping Y-shaped cuts, such that at the bottom there was a channel ending in a collecting receptacle. According to Akers (1912), the tapping techniques in Asia ensured the exploitation of the trees for longer periods, because the Brazilian technique scarred the tree’s bark and lowered yields over time.

Rapid Commercial Development and the Automobile Boom

Commercial planting in the Malay States began in 1895. The development of large-scale plantations was slow at first because of the lack of capital; investors did not become interested in plantations until the prospects for rubber improved radically with the spectacular development of the automobile industry. By 1905, European capitalists were sufficiently interested in investing in large-scale plantations in Southeast Asia to plant some 38,000 acres of trees. Between 1905 and 1911 the annual increase was over 70,000 acres per year, and by the end of 1911 the acreage in the Malay States reached 542,877 (Baxendale, 1913). The expansion of plantations was made possible by the sophisticated organization of these enterprises. Joint-stock companies were created to exploit the land grants, and capital was raised through stock issues on the London Stock Exchange. The high returns of the first years (1906-1910) made investors ever more optimistic, and capital flowed in large amounts. Plantations depended on a very disciplined system of labor and an intensive use of land.

Malaysia’s Advantages over Brazil

In addition to the intensive use of land, the production system in Malaysia had several economic advantages over that of Brazil. First, in the Malay States there was no specific tapping season, unlike in Brazil, where the rains prevented tappers from collecting rubber during six months of the year. Second, health conditions were better on the plantations, where rubber companies typically provided basic medical care and built infirmaries. In Brazil, by contrast, yellow fever and malaria made survival harder for rubber tappers, who were dispersed in the forest without even rudimentary medical attention. Finally, better living conditions and the support of the British and Dutch colonial authorities helped attract Indian labor to the rubber plantations. Japanese and Chinese laborers also migrated to the plantations in Southeast Asia in response to relatively high wages (Baxendale, 1913).

Initially, demand for rubber was associated with specialized industrial components (belts and gaskets, etc.), consumer goods (golf balls, shoe soles, galoshes, etc.), and bicycle tires. Prior to the development of the automobile as a mass-market phenomenon, the Brazilian wild rubber industry was capable of meeting world demand; furthermore, it was impossible for rubber producers to predict the scope and growth of the automobile industry before the 1900s. Thus, as Figure 3 indicates, growth in demand, as measured by U.K. imports, was not particularly rapid in the period 1880-1899. There was no reason to believe, in the early 1880s, that demand for rubber would explode as it did in the 1890s. Even as demand rose in the 1890s with the bicycle craze, the rate of increase was not beyond the capacity of wild rubber producers in Brazil and elsewhere (see Figure 3). High rubber prices did not induce rapid increases in production or plantation development in the nineteenth century. In this context, Brazil developed a reasonably efficient industry based on its natural resource endowment and limited labor and capital sources.

In the first three decades of the twentieth century, major changes in both supply and demand created unprecedented uncertainty in rubber markets. On the supply side, Southeast Asian rubber plantations transformed the cost structure and capacity of the industry. On the demand side, and directly inducing plantation development, automobile production and associated demand for rubber exploded. Then, in the 1920s, competition and technological advance in tire production led to another shift in the market with profound consequences for rubber producers and tire manufacturers alike.

Rapid Price Fluctuations and Output Lags

Figure 1 shows the annual fluctuations of the price of Rubber Smoked Sheet type 1 (RSS1) in London. The movements from 1906 to 1910 were very volatile on a monthly basis as well, complicating forecasts and making it hard for producers to decide how to react to market signals. Even though information on prices and quantities in the markets was published every month in the major rubber journals, producers did not have a good idea of what was going to happen in the long run. If prices were high today, they wanted to expand the area planted, but since it took six to eight years for trees to yield good rubber, they would have to wait to see the result of the expansion in production many years and many price swings later. Since many producers reacted in the same way, periods of overproduction six to eight years after a price rise were common.[3] Overproduction meant low prices, but since investments were mostly sunk (the costs of preparing the land, planting the trees, and bringing in the workers could not be recovered, and these resources could not easily be shifted to other uses), the market tended to stay oversupplied for long periods of time.
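The dynamics described in this paragraph can be illustrated with a minimal, purely hypothetical simulation (the sketch below and all of its numbers are an illustration, not drawn from the sources cited here): acreage planted responds to the current price, but the resulting output reaches the market only about seven years later.

# Stylized cobweb-style sketch of the planting lag described above.
LAG = 7            # assumed years from planting to first tapping (six to eight in the text)
DEMAND = 100.0     # stylized, constant level of demand (arbitrary units)
RESPONSE = 5.0     # stylized responsiveness of new planting to the current price

plantings = [10.0] * LAG   # acreage planted in each of the last LAG years
price_path = []

for year in range(30):
    supply = plantings[-LAG]            # output comes from trees planted LAG years ago
    price = DEMAND / supply             # simple inverse demand curve
    plantings.append(RESPONSE * price)  # high prices today induce more planting today
    price_path.append(round(price, 2))

print(price_path)  # runs of high prices followed, seven years later, by runs of low prices

In this toy model every price spike sows the seeds of a later glut, which is broadly the pattern the article describes for the Asian planting boom of 1905-1911 and the oversupplied market of the 1920s.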

In figure 1 we see the annual price of Malaysian rubber plotted over time.

The years 1905 and 1906 marked historic highs for rubber prices, surpassed only briefly in 1909 and 1910. The area planted in rubber throughout Asia grew from 15,000 acres in 1901 to 433,000 acres in 1907; these plantings matured circa 1913, and cultivated rubber surpassed Brazilian wild rubber in volume exported.[4] The growth of the Asian rubber industry soon swamped Brazil’s market share and drove prices well below pre-boom levels. After the major price peak of 1910, prices plummeted and followed a downward trend throughout the 1920s. By 1921, the bottom had dropped out of the market, and Malaysian rubber producers were induced by the British colonial authorities to enter into a scheme to restrict production. Plantations received export coupons that set quotas limiting the supply of rubber. The restriction did not affect prices until 1924, when consumption outstripped production and prices started to rise rapidly. The scheme’s success was short-lived, because competition from Dutch plantations in Southeast Asia and elsewhere drove prices down again by 1926. The plan was officially ended in 1928.[5]

Automobiles’ Impact on Rubber Demand

In order to understand the boom in rubber production, it is fundamental to look at the automobile industry. Cars had originally been adapted from horse-drawn carriages; some ran on wooden wheels, some on metal, some shod as it were in solid rubber. In any case, the ride at the speeds cars were soon capable of was impossible to bear. The pneumatic tire was quickly adopted from the bicycle, and the automobile tire industry was born — soon to account for well over half of rubber company sales in the United States where the vast majority of automobiles were manufactured in the early years of the industry.[6] The amount of rubber required to satisfy demand for automobile tires led first to a spike in rubber prices; second, it led to the development of rubber plantations in Asia.[7]

The connection between automobiles, plantations, and the rubber tire industry was explicit and obvious to observers at the time. Harvey Firestone, son of the founder of the company, put it this way:

It was not until 1898 that any serious attention was paid to plantation development. Then came the automobile, and with it the awakening on the part of everybody that without rubber there could be no tires, and without tires there could be no automobiles. (Firestone, 1932, p. 41)

Thus the emergence of a strong consuming sector linked to the automobile was necessary; high prices alone were not enough. For instance, the average price of rubber from 1880 to 1884 was 401 pounds sterling per ton; from 1900 to 1904, when the first plantations were beginning to be set up, the average price was 459 pounds sterling per ton, only modestly higher. Asian plantations were thus developed in response both to high rubber prices and to what everyone could see was an exponentially growing source of demand in automobiles. Previous consumers of rubber had not shown the kind of dynamism needed to spur entry by plantations into the natural rubber market, even though prices were very high throughout most of the second half of the nineteenth century.

Producers Need to Forecast Future Supply and Demand Conditions

Rubber producers made decisions about production and planting during the period 1900-1912 with the aim of reaping windfall profits rather than with an eye to the long-run sustainability of their business. High prices were an incentive for all to increase production, but increasing production through more acreage planted could mean a loss for everyone in the future (because too much supply could drive prices down). Yet current prices could not guarantee profits when investment decisions had to be made six or more years in advance, as was the case in plantation production: in order to invest in plantations, capital had to predict the future interaction of supply and demand. Demand, although high and apparently relatively price inelastic, was not entirely predictable. It was predictable enough, however, for planters to expand acreage in rubber in Asia at a dramatic rate. Planters were often uncertain about the aggregate level of supply: new plantations were constantly coming into production, while others were entering decline or bankruptcy. Their investments could therefore yield a great deal in the short run, but if all producers reacted in the same way, prices were driven down and profits with them. This is what happened in the 1920s, after all the acreage expansion of the first two decades of the century.

Demand Growth Unexpectedly Slows in the 1920s

Plantings between 1912 and 1916 were destined to come into production during a period in which growth in the automobile industry leveled off significantly owing to the recession of 1920-21. Making matters worse for rubber producers, major advances in tire technology further dampened demand: for example, the change from corded to balloon tires increased average tire tread mileage from 8,000 to 15,000 miles.[8] The shift from corded to balloon tires decreased demand for natural rubber even as the automobile industry recovered from the recession in the early 1920s. In addition, better design of tire casings circa 1920 led to the growth of the retreading industry, the result of which was further saving on rubber. Finally, better techniques in cotton weaving lowered friction and heat and further extended tire life.[9] As rubber supplies increased and demand decreased and became more price inelastic, prices plummeted: neither demand nor price proved predictable over the long run, and suppliers paid a stiff price for overextending themselves during the boom years. Rubber tire manufacturers suffered the same fate: competition and technology (which they themselves introduced) pushed prices downward and, at the same time, flattened demand (Allen, 1936).[10]
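A rough back-of-the-envelope calculation (not in the original text) shows why the tread-mileage improvement mattered so much: if each tire lasts $m$ miles, replacement-tire demand per vehicle-mile is proportional to $1/m$, so

\[
\frac{1/15{,}000}{1/8{,}000} = \frac{8{,}000}{15{,}000} \approx 0.53 .
\]

That is, holding miles driven constant, the shift from corded to balloon tires cut replacement-tire demand per vehicle by roughly 47 percent.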

Now, if one looks at the price of rubber and the rate of growth in demand as measured by imports in the 1920s, it is clear that the industry was over-invested in capacity. The consequences of technological change were dramatic for tire manufacturer profits as well as for rubber producers.

Conclusion

The natural rubber trade underwent several radical transformations over the period 1870 to 1930. First, prior to 1910, it was associated with high costs of production and high prices for final goods; during this period most rubber was produced by tapping rubber trees in the Amazon region of Brazil. After 1900, and especially after 1910, rubber was increasingly produced on low-cost plantations in Southeast Asia. The price of rubber fell with plantation development and, at the same time, the volume of rubber demanded by car tire manufacturers expanded dramatically. Uncertainty in terms of both supply and demand (often driven by changing tire technology) meant that natural rubber producers and tire manufacturers alike experienced great volatility in returns. The overall evolution of the natural rubber trade and the related tire industry was toward large-volume, low-cost production in an internationally competitive environment marked by commodity price volatility and declining levels of profit as the industry matured.

References

Akers, C. E. Report on the Amazon Valley: Its Rubber Industry and Other Resources. London: Waterlow & Sons, 1912.

Allen, Hugh. The House of Goodyear. Akron: Superior Printing, 1936.

Alves Pinto, Nelson Prado. Política Da Borracha No Brasil. A Falência Da Borracha Vegetal. São Paulo: HUCITEC, 1984.

Babcock, Glenn D. History of the United States Rubber Company. Indiana: Bureau of Business Research, 1966.

Barham, Bradford, and Oliver Coomes. “The Amazon Rubber Boom: Labor Control, Resistance, and Failed Plantation Development Revisited.” Hispanic American Historical Review 74, no. 2 (1994): 231-57.

Barham, Bradford, and Oliver Coomes. Prosperity’s Promise. The Amazon Rubber Boom and Distorted Economic Development. Boulder: Westview Press, 1996.

Barham, Bradford, and Oliver Coomes. “Wild Rubber: Industrial Organisation and the Microeconomics of Extraction during the Amazon Rubber Boom (1860-1920).” Journal of Latin American Studies 26, no. 1 (1994): 37-72.

Baxendale, Cyril. “The Plantation Rubber Industry.” India Rubber World, 1 January 1913.

Blackford, Mansel, and K. Austin Kerr. BFGoodrich. Columbus: Ohio State University Press, 1996.

Brazil. Instituto Brasileiro de Geografia e Estatística. Anuário Estatístico Do Brasil. Rio de Janeiro: Instituto Brasileiro de Geografia e Estatística, 1940.

Dean, Warren. Brazil and the Struggle for Rubber: A Study in Environmental History. Cambridge: Cambridge University Press, 1987.

Drabble, J. H. Rubber in Malaya, 1876-1922. Oxford: Oxford University Press, 1973.

Firestone, Harvey Jr. The Romance and Drama of the Rubber Industry. Akron: Firestone Tire and Rubber Co., 1932.

Santos, Roberto. História Econômica Da Amazônia (1800-1920). São Paulo: T.A. Queiroz, 1980.

Schurz, William Lytle, O. D. Hargis, Curtis Fletcher Marbut, and C. B. Manifold. Rubber Production in the Amazon Valley. U.S. Bureau of Foreign and Domestic Commerce (Department of Commerce), Trade Promotion Series, no. 28. Washington: Government Printing Office, 1925.

Shelley, Miguel. “Financing Rubber in Brazil.” India Rubber World, 1 July 1918.

Weinstein, Barbara. The Amazon Rubber Boom, 1850-1920. Stanford: Stanford University Press, 1983.


Notes:

[1] Rubber tapping in the Amazon basin is described in Weinstein (1983), Barham and Coomes (1994), Stanfield (1998), and in several articles published in India Rubber World, the main journal on rubber trading. See, for example, the explanation of tapping in the October 1, 1910 issue, or “The Present and Future of the Native Hevea Rubber Industry” in the January 1, 1913 issue. For a detailed analysis of the rubber industry by region in Brazil by contemporary observers, see Schurz et al. (1925).

[2] Newspapers such as The Economist or the London Times included sections on rubber trading, such as weekly or monthly reports of the market conditions, prices and other information. For the dealings between tire manufacturers and distributors in Brazil and Malaysia see Firestone (1932).

[3] Using cross-correlations of production and prices, we found that changes in production at time t were correlated with price changes in t-6 and t-8 (years). This is only weak evidence because these correlations are not statistically significant.
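As an illustration only, a calculation of the kind described in this note could be set up as follows; the function, series names, and any data are placeholders, not the authors’ actual code or data.

import numpy as np

def lagged_corr(output, prices, lag):
    """Correlation of output changes at year t with price changes at year t - lag."""
    d_out = np.diff(np.asarray(output, dtype=float))
    d_price = np.diff(np.asarray(prices, dtype=float))
    if lag >= len(d_out):
        raise ValueError("lag too large for the series")
    # pair the production change in year t with the price change lag years earlier
    return np.corrcoef(d_out[lag:], d_price[:len(d_price) - lag])[0, 1]

# hypothetical usage with annual series of equal length:
# print(lagged_corr(production_series, price_series, lag=6))
# print(lagged_corr(production_series, price_series, lag=8))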

[4] Drabble (1973), 213, 220. The expansion in acreage was accompanied by a boom in company formation.

[5] Drabble (1973), 192-199. This was the so-called Stevenson Committee restriction, which lasted from 1922 to 1926. The plan essentially limited the amount of rubber each planter could export by assigning quotas through coupons.

[6] Pneumatic tires were first adapted to automobiles in 1896; Dunlop’s pneumatic bicycle tire was introduced in 1888. The great advantage of these tires over solid rubber was that they generated far less friction, which extended tread life, and, of course, they cushioned the ride and allowed for higher speeds.

[7] Early histories of the rubber industry tended to blame Brazilian “monopolists” for holding up supply and reaping windfall profits, see, e.g., Allen (1936), 116-117. In fact, rubber production in Brazil was far from monopolistic; other reasons account for supply inelasticity.

[8] Blackford and Kerr (1996), p. 88.

[9] The so-called “supertwist” weave allowed for the manufacture of larger, more durable tires, especially for trucks. Allen (1936), pp. 215-216.

[10] Allen (1936), p. 320.

Citation: Frank, Zephyr and Aldo Musacchio. “The International Natural Rubber Market, 1870-1930.” EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/the-international-natural-rubber-market-1870-1930/

David Ricardo

David R. Stead, University of York

David Ricardo (1772-1823) was one of the greatest theoretical economists of all time. The third child of Abigail and Abraham (a prosperous Jewish stockbroker who had emigrated to London from Holland), Ricardo attended school in London and Amsterdam and at the age of fourteen entered his father’s business. In 1793 he married a Quaker, Priscilla Wilkinson, with whom he was to have eight children. The couple’s different religious backgrounds meant that the marriage created a rift with both their families, and Ricardo was forced to set up independently as a broker on the London Stock Exchange. Ricardo, though, prospered in the financial business to a far greater extent than his father, amassing a fortune of about £700,000 (equivalent to approximately £40 million today).

Ricardo became interested in economics in 1799 after, apparently by chance, reading the work of Adam Smith. He subsequently published pamphlets and articles analyzing various economic problems of the day, including the stability of the currency and the national debt. After some struggle (“I fear the undertaking exceeds my powers,” he wrote), his classic work, On the Principles of Political Economy, and Taxation, appeared in 1817. Two of Ricardo’s most important contributions were the theory of rent and the concept of comparative advantage. The former, which drew on the writings of (among others) his close friend and critic Robert Malthus, defined rent as “that portion of the produce of the earth which is paid to the landlord [by the tenant farmer] for the use of the original and indestructible powers of the soil.” Rent, Ricardo argued, is what remains from gross farm revenue after all the farmer’s production costs have been paid, including remuneration for the capital and labor he had expended on the land. It is an unearned surplus (now referred to as an economic rent) in that its payment is not necessary to ensure a supply of farmland. For Ricardo, rent arises from the advantages that one site has over another due to differing degrees of soil fertility: rent per acre is highest on the most fertile land, and declines to zero on the worst quality soil.
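A simple numerical illustration (the figures are hypothetical, not Ricardo’s) captures the logic. Suppose that, with identical inputs of labor and capital, an acre of the best land yields 100 bushels of corn while an acre of the worst land still under cultivation yields only 60. Competition among farmers for the better land bids its rent up to the value of the difference, while the marginal land earns no rent:

\[
\text{Rent}_{\text{best}} = p\,(q_{\text{best}} - q_{\text{marginal}}) = p\,(100 - 60) = 40p, \qquad \text{Rent}_{\text{marginal}} = 0,
\]

where $p$ is the price of corn per bushel and $q$ denotes output per acre.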

Comparative advantage, Ricardo believed, ensured that international trade would bring benefits for all countries; his theory remains the foundation of the economic case for free trade today. He argued that each country should specialize in making the products in which it possessed a comparative advantage, that is, those it could produce relatively efficiently. Portuguese sunshine, for example, gave Portuguese entrepreneurs a comparative advantage in producing wine, whereas England’s wet climate meant that her comparative advantage was in making cloth. Ricardo showed that, by specializing in production and then trading, Portugal and England would each achieve greater consumption of both wine and cloth than in the absence of international trade.
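Ricardo’s own illustrative labor-cost figures from the Principles (not quoted in this entry) make the argument concrete. Suppose the labor required to produce one unit of each good is:

\[
\begin{array}{lcc}
 & \text{Cloth} & \text{Wine} \\
\text{England} & 100 & 120 \\
\text{Portugal} & 90 & 80
\end{array}
\]

Portugal is more efficient in both goods, but a unit of wine costs Portugal only $80/90 \approx 0.89$ units of cloth forgone, whereas it costs England $120/100 = 1.2$ units of cloth. Portugal’s comparative advantage is therefore in wine and England’s in cloth, and at any terms of trade between these two ratios both countries end up with more wine and more cloth than under self-sufficiency.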

Not surprisingly, then, Ricardo opposed the protectionist Corn Laws in place during his lifetime, and upon retiring from the Stock Exchange in 1819, made his case directly to the House of Commons as the member for Portarlington, a pocket borough in Ireland. Ricardo’s Parliamentary career was influential but brief: four years later he died suddenly after contracting an ear infection.

Bibliography

Henderson, John P. with John B. Davis [Warren J. Samuels and Gilbert B. Davis, editors]. The Life and Economics of David Ricardo. Boston: Kluwer Academic, 1997.

Hollander, Samuel. The Economics of David Ricardo. London: Heinemann Educational, 1979.

Ricardo, David. On the Principles of Political Economy, and Taxation. 1st edition, 1817. Harmondsworth: Penguin reprint [R. M. Hartwell, editor], 1971.

Sraffa, Piero with M. H. Dobb, editors. The Works and Correspondence of David Ricardo (11 volumes). Cambridge: Cambridge University Press, 1951-73.

Turner, M. E., Beckett, J. V. and B. Afton. Agricultural Rent in England, 1690-1914. Cambridge: Cambridge University Press, 1997.

Weatherall, David. David Ricardo: A Biography. The Hague: Nijhoff, 1976.

Citation: Stead, David. “David Ricardo”. EH.Net Encyclopedia, edited by Robert Whaples. November 18, 2003. URL http://eh.net/encyclopedia/david-ricardo/

The Economics of the American Revolutionary War

Ben Baack, Ohio State University

By the time of the onset of the American Revolution, Britain had attained the status of a military and economic superpower. The thirteen American colonies were one part of a global empire generated by the British in a series of colonial wars beginning in the late seventeenth century and continuing on to the mid eighteenth century. The British military establishment increased relentlessly in size during this period as it engaged in the Nine Years War (1688-97), the War of Spanish Succession (1702-13), the War of Austrian Succession (1739-48), and the Seven Years War (1756-63). These wars brought considerable additions to the British Empire. In North America alone the British victory in the Seven Years War resulted in France ceding to Britain all of its territory east of the Mississippi River as well as all of Canada and Spain surrendering its claim to Florida (Nester, 2000).

Given the sheer magnitude of the British military and its empire, the actions taken by the American colonists for independence have long fascinated scholars. Why did the colonists want independence? How were they able to achieve a victory over what was at the time the world’s preeminent military power? What were the consequences of achieving independence? These and many other questions have engaged the attention of economic, legal, military, political, and social historians. In this brief essay we will focus only on the economics of the Revolutionary War.

Economic Causes of the Revolutionary War

Prior to the conclusion of the Seven Years War there was little, if any, reason to believe that one day the American colonies would undertake a revolution in an effort to create an independent nation-state. As a part of the empire the colonies were protected from foreign invasion by the British military. In return, the colonists paid relatively few taxes and could engage in domestic economic activity without much interference from the British government. For the most part the colonists were only asked to adhere to regulations concerning foreign trade. In a series of acts passed by Parliament during the seventeenth century, the Navigation Acts required that all trade within the empire be conducted on ships which were constructed, owned and largely manned by British citizens. Certain enumerated goods, whether exported or imported by the colonies, had to be shipped through England regardless of the final port of destination.

Western Land Policies

Economic incentives for independence significantly increased in the colonies as a result of a series of critical land policy decisions made by the British government. The Seven Years’ War had originated in a contest between Britain and France over control of the land from the Appalachian Mountains to the Mississippi River. During the 1740s the British government pursued a policy of promoting colonial land claims to, as well as settlement in, this area, which was at the time French territory. With the ensuing conflict of land claims both nations resorted to the use of military force, which ultimately led to the onset of the war. At the conclusion of the war, as a result of one of many concessions made by France in the 1763 Treaty of Paris, Britain acquired all the contested land west of its colonies to the Mississippi River. It was at this point that the British government began to implement a fundamental change in its western land policy.

Britain now reversed its long-time position of encouraging colonial claims to land and settlement in the west. The essence of the new policy was to establish British control of the former French fur trade in the west by excluding any settlement there by the Americans. Implementation led to the development of three new areas of policy: (1) construction of the new rules of exclusion, (2) enforcement of the new rules, and (3) financing the cost of that enforcement. First, the rules of exclusion were set out under the terms of the Proclamation of 1763, whereby colonists were not allowed to settle in the west. This action legally nullified the claims to land in the area by a host of individual colonists, land companies, and colonies. Second, enforcement of the new rules was delegated to the standing army of about 7,500 regulars newly stationed in the west. This army for the most part occupied former French forts, although some new ones were built. Among other things, this army was charged with keeping Americans out of the west as well as returning to the colonies any Americans who were already there. Third, financing of the cost of the enforcement was to be accomplished by levying taxes on the Americans. Thus, Americans were being asked to finance a British army which was charged with keeping Americans out of the west (Baack, 2004).

Tax Policies

Of all the potential options available for funding the new standing army in the west, why did the British decide to tax their American colonies? The answer is fairly straightforward. First of all, the victory over the French in the Seven Years’ War had come at a high price. Domestic taxes had been raised substantially during the war and total government debt had increased nearly twofold (Brewer, 1989). In addition, taxes were significantly higher in Britain than in the colonies. One estimate suggests the per capita tax burden in the colonies ranged from two to four percent of that in Britain (Palmer, 1959). And finally, the voting constituencies of the members of Parliament were in Britain, not the colonies. All things considered, Parliament viewed taxing the colonies as the obvious choice.

Accordingly, a series of tax acts were passed by Parliament, the revenue from which was to be used to help pay for the standing army in America. The first was the Sugar Act of 1764. Proposed by England’s Prime Minister, the act lowered tariff rates on non-British products from the West Indies and strengthened their collection. It was hoped this would reduce the incentive for smuggling and thereby increase tariff revenue (Bullion, 1982). The following year Parliament passed the Stamp Act, which imposed a tax commonly used in England. It required stamps for a broad range of legal documents as well as newspapers and pamphlets. While the colonial stamp duties were lower than those in England, they were expected to generate enough revenue to finance a substantial portion of the cost of the new standing army. The same year, passage of the Quartering Act imposed essentially a tax in kind by requiring the colonists to provide British military units with housing, provisions, and transportation. In 1767 the Townshend Acts imposed tariffs upon a variety of imported goods and established a Board of Customs Commissioners in the colonies to collect the revenue.

Boycotts

While the Americans could do little about the British army stationed in the west, they could do something about the new British taxes. American opposition to these acts was expressed initially in a variety of peaceful forms. While they did not have representation in Parliament, the colonists did attempt to exert some influence in it through petition and lobbying. However, it was the economic boycott that became by far the most effective means of altering the new British economic policies. In 1765 representatives from nine colonies met at the Stamp Act Congress in New York and organized a boycott of imported English goods. The boycott was so successful in reducing trade that English merchants lobbied Parliament for the repeal of the new taxes. Parliament soon responded to the political pressure. During 1766 it repealed both the Stamp and Sugar Acts (Johnson, 1997). In response to the Townshend Acts of 1767 a second major boycott started in 1768 in Boston and New York and subsequently spread to other cities, leading Parliament in 1770 to repeal all of the Townshend duties except the one on tea. In addition, Parliament decided at the same time not to renew the Quartering Act.

With these actions taken by Parliament, the Americans appeared to have successfully overturned the new British postwar tax agenda. However, Parliament had not given up what it believed to be its right to tax the colonies. On the same day it repealed the Stamp Act, Parliament passed the Declaratory Act stating that the British government had the full power and authority to make laws governing the colonies in all cases whatsoever, including taxation. Legislation, not principle, had been overturned.

The Tea Act

Three years after the repeal of the Townshend duties, British policy was once again to emerge as an issue in the colonies. This time the American reaction was not peaceful. It all started when Parliament for the first time granted an exemption from the Navigation Acts. In an effort to assist the financially troubled British East India Company, Parliament passed the Tea Act of 1773, which allowed the company to ship tea directly to America. The grant of a major trading advantage to an already powerful competitor meant a potential financial loss for American importers and smugglers of tea. In December a small group of colonists responded by boarding three British ships in Boston harbor and throwing overboard several hundred chests of tea owned by the East India Company (Labaree, 1964). Stunned by the events in Boston, Parliament decided not to cave in to the colonists as it had before. In rapid order it passed the Boston Port Act, the Massachusetts Government Act, the Justice Act, and the Quartering Act. Among other things these so-called Coercive or Intolerable Acts closed the port of Boston, altered the charter of Massachusetts, and reintroduced the demand for colonial quartering of British troops. Parliament then went on to pass the Quebec Act as a continuation of its policy of restricting settlement of the West.

The First Continental Congress

Many Americans viewed all of this as a blatant abuse of power by the British government. Once again a call went out for a colonial congress to sort out a response. On September 5, 1774 delegates appointed by the colonies met in Philadelphia for the First Continental Congress. Drawing upon the successful manner in which previous acts had been overturned, the first thing Congress did was to organize a comprehensive embargo of trade with Britain. It then conveyed to the British government a list of grievances that demanded the repeal of thirteen acts of Parliament. All of the acts listed had been passed after 1763, as the delegates had agreed not to question British policies made prior to the conclusion of the Seven Years War. Despite all the problems it had created, the Tea Act was not on the list. The reason for this was that Congress decided not to protest British regulation of colonial trade under the Navigation Acts. In short, the delegates were saying to Parliament: take us back to 1763 and all will be well.

The Second Continental Congress

What happened next was a sequence of events that led to a significant increase in the degree of American resistance to British policies. Before the Congress adjourned in October, the delegates voted to meet again in May of 1775 if Parliament did not meet their demands. Confronted by the extent of the American demands, the British government decided it was time to impose a military solution to the crisis. Boston was occupied by British troops. In April a military confrontation occurred at Lexington and Concord. Within a month the Second Continental Congress was convened. Here the delegates decided to fundamentally change the nature of their resistance to British policies. Congress authorized a continental army and undertook the purchase of arms and munitions. To pay for all of this it established a continental currency. With previous political efforts by the First Continental Congress to form an alliance with Canada having failed, the Second Continental Congress took the extraordinary step of instructing its new army to invade Canada. In effect, these actions were those of an emerging nation-state. In October, as American forces closed in on Quebec, the King of England in a speech to Parliament declared that the colonists, having formed their own government, were now fighting for their independence. It was to be only a matter of months before Congress formally declared it.

Economic Incentives for Pursuing Independence: Taxation

Given the nature of British colonial policies, scholars have long sought to evaluate the economic incentives the Americans had in pursuing independence. In this effort economic historians initially focused on the period following the Seven Years War up to the Revolution. It turned out that making a case for the avoidance of British taxes as a major incentive for independence proved difficult. The reason was that many of the taxes imposed were later repealed, and the actual level of taxation appeared to be relatively modest. After all, soon after adopting the Constitution the Americans taxed themselves at far higher rates than the British had prior to the Revolution (Perkins, 1988). Rather, it seemed the incentive for independence might have been the avoidance of the British regulation of colonial trade. Unlike some of the new British taxes, the Navigation Acts had remained intact throughout this period.

The Burden of the Navigation Acts

One early attempt to quantify the economic effects of the Navigation Acts was by Thomas (1965). Building upon the previous work of Harper (1942), Thomas employed a counterfactual analysis to assess what would have happened to the American economy in the absence of the Navigation Acts. To do this he compared American trade under the Acts with that which would have occurred had America been independent following the Seven Years War. Thomas then estimated the loss of both consumer and producer surplus to the colonies as a result of shipping enumerated goods indirectly through England. These burdens were partially offset by his estimated value of the benefits of British protection and various bounties paid to the colonies. The outcome of his analysis was that the Navigation Acts imposed a net burden of less than one percent of colonial per capita income. From this he concluded the Acts were an unlikely cause of the Revolution. A long series of subsequent works questioned various parts of his analysis but not his general conclusion (Walton, 1971). The work of Thomas also appeared to be consistent with the observation that the First Continental Congress had not demanded in its list of grievances the repeal of either the Navigation Acts or the Sugar Act.
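The accounting implicit in Thomas’s counterfactual can be summarized in one line (a paraphrase, not his own notation):

\[
\text{Net burden} = (\Delta CS + \Delta PS) - (B_{\text{protection}} + B_{\text{bounties}}),
\]

where $\Delta CS$ and $\Delta PS$ are the consumer and producer surplus lost from routing enumerated goods through England, and the $B$ terms are the offsetting benefits of British protection and bounties; expressed per capita, the result came to less than one percent of colonial income.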

American Expectations about Future British Policy

Did this mean, then, that the Americans had few if any economic incentives for independence? Upon further consideration, economic historians realized that perhaps more important to the colonists were not the past and present burdens but rather the expected future burdens of continued membership in the British Empire. The Declaratory Act made it clear the British government had not given up what it viewed as its right to tax the colonists. This was despite the fact that up to 1775 the Americans had employed a variety of protest measures including lobbying, petitions, boycotts, and violence. The confluence of not having representation in Parliament while confronting an aggressive new British tax policy designed to raise their relatively low taxes may have made it reasonable for the Americans to expect a substantial increase in the level of taxation in the future (Gunderson, 1976; Reid, 1978). Furthermore, a recent study has argued that in 1776 not only did the future burdens of the Navigation Acts clearly exceed those of the past, but a substantial portion would have been borne by those who played a major role in the Revolution (Sawers, 1992). Seen in this light, the economic incentive for independence would have been avoiding the potential future costs of remaining in the British Empire.

The Americans Undertake a Revolution

1776-77

British Military Advantages

The American colonies had both strengths and weaknesses in terms of undertaking a revolution. The colonial population of well over two million was nearly one third of that in Britain (McCusker and Menard, 1985). The growth in the colonial economy had generated a remarkably high level of per capita wealth and income (Jones, 1980). Yet the hurdles confronting the Americans in achieving independence were indeed formidable. The British military had an array of advantages. With virtual control of the Atlantic, its navy could attack anywhere along the American coast at will and could provide logistical support for the army without much interference. A large core of experienced officers commanded a highly disciplined and well-drilled army in the large-unit tactics of eighteenth-century European warfare. By these measures the American military would have great difficulty in defeating the British. Its navy was small. The Continental Army had relatively few officers proficient in large-unit military tactics. Lacking both the numbers and the discipline of its adversary, the American army was unlikely to be able to meet the British army on equal terms on the battlefield (Higginbotham, 1977).

British Financial Advantages

In addition, the British were in a better position than the Americans to finance a war. A tax system was in place that had provided substantial revenue during previous colonial wars. Also for a variety of reasons the government had acquired an exceptional capacity to generate debt to fund wartime expenses (North and Weingast, 1989). For the Continental Congress the situation was much different. After declaring independence Congress had set about defining the institutional relationship between it and the former colonies. The powers granted to Congress were established under the Articles of Confederation. Reflecting the political environment neither the power to tax nor the power to regulate commerce was given to Congress. Having no tax system to generate revenue also made it very difficult to borrow money. According to the Articles the states were to make voluntary payments to Congress for its war efforts. This precarious revenue system was to hamper funding by Congress throughout the war (Baack, 2001).

Military and Financial Factors Determine Strategy

It was within these military and financial constraints that the war strategies of the British and the Americans were developed. In terms of military strategy, both of the contestants realized that America was simply too large for the British army to occupy all of the cities and countryside. This being the case, the British decided initially that they would try to impose a naval blockade and capture major American seaports. Having already occupied Boston, the British during 1776 and 1777 took New York, Newport, and Philadelphia. With plenty of room to maneuver his forces and unable to match those of the British, George Washington chose to engage in a war of attrition. The purpose was twofold. First, by not engaging in an all-out offensive Washington reduced the probability of losing his army. Second, over time the British might tire of the war.

Saratoga

Frustrated without a conclusive victory, the British altered their strategy. During 1777 a plan was devised to cut off New England from the rest of the colonies, contain the Continental Army, and then defeat it. An army was assembled in Canada under the command of General Burgoyne and then sent to and down along the Hudson River. It was to link up with an army sent from New York City. Unfortunately for the British the plan totally unraveled as in October Burgoyne’s army was defeated at the battle of Saratoga and forced to surrender (Ketchum, 1997).

The American Financial Situation Deteriorates

With the victory at Saratoga the military side of the war had improved considerably for the Americans. However, the financial situation was seriously deteriorating. The states to this point had made no voluntary payments to Congress. At the same time the continental currency had to compete with a variety of other currencies for resources. The states were issuing their own individual currencies to help finance expenditures. Moreover the British in an effort to destroy the funding system of the Continental Congress had undertaken a covert program of counterfeiting the Continental dollar. These dollars were printed and then distributed throughout the former colonies by the British army and agents loyal to the Crown (Newman, 1957). Altogether this expansion of the nominal money supply in the colonies led to a rapid depreciation of the Continental dollar (Calomiris, 1988, Michener, 1988). Furthermore, inflation may have been enhanced by any negative impact upon output resulting from the disruption of markets along with the destruction of property and loss of able-bodied men (Buel, 1998). By the end of 1777 inflation had reduced the specie value of the Continental to about twenty percent of what it had been when originally issued. This rapid decline in value was becoming a serious problem for Congress in that up to this point almost ninety percent of its revenue had been generated from currency emissions.

1778-83

British Invasion of the South

The British defeat at Saratoga had a profound impact upon the nature of the war. The French government, still upset by its defeat by the British in the Seven Years War and encouraged by the American victory, signed a treaty of alliance with the Continental Congress in early 1778. Fearing a new war with France, the British government sent a commission to negotiate a peace treaty with the Americans. The commission offered to repeal all of the legislation applying to the colonies passed since 1763. Congress rejected the offer. The British response was to give up its efforts to suppress the rebellion in the North and in turn organize an invasion of the South. The new southern campaign began with the taking of the port of Savannah in December. Pursuing their southern strategy, the British won major victories at Charleston and Camden during the spring and summer of 1780.

Worsening Inflation and Financial Problems

As the American military situation deteriorated in the South so did the financial circumstances of the Continental Congress. Inflation continued as Congress and the states dramatically increased the rate of issuance of their currencies. At the same time the British continued to pursue their policy of counterfeiting the Continental dollar. In order to deal with inflation some states organized conventions for the purpose of establishing wage and price controls (Rockoff, 1984). With few contributions coming from the states and a currency rapidly losing its value, Congress resorted to authorizing the army to confiscate whatever it needed to continue the war effort (Baack, 2001, 2008).

Yorktown

Fortunately for the Americans the British military effort collapsed before the funding system of Congress. In a combined effort during the fall of 1781 French and American forces trapped the British southern army under the command of Cornwallis at Yorktown, Virginia. Under siege by superior forces the British army surrendered on October 19. The British government had now suffered not only the defeat of its northern strategy at Saratoga but also the defeat of its southern campaign at Yorktown. Following Yorktown, Britain suspended its offensive military operations against the Americans. The war was over. All that remained was the political maneuvering over the terms for peace.

The Treaty of Paris

The Revolutionary War officially concluded with the signing of the Treaty of Paris in 1783. Under the terms of the treaty the United States was granted independence and British troops were to evacuate all American territory. While commonly viewed by historians through the lens of political science, the Treaty of Paris was indeed a momentous economic achievement by the United States. The British ceded to the Americans all of the land east of the Mississippi River which they had taken from the French during the Seven Years War. The West was now available for settlement. To the extent the Revolutionary War had been undertaken by the Americans to avoid the costs of continued membership in the British Empire, the goal had been achieved. As an independent nation the United States was no longer subject to the regulations of the Navigation Acts. There was no longer to be any economic burden from British taxation.

THE FORMATION OF A NATIONAL GOVERNMENT

When you start a revolution you have to be prepared for the possibility you might win. This means being prepared to form a new government. When the Americans declared independence their experience of governing at a national level was indeed limited. In 1765 delegates from various colonies had met for about eighteen days at the Stamp Act Congress in New York to sort out a colonial response to the new stamp duties. Nearly a decade passed before delegates from colonies once again got together to discuss a colonial response to British policies. This time the discussions lasted seven weeks at the First Continental Congress in Philadelphia during the fall of 1774. The primary action taken at both meetings was an agreement to boycott trade with England. After having been in session only a month, delegates at the Second Continental Congress for the first time began to undertake actions usually associated with a national government. However, when the colonies were declared to be free and independent states Congress had yet to define its institutional relationship with the states.

The Articles of Confederation

Following the Declaration of Independence, Congress turned to deciding the political and economic powers it would be given as well as those granted to the states. After more than a year of debate among the delegates, the allocation of powers was articulated in the Articles of Confederation. Only Congress would have the authority to declare war and conduct foreign affairs. It was not given the power to tax or regulate commerce. The expenses of Congress were to be paid from a common treasury with funds supplied by the states, revenue the states were to raise by exercising their power to levy their own internal taxes. It was not until November of 1777 that Congress approved the final draft of the Articles, and it took over three more years for the states to ratify them. The primary reason for the delay was a dispute over control of land in the West, as some states had claims while others did not. Those states with claims eventually agreed to cede them to Congress. The Articles were then ratified and put into effect on March 1, 1781, just a few months before the American victory at Yorktown. The process of institutional development had proved so difficult that the Americans fought almost the entire Revolutionary War with a government not sanctioned by the states.

Difficulties in the 1780s

The new national government that emerged from the Revolution confronted a host of issues during the 1780s. The first major one to be addressed by Congress was what to do with all of the land acquired in the West. Starting in 1784, Congress passed a series of land ordinances that provided for land surveys, sales of land to individuals, and the institutional foundation for the creation of new states. These ordinances opened the West for settlement. While this was a major accomplishment by Congress, other issues remained unresolved. Having repudiated its own currency and lacking the power of taxation, Congress had no independent source of revenue to pay off the domestic and foreign debts incurred during the war. Since the Continental Army had been demobilized, no protection was being provided for settlers in the West or against foreign invasion. Domestic trade was increasingly disrupted during the 1780s as more states began to impose tariffs on goods from other states. Unable to resolve these and other issues, Congress endorsed a proposed plan to hold a convention in Philadelphia in May of 1787 to revise the Articles of Confederation.

Rather than amend the Articles, the delegates to the convention voted to replace them entirely with a new form of national government under the Constitution. There are of course many ways to assess the significance of this truly remarkable achievement. One is to view the Constitution as an economic document. Among other things, the Constitution specifically addressed many of the economic problems that confronted Congress during and after the Revolutionary War. Drawing upon lessons learned in financing the war, the framers provided that no state would be allowed to coin money or issue bills of credit; only the national government could coin money and regulate its value, and punishment was to be provided for counterfeiting. The problems associated with the states contributing to a common treasury under the Articles were overcome by giving the national government the coercive power of taxation. Part of the revenue was to be used to pay for the common defense of the United States. No longer would states be allowed to impose tariffs as they had done during the 1780s. The national government was now given the power to regulate both foreign and interstate commerce. As a result the nation was to become a common market. There is a general consensus among economic historians today that the economic significance of the ratification of the Constitution was to lay the institutional foundation for long-run growth. From the point of view of the former colonists, however, it meant they had succeeded in transferring the power to tax and regulate commerce from Parliament to the new national government of the United States.

TABLES
Table 1 Continental Dollar Emissions (1775-1779)

Year of Emission | Nominal Dollars Emitted (000) | Annual Emission as Share of Total Nominal Stock Emitted | Specie Value of Annual Emission (000) | Annual Emission as Share of Total Specie Value Emitted
1775 | $6,000 | 3% | $6,000 | 15%
1776 | 19,000 | 8 | 15,330 | 37
1777 | 13,000 | 5 | 4,040 | 10
1778 | 63,000 | 26 | 10,380 | 25
1779 | 140,500 | 58 | 5,270 | 13
Total | $241,500 | 100% | $41,020 | 100%

Source: Bullock (1895), 135.
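
The ratio of the specie value to the nominal value of each year's emission in Table 1 can be read as a rough average exchange rate for that year, which illustrates the depreciation described in the text. The short sketch below (Python, purely illustrative; the figures are copied directly from Table 1) computes these ratios.

```python
# Back-of-the-envelope check of Continental dollar depreciation using Table 1.
# Each entry: year -> (nominal emission, specie value of emission), in thousands.
emissions = {
    1775: (6_000, 6_000),
    1776: (19_000, 15_330),
    1777: (13_000, 4_040),
    1778: (63_000, 10_380),
    1779: (140_500, 5_270),
}

for year, (nominal, specie) in emissions.items():
    # Specie value per nominal dollar emitted that year (1.00 = full face value).
    rate = specie / nominal
    print(f"{year}: {rate:.2f} specie dollars per Continental dollar")
```

The implied rate falls from par in 1775 to a few cents on the dollar by 1779; the 1777 annual average of about 0.31 is consistent with a value of roughly twenty percent of face value by the end of that year, as noted above.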
Table 2 Currency Emissions by the States (1775-1781)

Year of Emission | Nominal Dollars Emitted (000) | Year of Emission | Nominal Dollars Emitted (000)
1775 | $4,740 | 1778 | $9,118
1776 | 13,328 | 1779 | 17,613
1777 | 9,573 | 1780 | 66,813
| | 1781 | 123,376
Total | $27,641 | Total | $216,376

Source: Robinson (1969), 327-28.

References

Baack, Ben. “Forging a Nation State: The Continental Congress and the Financing of the War of American Independence.” Economic History Review 54, no.4 (2001): 639-56.

Baack, Ben. “British versus American Interests in Land and the War of American Independence.” Journal of European Economic History 33, no. 3 (2004): 519-54.

Baack, Ben. “America’s First Monetary Policy: Inflation and Seigniorage during the Revolutionary War.” Financial History Review 15, no. 2 (2008): 107-21.

Baack, Ben, Robert A. McGuire, and T. Norman Van Cott. “Constitutional Agreement during the Drafting of the Constitution: A New Interpretation.” Journal of Legal Studies 38, no. 2 (2009): 533-67.

Brewer, John. The Sinews of Power: War, Money and the English State, 1688-1783. London: Cambridge University Press, 1989.

Buel, Richard. In Irons: Britain’s Naval Supremacy and the American Revolutionary Economy. New Haven: Yale University Press, 1998.

Bullion, John L. A Great and Necessary Measure: George Grenville and the Genesis of the Stamp Act, 1763-1765. Columbia: University of Missouri Press, 1982.

Bullock, Charles J. “The Finances of the United States from 1775 to 1789, with Especial Reference to the Budget.” Bulletin of the University of Wisconsin 1, no. 2 (1895): 117-273.

Calomiris, Charles W. “Institutional Failure, Monetary Scarcity, and the Depreciation of the Continental.” Journal of Economic History 48, no. 1 (1988): 47-68.

Egnal, Mark. A Mighty Empire: The Origins of the American Revolution. Ithaca: Cornell University Press, 1988.

Ferguson, E. James. The Power of the Purse: A History of American Public Finance, 1776-1790. Chapel Hill: University of North Carolina Press, 1961.

Gunderson, Gerald. A New Economic History of America. New York: McGraw-Hill, 1976.

Harper, Lawrence A. “Mercantilism and the American Revolution.” Canadian Historical Review 23 (1942): 1-15.

Higginbotham, Don. The War of American Independence: Military Attitudes, Policies, and Practice, 1763-1789. Bloomington: Indiana University Press, 1977.

Jensen, Merrill, editor. English Historical Documents: American Colonial Documents to 1776. New York: Oxford University Press, 1969.

Johnson, Allen S. A Prologue to Revolution: The Political Career of George Grenville (1712-1770). New York: University Press, 1997.

Jones, Alice H. Wealth of a Nation to Be: The American Colonies on the Eve of the Revolution. New York: Columbia University Press, 1980.

Ketchum, Richard M. Saratoga: Turning Point of America’s Revolutionary War. New York: Henry Holt and Company, 1997.

Labaree, Benjamin Woods. The Boston Tea Party. New York: Oxford University Press, 1964.

Mackesy, Piers. The War for America, 1775-1783. Cambridge: Harvard University Press, 1964.

McCusker, John J. and Russell R. Menard. The Economy of British America, 1607-1789. Chapel Hill: University of North Carolina Press, 1985.

Michener, Ron. “Backing Theories and the Currencies of Eighteenth-Century America: A Comment.” Journal of Economic History 48, no. 3 (1988): 682-92.

Nester, William R. The First Global War: Britain, France, and the Fate of North America, 1756-1775. Westport: Praeger, 2000.

Newman, E. P. “Counterfeit Continental Currency Goes to War.” The Numismatist 1 (January, 1957): 5-16.

North, Douglass C., and Barry R. Weingast. “Constitutions and Commitment: The Evolution of Institutions Governing Public Choice in Seventeenth-Century England.” Journal of Economic History 49, no. 4 (1989): 803-32.

O’Shaughnessy, Andrew Jackson. An Empire Divided: The American Revolution and the British Caribbean. Philadelphia: University of Pennsylvania Press, 2000.

Palmer, R. R. The Age of Democratic Revolution: A Political History of Europe and America. Vol. 1. Princeton: Princeton University Press, 1959.

Perkins, Edwin J. The Economy of Colonial America. New York: Columbia University Press, 1988.

Reid, Joseph D., Jr. “Economic Burden: Spark to the American Revolution?” Journal of Economic History 38, no. 1 (1978): 81-100.

Robinson, Edward F. “Continental Treasury Administration, 1775-1781: A Study in the Financial History of the American Revolution.” Ph.D. diss., University of Wisconsin, 1969.

Rockoff, Hugh. Drastic Measures: A History of Wage and Price Controls in the United States. Cambridge: Cambridge University Press, 1984.

Sawers, Larry. “The Navigation Acts Revisited.” Economic History Review 45, no. 2 (1992): 262-84.

Thomas, Robert P. “A Quantitative Approach to the Study of the Effects of British Imperial Policy on Colonial Welfare: Some Preliminary Findings.” Journal of Economic History 25, no. 4 (1965): 615-38.

Tucker, Robert W. and David C. Hendrickson. The Fall of the First British Empire: Origins of the War of American Independence. Baltimore: Johns Hopkins Press, 1982.

Walton, Gary M. “The New Economic History and the Burdens of the Navigation Acts.” Economic History Review 24, no. 4 (1971): 533-42.

Citation: Baack, Ben. “Economics of the American Revolutionary War”. EH.Net Encyclopedia, edited by Robert Whaples. November 13, 2001 (updated August 5, 2010). URL http://eh.net/encyclopedia/the-economics-of-the-american-revolutionary-war/

Economic History of Retirement in the United States

Joanna Short, Augustana College

One of the most striking changes in the American labor market over the twentieth century has been the virtual disappearance of older men from the labor force. Moen (1987) and Costa (1998) estimate that the labor force participation rate of men age 65 and older declined from 78 percent in 1880 to less than 20 percent in 1990 (see Table 1). In recent decades, the labor force participation rate of somewhat younger men (age 55-64) has been declining as well. When coupled with the increase in life expectancy over this period, it is clear that men today can expect to spend a much larger proportion of their lives in retirement, relative to men living a century ago.

Table 1

Labor Force Participation Rates of Men Age 65 and Over

Year Labor Force Participation Rate (percent)
1850 76.6
1860 76.0
1870 —
1880 78.0
1890 73.8
1900 65.4
1910 58.1
1920 60.1
1930 58.0
1940 43.5
1950 47.0
1960 40.8
1970 35.2
1980 24.7
1990 18.4
2000 17.5

Sources: Moen (1987), Costa (1998), Bureau of Labor Statistics

Notes: Prior to 1940, ‘gainful employment’ was the standard the U.S. Census used to determine whether or not an individual was working. This standard is similar to the ‘labor force participation’ standard used since 1940. With the exception of the figure for 2000, the data in the table are based on the gainful employment standard.

How can we explain the rise of retirement? Certainly, the development of government programs like Social Security has made retirement more feasible for many people. However, about half of the total decline in the labor force participation of older men from 1880 to 1990 occurred before the first Social Security payments were made in 1940. Therefore, factors other than the Social Security program have influenced the rise of retirement.

In addition to the increase in the prevalence of retirement over the twentieth century, the nature of retirement appears to have changed. In the late nineteenth century, many retirements involved a few years of dependence on children at the end of life. Today, retirement is typically an extended period of self-financed independence and leisure. This article documents trends in the labor force participation of older men, discusses the decision to retire, and examines the causes of the rise of retirement including the role of pensions and government programs.

Trends in U.S. Retirement Behavior

Trends by Gender

Research on the history of retirement focuses on the behavior of men because retirement, in the sense of leaving the labor force permanently in old age after a long career, is a relatively new phenomenon among women. Goldin (1990) concludes that “even as late as 1940, most young working women exited the labor force on marriage, and only a small minority would return.” The employment of married women accelerated after World War II, and recent evidence suggests that the retirement behavior of men and women is now very similar. Gendell (1998) finds that the average age at exit from the labor force in the U.S. was virtually identical for men and women from 1965 to 1995.

Trends by Race and Region

Among older men at the beginning of the twentieth century, labor force participation rates varied greatly by race, region of residence, and occupation. In the early part of the century, older black men were much more likely to be working than older white men. In 1900, for example, 84.1 percent of black men age 65 and over and 64.4 percent of white men were in the labor force. The racial retirement gap remained at about twenty percentage points until 1920, then narrowed dramatically by 1950. After 1950, the racial retirement gap reversed. In recent decades older black men have been slightly less likely to be in the labor force than older white men (see Table 2).

Table 2

Labor Force Participation Rates of Men Age 65 and Over, by Race

Labor Force Participation Rate (percent)
Year White Black
1880 76.7 87.3
1890 — —
1900 64.4 84.1
1910 58.5 86.0
1920 57.0 76.8
1930 — —
1940 44.1 54.6
1950 48.7 51.3
1960 40.3 37.3
1970 36.6 33.8
1980 27.1 23.7
1990 18.6 15.7
2000 17.8 16.6

Sources: Costa (1998), Bureau of Labor Statistics

Notes: Census data are unavailable for the years 1890 and 1930.

With the exception of the figures for 2000, participation rates are based on the gainful employment standard.

Similarly, the labor force participation rate of men age 65 and over living in the South was higher than that of men living in the North in the early twentieth century. In 1900, for example, the labor force participation rate for older Southerners was sixteen percentage points higher than for Northerners. The regional retirement gap began to narrow between 1910 and 1920, and narrowed substantially by 1940 (see Table 3).

Table 3

Labor Force Participation Rates of Men Age 65 and Over, by Region

Labor Force Participation Rate (percent)
Year North South
1880 73.7 85.2
1890 — —
1900 66.0 82.9
1910 56.6 72.8
1920 58.8 69.9
1930 — —
1940 42.8 49.4
1950 43.2 42.9

Source: Calculated from Ruggles and Sobek, Integrated Public Use Microdata Series for 1880, 1900, 1910, 1920, 1940, and 1950, Version 2.0, 1997

Note: North includes the New England, Middle Atlantic, and North Central regions

South includes the South Atlantic and South Central regions

Differences in retirement behavior by race and region of residence are related. One reason Southerners appear less likely to retire in the late nineteenth and early twentieth centuries is that a relatively large proportion of Southerners were black. In 1900, 90 percent of black households were located in the South (see Maloney on African Americans in this Encyclopedia). In the early part of the century, black men were effectively excluded from skilled occupations. The vast majority worked for low pay as tenant farmers or manual laborers. Even controlling for race, southern per capita income lagged behind the rest of the nation well into the twentieth century. Easterlin (1971) estimates that in 1880, per capita income in the South was only half that in the Midwest, and per capita income remained less than 70 percent of the Midwestern level until 1950. Lower levels of income among blacks, and in the South as a whole during this period, may have made it more difficult for these men to accumulate resources sufficient to rely on in retirement.

Trends by Occupation

Older men living on farms have long been more likely to be working than men living in nonfarm households. In 1900, for example, 80.6 percent of farm residents and 62.7 percent of nonfarm residents over the age of 65 were in the labor force. Durand (1948), Graebner (1980), and others have suggested that older farmers could remain in the labor force longer than urban workers because of help from children or hired labor. Urban workers, on the other hand, were frequently forced to retire once they became physically unable to keep up with the pace of industry.

Despite the large difference in the labor force participation rates of farm and nonfarm residents, the actual gap in the retirement rates of farmers and nonfarmers was not that great. Confusion on this issue stems from the fact that the labor force participation rate of farm residents does not provide a good representation of the retirement behavior of farmers. Moen (1994) and Costa (1995a) point out that farmers frequently moved off the farm in retirement. When the comparison is made by occupation, farmers have labor force participation rates only slightly higher than laborers or skilled workers. Lee (2002) finds that excluding the period 1900-1910 (a period of exceptional growth in the value of farm property), the labor force participation rate of older farmers was on average 9.3 percentage points higher than that of nonfarmers from 1880-1940.

Trends in Living Arrangements

In addition to the overall rise of retirement, and the closing of differences in retirement behavior by race and region, over the twentieth century retired men became much more independent. In 1880, nearly half of retired men lived with children or other relatives. Today, fewer than 5 percent of retired men live with relatives. Costa (1998) finds that between 1910 and 1940, men who were older, had a change in marital status (typically from married to widowed), or had low income were much more likely to live with family members as a dependent. Rising income appears to explain most of the movement away from coresidence, suggesting that the elderly have always preferred to live by themselves, but they have only recently had the means to do so.

Explaining Trends in the Retirement Decision

One way to understand the rise of retirement is to consider the individual retirement decision. In order to retire permanently from the labor force, one must have enough resources to live on to the end of the expected life span. In retirement, one can live on pension income, accumulated savings, and anticipated contributions from family and friends. Without at least the minimum amount of retirement income necessary to survive, the decision-maker has little choice but to remain in the labor force. If the resource constraint is met, individuals choose to retire once the net benefits of retirement (e.g., leisure time) exceed the net benefits of working (labor income less the costs associated with working). From this model, we can predict that anything that increases the costs associated with working, such as advancing age, an illness, or a disability, will increase the probability of retirement. Similarly, an increase in pension income increases the probability of retirement in two ways. First, an increase in pension income makes it more likely the resource constraint will be satisfied. In addition, higher pension income makes it possible to enjoy more leisure in retirement, thereby increasing the net benefits of retirement.
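
The two-step logic just described, a resource constraint followed by a comparison of net benefits, can be written out as a small sketch. The function and the numbers below are hypothetical illustrations, not drawn from any study cited here; they simply encode the conditions in the paragraph above.

```python
def chooses_retirement(resources, required_resources,
                       value_of_leisure, pension_income,
                       labor_income, costs_of_working):
    """Stylized retirement decision following the two-step logic in the text.

    1. Resource constraint: retirement is feasible only if accumulated
       resources (savings, pensions, expected family support) cover
       consumption to the end of the expected life span.
    2. If feasible, retire when the net benefit of retirement (leisure plus
       pension income) exceeds the net benefit of working (labor income
       less the costs of working, such as effort or poor health).
    """
    if resources < required_resources:
        return False  # constraint binds: the worker must remain in the labor force
    net_benefit_retired = value_of_leisure + pension_income
    net_benefit_working = labor_income - costs_of_working
    return net_benefit_retired > net_benefit_working

# Hypothetical example: a larger pension tips the decision toward retirement,
# both by relaxing the constraint and by raising the payoff to leisure.
print(chooses_retirement(resources=50_000, required_resources=40_000,
                         value_of_leisure=8_000, pension_income=5_000,
                         labor_income=20_000, costs_of_working=6_000))   # False
print(chooses_retirement(resources=50_000, required_resources=40_000,
                         value_of_leisure=8_000, pension_income=12_000,
                         labor_income=20_000, costs_of_working=6_000))   # True
```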

Health Status

Empirically, age, disability, and pension income have all been shown to increase the probability that an individual is retired. In the context of the individual model, we can use this observation to explain the overall rise of retirement. Disability, for example, has been shown to increase the probability of retirement, both today and especially in the past. However, it is unlikely that the rise of retirement was caused by increases in disability rates — advances in health have made the overall population much healthier. Costa (1998), for example, shows that chronic conditions were much more prevalent for the elderly born in the nineteenth century than for men born in the twentieth century.

The Decline of Agriculture

Older farmers are somewhat more likely to be in the labor force than nonfarmers. Furthermore, the proportion of people employed in agriculture has declined steadily, from 51 percent of the work force in 1880, to 17 percent in 1940, to about 2 percent today (Lebergott, 1964). Therefore, as argued by Durand (1948), the decline in agriculture could explain the rise in retirement. Lee (2002) finds, though, that the decline of agriculture only explains about 20 percent of the total rise of retirement from 1880 to 1940. Since most of the shift away from agricultural work occurred before 1940, the decline of agriculture explains even less of the retirement trend since 1940. Thus, the occupational shift away from farming explains part of the rise of retirement. However, the underlying trend has been a long-term increase in the probability of retirement within all occupations.

Rising Income: The Most Likely Explanation

The most likely explanation for the rise of retirement is the overall increase in income, both from labor market earnings and from pensions. Costa (1995b) has shown that the pension income received by Union Army veterans in the early twentieth century had a strong effect on the probability that the veteran was retired. Over the period from 1890 to 1990, economic growth has led to nearly an eightfold increase in real gross domestic product (GDP) per capita. In 1890, GDP per capita was $3430 (in 1996 dollars), which is comparable to the levels of production in Morocco or Jamaica today. In 1990, real GDP per capita was $26,889. On average, Americans today enjoy a standard of living commensurate with eight times the income of Americans living a century ago. More income has made it possible to save for an extended retirement.

Rising income also explains the closing of differences in retirement behavior by race and region by the 1950s. Early in the century blacks and Southerners earned much lower income than Northern whites, but these groups made substantial gains in earnings by 1950. In the second half of the twentieth century, the increasing availability of pension income has also made retirement more attractive. Expansions in Social Security benefits, Medicare, and growth in employer-provided pensions all serve to increase the income available to people in retirement.

Costa (1998) has found that income is now less important to the decision to retire than it once was. In the past, only the rich could afford to retire. Income is no longer a binding constraint. One reason is that Social Security provides a safety net for those who are unable or unwilling to save for retirement. Another reason is that leisure has become much cheaper over the last century. Television, for example, allows people to enjoy concerts and sporting events at a very low price. Golf courses and swimming pools, once available only to the rich, are now publicly provided. Meanwhile, advances in health have allowed people to enjoy leisure and travel well into old age. All of these factors have made retirement so much more attractive that people of all income levels now choose to leave the labor force in old age.

Financing Retirement

Rising income also provided the young with a new strategy for planning for old age and retirement. Ransom and Sutch (1986a,b) and Sundstrom and David (1988) hypothesize that in the nineteenth century men typically used the promise of a bequest as an incentive for children to help their parents in old age. As more opportunities for work off the farm became available, children left home and defaulted on the implicit promise to care for retired parents. Children became an unreliable source of old age support, so parents stopped relying on children — had fewer babies — and began saving (in bank accounts) for retirement.

To support the “babies-to-bank accounts” theory, Sundstrom and David look for evidence of an inheritance-for-old age support bargain between parents and children. They find that many wills, particularly in colonial New England and some ethnic communities in the Midwest, included detailed clauses specifying the care of the surviving parent. When an elderly parent transferred property directly to a child, the contracts were particularly specific, often specifying the amount of food and firewood with which the parent was to be supplied. There is also some evidence that people viewed children and savings as substitute strategies for retirement planning. Haines (1985) uses budget studies from northern industrial workers in 1890 and finds a negative relationship between the number of children and the savings rate. Short (2001) conducts similar studies for southern men that indicate the two strategies were not substitutes until at least 1920. This suggests that the transition from babies to bank accounts occurred later in the South, only as income began to approach northern levels.

Pensions and Government Retirement Programs

Military and Municipal Pensions (1781-1934)

In addition to the rise in labor market income, the availability of pension income greatly increased with the development of Social Security and the expansion of private (employer-provided) pensions. In the U.S., public (government-provided) pensions originated with the military pensions that have been available to disabled veterans and widows since the colonial era. Military pensions became available to a large proportion of Americans after the Civil War, when the federal government provided pensions to Union Army widows and veterans disabled in the war. The Union Army pension program expanded greatly as a result of the Pension Act of 1890. As a result of this law, pensions were available for all veterans age 65 and over who had served more than 90 days and were honorably discharged, regardless of current employment status. In 1900, about 20 percent of all white men age 55 and over received a Union Army pension. The Union Army pension was generous even by today’s standards. Costa (1995b) finds that the average pension replaced about 30 percent of the income of a laborer. At its peak of nearly one million pensioners in 1902, the program consumed about 30 percent of the federal budget.

Each of the formerly Confederate states also provided pensions to its Confederate veterans. Most southern states began paying pensions to veterans disabled in the war and to war widows around 1880. These pensions were gradually liberalized to include most poor or disabled veterans and their widows. Confederate veteran pensions were much less generous than Union Army pensions. By 1910, the average Confederate pension was only about one-third the amount awarded to the average Union veteran.

By the early twentieth century, state and municipal governments also began paying pensions to their employees. Most major cities provided pensions for their firemen and police officers. By 1916, 33 states had passed retirement provisions for teachers. In addition, some states provided limited pensions to poor elderly residents. By 1934, 28 states had established these pension programs (See Craig in this Encyclopedia for more on public pensions).

Private Pensions (1875-1934)

As military and civil service pensions became available to more men, private firms began offering pensions to their employees. The American Express Company developed the first formal pension in 1875. Railroads, among the largest employers in the country, also began providing pensions in the late nineteenth century. Williamson (1992) finds that early pension plans, like that of the Pennsylvania Railroad, were funded entirely by the employer. Thirty years of service were required to qualify for a pension, and retirement was mandatory at age 70. Because of the lengthy service requirement and mandatory retirement provision, firms viewed pensions as a way to reduce labor turnover and as a more humane way to remove older, less productive employees. In addition, the 1926 Revenue Act excluded from current taxation all income earned in pension trusts. This tax advantage provided additional incentive for firms to provide pensions. By 1930, a majority of large firms had adopted pension plans, covering about 20 percent of all industrial workers.

In the early twentieth century, labor unions also provided pensions to their members. By 1928, thirteen unions paid pension benefits. Most of these were craft unions, whose members were typically employed by smaller firms that did not provide pensions.

Most private pensions survived the Great Depression. Exceptions were those plans that were funded under a ‘pay as you go’ system — where benefits were paid out of current earnings, rather than from built-up reserves. Many union pensions were financed under this system, and hence failed in the 1930s. Thanks to strong political allies, the struggling railroad pensions were taken over by the federal government in 1937.

Social Security (1935-1991)

The Social Security system was designed in 1935 to extend pension benefits to those not covered by a private pension plan. The Social Security Act consisted of two programs, Old Age Assistance (OAA) and Old Age Insurance (OAI). The OAA program provided federal matching funds to subsidize state old age pension programs. The availability of federal funds quickly motivated many states to develop a pension program or to increase benefits. By 1950, 22 percent of the population age 65 and over received OAA benefits. The OAA program peaked at this point, though, as the newly liberalized OAI program began to dominate Social Security. The OAI program is administered by the federal government, and financed by payroll taxes. Retirees (and later, survivors, dependents of retirees, and the disabled) who have paid into the system are eligible to receive benefits. The program remained small until 1950, when coverage was extended to include farm and domestic workers, and average benefits were increased by 77 percent. In 1965, the Social Security Act was amended to include Medicare, which provides health insurance to the elderly. The Social Security program continued to expand in the late 1960s and early 1970s — benefits increased 13 percent in 1968, another 15 percent in 1969, and 20 percent in 1972.

In the late 1970s and early 1980s Congress was finally forced to slow the growth of Social Security benefits, as the struggling economy introduced the possibility that the program would not be able to pay beneficiaries. In 1977, the formula for determining benefits was adjusted downward. Reforms in 1983 included the delay of a cost-of-living adjustment, the taxation of up to half of benefits, and payroll tax increases.

Today, Social Security benefits are the main source of retirement income for most retirees. Poterba, Venti, and Wise (1994) find that Social Security wealth was three times as large as all the other financial assets of those age 65-69 in 1991. The role of Social Security benefits in the budgets of elderly households varies greatly. In elderly households with less than $10,000 in income in 1990, 75 percent of income came from Social Security. Higher income households gain larger shares of income from earnings, asset income, and private pensions. In households with $30,000 to $50,000 in income, less than 30 percent was derived from Social Security.

The Growth of Private Pensions (1935-2000)

Even in the shadow of the Social Security system, employer-provided pensions continued to grow. The Wage and Salary Act of 1942 froze wages in an attempt to contain wartime inflation. In order to attract employees in a tight labor market, firms increasingly offered generous pensions. Providing pensions had the additional benefit that the firm’s contributions were tax deductible. Therefore, pensions provided firms with a convenient tax shelter from high wartime tax rates. From 1940 to 1960, the number of people covered by private pensions increased from 3.7 million to 23 million, or to nearly 30 percent of the labor force.

In the 1960s and 1970s, the federal government acted to regulate private pensions, and to provide tax incentives (like those for employer-provided pensions) for those without access to private pensions to save for retirement. Since 1962, the self-employed have been able to establish ‘Keogh plans’ — tax-deferred accounts for retirement savings. In 1974, the Employee Retirement Income Security Act (ERISA) regulated private pensions to ensure their solvency. Under this law, firms are required to follow funding requirements and to insure against unexpected events that could cause insolvency. To further level the playing field, ERISA provided those not covered by a private pension with the option of saving in a tax-deductible Individual Retirement Account (IRA). The option of saving in a tax-advantaged IRA was extended to everyone in 1981.

Over the last thirty years, the type of pension plan that firms offer employees has shifted from ‘defined benefit’ to ‘defined contribution’ plans. Defined benefit plans, like Social Security, specify the amount of benefits the retiree will receive. Defined contribution plans, on the other hand, specify only how much the employer will contribute to the plan. Actual benefits then depend on the performance of the pension investments. The switch from defined benefit to defined contribution plans therefore shifts the risk of poor investment performance from the employer to the employee. The employee stands to benefit, though, because the high long-run average returns on stock market investments may lead to a larger retirement nest egg. Recently, 401(k) plans have become a popular type of pension plan, particularly in the service industries. These plans typically involve voluntary employee contributions that are tax deductible to the employee, employer matching of these contributions, and more choice as far as how the pension is invested.
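
A stylized comparison may make the distinction concrete. In the sketch below (illustrative only; the accrual rate, contributions, and returns are hypothetical and not drawn from any actual plan), the defined benefit plan pays a fixed fraction of final salary per year of service, while the defined contribution balance depends on realized investment returns, so the employee bears the investment risk.

```python
def defined_benefit(final_salary, years_of_service, accrual_rate=0.015):
    # A DB plan promises the benefit itself, e.g. 1.5% of final salary
    # per year of service, regardless of how the pension fund performs.
    return accrual_rate * years_of_service * final_salary

def defined_contribution(annual_contribution, annual_returns):
    # A DC plan promises only the contribution; the eventual benefit
    # depends on the returns earned on the accumulating balance.
    balance = 0.0
    for r in annual_returns:
        balance = (balance + annual_contribution) * (1 + r)
    return balance

# Hypothetical worker: 30 years of service, $50,000 final salary,
# $3,000 contributed to the DC account each year.
print(defined_benefit(50_000, 30))                      # fixed annual benefit
print(defined_contribution(3_000, [0.07] * 30))         # balance with steady 7% returns
print(defined_contribution(3_000, [0.12, -0.05] * 15))  # same contributions, volatile returns
```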

Summary and Conclusions

The retirement pattern we see today, typically involving decades of self-financed leisure, developed gradually over the last century. Economic historians have shown that rising labor market and pension income largely explain the dramatic rise of retirement. Rather than being pushed out of the labor force because of increasing obsolescence, older men have increasingly chosen to use their rising income to finance an earlier exit from the labor force. In addition to rising income, the decline of agriculture, advances in health, and the declining cost of leisure have contributed to the popularity of retirement. Rising income has also provided the young with a new strategy for planning for old age and retirement. Instead of being dependent on children in retirement, men today save for their own, more independent, retirement.

References

Achenbaum, W. Andrew. Social Security: Visions and Revisions. New York: Cambridge University Press, 1986.

Bureau of Labor Statistics, cpsaat3.pdf

Costa, Dora L. The Evolution of Retirement: An American Economic History, 1880-1990. Chicago: University of Chicago Press, 1998.

Costa, Dora L. “Agricultural Decline and the Secular Rise in Male Retirement Rates.” Explorations in Economic History 32, no. 4 (1995a): 540-552.

Costa, Dora L. “Pensions and Retirement: Evidence from Union Army Veterans.” Quarterly Journal of Economics 110, no. 2 (1995b): 297-319.

Durand, John D. The Labor Force in the United States 1890-1960. New York: Gordon and Breach Science Publishers, 1948.

Easterlin, Richard A. “Interregional Differences in per Capita Income, Population, and Total Income, 1840-1950.” In Trends in the American Economy in the Nineteenth Century: A Report of the National Bureau of Economic Research, Conference on Research in Income and Wealth. Princeton, NJ: Princeton University Press, 1960.

Easterlin, Richard A. “Regional Income Trends, 1840-1950.” In The Reinterpretation of American Economic History, edited by Robert W. Fogel and Stanley L. Engerman. New York: Harper & Row, 1971.

Gendell, Murray. “Trends in Retirement Age in Four Countries, 1965-1995.” Monthly Labor Review 121, no. 8 (1998): 20-30.

Glasson, William H. Federal Military Pensions in the United States. New York: Oxford University Press, 1918.

Glasson, William H. “The South’s Pension and Relief Provisions for the Soldiers of the Confederacy.” Publications of the North Carolina Historical Commission, Bulletin no. 23, Raleigh, 1918.

Goldin, Claudia. Understanding the Gender Gap: An Economic History of American Women. New York: Oxford University Press, 1990.

Graebner, William. A History of Retirement: The Meaning and Function of an American Institution, 1885-1978. New Haven: Yale University Press, 1980.

Haines, Michael R. “The Life Cycle, Savings, and Demographic Adaptation: Some Historical Evidence for the United States and Europe.” In Gender and the Life Course, edited by Alice S. Rossi, pp. 43-63. New York: Aldine Publishing Co., 1985.

Kingson, Eric R. and Edward D. Berkowitz. Social Security and Medicare: A Policy Primer. Westport, CT: Auburn House, 1993.

Lebergott, Stanley. Manpower in Economic Growth. New York: McGraw Hill, 1964.

Lee, Chulhee. “Sectoral Shift and the Labor-Force Participation of Older Males in the United States, 1880-1940.” Journal of Economic History 62, no. 2 (2002): 512-523.

Maloney, Thomas N. “African Americans in the Twentieth Century.” EH.Net Encyclopedia, edited by Robert Whaples, Jan 18, 2002. http://www.eh.net/encyclopedia/contents/maloney.african.american.php

Moen, Jon R. Essays on the Labor Force and Labor Force Participation Rates: The United States from 1860 through 1950. Ph.D. dissertation, University of Chicago, 1987.

Moen, Jon R. “Rural Nonfarm Households: Leaving the Farm and the Retirement of Older Men, 1860-1980.” Social Science History 18, no. 1 (1994): 55-75.

Ransom, Roger and Richard Sutch. “Babies or Bank Accounts, Two Strategies for a More Secure Old Age: The Case of Workingmen with Families in Maine, 1890.” Paper prepared for presentation at the Eleventh Annual Meeting of the Social Science History Association, St. Louis, 1986a.

Ransom, Roger L. and Richard Sutch. “Did Rising Out-Migration Cause Fertility to Decline in Antebellum New England? A Life-Cycle Perspective on Old-Age Security Motives, Child Default, and Farm-Family Fertility.” California Institute of Technology, Social Science Working Paper, no. 610, April 1986b.

Ruggles, Steven and Matthew Sobek, et al. Integrated Public Use Microdata Series: Version 2.0. Minneapolis: Historical Census Projects, University of Minnesota, 1997. http://www.ipums.umn.edu

Short, Joanna S. “The Retirement of the Rebels: Georgia Confederate Pensions and Retirement Behavior in the New South.” Ph.D. dissertation, Indiana University, 2001.

Sundstrom, William A. and Paul A. David. “Old-Age Security Motives, Labor Markets, and Farm Family Fertility in Antebellum America.” Explorations in Economic History 25, no. 2 (1988): 164-194.

Williamson, Samuel H. “United States and Canadian Pensions before 1930: A Historical Perspective.” In Trends in Pensions, U.S. Department of Labor, Vol. 2, 1992, pp. 34-45.

Williamson, Samuel H. The Development of Industrial Pensions in the United States during the Twentieth Century. World Bank, Policy Research Department, 1995.

Citation: Short, Joanna. “Economic History of Retirement in the United States”. EH.Net Encyclopedia, edited by Robert Whaples. September 30, 2002. URL http://eh.net/encyclopedia/economic-history-of-retirement-in-the-united-states/

Reconstruction Finance Corporation

James Butkiewicz, University of Delaware

Introduction

The Reconstruction Finance Corporation (RFC) was established during the Hoover administration with the primary objective of providing liquidity to, and restoring confidence in, the banking system. The banking system experienced extensive pressure during the economic contraction of 1929-1933. During the contraction period, many banks had to suspend business operations and most of these ultimately failed. A number of these suspensions occurred during banking panics, when large numbers of depositors rushed to convert their deposits to cash out of fear that their bank might fail. Since this period was prior to the establishment of federal deposit insurance, bank depositors lost part or all of their deposits when their bank failed.

During its first thirteen months of operation, the RFC’s primary activity was to make loans to banks and financial institutions. During President Roosevelt’s New Deal, the RFC’s powers were expanded significantly. At various times, the RFC purchased bank preferred stock, made loans to assist agriculture, housing, exports, business, governments, and for disaster relief, and even purchased gold at the President’s direction in order to change the market price of gold. The scope of RFC activities was expanded further immediately before and during World War II. The RFC established or purchased, and funded, eight corporations that made important contributions to the war effort. After the war, the RFC’s activities were limited primarily to making loans to business. RFC lending ended in 1953, and the corporation ceased operations in 1957, when all remaining assets were transferred to other government agencies.

The Genesis of the Reconstruction Finance Corporation

The difficulties experienced by the American banking system were one of the defining characteristics of the Great Contraction of 1929-1933. During this period, the American banking system was composed of a very large number of banks. At the end of December 1929, there were 24,633 banks in the United States. The vast majority of these banks were small, serving small towns and rural communities. These small banks were particularly susceptible to local economic difficulties, which could result in the failure of the bank.

The Federal Reserve and Small Banks

The Federal Reserve System was created in 1913 to address the problem of periodic banking crises. The Fed had the ability to act as a lender of last resort, providing funds to banks during crises. While nationally chartered banks were required to join the Fed, state-chartered banks could join the Fed at their discretion. Most state-chartered banks chose not to join the Federal Reserve System. The majority of the small banks in rural communities were not Fed members. Thus, during crises, these banks were unable to seek assistance from the Fed, and the Fed felt no obligation to engage in a general expansion of credit to assist nonmember banks.

How Banking Panics Develop

At this time there was no federal deposit insurance system, so bank customers generally lost part or all of their deposits when their bank failed. Fear of failure sometimes caused people to panic. In a panic, bank customers attempt to immediately withdraw their funds. While banks hold enough cash for normal operations, they use most of their deposited funds to make loans and purchase interest-earning assets. In a panic, banks are forced to attempt to rapidly convert these assets to cash. Frequently, they are forced to sell assets at a loss to obtain cash quickly, or may be unable to sell assets at all. As losses accumulate, or cash reserves dwindle, a bank becomes unable to pay all depositors, and must suspend operations. During this period, most banks that suspended operations declared bankruptcy. Bank suspensions and failures may incite panic in adjacent communities or regions. This spread of panic, or contagion, can result in a large number of bank failures. Not only do customers lose some or all of their deposits, but also people become wary of banks in general. A widespread withdrawal of bank deposits reduces the amount of money and credit in society. This monetary contraction can contribute to a recession or depression.
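
The mechanics described in this paragraph can be put into a stylized balance-sheet sketch (illustrative only; the numbers are hypothetical). A bank that must meet withdrawals beyond its cash reserve is forced to sell earning assets, and if those assets can only be sold at a discount, losses mount until the bank must suspend.

```python
def meets_withdrawals(cash, assets, withdrawals, fire_sale_price=0.80):
    """Stylized panic: withdrawals beyond cash must be met by selling assets.

    cash            -- reserves held for normal operations
    assets          -- book value of loans and securities
    withdrawals     -- deposits demanded back by panicked customers
    fire_sale_price -- cents on the dollar received in a hurried sale
    """
    shortfall = withdrawals - cash
    if shortfall <= 0:
        return True  # normal operations: cash covers withdrawals
    # Assets must be liquidated at a loss to raise the shortfall.
    assets_needed = shortfall / fire_sale_price
    return assets_needed <= assets  # otherwise the bank must suspend

# Hypothetical bank: $10 of cash and $90 of earning assets per $100 of deposits.
print(meets_withdrawals(cash=10, assets=90, withdrawals=8))    # True: ordinary day
print(meets_withdrawals(cash=10, assets=90, withdrawals=60))   # True, but only by dumping assets at a loss
print(meets_withdrawals(cash=10, assets=90, withdrawals=95))   # False: suspension
```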

Bank failures were a common event throughout the 1920s. In any year, it was normal for several hundred banks to fail. In 1930, the number of failures increased substantially. Failures and contagious panics occurred repeatedly during the contraction years. President Hoover recognized that the banking system required assistance. However, the President also believed that this assistance, like charity, should come from the private sector rather than the government, if at all possible.

The National Credit Corporation

To this end, Hoover encouraged a number of major banks to form the National Credit Corporation (NCC), to lend money to other banks experiencing difficulties. The NCC was announced on October 13, 1931, and began operations on November 11, 1931. However, the banks in the NCC were not enthusiastic about this endeavor, and made loans very reluctantly, requiring that borrowing banks pledge their best assets as collateral, or security for the loan. Hoover quickly recognized that the NCC would not provide the necessary relief to the troubled banking system.

RFC Approved, January 1932

Eugene Meyer, Governor of the Federal Reserve Board, convinced the President that a public agency was needed to make loans to troubled banks. On December 7, 1931, a bill was introduced to establish the Reconstruction Finance Corporation. The legislation was approved on January 22, 1932, and the RFC opened for business on February 2, 1932.

The original legislation authorized the RFC’s existence for a ten-year period. However, Presidential approval was required to operate beyond January 1, 1933, and Congressional approval was required for lending authority to continue beyond January 1, 1934. Subsequent legislation extended the life of the RFC and added many additional responsibilities and authorities.

The RFC was funded through the United States Treasury. The Treasury provided $500 million of capital to the RFC, and the RFC was authorized to borrow an additional $1.5 billion from the Treasury. The Treasury, in turn, sold bonds to the public to fund the RFC. Over time, this borrowing authority was increased manyfold. Subsequently, the RFC was authorized to sell securities directly to the public to obtain funds. However, most RFC funding was obtained by borrowing from the Treasury. During its years of existence, the RFC borrowed $51.3 billion from the Treasury, and $3.1 billion from the public.

The RFC During the Hoover Administration

RFC Authorized to Lend to Banks and Others

The original legislation authorized the RFC to make loans to banks and other financial institutions, to railroads, and for crop loans. While the original objective of the RFC was to help banks, railroads were assisted because many banks owned railroad bonds, which had declined in value as the railroads themselves suffered from a decline in their business. If the railroads recovered, their bonds would increase in value. This increase, or appreciation, of bond prices would improve the financial condition of banks holding these bonds.

Through legislation approved on July 21, 1932, the RFC was authorized to make loans for self-liquidating public works projects, and to states to provide relief and work relief to needy and unemployed people. This legislation also required that the RFC report to Congress, on a monthly basis, the identity of all new borrowers of RFC funds.

RFC Undercut by Requirement That It Publish Names of Banks Receiving Loans

From its inception through Franklin Roosevelt’s inauguration on March 4, 1933, the RFC primarily made loans to financial institutions. During the first months following the establishment of the RFC, bank failures and currency holdings outside of banks both declined. However, several loans aroused political and public controversy, which was the reason the July 21, 1932 legislation included the provision that the identity of banks receiving RFC loans from that date forward be reported to Congress. The Speaker of the House of Representatives, John Nance Garner, ordered that the identity of the borrowing banks be made public. The publication of the identity of banks receiving RFC loans, which began in August 1932, reduced the effectiveness of RFC lending. Bankers became reluctant to borrow from the RFC, fearing that public revelation of an RFC loan would cause depositors to fear the bank was in danger of failing, and possibly start a panic. Legislation passed in January 1933 required that the RFC publish a list of all loans made from its inception through July 21, 1932, the effective date for the publication of new loan recipients.

RFC, Politics and Bank Failure in February and March 1933

In mid-February 1933, banking difficulties developed in Detroit, Michigan. The RFC was willing to make a loan to the troubled bank, the Union Guardian Trust, to avoid a crisis. The bank was one of Henry Ford’s banks, and Ford had deposits of $7 million in this particular bank. Michigan Senator James Couzens demanded that Henry Ford subordinate his deposits in the troubled bank as a condition of the loan. If Ford agreed, he would risk losing all of his deposits before any other depositor lost a penny. Ford and Couzens had once been partners in the automotive business, but had become bitter rivals. Ford refused to agree to Couzens’ demand, even though failure to save the bank might start a panic in Detroit. When the negotiations failed, the governor of Michigan declared a statewide bank holiday. In spite of the RFC’s willingness to assist the Union Guardian Trust, the crisis could not be averted.

The crisis in Michigan resulted in a spread of panic, first to adjacent states, but ultimately throughout the nation. By the day of Roosevelt’s inauguration, March 4, all states had declared bank holidays or had restricted the withdrawal of bank deposits for cash. As one of his first acts as president, on March 5 President Roosevelt announced to the nation that he was declaring a nationwide bank holiday. Almost all financial institutions in the nation were closed for business during the following week. The RFC lending program failed to prevent the worst financial crisis in American history.

Criticisms of the RFC

The effectiveness of RFC lending to March 1933 was limited in several respects. The RFC required banks to pledge assets as collateral for RFC loans. A criticism of the RFC was that it often took a bank’s best loan assets as collateral. Thus, the liquidity provided came at a steep price to banks. Also, the publicity of new loan recipients beginning in August 1932, and general controversy surrounding RFC lending probably discouraged banks from borrowing. In September and November 1932, the amount of outstanding RFC loans to banks and trust companies decreased, as repayments exceeded new lending.

The RFC in the New Deal

FDR Sees Advantages in Using the RFC

President Roosevelt inherited the RFC. He and his colleagues, as well as Congress, found the independence and flexibility of the RFC to be particularly useful. The RFC was an executive agency with the ability to obtain funding through the Treasury outside of the normal legislative process. Thus, the RFC could be used to finance a variety of favored projects and programs without obtaining legislative approval. RFC lending did not count toward budgetary expenditures, so the expansion of the role and influence of the government through the RFC was not reflected in the federal budget.

RFC Given the Authority to Buy Bank Stock

The first task was to stabilize the banking system. On March 9, 1933, the Emergency Banking Act was approved as law. This legislation and a subsequent amendment improved the RFC’s ability to assist banks by giving it the authority to purchase bank preferred stock, capital notes and debentures (bonds), and to make loans using bank preferred stock as collateral. While banks were initially reluctant, the RFC encouraged banks to issue preferred stock for it to purchase. This provision of capital funds to banks strengthened the financial position of many banks. Banks could use the new capital funds to expand their lending, and did not have to pledge their best assets as collateral. The RFC purchased $782 million of bank preferred stock from 4,202 individual banks, and $343 million of capital notes and debentures from 2,910 individual banks and trust companies. In sum, the RFC assisted almost 6,800 banks. Most of these purchases occurred in the years 1933 through 1935.

The preferred stock purchase program did have controversial aspects. The RFC officials at times exercised their authority as shareholders to reduce salaries of senior bank officers, and on occasion, insisted upon a change of bank management. However, the infusion of new capital into the banking system, and the establishment of the Federal Deposit Insurance Corporation to insure bank depositors against loss, stabilized the financial system. In the years following 1933, bank failures declined to very low levels.

RFC’s Assistance to Farmers

Throughout the New Deal years, the RFC’s assistance to farmers was second only to its assistance to bankers. RFC lending to agricultural financing institutions totaled $2.5 billion. Over half, $1.6 billion, went to its subsidiary, the Commodity Credit Corporation. The Commodity Credit Corporation was incorporated in Delaware in 1933, and operated by the RFC for six years. In 1939, control of the Commodity Credit Corporation was transferred to the Department of Agriculture, where it remains today.

Commodity Credit Corporation

The agricultural sector was hit particularly hard by depression, drought, and the introduction of the tractor, displacing many small and tenant farmers. The primary New Deal program for farmers was the Agricultural Adjustment Act. Its objective was to reverse the decline of product prices and farm incomes experienced since 1920. The Commodity Credit Corporation contributed to this objective by purchasing selected agricultural products at guaranteed prices, typically above the prevailing market price. Thus, the CCC purchases established a guaranteed minimum price for these farm products.

The RFC also funded the Electric Home and Farm Authority, a program designed to enable low- and moderate- income households to purchase gas and electric appliances. This program would create demand for electricity in rural areas, such as the area served by the new Tennessee Valley Authority. Providing electricity to rural areas was the objective of the Rural Electrification Program.

Decline in Bank Lending Concerns RFC and New Deal Officials

After 1933, bank assets and bank deposits both increased. However, banks changed their asset allocation dramatically during the recovery years. Prior to the depression, banks primarily made loans, and purchased some securities, such as U.S. Treasury securities. During the recovery years, banks primarily purchased securities, which involved less risk. Whether due to concerns over safety, or because potential borrowers had weakened financial positions due to the depression, bank lending did not recover, as indicated by the data in Table 1.

The relative decline in bank lending was a major concern for RFC officials and the New Dealers, who felt that lack of lending by banks was hindering economic recovery. The sentiment within the Roosevelt administration was that the problem was banks’ unwillingness to lend. They viewed the lending by the Commodity Credit Corporation and the Electric Home and Farm Authority, as well as reports from members of Congress, as evidence that there was unsatisfied business loan demand.

Table 1
Year | Bank Loans and Investments ($ millions) | Bank Loans ($ millions) | Bank Net Deposits ($ millions) | Loans as % of Loans and Investments | Loans as % of Net Deposits
1921 39895 28927 30129 73% 96%
1922 39837 27627 31803 69% 87%
1923 43613 30272 34359 69% 88%
1924 45067 31409 36660 70% 86%
1925 48709 33729 40349 69% 84%
1926 51474 36035 42114 70% 86%
1927 53645 37208 43489 69% 86%
1928 57683 39507 44911 68% 88%
1929 58899 41581 45058 71% 92%
1930 58556 40497 45586 69% 89%
1931 55267 35285 41841 64% 84%
1932 46310 27888 32166 60% 87%
1933 40305 22243 28468 55% 78%
1934 42552 21306 32184 50% 66%
1935 44347 20213 35662 46% 57%
1936 48412 20636 41027 43% 50%
1937 49565 22410 42765 45% 52%
1938 47212 20982 41752 44% 50%
1939 49616 21320 45557 43% 47%
1940 51336 22340 49951 44% 45%

Source: Banking and Monetary Statistics, 1914-1941.
Net Deposits are total deposits less interbank deposits.
All data are for the last business day of June in each year.
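
As an illustration of how the two percentage columns in Table 1 are constructed, the following minimal sketch (illustrative only, not part of the original article) reproduces the ratios in the 1929 row from its dollar figures:

```python
# A minimal sketch (illustrative, not from the source) showing how the
# percentage columns in Table 1 derive from the dollar columns.
# Figures are the 1929 row, in millions of dollars.
loans_and_investments = 58_899
loans = 41_581
net_deposits = 45_058

loans_share_of_assets = 100 * loans / loans_and_investments
loans_share_of_deposits = 100 * loans / net_deposits

print(round(loans_share_of_assets))    # 71  (matches the 71% in the table)
print(round(loans_share_of_deposits))  # 92  (matches the 92% in the table)
```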

RFC Provides Credit to Business

Due to the failure of bank lending to return to pre-Depression levels, the role of the RFC expanded to include the provision of credit to business. RFC support was deemed essential for the success of the National Recovery Administration, the New Deal program designed to promote industrial recovery. To support the NRA, legislation passed in 1934 authorized the RFC and the Federal Reserve System to make working capital loans to businesses. However, direct lending to businesses did not become an important RFC activity until 1938, when President Roosevelt encouraged expanding business lending in response to the recession of 1937-38.

RFC Mortgage Company

During the depression, many families and individuals were unable to make their mortgage payments, and had their homes repossessed. Another New Deal goal was to provide more funding for mortgages, to avoid the displacement of homeowners. In June 1934, the National Housing Act provided for the establishment of the Federal Housing Administration (FHA). The FHA would insure mortgage lenders against loss, and FHA mortgages required a smaller percentage down payment than was customary at that time, thus making it easier to purchase a house. In 1935, the RFC Mortgage Company was established to buy and sell FHA-insured mortgages.

RFC and Fannie Mae

Financial institutions were reluctant to purchase FHA mortgages, so in 1938 the President requested that the RFC establish a national mortgage association, the Federal National Mortgage Association, or Fannie Mae. Fannie Mae was originally funded by the RFC to create a market for FHA and later Veterans Administration (VA) mortgages. The RFC Mortgage Company was absorbed by the RFC in 1947. When the RFC was closed, its remaining mortgage assets were transferred to Fannie Mae. Fannie Mae evolved into a private corporation. During its existence, the RFC provided $1.8 billion of loans and capital to its mortgage subsidiaries.

RFC and Export-Import Bank

President Roosevelt sought to encourage trade with the Soviet Union. To promote this trade, the Export-Import Bank was established in 1934. The RFC provided capital, and later loans, to the Ex-Im Bank. Interest in loans to support trade was so strong that a month later a second Export-Import Bank was created to fund trade with other foreign nations. These two banks were merged in 1936, with the authority to make loans to encourage exports in general. The RFC provided $201 million of capital and loans to the Ex-Im Banks.

Other RFC activities during this period included lending to federal government agencies providing relief from the depression, such as the Public Works Administration and the Works Progress Administration, as well as disaster loans and loans to state and local governments.

RFC Pushes Up the Price of Gold, Devaluing the Dollar

Evidence of the flexibility afforded through the RFC was President Roosevelt’s use of the RFC to affect the market price of gold. The President wanted to reduce the gold value of the dollar from its official price of $20.67 per ounce. As the dollar price of gold increased, the dollar exchange rate would fall relative to currencies that had a fixed gold price. A fall in the value of the dollar makes exports cheaper and imports more expensive. In an economy with high levels of unemployment, a decline in imports and an increase in exports would increase domestic employment.

The goal of the RFC purchases was to increase the market price of gold. During October 1933 the RFC began purchasing gold at a price of $31.36 per ounce. The price was gradually increased to over $34 per ounce. The RFC price set a floor for the price of gold. In January 1934, the new official dollar price of gold was fixed at $35.00 per ounce, reducing the gold content of the dollar to 59 percent of its former value, a devaluation of roughly 41 percent.
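
A minimal sketch of the arithmetic behind this devaluation, using only the official gold prices cited above (the code itself is illustrative and not part of the original article):

```python
# A minimal sketch of the 1933-34 devaluation arithmetic, using the
# official gold prices cited in the text ($20.67 and $35.00 per ounce).
old_price = 20.67   # dollars per ounce of gold before 1933
new_price = 35.00   # official price set in January 1934

gold_price_increase = new_price / old_price - 1   # about 0.69: a 69% rise in gold's dollar price
gold_content_ratio = old_price / new_price        # about 0.59: the dollar kept 59% of its gold content
devaluation = 1 - gold_content_ratio              # about 0.41: a devaluation of roughly 41%

print(f"{gold_price_increase:.0%}, {gold_content_ratio:.0%}, {devaluation:.0%}")  # 69%, 59%, 41%
```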

Twice President Roosevelt instructed Jesse Jones, the chairman of the RFC, to stop lending, as he intended to close the RFC. The first occasion was in October 1937, and the second was in early 1940. The recession of 1937-38 caused Roosevelt to authorize the resumption of RFC lending in early 1938. The German invasion of France and the Low Countries gave the RFC new life on the second occasion.

The RFC in World War II

In 1940 the scope of RFC activities increased significantly, as the United States began preparing to assist its allies, and for possible direct involvement in the war. The RFC’s wartime activities were conducted in cooperation with other government agencies involved in the war effort. For its part, the RFC established seven new corporations, and purchased an existing corporation. The eight RFC wartime subsidiaries are listed in Table 2, below.

Table 2
RFC Wartime Subsidiaries
Metals Reserve Company
Rubber Reserve Company
Defense Plant Corporation
Defense Supplies Corporation
War Damage Corporation
U.S. Commercial Company
Rubber Development Corporation
Petroleum Reserve Corporation (later War Assets Corporation)

Source: Final Report of the Reconstruction Finance Corporation

Development of Materials Cut Off By the War

The RFC subsidiary corporations assisted the war effort as needed. These corporations were involved in funding the development of synthetic rubber, construction and operation of a tin smelter, and establishment of abaca (Manila hemp) plantations in Central America. Both natural rubber and abaca (used to produce rope products) were produced primarily in Southeast Asia, which came under Japanese control. Thus, these programs encouraged the development of alternative sources of supply of these essential materials. Synthetic rubber, which was not produced in the United States prior to the war, quickly became the primary source of rubber in the post-war years.

Other War-Related Activities

Other war-related activities included financing plant conversion and construction for the production of military and essential goods, dealing in and stockpiling strategic materials, purchasing materials to reduce the supply available to enemy nations, administering war damage insurance programs, and financing construction of oil pipelines from Texas to New Jersey to free tankers for other uses.

During its existence, RFC management made discretionary loans and investments of $38.5 billion, of which $33.3 billion was actually disbursed. Of this total, $20.9 billion was disbursed to the RFC’s wartime subsidiaries. From 1941 through 1945, the RFC authorized over $2 billion of loans and investments each year, with a peak of over $6 billion authorized in 1943. The magnitude of RFC lending had increased substantially during the war. Most lending to wartime subsidiaries ended in 1945, and all such lending ended in 1948.

The Final Years of the RFC, 1946-1953

After the war, RFC lending decreased dramatically. In the postwar years, only in 1949 was over $1 billion authorized. Through 1950, most of this lending was directed toward businesses and mortgages. On September 7, 1950, Fannie Mae was transferred to the Housing and Home Finance Agency. During its last three years, almost all RFC loans were to businesses, including loans authorized under the Defense Production Act.

Eisenhower Terminates the RFC

President Eisenhower was inaugurated in 1953, and shortly thereafter legislation was passed terminating the RFC. The original RFC legislation authorized operations for one year of a possible ten-year existence, giving the President the option of extending its operation for a second year without Congressional approval. The RFC survived much longer, continuing to provide credit for both the New Deal and World War II. Now, the RFC would finally be closed.

Small Business Administration

However, there was concern that the end of RFC business loans would hurt small businesses. Thus, the Small Business Administration (SBA) was created in 1953 to continue the program of lending to small businesses, as well as providing training programs for entrepreneurs. The disaster loan program was also transferred to the SBA.

Through legislation passed on July 30, 1953, RFC lending authority ended on September 28, 1953. The RFC continued to collect on its loans and investments through June 30, 1957, at which time all remaining assets were transferred to other government agencies. At the time the liquidation act was passed, the RFC’s synthetic rubber, tin, and abaca programs remained in operation. Synthetic rubber operations were sold or leased to private industry. The tin and abaca programs were ultimately transferred to the General Services Administration.

Successors of the RFC

Three government agencies and one private corporation that were related to the RFC continue today. The Small Business Administration was established to continue lending to small businesses. The Commodity Credit Corporation continues to provide assistance to farmers. The Export-Import Bank continues to provide loans to promote exports. Fannie Mae became a private corporation in 1968. Today it is the most important source of mortgage funds in the nation, and has become one of the largest corporations in the country. Its stock is traded on the New York Stock Exchange under the symbol FNM.

Economic Analysis of the RFC

Role of a Lender of Last Resort

The American central bank, the Federal Reserve System, was created to be a lender of last resort. A lender of last resort exists to provide liquidity to banks during crises. The famous British authority on banking, Walter Bagehot, advised, “…in a panic the holders of the ultimate Bank reserve (whether one bank or many) should lend to all that bring good securities quickly, freely, and readily. By that policy they allay a panic…”

However, the Fed was not an effective lender of last resort during the depression years. Many of the banks experiencing problems during the depression years were not members of the Federal Reserve System, and thus could not borrow from the Fed. The Fed was reluctant to assist troubled banks, and banks also feared that borrowing from the Fed might weaken depositors’ confidence.

President Hoover hoped to restore stability and confidence in the banking system by creating the Reconstruction Finance Corporation. The RFC made collateralized loans to banks. Many scholars argue that initially RFC lending did provide relief. These observations are based on the decline in bank suspensions and public currency holdings in the months immediately following the creation of the RFC in February 1932. These data are presented in Table 3.

Table 3
Month (1932) | Currency Held by the Public ($ millions) | Number of Bank Suspensions
January 4896 342
February 4824 119
March 4743 45
April 4751 74
May 4746 82
June 4959 151
July 5048 132
August 4988 85
September 4941 67
October 4863 102
November 4842 93
December 4830 161

Data sources: Currency – Friedman and Schwartz (1963)
Bank suspensions – Board of Governors (1937)

Bank suspensions occur when banks cannot open for normal business operations due to financial problems. Most bank suspensions ended in failure of the bank. Currency held by the public can be an indicator of public confidence in banks. As confidence declines, members of the public convert deposits to currency, and vice versa.

The banking situation deteriorated in June 1932 when a crisis developed in and around Chicago. Both Friedman and Schwartz (1963) and Jones (1951) assert that an RFC loan to a key bank helped to end the crisis, even though the bank subsequently failed.

The Debate over the Impact of the RFC

Two studies of RFC lending have come to differing conclusions. Butkiewicz (1995) examines the effect of RFC lending on bank suspensions and finds that lending reduced suspensions in the months prior to publication of the identities of loan recipients. He further argues that publication of the identities of banks receiving loans discouraged banks from borrowing. As noted above, RFC loans to banks declined in the two months after publication began. Mason (2001) examines the impact of lending on a sample of Illinois banks and finds that those receiving RFC loans were increasingly likely to fail. Thus, the limited evidence from scholarly studies provides conflicting results about the impact of RFC lending.

Critics of RFC lending to banks argue that the RFC took the banks’ best assets as collateral, thereby reducing bank liquidity. Also, RFC lending requirements were initially very stringent. After the financial collapse in March 1933, the RFC was authorized to provide banks with capital through preferred stock and bond purchases. This change, along with the creation of the Federal Deposit Insurance Corporation, stabilized the banking system.

Economic and Noneconomic Rationales for an Agency Like the RFC

Beginning in 1933, the RFC became more directly involved in the allocation of credit throughout the economy. There are several reasons why a government agency might actively participate in the allocation of liquid capital funds: market failure, externalities, and noneconomic motives.

A market failure occurs if private markets fail to allocate resources efficiently. For example, small business owners complain that markets do not provide enough loans at reasonable interest rates, a so-called “credit gap”. However, small business loans are riskier than loans to large corporations. Higher interest rates compensate for the greater risk involved in lending to small businesses. Thus, the case for a market failure is not compelling. However, small business loans remain politically popular.

An externality exists when the benefits to society are greater than the benefits to the individuals involved. For example, loans to troubled banks may prevent a financial crisis. Purchases of bank capital may also help stabilize the financial system. Prevention of financial crises and the possibility of a recession or depression provide benefits to society beyond the benefits to bank depositors and shareholders. Similarly, encouraging home ownership may create a more stable society. This argument is often used to justify government provision of funds to the mortgage market.

While wars are often fought over economic issues, and wars have economic consequences, a nation may become involved in a war for noneconomic reasons. Thus, the RFC wartime programs were motivated by political reasons, as much or more than economic reasons.

The RFC was a federal credit agency. The first federal credit agency was established in 1917. However, federal credit programs were relatively limited until the advent of the RFC. Many RFC lending programs were targeted to help specific sectors of the economy. A number of these activities were controversial, as are some federal credit programs today. Three important government agencies and one private corporation that descended from the RFC still operate today. All have important effects on the allocation of credit in our economy.

Criticisms of Governmental Credit Programs

Critics of federal credit programs cite several problems. One is that these programs subsidize certain activities, which may result in overproduction and misallocation of resources. For example, small businesses can obtain funds through the SBA at lower interest rates than are available through banks. This interest rate differential is a subsidy to small business borrowers. Crop loans and price supports result in overproduction of agricultural products. In general, federal credit programs reallocate capital resources to favored activities.

Finally, federal credit programs, including the RFC, are not funded as part of the normal budget process. They obtain funds through the Treasury, or their own borrowings are assumed to have the guarantee of the federal government. Thus, their borrowing is based on the creditworthiness of the federal government, not their own activities. These “off-budget” activities increase the scope of federal involvement in the economy while avoiding the normal budgetary decisions of the President and Congress. Also, these lending programs involve risk. Default on a significant number of these loans might require the federal government to bail out the affected agency. Taxpayers would bear the cost of a bailout.

Any analysis of market failures, externalities, or federal programs should involve a comparison of costs and benefits. However, precise measurement of costs and benefits in these cases is often difficult. Supporters value the benefits very highly, while opponents argue that the costs are excessive.

Conclusion

The RFC was created to assist banks during the Great Depression. It experienced some, albeit limited, success in this activity. However, the RFC’s authority to borrow directly from the Treasury outside the normal budget process proved very attractive to President Roosevelt and his advisors. Throughout the New Deal, the RFC was used to finance a vast array of favored activities. During World War II, RFC lending to its subsidiary corporations was an essential component of the war effort. It was the largest and most important federal credit program of its time. Even after the RFC was closed, some of its lending activities continued through agencies and corporations that it first established or funded. These descendant organizations, especially Fannie Mae, play a very important role in the allocation of credit in the American economy. The legacy of the RFC continues, long after it ceased to exist.


Data Sources

Banking data are from Banking and Monetary Statistics, 1914-1941, Board of Governors of the Federal Reserve System, 1943.

RFC data are from Final Report on the Reconstruction Finance Corporation, Secretary of the Treasury, 1959.

Currency data are from The Monetary History of the United States, 1867-1960, Friedman and Schwartz, 1963.

Bank suspension data are from Federal Reserve Bulletin, Board of Governors, September 1937.

References

Bagehot, Walter. Lombard Street: A Description of the Money Market. New York: Scribner, Armstrong & Co., 1873.

Board of Governors of the Federal Reserve System. Banking and Monetary Statistics, 1914-1941. Washington, DC, 1943.

Board of Governors of the Federal Reserve System. Federal Reserve Bulletin. September 1937.

Bremer, Cornelius D. American Bank Failures. New York: AMS Press, 1968.

Butkiewicz, James L. “The Impact of a Lender of Last Resort during the Great Depression: The Case of the Reconstruction Finance Corporation.” Explorations in Economic History 32, no. 2 (1995): 197-216.

Butkiewicz, James L. “The Reconstruction Finance Corporation, the Gold Standard, and the Banking Panic of 1933.” Southern Economic Journal 66, no. 2 (1999): 271-93.

Chandler, Lester V. America’s Greatest Depression, 1929-1941. New York: Harper and Row, 1970.

Friedman, Milton, and Anna J. Schwartz. The Monetary History of the United States, 1867-1960. Princeton, NJ: Princeton University Press, 1963.

Jones, Jesse H. Fifty Billion Dollars: My Thirteen Years with the RFC, 1932-1945. New York: Macmillan Co., 1951.

Keehn, Richard H., and Gene Smiley. “U.S. Bank Failures, 1932-1933: A Provisional Analysis.” Essays in Economic and Business History 6 (1988): 136-56.

Keehn, Richard H., and Gene Smiley. “U.S. Bank Failures, 1932-33: Additional Evidence on Regional Patterns, Timing, and the Role of the Reconstruction Finance Corporation.” Essays in Economic and Business History 11 (1993): 131-45.

Kennedy, Susan E. The Banking Crisis of 1933. Lexington, KY: University of Kentucky Press, 1973.

Mason, Joseph R. “Do Lender of Last Resort Policies Matter? The Effects of Reconstruction Finance Corporation Assistance to Banks During the Great Depression.” Journal of Financial Services Research 20, no 1. (2001): 77-95.

Nadler, Marcus, and Jules L. Bogen. The Banking Crisis: The End of an Epoch. New York, NY: Arno Press, 1980.

Olson, James S. Herbert Hoover and the Reconstruction Finance Corporation. Ames, IA: Iowa State University Press, 1977.

Olson, James S. Saving Capitalism: The Reconstruction Finance Corporation in the New Deal, 1933-1940. Princeton, NJ: Princeton University Press, 1988.

Saulnier, R. J., Harold G. Halcrow, and Neil H. Jacoby. Federal Lending and Loan Insurance. Princeton, NJ: Princeton University Press, 1958.

Schlesinger, Jr., Arthur M. The Age of Roosevelt: The Coming of the New Deal. Cambridge, MA: Riverside Press, 1957.

Secretary of the Treasury. Final Report on the Reconstruction Finance Corporation. Washington, DC: United States Government Printing Office, 1959.

Sprinkel, Beryl Wayne. “Economic Consequences of the Operations of the Reconstruction Finance Corporation.” Journal of Business of the University of Chicago 25, no. 4 (1952): 211-24.

Sullivan, L. Prelude to Panic: The Story of the Bank Holiday. Washington, DC: Statesman Press, 1936.

Trescott, Paul B. “Bank Failures, Interest Rates, and the Great Currency Outflow in the United States, 1929-1933.” Research in Economic History 11 (1988): 49-80.

Upham, Cyril B., and Edwin Lamke. Closed and Distressed Banks: A Study in Public Administration. Washington, DC: Brookings Institution, 1934.

Wicker, Elmus. The Banking Panics of the Great Depression. Cambridge: Cambridge University Press, 1996.

Web Links

Commodity Credit Corporation

http://www.fsa.usda.gov/pas/publications/facts/html/ccc99.htm

Ex-Im Bank http://www.exim.gov/history.html

Fannie Mae http://www.fanniemae.com/company/history.html

Small Business Administration http://www.sba.gov/aboutsba/sbahistory.doc

Citation: Butkiewicz, James. “Reconstruction Finance Corporation”. EH.Net Encyclopedia, edited by Robert Whaples. July 19, 2002. URL http://eh.net/encyclopedia/reconstruction-finance-corporation/

The History of the Radio Industry in the United States to 1940

Carole E. Scott, State University of West Georgia

The Technological Development of Radio: From Thales to Marconi

All electrically-based industries trace their ancestry back to at least 600 B.C. when the Greek philosopher Thales observed that after it is rubbed, amber (electron in Greek) attracts small objects. In 1600, William Gilbert, an Englishman, distinguished between magnetism, such as that displayed by a lodestone, and what we now call the static electricity produced by rubbing amber. In 1752, America’s multi-talented Benjamin Franklin used a kite connected to a Leyden jar during a thunderstorm to prove that a lightning flash has the same nature as static electricity. In 1831, an American, Joseph Henry, used an electromagnet to send messages by wire between buildings on Princeton’s campus. Assisted by Henry, an American artist, Samuel F. B. Morse, developed a telegraph system utilizing a key to open and close an electric circuit to transmit an intermittent signal (Morse Code) through a wire.

The possibility of transmitting messages through the air, water, or ground via low frequency magnetic waves was discovered soon after Morse invented the telegraph. Induction was the method used in the first documented “wireless telephone” demonstration by Nathan B. Stubblefield, a Kentucky farmer, in 1892. Because Stubblefield transmitted sound through the air via induction, rather than by radiation, he was not the inventor of radio.

Transmission by radiation owes its existence to electromagnetic waves, which a German, Heinrich Rudolf Hertz, demonstrated experimentally in 1887. Electromagnetic waves of 10,000 cycles a second to 1,200,000,000 cycles a second are today called radio waves. A few years after Hertz’s discovery, an American, Thomas Alva Edison, took out a patent for wireless telegraphy through the use of discontinuous radio waves. In 1894, using a different and much superior wireless telegraphy system, an Italian, Guglielmo Marconi, used discontinuous waves to send Morse Code messages through the air for short distances over land. Later he sent them across the Atlantic Ocean. On land in Europe, Marconi was stymied by laws giving government-operated postal services a monopoly on message delivery, and initially he was able to transmit radio waves very far only over water.

Several Americans transmitted speech without the benefit of wires prior to 1900. Alexander G. Bell, for example, experimented in 1880 with transmitting sound with rays of light, whose frequency exceeds that of radio waves. His test of what he called the photophone was said to be the first practical test of such a device ever made. Although Marconi is widely given the credit for being the first man to develop a successful wireless system, some believe that others, including Nikola Tesla, preceded him. However, it is clear that Marconi had far more influence on the shaping of the radio industry than these men did.

The Structure of the Radio Industry before 1920: Inventor-Entrepreneurs

As had been true of earlier high-tech industries such as the telegraph and electric lighting in their formative years, what was accomplished in the early years of the radio industry was primarily brought about by inventor-entrepreneurs. None of the major electrical and telephone companies played a role in the formative years of the radio industry. So this industry’s early history is a story of individuals, many of whom were both inventors and entrepreneurs. However, after 1920 its history is largely one of organizations.

Scientists obtain their objective by discovering the laws of nature. Inventors, on the other hand, use the laws of nature to find a way to do something. Because they do not know or do not care what scientists’ laws say is possible, inventors will try things that scientists will not. When the creations of inventors work in seeming defiance of scientists’ work, scientists rush to the lab to find out why. Scientists thought that radio waves could not be transmitted beyond the horizon because they thought that this would require that they bend to follow the curvature of the Earth. Marconi tried transmitting beyond the horizon anyway and succeeded. A typical scientist would not have tried to do this because he knew better and his fellow scientists might laugh at him.

Marconi

Marconi may not have been visionary enough to found the radio broadcasting industry. Vision was required because, while there was already an established market for electronic, point-to-point communication, there was no existing market for broadcasting, nor could the technology for transmitting speech be as easily developed as could that for transmitting dots and dashes. In point-to-point communications radio’s disadvantage was lack of privacy. Its competitive advantage was much lower cost than transmission by wire over land and undersea cable.

Due in part to his Marconi Company’s purchase of competitors who had infringed on its patents, by the time World War I broke out, the American Marconi Company dominated the American radio market. As a result, it had no overwhelming need to develop a new service. In addition, Marconi had no surplus funds to plow into a new business. Shortly after the end of World War I, the United States’ government’s hostile attitude convinced Marconi that his British-based company had no future in America, and he agreed to sell it to the General Electric Company (GE). Marconi had wanted to create an international wireless monopoly. However, the United States government opposed the creation of a foreign-owned wireless monopoly. During World War I the United States Navy was given control of all the nation’s private wireless facilities. After the war the Navy wanted wireless to continue to be a government-controlled monopoly. Unable to achieve this, the Navy recommended that an American-owned company be established to control the manufacture and marketing of wireless in the United States. As a result, the government-sponsored Radio Corporation of America was created to take over the assets of Marconi’s American company.

The four chief players in American radio’s early years, Marconi, Canadian-born Reginald Fessenden, Lee deForest, and John Stone Stone [sic], were all inventor/entrepreneurs. Marconi successfully exploited the interdependence among technology, business strategy, and the press. He was the only one of the four to have an adequate business strategy. Only he and deForest took full advantage of the press. However, deForest seems to have used the press more to sell stock than apparatus. Marconi was also more astute in his patent dealings than were his American competitors. For example, to protect himself from a possible patent suit, he purchased from Thomas A. Edison his patent on a system of wireless telegraphy that Edison had never used. Marconi never used it either because it was inferior to one he developed.

Fessenden

Fessenden, a very prolific inventor, first experimented with voice transmission while working for the United States Weather Bureau. In 1900 he left what is now the University of Pittsburgh, where he was head of the electrical engineering department, to develop a method for the U.S. Weather Bureau to transmit weather reports. That year, through the use of a transmitter that produced discontinuous waves, he succeeded in transmitting speech.

Although discontinuous waves would satisfactorily transmit the dots and dashes of Morse code, high quality voice and music cannot be transmitted in this way. So, in 1902, Fessenden switched to using a continuous wave, becoming the first person to transmit voice and music by this method. On Christmas Eve, 1906, Fessenden made history by broadcasting music and speech from Massachusetts that was heard as far away as the West Indies. After picking up this broadcast, the United Fruit Company purchased equipment from Fessenden to communicate with its ships. Navies and shipping companies were among those most interested in purchasing early radio equipment. During World War I armies also made significant use of radio. Important among its army uses was communicating with airplanes.

Because he did not provide a regular schedule of programming for the public, Fessenden is not usually credited with having operated the first broadcasting station. Nonetheless, he is widely recognized as the father of broadcasting because those who had gone before him had only used radio to deliver messages from one person to another. However, despite being preoccupied with laboratory work and being unsuited by temperament and experience to be a businessman, he chose to directly manage his company. It failed, and an embittered Fessenden left the radio industry.

deForest

Lee deForest, whose doctoral dissertation was about Hertzian waves, received his Ph.D. from Yale in 1899. His first job was with Western Electric. By 1902 he had started the DeForest Wireless Telegraph Company, which became insolvent in 1906. His second company, the DeForest Radio Telephone Company, began to fail in 1909. In 1912 he was indicted for using the mails to defraud by promoting “a worthless device,” the Audion tube. He was acquitted. The Audion tube (later known as a triode tube) was far from being a worthless device, as it was a key component of radios so long as vacuum tubes continued to be used.

The development of a commercially viable radio broadcasting industry could not have taken place without the invention of the vacuum tube, which had its origins in Englishman Michael Faraday’s belief that an electric current could probably pass through a vacuum. (The vacuum tube’s obsolescence was the result of a study of semiconductors in 1948 by William Shockley, Walter Brattain, and John Bardeen. They discovered that the introduction of impurities into semiconductors provided a solid-state material that would not only rectify a current, but also amplify it. Transistors using this material rapidly replaced vacuum tubes. Later it became possible to etch transistors on small pieces of silicon in integrated circuits.)

In 1910, deForest broadcast, probably rather poorly, the singing of opera singer Enrico Caruso. Possibly stimulated by the American Telephone and Telegraph Company’s 1915 transmission of radio telephone signals from the Navy’s Arlington, Virginia, facility, signals heard both across the Atlantic and in Honolulu, deForest resumed experimenting with broadcasting. He installed a transmitter at the Columbia Gramophone building in New York and began daily broadcasts of phonograph music sponsored by Columbia. Because in the late nineteenth century the new electrical industry had made some investors multimillionaires almost overnight, Americans like deForest and his partners found easy pickings for a while, as many people were eager to snap up the stock offered by overly optimistic inventors in this new branch of the electrical industry. The quick failure of firms whose end, rather than their means, was selling stock made life more difficult for ethical firms.

Amateur Radio

In the United States in 1913 there were 322 licensed amateur radio operators, who would ultimately be relegated to the seemingly barren wasteland of the radio spectrum, short wave. By 1917 there were 13,581 amateur radio operators. At that time building a radio receiver was a fad. The typical builder was a boy or young man. Many older people thought that all radio would ever be was a fad, and certainly so long as the public had to build its own radios, put up with poor reception, and listen to dots and dashes and a few experimental broadcasts of music and speech over earphones, relatively few people were going to be interested in having a radio. Laying the groundwork for making radio a mass medium was Edwin H. Armstrong’s invention of the superheterodyne receiver, based on work he did in the U.S. Army during World War I, which made it possible to replace earphones with a loudspeaker.

In 1921, the American Radio Relay League and a British amateur group, assisted by Armstrong, an engineer and college professor, proved that, contrary to the belief of experts, short waves can travel over long distances. Three years later Marconi, who had previously used only long waves, showed that short-wave radio waves, by bouncing off the upper atmosphere, can hopscotch around the world. This discovery led to short wave radio being used for long distance radio broadcasting. (Today telephone companies use microwave relay systems for long-distance, on-shore communication through the air.)

After 1920: Large Corporations Come to Dominate the Industry

In 1919, Frank Conrad, a Westinghouse engineer, began broadcasting music in Pittsburgh. These broadcasts stimulated the sales of crystal sets. A crystal set, which could be made at home, was composed of a tuning coil, a crystal detector, and a pair of earphones. The use of a crystal eliminated the need for a battery or other electric source. The popularity of Conrad’s broadcasts led to Westinghouse establishing a radio station, KDKA, on November 2, 1920. In 1921, KDKA began broadcasting prizefights and major league baseball. While Conrad was creating KDKA, the Detroit News established a radio station. Other newspapers soon followed the Detroit newspaper’s lead.

RCA

The Radio Corporation of America (RCA) was the government-sanctioned radio monopoly formed to replace Marconi’s American company. (Later, a government that had once considered making radio a government monopoly followed a policy of promoting competition in the radio industry.) RCA was owned by a GE-dominated partnership that included Westinghouse, the American Telephone and Telegraph Company (AT&T), Western Electric, United Fruit Company, and others. There were cross-licensing (patent pooling) agreements among GE, AT&T, Westinghouse, and RCA, which owned the assets of Marconi’s company. Patent pooling was the solution to the problem of each company owning some essential patents.

For many years RCA and its head, David Sarnoff, were virtual synonyms. Sarnoff, who began his career in radio as a Marconi office boy, gained fame as a wireless operator and showed the great value of radio when he picked up distress messages from the sinking Titanic. Ultimately, RCA expanded into nearly every area of communications and electronics. Its extensive patent holdings gave it power over most of its competitors because they had to pay it royalties. While still working for Marconi, Sarnoff had the foresight to realize that the real money in radio lay in selling radio receivers. (Because the market was far smaller, radio transmitters generated smaller revenues.)

Financing Radio Broadcasts

Marconi was able to charge people for transmitting messages for them, but how was radio broadcasting to be financed? In Europe the government financed it. In the United States it soon came to be largely financed by advertising. In 1922, few stations sold advertising time. At that time the motive of many who operated radio stations was to advertise other businesses they owned or to gain publicity. About a quarter of the nation’s 500 stations were owned by manufacturers, retailers, and other businesses, such as hotels and newspapers. Another quarter were owned by radio-related firms. Educational institutions, radio clubs, civic groups, churches, government, and the military owned 40 percent of the stations. Radio manufacturers viewed broadcasting simply as a way to sell radios. Over its first three years of selling radios, RCA’s revenues amounted to $83,500,000. By 1930 nine out of ten broadcasting stations were selling advertising time. In 1939, more than a third of the stations lost money. However, by the end of World War II only five percent were in the red. After networks were established, stations’ advertising revenues came from both local and national advertisers. By 1938, 40 percent of the nation’s 660 stations were affiliated with a network, and many were part of a chain (commonly owned).

Radio Networks

On September 25, 1926, RCA formed the National Broadcasting Company (NBC) to take over its network broadcasting business. In early 1927 only seven percent of the nation’s 737 radio stations were affiliated with NBC. In that year a rival network whose name eventually became the Columbia Broadcasting System (CBS) was established. In 1928, CBS was purchased and reorganized by William S. Paley, a cigar company executive whose CBS career spanned more than a half-century. In 1934, the Mutual Broadcasting System was formed. Unlike NBC and CBS, it did not move into television. In 1943, the Federal Communications Commission forced NBC to sell a part of its system to Edward J. Noble, who formed the American Broadcasting Company (ABC). To avoid the high cost of producing radio shows, local radio stations got most of their shows other than news from the networks, which enjoyed economies of scale in producing radio programs because their costs were spread over the many stations using their programming.

The Golden Age of Radio

Radio broadcasting was the cheapest form of entertainment, and it provided the public with far better entertainment than most people were accustomed to. As a result, its popularity grew rapidly in the late 1920s and early 1930s, and by 1934, 60 percent of the nation’s households had radios. One and a half million cars were also equipped with them. The 1930s were the Golden Age of radio. It was so popular that theaters dared not open until after the extremely popular “Amos ‘n Andy” show was over.

In the thirties radio broadcasting was an entirely different genre from what it became after the introduction of television. Those who have only known the music, news, and talk radio of recent decades can have no conception of the big budget days of the thirties when radio was king of the electronic hill. Like reading, radio demanded the use of imagination. Through image-inspiring sound effects, which reached a high degree of sophistication in the thirties, radio replaced vision with visualization. Perfected during the thirties was the only new “art form” radio originated, the “soap opera,” so called because the sponsors of these serialized morality plays aimed at housewives, who were then very numerous, were usually soap companies.

The Growth of Radio

The growth of radio in the 1920s and 30s can be seen in Tables 1, 2, and 3, which give the number of stations, the amount of advertising revenue and sales of radio equipment.

Table 1
Number of Radio Stations in the US, 1921-1940

Year Number
1921 5
1922 30
1923 556
1924 530
1925 571
1926 528
1927 681
1928 677
1929 606
1930 618
1931 612
1932 604
1933 599
1934 583
1935 585
1936 616
1937 646
1938 689
1939 722
1940 765

Source: Sterling and Kittross (1978), p. 510.

Table 2
Radio Advertising Expenditures in Millions of Dollars, 1927-1940

Year Amount in Millions of $
1927 4.8
1928 14.1
1929 26.8
1930 40.5
1931 56.0
1932 61.9
1933 57.0
1934 72.8
1935 112.6
1936 122.3
1937 164.6
1938 167.1
1939 183.8
1940 215.6

Source: Sterling and Kittross (1978).

Table 3
Sales of Radio Equipment in Millions of Dollars

Year Sales in Millions of $
1922 60
1923 136
1924 358
1925 430
1926 506
1927 426
1928 651
1929 843

Source: Douglas (1987), p. 75

Impact of TV and Later Developments

The most popular drama and comedy shows and most of their stars migrated from radio to television in the 1940s and 1950s. (A few stars, like the comedy star Fred Allen, did not successfully make the transition.) Other shows died, as radio became a medium first of music and news and then of call-in talk shows, music, and news. Television sets replaced the furniture-like radios that dominated the nation’s living rooms in the thirties. Point-to-point radio communication became essential for the police and for trucking and other companies with similar needs. New technology made portable radio sets popular. Many decades after the loss of comedy and drama shows to television, the creation of the Internet gave radio stations both a new way to broadcast and a visual component.

Government Regulation

Radio’s Property Rights Problem

Because the radio spectrum is quite different from, say, a piece of real estate, radio produced a property rights problem. Originally, the spectrum was viewed as being like a navigable waterway, that is, public property. However, it wasn’t long before so many people wanted to use it that there wasn’t enough room for everyone. The only ways to deal with an excess of demand over supply are either to raise the price until some potential users leave the market or to turn to rationing. The selling of the radio spectrum does not appear to have been considered. Instead, the spectrum was rationed by the government, which parceled it out to selected parties for free.

The Free-Speech Issue

Navigable waterways present no free speech problem, but radio does. Was radio to be treated like newspapers and magazines, or were broadcasters to be denied free speech? Were radio stations to be treated, like telephone companies, as common carriers, that is, would anyone desiring to make use of them have to be allowed to use them, or would they be treated like newspapers, which are under no obligation to allow all comers access to their pages? It was eventually established that radio stations, like newspapers, would be protected by the First Amendment.

Regulation and Legislation

Government regulation of radio began in 1904 when President Theodore Roosevelt organized the Interdepartmental Board of Wireless Telegraphy. In 1910 the Wireless Ship Act was passed. That radio was to be a regulated industry was decided in 1912, when Congress passed a Radio Act that required people to obtain a license from the government in order to operate a radio transmitter. In 1924, Herbert Hoover, who was secretary of the Commerce Department, said that the radio industry was probably the only industry in the nation that was unanimously in favor of having itself regulated. Presumably, this was due both to the industry’s desire to put a stop to stations interfering with each other’s broadcasts and to limit the number of stations to a small enough number to lock in a profit. The Radio Act of 1927 solved the problem of broadcasting stations using the same frequency and the more powerful ones drowning out less powerful ones. This Act also established that radio waves are public property; therefore, radio stations must be licensed by the government. It was decided, however, not to charge stations for the use of this property.

FM Radio: Technology and Patent Suits

One method of imposing speech and music on a continuous wave requires increasing or reducing (modulating) the amplitude, the distance between a radio wave’s peaks and troughs. This type of transmission is called amplitude modulation (AM). It appears to have first been thought of by John Stone Stone in 1892. Many years after his invention of the superheterodyne, Armstrong solved radio’s last major problem, static, by inventing frequency modulation (FM), which he successfully tested in 1933. A significant characteristic of FM as compared with AM is that FM stations using the same frequency do not interfere with each other. Radios simply pick up whichever FM station is the strongest. This means that low-power FM stations can operate in close proximity. Armstrong was hindered in his development of FM radio by a Federal Communications Commission (FCC) spectrum reallocation that he blamed on RCA.
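
To make the contrast between AM and FM concrete, here is a minimal sketch, with all parameter values assumed for illustration rather than taken from the article, of how a message signal modulates a carrier under each scheme:

```python
# A minimal, illustrative sketch of AM versus FM; parameter values are assumptions.
import numpy as np

fs = 48_000                               # samples per second (assumed)
t = np.arange(0, 0.02, 1 / fs)            # 20 milliseconds of signal
carrier_hz = 10_000                       # carrier frequency (assumed)
message = np.sin(2 * np.pi * 440 * t)     # a 440-cycle "speech or music" tone

# AM: the message varies the carrier's amplitude (the peak-to-trough distance).
am = (1 + 0.5 * message) * np.cos(2 * np.pi * carrier_hz * t)

# FM: the message varies the carrier's instantaneous frequency instead, so
# amplitude disturbances such as static carry no program information.
deviation_hz = 2_000                      # maximum frequency swing (assumed)
phase = 2 * np.pi * (carrier_hz * t + deviation_hz * np.cumsum(message) / fs)
fm = np.cos(phase)
```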

Astute patent dealings were a must in the early radio industry, and, as was true of the rest of the electric industry, patent litigation was very common. One reason for Marconi’s success in America was his skill in such dealings. One of the most acrimonious radio patent suits was the one between Armstrong and RCA. Armstrong expected to receive royalties on every FM radio set sold and, because FM was selected for the audio portion of TV broadcasting, he also expected royalties on every TV set sold. Some television manufacturers paid Armstrong. RCA didn’t. RCA also developed and patented an FM system different from Armstrong’s that he claimed involved no new principle. So, in 1948, he instituted a suit against RCA and NBC, charging them with willfully infringing and inducing others to infringe on his FM patents.

It was to RCA’s advantage to drag the suit out. It had more money than Armstrong did, and it could make more money until the case was settled by selling sets utilizing technology Armstrong said was his. It might be able to do this until his patents ran out. To finance the case and his research facility at Columbia, Armstrong had to sell many of his assets, including stock in Zenith, RCA, and Standard Oil. By 1954, the financial burden imposed on him forced him to try to settle with RCA. RCA’s offer did not even cover Armstrong’s remaining legal fees. Not long after he received this offer he committed suicide.

Bibliography

Aitken, Hugh G. J. The Continuous Wave: Technology and American Radio, 1900-1932. Princeton, N.J.: Princeton University Press, 1985.

Archer, Gleason Leonard. Big Business and Radio. New York: Arno Press, 1971.

Benjamin, Louise Margaret. Freedom of the Air and the Public Interest: First Amendment Rights in Broadcasting to 1935. Carbondale: Southern Illinois University Press, 2001.

Bilby, Kenneth. The General: David Sarnoff and the Rise of the Communications Industry. New York: Harper & Row, 1986.

Bittner, John R. Broadcast Law and Regulation. Englewood Cliffs, N.J.: Prentice-Hall, 1982.

Brown, Robert J. Manipulating the Ether: The Power of Broadcast Radio in Thirties America. Jefferson, N.C.: McFarland & Co., 1998.

Campbell, Robert. The Golden Years of Broadcasting: A Celebration of the First Fifty Years of Radio and TV on NBC. New York: Scribner, 1976.

Douglas, George H. The Early Years of Radio Broadcasting. Jefferson, NC: McFarland, 1987.

Douglas, Susan J. Inventing American Broadcasting, 1899-1922. Baltimore: Johns Hopkins University Press, 1987.

Erickson, Don V. Armstrong’s Fight for FM Broadcasting: One Man vs Big Business and Bureaucracy. University, AL: University of Alabama Press, 1973.

Fornatale, Peter and Joshua E. Mills. Radio in the Television Age. New York: Overlook Press, 1980.

Godfrey, Donald G. and Frederic A. Leigh, editors. Historical Dictionary of American Radio. Westport, CT: Greenwood Press, 1998.

Head, Sydney W. Broadcasting in America: A Survey of Television and Radio. Boston: Houghton Mifflin, 1956.

Hilmes, Michele. Radio Voices: American Broadcasting, 1922-1952. Minneapolis: University of Minnesota Press, 1997.

Jackaway, Gwenyth L. Media at War: Radio’s Challenge to the Newspapers, 1924-1939. Westport, CT: Praeger, 1995.

Jolly, W. P. Marconi. New York: Stein and Day, 1972.

Jome, Hiram Leonard. Economics of the Radio Industry. New York: Arno Press, 1971.

Lewis, Tom. Empire of the Air: The Men Who Made Radio. New York: Edward Burlingame Books, 1991.

Ladd, Jim. Radio Waves: Life and Revolution on the FM Dial. New York: St. Martin’s Press, 1991.

Lichty, Lawrence Wilson and Malachi C. Topping. American Broadcasting: A Source Book on the History of Radio and Television (first edition). New York: Hastings House, 1975.

Lyons, Eugene. David Sarnoff: A Biography (first edition). New York: Harper & Row, 1966.

MacDonald, J. Fred. Don’t Touch That Dial! Radio Programming in American Life, 1920-1960. Chicago: Nelson-Hall, 1979.

Maclaurin, William Rupert. Invention and Innovation in the Radio Industry. New York: Arno Press, 1971.

Nachman, Gerald. Raised on Radio. New York: Pantheon Books, 1998.

Rosen, Philip T. The Modern Stentors: Radio Broadcasters and the Federal Government, 1920-1934. Westport, CT: Greenwood Press, 1980.

Sies, Luther F. Encyclopedia of American Radio, 1920-1960. Jefferson, NC: McFarland, 2000.

Slotten, Hugh Richard. Radio and Television Regulation: Broadcast Technology in the United States, 1920-1960. Baltimore: Johns Hopkins University Press, 2000.

Smulyan, Susan. Selling Radio: The Commercialization of American Broadcasting, 1920-1934. Washington: Smithsonian Institution Press, 1994.

Sobel, Robert. RCA. New York: Stein and Day/Publishers, 1986.

Sterling, Christopher H. and John M. Kittross. Stay Tuned. Belmont, CA: Wadsworth, 1978.

Weaver, Pat. The Best Seat in the House: The Golden Years of Radio and Television. New York: Knopf, 1994.

Citation: Scott, Carole. “History of the Radio Industry in the United States to 1940”. EH.Net Encyclopedia, edited by Robert Whaples. March 26, 2008. URL http://eh.net/encyclopedia/the-history-of-the-radio-industry-in-the-united-states-to-1940/

The Protestant Ethic Thesis

Donald Frey, Wake Forest University

German sociologist Max Weber (1864-1920) developed the Protestant-ethic thesis in two journal articles published in 1904-05. The English translation appeared in book form as The Protestant Ethic and the Spirit of Capitalism in 1930. Weber argued that Reformed (i.e., Calvinist) Protestantism was the seedbed of character traits and values that under-girded modern capitalism. This article summarizes Weber’s formulation, considers criticisms of Weber’s thesis, and reviews evidence of linkages between cultural values and economic growth.

Outline of Weber’s Thesis

Weber emphasized that money making as a calling had been “contrary to the ethical feelings of whole epochs…” (Weber 1930, p.73; further Weber references by page number alone). Lacking moral support in pre-Protestant societies, business had been strictly limited to “the traditional manner of life, the traditional rate of profit, the traditional amount of work…” (67). Yet, this pattern “was suddenly destroyed, and often entirely without any essential change in the form of organization…” Calvinism, Weber argued, changed the spirit of capitalism, transforming it into a rational and unashamed pursuit of profit for its own sake.

In an era when religion dominated all of life, Martin Luther’s (1483-1546) insistence that salvation was by God’s grace through faith had placed all vocations on the same plane. Contrary to medieval belief, religious vocations were no longer considered superior to economic vocations, for only personal faith mattered with God. Nevertheless, Luther did not push this potential revolution further because he clung to a traditional, static view of economic life. John Calvin (1509-1564), or more accurately Calvinism, changed that.

Calvinism accomplished this transformation, not so much by its direct teachings, but (according to Weber) by the interaction of its core theology with human psychology. Calvin had pushed the doctrine of God’s grace to the limits of the definition: grace is a free gift, something that the Giver, by definition, must be free to bestow or withhold. Under this definition, sacraments, good deeds, contrition, virtue, assent to doctrines, etc. could not influence God (104); for, if they could, that would turn grace into God’s side of a transaction instead of its being a pure gift. Such absolute divine freedom, from mortal man’s perspective, however, seemed unfathomable and arbitrary (103). Thus, whether one was among those saved (the elect) became the urgent question for the average Reformed churchman, according to Weber.

Uncertainty about salvation, according to Weber, had the psychological effect of producing a single-minded search for certainty. Although one could never influence God’s decision to extend or withhold election, one might still attempt to ascertain his or her status. A life that “… served to increase the glory of God” presumably flowed naturally from a state of election (114). If one glorified God and conformed to what was known of God’s requirements for this life then that might provide some evidence of election. Thus upright living, which could not earn salvation, returned as evidence of salvation.

The upshot was that the Calvinist’s living was “thoroughly rationalized in this world and dominated by the aim to add to the glory of God in earth…” (118). Such a life became a systematic living out of God’s revealed will. This singleness of purpose left no room for diversion and created what Weber called an ascetic character. “Not leisure and enjoyment, but only activity serves to increase the glory of God, according to the definite manifestations of His will” (157). Only in a calling does this focus find full expression. “A man without a calling thus lacks the systematic, methodical character which is… demanded by worldly asceticism” (161). A calling represented God’s will for that person in the economy and society.

Such emphasis on a calling was but a small step from a full-fledged capitalistic spirit. In practice, according to Weber, that small step was taken, for “the most important criterion [of a calling] is … profitableness. For if God … shows one of His elect a chance of profit, he must do it with a purpose…” (162). This “providential interpretation of profit-making justified the activities of the business man,” and led to “the highest ethical appreciation of the sober, middle-class, self-made man” (163).

A sense of calling and an ascetic ethic applied to laborers as well as to entrepreneurs and businessmen. Nascent capitalism required reliable, honest, and punctual labor (23-24), which in traditional societies had not existed (59-62). That free labor would voluntarily submit to the systematic discipline of work under capitalism required an internalized value system unlike any seen before (63). Calvinism provided this value system (178-79).

Weber’s “ascetic Protestantism” was an all-encompassing value system that shaped one’s whole life, not merely ethics on the job. Life was to be controlled the better to serve God. Impulse and those activities that encouraged impulse, such as sport or dance, were to be shunned. External finery and ornaments turned attention away from inner character and purpose; so the simpler life was better. Excess consumption and idleness were resources wasted that could otherwise glorify God. In short, the Protestant ethic ordered life according to its own logic, but also according to the needs of modern capitalism as understood by Weber.

An adequate summary requires several additional points. First, Weber virtually ignored the issue of usury or interest. This contrasts with some writers who take a church’s doctrine on usury to be the major indicator of its sympathy to capitalism. Second, Weber magnified the extent of his Protestant ethic by claiming to find Calvinist economic traits in later, otherwise non-Calvinist Protestant movements. He recalled the Methodist John Wesley’s (1703-1791) “Earn all you can, save all you can, give all you can,” and ascetic practices by followers of the eighteenth-century Moravian leader Nicholas Von Zinzendorf (1700-1760). Third, Weber thought that once established the spirit of modern capitalism could perpetuate its values without religion, citing Benjamin Franklin whose ethic already rested on utilitarian foundations. Fourth, Weber’s book showed little sympathy for either Calvinism, which he thought encouraged a “spiritual aristocracy of the predestined saints” (121), or capitalism, which he thought irrational for valuing profit for its own sake. Finally, although Weber’s thesis could be viewed as a rejoinder to Karl Marx (1818-1883), Weber claimed it was not his goal to replace Marx’s one-sided materialism with “an equally one-sided spiritualistic causal interpretation…” of capitalism (183).

Critiques of Weber

Critiques of Weber can be put into three categories. First, Weber might have been wrong about the facts: modern capitalism might have arisen before Reformed Protestantism or in places where the Reformed influence was much smaller than Weber believed. Second, Weber might have misinterpreted Calvinism or, more narrowly, Puritanism; if Reformed teachings were not what Weber supposed, then logically they might not have supported capitalism. Third, Weber might have overstated capitalism’s need for the ascetic practices produced by Reformed teachings.

On the first count, Weber has been criticized by many. During the early twentieth century, historians studied the timing of the emergence of capitalism and Calvinism in Europe. E. Fischoff (1944, 113) reviewed the literature and concluded that the “timing will show that Calvinism emerged later than capitalism where the latter became decisively powerful,” suggesting no cause-and-effect relationship. Roland Bainton also suggests that the Reformed contributed to the development of capitalism only as a “matter of circumstance” (Bainton 1952, 254). The Netherlands “had long been the mart of Christendom, before ever the Calvinists entered the land.” Finally, Kurt Samuelsson (1957) concedes that “the Protestant countries, and especially those adhering to the Reformed church, were particularly vigorous economically” (Samuelsson, 102). However, he finds much reason to discredit a cause-and-effect relationship. Sometimes capitalism preceded Calvinism (Netherlands), and sometimes lagged by too long a period to suggest causality (Switzerland). Sometimes Catholic countries (Belgium) developed about the same time as the Protestant countries. Even in America, capitalist New England was cancelled out by the South, which Samuelsson claims also shared a Puritan outlook.

Weber himself, perhaps seeking to circumvent such evidence, created a distinction between traditional capitalism and modern capitalism. The view that traditional capitalism could have existed first, but that Calvinism in some meaningful sense created modern capitalism, depends on too fine a distinction according to critics such as Samuelsson. Nevertheless, because of the impossibility of controlled experiments to firmly resolve the question, the issue will never be completely closed.

The second type of critique is that Weber misinterpreted Calvinism or Puritanism. British scholar R. H. Tawney in Religion and the Rise of Capitalism (1926) noted that Weber treated multi-faceted Reformed Christianity as though it were equivalent to late-era English Puritanism, the period from which Weber’s most telling quotes were drawn. Tawney observed that the “iron collectivism” of Calvin’s Geneva had evolved before Calvinism became harmonious with capitalism. “[Calvinism] had begun by being the very soul of authoritarian regimentation. It ended by being the vehicle of an almost Utilitarian individualism” (Tawney 1962, 226-7). Nevertheless, Tawney affirmed Weber’s point that Puritanism “braced [capitalism’s] energies and fortified its already vigorous temper.”

Roland Bainton in his own history of the Reformation disputed Weber’s psychological claims. Despite the psychological uncertainty Weber imputed to Puritans, their activism could be “not psychological and self-centered but theological and God-centered” (Bainton 1952, 252-53). That is, God ordered all of life and society, and Puritans felt obliged to act on His will. And if some Puritans scrutinized themselves for evidence of election, “the test was emphatically not economic activity as such but upright character…” He concludes that Calvinists had no particular affinity for capitalism but that they brought “vitality and drive into every area … whether they were subduing a continent, overthrowing a monarchy, or managing a business, or reforming the evils of the very order which they helped to create” (255).

Samuelsson, in a long section (27-48), argued that Puritan leaders did not truly endorse capitalistic behavior. Rather, they were ambivalent. Given that Puritan congregations were composed of businessmen and their families (who allied with Puritan churches because both wished for less royal control of society), the preachers could hardly condemn capitalism. Instead, they clarified “the moral conditions under which a prosperous, even wealthy, businessman may, despite success and wealth, become a good Christian” (38). But this, Samuelsson makes clear, was hardly a ringing endorsement of capitalism.

Criticisms that what Weber described as Puritanism was not true Puritanism, much less Calvinism, may be correct but beside the point. Puritan leaders indeed condemned exclusive devotion to one’s business because it excluded God and the common good. Thus, the Protestant ethic as described by Weber apparently would have been a deviation from pure doctrine. However, the pastors’ very attacks suggest that such a (mistaken) spirit did exist within their flocks. But such mistaken doctrine, if widespread enough, could still have contributed to the formation of the capitalist spirit.

Furthermore, any misinterpretation of Puritan orthodoxy was not entirely the fault of Puritan laypersons. Puritan theologians and preachers could place heavier emphasis on economic success and virtuous labor than critics such as Samuelsson would admit. The American preacher John Cotton (1582-1652) made clear that God “would have his best gifts improved to the best advantage.” The respected theologian William Ames (1576-1633) spoke of “taking and using rightly opportunity.” And, speaking of the idle, Cotton Mather said, “find employment for them, set them to work, and keep them at work…” A lesser standard would hardly apply to his hearers. Although these exhortations were usually balanced with admonitions to use wealth for the common good, and not to be motivated by greed, they are nevertheless clear endorsements of vigorous economic behavior. Puritan leaders may have placed boundaries around economic activism, but they still preached activism.

Frey (1998) has argued that orthodox Puritanism exhibited an inherent tension between approval of economic activity and emphasis upon the moral boundaries that define acceptable economic activity. A calling was never meant for the service of self alone but for the service of God and the common good. That is, Puritan thinkers always viewed economic activity against the backdrop of social and moral obligation. Perhaps what orthodox Puritanism contributed to capitalism was a sense of economic calling bounded by moral responsibility. In an age when Puritan theologians were widely read, William Ames defined the essence of the business contract as “upright dealing, by which one does sincerely intend to oblige himself…” If nothing else, business would be enhanced and made more efficient by an environment of honesty and trust.

Finally, whether Weber misinterpreted Puritanism is one issue. Whether he misinterpreted capitalism by exaggerating the importance of asceticism is another. Weber’s favorite exemplar of capitalism, Benjamin Franklin, did advocate unremitting personal thrift and discipline. No doubt, certain sectors of capitalism advanced by personal thrift, sometimes carried to the point of deprivation. Samuelsson (83-87) raises serious questions, however, that thrift could have contributed even in a minor way to the creation of the large fortunes of capitalists. Perhaps more important than personal fortunes is the finance of business. The retained earnings of successful enterprises, rather than personal savings, probably have provided a major source of funding for business ventures from the earliest days of capitalism. And successful capitalists, even in Puritan New England, have been willing to enjoy at least some of the fruits of their labors. Perhaps the spirit of capitalism was not the spirit of asceticism.

Evidence of Links between Values and Capitalism

Despite the critics, some have taken the Protestant ethic to be a contributing cause of capitalism, perhaps a necessary cause. Sociologist C. T. Jonassen (1947) understood the Protestant ethic this way. By examining a case of capitalism’s emergence in the nineteenth century, rather than in the Reformation or Puritan eras, he sought to resolve some of the uncertainties of studying earlier eras. Jonassen argued that capitalism emerged in nineteenth-century Norway only after an indigenous, Calvinist-like movement challenged the Lutheranism and Catholicism that had dominated the country. Capitalism had not “developed in Norway under centuries of Catholic and Lutheran influence,” although it appeared only “two generations after the introduction of a type of religion that produced the same behavior as Calvinism” (Jonassen, 684). Jonassen’s argument also discounted other often-cited causes of capitalism, such as the early discoveries of science, the Renaissance, or developments in post-Reformation Catholicism; these factors had existed for centuries by the nineteenth century and still had left Norway as a non-capitalist society. Only in the nineteenth century, after a Calvinist-like faith emerged, did capitalism develop.

Engerman’s (2000) review of economic historians shows that they have given little explicit attention to Weber in recent years. However, they show an interest in the impact of cultural values broadly understood on economic growth. A modified version of the Weber thesis has also found some support in empirical economic research. Granato, Inglehart and Leblang (1996, 610) incorporated cultural values in cross-country growth models on the grounds that Weber’s thesis fits the historical evidence in Europe and America. They did not focus on Protestant values, but accepted “Weber’s more general concept, that certain cultural factors influence economic growth…” Specifically they incorporated a measure of “achievement motivation” in their regressions and concluded that such motivation “is highly relevant to economic growth rates” (625). Conversely, they found that “post-materialist” (i.e., environmentalist) values are correlated with slower economic growth. Barro’s (1997, 27) modified Solow growth models also find that a “rule of law index” is associated with more rapid economic growth. This index is a proxy for such things as “effectiveness of law enforcement, sanctity of contracts and … the security of property rights.” Recalling Puritan theologian William Ames’ definition of a contract, one might conclude that a religion such as Puritanism could create precisely the cultural values that Barro finds associated with economic growth.
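A stylized version of this kind of cross-country specification can make the approach concrete. The sketch below regresses a growth rate on initial income and a generic cultural index using invented data; the variable names and figures are purely illustrative and are not those used by Granato, Inglehart and Leblang or by Barro.

```python
import numpy as np

# Stylized cross-country growth regression in the spirit of the studies above:
# growth = a + b * log(initial income) + c * cultural index + error.
# All data are invented; actual studies use survey-based measures such as
# achievement motivation or a rule-of-law index.

rng = np.random.default_rng(0)
n_countries = 40
log_initial_income = rng.uniform(6.0, 10.0, n_countries)
culture_index = rng.uniform(0.0, 1.0, n_countries)      # hypothetical 0-1 score
growth = (8.0 - 0.6 * log_initial_income + 2.0 * culture_index
          + rng.normal(0.0, 0.5, n_countries))

# ordinary least squares with an intercept column
X = np.column_stack([np.ones(n_countries), log_initial_income, culture_index])
coef, *_ = np.linalg.lstsq(X, growth, rcond=None)
print(f"Estimated coefficient on the cultural index: {coef[2]:.2f}")
```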

Conclusion

Max Weber’s thesis has attracted the attention of scholars and researchers for most of a century. Some (including Weber) deny that the Protestant ethic should be understood as a cause of capitalism, holding that it merely points to a congruence between a culture’s religion and its economic system. Yet Weber, despite his own protests, wrote as though he believed that traditional capitalism would never have turned into modern capitalism except for the Protestant ethic, implying causality of sorts. Historical evidence from the Reformation era (sixteenth century) does not provide much support for a strong (causal) interpretation of the Protestant ethic. However, the emergence of a vigorous capitalism in Puritan England and its American colonies (and the case of Norway) at least keeps the case open. More recent quantitative evidence supports the hypothesis that cultural values count in economic development. The cultural values examined in recent studies are not religious values, as such. Rather, such presumably secular values as the need to achieve, intolerance for corruption, and respect for property rights are all correlated with economic growth. However, in its own time Puritanism produced a social and economic ethic known for precisely these sorts of values.

References

Bainton, Roland. The Reformation of the Sixteenth Century. Boston: Beacon Press, 1952.

Barro, Robert. Determinants of Economic Growth: A Cross-country Empirical Study. Cambridge, MA: MIT Press, 1997.

Engerman, Stanley. “Capitalism, Protestantism, and Economic Development.” EH.NET, 2000. http://www.eh.net/bookreviews/library/engerman.shtml

Fischoff, Ephraim. “The Protestant Ethic and the Spirit of Capitalism: The History of a Controversy.” Social Research (1944). Reprinted in R. W. Green (ed.), Protestantism and Capitalism: The Weber Thesis and Its Critics. Boston: D.C. Heath, 1958.

Frey, Donald E. “Individualist Economic Values and Self-Interest: The Problem in the Protestant Ethic.” Journal of Business Ethics (Oct. 1998).

Granato, Jim, R. Inglehart and D. Leblang. “The Effect of Cultural Values on Economic Development: Theory, Hypotheses and Some Empirical Tests.” American Journal of Political Science (Aug. 1996).

Green, Robert W. (ed.), Protestantism and Capitalism: The Weber Thesis and Its Critics. Boston: D.C. Heath, 1959.

Jonassen, Christen. “The Protestant Ethic and the Spirit of Capitalism in Norway.” American Sociological Review (Dec. 1947).

Samuelsson, Kurt. Religion and Economic Action. Toronto: University of Toronto Press, 1993 [orig. 1957].

Tawney, R. H. Religion and the Rise of Capitalism. Gloucester, MA: Peter Smith, 1962 [orig., 1926].

Weber, Max. The Protestant Ethic and the Spirit of Capitalism. New York: Charles Scribner’s Sons, 1958 [orig. 1930].

Citation: Frey, Donald. “Protestant Ethic Thesis”. EH.Net Encyclopedia, edited by Robert Whaples. August 14, 2001. URL http://eh.net/encyclopedia/the-protestant-ethic-thesis/

History of Property Taxes in the United States

Glenn W. Fisher, Wichita State University (Emeritus)

Taxes based on ownership of property were used in ancient times, but the modern tax has roots in feudal obligations owed to British and European kings or landlords. In the fourteenth and fifteenth centuries, British tax assessors used ownership or occupancy of property to estimate a taxpayer’s ability to pay. In time the tax came to be regarded as a tax on the property itself (in rem). In the United Kingdom the tax developed into a system of “rates” based on the annual (rental) value of property.

The growth of the property tax in America was closely related to economic and political conditions on the frontier. In pre-commercial agricultural areas the property tax was a feasible source of local government revenue and equal taxation of wealth was consistent with the prevailing equalitarian ideology.

Taxation in the American Colonies

When the Revolutionary War began, the colonies had well-developed tax systems that made a war against the world’s leading military power thinkable. The tax structure varied from colony to colony, but five kinds of taxes were widely used. Capitation (poll) taxes were levied at a fixed rate on all adult males and sometimes on slaves. Property taxes were usually specific taxes levied at fixed rates on enumerated items, but sometimes items were taxed according to value. Faculty taxes were levied on the faculty or earning capacity of persons following certain trades or having certain skills. Tariffs (imposts) were levied on goods imported or exported and excises were levied on consumption goods, especially liquor.

During the war colonial tax rates increased several fold and taxation became a matter of heated debate and some violence. Settlers far from markets complained that taxing land on a per-acre basis was unfair and demanded that property taxation be based on value. In the southern colonies light land taxes and heavy poll taxes favored wealthy landowners. In some cases, changes in the tax system caused the wealthy to complain. In New York wealthy leaders saw the excess profits tax, which had been levied on war profits, as a dangerous example of “leveling tendencies.” Owners of intangible property in New Jersey saw the tax on intangible property in a similar light.

By the end of the war, it was obvious that the concept of equality so eloquently stated in the Declaration of Independence had far-reaching implications. Wealthy leaders and ordinary men pondered the meaning of equality and asked what it implied for taxation. The leaders often saw little connection among independence, political equality, and the tax system, but many ordinary men saw an opportunity to demand changes.

Constitutionalizing Uniformity in the Nineteenth Century

In 1796 seven of the fifteen states levied uniform capitation taxes. Twelve taxed some or all livestock. Land was taxed in a variety of ways, but only four states taxed the mass of property by valuation. No state constitution required that taxation be by value or required that rates on all kinds of property be uniform. In 1818, Illinois adopted the first uniformity clause. Missouri followed in 1820, and in 1834 Tennessee replaced a provision requiring that land be taxed at a uniform amount per acre with a provision that land be taxed according to its value (ad valorem). By the end of the century thirty-three states had included uniformity clauses in new constitutions or had amended old ones to include the requirement that all property be taxed equally by value. A number of other states enacted uniformity statutes requiring that all property be taxed. Table 1 summarizes this history.

Table 1

Nineteenth-Century Uniformity Provisions
(first appearance in state constitutions)

State              Year   Universality Provision
_________________________________________________
Illinois           1818   Yes
Missouri           1820   No
*Tennessee1        1834   Yes2
Arkansas           1836   No
Florida            1838   No
*Louisiana         1845   No
Texas              1845   Yes
Wisconsin          1848   No
California         1849   Yes
*Michigan3         1850   No
*Virginia          1850   Yes4
Indiana            1851   Yes
*Ohio              1851   Yes
Minnesota          1857   Yes
Kansas             1859   No
Oregon             1859   Yes
West Virginia      1863   Yes
Nevada             1864   Yes5
*South Carolina    1865   Yes
*Georgia           1868   No
*North Carolina    1868   Yes
*Mississippi       1869   Yes
*Maine             1875   No
*Nebraska          1875   No
*New Jersey        1875   No
Montana            1889   Yes
North Dakota       1889   Yes
South Dakota       1889   Yes
Washington         1889   Yes
Idaho6             1890   Yes
Wyoming            1890   No
*Kentucky          1891   Yes
Utah               1896   Yes
_________________________________________________

*Indicates amendment or revised constitution.

1. The Tennessee constitution of 1796 included a unique provision requiring taxation of land to be uniform per 100 acres.
2. One thousand dollars of personal property and the products of the soil in the hands of the original producer were exempt in Tennessee.
3. The Michigan provision required that the legislature provide a uniform rule of taxation except for property paying specific taxes.
4. Except for taxes on slaves.
5. Nevada exempted mining claims.
6. One provision in Idaho requires uniformity as to class, another seems to prescribe uniform taxation.
Source: Fisher (1996), p. 57.

The political appeal of uniformity was strong, especially in the new states west of the Appalachians. A uniform tax on all wealth, administered by locally elected officials, appealed to frontier settlers, many of whom strongly supported the Jacksonian ideas of equality and distrusted both centralized government and professional administrators.

The general property tax applied to all wealth — real and personal, tangible and intangible. It was administered by elected local officials who were to determine the market value of the property, compute the tax rates necessary to raise the amount levied, compute taxes on each property, collect the tax, and remit the proceeds to the proper government. Because the tax was uniform and levied on all wealth, each taxpayer would pay for the government services he or she enjoyed in exact proportion to his or her wealth.
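The arithmetic implied by this procedure was straightforward. The following sketch, using invented parcel values and an invented levy, shows how a uniform rate would be derived from the amount to be raised and then applied to each parcel; all names and figures are hypothetical.

```python
# A minimal sketch of general property tax arithmetic (all figures invented):
# divide the amount to be raised (the levy) by the total assessed value of all
# property to obtain a uniform rate, then apply that rate to each parcel.

parcels = {                     # owner -> assessed value in dollars (hypothetical)
    "Smith farm": 2_000,
    "Jones store": 5_000,
    "Brown house": 1_500,
}
levy = 170.0                    # amount the local government needs to raise

total_assessed = sum(parcels.values())
rate = levy / total_assessed    # uniform rate on all assessed wealth

for owner, value in parcels.items():
    print(f"{owner}: assessed ${value:,}, tax ${rate * value:.2f}")

print(f"Uniform rate: {rate * 1000:.1f} mills (dollars per $1,000 of value)")
```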

The tax and the administrative system were well adapted as a revenue source for the system of local government that grew up in the United States. Typically, the state divided itself into counties, which were given many responsibilities for administering state laws. Citizens were free to organize municipalities, school districts, and many kinds of special districts to perform additional functions. The result, especially in the states formed after the Revolution, was a large number of overlapping governments. Many were in rural areas with no business establishment. Sales or excise taxes would yield no revenue and income taxes were not feasible.

The property tax, especially the real estate tax, was ideally suited to such a situation. Real estate had a fixed location, it was visible, and its value was generally well known. Revenue could easily be allocated to the governmental unit in which the property was located.

Failure of the General Property Tax

By the beginning of the twentieth century, criticism of the uniform, universal (general) property tax was widespread. A leading student of taxation called the tax, as administered, one of the worst taxes ever used by a civilized nation (Seligman, 1905).

There are several reasons for the failure of the general property tax. Advocates of uniformity failed to deal with the problems resulting from differences between property as a legal term and wealth as an economic concept. In a simple rural economy wealth consists largely of real property and tangible personal property — land, buildings, machinery and livestock. In such an economy, wealth and property are the same things and the ownership of property is closely correlated with income or ability to pay taxes.

In a modern commercial economy, ownership and control of wealth are conferred by the ownership of rights that may be evidenced by a variety of financial and legal instruments such as stocks, bonds, notes, and mortgages. These rights may confer far less than fee simple (absolute) ownership and may be owned by millions of individuals residing all over the world. Local property tax administrators lack the legal authority, skills, and resources needed to assess and collect taxes on such complex systems of property ownership.

Another problem arose from the inability or unwillingness of elected local assessors to value their neighbors’ property at full value. An assessor who valued property well below its market value and changed values infrequently was much more popular and more apt to be reelected. Finally, the increasing number of wage-earners and professional people who had substantial incomes but little property made property ownership a less suitable measure of ability to pay taxes.

Reformers, led by the National Tax Association (founded in 1907), proposed that state income taxes be enacted and that intangible property and some kinds of tangible personal property be eliminated from the property tax base. They proposed that real property be assessed by professionally trained assessors. Some advocated the classified property tax, in which different rates of assessment or taxation were applied to different classes of real property.

Despite its faults, however, the tax continued to provide revenue for one of the most elaborate systems of local government in the world. Local governments included counties, municipalities of several classes, towns or townships, and school districts. Special districts were organized to provide water, irrigation, drainage, roads, parks, libraries, fire protection, health services, gopher control, and scores of other services. In some states, especially in the Midwest and Great Plains, it was not uncommon to find that property was taxed by seven or eight different governments.

Overlapping governments caused little problem for real estate taxation. Each parcel of property was coded by taxing districts and the applicable taxes applied.

Reforming the Property Tax in the Twentieth Century

Efforts to reform the property tax varied from state to state, but usually included centralized assessment of railroad and utility property and exemption or classification of some forms of property. Typically intangibles such as mortgages were taxed at lower rates, but in several states tangible personal property and real estate were also classified. In 1910 Montana divided property into six classes. Assessment rates ranged from 100 percent of the net proceeds of mines to seven percent for money and credits. Minnesota’s 1913 law divided tangible property into four classes, each assessed at a different rate. Some states replaced the town or township assessors with county assessors, and many created state agencies to supervise and train local assessors. The National Association of Assessing Officers (later International Association of Assessing Officers) was organized in 1934 to develop better assessment methods and to train and certify assessors.

The depression years after 1929 resulted in widespread property tax delinquency, and in several states taxpayers forcibly resisted the sale of tax-delinquent property. State governments placed additional limits on property tax rates and several states exempted owner-occupied residences from taxation. These homestead exemptions were later criticized because they provided large amounts of relief to wealthy homeowners and disproportionately reduced the revenue of local governments whose property tax base was made up largely of residential property.

After World War II many states replaced the homestead exemption with state financed “circuit breakers” which benefited lower and middle income homeowners, older homeowners, and disabled persons. In many states renters were included by provisions that classified a portion of rental payments as property taxes. By 1991 thirty-five states had some form of circuit breakers (Advisory Commission on Intergovernmental Relations, 1992, 126-31).

Proponents of the general property tax believed that uniform and universal taxation of property would tend to limit taxes. Everybody would have to pay their share and the political game of taxing somebody else for one’s favorite program would be impossible. Perhaps there was some truth in this argument, but state legislatures soon began to impose additional limitations. Typically, the statutes authorizing local government to impose taxes for a particular purpose such as education, road building, or water systems, specified the rate, usually stated in mills, dollars per hundred or dollars per thousand of assessed value, that could be imposed for that purpose.
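These ways of stating a rate are interchangeable. The short sketch below, again with invented figures, shows the equivalence among mills, dollars per hundred, and dollars per thousand of assessed value.

```python
# Equivalent statements of the same property tax rate (invented figures).
# One mill is one dollar of tax per $1,000 of assessed value,
# i.e. ten cents per $100 of assessed value.

assessed_value = 8_000          # hypothetical assessed value, dollars
rate_in_mills = 25              # hypothetical statutory rate limit

tax = assessed_value * rate_in_mills / 1_000
dollars_per_hundred = rate_in_mills / 10
dollars_per_thousand = rate_in_mills

print(f"Tax at {rate_in_mills} mills on ${assessed_value:,}: ${tax:.2f}")
print(f"Same rate stated as ${dollars_per_hundred:.2f} per $100 "
      f"or ${dollars_per_thousand:.2f} per $1,000 of assessed value")
```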

These limitations provided no overall limit on the taxes imposed on a particular property, so state legislatures and state constitutions began to impose limits restricting the total rate or amount that could be imposed by a unit of local government. Often these were complicated to administer and had many unintended consequences. For example, limiting the tax that could be imposed by a particular kind of government sometimes led to the creation of additional special districts.

During World War II, state and local taxes were stable or decreased as spending programs were cut back because of decreased needs or unavailability of building materials or other resources. This was reversed in the post-war years as governments expanded programs and took advantage of rising property values to increase tax collections. Assessments rose, tax rates rose, and the newspapers carried stories of homeowners forced to sell their homes because of rising taxes.

California’s Tax Revolt

Within a few years the country was swept by a wave of tax protests, often called the Tax Revolt. Almost every state imposed some kind of limitation on the property tax, but the most widely publicized was Proposition 13, a constitutional amendment passed by popular vote in California in 1978. This proved to be the most successful attack on the property tax in American history. The amendment:

1. limited property taxes to one percent of full cash value.

2. required property to be valued at its value on March 1, 1975 or on the date it changes hands or is constructed after that date.

3. limited subsequent adjustments in assessed value to 2 percent per year or the rate of inflation, whichever is lesser.

4. prohibited the imposition of sales or transaction taxes on the sale of real estate.

5. required a two-thirds vote in each house of the legislature to increase state taxes and a two-thirds vote of the electorate to increase or add new local taxes.

This amendment proved to be extremely difficult to administer. It resulted in hundreds of court cases, scores of new statutes, many attorney general opinions, and several additional amendments to the California constitution. One of the amendments permits property to be passed to heirs without triggering a new assessment.

In effect Proposition 13 replaced the property tax with a hybrid tax based on a property’s value in 1975 or the date it was last transferred to a non-family member. These values have been modified by annual adjustments that have been much less than the increase in the market value of the property. Thus it has favored the business or family that remains in the same building or residence for a long period of time.
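The effect of the capped adjustments can be illustrated with a small sketch. The purchase price, inflation rates, and market growth rate below are invented for illustration; they are not drawn from California data.

```python
# Hypothetical illustration of the Proposition 13 assessment rule: a base-year
# value set at purchase, annual increases capped at 2 percent or the inflation
# rate (whichever is lower), and tax of one percent of the resulting value.

purchase_price = 100_000
annual_inflation = [0.05, 0.07, 0.09, 0.04]   # invented inflation figures
market_growth = 0.10                           # invented market appreciation

assessed = purchase_price
market = purchase_price
for inflation in annual_inflation:
    assessed *= 1 + min(0.02, inflation)       # capped annual adjustment
    market *= 1 + market_growth                # uncapped market value

tax = 0.01 * assessed                          # one percent of assessed value
print(f"Assessed value: ${assessed:,.0f}  Market value: ${market:,.0f}")
print(f"Annual tax: ${tax:,.0f}")
```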

Local government in California seems to have been weakened and there has been a great increase in fees, user charges, and business taxes. A variety of devices, including the formation of fee-financed special districts, have been utilized to provide services.

Although Proposition 13 was the most far-reaching and widely publicized attempt to limit property taxes, it is only one of many provisions that have attempted to limit the property tax. Some are general limitations on rates or amounts that may be levied. Others provide tax benefits to particular groups or are intended to promote economic development. Several other states adopted overall limitations or tax freezes modeled on Proposition 13 and in addition have adopted a large number of provisions to provide relief to particular classes of individuals or to serve as economic incentives. These include provisions favoring agricultural land, exemption or reduced taxation of owner-occupied homes, and provisions benefiting the poor, veterans, disabled individuals, and the aged. Economic incentives incorporated in property tax laws include exemptions or lower rates on particular businesses or certain types of business, exemption of the property of newly established businesses, tax breaks in development zones, and earmarking of taxes for expenditures that benefit a particular business (enterprise zones).

The Property Tax Today

In many states assessment techniques have improved greatly. Computer-assisted mass appraisal (CAMA) combines computer technology, statistical methods and value theory to make possible reasonably accurate property assessments. Increases in state school aid, stemming in part from court decisions requiring equal school quality, have increased the pressure for statewide uniformity in assessment. Some states now use elaborate statistical procedures to measure the quality and equality of assessment from place to place in the state. Today, departures from uniformity come less from poor assessment than from provisions in the property tax statutes.
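A minimal sketch of the statistical core of such a system might fit an ordinary least squares model of sale price on recorded property characteristics using recent sales and then apply it to unsold parcels. The characteristics, sale prices, and parcel below are invented; actual CAMA systems use far larger sales files and more elaborate models.

```python
import numpy as np

# A minimal sketch of CAMA-style mass appraisal: regress recent sale prices on
# property characteristics, then predict values for unsold parcels.
# Attributes and figures are invented for illustration.

# columns: living area (sq ft), lot size (acres), age (years)
sold_characteristics = np.array([
    [1200, 0.25, 40],
    [1800, 0.30, 15],
    [2400, 0.50,  5],
    [1500, 0.20, 25],
    [2000, 0.40, 10],
])
sale_prices = np.array([150_000, 230_000, 320_000, 180_000, 260_000])

# add an intercept column and fit by ordinary least squares
X = np.column_stack([np.ones(len(sold_characteristics)), sold_characteristics])
coef, *_ = np.linalg.lstsq(X, sale_prices, rcond=None)

# appraise an unsold parcel with the fitted model
unsold_parcel = np.array([1, 1600, 0.35, 20])   # intercept plus characteristics
estimate = unsold_parcel @ coef
print(f"Estimated market value of unsold parcel: ${estimate:,.0f}")
```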

The tax on a particular property may depend on who owns it, what it is used for, and when it last sold. To compute the tax the administrator may have to know the income, age, medical condition, and previous military service of the owner. Anomalies abound as taxpayers figure out ways to make the complicated system work in their favor. A few bales of hay harvested from a development site may qualify it as agricultural land, and enterprise zones, which are intended to provide incentives for development in poverty-stricken areas, may contain industrial plants but no people, poverty-stricken or otherwise.

The many special provisions fuel the demand for other special provisions. As the base narrows, the tax rate rises and taxpayers become aware of the special benefits enjoyed by their neighbors or competitors. This may lead to demands for overall tax limitations or to the quest for additional exemptions and special provisions.

The Property Tax as a Revenue Source during the Twentieth Century

At the time of the 1902 Census of Governments the property tax provided forty-five percent of the general revenue received by state governments from their own sources (excluding grants from other governments). That percentage declined steadily, taking its most precipitous drop between 1922 and 1942 as states adopted sales and income taxes. Today property taxes are an insignificant source of state tax revenue. (See Table 2.)

The picture at the local level is very different. The property tax as a percentage of own-source general revenue rose from 1902 until 1932 when it provided 85.2 percent of local government own-source general revenue. Since that time there has been a significant gradual decline in the importance of local property taxes.

The decline in the revenue importance of the property tax is more dramatic when the increase in federal and state aid is considered. In fiscal year 1999, local governments received $228 billion in property tax revenue and $328 billion in aid from state and federal governments. If current trends continue, the property tax will decline in importance and states and the federal government will take over more local functions, or expand the system of grants to local governments. Either way, government will become more centralized.

Table 2

Property Taxes as a Percentage of Own-Source General Revenue, Selected Years

______________________________
Year State Local
______________________________
1902 45.3 78.2
1913 38.9 77.4
1922 30.9 83.9
1932 15.2 85.2
1942 6.2 80.8
1952 3.4 71.0
1962 2.7 69.0
1972 1.8 63.5
1982 1.5 48.0
1992 1.7 48.1
1999 1.8 44.6
_______________________________

Source: U. S. Census of Governments, Historical Statistics of State and Local Finance, 1902-1953; U. S. Census of Governments, Government Finances (various years); and http://www.census.gov.

References

Adams, Henry Carter. Taxation in the United States, 1789-1816. New York: Burt Franklin, 1970, originally published in 1884.

Advisory Commission on Intergovernmental Relations. Significant Features of Fiscal Federalism, Volume 1, 1992.

Becker, Robert A. Revolution, Reform and the Politics of American Taxation. Baton Rouge: Louisiana State University Press, 1980.

Ely, Richard T. Taxation in the American States and Cities. New York: T. Y. Crowell & Co, 1888.

Fisher, Glenn W. The Worst Tax? A History of the Property Tax in America. Lawrence: University Press of Kansas, 1996.

Fisher, Glenn W. “The General Property Tax in the Nineteenth Century: The Search for Equality.” Property Tax Journal 6, no. 2 (1987): 99-117.

Jensen, Jens Peter. Property Taxation in the United States. Chicago: University of Chicago Press, 1931.

Seligman, E. R. A. Essays in Taxation. New York: Macmillan Company, 1905, originally published in 1895.

Stocker, Frederick, editor. Proposition 13: A Ten-Year Retrospective. Cambridge, Massachusetts: Lincoln Institute of Land Policy, 1991.

Citation: Fisher, Glenn. “History of Property Taxes in the United States”. EH.Net Encyclopedia, edited by Robert Whaples. September 30, 2002. URL http://eh.net/encyclopedia/history-of-property-taxes-in-the-united-states/

Economic History of Portugal

Luciano Amaral, Universidade Nova de Lisboa

Main Geographical Features

Portugal is the south-westernmost country of Europe. With the approximate shape of a vertical rectangle, it has a maximum north-south length of 561 km and a maximum east-west width of 218 km, and is delimited (in its north-south range) by the parallels 37° and 42° N, and (in its east-west range) by the meridians 6° and 9.5° W. To the west, it faces the Atlantic Ocean, which separates it from the American continent by a few thousand kilometers. To the south, it still faces the Atlantic, but the distance to Africa is only a few hundred kilometers. To the north and the east, it shares land frontiers with Spain, and both countries constitute the Iberian Peninsula, a landmass separated from France and, then, from the rest of the continent by the Pyrenees. Two Atlantic archipelagos are also part of Portugal: the Azores, nine islands in roughly the same latitudinal range as mainland Portugal but much further west, with a longitude between 25° and 31° W, and Madeira, two islands to the southwest of the mainland, between 16° and 17° W and 32.5° and 33° N.

Climate in mainland Portugal is of the temperate sort. Due to its southern position and proximity to the Mediterranean Sea, the country’s weather still presents some Mediterranean features. Temperature is, on average, higher than in the rest of the continent. Thanks to its elongated form, Portugal displays a significant variety of landscapes and sometimes brisk climatic changes for a country of such relatively small size. Following a classical division of the territory, it is possible to identify three main geographical regions: a southern half – with practically no mountains and a very hot and dry climate – and a northern half subdivided into two other vertical sub-halves – with a north-interior region, mountainous, cool but relatively dry, and a north-coast region, relatively mountainous, cool and wet. Portugal’s population is close to 10,000,000, in an area of about 92,000 square kilometers (35,500 square miles).

The Period before the Creation of Portugal

We can only talk of Portugal as a more or less clearly identified and separate political unit (although still far from a defined nation) from the eleventh or twelfth centuries onwards. The geographical area which constitutes modern Portugal was not, of course, an eventless void before that period. But scarcity of space allows only a brief examination of the earlier period, concentrating on its main legacy to future history.

Roman and Visigothic Roots

That legacy is overwhelmingly marked by the influence of the Roman Empire. Portugal owes to Rome its language (a descendant of Latin) and main religion (Catholicism), as well as its primary juridical and administrative traditions. Interestingly enough, little of the Roman heritage passed directly to the period of existence of Portugal as a proper nation. Momentous events filtered the transition. Romans first arrived in the Iberian Peninsula around the third century B.C., and kept their rule until the fifth century of the Christian era. Then, they succumbed to the so-called “barbarian invasions.” Of the various peoples that then roamed the Peninsula, certainly the most influential were the Visigoths, a people of Germanic origin. The Visigoths may be ranked as the second most important force in the shaping of future Portugal. The country owes them the monarchical institution (which lasted until the twentieth century), as well as the preservation both of Catholicism and (although substantially transformed) parts of Roman law.

Muslim Rule

The most spectacular episode following Visigoth rule was the Muslim invasion of the eighth century. Islam ruled the Peninsula from then until the fifteenth century, although it occupied an increasingly smaller area from the ninth century onwards, as the Christian Reconquista started repelling it with growing efficiency. Muslim rule set the area on a path different from the rest of Western Europe for a few centuries. However, apart from some ethnic traits bequeathed to its people, a few words in its lexicon, and certain agricultural, manufacturing and sailing techniques and knowledge (the latter of significant importance to the Portuguese naval discoveries), nothing of the magnitude of the Roman heritage was left in the peninsula by Islam. This is particularly true of Portugal, where Muslim rule was less effective and shorter than in the South of Spain. Perhaps the most important legacy of Muslim rule was, precisely, its tolerance towards the Roman heritage. Representative of that tolerance was the existence during the Muslim period of an ethnic group, the so-called moçárabe or mozarabe population, constituted by traditional residents that lived within Muslim communities, accepted Muslim rule, and mixed with Muslim peoples, but still kept their language and religion, i.e. some form of Latin and the Christian creed.

Modern Portugal is a direct result of the Reconquista, the Christian fight against Muslim rule in the Iberian Peninsula. That successful fight was followed by the period when Portugal as a nation came to existence. The process of creation of Portugal was marked by the specific Roman-Germanic institutional synthesis that constituted the framework of most of the country’s history.

Portugal from the Late Eleventh Century to the Late Fourteenth Century

Following the Muslim invasion, a small group of Christians kept their independence, settling in a northern area of the Iberian Peninsula called Asturias. Their resistance to Muslim rule rapidly transformed into an offensive military venture. During the eighth century a significant part of northern Iberia was recovered to Christianity. This frontier, roughly cutting the peninsula in two halves, held firm until the eleventh century. Then, the crusaders came, mostly from France and Germany, inserting the area in the overall European crusade movement. By the eleventh century, the original Asturian unit had been divided into two kingdoms, Leon and Navarra, which in turn were subdivided into three new political units, Castile, Aragon and the Condado Portucalense. The Condado Portucalense (the political unit at the origin of future Portugal) resulted from a donation, made in 1096, by the Leonese king to a Crusader coming from Burgundy (France), Count Henry. He did not claim the title of king; that title would be assumed only by his son, Afonso Henriques (generally accepted as the first king of Portugal), in the first decade of the twelfth century.

Condado Portucalense as the King’s “Private Property”

Such political units as the various peninsular kingdoms of that time must be seen as entities differing in many respects from current nations. Not only did their peoples not possess any clear “national consciousness,” but also the kings themselves did not rule them based on the same sort of principle we tend to attribute to current rulers (either democratic, autocratic or any other sort). Both the Condado Portucalense and Portugal were understood by their rulers as something still close to “private property” – the use of quotes here is justified by the fact that private property, in the sense we give to it today, was a non-existent notion then. We must, nevertheless, stress this as the moment in which Portuguese rulers started seeing Portugal as a political unit separate from the remaining units in the area.

Portugal as a Military Venture

Such novelty was strengthened by the continuing war against Islam, which still occupied most of the center and south of what later became Portugal. This is a crucial fact about Portugal in its infancy, and one that helps explain the most important episode in Portuguese history, the naval discoveries: the country in those days was largely a military venture against Islam. As, in that fight, the kingdom expanded to the south, it did so separately from the other Christian kingdoms existing in the peninsula. And these ended up constituting the two main negative forces for Portugal’s definition as an independent country, i.e. Islam and the remaining Iberian Christian kingdoms. The country achieved a clear geographical definition quite early in its history, more precisely in 1249, when King Afonso III conquered the Algarve from Islam. Remarkably for a continent marked by so much permanent frontier redesign, Portugal acquired then its current geographical shape.

The military nature of the country’s growth gave rise to two of its most important characteristics in early times: Portugal was throughout this entire period a frontier country, and one where the central authority was unable to fully control the territory in its entirety. This latter fact, together with the reception of the Germanic feudal tradition, shaped the nature of the institutions then established in the country. This was particularly important in understanding the land donations made by the crown. These were crucial, for they brought a dispersion of central powers, devolved to local entities, as well as a delegation of powers we would today call “public” to entities we would call “private.” Donations were made in favor of three sorts of groups: noble families, religious institutions and the people in general of particular areas or cities. They resulted mainly from the needs of the process of conquest: noblemen were soldiers, and the crown’s concession of the control of a certain territory was both a reward for their military feats as well as an expedient way of keeping the territory under control (even if in a more indirect way) in a period when it was virtually impossible to directly control the full extent of the conquered area. Religious institutions were crucial in the Reconquista, since the purpose of the whole military effort was to eradicate the Muslim religion from the country. Additionally, priests and monks were full military participants in the process, not limiting their activity to studying or preaching. So, as the Reconquista proceeded, three sorts of territories came into existence: those under direct control of the crown, those under the control of local seigneurs (which subdivided into civil and ecclesiastical) and the communities.

Economic Impact of the Military Institutional Framework

This was an institutional framework that had a direct economic impact. The crown’s donations were not comparable to anything we would nowadays call private property. The land’s donation had attached to it the ability conferred on the beneficiary to a) exact tribute from the population living in it, b) impose personal services or reduce peasants to serfdom, and c) administer justice. This is a phenomenon that is typical of Europe until at least the eighteenth century, and is quite representative of the overlap between the private and public spheres then prevalent. The crown felt it was entitled to give away powers we would nowadays call public, such as those of taxation and administering justice, and beneficiaries from the crown’s donations felt they were entitled to them. As a further limit to full private rights, the land was donated under certain conditions, restricting the beneficiaries’ power to divide, sell or buy it. They managed those lands, thus, in a manner entirely dissimilar from a modern enterprise. And the same goes for actual farmers, those directly toiling the land, since they were sometimes serfs, and even when they were not, had to give personal services to seigneurs and pay arbitrary tributes.

Unusually Tight Connections between the Crown and High Nobility

Much of the history of Portugal until the nineteenth century revolves around the tension between these three layers of power – the crown, the seigneurs and the communities. The main trend in that relationship was, however, in the direction of an increased weight of central power over the others. This is already visible in the first centuries of existence of the country. In a process that may look paradoxical, that increased weight was accompanied by an equivalent increase in seigneurial power at the expense of the communities. This gave rise to a uniquely Portuguese institution, which would be of extreme importance for the development of the Portuguese economy (as we will later see): the extremely tight connection between the crown and the high nobility. As a matter of fact, very early in the country’s history, the Portuguese nobility and Church became much dependent on the redistributive powers of the crown, in particular in what concerns land and the tributes associated with it. This led to an apparently contradictory process, in which at the same time as the crown was gaining ascendancy in the ruling of the country, it also gave away to seigneurs some of those powers usually considered as being public in nature. Such was the connection between the crown and the seigneurs that the intersection between private and public powers proved to be very resistant in Portugal. That intersection lasted longer in Portugal than in other parts of Europe, and consequently delayed the introduction in the country of the modern notion of property rights. But this is something to be developed later, and to fully understand it we must go through some further episodes of Portuguese history. For now, we must note the novelty brought by these institutions. Although they can be seen as unfriendly to property rights from a nineteenth- and twentieth-century vantage point, they represented in fact a first, although primitive and incomplete, definition of property rights of a certain sort.

Centralization and the Evolution of Property

As the crown’s centralization of power proceeded in the early history of the country, institutions such as serfdom and the settling of colonies gave way to contracts that granted fuller personal and property rights to farmers. Serfdom was not exceptionally widespread in early Portugal, and it tended to disappear from the thirteenth century onwards. More common was the settlement of colonies, an arrangement in which settlers were simple toilers of the land who had to pay significant tributes to either the king or seigneurs but had no right to buy or sell the land. From the thirteenth century onwards, as the king and the seigneurs began encroaching on the kingdom’s land and the military situation got calmer, serfdom and settling contracts were increasingly substituted by contracts of the copyhold type. When compared with current concepts of private property, copyhold includes serious restrictions to the full use of private property. Yet, it represented an improvement when compared to the prior legal forms of land use. In the end, private property as we understand it today began its dissemination through the country at this time, although in a form we would still consider primitive. This, to a large extent, repeats, with one to two centuries of delay, the evolution that had already occurred in the core of “feudal Europe,” i.e. the Franco-Germanic world and its extension to the British Isles.

Movement toward an Exchange Economy

Precisely as in that core “feudal Europe,” such institutional change brought a first moment of economic growth to the country – of course, there are no consistent figures for economic activity in this period, and, consequently, this is entirely based on more or less superficial evidence pointing in that direction. The institutional change just noted was accompanied by a change in the way noblemen and the Church understood their possessions. As the national territory became increasingly sheltered from the destruction of war, seigneurs became less interested in military activity and conquest, and more interested in the good management of the land they already owned. Accompanying that, some vague principles of specialization also appeared. Some of those possessions were thus significantly transformed into agricultural firms devoted to a certain extent to selling on the market. One should not, of course, exaggerate the importance acquired by the exchange of goods in this period. Most of the economy continued to be of a non-exchange or (at best) barter character. But the signs of change were important, as a certain part of the economy (small as it was) led the way to future more widespread changes. Not by chance, this is the period when we have evidence of the first signs of monetization of the economy, certainly a momentous change (even if initially small in scale), corresponding to an entirely new framework for economic relations.

These essential changes are connected with other aspects of the country’s evolution in this period. First, the war at the frontier (rather than within the territory) seems to have had a positive influence on the rest of the economy. The military front was made up of a large number of soldiers, who needed a constant supply of various goods, and this stimulated a significant part of the economy. Also, as the conquest enlarged the territory under the Portuguese crown’s control, the king’s court became ever more complex, thus creating one more demand pole. Additionally, together with the enlargement of territory also came the insertion within the economy of various cities previously under Muslim control (such as the future capital, Lisbon, after 1147). All this was accompanied by a widespread movement of what we might call internal colonization, whose main purpose was to farm previously uncultivated agricultural land. This is also the time of the first signs of contact of Portuguese merchants with foreign markets, and foreign merchants with Portuguese markets. There are various signs of the presence of Portuguese merchants in British, French and Flemish ports, and vice versa. Most Portuguese exports were of a typical Mediterranean nature, such as wine, olive oil, salt, fish and fruits, while imports were mainly grain and textiles. The economy became, thus, more complex, and it is only natural that, to accompany such changes, the notions of property, management and “firm” changed in such a way as to accommodate the new evolution. The suggestion has been made that the success of the Christian Reconquista depended to a significant extent on the economic success of those innovations.

Role of the Crown in Economic Reforms

Of additional importance for the increasing sophistication of the economy is the role played by the crown as an institution. From the thirteenth century onwards, the rulers of the country showed a growing interest in having a well organized economy able to grant them an abundant tax base. Kings such as Afonso III (ruling from 1248 until 1279) and D. Dinis (1279-1325) became famous for their economic reforms. Monetary reforms, fiscal reforms, the promotion of foreign trade, and the promotion of local fairs and markets (an extraordinarily important institution for exchange in medieval times) all point in the direction of an increased awareness on the part of Portuguese kings of the relevance of promoting a proper environment for economic activity. Again, we should not exaggerate the importance of that awareness. Portuguese kings were still significantly (although not entirely) arbitrary rulers, able with one decision to destroy years of economic hard work. But changes were occurring, and some in a direction positive for economic improvement.

As mentioned above, the definition of Portugal as a separate political entity had two main negative elements: Islam as occupier of the Iberian Peninsula and the centralization efforts of the other political entities in the same area. The first element faded as the Portuguese Reconquista, by the mid-thirteenth century, reached the southernmost point of the territory of today’s Portugal. The conflict (either latent or open) with the remaining kingdoms of the peninsula was kept alive much longer. As the early centuries of the second millennium unfolded, a major centripetal force emerged in the peninsula, the kingdom of Castile. Castile progressively became the most successful centralizing political unit in the area. Such success reached a first climactic moment in the late fifteenth century, during the reign of Ferdinand and Isabella, and a second one by the end of the sixteenth century, with the brief annexation of Portugal by the Spanish king, Phillip II. Much of the effort of Portuguese kings went into keeping Portugal independent of those other kingdoms, particularly Castile. But sometimes they envisaged something different, such as an Iberian union with Portugal as its true political head. It was one of those episodes that led to a major moment both for the centralization of power in the Portuguese crown within Portuguese territory and for the successful separation of Portugal from Castile.

Ascent of John I (1385)

It started during the reign of King Ferdinand of Portugal, in the later decades of the fourteenth century. Through various maneuvers to unite Portugal and Castile (which included war and the promotion of various coups), Ferdinand ended up marrying his daughter to the man who would later become king of Castile. Ferdinand was, however, generally unsuccessful in his attempts to unite the crowns under his own leadership, and when he died in 1383 the king of Castile (thanks to his marriage with Ferdinand’s daughter) became the legitimate heir to the Portuguese crown. This was Ferdinand’s dream in reverse: the crowns would unite, but not under Portugal. The prospect of peninsular unity under Castile was not necessarily loathed by a large part of the Portuguese elite, particularly sections of the aristocracy, which viewed Castile as a much more noble-friendly kingdom. This was not, however, a unanimous sentiment, and a strong reaction followed, led by other parts of the same elite, in order to keep the Portuguese crown in the hands of a Portuguese king, separate from Castile. A war with Castile and intimations of civil war ensued, and in the end Portugal’s independence was preserved. The man chosen to succeed Ferdinand, under a new dynasty, was the bastard son of Peter I (Ferdinand’s father), who became John I in 1385.

This was a crucial episode, not simply because of the change in dynasty, imposed against the legitimate heir to the throne, but also because of success in the centralization of power by the Portuguese crown and, as a consequence, of separation of Portugal from Castile. Such separation led Portugal, additionally, to lose interest in further political adventures concerning Castile, and switch its attention to the Atlantic. It was the exploration of this path that led to the most unique period in Portuguese history, one during which Portugal reached heights of importance in the world that find no match in either its past or future history. This period is the Discoveries, a process that started during John I’s reign, in particular under the forceful direction of the king’s sons, most famous among them the mythical Henry, the Navigator. The 1383-85 crisis and John’s victory can thus be seen as the founding moment of the Portuguese Discoveries.

The Discoveries and the Apex of Portuguese International Power

The Discoveries are generally presented as the first great moment of world capitalism, with markets all over the world becoming connected under European leadership. Although true, this is a largely post hoc perspective, for the Discoveries became a great commercial venture only about half-way into the story. Before that, the aims of the Discoveries’ protagonists were mostly of another sort.

The Conquest of Ceuta

An interesting way to gain a fuller picture of the Discoveries is to study the Portuguese contribution to them. Portugal was the pioneer of transoceanic navigation, discovering lands and sea routes formerly unknown to Europeans, and starting trades and commercial routes that linked Europe to other continents in a totally unprecedented fashion. But, at the start, the aims of the whole venture were entirely different. The event generally chosen to date the beginning of the Portuguese discoveries is the conquest of Ceuta – a city-state across the Straits of Gibraltar from Spain – in 1415. In itself such a voyage would not differ much from other attempts made in the Mediterranean Sea from the twelfth century onwards by various European travelers. The main purpose of all these attempts was to control navigation in the Mediterranean, in what constitutes a classic fight between Christianity and Islam. Other objectives of Portuguese travelers were the desire to find the mythical Prester John – a supposed Christian king surrounded by Islam; there are reasons to suppose that the legend of Prester John is associated with the real existence of the Coptic Christians of Ethiopia – and to reach, directly at the source, the gold of Sudan. Despite this latter objective, religious reasons prevailed over others in spurring the first Portuguese efforts of overseas expansion. This should not surprise us, however, for Portugal had since its birth been, precisely, an expansionist political unit under a religious heading. The jump across the sea to North Africa was little more than the continuation of that expansionist drive. Here we must understand Portugal’s position as determined by two elements, one general to the whole European continent, and another more specific. The first is that the expansion of Portugal in the Middle Ages coincided with the general expansion of Europe, and Portugal was very much a part of that process. The second is that, being part of the process, Portugal was (by geographical accident) at its forefront: Portugal (and Spain) stood in the first line of attack and defense against Islam. The conquest of Ceuta by Henry the Navigator is hence part of that story of confrontation with Islam.

Exploration from West Africa to India

The first efforts of Henry along the West African coast and on the Atlantic high seas can be placed within this same framework. The explorations along the African coast had two main objectives: to gain a keener sense of how far south Islam’s strength extended, and to outflank Morocco, both in order to attack Islam on a wider front and to find alternative ways of reaching Prester John. These objectives rested, of course, on geographical ignorance, as the coastline Portuguese navigators eventually found was much longer than the one Henry expected. In these efforts, Portuguese navigators pushed increasingly south, but also, mainly due to accidental changes of direction, west. Such westward deviations led to the discovery, in the first decades of the fifteenth century, of three archipelagos: the Canaries, Madeira (and Porto Santo) and the Azores. But the major navigational feat of this period was the passage of Cape Bojador in 1434, after which the whole western coast of the African continent was opened to exploration and, increasingly (and here is the novelty), commerce. As Africa revealed its riches, mostly gold and slaves, these ventures began to acquire a more strictly economic meaning. All this spurred the Portuguese to push further south and, when they reached the southernmost tip of the African continent, to round it and sail east. And so they did. Bartolomeu Dias rounded the Cape of Good Hope in 1487, and ten years later Vasco da Gama circumnavigated Africa to reach India by sea. By the time of Vasco da Gama’s journey, the autonomous economic importance of intercontinental trade was well established.

Feitorias and Trade with West Africa, the Atlantic Islands and India

As the second half of the fifteenth century unfolded, Portugal created a complex trade structure connecting India and the African coast to Portugal and, from there, to the north of Europe. This consisted of a network of trading posts (feitorias) along the African coast, from which goods were shipped to Portugal and then re-exported to Flanders, where a further Portuguese feitoria was opened. This trade was based on such African goods as gold, ivory, red peppers, slaves and other less important goods. As various authors have noted, this was in some ways a continuation of the pattern of trade created during the Middle Ages, in the sense that Portugal was able to diversify it by adding new goods to its traditional exports (wine, olive oil, fruits and salt). The Portuguese maintained a virtual monopoly of these African commercial routes until the early sixteenth century. The only threats to that trade structure came from pirates originating in Britain, Holland, France and Spain. One further element of this trade structure was the Atlantic Islands (Madeira, the Azores and the African archipelagos of Cape Verde and São Tomé). These islands contributed such goods as wine, wheat and sugar cane. After the sea route to India was discovered and the Portuguese were able to establish regular connections with India, the trading structure of the Portuguese empire became more complex. The Portuguese now began bringing spices, precious stones, silk and woods from India, again based on a network of feitorias established there. The maritime route to India acquired extreme importance to Europe precisely at this time, since the Ottoman Empire was then able to block the traditional inland-Mediterranean route that had supplied the continent with Indian goods.

Control of Trade by the Crown

One crucial aspect of the Portuguese Discoveries is the high degree of control exerted by the crown over the whole venture. The first episodes in the early fifteenth century, under Henry the Navigator (as well as the first exploratory trips along the African coast), were entirely directed by the crown. Then, as the activity became more profitable, it was first liberalized, and then rented out (in toto) to merchants, who were required to pay the crown a significant share of their profits. Finally, when the full Indo-African network was consolidated, the crown directly controlled the largest share of the trade (although never monopolizing it), participated in “public-private” joint ventures, or imposed heavy tributes on traders. The grip of the crown tightened as the empire grew in size and complexity. Until the early sixteenth century, the empire consisted mainly of a network of trading posts. No serious attempt was made by the Portuguese crown to exert a significant degree of territorial control over the various areas constituting the empire.

The Rise of a Territorial Empire

This changed with the growth of trade from India and Brazil. As India was transformed into a platform for trade not only around Africa but also within Asia, a tendency developed (in particular under Afonso de Albuquerque, in the early sixteenth century) to create an administrative structure in the territory. This was not particularly successful. An administrative structure was indeed created, but it remained forever incipient. A relatively more complex administrative structure would only appear in Brazil. Until the middle of the sixteenth century, Brazil was relatively ignored by the crown. But with the success of the system of sugar cane plantations in the Atlantic islands, the Portuguese crown decided to transplant it to Brazil. Although political power was initially controlled by a group of seigneurs to whom the crown donated certain areas of the territory, the system became increasingly centralized as time went on. This is clearly visible in the creation of the post of governor-general of Brazil, directly answerable to the crown, in 1549.

Portugal Loses Its Expansionary Edge

Until the early sixteenth century, Portugal capitalized on being the pioneer of European expansion. It monopolized African and, initially, Indian trade. But, by that time, changes were taking place. Two significant events mark the change in the political tide. The first was the increasing assertiveness of the Ottoman Empire in the Eastern Mediterranean, which coincided with a new bout of Islamic expansionism – ultimately bringing the Mughal dynasty to India – as well as with the re-opening of the Mediterranean route for Indian goods. This put pressure on Portuguese control over Indian trade. Not only was political control over the subcontinent now directly threatened by Islamic rulers, but the profits from Indian trade also started declining. This is certainly one of the reasons why Portugal redirected its imperial interests to the south Atlantic, particularly Brazil – the other reasons being the growing demand for sugar in Europe and the success of the sugar cane plantation system in the Atlantic islands. The second event marking the change in tide was the increased assertiveness of imperial Spain, both within Europe and overseas. Spain, under the Habsburgs (mostly Charles V and Phillip II), exerted a dominance over the European continent unprecedented since Roman times. This was complemented by the beginning of the exploration of the American continent (from the Caribbean to Mexico and the Andes), again putting pressure on the Portuguese empire overseas. What is more, this is the period when not only Spain but also Britain, Holland and France acquired navigational and commercial skills equivalent to those of the Portuguese, and so began competing with them in some of their more traditional routes and trades. By the middle of the sixteenth century, Portugal had definitively lost its expansionary edge. This would come to a tragic conclusion with the death of the heirless King Sebastian in North Africa in 1578 and the loss of political independence to Spain, under Phillip II, in 1580.

Empire and the Role, Power and Finances of the Crown

The first century of empire brought significant political consequences for the country. As noted above, the Discoveries were to a very large extent directed by the crown. As such, they constituted one further step in the affirmation of Portugal as a separate political entity in the Iberian Peninsula. Empire created a political and economic sphere in which Portugal could remain independent from the rest of the peninsula. It thus contributed to the definition of what we might call “national identity.” Additionally, empire significantly enhanced the crown’s redistributive power. To benefit from the profits of transoceanic trade, or to reach a position in the imperial hierarchy or even within the national hierarchy proper, candidates had to turn to the crown. As it controlled imperial activities, the crown became a huge employment agency, capable of attracting the efforts of most of the national elite. The empire was thus transformed into an extremely important instrument for the crown’s centralization of power. It has already been mentioned that much of the political history of Portugal from the Middle Ages to the nineteenth century revolves around the tension between the centripetal power of the crown and the centrifugal powers of the aristocracy, the Church and the local communities. The imperial episode constituted a major step in the centralization of the crown’s power. The way such centralization occurred was, however, peculiar, and that would have crucial consequences for the future. Various authors have noted how, despite the growing centralizing power of the crown, the aristocracy was able to keep its local powers, thanks to the significant taxing and judicial autonomy it possessed in the lands under its control. This is largely true, but, as other authors have noted, it happened with the crown acting as an intermediary agent. The Portuguese aristocracy was from early times much less independent of the crown than in most parts of Western Europe, and this situation was accentuated during the days of empire. As we have seen above, the crown directed the Reconquista in a way that enabled it to control and redistribute (through the famous donations) most of the land that was conquered. In those early medieval days it was, thus, service to the crown that made noblemen eligible to benefit from land donations. It is undoubtedly true that by donating land the crown was also giving away (at least partially) the monopoly of taxing and judging. But what is crucial here is its significant intermediary power. With empire, that power increased again, and once more a large part of the aristocracy became dependent on the crown to acquire political and economic power. The empire became, furthermore, the main means of financing the crown. Until the nineteenth century, receipts from trade activities related to the empire (whether profits, tariffs or other taxes) never fell below 40 percent of the crown’s total receipts, and even that happened only briefly, in its worst days. Most of the time, those receipts amounted to 60 or 70 percent of the crown’s total receipts.

Other Economic Consequences of the Empire

Such a role for the crown’s receipts was one of the most important consequences of empire. Thanks to it, tax receipts from internal economic activity became largely unnecessary for the functioning of national government, something that would have deep consequences for that very internal activity. This was not, however, the only economic consequence of empire. One of the most important was, obviously, the enlargement of the trade base of the country. Thanks to empire, the Portuguese (and Europe, through the Portuguese) gained access to vast sources of precious metals, stones, tropical goods (such as fruit, sugar, tobacco, rice, potatoes, maize, and more), raw materials and slaves. Portugal used these goods to enlarge its pattern of comparative advantage, which helped it penetrate European markets, while at the same time enlarging the volume and variety of its imports from Europe. Such a process of specialization along comparative-advantage lines was, however, very incomplete. As noted above, the crown exerted a high degree of control over the trade activity of empire, and as a consequence many institutional factors prevented Portugal (and its imperial complex) from fully following those principles. In the end, in economic terms, the empire was inefficient – something to be contrasted, for instance, with the Dutch equivalent, which was much more geared to commercial success and based on clearer efficiency-oriented management methods. By so thoroughly controlling imperial trade, the crown became a sort of barrier between the empire’s riches and the national economy. Much of what was earned in imperial activity was spent either on maintaining it or on the crown’s clientele. Consequently, the spreading of the gains from imperial trade to the rest of the economy was highly centralized in the crown. A highly visible effect of this phenomenon was the remarkable growth and size of the country’s capital, Lisbon. In the sixteenth century, Lisbon was the fifth largest city in Europe, and from the sixteenth to the nineteenth century it was always in the top ten, a remarkable feat for a country with such a small population as Portugal. It was also the symptom of a much inflated bureaucracy living on the gains of empire, as well as of how little of those gains spread through the economy as a whole.

Portuguese Industry and Agriculture

The rest of the economy did, indeed, remain largely untouched by this imperial manna. Most of industry was unaffected by it; the only visible impact of empire on the sector was the fostering of naval construction and repair, and their accessory activities. Most of industry kept functioning according to old standards, far from the impact of transoceanic prosperity. Much the same happened with agriculture. Although it benefited from the introduction of new crops (mostly maize, but also potatoes and rice), Portuguese agriculture did not benefit significantly from the income stream arising from imperial trade, in particular where we might expect it to have been a source of investment. Maize constituted an important technological innovation with a significant impact on the productivity of Portuguese agriculture, but its cultivation was concentrated in the north-western part of the country, leaving the rest of the sector untouched.

Failure of a Modern Land Market to Develop

One very important consequence of empire on agriculture and, hence, on the economy, was the preservation of the property structure coming from the Middle Ages, namely that resulting from the crown’s donations. The empire enhanced again the crown’s powers to attract talent and, consequently, donate land. Donations were regulated by official documents called Cartas de Foral, in which the tributes due to the beneficiaries were specified. During the time of the empire, the conditions ruling donations changed in a way that reveals an increased monarchical power: donations were made for long periods (for instance, one life), but the land could not be sold nor divided (and, thus, no parts of it could be sold separately) and renewal required confirmation on the part of the crown. The rules of donation, thus, by prohibiting buying, selling and partition of land, were a major obstacle to the existence not only of a land market, but also of a clear definition of property rights, as well as freedom in the management of land use.

Additionally, various tributes were due to the beneficiaries. Some were in kind, some in money; some were fixed, others proportional to the product of the land. This process dissociated land ownership from appropriation of the land’s product, since the land was ultimately the crown’s. Furthermore, the actual beneficiaries (owing to the donations’ rules) had little freedom in the management of the donated land. Although selling land in such circumstances was forbidden to the beneficiaries, renting it was not, and several beneficiaries did so. A new dissociation between ownership and appropriation of product was thus introduced. Although in these donations some tributes were paid by freeholders, most were paid by copyholders. Copyhold granted its signatories the use of land in perpetuity or for lives (one to three), but did not allow them to sell it. This introduced a further dissociation between ownership, appropriation of the land’s product and its management. Although it could not be sold, land under copyhold could be ceded in “sub-copyhold” contracts – a replication of the original contract under identical conditions. This obviously introduced a new complication into the system. As should be clear by now, such a “baroque” system created an accumulation of layers of rights over the land: different people could exert different rights over it, each layer of rights was limited by the others, and the layers sometimes conflicted with one another in intricate ways. A major consequence of all this was the limited freedom the various holders of rights had in the management of their assets.

High Levels of Taxation in Agriculture

A second direct consequence of the system was the complicated juxtaposition of tributes on agricultural product. The land and its product in Portugal in those days were loaded with tributes (a sort of taxation). This explains one recent historian’s claim (admittedly exaggerated) that, in that period, those who owned the land did not till it, and those who tilled it did not own it. We must distinguish these tributes from strict rent payments, as rent contracts are freely signed by the two (or more) parties taking part in them. The tributes we are discussing here represented, in reality, an imposition, which makes the word taxation appropriate to describe them. This is one further result of the already mentioned feature of the institutional framework of the time, the difficulty of distinguishing between the private and the public spheres.

Besides the tributes just described, other tributes also weighed on the land. Some were, again, of a nature we would nowadays call private, others of a more clearly public nature. The former were the tributes due to the Church, the latter the taxes proper, due explicitly as such to the crown. The main tribute due to the Church was the tithe. In theory, the tithe was a tenth of farmers’ production and was to be paid directly to certain religious institutions. In practice, it was not always a tenth of production, nor did the Church always receive it directly, as its collection was in many cases rented out to various other agents. Nevertheless, it was an important tribute to be paid by producers in general. The taxes due to the crown were the sisa (an indirect tax on consumption) and the décima (an income tax). As far as we know, these taxes weighed on average much less than the seigneurial tributes. Still, when added to them, they accentuated the high level of taxation or para-taxation typical of the Portuguese economy of the time.

Portugal under Spanish Rule, Restoration of Independence and the Eighteenth Century

Spanish Rule of Portugal, 1580-1640

The death of King Sebastian in North Africa, during a military mission in 1578, left the Portuguese throne with no direct heir. There were, however, various indirect candidates in line, thanks to the many kinship links established by the Portuguese royal family to other European royal and aristocratic families. Among them was Phillip II of Spain. He would eventually inherit the Portuguese throne, although only after invading the country in 1580. Between 1578 and 1580 leaders in Portugal tried unsuccessfully to find a “national” solution to the succession problem. In the end, resistance to the establishment of Spanish rule was extremely light.

Initial Lack of Resistance to Spanish Rule

To understand why resistance was so mild one must bear in mind the nature of such political units as the Portuguese and Spanish kingdoms at the time. These kingdoms were not the equivalent of contemporary nation-states. They had a separate identity, evident in such things as a different language, a different cultural history, and different institutions, but this did not amount to being a nation. The crown itself, when seen as an institution, still retained many features of a “private” venture. Of course, to some extent it represented the materialization of the kingdom and its “people,” but (by the standards of current political concepts) it still retained a much more ambiguous definition. Furthermore, Phillip II promised to adopt a set of rules allowing for extensive autonomy: the Portuguese crown would be “aggregated” to the Spanish crown although not “absorbed” or “associated” or even “integrated” with it. According to those rules, Portugal was to keep its separate identity as a crown and as a kingdom. All positions in the Portuguese government were to be held by Portuguese, the Portuguese language was the only one allowed in official matters in Portugal, and positions in the Portuguese empire were to be granted only to Portuguese.

The implementation of such rules depended largely on the willingness of the Portuguese nobility, Church and high-ranking officials to accept them. As there were no major popular revolts that could pressure these groups to decide otherwise, they did not have much difficulty in accepting them. In reality, they saw the new situation as an opportunity for greater power. After all, Spain was then the largest and most powerful political unit in Europe, with vast extensions throughout the world. To participate in such a venture under conditions of great autonomy was seen as an excellent opening.

Resistance to Spanish Rule under Phillip IV

The autonomous status was kept largely untouched until the third decade of the seventeenth century, i.e., until Phillip IV’s reign (1621-1640, in Portugal). This was a reign marked by an important attempt at centralization of power under the Spanish crown. A major impulse for this was Spain’s participation in the Thirty Years War. Simply put, the financial stress caused by the war forced the crown not only to increase fiscal pressure on the various political units under it but also to try to control them more closely. This led to serious efforts to revoke the autonomous status of Portugal (as well as that of other European regions of the empire). It was as a reaction to those attempts that many Portuguese aristocrats and important personalities led a movement to recover independence. This movement must, again, be interpreted with care, paying attention to the political concepts of the time. It was not an overtly national reaction, in today’s sense of the word “national.” It was mostly a reaction by certain social groups that felt their power threatened by the new plans for increased centralization under Spain. As some historians have noted, the 1640 revolt is best understood as a movement to preserve the constitutional elements of the framework of autonomy established in 1580, against the new centralizing drive, rather than as a national or nationalist movement.

Although that was the original intent of the movement, the fact is that, progressively, the new Portuguese dynasty (whose first monarch was John IV, 1640-1656) proceeded to an unprecedented centralization of power in the hands of the Portuguese crown. This means that, even if the original intent of the leaders of the 1640 revolt was to keep the autonomy prevalent both under pre-1580 Portuguese rule and under post-1580 Spanish rule, the final result of their action was to favor centralization in the Portuguese crown, and thus to help define Portugal as a clearly separate country. Again, we should be careful not to interpret this new bout of centralization in the seventeenth and eighteenth centuries as the creation of a national state and a modern government. Many of the intermediate groups (in particular the Church and the aristocracy) kept their powers largely intact, including powers we would nowadays call public (such as taxation, justice and policing). But there is no doubt that the crown significantly increased its redistributive power, and the nobility and the Church increasingly had to rely on service to the crown to keep most of their powers.

Consequences of Spanish Rule for the Portuguese Empire

The period of Spanish rule had significant consequences for the Portuguese empire. Owing to its integration in the Spanish empire, Portuguese colonial territories became a legitimate target for all of Spain’s enemies. The European countries with imperial strategies (in particular, Britain, the Netherlands and France) no longer saw Portugal as a countervailing ally in their struggle with Spain, and consequently launched serious assaults on Portuguese overseas possessions. One further element of the geopolitical landscape of the period heightened competitors’ willingness to attack Portugal: Holland’s separation from the Spanish empire. Spain possessed not only a large overseas empire but also an enormous European one, of which Holland was a part until the 1560s. Holland, precisely, saw the Portuguese section of the Iberian empire as its weakest link and, accordingly, attacked it in a fairly systematic way. The Dutch attacks on Portuguese colonial possessions ranged from America (Brazil) to Africa (São Tomé and Angola) to Asia (India, several points in Southeast Asia, and Indonesia), and in the course of them several Portuguese territories were conquered, mostly in Asia. Portugal, however, managed to keep most of its African and American territories.

The Shift of the Portuguese Empire toward the Atlantic

When it regained independence, Portugal had to re-align its external position in accordance with the new context. Interestingly enough, all those rivals that had attacked the country’s possessions during Spanish rule initially supported its separation. France was the most decisive partner in the first efforts to regain independence. Later (in the 1660s, in the final years of the war with Spain), Britain assumed that role. This inaugurated an essential feature of Portuguese external relations: from then on, Britain became Portugal’s most consistent foreign partner. In the 1660s such a move was connected to the re-orientation of the Portuguese empire. What had until then been the center of empire – its eastern part, India and the rest of Asia – lost importance. At first, this was due to renewed activity on the Mediterranean route, which threatened the sea route to India; later, it was because the eastern empire was the part in which the Portuguese had ceded the most territory during Spanish rule, in particular to the Netherlands. Portugal kept most of its positions in both Africa and America, and this part of the world was to acquire extreme importance in the seventeenth and eighteenth centuries. In the last decades of the seventeenth century, Portugal was able to develop numerous trades mostly centered on Brazil (although some of the Atlantic islands also participated), involving sugar, tobacco and tropical woods, all sent to the growing market for luxury goods in Europe, to which was added a growing and prosperous trade in slaves from West Africa to Brazil.

Debates over the Role of Brazilian Gold and the Methuen Treaty

The range of goods in Atlantic trade acquired an important addition with the discovery of gold in Brazil in the late seventeenth century. It is the increased importance of gold in Portuguese trade relations that helps explain one of the most important diplomatic moments in Portuguese history, the Methuen Treaty (also called the Queen Anne Treaty), signed between Britain and Portugal in 1703. Many Portuguese economists and historians have blamed the treaty for Portugal’s inability to achieve modern economic growth during the eighteenth and nineteenth centuries. It must be remembered that the treaty stipulated that tariffs be reduced in Britain for imports of Portuguese wine (favoring it explicitly over French wine), while, as a counterpart, Portugal had to eliminate all prohibitions on imports of British wool textiles (even if tariffs were left in place). Some historians and economists have seen this as Portugal renouncing a national industrial sector and specializing instead in agricultural goods for export. As proof, such scholars present figures for the balance of trade between Portugal and Britain after 1703, with the former exporting mainly wine and the latter textiles, and a widening trade deficit. Other authors, however, have shown that what mostly financed this trade (and the deficit) was not wine but the newly discovered Brazilian gold. Could gold, then, be the culprit for preventing Portuguese economic growth? Most historians now reject the hypothesis. The problem would lie not in a particular treaty signed in the early eighteenth century but in the structural conditions for the economy to grow – a question to be dealt with further below.

Portuguese historiography currently tends to see the Methuen Treaty mostly in the light of Portuguese diplomatic relations in the seventeenth and eighteenth centuries. The treaty would mostly mark the definitive alignment of Portugal within the British sphere. The treaty was signed during the War of the Spanish Succession, a war that divided Europe in a most dramatic manner. As the Spanish crown was left without a successor in 1700, the countries of Europe were led to support different candidates. The diplomatic choice ended up being polarized around Britain, on one side, and France, on the other. Increasingly, Portugal was led to prefer Britain, as it was the country that granted more protection to the prosperous Portuguese Atlantic trade. As Britain also had an interest in this alignment (owing to the important Portuguese colonial possessions), this explains why the treaty was economically beneficial to Portugal (contrary to what some of the older historiography tended to believe). In fact, in simple trade terms, the treaty was a good bargain for both countries, each having been given preferential treatment for certain of its more typical goods.

Brazilian Gold’s Impact on Industrialization

It is this sequence of events that has led several economists and historians to blame gold for the Portuguese inability to industrialize in the eighteenth and nineteenth centuries. Recent historiography, however, has questioned this interpretation. The manufactures whose abandonment is attributed to the inflow of gold were dedicated to the production of luxury goods and, consequently, directed to a small market that had nothing to do (in either the nature of the market or the technology) with the sectors typical of European industrialization. Had it continued, it is very doubtful that this activity would ever have become a full industrial spurt of the kind then underway in Britain. The problem lay elsewhere, as we will see below.

Prosperity in the Early 1700s Gives Way to Decline

Be that as it may, the first half of the eighteenth century was a period of unquestionable prosperity for Portugal, mostly thanks to gold, but also to the recovery of the remaining trades (both tropical and from the mainland). Such prosperity is most visible in the reign of King John V (1706-1750), generally seen as the Portuguese equivalent of the reign of France’s Louis XIV. Palaces and monasteries of great dimensions were built, and at the same time the king’s court acquired a pomp and grandeur not seen before or since, all financed largely by Brazilian gold. By the mid-eighteenth century, however, it all began to falter. Gold remittances began to decline in the 1750s. A new crisis set in, compounded by the dramatic 1755 earthquake, which destroyed a large part of Lisbon and other cities. This new crisis was at the root of a political project aiming at a vast renaissance of the country – the first in a series of such projects, all of them, significantly, occurring in the wake of traumatic events related to empire. The new project is associated with the reign of King Joseph I (1750-1777), in particular with the policies of his prime minister, the Marquis of Pombal.

Centralization under the Marquis of Pombal

The thread linking the most important political measures taken by the Marquis of Pombal is the reinforcement of state power. A major element in this connection was his confrontation with certain noble and Church representatives. The most spectacular episodes in this respect were, first, the killing of an entire noble family and, second, the expulsion of the Jesuits from national soil. This is sometimes taken as representing an outright hostile policy towards both aristocracy and Church. However, it is better seen as an attempt to integrate aristocracy and Church into the state, thus undermining their autonomous powers. In reality, what the Marquis did was to use the power to confer noble titles, as well as the Inquisition, as means to centralize and increase state power. As a matter of fact, one of the most important instruments of recruitment for state functions during the Marquis’ rule was the promise of noble titles. And the Inquisition’s functions also changed, from being mainly a religious court, mostly dedicated to the prosecution of Jews, to becoming a sort of civil political police. The Marquis’ centralizing policy covered a wide range of matters, in particular those most significant to state power. Internal policing was reinforced, with the creation of new police institutions directly coordinated by the central government. The collection of taxes became more efficient, through an institution more similar to a modern Treasury than any earlier body. Improved collection also applied to tariffs and profits from colonial trade.

The centralization of power by the government had significant repercussions for certain aspects of the relationship between state and civil society. Although the Marquis’ rule is frequently pictured as violent, it included measures generally considered “enlightened.” Such is the case of the abolition of the distinction between “New Christians” and Christians (New Christians were Jews converted to Catholicism, who suffered a certain degree of segregation and constituted an intermediate category between Jews and Christians proper). Another very important political measure by the Marquis was the abolition of slavery in mainland Portugal (even though slavery continued to be used in the colonies and the slave trade continued to prosper, the importance of the measure is beyond question).

Economic Centralization under the Marquis of Pombal

The Marquis applied his centralizing drive to economic matters as well. This happened first in agriculture, with the creation of a monopoly company for the trade in Port wine. It continued in colonial trade, where the method applied was the same, that is, the creation of companies monopolizing trade in certain products or regions of the empire. Later, interventionism extended to manufacturing. Such interventionism was essentially prompted by the international trade crisis that affected many colonial goods, the most important among them gold. As the country faced a new international payments crisis, the Marquis resorted to protectionism and the subsidization of various industrial sectors. Again, as such state support was essentially devoted to traditional, low-tech industries, this policy failed to propel Portugal into the group of countries that industrialized first.

Failure to Industrialize

The country would never be the same after the Marquis’ rule. The “modernization” of state power and his various policies left a profound mark on the Portuguese polity. They were not enough, however, to create the conditions necessary for Portugal to enter a process of industrialization. In reality, most of the structural impediments to modern growth were left untouched or were aggravated by the Marquis’ policies. This is particularly true of the relationship between central power and peripheral (aristocratic) powers. The Marquis continued the tradition, exacerbated during the fifteenth and sixteenth centuries, of liberally conferring noble titles on court members. Again, this accentuated the confusion between the public and the private spheres, with particular incidence (for what concerns us here) on the definition of property and property rights. The granting of a noble title by the crown often implied a donation of land. The beneficiary of the donation was entitled to collect tributes from the population living in the territory but was forbidden to sell it and, sometimes, even to rent it. This meant such beneficiaries were not true owners of the land; the land could not exactly be called their property. This lack of private rights was, however, compensated by the granting of such “public” rights as the ability to collect tributes – a sort of tax. Beneficiaries of donations were, thus, neither true landowners nor true state representatives. And the same went for the crown. By giving away many of the powers we tend to call public today, the crown was acting as if it could dispose of land under its administration in the same manner as private property. But since this was not entirely private property, in doing so the crown was also conceding public powers to agents we would today call private. Such confusion did not help the creation of either a true entrepreneurial class or a state dedicated to the protection of private property rights.

The whole property structure described above was preserved, even after the reforming efforts of the Marquis of Pombal. The system of donations as a method of payment for positions at the king’s court, as well as the juxtaposition of various sorts of tributes, due either to the crown or to local powers, allowed for the perpetuation of a situation in which the private and the public spheres were not clearly separated. Consequently, property rights were not well defined. If there is a crucial reason for Portugal’s impaired economic development, it lies in these arrangements. Next, we turn to the nineteenth and twentieth centuries, and see how difficult the dismantling of such an institutional structure proved to be and how it affected the growth potential of the Portuguese economy.

Suggested Reading:

Birmingham, David. A Concise History of Portugal. Cambridge: Cambridge University Press, 1993.

Boxer, C.R. The Portuguese Seaborne Empire, 1415-1825. New York: Alfred A. Knopf, 1969.

Godinho, Vitorino Magalhães. “Portugal and Her Empire, 1680-1720.” The New Cambridge Modern History, Vol. VI. Cambridge: Cambridge University Press, 1970.

Oliveira Marques, A.H. History of Portugal. New York: Columbia University Press, 1972.

Wheeler, Douglas. Historical Dictionary of Portugal. London: Scarecrow Press, 1993.

Citation: Amaral, Luciano. “Economic History of Portugal”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/economic-history-of-portugal/

English Poor Laws

George Boyer, Cornell University

A compulsory system of poor relief was instituted in England during the reign of Elizabeth I. Although the role played by poor relief was significantly modified by the Poor Law Amendment Act of 1834, the Crusade Against Outrelief of the 1870s, and the adoption of various social insurance programs in the early twentieth century, the Poor Law continued to assist the poor until it was replaced by the welfare state in 1948. For nearly three centuries, the Poor Law constituted “a welfare state in miniature,” relieving the elderly, widows, children, the sick, the disabled, and the unemployed and underemployed (Blaug 1964). This essay will outline the changing role played by the Poor Law, focusing on the eighteenth and nineteenth centuries.

The Origins of the Poor Law

While legislation dealing with vagrants and beggars dates back to the fourteenth century, perhaps the first English poor law legislation was enacted in 1536, instructing each parish to undertake voluntary weekly collections to assist the “impotent” poor. The parish had been the basic unit of local government since at least the fourteenth century, although Parliament imposed few if any civic functions on parishes before the sixteenth century. Parliament adopted several other statutes relating to the poor in the next sixty years, culminating with the Acts of 1597-98 and 1601 (43 Eliz. I c. 2), which established a compulsory system of poor relief that was administered and financed at the parish (local) level. These Acts laid the groundwork for the system of poor relief up to the adoption of the Poor Law Amendment Act in 1834. Relief was to be administered by a group of overseers, who were to assess a compulsory property tax, known as the poor rate, to assist those within the parish “having no means to maintain them.” The poor were divided into three groups: able-bodied adults, children, and the old or non-able-bodied (impotent). The overseers were instructed to put the able-bodied to work, to give apprenticeships to poor children, and to provide “competent sums of money” to relieve the impotent.

Deteriorating economic conditions and loss of traditional forms of charity in the 1500s

The Elizabethan Poor Law was adopted largely in response to a serious deterioration in economic circumstances, combined with a decline in more traditional forms of charitable assistance. Sixteenth century England experienced rapid inflation, caused by rapid population growth, the debasement of the coinage in 1526 and 1544-46, and the inflow of American silver. Grain prices more than tripled from 1490-1509 to 1550-69, and then increased by an additional 73 percent from 1550-69 to 1590-1609. The prices of other commodities increased nearly as rapidly — the Phelps Brown and Hopkins price index rose by 391 percent from 1495-1504 to 1595-1604. Nominal wages increased at a much slower rate than did prices; as a result, real wages of agricultural and building laborers and of skilled craftsmen declined by about 60 percent over the course of the sixteenth century. This decline in purchasing power led to severe hardship for a large share of the population. Conditions were especially bad in 1595-98, when four consecutive poor harvests led to famine conditions. At the same time that the number of workers living in poverty increased, the supply of charitable assistance declined. The dissolution of the monasteries in 1536-40, followed by the dissolution of religious guilds, fraternities, almshouses, and hospitals in 1545-49, “destroyed much of the institutional fabric which had provided charity for the poor in the past” (Slack 1990). Given the circumstances, the Acts of 1597-98 and 1601 can be seen as an attempt by Parliament both to prevent starvation and to control public order.
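
To make the arithmetic behind this decline concrete, the sketch below combines the reported 391 percent rise in the Phelps Brown and Hopkins price index with an assumed rough doubling of nominal wages over the century. The nominal wage figure is a hypothetical value used purely for illustration, not a reconstruction of the underlying series; the point is only to show how a real-wage decline of roughly 60 percent follows from the two movements.

```python
# Illustrative real-wage arithmetic for sixteenth-century England.
# The 391% price rise is taken from the text; the nominal wage increase
# (assumed here to be about 100%) is a hypothetical figure used only to
# show how the reported ~60% real-wage decline arises.

price_index_start = 100.0                 # 1495-1504, normalized
price_index_end = 100.0 * (1 + 3.91)      # +391% by 1595-1604

nominal_wage_start = 100.0                # normalized
nominal_wage_end = 200.0                  # assumption: nominal wages roughly doubled

real_wage_start = nominal_wage_start / price_index_start
real_wage_end = nominal_wage_end / price_index_end

decline = 1 - real_wage_end / real_wage_start
print(f"Implied real-wage decline: {decline:.0%}")  # about 59%
```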

The Poor Law, 1601-1750

It is difficult to determine how quickly parishes implemented the Poor Law. Paul Slack (1990) contends that in 1660 a third or more of parishes regularly were collecting poor rates, and that by 1700 poor rates were universal. The Board of Trade estimated that in 1696 expenditures on poor relief totaled £400,000 (see Table 1), slightly less than 1 percent of national income. No official statistics exist for this period concerning the number of persons relieved or the demographic characteristics of those relieved, but it is possible to get some idea of the makeup of the “pauper host” from local studies undertaken by historians. These suggest that, during the seventeenth century, the bulk of relief recipients were elderly, orphans, or widows with young children. In the first half of the century, orphans and lone-parent children made up a particularly large share of the relief rolls, while by the late seventeenth century in many parishes a majority of those collecting regular weekly “pensions” were aged sixty or older. Female pensioners outnumbered males by as much as three to one (Smith 1996). On average, the payment of weekly pensions made up about two-thirds of relief spending in the late seventeenth and early eighteenth centuries; the remainder went to casual benefits, often to able-bodied males in need of short-term relief because of sickness or unemployment.

Settlement Act of 1662

One of the issues that arose in the administration of relief was that of entitlement: did everyone within a parish have a legal right to relief? Parliament addressed this question in the Settlement Act of 1662, which formalized the notion that each person had a parish of settlement, and which gave parishes the right to remove within forty days of arrival any newcomer deemed “likely to be chargeable” as well as any non-settled applicant for relief. While Adam Smith, and some historians, argued that the Settlement Law put a serious brake on labor mobility, available evidence suggests that parishes used it selectively, to keep out economically undesirable migrants such as single women, older workers, and men with large families.

Relief expenditures increased sharply in the first half of the eighteenth century, as can be seen in Table 1. Nominal expenditures increased by 72 percent from 1696 to 1748-50 despite the fact that prices were falling and population was growing slowly; real expenditures per capita increased by 84 percent. A large part of this rise was due to increasing pension benefits, especially for the elderly. Some areas also experienced an increase in the number of able-bodied relief recipients. In an attempt to deter some of the poor from applying for relief, Parliament in 1723 adopted the Workhouse Test Act, which empowered parishes to deny relief to any applicant who refused to enter a workhouse. While many parishes established workhouses as a result of the Act, these were often short-lived, and the vast majority of paupers continued to receive outdoor relief (that is, relief in their own homes).
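
The nominal and real per capita figures just cited are linked by a simple deflation identity: the real per capita index equals the nominal expenditure index divided by the product of a price index and a population index. The back-of-the-envelope sketch below uses only the two growth figures reported above; the implied combined movement of prices and population is derived from them, not taken from the sources.

```python
# Decomposition of the growth in poor relief spending, 1696 to 1748-50.
# The 72% nominal rise and 84% real per-capita rise are taken from the
# text; the combined price x population factor is derived from them.

nominal_growth = 1.72          # nominal expenditures, 1748-50 relative to 1696
real_per_capita_growth = 1.84  # real per-capita expenditures over the same period

# real per capita index = nominal index / (price index * population index)
price_times_population = nominal_growth / real_per_capita_growth
print(f"Implied price x population factor: {price_times_population:.2f}")
# ~0.93, i.e. prices and population together fell slightly on net,
# consistent with falling prices and slow population growth in this period.
```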

The Poor Law, 1750-1834

The period from 1750 to 1820 witnessed an explosion in relief expenditures. Real per capita expenditures more than doubled from 1748-50 to 1803, and remained at a high level until the Poor Law was amended in 1834 (see Table 1). Relief expenditures increased from 1.0% of GDP in 1748-50 to a peak of 2.7% of GDP in 1818-20 (Lindert 1998). The demographic characteristics of the pauper host changed considerably in the late eighteenth and early nineteenth centuries, especially in the rural south and east of England. There was a sharp increase in numbers receiving casual benefits, as opposed to regular weekly pensions. The age distribution of those on relief became younger — the share of paupers who were prime-aged (20-59) increased significantly, and the share aged 60 and over declined. Finally, the share of relief recipients in the south and east who were male increased from about a third in 1760 to nearly two-thirds in 1820. In the north and west there also were shifts toward prime-age males and casual relief, but the magnitude of these changes was far smaller than elsewhere (King 2000).

Gilbert’s Act and the Removal Act

There were two major pieces of legislation during this period. Gilbert’s Act (1782) empowered parishes to join together to form unions for the purpose of relieving their poor. The Act stated that only the impotent poor should be relieved in workhouses; the able-bodied should either be found work or granted outdoor relief. To a large extent, Gilbert’s Act simply legitimized the policies of a large number of parishes that found outdoor relief both less expensive and more humane than workhouse relief. The other major piece of legislation was the Removal Act of 1795, which amended the Settlement Law so that no non-settled person could be removed from a parish unless he or she applied for relief.

Speenhamland System and other forms of poor relief

During this period, relief for the able-bodied took various forms, the most important of which were: allowances-in-aid-of-wages (the so-called Speenhamland system), child allowances for laborers with large families, and payments to seasonally unemployed agricultural laborers. The system of allowances-in-aid-of-wages was adopted by magistrates and parish overseers throughout large parts of southern England to assist the poor during crisis periods. The most famous allowance scale, though by no means the first, was that adopted by Berkshire magistrates at Speenhamland on May 6, 1795. Under the allowance system, a household head (whether employed or unemployed) was guaranteed a minimum weekly income, the level of which was determined by the price of bread and by the size of his or her family. Such scales typically were instituted only during years of high food prices, such as 1795-96 and 1800-01, and removed when prices declined. Child allowance payments were widespread in the rural south and east, which suggests that laborers’ wages were too low to support large families. The typical parish paid a small weekly sum to laborers with four or more children under age 10 or 12. Seasonal unemployment had been a problem for agricultural laborers long before 1750, but the extent of seasonality increased in the second half of the eighteenth century as farmers in southern and eastern England responded to the sharp increase in grain prices by increasing their specialization in grain production. The increase in seasonal unemployment, combined with the decline in other sources of income, forced many agricultural laborers to apply for poor relief during the winter.
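
The logic of such a bread scale can be expressed as a simple function of the bread price and family size. The parameters in the sketch below are purely hypothetical and are not the actual figures adopted at Speenhamland; the point is only to show how the guaranteed income rose with the price of bread and with each additional family member, and how the parish paid the gap between a household's earnings and that guarantee.

```python
# A minimal sketch of an allowance-in-aid-of-wages ("bread scale").
# All parameters are hypothetical; the actual Speenhamland scale used
# different figures. Amounts are in pence per week.

def guaranteed_income(bread_price_pence: float, family_size: int) -> float:
    """Guaranteed weekly income, rising with the price of the loaf
    and with the number of family members."""
    base = 3 * bread_price_pence             # allowance for the household head
    per_dependent = 1.5 * bread_price_pence  # allowance per additional member
    return base + per_dependent * (family_size - 1)

def parish_allowance(earnings_pence: float, bread_price_pence: float,
                     family_size: int) -> float:
    """Parish top-up: the gap between earnings and the guaranteed income."""
    return max(0.0, guaranteed_income(bread_price_pence, family_size) - earnings_pence)

# Example: a laborer earning 96d a week, bread at 12d, family of five.
print(parish_allowance(96, 12, 5))  # 12.0 pence top-up under these assumptions
```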

Regional differences in relief expenditures and recipients

Table 2 reports data for fifteen counties located throughout England on per capita relief expenditures for the years ending in March 1783-85, 1803, 1812, and 1831, and on relief recipients in 1802-03. Per capita expenditures were higher on average in agricultural counties than in more industrial counties, and were especially high in the grain-producing southern counties — Oxford, Berkshire, Essex, Suffolk, and Sussex. The share of the population receiving poor relief in 1802-03 varied significantly across counties, being 15 to 23 percent in the grain-producing south and less than 10 percent in the north. The demographic characteristics of those relieved also differed across regions. In particular, the share of relief recipients who were elderly or disabled was higher in the north and west than it was in the south; by implication, the share that were able-bodied was higher in the south and east than elsewhere. Economic historians typically have concluded that these regional differences in relief expenditures and numbers on relief were caused by differences in economic circumstances; that is, poverty was more of a problem in the agricultural south and east than it was in the pastoral southwest or in the more industrial north (Blaug 1963; Boyer 1990). More recently, King (2000) has argued that the regional differences in poor relief were determined not by economic structure but rather by “very different welfare cultures on the part of both the poor and the poor law administrators.”

Causes of the Increase in Relief to Able-bodied Males

What caused the increase in the number of able-bodied males on relief? In the second half of the eighteenth century, a large share of rural households in southern England suffered significant declines in real income. County-level cross-sectional data suggest that, on average, real wages for day laborers in agriculture declined by 19 percent from 1767-70 to 1795 in fifteen southern grain-producing counties, then remained roughly constant from 1795 to 1824, before increasing to a level in 1832 about 10 percent above that of 1770 (Bowley 1898). Farm-level time-series data yield a similar result — real wages in the southeast declined by 13 percent from 1770-79 to 1800-09, and remained low until the 1820s (Clark 2001).

Enclosures

Some historians contend that the Parliamentary enclosure movement, and the plowing over of commons and waste land, reduced the access of rural households to land for growing food, grazing animals, and gathering fuel, and led to the immiseration of large numbers of agricultural laborers and their families (Hammond and Hammond 1911; Humphries 1990). More recent research, however, suggests that only a relatively small share of agricultural laborers had common rights, and that there was little open access common land in southeastern England by 1750 (Shaw-Taylor 2001; Clark and Clark 2001). Thus, the Hammonds and Humphries probably overstated the effect of late eighteenth-century enclosures on agricultural laborers’ living standards, although those laborers who had common rights must have been hurt by enclosures.

Declining cottage industry

Finally, in some parts of the south and east, women and children were employed in wool spinning, lace making, straw plaiting, and other cottage industries. Employment opportunities in wool spinning, the largest cottage industry, declined in the late eighteenth century, and employment in the other cottage industries declined in the early nineteenth century (Pinchbeck 1930; Boyer 1990). The decline of cottage industry reduced the ability of women and children to contribute to household income. This, in combination with the decline in agricultural laborers’ wage rates and, in some villages, the loss of common rights, caused the incomes of many rural households in southern England to fall dangerously close to subsistence by 1795.

North and Midlands

The situation was different in the north and midlands. The real wages of day laborers in agriculture remained roughly constant from 1770 to 1810, and then increased sharply, so that by the 1820s wages were about 50 percent higher than they were in 1770 (Clark 2001). Moreover, while some parts of the north and midlands experienced a decline in cottage industry, in Lancashire and the West Riding of Yorkshire the concentration of textile production led to increased employment opportunities for women and children.

The Political Economy of the Poor Law, 1795-1834

A comparison of English poor relief with poor relief on the European continent reveals a puzzle: from 1795 to 1834 relief expenditures per capita, and expenditures as a share of national product, were significantly higher in England than on the continent. However, differences in spending between England and the continent were relatively small before 1795 and after 1834 (Lindert 1998). Simple economic explanations cannot account for the different patterns of English and continental relief.

Labor-hiring farmers take advantage of the poor relief system

The increase in relief spending in the late-eighteenth and early-nineteenth centuries was partly a result of politically-dominant farmers taking advantage of the poor relief system to shift some of their labor costs onto other taxpayers (Boyer 1990). Most rural parish vestries were dominated by labor-hiring farmers as a result of “the principle of weighting the right to vote according to the amount of property occupied,” introduced by Gilbert’s Act (1782), and extended in 1818 by the Parish Vestry Act (Brundage 1978). Relief expenditures were financed by a tax levied on all parishioners whose property value exceeded some minimum level. A typical rural parish’s taxpayers can be divided into two groups: labor-hiring farmers and non-labor-hiring taxpayers (family farmers, shopkeepers, and artisans). In grain-producing areas, where there were large seasonal variations in the demand for labor, labor-hiring farmers anxious to secure an adequate peak season labor force were able to reduce costs by laying off unneeded workers during slack seasons and having them collect poor relief. Large farmers used their political power to tailor the administration of poor relief so as to lower their labor costs. Thus, some share of the increase in relief spending in the early nineteenth century represented a subsidy to labor-hiring farmers rather than a transfer from farmers and other taxpayers to agricultural laborers and their families. In pasture farming areas, where the demand for labor was fairly constant over the year, it was not in farmers’ interests to shed labor during the winter, and the number of able-bodied laborers receiving casual relief was smaller. The Poor Law Amendment Act of 1834 reduced the political power of labor-hiring farmers, which helps to account for the decline in relief expenditures after that date.

The New Poor Law, 1834-70

The increase in spending on poor relief in the late eighteenth and early nineteenth centuries, combined with the attacks on the Poor Laws by Thomas Malthus and other political economists and the agricultural laborers’ revolt of 1830-31 (the Captain Swing riots), led the government in 1832 to appoint the Royal Commission to Investigate the Poor Laws. The Commission published its report, written by Nassau Senior and Edwin Chadwick, in March 1834. The report, described by historian R. H. Tawney (1926) as “brilliant, influential and wildly unhistorical,” called for sweeping reforms of the Poor Law, including the grouping of parishes into Poor Law unions, the abolition of outdoor relief for the able-bodied and their families, and the appointment of a centralized Poor Law Commission to direct the administration of poor relief. Soon after the report was published Parliament adopted the Poor Law Amendment Act of 1834, which implemented some of the report’s recommendations and left others, like the regulation of outdoor relief, to the three newly appointed Poor Law Commissioners.

By 1839 the vast majority of rural parishes had been grouped into poor law unions, and most of these had built or were building workhouses. On the other hand, the Commission met with strong opposition when it attempted in 1837 to set up unions in the industrial north, and the implementation of the New Poor Law was delayed in several industrial cities. In an attempt to regulate the granting of relief to able-bodied males, the Commission, and its replacement in 1847, the Poor Law Board, issued several orders to selected Poor Law Unions. The Outdoor Labour Test Order of 1842, sent to unions without workhouses or where the workhouse test was deemed unenforceable, stated that able-bodied males could be given outdoor relief only if they were set to work by the union. The Outdoor Relief Prohibitory Order of 1844 prohibited outdoor relief for both able-bodied males and females except on account of sickness or “sudden and urgent necessity.” The Outdoor Relief Regulation Order of 1852 extended the labor test for those relieved outside of workhouses.

Historical debate about the effect of the New Poor Law

Historians do not agree on the effect of the New Poor Law on the local administration of relief. Some contend that the orders regulating outdoor relief largely were evaded by both rural and urban unions, many of whom continued to grant outdoor relief to unemployed and underemployed males (Rose 1970; Digby 1975). Others point to the falling numbers of able-bodied males receiving relief in the national statistics and the widespread construction of union workhouses, and conclude that the New Poor Law succeeded in abolishing outdoor relief for the able-bodied by 1850 (Williams 1981). A recent study by Lees (1998) found that in three London parishes and six provincial towns in the years around 1850 large numbers of prime-age males continued to apply for relief, and that a majority of those assisted were granted outdoor relief. The Poor Law also played an important role in assisting the unemployed in industrial cities during the cyclical downturns of 1841-42 and 1847-48 and the Lancashire cotton famine of 1862-65 (Boot 1990; Boyer 1997). There is no doubt, however, that spending on poor relief declined after 1834 (see Table 1). Real per capita relief expenditures fell by 43 percent from 1831 to 1841, and increased slowly thereafter.

Beginning in 1840, data on the number of persons receiving poor relief are available for two days a year, January 1 and July 1; the “official” estimates in Table 1 of the annual number relieved were constructed as the average of the number relieved on these two dates. Studies conducted by Poor Law administrators indicate that the number recorded in the day counts was less than half the number assisted during the year. Lees’s “revised” estimates of annual relief recipients (see Table 1) assume that the ratio of actual to counted paupers was 2.24 for 1850-1900 and 2.15 for 1905-14; these suggest that from 1850 to 1870 about 10 percent of the population was assisted by the Poor Law each year. Given the temporary nature of most spells of relief, over a three-year period as much as 25 percent of the population made use of the Poor Law (Lees 1998).
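As a rough illustration of the arithmetic behind the revised series (a sketch using only the 1851 figures reported in Table 1 and the 2.24 multiplier stated above):

\[
941{,}000 \ \text{(average of the two day counts)} \times 2.24 \approx 2{,}108{,}000 \ \text{annual recipients},
\]

which corresponds to the 11.9 percent of the population shown in the Lees columns of Table 1 for 1851.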

The Crusade Against Outrelief

In the 1870s Poor Law unions throughout England and Wales curtailed outdoor relief for all types of paupers. This change in policy, known as the Crusade Against Outrelief, was not a result of new government regulations, although it was encouraged by the newly formed Local Government Board (LGB). The Board was aided in convincing the public of the need for reform by the propaganda of the Charity Organization Society (COS), founded in 1869. The LGB and the COS maintained that the ready availability of outdoor relief destroyed the self-reliance of the poor. The COS went on to argue that the shift from outdoor to workhouse relief would significantly reduce the demand for assistance, since most applicants would refuse to enter workhouses, and therefore reduce Poor Law expenditures. A policy that promised to raise the morals of the poor and reduce taxes was hard for most Poor Law unions to resist (MacKinnon 1987).

The effect of the Crusade can be seen in Table 1. The deterrent effect associated with the workhouse led to a sharp fall in numbers on relief — from 1871 to 1876, the number of paupers receiving outdoor relief fell by 33 percent. The share of paupers relieved in workhouses increased from 12-15 percent in 1841-71 to 22 percent in 1880, and it continued to rise to 35 percent in 1911. The extent of the crusade varied considerably across poor law unions. Urban unions typically relieved a much larger share of their paupers in workhouses than did rural unions, but there were significant differences in practice across cities. In 1893, over 70 percent of the paupers in Liverpool, Manchester, Birmingham, and in many London Poor Law unions received indoor relief; however, in Leeds, Bradford, Newcastle, Nottingham and several other industrial and mining cities the majority of paupers continued to receive outdoor relief (Booth 1894).

Change in the attitude of the poor toward relief

The last third of the nineteenth century also witnessed a change in the attitude of the poor towards relief. Prior to 1870, a large share of the working class regarded access to public relief as an entitlement, although they rejected the workhouse as a form of relief. Their opinions changed over time, however, and by the end of the century most workers viewed poor relief as stigmatizing (Lees 1998). This change in perceptions led many poor people to go to great lengths to avoid applying for relief, and available evidence suggests that there were large differences between poverty rates and pauperism rates in late Victorian Britain. For example, in York in 1900, 3,451 persons received poor relief at some point during the year, less than half of the 7,230 persons estimated by Rowntree to be living in primary poverty.

The Declining Role of the Poor Law, 1870-1914

Increased availability of alternative sources of assistance

The share of the population on relief fell sharply from 1871 to 1876, and then continued to decline, at a much slower pace, until 1914. Real per capita relief expenditures increased from 1876 to 1914, largely because the Poor Law provided increasing amounts of medical care for the poor. Otherwise, the role played by the Poor Law declined over this period, due in large part to an increase in the availability of alternative sources of assistance. There was a sharp increase in the second half of the nineteenth century in the membership of friendly societies — mutual help associations providing sickness, accident, and death benefits, and sometimes old age (superannuation) benefits — and of trade unions providing mutual insurance policies. The benefits provided workers and their families with some protection against income loss, and few who belonged to friendly societies or unions providing “friendly” benefits ever needed to apply to the Poor Law for assistance.

Work relief

Local governments continued to assist unemployed males after 1870, but typically not through the Poor Law. Beginning with the Chamberlain Circular in 1886 the Local Government Board encouraged cities to set up work relief projects when unemployment was high. The circular stated that “it is not desirable that the working classes should be familiarised with Poor Law relief,” and that the work provided should “not involve the stigma of pauperism.” In 1905 Parliament adopted the Unemployed Workman Act, which established in all large cities distress committees to provide temporary employment to workers who were unemployed because of a “dislocation of trade.”

Liberal welfare reforms, 1906-1911

Between 1906 and 1911 Parliament passed several pieces of social welfare legislation collectively known as the Liberal welfare reforms. These laws provided free meals and medical inspections (later treatment) for needy school children (1906, 1907, 1912) and weekly pensions for poor persons over age 70 (1908), and established national sickness and unemployment insurance (1911). The Liberal reforms purposely reduced the role played by poor relief, and paved the way for the abolition of the Poor Law.

The Last Years of the Poor Law

During the interwar period the Poor Law served as a residual safety net, assisting those who fell through the cracks of the existing social insurance policies. The high unemployment of 1921-38 led to a sharp increase in numbers on relief. The official count of relief recipients rose from 748,000 in 1914 to 1,449,000 in 1922; the number relieved averaged 1,379,800 from 1922 to 1938. A large share of those on relief were unemployed workers and their dependents, especially in 1922-26. Despite the extension of unemployment insurance in 1920 to virtually all workers except the self-employed and those in agriculture or domestic service, there still were large numbers who either did not qualify for unemployment benefits or had exhausted their benefits, and many of them turned to the Poor Law for assistance. The vast majority were given outdoor relief; from 1921 to 1923 the number of outdoor relief recipients increased by 1,051,000 while the number receiving indoor relief increased by 21,000.

The Poor Law becomes redundant and is repealed

Despite the important role played by poor relief during the interwar period, the government continued to adopt policies that bypassed the Poor Law and left it “to die by attrition and surgical removals of essential organs” (Lees 1998). The Local Government Act of 1929 abolished the Poor Law unions and transferred the administration of poor relief to the counties and county boroughs. In 1934 the responsibility for assisting those unemployed who were outside the unemployment insurance system was transferred from the Poor Law to the Unemployment Assistance Board. Finally, from 1945 to 1948, Parliament adopted a series of laws that together formed the basis for the welfare state, and made the Poor Law redundant. The National Assistance Act of 1948 officially repealed all existing Poor Law legislation, and replaced the Poor Law with the National Assistance Board to act as a residual relief agency.

Table 1
Relief Expenditures and Numbers on Relief, 1696-1936

| Year | Relief expend. (£000s) | Real expend. per capita (1803=100) | Expend. as share of GDP, Slack (%) | Expend. as share of GDP, Lindert (%) | Number relieved, official (000s) | Share of pop. relieved, official (%) | Number relieved, Lees (000s) | Share of pop. relieved, Lees (%) | Share of paupers relieved indoors (%) |
|---|---|---|---|---|---|---|---|---|---|
| 1696 | 400 | 24.9 | 0.8 | | | | | | |
| 1748-50 | 690 | 45.8 | 1.0 | 0.99 | | | | | |
| 1776 | 1 530 | 64.0 | 1.6 | 1.59 | | | | | |
| 1783-85 | 2 004 | 75.6 | 2.0 | 1.75 | | | | | |
| 1803 | 4 268 | 100.0 | 1.9 | 2.15 | 1 041 | 11.4 | | | 8.0 |
| 1813 | 6 656 | 91.8 | | 2.58 | | | | | |
| 1818 | 7 871 | 116.8 | | | | | | | |
| 1821 | 6 959 | 113.6 | | 2.66 | | | | | |
| 1826 | 5 929 | 91.8 | | | | | | | |
| 1831 | 6 799 | 107.9 | | 2.00 | | | | | |
| 1836 | 4 718 | 81.1 | | | | | | | |
| 1841 | 4 761 | 61.8 | | 1.12 | 1 299 | 8.3 | 2 910 | 18.5 | 14.8 |
| 1846 | 4 954 | 69.4 | | | 1 332 | 8.0 | 2 984 | 17.8 | 15.0 |
| 1851 | 4 963 | 67.8 | | 1.07 | 941 | 5.3 | 2 108 | 11.9 | 12.1 |
| 1856 | 6 004 | 62.0 | | | 917 | 4.9 | 2 054 | 10.9 | 13.6 |
| 1861 | 5 779 | 60.0 | | 0.86 | 884 | 4.4 | 1 980 | 9.9 | 13.2 |
| 1866 | 6 440 | 65.0 | | | 916 | 4.3 | 2 052 | 9.7 | 13.7 |
| 1871 | 7 887 | 73.3 | | | 1 037 | 4.6 | 2 323 | 10.3 | 14.2 |
| 1876 | 7 336 | 62.8 | | | 749 | 3.1 | 1 678 | 7.0 | 18.1 |
| 1881 | 8 102 | 69.1 | | 0.70 | 791 | 3.1 | 1 772 | 6.9 | 22.3 |
| 1886 | 8 296 | 72.0 | | | 781 | 2.9 | 1 749 | 6.4 | 23.2 |
| 1891 | 8 643 | 72.3 | | | 760 | 2.6 | 1 702 | 5.9 | 24.0 |
| 1896 | 10 216 | 84.7 | | | 816 | 2.7 | 1 828 | 6.0 | 25.9 |
| 1901 | 11 549 | 84.7 | | | 777 | 2.4 | 1 671 | 5.2 | 29.2 |
| 1906 | 14 036 | 96.9 | | | 892 | 2.6 | 1 918 | 5.6 | 31.1 |
| 1911 | 15 023 | 93.6 | | | 886 | 2.5 | 1 905 | 5.3 | 35.1 |
| 1921 | 31 925 | 75.3 | | | 627 | 1.7 | | | 35.7 |
| 1926 | 40 083 | 128.3 | | | 1 331 | 3.4 | | | 17.7 |
| 1931 | 38 561 | 133.9 | | | 1 090 | 2.7 | | | 21.5 |
| 1936 | 44 379 | 165.7 | | | 1 472 | 3.6 | | | 12.6 |

Notes: Relief expenditure data are for the year ended on March 25. In calculating real per capita expenditures, I used cost of living and population data for the previous year.
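The construction implied by these notes can be written out as follows (a sketch only; the symbols E, C, and P are introduced here for illustration):

\[
\text{Real expend. per capita}_{t} \;=\; 100 \times \frac{E_{t}\,/\,(C_{t-1}\,P_{t-1})}{E_{1803}\,/\,(C_{1802}\,P_{1802})},
\]

where E_t is nominal relief expenditure for the year ending March 25 of year t, C is the cost-of-living index, and P is population, so that the series equals 100 in 1803 by construction.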

Table 2
County-level Poor Relief Data, 1783-1831

| County | Per capita relief spending, 1783-85 (s.) | Per capita relief spending, 1802-03 (s.) | Per capita relief spending, 1812 (s.) | Per capita relief spending, 1831 (s.) | Percent of population relieved, 1802-03 | Percent of recipients over 60 or disabled, 1802-03 | Share of land in arable farming, c. 1836 (%) | Share of pop. employed in agriculture, 1821 (%) |
|---|---|---|---|---|---|---|---|---|
| North | | | | | | | | |
| Durham | 2.78 | 6.50 | 9.92 | 6.83 | 9.3 | 22.8 | 54.9 | 20.5 |
| Northumberland | 2.81 | 6.67 | 7.92 | 6.25 | 8.8 | 32.2 | 46.5 | 26.8 |
| Lancashire | 3.48 | 4.42 | 7.42 | 4.42 | 6.7 | 15.0 | 27.1 | 11.2 |
| West Riding | 2.91 | 6.50 | 9.92 | 5.58 | 9.3 | 18.1 | 30.0 | 19.6 |
| Midlands | | | | | | | | |
| Stafford | 4.30 | 6.92 | 8.50 | 6.50 | 9.1 | 17.2 | 44.8 | 26.6 |
| Nottingham | 3.42 | 6.33 | 10.83 | 6.50 | 6.8 | 17.3 | na | 35.4 |
| Warwick | 6.70 | 11.25 | 13.33 | 9.58 | 13.3 | 13.7 | 47.5 | 27.9 |
| Southeast | | | | | | | | |
| Oxford | 7.07 | 16.17 | 24.83 | 16.92 | 19.4 | 13.2 | 55.8 | 55.4 |
| Berkshire | 8.65 | 15.08 | 27.08 | 15.75 | 20.0 | 12.7 | 58.5 | 53.3 |
| Essex | 9.10 | 12.08 | 24.58 | 17.17 | 16.4 | 12.7 | 72.4 | 55.7 |
| Suffolk | 7.35 | 11.42 | 19.33 | 18.33 | 16.6 | 11.4 | 70.3 | 55.9 |
| Sussex | 11.52 | 22.58 | 33.08 | 19.33 | 22.6 | 8.7 | 43.8 | 50.3 |
| Southwest | | | | | | | | |
| Devon | 5.53 | 7.25 | 11.42 | 9.00 | 12.3 | 23.1 | 22.5 | 40.8 |
| Somerset | 5.24 | 8.92 | 12.25 | 8.83 | 12.0 | 20.8 | 24.4 | 42.8 |
| Cornwall | 3.62 | 5.83 | 9.42 | 6.67 | 6.6 | 31.0 | 23.8 | 37.7 |
| England & Wales | 4.06 | 8.92 | 12.75 | 10.08 | 11.4 | 16.0 | 48.0 | 33.0 |

References

Blaug, Mark. “The Myth of the Old Poor Law and the Making of the New.” Journal of Economic History 23 (1963): 151-84.

Blaug, Mark. “The Poor Law Report Re-examined.” Journal of Economic History 24 (1964): 229-45.

Boot, H. M. “Unemployment and Poor Law Relief in Manchester, 1845-50.” Social History 15 (1990): 217-28.

Booth, Charles. The Aged Poor in England and Wales. London: MacMillan, 1894.

Boyer, George R. “Poor Relief, Informal Assistance, and Short Time during the Lancashire Cotton Famine.” Explorations in Economic History 34 (1997): 56-76.

Boyer, George R. An Economic History of the English Poor Law, 1750-1850. Cambridge: Cambridge University Press, 1990.

Brundage, Anthony. The Making of the New Poor Law. New Brunswick, N.J.: Rutgers University Press, 1978.

Clark, Gregory. “Farm Wages and Living Standards in the Industrial Revolution: England, 1670-1869.” Economic History Review, 2nd series 54 (2001): 477-505.

Clark, Gregory and Anthony Clark. “Common Rights to Land in England, 1475-1839.” Journal of Economic History 61 (2001): 1009-36.

Digby, Anne. “The Labour Market and the Continuity of Social Policy after 1834: The Case of the Eastern Counties.” Economic History Review, 2nd series 28 (1975): 69-83.

Eastwood, David. Governing Rural England: Tradition and Transformation in Local Government, 1780-1840. Oxford: Clarendon Press, 1994.

Fraser, Derek, editor. The New Poor Law in the Nineteenth Century. London: Macmillan, 1976.

Hammond, J. L. and Barbara Hammond. The Village Labourer, 1760-1832. London: Longmans, Green, and Co., 1911.

Hampson, E. M. The Treatment of Poverty in Cambridgeshire, 1597-1834. Cambridge: Cambridge University Press, 1934.

Humphries, Jane. “Enclosures, Common Rights, and Women: The Proletarianization of Families in the Late Eighteenth and Early Nineteenth Centuries.” Journal of Economic History 50 (1990): 17-42.

King, Steven. Poverty and Welfare in England, 1700-1850: A Regional Perspective. Manchester: Manchester University Press, 2000.

Lees, Lynn Hollen. The Solidarities of Strangers: The English Poor Laws and the People, 1770-1948. Cambridge: Cambridge University Press, 1998.

Lindert, Peter H. “Poor Relief before the Welfare State: Britain versus the Continent, 1780-1880.” European Review of Economic History 2 (1998): 101-40.

MacKinnon, Mary. “English Poor Law Policy and the Crusade Against Outrelief.” Journal of Economic History 47 (1987): 603-25.

Marshall, J. D. The Old Poor Law, 1795-1834. 2nd edition. London: Macmillan, 1985.

Pinchbeck, Ivy. Women Workers and the Industrial Revolution, 1750-1850. London: Routledge, 1930.

Pound, John. Poverty and Vagrancy in Tudor England, 2nd edition. London: Longmans, 1986.

Rose, Michael E. “The New Poor Law in an Industrial Area.” In The Industrial Revolution, edited by R.M. Hartwell. Oxford: Oxford University Press, 1970.

Rose, Michael E. The English Poor Law, 1780-1930. Newton Abbot: David & Charles, 1971.

Shaw-Taylor, Leigh. “Parliamentary Enclosure and the Emergence of an English Agricultural Proletariat.” Journal of Economic History 61 (2001): 640-62.

Slack, Paul. Poverty and Policy in Tudor and Stuart England. London: Longmans, 1988.

Slack, Paul. The English Poor Law, 1531-1782. London: Macmillan, 1990.

Smith, Richard. “Charity, Self-interest and Welfare: Reflections from Demographic and Family History.” In Charity, Self-Interest and Welfare in the English Past, edited by Martin Daunton. New York: St. Martin’s, 1996.

Sokoll, Thomas. Household and Family among the Poor: The Case of Two Essex Communities in the Late Eighteenth and Early Nineteenth Centuries. Bochum: Universitätsverlag Brockmeyer, 1993.

Solar, Peter M. “Poor Relief and English Economic Development before the Industrial Revolution.” Economic History Review, 2nd series 48 (1995): 1-22.

Tawney, R. H. Religion and the Rise of Capitalism: A Historical Study. London: J. Murray, 1926.

Webb, Sidney and Beatrice Webb. English Poor Law History. Part I: The Old Poor Law. London: Longmans, 1927.

Williams, Karel. From Pauperism to Poverty. London: Routledge, 1981.

Citation: Boyer, George. “English Poor Laws”. EH.Net Encyclopedia, edited by Robert Whaples. May 7, 2002. URL http://eh.net/encyclopedia/english-poor-laws/