The Fordney-McCumber Tariff of 1922

Edward S. Kaplan, New York City College of Technology of CUNY

The Emergency Tariff Act of 1921: Prelude to Fordney-McCumber

Before discussing the passage of the Fordney-McCumber Tariff of 1922 and its effect on the economy of the 1920s, we should briefly mention the Emergency Tariff Act of 1921. This tariff was a stopgap measure, put in place until the Fordney-McCumber Tariff could be passed. The Republican Party wanted to quickly reverse the low rates of the Underwood-Simmons Tariff of the Wilson administration. Protectionism had never died, but remained dormant during World War I, and now its supporters could base their arguments on both economics and nationalism. They claimed that the economic prosperity which occurred during the war was due mostly to a lack of imports and to the abundance of exports. Now that the war had ended, imports would increase, threatening the current economic prosperity. Why, protectionists asked, should Americans suffer economic hardship, especially after sending their boys to fight in a war that the United States did not start – a war that was supposed to make the world a better place, but now seemed a mistake? Isolationism – keeping out of international affairs and worrying more about one's own country – was on the rise in the United States, as the Senate, in the last days of the Wilson administration, voted against joining the League of Nations. Isolationism, nationalism and the concern for continued prosperity made it easier for protectionists to press their arguments for a higher protective tariff. These trends led to the passage of the Emergency Tariff in 1921 and of the Fordney-McCumber Tariff a year later. The rates of these tariffs rivaled those of the protectionist Payne-Aldrich Tariff of 1909, and were considerably higher than those of the Underwood-Simmons Tariff passed in 1913.

In January 1921, Joseph W. Fordney of Michigan, the Republican chairman of the House Ways and Means Committee, guided the Emergency Tariff bill through the House of Representatives. Many Democrats were against protective tariffs, favoring them only as a source of revenue. They opposed the bill, asserting that it would not raise enough revenue to justify itself and that it would bring retaliation, closing most world markets to American goods. Many Republicans cared little about raising revenue. They claimed that the bill would help end the current recession by providing protection for American workers. In February 1921, Fordney blamed the recession on the insufficient duty rates of the Underwood-Simmons Tariff and urged raising the rates to offset the high cost of production in the United States. It is worth noting that most modern economists believe that lower tariff rates, not higher ones, contribute to economic prosperity.

The Emergency Tariff bill easily passed through the Senate Finance Committee and was sent to the Senate floor on January 17, 1921, where it took only a month to gain final approval. Porter McCumber, Republican senator from North Dakota, guided the bill through the Senate. He declared that the House version of the bill was not protective enough, especially in the area of wheat, the most important crop in his state. Wheat had been taxed at 25 cents a bushel in the Payne-Aldrich Bill of 1909, but was put on the free list (not taxed) in the Underwood-Simmons Bill of 1913. The House version of the Emergency Tariff imposed a 35 cent tax on wheat, but McCumber wanted a 50 cent tax, in order to help the poor wheat farmers of his state deal with the large quantities of wheat imported from Canada. In late February, a Conference Committee made up of five members each from the House and Senate agreed upon a compromise tariff bill, which was passed by both houses of Congress, only to be vetoed by Woodrow Wilson just before he left office. (Note that at that time the newly elected president did not take office until March 4th; since 1937, newly elected presidents have taken office on January 20th.) The attempt to override the veto failed, but shortly after Warren Harding took office, the bill was passed again and signed by the new president. It was intended to last only six months, by which time the Fordney-McCumber Tariff was expected to be passed. In the event, it took an additional ten months for this to happen.

The Fordney-McCumber Tariff

The Fordney Bill in the House of Representatives

On June 28, 1921, the House Ways and Means Committee sent the Fordney bill to the full House for action. One of the most controversial parts of the House bill was the provision for an American Valuation Plan, supported by Fordney, to determine ad valorem rates – an ad valorem rate is set as a percent of the product's value, rather than as a specific dollar amount. An invoice of imports would have to contain a statement by the exporter giving the cost of producing the imported article, together with a statement of its actual money value in the country of origin. At the Customs House in the United States, this foreign value would be compared with the article's value in American dollars, and the tariff charged would be determined accordingly. The idea was to apply ad valorem rates to the American valuation of the product, which was generally higher than its foreign valuation.
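
To make the plan's arithmetic concrete (a hypothetical illustration with invented numbers, not an example from the source): suppose a good is invoiced at $100 in its country of origin but is valued at $150 in the American market, and the ad valorem rate is 20 percent. Then

\[
\text{duty on foreign valuation} = 0.20 \times \$100 = \$20,
\qquad
\text{duty on American valuation} = 0.20 \times \$150 = \$30,
\]

so the same nominal rate yields a substantially higher duty when applied to the higher American valuation.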

The Democrats wasted little time in attacking the Fordney Bill when it reached the House for consideration. John Nance Garner of Texas, the Democratic leader in the House, who in 1932 became Franklin D. Roosevelt’s running mate, seized a straw hat and challenged any Republican to state a duty on it. He declared that the duty on the straw hat was 50 percent ad valorem in the Payne-Aldrich Bill, but in the new Fordney Bill being considered, it was $10 a dozen, plus an ad valorem rate of 20 percent, which made the import tax 61 percent. When the Republicans reminded Garner that he had just voted for the Emergency Tariff Bill, which had the same high rate, he admitted to making a mistake, one that he would correct now. Garner also opposed the American Valuation Plan, predicting that it, together with the rest of the tariff, would raise the cost of living in the United States substantially.
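
Garner's 61 percent figure can be reconstructed from the numbers he cited (the hats' valuation below is inferred from his figures, not stated in the source). If a specific duty of $10 a dozen plus a 20 percent ad valorem rate together equal 61 percent of value, the specific component must account for the remaining 41 percentage points:

\[
\text{value per dozen} = \frac{\$10}{0.61 - 0.20} = \frac{\$10}{0.41} \approx \$24.40,
\]

that is, the straw hats were valued at roughly $24 a dozen (about $2 apiece), and the $10 specific duty alone amounted to about 41 percent of their value.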

Fordney defended the bill, declaring that tariff reform was long overdue; he assured his fellow House members that the new bill would protect the American farmer from cheap imports, as well as provide more jobs for American labor. He even contended that the new tariff would help servicemen returning from the war to find employment, warning that failure to pass the legislation would seriously endanger the economy. On July 21, 1921, the Fordney tariff bill passed the House by a margin of 289 to 127. Only seven Republicans voted against the measure, while seven Democrats voted for it. In its final form, the Fordney bill kept hides, lumber, oil, cotton, and asphalt on the free list, ended the embargo on dyestuffs, and endorsed the American valuation system.

The Senate Version of the Fordney Bill

Serious discussion of the tariff bill in the Senate Finance Committee began on January 10, 1922. McCumber, the new chairman, held open hearings, and it was not until April 11 that the bill finally went to the Senate floor for consideration. It contained over 2,000 amendments to the Fordney version. It did not include the American Valuation Plan, which had been voted down in the Senate Finance Committee by 7 to 3. However, in order to appease Fordney, the Finance Committee gave the president the authority to modify tariff rates. He could change the basis for assessing ad valorem duties on selected items from the foreign value to the American value if he found a major discrepancy between the two.

When the bill reached the Senate floor, it had the support of the Farm Bloc and its spokesman, Senator Edwin Ladd of North Dakota. It is interesting to note that the Farm Bloc had supported the low tariff rates of the Underwood-Simmons bill in 1913. In fact, President Wilson and Senator Furnifold Simmons of North Carolina, who helped write the 1913 tariff, warned farmers in 1919 that protective tariffs would hurt rather than help them. Wilson declared that the farmer needed a better system of marketing and credit, and larger foreign markets for his surplus. Simmons added that high tariff rates on agricultural goods would lead to retaliation and reduce the farmers' exports. However, by 1922, farmers had become desperate to find a way to stop the precipitous decline in the prices of their goods. They were led to believe by Senator Ladd that protection would be their salvation.

The New York Times condemned the tariff in an editorial on May 2. The newspaper opposed protectionism in general, but especially criticized the duty on hides. It cited an example from the United States Tariff Commission, which declared that every cent per pound of duty on hides raised the price of a pair of shoes by ten cents, and raised the prices of all leather goods as well, strengthening the packers' power over prices and output. (Packers, engaged in the wholesale trade of food and nonfood products, used meat by-products such as hides to make leather goods.)

The debate in the Senate dragged on throughout the spring and summer of 1922, with the Democrats doing all they could to delay and defeat the bill. Finally on August 19, 1922, the Senate voted 48 to 25 in favor of the Senate tariff bill. The only Republican to vote against the bill was William Borah of Idaho, while only three Democrats voted for it, John Kendrick of Wyoming, and Joseph Ransdell and Edwin Broussard of Louisiana. Four days later, a Conference Committee was formed to reconcile the differences between the Fordney and McCumber versions of the tariff. This committee remained deadlocked on the issue of the American Valuation Plan until a breakthrough occurred on September 9th.

The Conference Committee Agreement

The House and Senate conferees agreed on the higher rates of the Senate bill for most items in the tariff. Fordney did not get his American Valuation Plan, as Republican Senator Reed Smoot of Utah vehemently opposed it, along with McCumber. They urged Fordney to accept the compromise in the Senate bill, which created a new Tariff Commission to advise the president on the course of tariff rates. The president had the authority to raise or lower rates by up to fifty percent, if necessary. The dyestuff embargo of the Emergency Tariff, dropped in the Fordney bill, was likewise absent from the Senate version of the McCumber bill. On September 21, 1922, President Harding signed the Fordney-McCumber Tariff into law and called it one of the greatest tariff bills ever created by Congress. He also assured the American people that the new tariff would contribute to the growing prosperity of the United States for years to come.

Comparing Fordney-McCumber, Payne-Aldrich, and Underwood-Simmons

In evaluating the Fordney-McCumber Tariff, let's compare the average duties on all imports, and then the averages on dutiable imports, under the Payne-Aldrich, Underwood-Simmons and Fordney-McCumber bills.

Payne-Aldrich (1909)
Average duty on all imports was 19.3 percent
Average on dutiable imports was 40.8 percent
Underwood-Simmons (1913)
Average duty on all imports was 9.1 percent
Average on dutiable imports was 27 percent
Fordney-McCumber (1922)
Average duty on all imports was 14 percent
Average on dutiable imports was 38.5 percent

Source: Three Dimensions of U. S. Trade Policy, Chapter 2, p. 3. http://www.washingtontradereports.com/Analyses/Chapter2.pdf
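
The two averages are connected by the fraction of imports that actually paid duty: the average duty on all imports equals the average duty on dutiable imports multiplied by the dutiable share of total imports. Applying this identity to the figures above (the shares below are implied by the averages, not reported in the source),

\[
\text{dutiable share under Fordney-McCumber} \approx \frac{14}{38.5} \approx 0.36,
\]

suggesting that roughly 36 percent of import value entered under dutiable categories, with the rest on the free list; the corresponding shares are about 47 percent for Payne-Aldrich (19.3/40.8) and 34 percent for Underwood-Simmons (9.1/27).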

Though the Fordney-McCumber Tariff had higher average rates than the Underwood-Simmons Tariff, they were still lower than those of the Payne-Aldrich Tariff. However, for several goods, including raw sugar, metals and some agricultural products, the Fordney-McCumber rates were the highest of the three. The increase in the rates on raw sugar was one of the most controversial in the bill and demonstrated the power of the sugar cane lobby in Louisiana and the sugar beet lobby in California, Michigan, and Colorado. The rates on ores such as tungsten, ferrotungsten, and manganese were the highest ever, the rationale being that they were necessary for both the national defense and the economic welfare of the country. Listed below is a comparison of some of the important rate changes in the Payne, Underwood and Fordney tariffs.

Item            Payne-Aldrich        Underwood-Simmons    Fordney-McCumber
Raw sugar       1.68 cents a pound   1.25 cents a pound   2.20 cents a pound
Tungsten        20% ad valorem       Free                 45 cents a pound
Ferrotungsten   25% ad valorem       15% ad valorem       60 cents a pound
Manganese       Free                 Free                 1 cent a pound
Poultry         3 cents a pound      Free                 3 cents a pound
Eggs            5 cents a dozen      Free                 8 cents a dozen
Corn            15 cents a bushel    Free                 15 cents a bushel
Oats            15 cents a bushel    6 cents a bushel     15 cents a bushel
Rye             10 cents a bushel    Free                 15 cents a bushel
Olives          15 cents a gallon    15 cents a gallon    20 cents a gallon
Wheat           25 cents a bushel    Free                 30 cents a bushel
Apples          25 cents a bushel    10 cents a bushel    25 cents a bushel
Apricots        Free                 Free                 0.5 cent a pound
Lemons          1.5 cents a pound    0.5 cent a pound     2 cents a pound
Potatoes        25 cents a bushel    Free                 50 cents per 100 pounds
Peanuts         1 cent a pound       1 cent a pound       4 cents a pound
Butter          6 cents a pound      2.5 cents a pound    8 cents a pound

Source: New York Times, September 13, 1922, p. 12.

The Fordney-McCumber Tariff and American Agriculture

The recession of 1920-1921 marked the end of a burst of prosperity for the American farmer, as Europe had recovered from the ravages of war and no longer required large quantities of American agricultural products. The surplus of farm goods could no longer be absorbed in the national market and agricultural prices dropped rapidly in the United States. Gross agricultural income fell from $17.7 billion in 1919 to $10.5 billion in 1921. From June to July 1920 the index of farm prices declined by ten points and by August another thirty points. The number of farm foreclosures per thousand told a tragic story. From 1913 to 1920, it averaged only 3.2 per thousand farms, increasing to 10.7 per thousand from 1921 to 1925, and 17 per thousand from 1926 to 1930. The tariff became a major issue in 1920s America as prices of wheat, corn, meats and cotton declined to one-third of their wartime values.

Farmers' problems were exacerbated by the recession of 1920-1921 and, in the longer term, were tied to increased productivity and production in the face of slowly growing domestic demand. Still, many farmers believed that the Fordney-McCumber Tariff would help them by keeping out agricultural goods from abroad and raising farm prices. As mentioned above, Senator Edwin Ladd of North Dakota, a leader of the Farm Bloc, a bipartisan group of House and Senate members, urged quick passage of the Fordney bill.

However, not all farm groups were in agreement. The American Farm Bureau Federation, founded in 1919, opposed the Fordney-McCumber bill, claiming that it raised prices for all consumers, and specifically pointed to the tariff on raw wool, which cost the public millions of dollars a year. Senator David Walsh of Massachusetts challenged the supporters of the Fordney-McCumber Tariff. He declared that the farmer did not need tariff protection and that tariffs would not raise farm prices, as the farmer was now a net exporter and depended on foreign markets to sell his goods.

In September 1926, farm groups released economic statistics arguing that protection had failed to resolve the agricultural depression. The figures blamed the Fordney-McCumber Tariff for increases in the costs of farm equipment. For example, the average harness set that sold for $46 in 1918 sold for $75 in 1926. Over the same period the fourteen-inch plow doubled in cost from $14 to $28, mowing machines went from $45 to $95, and farm wagons increased in price from $85 to $150. Meanwhile, the purchasing power of the farmer's dollar decreased from $1.12 in 1918 to 60.3 cents in 1926. Though it is arguable how much the Fordney-McCumber Tariff hurt the farmer, it did not raise farm prices, as its proponents had said it would.

The Fordney-McCumber Tariff and the Debt Payments

The United States emerged from World War I as a creditor nation. In 1920 the foreign trade of the United States was larger than at any other time in its history: the value of exports stood at $8.25 billion and imports at $5.75 billion. During the war, the United States had loaned the European nations $7 billion, with another $3.3 billion in loans after the war for relief and rehabilitation, and expected that these loans would be repaid as soon as possible. However, Europe could not meet its debt obligations, as it had suffered a gold drain from 1914 to 1917, when it shipped gold to America to pay for goods. Germany, blamed for starting the war, owed $33 billion in reparations, which it stopped paying in 1923 because of its weak and inflationary economy.

In early 1924, the Coolidge Administration called for an economic conference, which led to the implementation of the Dawes Plan, named after Charles Dawes, a prominent Chicago banker and later vice president under Coolidge. It called for an international loan of $200 million in gold to Germany, most of it coming from the United States, and the reorganization of the Reichsbank. The idea was to help revive the economy of Germany so that it could continue paying reparations to the allies, and they in turn would make loan payments to the United States.

The concern of the United States with helping Germany while demanding repayment of its war debts made little sense in view of the Fordney-McCumber Tariff. There was never a need for the bizarre Dawes Plan, and debt repayment would have been easier if the tariff had been reduced. Sir Josiah Stamp, a British financial expert and a member of the commission that wrote the Dawes Plan, contended that debt payments could not be made unless the Fordney-McCumber Tariff was reduced to enable the European nations to sell their goods in the United States.

By 1924, bankers, especially those who had made loans to Europe during the war, were critics of the tariff. Dr. Benjamin Anderson, Jr., an economist at Chase National Bank in New York, declared that the United States should reduce the tariff to attract more goods into the country and relieve the gold pressure in Europe. Anderson believed that a lower tariff would allow the payment of debts in goods rather than in gold.

In 1926, Senator Joseph Robinson, Democrat of Arkansas, spoke to the National Council of Importers and Traders, a group opposed to the tariff. He called the Fordney-McCumber Tariff "a dismal failure and disappointment," one that made it difficult for importers to make a living.

International Retaliation and the Fordney-McCumber Tariff

With the passage of the Fordney-McCumber Tariff in 1922, the United States had, for a creditor country, one of the highest tariff rates in the world. After failing to convince the United States to lower its tariff duties, European and Latin American countries decided to retaliate and raise their duties. Between 1925 and 1929, there were thirty-three general revisions with substantial tariff changes in twenty-six European nations, and seventeen revisions and changes in Latin America. In 1927 and 1928, Australia, Canada, and New Zealand all raised their tariff rates in response to the Fordney-McCumber Tariff.

Canada particularly suffered from the provisions of the tariff. C. E. Burton, general manager of the Robert Simpson Company of Toronto, contended that the tariff forced his and other Canadian companies to close their New York offices. He claimed that Canada, the best customer of the United States, was treated unfairly by the tariff law.

Both the French and Spanish governments responded to the tariff by raising their rates. In April 1927, the French raised import duties in general, but specifically targeted the American automobile companies by increasing duties from 45 percent of their value to 100 percent. The French were willing to forgo these increases if the United States would allow them concessions on exports such as silks, perfumes, and handmade lace. In May 1927, the Spanish government announced a 40 percent increase in duties on American exports to Spain.

The League of Nations, concerned about the tariff wars, organized a World Economic Conference in Geneva, Switzerland in 1927 to negotiate a tariff truce. Though the United States had a representative present (it did not belong to the League), no agreement could be reached. It is worth noting that the United States' position was that any agreement reached by the League should apply to European tariffs and not to Fordney-McCumber.

Shortly after the failure of the World Economic Conference, Germany and Italy imposed high import duties on American wheat. After 1925, it appeared that European nations began using the same arguments as the United States to rationalize their higher tariff rates – the domestic producer has the right to protect his market and the worker the right to job protection and higher wages.

In conclusion, the nationalism and isolationism that came out of World War I led to a return of protectionism, with the passage of the Emergency Tariff in 1921 and the Fordney-McCumber Tariff in 1922. This in turn hurt both the domestic and international economies. Ironically, President Herbert Hoover stayed the course, signing an even more protectionist tariff bill, the Smoot-Hawley Tariff of 1930. In the aftermath of the Great Depression and the collapse in world trade, the U.S. moved back toward free trade in 1933, when Democratic President Franklin D. Roosevelt and his Secretary of State Cordell Hull worked to end protectionism through a series of bilateral, and later multilateral, agreements with foreign countries.

Further Reading

Books

Kaplan, Edward S. American Trade Policy, 1923-1995. Westport, CT: Greenwood Press, 1996.

Kaplan, Edward S., and Thomas W. Ryley. Prelude to Trade Wars: American Tariff Policy, 1890-1922. Westport, CT: Greenwood Press, 1994.

Taussig, F. W. The Tariff History of the United States. Eighth edition. New York: G. P. Putnam’s Sons, 1931.

Vousden, Neil. The Economics of Trade Protection. New York: Cambridge University Press, 1990.

Articles

Baldwin, Robert E. “The Political Economy of Trade Policy.” Journal of Economic Perspectives 3 (1989): 119-136.

Berglund, Abraham. “The Tariff Act of 1922.” American Economic Review 13 (1923): 14-32.

Link, Arthur S. “What Happened to the Progressive Movement in the 1920s?” American Historical Review 64 (1959): 851-883.

“The Tariff and the Farmer.” Literary Digest (July 19, 1930): 8-9.

Taussig, F. W. “The Tariff Act of 1922.” Quarterly Journal of Economics 37 (1922): 1-28.

Citation: Kaplan, Edward. "The Fordney-McCumber Tariff of 1922". EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/the-fordney-mccumber-tariff-of-1922/

Fire Insurance in the United States

Dalit Baranoff

Fire Insurance before 1810

Marine Insurance

The first American insurers modeled themselves after British marine and fire insurers, who were already well-established by the eighteenth century. In eighteenth-century Britain, individual merchants wrote most marine insurance contracts. Shippers and ship owners were able to acquire insurance through an informal exchange centering on London’s coffeehouses. Edward Lloyd’s Coffee-house, the predecessor of Lloyd’s of London, came to dominate the individual underwriting business by the middle of the eighteenth century.

Similar insurance offices where local merchants could underwrite individual voyages began to appear in a number of American port cities in the 1720s. The trade centered on Philadelphia, where at least fifteen different brokerages helped place insurance in the hands of some 150 private underwriters over the course of the eighteenth century. But only a limited amount of coverage was available. American shippers also could acquire insurance through the agents of Lloyds and other British insurers, but often had to wait months for payments of losses.

Mutual Fire Insurance

When fire insurance first appeared in Britain after the Great London Fire of 1666, mutual societies, in which each policyholder owned a share of the risk, predominated. The earliest American fire insurers followed this model as well. Established in the few urban centers where capital was concentrated, American mutuals were not considered money-making ventures, but rather were outgrowths of volunteer firefighting organizations. In 1735 Charleston residents formed the first American mutual insurance company, the Friendly Society of Mutual Insuring of Homes against Fire. It only lasted until 1741, when a major fire put it out of business.

Benjamin Franklin was the organizing force behind the next, more successful, mutual insurance venture, the Philadelphia Contributionship for the Insurance of Houses from Loss by Fire,[1] known familiarly by the name of its symbol, the "Hand in Hand." By the 1780s, growing demand had led to the formation of other fire mutuals in Philadelphia, New York, Baltimore, Norwich (CT), Charleston, Richmond, Boston, Providence, and elsewhere. (See Table 1.)

Joint-Stock Companies

Joint-stock insurance companies, which raise capital through the sale of shares and distribute dividends, rose to prominence in American fire and marine insurance after the War of Independence. While only a few British insurers were granted the royal charters that allowed them to sell stock and to claim limited liability, insurers in the young United States found it relatively easy to obtain charters from state legislatures eager to promote a domestic insurance industry.

Joint-stock companies first appeared in the marine sector, where demand and the potential for profit were greater. Because they did not rely on the fortunes of any one individual, joint-stock companies provided greater security than private underwriting. In addition to their premium income, joint-stock companies maintained a fixed capital, allowing them to cover larger amounts than mutuals could.

The first successful joint-stock company, the Insurance Company of North America, was formed in 1792 in Philadelphia to sell marine, fire, and life insurance. By 1810, more than seventy such companies had been chartered in the United States. Most of the firms incorporated before 1810 operated primarily in marine insurance, although they were often chartered to handle other lines. (See Table 1.)

Table 1: American Insurance Companies, 1735-1810

Connecticut
1794 Norwich Mutual Fire Insurance Co. (Norwich)
1796 New Haven Insurance Co.
1797 New Haven Insurance Co. (Marine)
1801 Mutual Assurance Co. (New Haven)
1803 Hartford Insurance Co. (M)
1803 Middletown Insurance Co. (Middletown) (M)
1803 Norwich Marine Insurance Co.
1805 Union Insurance Co. (New London) (M)
1810 Hartford Fire Insurance Co.
Maryland
1787 Baltimore Fire Insurance Co. (Baltimore)
1791 Maryland I. Insurance Co. (Baltimore)
1794 Baltimore Equitable Society (Baltimore)
1795 Baltimore Fire Insurance Co. (Baltimore)
1795 Maryland Insurance Co. (Baltimore)
1796 Charitable Marine Society (Baltimore)
1798 Georgetown Mutual Insurance Co. (Georgetown)
1804 Chesapeake Insurance Co. (Baltimore)
1804 Marine Insurance Co. (Baltimore)
1804 Union Insurance Co. of MD (Baltimore)
Massachusetts
1795 Massachusetts Fire and Marine Insurance Co. (Boston)
1798 Massachusetts Mutual Ins. Co. (Boston)
1799 Boston Marine Insurance Co. (Boston)
1799 Newburyport Marine Insurance Co. (Newburyport)
1800 Maine Fire and Marine Ins. Co. (Portland)
1800 Salem Marine Insurance Co. (Salem)
1803 New England Marine Insurance Co. (Boston)
1803 Suffolk Insurance Co. (Boston)
1803 Cumberland Marine and Fire Insurance Co. (Portland, ME)
1803 Essex Fire and Marine Insurance Co. (Salem)
1803 Gloucester Marine Ins. Co. (Gloucester)
1803 Lincoln and Kennebeck Marine Ins. Co. (Wiscasset)
1803 Merrimac Marine and Fire Ins. Co. (Newburyport)
1803 Marblehead Marine Insurance Co. (Marblehead)
1803 Nantucket Marine Insurance Co. (Nantucket)
1803 Portland Marine and Fire Insurance Co. (Portland)
1804 North American Insurance Co. (Boston)
1804 Union Insurance Co. (Boston)
1804 Hampshire Mutual Fire Insurance Co. (Northampton)
1804 Kennebunk Marine Ins. Co. (Wells)
1804 Nantucket Union Marine Insurance Co. (Nantucket)
1804 Plymouth Marine Insurance Co. (Plymouth)
1804 Union Marine Insurance Co. (Salem)
1805 Bedford Marine Insurance Co. (New Bedford)
1806 Newburyport Marine Insurance Co. (Newburyport)
1807 Bath Fire and Marine Insurance Co. (Bath)
1807 Middlesex Insurance Co. (Charlestown)
1807 Union Marine and Fire Insurance Co. (Newburyport)
1808 Kennebeck Marine Ins. Co. (Bath)
1809 Beverly Marine Insurance Co. (Beverly)
1809 Marblehead Social (Marblehead)
1809 Social Insurance Co. (Salem)
Pennsylvania
1752 Philadelphia Contributionship for the Insurance of Houses from Loss by Fire
1784 Mutual Assurance Co. (Philadelphia)
1794 Insurance Co. of North America (Philadelphia)
1794 Insurance Co. of the State of Pennsylvania (Philadelphia)
1803 Phoenix Insurance Co. (Philadelphia)
1803 Philadelphia Insurance Co. (Philadelphia)
1804 Delaware Insurance Co. (Philadelphia)
1804 Union Insurance Co. (Chester County)
1807 Lancaster and Susquehanna Insurance Co.
1809 Marine and Fire Insurance Co. (Philadelphia)
1810 United States Insurance Co. (Philadelphia)
1810 American Fire Insurance Co. (Philadelphia)
Delaware
1810 Farmers’ Bank of the State of Delaware (Dover)
Rhode Island
1799 Providence Insurance Co.
1800 Washington Insurance Co.
1800 Providence Mutual Fire Insurance Co.
South Carolina
1735 Friendly Society (Charleston) – royal charter
1797 Charleston Insurance Co. (Charleston)
1797 Charleston Mutual Insurance Co. (Charleston)
1805 South Carolina Insurance Co. (Charleston)
1807 Union Insurance Co. (Charleston)
New Hampshire
1799 New Hampshire Insurance Co. (Portsmouth)
New York City
1787 Knickerbocker Fire Insurance Co. (originally Mutual Insurance Co. of the City of New York)
1796 New York Insurance Co.
1796 Insurance Co. of New York
1797 Associated Underwriters
1798 Mutual Assurance Co.
1800 Columbian Insurance Co.
1802 Washington Mutual Assurance Co.
1802 Marine Insurance Co.
1804 Commercial Insurance Co.
1804 Eagle Fire Insurance Co.
1807 Phoenix Insurance Co.
1809 Mutual Insurance Co.
1810 Fireman’s Insurance Co.
1810 Ocean Insurance Co.
North Carolina
1803 Mutual Insurance Co. (Raleigh)
Virginia
1794 Mutual Assurance Society (Richmond)

The Embargo Act (1807-1809) and the War of 1812 (1812-1814) interrupted shipping, drying up marine insurers’ premiums and forcing them to look for other sources of revenue. These same events also stimulated the development of domestic industries, such as textiles, which created new demand for fire insurance. Together, these events led many marine insurers into the fire field, previously a sideline for most. After 1810, new joint-stock companies appeared whose business centered on fire insurance from the outset. Unlike mutuals, these new fire underwriters insured contents as well as real estate, a growing necessity as Americans’ personal wealth began to expand.

1810-1870

Geographic Diversification

Until the late 1830s, most fire insurers concentrated on their local markets, with only a few experimenting with representation through agents in distant cities. Many state legislatures discouraged "foreign" competition by taxing the premiums of out-of-state insurers. This situation prevailed until 1835, when fire insurers learned a lesson they were not to forget. A devastating fire destroyed New York City's business district, causing between $15 million and $26 million in damage and bankrupting 23 of the 26 local fire insurance companies. From that point on, fire insurers regarded the geographic diversification of risks as imperative.

Insurers sought to enter new markets in order to reduce their exposure to large-scale conflagrations. They gradually discovered that contracting with agents allowed them to expand broadly, rapidly, and at relatively low cost. Pioneered mainly by companies based in Hartford and Philadelphia, the agency system did not become truly widespread until the 1850s. Once the system began to emerge in earnest, it rapidly took off. By 1855, for example, New York State had authorized 38 out-of-state companies to sell insurance there. Most were fewer than five years old. By 1860, national companies relying on networks of local agents had replaced purely local operations as the mainstay of the industry.

Competition

As the agency system grew, so too did competition. By the 1860s, national fire insurance firms competed in hundreds of local markets simultaneously. Low capitalization requirements and the widespread adoption of general incorporation laws provided for easy entry into the field.

Competition forced insurers to base their premiums on short-term costs. As a result, fire insurance rates were inadequate to cover the long-term costs associated with the city-wide conflagrations that might occur unpredictably once or twice in a generation. When another large fire occurred, many consumers would be left with worthless policies.

Aware of this danger, insurers struggled to raise rates through cooperation. Their most notable effort was the National Board of Fire Underwriters. Formed in 1866 with 75 member companies, it established local boards throughout the country to set uniform rates. But by 1870, renewed competition led the members of the National Board to give up the attempt.

Regulation

Insurance regulation developed during this period to protect consumers from the threat of insurance company insolvency. Beginning with New York (1849) and Massachusetts (1852), a number of states began to codify their insurance laws. Following New York’s lead in 1851, some states adopted $100,000-minimum capitalization requirements. But these rules did little to protect consumers when a large fire resulted in losses in excess of that amount.

By 1860 four states had established insurance departments. Two decades later, insurance departments, headed by a commissioner or superintendent, existed in some 25 states. In states without formal departments, the state treasurer, comptroller, or secretary of state typically oversaw insurance regulation.

State Insurance Departments through 1910
(Departments headed by insurance commissioner or superintendent unless otherwise indicated)

Source: Harry C. Brearley, Fifty Years of a Civilizing Force (1916), 261-274.
Year listed is year department began operating, not year legislation creating it was passed.

1852
  • New Hampshire
  • Vermont (state treasurer served as insurance commissioner)
1855
  • Massachusetts (annual returns required since 1837)
1860
  • New York (comptroller first authorized to prepare reports in 1853, first annual report 1855)
1862
  • Rhode Island
1865
  • Indiana (1852-1865, state auditor headed)
  • Connecticut
1867
  • West Virginia (state auditor supervised 1865 until 1907, when reorganized)
1868
  • California
  • Maine
1869
  • Missouri
1870
  • Kentucky (part of bureau of state auditor's department)
1871
  • Kansas
  • Michigan
1872
  • Florida
  • Ohio (1867-72, state auditor supervised)
  • Maryland
  • Minnesota
1873
  • Arkansas
  • Nebraska
  • Pennsylvania
  • Tennessee (state treasurer acted as insurance commissioner)
1876
  • Texas
1878
  • Wisconsin (1867-78, secretary of state supervised insurance)
1879
  • Delaware
1881
  • Nevada (1864-1881, state comptroller supervised insurance)
1883
  • Colorado
1887
  • Georgia (1869-1887, insurance supervised by state comptroller general)
1889
  • North Dakota
  • Washington (secretary of state acted as insurance commissioner until 1908)
1890
  • Oklahoma (secretary of territory headed through 1907)
1891
  • New Jersey (1875-1891, secretary of state supervised insurance)
1893
  • Illinois (auditor of public accounts supervised insurance 1869-1893)
1896
  • Utah (1884-1896, supervised by territorial secretary. Supervised by secretary of state until department reorganized in 1909)
1897
  • Alabama (1860-1897, insurance supervised by state auditor)
  • Wyoming (territorial auditor supervised insurance 1868-1896) (1877)
  • South Dakota (1889-1897, state auditor supervised)
1898
  • Louisiana (secretary of state acted as superintendent)
1900
  • Alaska (administered by survey-general of territory)
1901
  • Arizona (1887-1901 supervised by territorial treasurer)
  • Idaho (1891-1901, state treasurer headed)
1902
  • Mississippi (1857-1902, auditor of public accounts supervised insurance)
  • District of Columbia
1905
  • New Mexico (1882-1904, territorial auditor supervised)
1906
  • Virginia (from 1866 auditor of public accounts supervised)
1908
  • South Carolina (1876-1908, comptroller general supervised insurance)
1909
  • Montana (supervised by territorial/state auditor 1883-1909)

The Supreme Court affirmed state supervision of insurance in 1868 in Paul v. Virginia, which found insurance not to be interstate commerce. As a result, it would not be subject to any federal regulations over the coming decades.

1871-1906

Chicago and Boston Fires

The Great Chicago Fire of October 9 and 10, 1871 destroyed over 2,000 acres (nearly 3½ square miles) of the city. With close to 18,000 buildings burned, including 1,500 “substantial business structures,” 100,000 people were left homeless and thousands jobless. Insurance losses totaled between $90 and $100 million. Many firms’ losses exceeded their available assets.

About 200 fire insurance companies did business in Chicago at the time. The fire bankrupted 68 of them. At least one-half of the property in the burnt district was covered by insurance, but as a result of the insurance company failures, Chicago policyholders recovered only about 40 percent of what they were owed.

A year later, on November 9 and 10, 1872, a fire destroyed Boston’s entire mercantile district, an area of 40 acres. Insured losses in this case totaled more than $50 million, bankrupting an additional 32 companies. The rate of insurance coverage was higher in Boston, where commercial property, everywhere more likely to be insured, happened to bear the brunt of the fire. Some 75 percent of ruined buildings and their contents were insured against fire. In this case, policyholders recovered about 70 percent of their insured losses.

Local Boards

After the Chicago and Boston fires revealed the inadequacy of insurance rates, surviving insurers again tried to set rates collectively. By 1875, a revitalized National Board had organized over 1,000 local boards, placing them under the supervision of district organizations. State auxiliary boards oversaw the districts, and the National Board itself was the final arbiter of rates. But this top-down structure encountered resistance from the local agents, long accustomed to setting their own rates. In the midst of the economic downturn that followed the Panic of 1873, the National Board’s efforts again collapsed.

In 1877, the membership took a fresh approach. They voted to dismantle the centralized rating bureaucracy, instead leaving rate-setting to local boards composed of agents. The National Board now focused its attention on promoting fire prevention and collecting statistics. By the mid-1880s, local rate-setting cartels operated in cities throughout the U.S. Regional boards or private companies rated smaller communities outside the jurisdiction of a local board.

The success of the new breed of local rate-setting cartels owed much to the ever-expanding scale of commerce and property, which fostered a system of mutual dependence between the local agents. Although individual agents typically represented multiple companies, they had come routinely to split risks amongst themselves and the several firms they served. Responding to the imperative of diversification, companies rarely covered more than $10,000 on an individual property, or even within one block of a city.

As property values rose, it was not unusual to see single commercial buildings insured by 20 or more firms, each underwriting a $1,000 or $2,000 chunk of a given risk. Insurers who shared their business had few incentives to compete on price. Undercutting other insurers might even cost them future business. When a sufficiently large group of agents joined forces to set minimum prices, they effectively could shut out any agents who refused to follow the tariff.

Cooperative price-setting by local boards allowed insurers to maintain higher rates, taking periodic conflagrations into account as long-term costs. Cooperation also resulted, for the first time, in rates that followed a stable pattern, where aggregate prices reflected aggregate costs, the so-called underwriting cycle.

(Note: The underwriting cycle can be illustrated using combined ratios, which are the ratio of losses and expenses to premium income in any given year. Because combined ratios include dividend payments but not investment income, they are often greater than 100.)
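
Stated as a formula (a standard formulation of the definition in the note, not given explicitly in the source):

\[
\text{combined ratio} = \frac{\text{losses} + \text{expenses}}{\text{premium income}} \times 100,
\]

so a value above 100 means that underwriting alone lost money that year; insurers could still be profitable overall once investment income is added.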

Local boards helped fire insurance companies diversify their risks and stabilize their rates. The companies in turn, supported the local boards. As a result, the local rate-setting boards that formed during the early 1880s proved remarkably durable and successful. Despite brief disruptions in some cities during the severe economic downturn of the mid-1890s, the local boards did not fail.

As an additional benefit, insurers were able to accomplish collectively what they could not afford to do individually: collect and analyze data on a large scale. The “science” of fire insurance remained in its infancy. The local boards inspected property and created detailed rating charts. Some even instituted scheduled rating – a system where property owners were penalized for defects, such as lack of fire doors, and rewarded for improvements. Previously, agents had set rates based on their personal, idiosyncratic knowledge of local conditions. Within the local boards, agents shared both their subjective personal knowledge and objective data. The results were a crude approximation of an actuarial science.

Anti-Compact Laws

Price-setting by local boards was not viewed favorably by many policy-holders who had to pay higher prices for insurance. Since Paul v. Virginia had exempted insurance from federal antitrust laws, consumers encouraged their state legislatures to pass laws outlawing price collusion among insurers. Ohio adopted the first anti-compact law in 1885, followed by Michigan (1887), Arkansas, Nebraska, Texas, and Kansas (1889), Maine, New Hampshire, and Georgia (1891). By 1906, 19 states had anti-compact laws, but they had limited effectiveness. Where open collusion was outlawed, insurers simply established private rating bureaus to set “advisory” rates.

Spread of Insurance

Local boards flourished in prosperous times. During the boom years of the 1880s, new capital flowed into every sector. The increasing concentration of wealth in cities steadily drove the amounts and rates of covered property upward. Between 1880 and 1889, insurance coverage rose by an average rate of 4.6 percent a year, increasing 50 percent overall. By 1890, close to 60 percent of burned property in the U.S. was insured, a figure that would not be exceeded until the 1910s, when upwards of 70 percent of property was insured.
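
The two figures for the 1880s are consistent with each other: compounding 4.6 percent annual growth over the nine year-to-year intervals from 1880 to 1889 gives

\[
(1.046)^{9} \approx 1.50,
\]

that is, the roughly 50 percent overall increase cited above.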

In 1889, the dollar value of property insured against fire in the United States approached $12 billion. Fifteen years later, $20 billion in property was covered.

Baltimore and San Francisco

The ability of higher, more stable prices to insulate industry and society from the consequences of citywide conflagrations can be seen in the strikingly different outcomes of the sequels to Chicago and Boston: the Baltimore and San Francisco fires of the early 1900s. The Baltimore Fire of February 7 through 9, 1904 resulted in $55 million in insurance claims, 90 percent of which was paid. Only a few Maryland-based companies went bankrupt.

San Francisco’s disaster dwarfed Baltimore’s. The earthquake that struck the city on April 18, 1906 set off fires that burned for three days, destroying over 500 blocks that contained at least 25,000 buildings. The damages totaled $350 million, some two-thirds covered by insurance. In the end, $225 million was paid out, or around 90 percent of what was owed. Only 20 companies operating in San Francisco were forced to suspend business, some only temporarily.

Improvements in construction and firefighting would put an end to the giant blazes that had plagued America’s cities. But by the middle of the first decade of the twentieth century, cooperative price-setting in fire insurance already had ameliorated the worst economic consequences of these disasters.

1907-1920

State Rate-Setting

Despite the passage of anti-compact legislation, fire insurance in the early 1900s was regulated as much by companies as by state governments. After Baltimore and San Francisco, state governments, recognizing the value of cooperative price-setting, began to abandon anti-compact laws in favor of state involvement in rate-setting, which took one of two forms: rates set by the state, or state review of industry-set rates.

Kansas was the first to adopt strict rate regulation in 1909, followed by Texas in 1910 and Missouri in 1911. These laws required insurers to submit their rates for review by the state insurance department, which could overrule them. Contesting the constitutionality of its law, the insurance industry took the State of Kansas to court. In 1914, the Supreme Court of the United States decided German Alliance Insurance Co. v. Ike Lewis, Superintendent of Insurance, in favor of Kansas. The Court declared insurance to be a business affected with a public interest, subject to rate regulation.

While the case was pending, New York entered the rating arena in 1911 with a much less restrictive law. New York’s law was greatly influenced by a legislative investigation, the Merritt Committee. The Armstrong Committee’s investigation of New York’s life insurance industry in 1905 had uncovered numerous financial improprieties, leading legislators to call for investigations into the fire insurance industry, where they hoped to discover similar evidence of corruption or profiteering. The Merritt Committee, which met in 1910 and 1911, instead found that most fire insurance companies brought in only modest profits.

The Merritt Committee further concluded that cooperation among firms was often in the public interest, and recommended that insurance boards continue to set rates. The ensuing law mandated state review of rates to prevent discrimination, requiring companies to charge the same rates for the same types of property. The law also required insurance companies to submit uniform statistics on premiums and losses for the first time. Other states soon adopted similar requirements. By the early 1920s, nearly thirty states had some form of rate regulation.

Data Collection

New York’s data-collection requirement had far-reaching consequences for the entire fire insurance industry. Because every major insurer in the United States did business in New York (and often a great deal of it), any regulatory act passed there had national implications. And once New York mandated that companies submit data, the imperative for a uniform classification system was born.

In 1914, the industry responded by creating an Actuarial Bureau within the National Board of Fire Underwriters to collect uniformly organized data and submit it to the states. Supported by the National Convention of Insurance Commissioners (today called the National Association of Insurance Commissioners, or NAIC), the Actuarial Bureau was soon able to establish uniform, industry-wide classification standards. The regular collection of uniform data enabled the development of modern actuarial science in the fire field.

1920 to the Present

Federal Regulation

Through the 1920s and 1930s, property insurance rating continued as it had before, with various rating bureaus determining the rates that insurers were to charge, and the states reviewing or approving them. In 1944, the Supreme Court decided a federal antitrust suit against the Southeastern Underwriters Association, which set rates in a number of southern states. The Supreme Court found the SEUA to be in violation of the Sherman Act, thereby overturning Paul v. Virginia. The industry had become subject to federal regulation for the first time.

Within a year, Congress had passed the McCarran-Ferguson Act, allowing the states to continue regulating insurance so long as they met certain federal requirements. The law also granted the industry a limited exemption from antitrust statutes. The Act gave the National Association of Insurance Commissioners three years to develop model rating laws for the states to adopt.

State Rating Laws

In 1946, the NAIC adopted model rate laws for fire and casualty insurance that required "prior approval" of rates by the states before insurers could use them. While most of the industry supported this requirement as a way to prevent competition, a group of "independent" insurers opposed prior approval and instead supported "file and use" rates, under which an insurer could begin using a rate as soon as it was filed with the state, without waiting for approval.

By the 1950s, all states had passed rating laws, although not necessarily the model laws. Some allowed insurers to file deviations from bureau rates, while others required bureau membership and strict prior approval of rates. Most regulatory activity through the late 1950s involved the industry’s attempts to protect the bureau rating system.

The bureaus’ tight hold on rates was soon to loosen. In 1959, an investigation into bureau practices by a U.S. Senate Antitrust subcommittee (the O’Mahoney Committee) found that competition should be the main regulator of the industry. As a result, some states began to make it easier for insurers to deviate from prior approval rates.

During the 1960s, two different systems of property/casualty insurance regulation developed. While many states abandoned prior approval in favor of competitive rating, others strengthened strict rating laws. At the same time, the many rating bureaus that had provided rates for different states began to consolidate. By the 1970s, the rates that these combined rating bureaus provided were officially only advisory. Insurers could choose whether to use them or develop their own rates.

Although membership in rating bureaus is no longer mandatory, advisory organizations continue to play an important part in property/casualty insurance by providing required statistics to the states. They also allow new firms easy access to rating data. The Insurance Services Office (ISO), one of the largest "bureaus," became a for-profit corporation in 1997, and is no longer controlled by the insurance industry. Still, even in its current, mature state, the property/casualty field functions largely according to the patterns set in fire insurance by the 1920s.

References and Further Reading:

Bainbridge, John. Biography of an Idea: The Story of Mutual Fire and Casualty Insurance. New York: Doubleday, 1952.

Baranoff, Dalit. “Shaped By Risk: Fire Insurance in America 1790-1920.” Ph.D. dissertation, Johns Hopkins University, 2003.

Brearley, Harry Chase. Fifty Years of a Civilizing Force: An Historical and Critical Study of the Work of the National Board of Fire Underwriters. New York: Frederick A. Stokes Company, 1916.

Grant, H. Roger. Insurance Reform: Consumer Action in the Progressive Era. Ames: Iowa State University Press, 1979.

Harrington, Scott E. “Insurance Rate Regulation in the Twentieth Century.” Journal of Risk and Insurance 19, no. 2 (2000): 204-18.

Lilly, Claude C. “A History of Insurance Regulation in the United States.” CPCU Annals 29 (1976): 99-115.

Perkins, Edwin J. American Public Finance and Financial Services, 1700-1815. Columbus: Ohio State University Press, 1994.

Pomeroy, Earl and Carole Olson Gates. “State and Federal Regulation of the Business of Insurance.” Journal of Risk and Insurance 19, no. 2 (2000): 179-88.

Tebeau, Mark. Eating Smoke: Fire in Urban America, 1800-1950. Baltimore: Johns Hopkins University Press, 2003.

Wagner, Tim. “Insurance Rating Bureaus.” Journal of Risk and Insurance 19, no. 2 (2000): 189-203.

[1] The name appears in various sources as either the "Contributionship" or the "Contributorship."

Citation: Baranoff, Dalit. “Fire Insurance in the United States”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/fire-insurance-in-the-united-states/

An Economic History of Finland

Riitta Hjerppe, University of Helsinki

Finland in the early 2000s is a small industrialized country with a standard of living ranked among the top twenty in the world. At the beginning of the twentieth century it was a poor agrarian country with a gross domestic product per capita less than half of that of the United Kingdom and the United States, world leaders at the time in this respect. Finland was part of Sweden until 1809, and a Grand Duchy of Russia from 1809 to 1917, with relatively broad autonomy in its economic and many internal affairs. It became an independent republic in 1917. While not directly involved in the fighting in World War I, the country went through a civil war during the years of early independence in 1918, and fought against the Soviet Union during World War II. Participation in Western trade liberalization and bilateral trade with the Soviet Union required careful balancing of foreign policy, but also enhanced the welfare of the population. Finland has been a member of the European Union since 1995, and has belonged to the European Economic and Monetary Union since 1999, when it adopted the euro as its currency.

[Figure: Gross Domestic Product per capita in Finland and in EU 15, 1860-2004, index 2004 = 100. Sources: Eurostat (2001–2005).]

Finland has large forest areas of coniferous trees, and forests have been and still are an important natural resource in its economic development. Other natural resources are scarce: there is no coal or oil, and relatively few minerals. Outokumpu, the biggest copper mine in Europe in its time, was depleted in the 1980s. Even water power is scarce, despite the large number of lakes, because differences in elevation are small. The country is among the larger ones in Europe in area, but it is sparsely populated, with 44 people per square mile and 5.3 million people altogether. The population is very homogeneous, with a small share of people of foreign origin, about two percent, and for historical reasons there are two official language groups, the Finnish-speaking majority and a Swedish-speaking minority. In recent years the population has grown at about 0.3 percent per year.

The Beginnings of Industrialization and Accelerating Growth

Finland was an agrarian country in the 1800s, despite poor climatic conditions for efficient grain growing. Seventy percent of the population was engaged in agriculture and forestry, and half of the value of production came from these primary industries in 1900. Slash and burn cultivation finally gave way to field cultivation during the nineteenth century, even in the eastern parts of the country.

Some iron works were founded in the southwestern part of the country to process Swedish iron ore as early as the seventeenth century. Significant tar burning, sawmilling and fur trading brought cash with which to buy a few imported items such as salt, and some luxuries – coffee, sugar, wines and fine cloths. The small towns in the coastal areas flourished through the shipping of these items, even if restrictive legislation in the eighteenth century required transport via Stockholm. The income from tar and timber shipping accumulated capital for the first industrial plants.

The nineteenth century saw the modest beginnings of industrialization, clearly later than in Western Europe. The first modern cotton factories started up in the 1830s and 1840s, as did the first machine shops. The first steam engines were introduced in the cotton factories, and the first rag paper machine arrived in the 1840s. The first steam sawmills were allowed to start only in 1860. The first railroad shortened the traveling time from the inland towns to the coast in 1862, and the first telegraph lines came at around the same time. Some new inventions, such as electrical power and the telephone, came into use early in the 1880s, but generally the diffusion of new technology into everyday use took a long time.

The export of various industrial and artisan products to Russia from the 1840s on, as well as the opening up of British markets to Finnish sawmill products in the 1860s were important triggers of industrial development. From the 1870s on pulp and paper based on wood fiber became major export items to the Russian market, and before World War I one-third of the demand of the vast Russian empire was satisfied with Finnish paper. Finland became a very open economy after the 1860s and 1870s, with an export share equaling one-fifth of GDP and an import share of one-fourth. A happy coincidence was the considerable improvement in the terms of trade (export prices/import prices) from the late 1860s to 1900, when timber and other export prices improved in relation to the international prices of grain and industrial products.

Openness of the economies (exports+imports of goods/GDP, percent) in Finland and EU 15, 1960-2005

Sources: Heikkinen and van Zanden 2004; Hjerppe 1989.

Finland participated fully in the global economy of the first gold-standard era, importing much of its grain tariff-free and a lot of other foodstuffs. Half of the imports consisted of food, beverages and tobacco. Agriculture turned to dairy farming, as in Denmark, but with poorer results. The Finnish currency, the markka from 1865, was tied to gold in 1878 and the Finnish Senate borrowed money from Western banking houses in order to build railways and schools.

GDP grew at a slightly accelerating average rate of 2.6 percent per annum, and GDP per capita rose 1.5 percent per year on average between 1860 and 1913. The population was also growing rapidly, and from two million in the 1860s it reached three million on the eve of World War I. Only about ten percent of the population lived in towns. The investment rate was a little over 10 percent of GDP between the 1860s and 1913 and labor productivity was low compared to the leading nations. Accordingly, economic growth depended mostly on added labor inputs, as well as a growing cultivated area.

Catching up in the Interwar Years

The revolution of 1917 in Russia and Finland’s independence cut off Russian trade, which was devastating for Finland’s economy. The food situation was particularly difficult, as 60 percent of the grain required had been imported.

Postwar reconstruction in Europe and the consequent demand for timber soon put the economy on a swift growth path. The gap between the Finnish economy and the Western economies narrowed dramatically in the interwar period, although the gap with the Scandinavian countries, which also experienced fast growth, remained unchanged: GDP grew by 4.7 percent per annum and GDP per capita by 3.8 percent in 1920–1938. The investment rate rose to new heights, which also improved labor productivity. The 1930s depression was milder than in many other European countries because of the continued demand for pulp and paper. Moreover, Finnish industries went into depression at different times, which made the downturn milder than it would have been if all the industries had experienced their troughs simultaneously. The Depression, however, had serious and long-drawn-out consequences for poor people.

The land reform of 1918 secured land for tenant farmers and farm workers. A large number of new, small farms were established, which could only support families if they had extra income from forest work. The country remained largely agrarian. On the eve of World War II, almost half of the labor force and one-third of the production were still in the primary industries. Small-scale agriculture used horses and horse-drawn machines, lumberjacks went into the forest with axes and saws, and logs were transported from the forest by horses or by floating. Tariff protection and other policy measures helped to raise the domestic grain production to 80–90 percent of consumption by 1939.

Soon after the end of World War I, Finnish sawmill products, pulp and paper found old and new markets in the Western world. The structure of exports became more one-sided, however. Textiles and metal products found no markets in the West and had to compete hard with imports on the domestic market. More than four-fifths of exports were based on wood, and one-third of industrial production was in sawmilling, other wood products, pulp and paper. Other growing industries included mining, basic metal industries and machine production, but they operated on the domestic market, protected by the customs barriers that were typical of Europe at that time.

The Postwar Boom until the 1970s

Finland came out of World War II crippled by the loss of a full tenth of its territory and with 400,000 evacuees from Karelia. Productive units were dilapidated and the raw-material situation was poor. The huge war reparations to the Soviet Union were the priority problem for decision makers. The favorable development of the domestic machinery and shipbuilding industries, which was based on domestic demand during the interwar period and on arms deliveries to the army during the war, made the reparations deliveries possible. They were paid on time and according to the agreements. At the same time, timber exports to the West started again. Gradually the productive capacity was modernized and the whole industry was reformed. Evacuees and soldiers were given land on which to settle, which contributed to the decrease in farm size.

Finland became part of the Western European trade-liberalization movement by joining the World Bank, the International Monetary Fund (IMF) and the Bretton Woods agreement in 1948, becoming a member of the General Agreement on Tariffs and Trade (GATT) two years later, and joining Finnefta (an agreement between the European Free Trade Area (EFTA) and Finland) in 1961. The government chose not to receive Marshall Aid because of the world political situation. Bilateral trade agreements with the Soviet Union started in 1947 and continued until 1991. Tariffs were eased and imports from market economies liberalized from 1957. Exports and imports, which had stayed at internationally high levels during the interwar years, only slowly returned to the earlier relative levels.

The investment rate climbed to new levels soon after World War II under a government policy favoring investments, and it remained at this very high level until the end of the 1980s. Labor-force growth stopped in the early 1960s, and economic growth has since depended on increases in productivity rather than increased labor inputs. GDP growth was 4.9 percent and GDP per capita growth 4.3 percent in 1950–1973 – matching the rapid pace of many other European countries.

Exports and, accordingly, the structure of the manufacturing industry were diversified by Soviet and, later, Western orders for machinery products including paper machines, cranes, elevators, and special ships such as icebreakers. The vast Soviet Union provided good markets for clothing and footwear, while Finnish wool and cotton factories slowly disappeared because of competition from low-wage countries. The modern chemical industry started to develop in the early twentieth century, often led by foreign entrepreneurs, and the first small oil refinery was built by the government in the 1950s. The government became actively involved in industrial activities in the early twentieth century, with investments in mining, basic industries, energy production and transmission, and the construction of infrastructure, and this continued in the postwar period.

The new agricultural policy, the aim of which was to secure reasonable incomes and favorable loans to the farmers and the availability of domestic agricultural products for the population, soon led to overproduction in several product groups, and further to government-subsidized dumping on the international markets. The first limitations on agricultural production were introduced at the end of the 1960s.

The population reached four million in 1950, and the postwar baby boom put extra pressure on the educational system. The educational level of the Finnish population was low in Western European terms in the 1950s, even if everybody could read and write. The underdeveloped educational system was expanded and renewed as new universities and vocational schools were founded, and the number of years of basic, compulsory education increased. Education has been government run since the 1960s and 1970s, and is free at all levels. Finland started to follow the so-called Nordic welfare model, and similar improvements in health and social care have been introduced, normally somewhat later than in the other Nordic countries. Public child-health centers, cash allowances for children, and maternity leave were established in the 1940s, and pension plans have covered the whole population since the 1950s. National unemployment programs had their beginnings in the 1930s and were gradually expanded. A public health-care system was introduced in 1970, and national health insurance also covers some of the cost of private health care. During the 1980s the income distribution became one of the most even in the world.

Slower Growth from the 1970s

The oil crises of the 1970s put the Finnish economy under pressure. Although the oil reserves of the main supplier, the Soviet Union, showed no signs of running out, the price increased in line with world market prices. This was a source of devastating inflation in Finland. On the other hand, it was possible to increase exports under the terms of the bilateral trade agreement with the Soviet Union. This boosted export demand and helped Finland to avoid the high and sustained unemployment that plagued Western Europe.

Economic growth in the 1980s was somewhat better than in most Western economies, and at the end of the 1980s Finland caught up with the sluggishly-growing Swedish GDP per capita for the first time. In the early 1990s the collapse of Soviet trade, the Western European recession and problems in adjusting to the new liberal order of international capital movement led the Finnish economy into a depression that was worse than that of the 1930s. GDP fell by over 10 percent in three years, and unemployment rose to 18 percent. The banking crisis triggered a profound structural change in the Finnish financial sector. The economy then revived, growing at a brisk 3.6 percent per year in 1994-2005; over the whole period 1973-2005, GDP growth averaged 2.5 percent and GDP per capita growth 2.1 percent.

Electronics started its spectacular rise in the 1980s and is now the largest single manufacturing industry, with a 25 percent share of all manufacturing. Nokia is the world’s largest producer of mobile phones and a major transmission-station constructor. Connected to this development was the increase in research-and-development outlays to three percent of GDP, one of the highest shares in the world. The Finnish paper companies UPM-Kymmene and M-real and the Finnish-Swedish Stora-Enso are among the largest paper producers in the world, although paper production now accounts for only 10 percent of manufacturing output. Recent discussion about the future of the industry is worrying, however. The position of the Nordic paper industry, which is based on expensive, slowly-growing timber, is threatened by new paper factories founded near the expanding consumption areas in Asia and South America, which use local, fast-growing tropical timber. The formerly significant sawmilling operations now constitute a very small percentage of activities, although production volumes have been growing. The textile and clothing industries have shrunk into insignificance.

What has typified the last couple of decades is globalization, which has spread to all areas. Exports and imports have increased as a result of export-favoring policies. Some 80 percent of the stocks of Finnish public companies are now in foreign hands: foreign ownership was limited and controlled until the early 1990s. A quarter of the companies operating in Finland are foreign-owned, and Finnish companies have even bigger investments abroad. Most big companies are truly international nowadays. Migration to Finland has increased, and since the collapse of the eastern bloc Russian immigrants have become the largest single foreign group. The number of foreigners is still lower than in many other countries – there are about 120,000 people of foreign background out of a population of 5.3 million.

The directions of foreign trade have been changing, as trade with the rising Asian economies has been gaining in importance and Russian trade has fluctuated. Otherwise, almost the same country distribution prevails as has been common for over a century. Western Europe has a share of three-fifths, which has been typical. The United Kingdom was long Finland’s biggest trading partner, with a share of one-third, but this started to diminish in the 1960s. Russia accounted for one-third of Finnish foreign trade in the early 1900s, but the Soviet Union had minimal trade with the West at first, and its share of Finnish foreign trade was just a few percentage points. After World War II Soviet-Finnish trade increased gradually until it reached 25 percent of Finnish foreign trade in the 1970s and early 1980s. Trade with Russia has been gradually gaining ground again from the low point of the early 1990s, and had risen to about ten percent by 2006. This makes Russia one of Finland’s three biggest trading partners, Sweden and Germany being the other two with a ten percent share each.

The balance of payments was a continuing problem in the Finnish economy until the 1990s. Particularly in the post-World War II period inflation repeatedly eroded the competitive capacity of the economy and led to numerous devaluations of the currency. An economic policy favoring exports helped the country out of the depression of the 1990s and improved the balance of payments.

Agriculture continued its problematic development of overproduction and high subsidies, which finally became very unpopular. The number of farms has shrunk since the 1960s and the average farm size has recently risen to the European average. The shares of agriculture in production and employment are also now at Western European levels. Finnish agriculture is incorporated into the Common Agricultural Policy of the European Union and shares its problems, even if Finnish overproduction has been virtually eliminated.

The share of forestry is equally low, even if it supplies four-fifths of the wood used in Finnish sawmills and paper factories: the remaining fifth is imported mainly from the northwestern parts of Russia. The share of manufacturing is somewhat above Western European levels and, accordingly, that of services is high but slightly lower than in the old industrialized countries.

Recent discussion on the state of the economy mainly focuses on two issues. Finland’s very open economy is strongly influenced by the rather sluggish economic development of the European Union, so high growth rates are not to be expected in Finland either. Since the 1990s depression, the investment rate has remained at a lower level than was common in the postwar period, and this is a cause for concern.

The other issue concerns the prominent role of the public sector in the economy. The Nordic welfare model is basically approved of, but its costs create tensions. High taxation is one consequence, and political parties debate whether the large public-sector share slows down economic growth.

The aging population, high unemployment and the decreasing numbers of taxpayers in the rural areas of eastern and central Finland place a burden on the local governments. There is also continuing discussion about tax competition inside the European Union: how does the high taxation in some member countries affect the location decisions of companies?

Development of Finland’s exports by commodity group 1900-2005, percent

Source: Finnish National Board of Customs, Statistics Unit

Note on classification: Metal industry products SITC 28, 67, 68, 7, 87; Chemical products SITC 27, 32, 33, 34, 5, 66; Textiles SITC 26, 61, 65, 84, 85; Wood, paper and printed products SITC 24, 25, 63, 64, 82; Food, beverages, tobacco SITC 0, 1, 4.

Development of Finland’s imports by commodity group 1900-2005, percent

Source: Finnish National Board of Customs, Statistics Unit

Note on classification: Metal industry products SITC 28, 67, 68, 7, 87; Chemical products SITC 27, 32, 33, 34, 5, 66; Textiles SITC 26, 61, 65, 84, 85; Wood, paper and printed products SITC 24, 25, 63, 64, 82; Food, beverages, tobacco SITC 0, 1, 4.

References:

Heikkinen, S. and J.L. van Zanden, eds. Explorations in Economic Growth. Amsterdam: Aksant, 2004.

Heikkinen, S. Labour and the Market: Workers, Wages and Living Standards in Finland, 1850–1913. Commentationes Scientiarum Socialium 51 (1997).

Hjerppe, R. The Finnish Economy 1860–1985: Growth and Structural Change. Studies on Finland’s Economic Growth XIII. Helsinki: Bank of Finland Publications, 1989.

Jalava, J., S. Heikkinen and R. Hjerppe. “Technology and Structural Change: Productivity in the Finnish Manufacturing Industries, 1925-2000.” Transformation, Integration and Globalization Economic Research (TIGER), Working Paper No. 34, December 2002.

Kaukiainen, Yrjö. A History of Finnish Shipping. London: Routledge, 1993.

Myllyntaus, Timo. Electrification of Finland: The Transfer of a New Technology into a Late Industrializing Economy. Macmillan, 1991.

Ojala, J., J. Eloranta and J. Jalava, eds. The Road to Prosperity: An Economic History of Finland. Helsinki: Suomalaisen Kirjallisuuden Seura, 2006.

Pekkarinen, J. and J. Vartiainen. Finlands ekonomiska politik: den långa linjen 1918–2000. Stockholm: Stiftelsen Fackföreningsrörelsens institut för ekonomisk forskning FIEF, 2001.

Citation: Hjerppe, Riitta. “An Economic History of Finland”. EH.Net Encyclopedia, edited by Robert Whaples. February 10, 2008. URL http://eh.net/encyclopedia/an-economic-history-of-finland/

The Glorious Revolution of 1688

Stephen Quinn, Texas Christian University

In the Glorious Revolution of 1688, William of Orange took the English throne from James II. The event brought a permanent realignment of power within the English constitution. The new co-monarchy of King William III and Queen Mary II accepted more constraints from Parliament than previous monarchs had, and the new constitution created the expectation that future monarchs would also remain constrained by Parliament. The new balance of power between Parliament and crown made the promises of the English government more credible, and credibility allowed the government to reorganize its finances through a collection of changes called the Financial Revolution. A more contentious argument is that the constitutional changes made property rights more secure and thus promoted economic development.

Historical Overview

Tension between king and parliament ran deep throughout the seventeenth century. In the 1640s, the dispute turned into civil war. The loser, Charles I, was beheaded in 1649; his sons, Charles and James, fled to France; and the victorious Oliver Cromwell ruled England in the 1650s. Cromwell’s death in 1658 created a political vacuum, so Parliament invited Charles I’s sons back from exile, and the English monarchy was restored with the coronation of Charles II in 1660.

Tensions after the Restoration

The Restoration, however, did not settle the fundamental questions of power between king and Parliament. Indeed, exile had exposed Charles I’s sons to the strong monarchical methods of Louis XIV. Charles and James returned to Britain with expectations of an absolute monarchy justified by the Divine Right of Kings, so tensions continued during the reigns of Charles II (1660-1685) and his brother James II (1685-88). Table 1 lists many of the tensions and the positions favored by each side. The compromise struck during the Restoration was that Charles II would control his succession, that he would control his judiciary, and that he would have the power to collect traditional taxes. In exchange, Charles II would remain Protestant and the imposition of additional taxes would require Parliament’s approval.

Table 1

Issues Separating Crown and Parliament, 1660-1688

Issue King’s Favored Position Parliament’s Favored Position
Constitution Absolute Royal Power

(King above Law)

Constrained Royal Power

(King within Law)

Religion Catholic Protestant
Ally France Holland
Enemy Holland France
Inter-Branch Checks Royal right to control succession

(Parliamentary approval NOT required)

Parliament’s right to meet

(Royal summons NOT required)

Judiciary Subject to Royal Punishment Subject to Parliamentary Impeachment
Ordinary Revenue Royal authority sufficient to impose and collect traditional taxes. Parliamentary authority necessary to impose and collect traditional taxes.

traditional taxes traditional taxes.

Extraordinary Revenue Royal authority sufficient to impose and collect new taxes. Parliamentary authority necessary to impose and collect new taxes.
Appropriation Complete royal control over expenditures. Parliamentary audit or even appropriation.

In practice, authority over additional taxation was how Parliament constrained Charles II. Charles brought England into war against Protestant Holland (1665-67) with the support of extra taxes authorized by Parliament. In the years following that war, however, the extra funding from Parliament ceased, but Charles II’s borrowing and spending did not. By 1671, all his income was committed to regular expenses and paying interest on his debts. Parliament would not authorize additional funds, so Charles II was fiscally shackled.

Treaty of Dover

To regain fiscal autonomy and subvert Parliament, Charles II signed the secret Treaty of Dover with Louis XIV in 1670. Charles agreed that England would join France in war against Holland and that he would publicly convert to Catholicism. In return, Charles received cash from France and the prospect of victory spoils that would solve his debt problem. The treaty, however, threatened the Anglican Church, contradicted Charles II’s stated policy of support for Protestant Holland, and provided a source of revenue independent of Parliament.

Moreover, to free the money needed to launch his scheme, Charles stopped servicing many of his debts in an act called the Stop of the Exchequer, and, in Machiavellian fashion, Charles isolated a few bankers to take the loss (Roseveare 1991). The gamble, however, was lost when the English Navy failed to defeat the Dutch in 1672. Charles then avoided a break with Parliament by retreating from Catholicism.

James II

Parliament, however, was also unable to gain the upper hand. From 1679 to 1681, Protestant nobles had Parliament pass acts excluding Charles II’s Catholic brother James from succession to the throne. The political turmoil of the Exclusion Crisis created the Whig faction favoring exclusion and the Tory counter-faction opposing exclusion. Even with a majority in Commons, however, the Whigs could not force a reworking of the constitution in their favor because Charles responded by dissolving three Parliaments without giving his consent to the acts.

As a consequence of the stalemate, Charles did not summon Parliament over the final years of his life, and James did succeed to the throne in 1685. Unlike the pragmatic Charles, James II boldly pushed for all of his goals. On the religious front, the Catholic James upset his Anglican allies by threatening the preeminence of the Anglican Church (Jones 1978, 238). He also declared that his son and heir would be raised Catholic. On the military front, James expanded the standing army and promoted Catholic officers. On the financial front, he attempted to subvert Parliament by packing it with his loyalists. With a packed Parliament, “the king and his ministers could have achieved practical and permanent independence by obtaining a larger revenue” (Jones 1978, p. 243). By 1688, Tories, worried about the Church of England, and Whigs, worried about the independence of Parliament, agreed that they needed to unite against James II.

William of Orange

The solution became Mary Stuart and her husband, William of Orange. English factions invited Mary and William to seize the throne because the couple was Protestant and Mary was the daughter of James II. The situation, however, had additional drama because William was also the military commander of the Dutch Republic, and, in 1688, the Dutch were in a difficult military position. Holland was facing war with France (the Nine Years War, 1688-97), and the possibility was growing that James II would bring England into the war on the side of France. James was nearing open war with his son-in-law William.

For William and Holland, accepting the invitation and invading England was a bold gamble, but the success could turn England from a threat to an ally. William landed in England with a Dutch army on November 5, 1688 (Israel 1991). Defections in James II’s army followed before battle was joined, and William allowed James to flee to France. Parliament took the flight of James II as abdication and the co-reign of William III and Mary II officially replaced him on February 13, 1689. Although Mary had the claim to the throne as James II’s daughter, William demanded to be made King and Mary wanted William to have that power. Authority was simplified when Mary’s death in 1694 left William the sole monarch.

New Constitution

The deal struck between Parliament and the royal couple in 1688-89 was that Parliament would support the war against France, while William and Mary would accept new constraints on their authority. The new constitution reflected the relative weakness of William’s bargaining position more than any strength in Parliament’s position. Parliament feared the return of James, but William very much needed England’s willing support in the war against France because the costs would be extraordinary and William would be focused on military command instead of political wrangling.

The initial constitutional settlement was worked out in 1689 in the English Bill of Rights, the Toleration Act, and the Mutiny Act that collectively committed the monarchs to respect Parliament and Parliament’s laws. Fiscal power was settled over the 1690s as Parliament stopped granting the monarchs the authority to collect taxes for life. Instead, Parliament began regular re-authorization of all taxes, Parliament began to specify how new revenue authorizations could be spent, Parliament began to audit how revenue was spent, and Parliament diverted some funds entirely from the king’s control (Dickson 1967: 48-73). By the end of the war in 1697, the new fiscal powers of Parliament were largely in place.

Constitutional Credibility

The financial and economic importance of the arrangement between William and Mary and Parliament was that the commitments embodied in the constitutional monarchy of the Glorious Revolution were more credible than the commitments under the Restoration constitution (North and Weingast 1989). Essential to the argument is what economists mean by the term credible. If a constitution is viewed as a deal between Parliament and the Crown, then credibility means how believable it is today that Parliament and the king will choose to honor their promises tomorrow. Credibility does not ask whether Charles II reneged on a promise; rather, credibility asks if people expected Charles to renege.

One can represent the situation by drawing a decision tree that shows the future choices determining credibility. For example, the decision tree in Figure 1 contains the elements determining the credibility of Charles II’s honoring the Restoration constitution of 1660. Going forward in time from 1660 (left to right), the critical decision is whether Charles II will honor the constitution or eventually renege. The future decision by Charles, however, will depend on his estimation of benefits of becoming an absolute monarch versus the cost of failure and the chances he assigns to each. Determining credibility in 1660 requires working backwards (right to left). If one thinks Charles II will risk civil war to become an absolute monarch, then one would expect Charles II to renege on the constitution, and therefore the constitution lacks credibility despite what Charles II may promise in 1660. In contrast, if one expects Charles II to avoid civil war, then one would expect Charles to choose to honor the constitution, so the Restoration constitution would be credible.

Figure 1. Restoration of 1660 Decision Tree
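The backward-induction logic behind Figure 1 can be made concrete with a small sketch. The payoffs and the probability of success below are purely hypothetical illustrations, not estimates from the literature; the point is only that credibility in 1660 turns on what the king is expected to do later.

```python
# A minimal sketch of the backward-induction reasoning behind Figure 1.
# All payoffs and the probability of success are hypothetical.

def renege_value(p_success, v_absolutism, v_failure):
    """King's expected payoff from reneging on the constitution."""
    return p_success * v_absolutism + (1 - p_success) * v_failure

def constitution_is_credible(v_honor, p_success, v_absolutism, v_failure):
    """Credible if honoring the deal beats the gamble of reneging."""
    return v_honor >= renege_value(p_success, v_absolutism, v_failure)

# In 1660, failure means civil war, a disastrous outcome for the king,
# so honoring the constitution is the expected choice.
print(constitution_is_credible(v_honor=10, p_success=0.3,
                               v_absolutism=20, v_failure=-50))  # True

# After 1672, failure merely restores the status quo: the gamble costs
# the king little, and the commitment loses credibility.
print(constitution_is_credible(v_honor=10, p_success=0.3,
                               v_absolutism=20, v_failure=10))   # False
```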

A difficulty with credibility is foreseeing future options. With hindsight, we know that Charles II did attempt to break the Restoration constitution in 1670-72. When his war against Holland failed, he repaired relations with Parliament and avoided civil war, so Charles managed something not portrayed in Figure 1. He replaced the outcome of civil war in the decision tree with the outcome of a return to the status quo. The consequence of removing the threat of civil war, however, was to destroy credibility in the king’s commitment to the constitution. If James II believed he inherited the options created by his brother, then James II’s 1685 commitment to the Restoration constitution lacked credibility because the worst that would happen to James was a return to the status quo.

So why would the Glorious Revolution constitution be more credible than the Restoration constitution challenged by both Charles II and James II? William was very unlikely to become Catholic or pro-French, which eliminated many tensions. Also, William very much needed Parliament’s support for his war against France; however, the change in credibility argued by North and Weingast (1989) looks past William’s reign, so it also requires confidence that William’s successors would abide by the constitution. A source of long-run confidence was that the Glorious Revolution reasserted the risk of a monarch losing his throne. William III’s decision tree in 1689 again looked like Charles II’s in 1660, and Parliament’s threat to remove an offending monarch was becoming credible. The seventeenth century had now seen Parliament remove two of the four Stuart monarchs, and the second displacement in 1688 was much easier than the wars that ended the reign of Charles I in 1649.

Another lasting change that made the new constitution more credible than the old one was that William and his successors were more constrained in fiscal matters. Parliament’s growing ‘power of the purse’ gave the king less room to maneuver in any constitutional challenge. Moreover, Parliament’s fiscal control increased over time because the new constitution favored Parliament in the constitutional renegotiations that accompanied each succeeding monarch.

As a result, the Glorious Revolution constitution made credible the enduring ascendancy of Parliament. In terms of the king, the new constitution increased the credibility of the proposition that kings would not usurp Parliament.

Fiscal Credibility

The second credibility story of the Glorious Revolution was that the increased credibility of the government’s constitutional structure translated into an increased credibility for the government’s commitments. When acting together, the king and Parliament retained the power to default on debt, seize property, or change rules; so why would the credibility of the constitution create confidence in a government’s promises to the public?

A king who lives within the constitution has less desire to renege on his commitments. Recall that Charles II defaulted on his debts in an attempt to subvert the constitution, and, in contrast, Parliament after the Glorious Revolution generously financed wars for monarchs who abided by the constitution. An irony of the Glorious Revolution is that monarchs who accepted constitutional constraints gained more resources than their absolutist forebears.

Still, should a monarch want to have his government renege, Parliament will not always agree, and a stable constitution assures a Parliamentary veto. The two houses of Parliament, Commons and Lords, create more veto opportunities, and the chances of a policy change decrease with more veto opportunities if the king and the two houses have different interests (Weingast 1997).
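A toy calculation illustrates the arithmetic of veto points. The assent probability below is hypothetical, and independence across veto holders is an assumption made purely for illustration.

```python
# A minimal numerical sketch of the veto-opportunities argument above.
# The assent probability is hypothetical and assumed independent across
# veto holders; the point is simply that veto points multiply.

def p_policy_reversal(p_assent, veto_points):
    """Probability that every veto holder assents to reneging."""
    return p_assent ** veto_points

print(p_policy_reversal(0.5, 1))  # king alone: 0.5
print(p_policy_reversal(0.5, 3))  # king, Commons and Lords: 0.125
```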

Another aspect of Parliament is the role of political parties. For veto opportunities to block change, opponents need only to control one veto, and here the coalition aspect of parties was important. For example, the Whig coalition combined dissenting Protestants and moneyed interests, so each could rely on mutual support through the Whig party to block government action against either. Cross-issue bargaining between factions creates a cohesive coalition on multiple issues (Stasavage 2002).

An additional reason for Parliament’s credibility was reputation. As a deterrent against violating commitments today, reputation relies on penalties felt tomorrow, so reputation often does not deter those overly focused on the present. A desperate king is a common example. As collective bodies of indefinite life, however, Parliament and political parties have longer time horizons than an individual, so reputation has a better chance of fostering credibility.

A measure of fiscal credibility is the risk premium that the market puts on government debt. During the Nine Years War (1688-97), government debt carried a risk premium of 4 percent over private debt, but that risk premium disappeared and became a small discount in the years 1698 to 1705 (Quinn 2001: 610). The drop in the rates on government debt marks a substantial increase in the market’s confidence in the government after the Treaty of Ryswick ended the Nine Years War in 1697 and left William III and the new constitution intact. A related measure of confidence was the market price of stock in companies like the Bank of England and the East India Company. Because those companies were created by Parliamentary authorization and held large quantities of government debt, changes in confidence were reflected in changes in their stock prices. Again, the Treaty of Ryswick greatly increased stock prices and confirms a substantial increase in the credibility of the government (Wells and Wills 2000, 434). In contrast, later Jacobite threats, such as the invasion of Scotland by James II’s son ‘the Pretender’ in 1708, had negative but largely transitory effects on share prices.

Financial Consequences

The fiscal credibility of the English government created by the Glorious Revolution unleashed a revolution in public finance. The most prominent element was the introduction of long-run borrowing by the government, because such borrowing absolutely relied on the government’s fiscal credibility. To create credible long-run debt, Parliament took responsibility for the debt, and Parliamentary-funded debt became the National Debt, instead of just the king’s debt. To bolster credibility, Parliament committed future tax revenues to servicing the debts and introduced new taxes as needed (Dickson 1967, Brewer 1988). Credible government debt formed the basis of the Bank of England in 1694 and the core of the London stock market. The combination of these changes has been called the Financial Revolution and was essential for Britain’s emergence as a Great Power in the eighteenth century (Neal 2000).

While the Glorious Revolution was critical to the Financial Revolution in England, the follow-up assertion in North and Weingast (1989) that the Glorious Revolution increased the security of property rights in general, and so spurred economic growth, remains an open question. A difficulty is how to test the question. An increase in the credibility of property rights might cause interest rates to decrease because people become willing to save more; however, rates based on English property rentals show no effect from the Glorious Revolution, and the rates of one London banker actually increased after the Glorious Revolution (Clark 1996, Quinn 2001). In contrast, high interest rates could indicate that the Glorious Revolution increased entrepreneurship and the demand for investment. Unfortunately, high rates could also mean that the expansion of government borrowing permitted by the Financial Revolution crowded out investment. North and Weingast (1989) point to a general expansion of financial intermediation, which is supported by studies like Carlos, Key, and Dupree (1998) that find the secondary market for Royal African Company and Hudson’s Bay Company stocks became busier in the 1690s. Distinguishing between crowding out and increased demand for investment, however, relies on establishing whether the overall quantity of business investment changed, and that remains unresolved because of the difficulty in constructing such an aggregate measure. The potential linkages between the credibility created by the Glorious Revolution and economic development remain an open question.

References:

Brewer, John. The Sinews of Power. Cambridge: Harvard University Press, 1988.

Carlos, Ann M., Jennifer Key, and Jill L. Dupree. “Learning and the Creation of Stock-Market Institutions: Evidence from the Royal African and Hudson’s Bay Companies, 1670-1700.” Journal of Economic History 58, no. 2 (1998): 318-44.

Clark, Gregory. “The Political Foundations of Modern Economic Growth: England, 1540-1800.” Journal of Interdisciplinary History 26, no. 4 (1996): 563-87.

Dickson, Peter. The Financial Revolution in England. New York: St. Martin’s, 1967.

Israel, Jonathan. “The Dutch Role in the Glorious Revolution.” In The Anglo-Dutch Moment, edited by Jonathan Israel, 103-62. Cambridge: Cambridge University Press, 1991.

Jones, James. Country and Court: England, 1658-1714. Cambridge: Harvard University Press, 1978.

Neal, Larry. “How it All Began: the Monetary and Financial Architecture of Europe during the First Global Capital Markets, 1648-1815.” Financial History Review 7 (2000): 117-40.

North, Douglass, and Barry Weingast. “Constitutions and Commitment: The Evolution of Institutions Governing Public Choice in Seventeenth-Century England.” Journal of Economic History 49, no. 4 (1989): 803-32.

Roseveare, Henry. The Financial Revolution 1660-1760. London: Longman, 1991.

Quinn, Stephen. “The Glorious Revolution’s Effect on English Private Finance: A Microhistory, 1680-1705.” Journal of Economic History 61, no. 3 (2001): 593-615.

Stasavage, David. “Credible Commitments in Early Modern Europe: North and Weingast Revisited.” Journal of Law, Economics, and Organization 18, no. 1 (2002): 155-86.

Weingast, Barry. “The Political Foundations of Limited Government: Parliament and Sovereign Debt in Seventeenth-Century and Eighteenth-Century England.” In The Frontiers of the New Institutional Economics, edited by John Drobak and John Nye, 213-46. San Diego: Academic Press, 1997.

Wells, John, and Douglas Wills. “Revolution, Restoration, and Debt Repudiation: The Jacobite Threat to England’s Institutions and Economic Growth.” Journal of Economic History 60, no. 2 (2000): 418-41.

Citation: Quinn, Stephen. “The Glorious Revolution of 1688”. EH.Net Encyclopedia, edited by Robert Whaples. April 17, 2003. URL http://eh.net/encyclopedia/the-glorious-revolution-of-1688/

The Economic History of the International Film Industry

Gerben Bakker, University of Essex

Introduction

Like other major innovations such as the automobile, electricity, chemicals and the airplane, cinema emerged in most Western countries at the same time. As the first form of industrialized mass-entertainment, it was all-pervasive. From the 1910s onwards, each year billions of cinema-tickets were sold and consumers who did not regularly visit the cinema became a minority. In Italy, today hardly significant in international entertainment, the film industry was the fourth-largest export industry before the First World War. In the depression-struck U.S., film was the tenth most profitable industry, and in 1930s France it was the fastest-growing industry, followed by paper and electricity, while in Britain the number of cinema-tickets sold rose to almost one billion a year (Bakker 2001b). Despite this economic significance, despite its rapid emergence and growth, despite its pronounced effect on the everyday life of consumers, and despite its importance as an early case of the industrialization of services, the economic history of the film industry has hardly been examined.

This article limits itself exclusively to the economic development of the industry. It discusses just a few countries, mainly the U.S., Britain and France, and only insofar as they illustrate the economic issues it addresses; it does not give complete histories of the film industries in those countries, since an encyclopedia article cannot do justice to developments in each and every country. It also limits itself to the evolution of the Western film industry, which has been and still is the largest film industry in the world in revenue terms, although this may well change in the future.

Before Cinema

In the late eighteenth century most consumers enjoyed their entertainment in an informal, haphazard and often non-commercial way. When making a trip they could suddenly meet a roadside entertainer, and their villages were often visited by traveling showmen, clowns and troubadours. Seasonal fairs attracted a large variety of musicians, magicians, dancers, fortune-tellers and sword-swallowers. Only a few large cities harbored legitimate theaters, strictly regulated by the local and national rulers. This world was torn apart in two stages.

First, most Western countries started to deregulate their entertainment industries, enabling many more entrepreneurs to enter the business and make far larger investments, for example in circuits of fixed stone theaters. The U.S. was the first, with liberalization in the late eighteenth century. Most European countries followed during the nineteenth century: Britain, for example, deregulated in the mid-1840s, and France in the late 1860s. As a result, commercial, formalized and standardized live entertainment emerged that destroyed a fair part of traditional entertainment. The combined effect of liberalization, innovation and changes in business organization made the industry grow rapidly throughout the nineteenth century, and integrated local and regional entertainment markets into national ones. By the end of the nineteenth century, integrated national entertainment industries and markets had largely realized the productivity gains attainable through process innovations. Creative inputs, for example, circulated swiftly among the venues – often in dedicated trains – coordinated by centralized booking offices, maximizing capital and labor utilization.

At the end of the nineteenth century, in the era of the second industrial revolution, falling working hours, rising disposable income, increasing urbanization, rapidly expanding transport networks and strong population growth resulted in a sharp rise in the demand for entertainment. The effect of this boom was further rapid growth of live entertainment through process innovations. At the turn of the century, the production possibilities of the existing industry configuration were fully realized and further innovation within the existing live-entertainment industry could only increase productivity incrementally.

At this moment, in a second stage, cinema emerged and in its turn destroyed this world, by industrializing it into the modern world of automated, standardized, tradable mass-entertainment, integrating the national entertainment markets into an international one.

Technological Origins

In the early 1890s, Thomas Edison introduced the kinetograph camera and the kinetoscope, which enabled the shooting of films and their playback in coin-operated machines for individual viewing. In the mid-1890s, the Lumière brothers added projection to the invention and started to show films in theater-like settings. Cinema reconfigured different technologies that all were available from the late 1880s onwards: photography (1830s), taking negative pictures and printing positives (1880s), roll films (1850s), celluloid (1868), high-sensitivity photographic emulsion (late 1880s), projection (1645) and movement dissection/persistence of vision (1872).

After the preconditions for motion pictures had been established, cinema technology itself was invented. Already in 1860/1861 patents were filed for viewing and projecting motion pictures, but not for the taking of pictures. The scientist Étienne-Jules Marey completed the first working model of a film camera in 1888 in Paris. Edison visited Georges Demeney in 1888 and saw his films. In 1891, he filed an American patent for a film camera, which had a different moving mechanism than the Marey camera. In 1890, the Englishman William Friese-Greene presented a working camera to a group of enthusiasts. In 1893 the Frenchman Demeney filed a patent for a camera. Finally, the Lumière brothers filed a patent for their type of camera and for projection in February 1895. In December of that year they gave the first projection for a paying audience. They were followed in February 1896 by the Englishman Robert W. Paul. Paul also invented the ‘Maltese cross,’ a device which is still used in film cameras today: it produces the smooth, intermittent advance of the film and covers the lens during the spaces between exposures (Michaelis 1958; Musser 1990: 65-67; Low and Manvell 1948).

Three characteristics stand out in this innovation process. First, it was an international process of invention, taking place in several countries at the same time, and the inventors building upon and improving upon each other’s inventions. This connects to Joel Mokyr’s notion that in the nineteenth century communication became increasingly important to innovations, and many innovations depended on international communication between inventors (Mokyr 1990: 123-124). Second, it was what Mokyr calls a typical nineteenth century invention, in that it was a smart combination of many existing technologies. Many different innovations in the technologies which it combined had been necessary to make possible the innovation of cinema. Third, cinema was a major innovation in the sense that it was quickly and universally adopted throughout the western world, quicker than the steam engine, the railroad or the steamship.

The Emergence of Cinema

For about the first ten years of its existence, cinema in the United States and elsewhere was mainly a trick and a gadget. Before 1896 the coin-operated kinetoscope of Edison was present at fairs and in entertainment venues. Spectators dropped a coin in the machine and peeked through an eyepiece to see the film. The first projections, from 1896 onwards, attracted large audiences. Lumière had a group of operators who traveled around the world with the cinematograph and showed the pictures in theaters. After a few years films became part of the program in vaudeville and sometimes in theater as well. At the same time traveling cinema emerged: cinemas which traveled around with a tent or mobile theater and set up shop for a short time in towns and villages. These catered to general, popular audiences, while the Lumière operators and others offered more upscale parts of theater programs, or a special program for the bourgeoisie (Musser 1990: 140, 299, 417-20).

This whole era, which in the U.S. lasted up to about 1905, was a time in which cinema seemed just one of many new fashions, and it was not at all certain that it would persist rather than be forgotten or marginalized quickly, as happened to the contemporary boom in skating rinks and bowling alleys. This changed when Nickelodeons, fixed cinemas with a few hundred seats, emerged and quickly spread all over the country between 1905 and 1907. From this time onwards cinema changed into an industry in its own right, distinct from other entertainments, since it had its own buildings and its own advertising. The emergence of fixed cinemas coincided with a huge growth phase in the business in general; film production increased greatly, and film distribution developed into a specialized activity, often managed by large film producers. However, until about 1914, besides the cinemas, films also continued to be combined with live entertainment in vaudeville and other theaters (Musser 1990; Allen 1980).

Figure 1 shows the total length of negatives released on the U.S., British and French film markets. In the U.S., the total released negative length increased from 38,000 feet in 1897, to two million feet in 1910, to twenty million feet in 1920. Clearly, the initial U.S. growth between 1893 and 1898 was very strong: the market increased by over three orders of magnitude, but from an infinitesimal initial base. Between 1898 and 1906, far less growth took place, and in this period it may well have looked as if the cinematograph would remain a niche product, a gimmick shown at fairs and interspersed with live entertainment. From 1907, however, a new, sharp, sustained growth phase started: the market increased by a further two orders of magnitude – and from a far higher base this time. At the same time, the average film length increased considerably, from eighty feet in 1897 to seven hundred feet in 1910 to three thousand feet in 1920. One reel of film held about 1,500 feet and had a playing time of about fifteen minutes.

Between the mid-1900s and 1914 the British and French markets were growing at roughly the same rates as the U.S. one. World War I constituted a discontinuity: from 1914 onwards European growth rates were far lower than those in the U.S.

The prices the Nickelodeons charged were between five and ten cents, for which spectators could stay as long as they liked. Around 1910, when larger cinemas emerged in hot city-center locations, more closely resembling theaters than the small and shabby Nickelodeons, prices increased. They varied from one dollar to one dollar and a half for ‘first-run’ cinemas down to five cents for sixth-run neighborhood cinemas (see also Sedgwick 1998).

Figure 1

Total Released Length on the U.S., British and French Film Markets (in Meters), 1893-1922

Note: The length refers to the total length of original negatives that were released commercially.

See Bakker 2005, appendix I for the method of estimation and for a discussion of the sources.

Source: Bakker 2001b; American Film Institute Catalogue, 1893-1910; Motion Picture World, 1907-1920.

The Quality Race

Once Nickelodeons and other types of cinemas were established, the industry entered a new stage with the emergence of the feature film. Before 1915, cinemagoers saw a succession of many different films, each between one and fifteen minutes, of varying genres such as cartoons, newsreels, comedies, travelogues, sports films, ‘gymnastics’ pictures and dramas. After the mid-1910s, going to the cinema meant watching a feature film, a heavily promoted dramatic film with a length that came closer to that of a theater play, based on a famous story and featuring famous stars. Shorts remained only as side dishes.

The feature film emerged when cinema owners discovered that films of far higher quality and length enabled them to ask far higher ticket prices and get far more people into their cinemas, resulting in far higher profits, even if cinemas needed to pay far more for the film rental. The discovery that consumers would turn their backs on packages of shorts (newsreels, sports, cartoons and the like) as the quality of features increased set in motion a quality race between film producers (Bakker 2005). They all started investing heavily in portfolios of feature films, spending large sums on well-known stars, rights to famous novels and theater plays, extravagant sets, star directors, etc. A contributing factor in the U.S. was the demise of the Motion Picture Patents Company (MPPC), a cartel that tried to monopolize film production and distribution. Between about 1908 and 1912 the Edison-backed MPPC had restricted quality artificially by setting limits on film length and film rental prices. When William Fox and the Department of Justice started legal action in 1912, the power of the MPPC quickly waned and the ‘independents’ came to dominate the industry.

In the U.S., the motion picture industry became the internet of the 1910s: when companies put the words ‘motion pictures’ in their IPOs, investors would flock to them. Many of these companies went bankrupt, were dissolved or were taken over. A few survived and became the Hollywood studios, most of which we still know today: Paramount, Metro-Goldwyn-Mayer (MGM), Warner Brothers, Universal, Radio-Keith-Orpheum (RKO), Twentieth Century-Fox, Columbia and United Artists.

A necessary condition for the quality race was some form of vertical integration. In the early film industry, films were sold. This meant that the cinema owner who bought a film would receive all the marginal revenues the film generated. In the film industry, these revenues were largely marginal profits, as most costs were fixed, so an additional film ticket sold was pure (gross) profit. Because the producer did not get any of these revenues, at the margin there was little incentive to increase quality. When outright sales made way for the rental of films to cinemas for a fixed fee, producers got a greater incentive to increase a film’s quality, because higher quality would generate more rentals (Bakker 2005). The incentive increased further when percentage contracts were introduced for large city-center cinemas, and when producer-distributors actually started to buy large cinemas. The changing contractual relationship between cinemas and producers was paralleled between producers and distributors.
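A stylized calculation can make the incentive argument concrete. The ticket price and revenue share below are hypothetical round numbers; the sketch only shows how the contract form determines the producer’s marginal revenue per admission.

```python
# A minimal sketch, with hypothetical numbers, of the incentive argument
# above: how much of one extra admission reaches the producer under each
# contract form. The price and the revenue share are illustrative only.

TICKET_PRICE = 0.10  # roughly a Nickelodeon-era ticket, in dollars

def producer_take_per_extra_ticket(contract, share=0.30):
    """Producer's marginal revenue from one additional admission."""
    if contract == "outright sale":
        return 0.0  # all marginal revenue stays with the exhibitor
    if contract == "fixed rental":
        return 0.0  # zero per extra ticket, though higher quality still
                    # earns more rental bookings across many cinemas
    if contract == "percentage":
        return share * TICKET_PRICE  # quality pays off ticket by ticket
    raise ValueError(f"unknown contract: {contract}")

for c in ("outright sale", "fixed rental", "percentage"):
    print(f"{c:13s} -> ${producer_take_per_extra_ticket(c):.3f} per extra ticket")
```

Only under percentage contracts (or outright ownership of cinemas) does every extra admission raise the producer’s revenue, which is why each step along this contractual ladder sharpened the incentive to invest in quality.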

The Decline and Fall of the European Film Industry

Because the quality race happened when Europe was at war, European companies could not participate in the escalation of quality (and production costs) discussed above. This does not mean all of them were in crisis. Many made high profits during the war from newsreels, other short films, propaganda films and distribution. They were also able to participate in the shift towards the feature film, substantially increasing output in the new genre during the war (Figure 2). However, it was difficult for them to secure the massive amount of venture capital necessary to participate in the quality race while their countries were at war. Even if they had managed to, it might have been difficult to justify these lavish expenditures when people were dying in the trenches.

Yet a few European companies did participate in the escalation phase. The Danish Nordisk company invested heavily in long feature-type films, and bought cinema chains and distributors in Germany, Austria and Switzerland. Its strategy ended when the German government forced it to sell its German assets to the newly founded UFA company, in return for a 33 percent minority stake. The French Pathé company was one of the largest U.S. film producers. It set up its own U.S. distribution network and invested in heavily advertised serials (films in weekly installments) expecting that this would become the industry standard. As it turned out, Pathé bet on the wrong horse and was overtaken by competitors riding high on the feature film. Yet it eventually switched to features and remained a significant company. In the early 1920s, its U.S. assets were sold to Merrill Lynch and eventually became part of RKO.

Figure 2

Number of Feature Films Produced in Britain, France and the U.S., 1911-1925

(semi-logarithmic scale)

Source: Bakker 2005 [American Film Institute Catalogue; British Film Institute; Screen Digest; Globe, World Film Index, Chirat, Longue métrage.]

Because it could not participate in the quality race, the European film industry started to decline in relative terms. Its market share at home and abroad diminished substantially (Figure 3). In the 1900s European companies supplied at least half of the films shown in the U.S. In the early 1910s this dropped to about twenty percent. In the mid-1910s, when the feature film emerged, the European market share declined to nearly undetectable levels.

By the 1920s, most large European companies gave up film production altogether. Pathé and Gaumont sold their U.S. and international businesses, left film making and focused on distribution in France. Éclair, their major competitor, went bankrupt. Nordisk continued as an insignificant Danish film company, and eventually collapsed into receivership. The eleven largest Italian film producers formed a trust, which failed terribly, and one by one they fell into financial disaster. The famous British producer Cecil Hepworth went bankrupt. By late 1924, hardly any films were being made in Britain. American films were shown everywhere.

Figure 3

Market Shares by National Film Industries, U.S., Britain, France, 1893-1930

Note: EU/US is the share of European companies on the U.S. market, EU/UK is the share of European companies on the British market, and so on. For further details see Bakker 2005.

The Rise of Hollywood

Once they had lost out, it was difficult for European companies to catch up. First of all, since the sharply rising film production costs were fixed and sunk, market size was becoming of essential importance as it affected the amount of money that could be spent on a film. Exactly at this crucial moment, the European film market disintegrated, first because of war, later because of protectionism. The market size was further diminished by heavy taxes on cinema tickets that sharply increased the price of cinema compared to live entertainment.
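A stylized sketch can make this sunk-cost logic concrete. The audience-share function and all numbers below are hypothetical, chosen only to show the mechanism by which a larger market supports a larger outlay per film.

```python
# A minimal sketch of why market size governed film budgets once
# production costs were fixed and sunk. The audience-share function and
# all numbers are hypothetical, chosen only to exhibit the mechanism.

def best_outlay(market_size, step=50_000, cap=5_000_000):
    """Outlay maximizing profit = market_size * share(outlay) - outlay."""
    def share(q):  # audience share rises in outlay, with diminishing returns
        return q / (q + 500_000)
    return max(range(0, cap + 1, step),
               key=lambda q: market_size * share(q) - q)

# A producer serving a market four times as large can profitably sink
# far more into each film, out-spending rivals in smaller markets.
print(best_outlay(5_000_000))   # e.g. a fragmented national market
print(best_outlay(20_000_000))  # e.g. the integrated U.S. market
```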

Second, the emerging Hollywood studios benefited from first-mover advantages in feature film production. They owned international distribution networks; they could offer cinemas large portfolios of films at a discount (block-booking), sometimes before the films were even made (blind-bidding); the quality gap with European features was so large that it would be difficult to close in one go; and, finally, the American origin of the feature films of the 1910s had established U.S. films as a kind of brand, leaving consumers with high switching costs to try out films of other national origins. It would be extremely costly for European companies to re-enter international distribution, produce large portfolios, jump-start film quality, and establish a new brand of films – all at the same time (Bakker 2005).

A third factor was the rise of Hollywood as a production location. The large existing American Northeast-coast film industry and the newly emerging film industry in Florida declined as U.S. film companies started to locate in Southern California. First of all, the ‘sharing’ of inputs facilitated knowledge spillovers and allowed higher returns. The studios lowered costs because creative inputs had less down-time, needed to travel less, could participate in many try-outs to achieve optimal casting, and could easily be rented out to competitors when not immediately wanted. Hollywood also attracted new creative inputs through non-monetary means: even more than money, creative inputs wanted to maximize fame and professional recognition. For an actress, an offer to work with the world’s best directors, costume designers, lighting specialists and make-up artists was difficult to decline.

Second, a thick market for specialized supply and demand existed. Companies could easily rent out excess studio capacity (for example, B-films were made during the night), and a producer was quite likely to find the highly specific products or services needed somewhere in Hollywood (Christopherson and Storper 1987, 1989). While a European industrial ‘film’ district might have been competitive and even have had a lower overall cost/quality ratio than Hollywood, a first European major would have had a substantially higher cost/quality ratio (lacking external economies) and would therefore not easily enter (see, for example, Krugman and Obstfeld 2003, chapter 6). If entry did happen, the Hollywood studios could and would buy successful creative inputs away, since they could realize higher returns on those inputs. The result was American films with an even higher perceived quality, which perpetuated the situation.

Sunlight, climate and the variety of landscape in California were of course favorable to film production, but were not unique. Locations such as Florida, Italy, Spain and Southern France offered similar conditions.

The Coming of Sound

In 1927, sound films were introduced. The main innovator was Warner Brothers, backed by the bank Goldman, Sachs, which actually parachuted a vice-president into Warner. Although many other sound systems had been tried and marketed from the 1900s onwards, the electrical microphone, invented at Bell Labs in the mid-1920s, sharply increased the quality of sound films and made the transformation of the industry possible. Sound increased the interest in the film industry of large industrial companies such as General Electric, Western Electric and RCA, as well as of banks eager to finance the new innovation, such as the Bank of America and Goldman, Sachs.

In economic terms, sound represented an exogenous jump in sunk costs (and product quality) which did not affect the basic industry structure very much: the industry was already highly concentrated before sound, and the European, New York/New Jersey and Florida film industries were already shattered. What sound did do was industrialize away most of the musicians and entertainers who had complemented the silent films with sound and entertainment, especially those working in the smaller cinemas. This led to massive unemployment among musicians (see, for example, Gomery 1975; Kraft 1996).

The effect of sound film in Europe was to increase the domestic revenues of European films, which became more culture-specific now that they were in the local language, but at the same time it decreased the foreign revenues European films received (Bakker 2004b). It is difficult to assess the impact of sound film completely, as it coincided with increased protection: shortly before the coming of sound, many European countries set quotas for the number of foreign films that could be shown. In France, for example, where sound was widely adopted from 1930 onwards, the U.S. share of films dropped from eighty to fifty percent between 1926 and 1929, mainly as a result of protectionist legislation. During the 1930s, the share temporarily declined to about forty percent, and then hovered between fifty and sixty percent. In short, protectionism decreased the U.S. market share and increased the French market shares of French and other European films, while sound film increased the French market share, mostly at the expense of other European films and less so at the expense of U.S. films.

In Britain, the share of releases of American films declined from eighty percent in 1927 to seventy percent in 1930, while British films increased from five percent to twenty percent, exactly in line with the requirements of the 1927 quota act. After 1930, the American share remained roughly stable. This suggests that sound film did not have a large influence, and that the share of U.S. films was mainly brought down by the introduction of the Cinematograph Films Act in 1927, which set quotas for British films. Nevertheless, revenue data, which are unfortunately lacking, would be needed to give a definitive answer, as little is known about effects on the revenue per film.

The Economics of the Interwar Film Trade

Because film production costs were mainly fixed and sunk, international sales and distribution were important: they generated additional revenue at little additional cost to the producer, since the film itself had already been made. Films had special characteristics that necessitated international sales. Because they were essentially copyrights rather than physical products, the cost of additional sales was theoretically zero. Film production involved high endogenous sunk costs, recouped by renting out the copyright to the film. Marginal foreign revenue thus equaled marginal net revenue (and marginal profit, once the film’s production costs had been fully amortized). All companies, large or small, had to take foreign sales into account when setting film budgets (Bakker 2004b).

Films were intermediate products sold to foreign distributors and cinemas. While the rent paid varied depending on perceived quality and general conditions of supply and demand, the ticket price paid by consumers generally did not vary; it varied only by cinema, highest in first-run city-center cinemas and lowest in sixth-run ramshackle neighborhood cinemas. Cinemas used films to produce ‘spectator-hours’: a five-hundred-seat cinema providing one hour of film produced five hundred spectator-hours of entertainment. If it sold only three hundred tickets, the other two hundred spectator-hours it produced perished.

Because film was an intermediate product and a capital good at that, international competition could not be on price alone, just as sales of machines depend on the price/performance ratio. If we consider a film’s ‘capacity to sell spectator-hours’ (hereafter called selling capacity) as proportional to production costs, a low-budget producer could not simply push down a film’s rental price in line with its quality in order to make a sale; even at a price of zero, some low-budget films could not be sold. The reasons were twofold.

First, because cinemas had mostly fixed costs and few variable costs, a film’s selling capacity needed to be at least as large as the cinema’s fixed costs plus its rental price. A seven-hundred-seat cinema with a production capacity of 39,200 spectator-hours a week, weekly fixed costs of five hundred dollars, and an average admission price of five cents per spectator-hour needed a film selling at least ten thousand spectator-hours just to cover its fixed costs, and it would not be prepared to pay anything to rent that (marginal) film, because its ticket sales merely recouped those fixed costs. Films therefore needed a minimum selling capacity to cover cinema fixed costs. Producers could price down low-budget films only to just above the threshold level. With a lower expected selling capacity, these films could not be sold at any price.
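A minimal sketch of this break-even arithmetic, using only the figures from the hypothetical cinema above (the function and its name are illustrative, not from the source):

```python
# Break-even arithmetic for the hypothetical cinema in the text (illustrative only).

def max_rental(hours_sold, price=0.05, fixed_costs=500.0):
    """Most a cinema could pay to rent a film, given ticket revenue of
    `price` dollars per spectator-hour and weekly `fixed_costs` in dollars."""
    return price * hours_sold - fixed_costs

# A film selling 10,000 spectator-hours only covers fixed costs ($500 / $0.05),
# so the cinema can pay nothing to rent it; films expected to sell less are
# unsellable at any rental price.
print(max_rental(10_000))   # 0.0
print(max_rental(39_200))   # 1460.0 -> a sell-out film could command up to $1,460
```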

This reasoning assumes that we know a film’s selling capacity ex ante. A main feature distinguishing foreign markets from domestic ones was that uncertainty was markedly lower: from a film’s domestic launch the audience appeal was known, and each subsequent country added additional information. While a film’s audience appeal across countries was not perfectly correlated, uncertainty was reduced. For various companies, correlations between foreign and domestic revenues for entire film portfolios fluctuated between 0.60 and 0.95 (Bakker 2004b). Given the riskiness of film production, this reduction in uncertainty undoubtedly was important.

The second reason for limited price competition was opportunity cost, given cinemas’ production capacities. If the hypothetical cinema obtained a high-capacity film for a weekly rental of twelve hundred dollars, and the film sold all 39,200 spectator-hours, the cinema made a profit of $260 (($0.05 × 39,200) – $1,200 – $500 = $260). If a film with half the budget and, we assume, half the selling capacity rented for half the price, the cinema-owner would lose $120 (($0.05 × 19,600) – $600 – $500 = –$120). The cinema owner would thus want to pay no more than $220 for the lower-budget film, given that the high-budget film was available (($0.05 × 19,600) – $220 – $500 = $260). So a film with half the selling capacity of the high-capacity film would need to rent for under a fifth of the latter’s price to even make a transaction possible.
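The figures in this example can be reproduced directly; a quick sketch (the profit function simply restates the arithmetic of the text):

```python
# Reproducing the opportunity-cost example (all dollar figures from the text).
PRICE, FIXED = 0.05, 500  # admission per spectator-hour; weekly fixed costs

def cinema_profit(hours_sold, rental):
    """Weekly cinema profit: ticket revenue minus film rental minus fixed costs."""
    return PRICE * hours_sold - rental - FIXED

print(cinema_profit(39_200, 1_200))  # 260.0   high-capacity film at $1,200
print(cinema_profit(19_600, 600))    # -120.0  half-capacity film at half price
print(cinema_profit(19_600, 220))    # 260.0   rental leaving the owner indifferent
print(220 / 1_200)                   # ~0.18, i.e. under a fifth of the price
```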

These sharply increasing returns to selling capacity made the setting of production outlays important, as the right price/capacity ratio was crucial to winning foreign markets.

How Films Became Branded Products

To make sure film revenues exceeded cinema fixed costs, film companies transformed films into branded products. With the emergence of the feature film, they started to pay large sums to actors, actresses and directors, and for the rights to famous plays and novels. This is still a major characteristic of the film industry today, and one that fascinates many people. Yet the huge sums paid for stars and stories are not as irrational and haphazard as they may sometimes seem. They might be just as ‘rational’, and have just as quantifiable a return, as direct spending on marketing and promotion (Bakker 2001a).

To secure an audience, film producers borrowed branding techniques from other consumer goods industries. The short product life-cycle, however, forced them to extend the brand beyond one product (using trademarks or stars), to buy existing ‘brands’ such as famous plays or novels, and to deepen the product life-cycle by licensing their brands.

Thus, the main value of stars and stories lay not in their ability to predict successes, but in their services as giant ‘publicity machines’ that optimized advertising effectiveness by rapidly amassing high levels of brand-awareness. After a film’s release, information such as word-of-mouth and reviews would affect its success. The young age at which stars reached their peak, and the disproportionate income distribution even among the superstars, confirm that stars were paid for their ability to generate publicity. Likewise, because famous ‘stories’ commanded several times the price of original screenplays, they were at least partially bought for their popular appeal (Bakker 2001a).

Stars and stories also signaled a film’s qualities to some extent, if only by guaranteeing that the film actually contained them. Consumer preferences confirm that stars and stories were the main reasons to see a film. Further, the fame of stars was distributed disproportionately, possibly even twice as unequally as income. Film companies, aided by long-term contracts, probably captured part of the rent of their stars’ popularity. Gradually these companies specialized in developing and leasing their ‘instant brands’ to other consumer goods industries in the form of merchandising.

From the late 1930s onwards, the Hollywood studios used the new scientific market research techniques of George Gallup to continuously track the public’s brand-awareness of their major stars (Bakker 2003). Figure 4 is based on one such graph used by Hollywood. It shows that Lana Turner was a rising star, that Gable was consistently a top star, and that Stewart’s popularity was high but volatile. James Stewart was eleven percentage points more popular among the richest consumers than among the poorest, while Lana Turner differed by only a few percentage points. Additional segmentation by city size also seemed to matter, since substantial differences were found: Clark Gable was ten percentage points more popular in small cities than in large ones. Of the richest consumers, 51 percent wanted to see a movie starring Gable, yet they constituted just 14 percent of Gable’s market; of the poorest consumers, 57 percent wanted to see one, and they constituted 34 percent of his market. The increases in Gable’s popularity roughly coincided with his releases, suggesting that while producers used Gable partly for the brand-awareness of his name, each use (film) subsequently increased or maintained that awareness in what seems to have been a self-reinforcing process.

Figure 4

Popularity of Clark Gable, James Stewart and Lana Turner among U.S. respondents

April 1940 – October 1942, in percentage

Source: Audience Research Inc.; Bakker 2003.

The Film Industry’s Contribution to Economic Growth and Welfare

By the late 1930s, cinema had become an important mass entertainment industry. Nearly everyone in the Western world went to the cinema, and many went at least once a week. Cinema had made possible a massive growth in productivity in the entertainment industry, thereby disproving the notion of some economists that productivity growth in certain service industries is inherently impossible. Between 1900 and 1938, the output of the entertainment industry, measured in spectator-hours, grew substantially in the U.S., Britain and France, at rates varying from three to eleven percent per year over nearly forty years (Table 1). Output per worker increased from 2,453 spectator-hours in the U.S. in 1900 to 34,879 in 1938; in Britain it increased from 16,404 to 37,537 spectator-hours, and in France from 1,575 to 8,175 spectator-hours. This phenomenal growth can be explained partially by the addition of more capital (in the form of film technology and film production outlays) and partially by producing more efficiently with the existing amounts of capital and labor. The increase in efficiency (‘total factor productivity’) varied from about one percent per year in Britain to over five percent in the U.S., with France somewhere in between. In all three countries, this increase in efficiency was at least one and a half times the efficiency increase at the level of the entire economy. For the U.S. it was as much as five times, and for France more than three times, the national increase in efficiency (Bakker 2004a).
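The implied average annual growth of labor productivity can be backed out of these endpoint figures; a brief sketch, assuming simple compound growth over the 38 years between 1900 and 1938:

```python
# Compound annual growth of output per worker (spectator-hours), 1900-1938,
# computed from the figures quoted in the text.
figures = {"U.S.": (2_453, 34_879), "Britain": (16_404, 37_537), "France": (1_575, 8_175)}

for country, (start, end) in figures.items():
    rate = (end / start) ** (1 / 38) - 1
    print(f"{country}: {rate:.1%} per year")
# U.S.: 7.2%, Britain: 2.2%, France: 4.4% per year
```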

Another noteworthy feature is that labor productivity in entertainment varied less across countries in the late 1930s than it did in 1900. Part of the reason is that cinema technology made entertainment partially tradable and therefore pushed productivity in similar directions in all countries; the tradable part of the entertainment industry could now exert competitive pressure on the non-tradable part (Bakker 2004a). It is therefore not surprising that cinema caused the lowest efficiency increase in Britain, which already had a well-developed and competitive entertainment industry (with the highest labor and capital productivity in both 1900 and 1938), and higher efficiency increases in the U.S. and, to a lesser extent, in France, which had less well-developed entertainment industries in 1900.

Another way to measure the contribution of film technology to the economy in the late 1930s is to use a social savings methodology. If we assume that cinema did not exist and that all demand for entertainment (measured in spectator-hours) had to be met by live entertainment, we can calculate the extra costs to society and thus the amount saved by film technology. In the U.S., these social savings amounted to as much as 2.2 percent of GDP ($2.5 billion), in France to 1.4 percent ($0.16 billion), and in Britain to only 0.3 percent ($0.07 billion).
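The logic of the calculation is simple, even though the underlying cost estimates are not; a minimal sketch, with hypothetical unit costs and quantities chosen only to reproduce the U.S. headline figure:

```python
# Social savings = extra cost of meeting all entertainment demand live
# (structure follows the text; all numbers below are hypothetical).
def social_savings(spectator_hours, unit_cost_live, unit_cost_cinema):
    return spectator_hours * (unit_cost_live - unit_cost_cinema)

# E.g., 50 billion spectator-hours, live entertainment at 10 cents versus
# cinema at 5 cents per spectator-hour, would give the $2.5 billion
# reported for the U.S.
print(social_savings(50e9, 0.10, 0.05))  # 2.5e9 dollars
```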

A third and different way to look at the contribution of film technology to the economy is the consumer surplus generated by cinema. Contrary to the TFP and social savings techniques used above, which assume that cinema was a substitute for live entertainment, this approach assumes that cinema was a wholly new good, so that the entire consumer surplus it generated was ‘new’ and would not have existed without cinema. For an individual consumer, the surplus is the difference between the price she was willing to pay and the ticket price she actually paid. This difference varies from consumer to consumer, but with econometric techniques one can estimate the sum of individual surpluses for an entire country. The resulting national consumer surpluses from entertainment varied from about a fifth of total entertainment expenditure in the U.S. to about half in Britain and as much as three quarters in France.
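To fix ideas, a stylized version of the calculation under a linear demand curve (all parameters are hypothetical; the article's estimates rest on proper econometric techniques):

```python
# Consumer surplus with linear demand: the triangle between willingness to
# pay and the ticket price actually paid.
def consumer_surplus(tickets, price, choke_price):
    return 0.5 * tickets * (choke_price - price)

# Hypothetical: 100 million tickets at $0.25, with demand choking off at $0.50.
# The surplus is $12.5m, i.e. half of the $25m box office, an illustration
# comparable in magnitude to the British figure quoted above.
print(consumer_surplus(100e6, 0.25, 0.50))  # 12500000.0
```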

All these measures show that by the late 1930s cinema was making an essential contribution to total welfare as well as to the entertainment industry’s productivity.

Vertical Disintegration

After the Second World War, the Hollywood film industry disintegrated: production, distribution and exhibition became separate activities that were not always owned by the same organization. Three main causes brought about the vertical disintegration. First, the U.S. Supreme Court forced the studios to divest their cinema chains in 1948. Second, changes in the social-demographic structure in the U.S. brought about a shift towards entertainment within the home: many young couples started to live in the new suburbs and wanted to stay home for entertainment. Initially, they mainly used radio for this purpose and later they switched to television (Gomery 1985). Third, television broadcasting in itself (without the social-demographic changes that increased demand for it) constituted a new distribution channel for audiovisual entertainment and thus decreased the scarcity of distribution capacity. This meant that television took over the focus on the lowest common denominator from radio and cinema, while the latter two differentiated their output and started to focus more on specific market segments.

Figure 5

Real Cinema Box Office Revenue, Real Ticket Price and Number of Screens in the U.S., 1945-2002

Note: The values are in dollars of 2002, using the EH.Net consumer price deflator.

Source: Adapted from Vogel 2004 and Robertson 2001.

The consequence was a sharp fall in real box office revenue in the decade after the war (Figure 5). After the mid-1950s, real revenue stabilized and, with some fluctuations, remained at about the same level until the mid-1990s. The decline in the number of screens was more limited, and after 1963 the number of screens increased steadily again, reaching nearly twice the 1945 level in the 1990s. Since the 1990s there have been more movie screens in the U.S. than ever before. The proliferation of screens, coinciding with declining capacity per screen, facilitated market segmentation. Revenue per screen nearly halved in the decade after the war, rebounded during the 1960s, and then began a long and steady decline from 1970 onwards. The real price of a cinema ticket was quite stable until the 1960s, after which it more than doubled. Since the early 1970s, the price has been declining again, and nowadays the real admission price is about what it was in 1965.

It was in this adverse post-war climate that the vertical disintegration unfolded. It took place at three levels. First (obviously), the Hollywood studios divested their cinema chains. Second, they outsourced part of their film production and most of their production factors to independent companies: the studios now produced only part of the films they distributed, exchanged the long-term, seven-year contracts with star actors for per-film contracts, and sold off part of their studio facilities, renting them back for individual films. Third, the Hollywood studios’ main business became film distribution and financing. They specialized in planning and assembling a portfolio of films, contracting and financing most of them, and marketing and distributing them world-wide.

These developments had three important effects. First, production by a few large companies was replaced by production by many small, flexibly specialized companies. Southern California became an industrial district for the film industry, harboring an intricate network of these businesses, from set design companies and costume makers to special effects firms and equipment rental outfits (Storper and Christopherson 1989). Only at the level of distribution and financing did concentration remain high. Second, films became more differentiated and tailored to specific market segments; they were now aimed at a younger and more affluent audience. Third, the European film market gained in importance: because the social-demographic changes (suburbanization) and the advent of television happened somewhat later in Europe, the drop in cinema attendance also happened later there. The result was that the Hollywood studios off-shored a large chunk of their production, at times over half, to Europe in the 1960s. This was stimulated by lower European production costs, by difficulties in repatriating foreign film revenues, and by the vertical disintegration in California, which severed the studios’ ties with their production units and facilitated outside contracting.

European production companies could better adapt to changes in post-war demand because they were already flexibly specialized. The British film production industry, for example, had been fragmented almost from its emergence in the 1890s. In the late 1930s, distribution became concentrated, mainly through the efforts of J. Arthur Rank, while the production sector, a network of flexibly specialized companies in and around London, boomed. After the war, the drop in admissions followed the U.S. with about a ten-year delay (Figure 6). The drop in the number of screens showed the same lag but was more severe: about two-thirds of British cinema screens disappeared, versus only one-third in the U.S. In France, after the First World War, film production had disintegrated rapidly and chaotically into a network of numerous small companies, while a few large firms dominated distribution and production finance. The result was a burgeoning industry, actually one of the fastest growing French industries in the 1930s.

Figure 6

Admissions and Number of Screens in Britain, 1945-2005

Source: Screen Digest/Screen Finance/British Film Institute and Robertson 2001.

Several European companies attempted to (re-)enter international film distribution: Rank in the 1930s and 1950s, the International Film Finance Corporation in the 1960s, Gaumont in the 1970s, PolyGram in the 1970s and again in the 1990s, and Cannon in the 1980s. All of them failed in terms of long-run survival, even if some made profits in certain years. The only postwar entry strategy that was successful in terms of survival was the direct acquisition of a Hollywood studio (Bakker 2000).

The Come-Back of Hollywood

From the mid-1970s onwards, the Hollywood studios revived, and the slide in box office revenue was halted. Revenues were stabilized by the joint effect of seven factors. First, the blockbuster movie increased cinema attendance. These movies were heavily marketed and supported by intensive television advertising; Jaws was one of the first of this kind and an enormous success. Second, the U.S. film industry received several kinds of tax breaks from the early 1970s onwards, which were kept in force until the mid-1980s, when Hollywood was in good shape again. Third, coinciding with the blockbuster movie and the tax breaks, film budgets increased substantially, resulting in a higher perceived quality and a larger quality difference with television, drawing more consumers into the cinema. Fourth, the rise of multiplex cinemas, cinemas with several screens, increased consumer choice and the appeal of cinema by offering more variety within a single venue, thus decreasing the difference with television in this respect. Fifth, one could argue that the flexible specialization of the California film industry was completed in the early 1970s, making the industry ready to adapt more flexibly to changes in the market; MGM’s sale of its studio complex in 1970 marked the definitive end of an era. Sixth, new income streams from video sales and rentals and from cable television increased the revenues a high-quality film could generate. Seventh, European broadcasting deregulation substantially increased the demand for films by television stations.

From the 1990s onwards, further growth was driven by newer markets in Eastern Europe and Asia. Film industries outside the West also grew substantially, such as those of Japan, Hong Kong, India and China. At the same time, the European Union started a large-scale subsidy program for its audiovisual industry, with mixed economic effects. By 1997, ten years after the start of the program, a film made in the European Union cost 500,000 euros on average, was seventy to eighty percent state-financed, and grossed 800,000 euros world-wide, reaching an audience of 150,000 persons. In contrast, the average American film cost fifteen million euros, was nearly one hundred percent privately financed, grossed 58 million euros, and reached 10.5 million persons (Dale 1997). This seventy-fold difference in audience reach is remarkable. Even measured by gross return on investment or gross margin, the U.S. still had a fivefold and a twofold lead over Europe, respectively.[1] In few other industries does such a pronounced difference exist.
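These ratios follow directly from the quoted figures; a quick check (numbers from the paragraph above and from footnote 1):

```python
# EU vs. U.S. film performance, 1997 (euros; figures as quoted from Dale 1997).
eu = {"cost": 500_000, "gross": 800_000, "viewers": 150_000}
us = {"cost": 15_000_000, "gross": 58_000_000, "viewers": 10_500_000}

print(us["viewers"] / eu["viewers"])  # 70.0 -> the seventy-fold gap in audience

for name, f in (("EU", eu), ("US", us)):
    roi = f["gross"] / f["cost"] - 1                # gross return on investment
    margin = (f["gross"] - f["cost"]) / f["gross"]  # gross margin
    print(f"{name}: ROI {roi:.0%}, margin {margin:.0%}")
# EU: ROI 60%, margin 38% (37.5 unrounded); US: ROI 287%, margin 74%
```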

During the 1990s, the film industry moved into television broadcasting. In Europe, broadcasters often co-funded small-scale boutique film production. In the U.S., the Hollywood studios started to merge with broadcasters. In the 1950s, the studios had experienced difficulties obtaining broadcasting licenses because their reputation had been compromised by the antitrust actions; they had to wait forty years before they could finally complete what they had intended.[2] Disney, for example, bought the ABC network, Paramount’s owner Viacom bought CBS, and General Electric, owner of NBC, bought Universal. At the same time, the feature film industry was becoming more connected to other entertainment industries, such as videogames, theme parks and musicals. With video game revenues now exceeding films’ box office revenues, it seems likely that feature films will become the flagship part of a large entertainment supply system that exploits the intellectual property in feature films in many different formats and markets.

Conclusion

The take-off of the film industry in the early twentieth century was driven mainly by changes in demand. Cinema industrialized entertainment by standardizing it, automating it and making it tradable. After its early years, the industry experienced a quality race that led to increasing industrial concentration. Only later did geographical concentration take place, in Southern California. Cinema made a substantial contribution to productivity and total welfare, especially before television. After television, the industry experienced vertical disintegration, the flexible specialization of production, and a self-reinforcing process of increasing distribution channels and capacity as well as market growth. Cinema, then, was not only the first in a line of media industries that industrialized entertainment, but also the first in a series of international industries that industrialized services. The evolution of the film industry may thus give insight into technological change and its attendant welfare gains in many service industries to come.

Selected Bibliography

Allen, Robert C. Vaudeville and Film, 1895-1915. New York: Arno Press, 1980.

Bächlin, Peter. Der Film als Ware. Basel: Burg-Verlag, 1945.

Bakker, Gerben. “American Dreams: The European Film Industry from Dominance to Decline.” EUI Review (2000): 28-36.

Bakker, Gerben. “Stars and Stories: How Films Became Branded Products.” Enterprise and Society 2, no. 3 (2001a): 461-502.

Bakker, Gerben. Entertainment Industrialised: The Emergence of the International Film Industry, 1890-1940. Ph.D. dissertation, European University Institute, 2001b.

Bakker, Gerben. “Building Knowledge about the Consumer: The Emergence of Market Research in the Motion Picture Industry.” Business History 45, no. 1 (2003): 101-27.

Bakker, Gerben. “At the Origins of Increased Productivity Growth in Services: Productivity, Social Savings and the Consumer Surplus of the Film Industry, 1900-1938.” Working Papers in Economic History, No. 81, Department of Economic History, London School of Economics, 2004a.

Bakker, Gerben. “Selling French Films on Foreign Markets: The International Strategy of a Medium-Sized Film Company.” Enterprise and Society 5 (2004b): 45-76.

Bakker, Gerben. “The Decline and Fall of the European Film Industry: Sunk Costs, Market Size and Market Structure, 1895-1926.” Economic History Review 58, no. 2 (2005): 311-52.

Caves, Richard E. Creative Industries: Contracts between Art and Commerce. Cambridge, MA: Harvard University Press, 2000.

Christopherson, Susan, and Michael Storper. “Flexible Specialization and Regional Agglomerations: The Case of the U.S. Motion Picture Industry.” Annals of the Association of American Geographers 77, no. 1 (1987).

Christopherson, Susan, and Michael Storper. “The Effects of Flexible Specialization on Industrial Politics and the Labor Market: The Motion Picture Industry.” Industrial and Labor Relations Review 42, no. 3 (1989): 331-47.

Gomery, Douglas. The Coming of Sound to the American Cinema: A History of the Transformation of an Industry. Ph.D. dissertation, University of Wisconsin, 1975.

Gomery, Douglas. “The Coming of Television and the ‘Lost’ Motion Picture Audience.” Journal of Film and Video 37, no. 3 (1985): 5-11.

Gomery, Douglas. The Hollywood Studio System. London: MacMillan/British Film Institute, 1986; reprinted 2005.

Kraft, James P. Stage to Studio: Musicians and the Sound Revolution, 1890-1950. Baltimore: Johns Hopkins University Press, 1996.

Krugman, Paul R., and Maurice Obstfeld. International Economics: Theory and Policy, sixth edition. Reading, MA: Addison-Wesley, 2003.

Low, Rachael, and Roger Manvell. The History of the British Film, 1896-1906. London: George Allen & Unwin, 1948.

Michaelis, Anthony R. “The Photographic Arts: Cinematography.” In A History of Technology, Vol. V: The Late Nineteenth Century, c. 1850 to c. 1900, edited by Charles Singer, 734-51. Oxford: Clarendon Press, 1958; reprint 1980.

Mokyr, Joel. The Lever of Riches: Technological Creativity and Economic Progress. Oxford: Oxford University Press, 1990.

Musser, Charles. The Emergence of Cinema: The American Screen to 1907. The History of American Cinema, Vol. I. New York: Scribner, 1990.

Sedgwick, John. “Product Differentiation at the Movies: Hollywood, 1946-65.” Journal of Economic History 63 (2002): 676-705.

Sedgwick, John, and Michael Pokorny. “The Film Business in Britain and the United States during the 1930s.” Economic History Review 57, no. 1 (2005): 79-112.

Sedgwick, John, and Mike Pokorny, editors. An Economic History of Film. London: Routledge, 2004.

Thompson, Kristin. Exporting Entertainment: America in the World Film Market, 1907-1934. London: British Film Institute, 1985.

Vogel, Harold L. Entertainment Industry Economics: A Guide for Financial Analysis, sixth edition. Cambridge: Cambridge University Press, 2004.

Gerben Bakker may be contacted at gbakker at essex.ac.uk


[1] Gross return on investment, disregarding interest costs and distribution charges, was 60 percent for European vs. 287 percent for U.S. films. Gross margin was 37 percent for European vs. 74 percent for U.S. films. Costs per viewer were 3.33 vs. 1.43 euros; revenues per viewer were 5.30 vs. 5.52 euros.

[2] The author is indebted to Douglas Gomery for this point.

Citation: Bakker, Gerben. “The Economic History of the International Film Industry”. EH.Net Encyclopedia, edited by Robert Whaples. February 10, 2008. URL http://eh.net/encyclopedia/the-economic-history-of-the-international-film-industry/

Federal Reserve System

Mark Toma, University of Kentucky

The historical origins of the Federal Reserve System can be traced to chronic currency problems in the nineteenth century. Under the National Banking System, national banks were required to hold eligible government securities in order to obtain national bank notes from the Treasury. Contemporary observers complained that such restrictions made the currency inelastic: the supply of money did not expand when the demand for money rose, resulting in periodic shortages of currency and bank panics. In response to the Panic of 1907, Congress created the National Monetary Commission, charged with the mission of reforming the currency system. It soon became clear that some type of central banking institution would emerge from the Commission’s deliberations, albeit one operating within the context of a gold standard. The key question was what type of central bank it would be. Would it be a centralized one, or a populist, decentralized one?

Early victories went to the advocates of centralization. The head of the National Monetary Commission, Republican Nelson Aldrich, presented a bill to Congress in early 1912 that followed the European model of a monopoly central bank. But Aldrich’s bill stalled, and the election of a Democratic President, Woodrow Wilson, in November 1912 gave added momentum to the populist movement. A central bank embodying a decentralized, competitive supply mechanism was now on the fast track.

Over the course of 1913, Wilson and the Democratic Congress crafted the populist blueprint that would become the Federal Reserve Act and would shape the operation of the currency system during the early years (1914-1930) of the Federal Reserve. The nominal structure of the Fed was a curious mixture of private and public elements. On the private side, the Fed was to be a polycentric system of 12 reserve banks, each having the power to produce a distinct gold-backed currency marked by a seal indicating the district of origin, each owned by its member banks, and each required to finance itself from earnings. On the public side, the most important government element was the Federal Reserve Board, a political body that was to oversee the operation of the system.

The details of the Federal Reserve Act would determine how the private-public balance played out. Consider first the financing arrangement. The Act forcefully rejected the typical budgetary arrangement, instead giving reserve bank management first call on earnings from discount loans, open market operations, and fees charged for providing clearinghouse services to member banks. These earnings were to be used to finance reserve bank expenses, dividend payments to member banks, and, residually, payments to the Treasury. One thing the Act did not do was authorize payments from the general government to individual reserve banks in case of a shortfall in earnings. In this sense, the reserve banks faced a bottom line.

With respect to ownership rights, the Federal Reserve Act nominally designated member banks as shareholders. They were required to subscribe to the capital stock of their reserve bank. Stock ownership, however, did not convey voting powers. Nor were there secondary markets where shares could be traded.

With respect to the selection of the Fed management team, every member of the Federal Reserve Board was to have a government connection. In addition to five political appointees, the Board included the Secretary of the Treasury and the Comptroller of the Currency. Discount rates set by the individual reserve banks were “subject to review and determination of the Federal Reserve Board.” Thus the government, through the Board, could influence, if not control, money created through the discount window.

The Federal Reserve Act contained one important loophole, however, which tended to undermine the Board’s influence. According to the Act, the one margin of adjustment over which individual reserve banks unambiguously could exercise discretion was the amount of government securities to buy and sell. These open market operations were to be at the initiative of the individual reserve banks and each bank was to have first claim to the earnings generated by the government securities in its portfolio.

Whether the populist founders of the Federal Reserve were fully aware of the role the open market operation loophole might play is subject to debate. Nevertheless, the loophole emerged as a key feature of the money supply process in the 1920s, the first decade of the system’s peacetime operation. While gold convertibility held currency oversupply in check, the power possessed by each reserve bank to purchase government securities for its own account held in check any tendency the Board might have had to pursue a tight monetary policy by raising discount rates significantly above market rates.

The Great Depression marked the end of the novel experiment in monetary populism. The Federal Reserve Board sharply raised discount rates, and the reserve banks failed to fill the void with open market operations. Numerous explanations have been offered for the restrictive depression policy. The traditional explanations have emphasized a failure of leadership, a flawed policy procedure, and a rigid adherence to the gold standard. Another contributing factor may have been a shift in decision-making power away from the individual reserve banks and toward the Board, which effectively shut down the decentralized open market operations that had been the hallmark of the twenties.

In the aftermath of the Great Depression, a series of presidential and legislative initiatives created the Fed we now know. Franklin Roosevelt ended the domestic gold standard in 1934 and the Banking Act of 1935 centralized open market operations under the authority of a new agency, the Federal Open Market Committee, a majority of whose members were political appointees. Interestingly, the new powers lay dormant for the next decade and a half, as the Treasury took the monetary lead. The Treasury-Fed Accord of 1951 ended the period of Treasury dominance and the Fed assumed the role of a full-fledged central bank exercising significant discretionary powers in the last half of the twentieth century.

Recent global events have rekindled mainstream interest in the historical origins of the Fed. For one thing, debate on the institutional structure of the European Monetary Union has invited comparisons with the founding of the Fed. More generally, financial innovations have made it easier for agents worldwide to substitute among various currencies, thereby reducing the power of any single currency supplier. The upshot is that currency supply in the twenty-first century may have more in common with the “populist” early Fed than the “monopolist” Fed of the late twentieth century.

Further Reading

Broz, J. Lawrence. The International Origins of the Federal Reserve System. Ithaca: Cornell University Press, 1997.

Friedman, Milton, and Anna Schwartz. A Monetary History of the United States, 1867-1960. Princeton: Princeton University Press, 1963.

Meltzer, Allan H. A History of the Federal Reserve, Volume 1, 1913-1951. Chicago: University of Chicago Press, 2003.

Toma, Mark. Competition and Monopoly in the Federal Reserve System, 1914-1951. Cambridge: Cambridge University Press, 1997.

Wheelock, David. The Strategy and Consistency of Federal Reserve Policy, 1924-1933. Cambridge: Cambridge University Press, 1991.

Citation: Toma, Mark. “Federal Reserve”. EH.Net Encyclopedia, edited by Robert Whaples. June 2, 2004. URL http://eh.net/encyclopedia/federal-reserve-system/

The Economics of American Farm Unrest, 1865-1900

James I. Stewart, Reed College

American farmers have often expressed dissatisfaction with their lot, but the decades after the Civil War were extraordinary in this regard. The period was one of persistent and acute political unrest. The specific concerns of farmers were varied, but at their core was what farmers perceived to be their deteriorating political and economic status.

The defining feature of farm unrest was the effort of farmers to join together for mutual gain. Farmers formed cooperatives, interest groups, and political parties to protest their declining fortunes and to increase their political and economic power. The first such group to appear was the Grange, or Patrons of Husbandry, founded in the 1860s to address farmers’ grievances against the railroads and their desire for greater cooperation in business matters. The agrarian-dominated Greenback Party followed in the 1870s. Its main goal was to increase the amount of money in circulation and thus to lower the costs of credit to farmers. The Farmers’ Alliance appeared in the 1880s. Its members practiced cooperative marketing and lobbied the government for various kinds of business and banking regulation. In the 1890s, aggrieved farmers took their most ambitious steps yet, forming the independent People’s or Populist Party to challenge the dominance of the unsympathetic Republican and Democratic parties.

Although farmers in every region of the country had cause for agitation, unrest was probably greatest in the northern prairie and Plains states. A series of droughts there between 1870 and 1900 created recurring hardships, and Midwestern grain farmers faced growing price competition from producers abroad. Farmers in the South also revolted, but their protests were muted by racism. Black farmers were excluded from most farm groups, and many white farmers were reluctant to join the attack on established politics and business for fear of undermining the system of social control that kept blacks inferior to whites (Goodwyn, 1978).

The Debate about the Causes of Farm Unrest

For a long time, a debate raged about the causes of farm unrest. Historians could not reconcile the complaints of farmers with evidence about the agricultural terms of trade: the prices farmers received for their output, especially relative to the prices of the other goods and services farmers purchased, such as transportation, credit, and manufactures. Now, however, there appears to be some consensus. Before exploring the basis for this consensus, it will be useful to examine briefly the complaints of farmers. What were farmers so upset about? Why did they feel so threatened?

The Complaints of Farmers

The complaints of farmers are well documented (Buck, 1913; Hicks, 1931) and relatively uncontroversial. They primarily concerned farmers’ declining incomes and fractious business relationships. First, farmers claimed that farm prices were falling and, as a consequence, so were their incomes. They generally blamed low prices on over-production. Second, farmers alleged that monopolistic railroads and grain elevators charged unfair prices for their services. Government regulation was the farmers’ solution to the problem of monopoly. Third, there was a perceived shortage of credit and money. Farmers believed that interest rates were too high because of monopolistic lenders, and that the money supply was inadequate, producing deflation. A falling price level increased the real burden of debt, as farmers repaid loans with dollars worth significantly more than those they had borrowed. Farmers demanded ceilings on interest rates, public boards to mediate foreclosure proceedings, and the free coinage of silver by the U.S. Treasury to increase the money supply. Finally, farmers complained about the political influence of the railroads, big business, and money lenders, which had undue influence over policy making in the state legislatures and U.S. Congress. In short, farmers felt their economic and political interests were being shortchanged by a gang of greedy railroads, creditors, and industrialists.
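The deflation complaint has a simple arithmetic core: with the nominal interest rate fixed in the loan contract, falling prices raise the real rate the borrower pays. A minimal sketch with hypothetical loan terms (the Fisher relation applied to illustrative numbers, not figures from the source):

```python
# Real interest burden under deflation (hypothetical figures, Fisher relation).
nominal_rate = 0.08   # contract rate on a frontier mortgage (assumed)
inflation = -0.03     # 3 percent annual deflation (assumed)

real_rate = (1 + nominal_rate) / (1 + inflation) - 1
print(f"{real_rate:.1%}")  # 11.3%: the farmer repays in appreciating dollars
```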

The Puzzle of Farm Unrest

Economic historians have subjected the complaints of farmers to rigorous statistical testing. Each claim has been found inconsistent to some extent with the available evidence about the terms of trade.

First, consider farmers’ complaints about prices. Farm prices were indeed falling, along with the prices of most other goods during this period. This does not imply, however, that farm incomes were also falling. To begin with, real prices (farm prices relative to the general price level) are a better measure of the value farmers received for their output, and real farm prices over the post-Civil War period show an approximately horizontal trend (North, 1974). Moreover, even if real farm prices had been falling, farmers were not necessarily worse off (Fogel and Rutner, 1972): rising farm productivity could have offset the negative effects of falling real prices on incomes. Finally, direct evidence about the incomes of farmers is scarce, but estimates suggest that farm incomes were not falling (Bowman, 1965). Some regions experienced periods of distress (Iowa and Illinois in the 1870s and Kansas and Nebraska in the 1890s, for instance), but there was no general agricultural depression. If anything, data on wages, land rents, and returns to capital suggest that land in the West was opened to settlement too slowly (Fogel and Rutner, 1972).

Next, consider farmers’ claims about interest rates and mortgage debt. It is true that interest rates on the frontier were high, averaging two to three percentage points more than those in the Northeast. Naturally, frontier farmers complained bitterly about paying so much for credit. Lenders, however, may have been well justified in the rates they charged. The susceptibility of the frontier to drought and the financial insecurity of many settlers created above-average lending risks for which creditors had to be compensated (Bogue, 1955). For instance, borrowers often defaulted, leaving land worth only a fraction of the loan as security. This casts doubt on the exploitation hypothesis. Furthermore, when the claims of farmers were subjected to rigorous statistical testing, little evidence was found to substantiate the monopoly hypothesis (Eichengreen, 1984). Instead, consistent with the unique features of the frontier mortgage market, high interest rates appear to have been compensation for the inherent risks of lending to frontier farmers. Finally, regarding the burden of a falling price level on borrowers, deflation may not have been as onerous as farmers alleged. The typical mortgage had a short term, less than five years, implying that lenders and borrowers could often anticipate changes in the price level (North, 1974).

Last, consider farmers’ complaints about the railroads. These appear to have the most merit. Nevertheless, for a long time, most historians dismissed farmers’ grievances, assuming that the real cost to farmers of shipping their produce to market must have been steadily falling because of productivity improvements in the railroad sector. As Robert Higgs (1970) shows, however, gains in productivity in rail shipping did not necessarily translate into lower rates for farmers and thus higher farm gate prices. Real rates (railroad rates relative to the prices farmers received for their output) were highly variable between 1865 and 1900. More important, over the whole period, there was little decrease in rail rates relative to farm prices. Only in the 1890s did the terms of trade begin to improve in farmers’ favor. Employing different data, Aldrich (1985) finds a downward trend in railroad rates before 1880 but then no trend or an increasing trend thereafter.

The Causes of Farm Unrest

Many of the complaints of farmers are weakly supported or even contradicted by the available evidence, leaving questions about the true causes of farm unrest. If the monopoly power of the railroads and creditors was not responsible for farmers’ woes, what or who was?

Most economic historians now believe that agrarian unrest reflected the growing risks and uncertainties of agriculture after the Civil War. Uncertainty or risk can be thought of as an economic force that reduces welfare. Today, farmers use sophisticated production technologies and agricultural futures markets to reduce their exposure to environmental and economic uncertainty at little cost. In the late 1800s, the avoidance of risk was much more costly. As a result, increases in risk and uncertainty made farmers worse off. These uncertainties and risks appear to have been particularly severe for farmers on the frontier.

What were the sources of risk? First, agriculture had become more commercial after the Civil War (Mayhew, 1972). Formerly self-sufficient farmers were now dependent on creditors, merchants, and railroads for their livelihoods. These relationships created opportunities for economic gain but also obligations, hardships, and risks that many farmers did not welcome. Second, world grain markets were becoming ever more integrated, creating competition in markets abroad once dominated by U.S. producers, and greater price uncertainty (North, 1974). Third, agriculture was now being practiced in the semi-arid region of the United States. In Kansas, Nebraska, and the Dakotas, farmers encountered unfamiliar and adverse growing conditions. Recurring but unpredictable droughts caused economic hardship for many Plains farmers. Their plight was made worse by the greater price elasticity (responsiveness) of world agricultural supply (North, 1974): drought-stricken farmers with diminished harvests could no longer count on higher domestic prices for their crops.

A growing body of research now supports the hypothesis that discontent was caused by increasing risks and uncertainties in U.S. agriculture. First, there are strong correlations between different measures of economic risk and uncertainty and the geographic distribution of unrest in fourteen northern states between 1866 and 1909 (McGuire, 1981; 1982). Farm unrest was closely tied to the variability of farm prices, yields, and incomes across the northern states. Second, unrest was highest in states with high rates of farm foreclosures (Stock, 1984). On the frontier, the typical farmer would have had a neighbor whose farm was seized by creditors, and thus cause to worry about his own future financial security. Third, Populist agitation in Kansas in the 1890s coincided with unexpected variability in crop prices that resulted in lost profits and lower incomes (DeCanio, 1980). Finally, as mentioned already, high interest rates were not a sign of monopoly but rather compensation to creditors for the greater risks of frontier lending (Eichengreen, 1984).

The Historical Significance of Farm Unrest

Farm unrest had profound and lasting consequences for American economic development. Above all, it ushered in fundamental and lasting institutional change (Hughes, 1991; Libecap, 1992).

The change began in the 1870s. In response to the complaints of farmers, Midwestern state legislatures enacted a series of laws regulating the prices and practices of railroads, grain elevators, and warehouses. These “Granger” laws were a turning point because they reversed a longstanding trend of decreasing government regulation of the private sector. They also prompted a series of landmark court rulings affirming the regulatory prerogatives of government (Hughes, 1991). In Munn v. Illinois (1877), the U.S. Supreme Court rejected a challenge to the legality of the Granger laws, famously ruling that government had the legal right to regulate any commerce “affected with the public interest.”

Farmers also sought redress of their grievances at the federal level. In 1886, the U.S. Supreme Court ruled in Wabash, St. Louis, and Pacific Railway v. Illinois that only the federal government had the right to regulate commerce between the states. This meant the states could not regulate many matters of concern to farmers. In 1887, Congress passed the Interstate Commerce Act, which gave the Interstate Commerce Commission regulatory oversight over long-distance rail shipping. This legislation was followed by the Sherman Antitrust Act of 1890, which prohibited monopolies and certain conspiracies, combinations, and restraints of trade. Midwestern cattle farmers had urged the passage of an antitrust law, alleging that the notorious Chicago meat packers had conspired to keep cattle prices artificially low (Libecap, 1992). Both laws marked the beginning of growing federal involvement in private economic activity (Hughes, 1991; Ulen, 1980).

Not all agrarian proposals were acted upon, but even demands that fell on deaf ears in Congress and the state legislatures had lasting impacts (Hicks, 1931). For instance, many Alliance and Populist demands such as the graduated income tax and the direct election of U.S. Senators became law during the Progressive Era.

Historians disagree about the legacy of the late nineteenth century farm movements. Some view their contributions to U.S. institutional development positively (Hicks, 1931), while others do not (Hughes, 1991). Nonetheless, few would dispute their impact. In fact, it is possible to see much institutional change in the U.S. over the last century as the logical consequence of political and legal developments initiated by farmers during the late 1800s (Hughes, 1991).

The Sources of Cooperation in the Farm Protest Movement

Nineteenth century farmers were remarkably successful at joining together to increase their economic and political power. Nevertheless, one aspect of farm unrest that has largely been neglected by scholars is the sources of cooperation in promoting agrarian interests. According to Olson (1965), large lobbying or interest groups like the Grange and the Farmers’ Alliance should have been plagued by free-riding: the incentive for individuals not to contribute to the collective production of public goods, those goods for which it is impossible or very costly to exclude others from enjoying them. A rational and self-interested farmer would not join a lobbying group because he could enjoy the benefits of its work without incurring any of the costs.

Judging by their political power, most farm interest groups were, however, able to curb free-riding. Stewart (2006) studies how the Dakota Farmers’ Alliance did this between 1885 and 1890. First, the Dakota Farmers’ Alliance provided valuable goods and services to its members that were not available to outsiders, creating economic incentives for membership. These goods and services included better terms of trade through cooperative marketing and the sharing of productivity-enhancing information about agriculture. Second, the structure of the Dakota Farmers’ Alliance as a federation of township chapters enabled the group to monitor and sanction free-riders. Within townships, Alliance members were able to pressure others to join the group. This strategy appears to have succeeded among German and Norwegian immigrants, who were much more likely than others to join the Dakota Farmers’ Alliance and whose probability of joining increased with the share of their nativity group in the township population. This is consistent with long-standing social norms of cooperation in Germany and Norway, and with economic theory about the use of social norms to elicit cooperation in collective action.

References

Aldrich, Mark. “A Note on Railroad Rates and the Populist Uprising.” Agricultural History 41 (1985): 835-52.

Bogue, Allan G. Money at Interest: The Farm Mortgage on the Middle Border. Ithaca, NY: Cornell University Press, 1955.

Bowman, John. “An Economic Analysis of Midwestern Farm Values and Farm Land Income, 1860-1900.” Yale Economic Essays 5 (1965): 317-52.

Buck, Solon J. The Granger Movement: A Study of Agricultural Organization and Its Political, Economic, and Social Manifestations, 1870-1880. Cambridge: Harvard University Press. 1913.

DeCanio, Stephen J. “Economic Losses from Forecasting Error in Agriculture.” Journal of Political Economy 88 (1980): 234-57.

Eichengreen, Barry. “Mortgage Interest Rates in the Populist Era.” American Economic Review 74 (1984): 995-1015.

Fogel, Robert W. and Jack L. Rutner. “The Efficiency Effects of Federal Land Policy, 1850-1900: A Report of Some Provisional Findings.” In The Dimensions of Quantitative Research in History, edited by Wayne O. Aydelotte, Allan G. Bogue and Robert W. Fogel. Princeton, NJ: Princeton University Press, 1972.

Goodwyn, Lawrence. The Populist Moment: A Short History of the Agrarian Revolt in America. New York: Oxford University Press, 1978.

Hicks, John D. The Populist Revolt: A History of the Farmers’ Alliance and the People’s Party. Minneapolis: University of Minnesota Press, 1931.

Higgs, Robert. “Railroad Rates and the Populist Uprising.” Agricultural History 44 (1970): 291-97.

Hughes, Jonathan T. The Government Habit Redux: Economic Controls from Colonial Times to the Present. Princeton, NJ: Princeton University Press, 1991.

Libecap, Gary D. “The Rise of the Chicago Packers and the Origins of Meat Inspection and Antitrust.” Economic Inquiry 30 (1992): 242-62.

Mayhew, Anne. “A Reappraisal of the Causes of the Farm Protest Movement in the United States, 1870-1900.” Journal of Economic History 32 (1972): 464-75.

McGuire, Robert A. “Economic Causes of Late Nineteenth Century Agrarian Unrest: New Evidence.” Journal of Economic History 41 (1981): 835-52.

McGuire, Robert A. “Economic Causes of Late Nineteenth Century Agrarian Unrest: Reply.” Journal of Economic History 42 (1982): 697-99.

North, Douglass. Growth and Welfare in the American Past: A New Economic History. Englewood Cliffs, NJ: Prentice Hall, 1974.

Olson, Mancur. The Logic of Collective Action: Public Goods and the Theory of Groups. Cambridge, MA: Harvard University Press, 1965.

Stewart, James I. “Free-riding, Collective Action, and Farm Interest Group Membership.” Reed College Working Paper, 2006. Available at http://www.reed.edu/~stewartj.

Stock, James H. “Real Estate Mortgages, Foreclosures, and Midwestern Agrarian Unrest, 1865-1920.” Journal of Economic History 44 (1984): 89-105.

Ulen, Thomas C. “The Market for Regulation: The ICC from 1887 to 1920.” American Economic Review 70 (1980): 306-10.

Citation: Stewart, James. “The Economics of American Farm Unrest, 1865-1900.” EH.Net Encyclopedia, edited by Robert Whaples. February 10, 2008. URL http://eh.net/encyclopedia/the-economics-of-american-farm-unrest-1865-1900/

Fair Housing Laws

William J. Collins, Vanderbilt University

Before the Civil Rights Movement, housing market discrimination was common and blatant, especially against African Americans but also against Jews and other minority groups.1 This essay focuses on the treatment of African Americans, but readers should keep in mind the pervasiveness of housing discrimination around 1950. By “discrimination,” I mean (as usual in economics) the differential treatment of market participants on the basis of their race or ethnicity — for example, the refusal to rent an apartment to a black family that is willing and able to pay a rental price that would be acceptable if the family were white. Proponents of fair housing laws, at the local, state, and federal levels, hoped that the laws would effectively limit housing market discrimination.

Around mid-century, many barriers inhibited African Americans’ residential mobility, including racially restrictive covenants among white property owners, biased lending practices of banks and government institutions, strong social norms against selling or renting property to blacks outside established black neighborhoods, and harassment of blacks seeking residence in otherwise white neighborhoods (Myrdal 1944, Abrams 1955, Meyer 2000). Since then, the potentially adverse effects of housing discrimination on blacks’ accumulation of wealth through housing equity and on blacks’ access to high quality schools, jobs, and public goods have been widely discussed (Kain 1968, Oliver and Shapiro 1995, Yinger 2001). A related literature has sought to understand the apparent connection between residential segregation, in part a legacy of housing market discrimination (Kain and Quigley 1975), and a variety of adverse socioeconomic outcomes (Massey and Denton 1993, Cutler and Glaeser 1997, Collins and Margo 2000).

Given these concerns, it is not surprising that dismantling housing market discrimination has been among the top priorities of civil rights groups and urban policymakers for decades. Starting in 1959, states began implementing fair housing laws to curb discriminatory practices by sellers, renters, real estate agents, builders, and lenders. In 1968, almost immediately after the murder of Martin Luther King Jr., the United States Congress passed the Fair Housing Act. The Fair Housing Amendments Act of 1988 substantially broadened federal enforcement powers (Yinger 1999).

Fair housing laws are commonly placed among the Civil Rights Movement’s central legislative achievements. Unfortunately, we still do not have convincing measures of the laws’ impact on blacks’ housing market outcomes. It is clear that the laws did not completely eliminate discriminatory practices, let alone the residential patterns that such practices had promoted. The more relevant open questions concern how much headway the laws made on discriminatory practices and segregation, and especially, whether minority families improved their housing situation because of the laws’ implementation. On the basis of the existing evidence, it would be difficult to argue that the laws made a large direct contribution to improvements in African Americans’ housing market outcomes (or those of other groups protected by the laws). One could argue, however, that fair housing was one element of a larger campaign that successfully changed discriminatory norms and policies.

Fair Housing’s Origins and Operation

The federal Fair Housing Act of 1968 remains a highly visible accomplishment of the Civil Rights Movement. It is important to note, however, that the basic ideas that underpinned the federal legislation emerged long before 1968. State and local governments incrementally adopted nondiscriminatory standards for public housing starting in the late 1930s. The application of anti-discrimination policy to the private housing market, however, was among the Civil Rights Movement’s least popular initiatives among whites, and as a result, fair housing legislation lagged years behind fair-employment and public accommodations laws (Lockard 1968). On one level, this reflected whites’ concern about property values and their desire to avoid interracial social contact. On another level, it reflected the rhetorical strength of the argument that the government ought not infringe on perceived private property rights, particularly with respect to homes.

Nevertheless, as black migration to central-city neighborhoods continued through the 1950s, and as the Civil Rights Movement gained momentum, fair housing initiatives rose toward the top of the Movement’s legislative agenda. In this regard, especially when considering state legislation outside the South, it is important to note that the efforts of African-American groups were complemented by those of Jewish groups and labor unions (Lockard 1968, Collins 2004b). In 1957, New York City adopted the nation’s first fair housing ordinance, which served as a model for several of the subsequent state laws and was itself based on existing fair-employment statutes. While granting exceptions for the rental of rooms in or attached to owner-occupied homes (the “Mrs. Murphy rule”), the ordinance (as amended in 1962) stated that:

“no owner, . . . real estate broker, . . . or other person having the right to sell, rent, lease, . . . or otherwise dispose of a housing accommodation . . . shall refuse to sell, rent, lease . . . or otherwise deny or withhold from any person or group of persons such housing accommodations, or represent that such housing accommodations are not available for inspection, when in fact they are so available, because of the race, color, religion, national origin or ancestry of such persons” (Housing and Home Finance Agency 1964, p. 287). It also barred discrimination in the terms of sale or rental, advertisements expressing discriminatory preferences, and discrimination by banks and lending institutions. Finally, it outlined a procedure for handling complaints and enforcing the policy.

The state fair housing statutes initially had varying degrees of coverage. Almost all states included a Mrs. Murphy rule. More importantly, some states also exempted activities surrounding the sale or rental of owner-occupied single-family homes. Others allowed the owner-occupiers of homes to discriminate while simultaneously prohibiting discriminatory acts by real-estate brokers, advertisers, lenders, and builders. By 1968, several states had converged to a standard that covered virtually all sales and rentals (except those by Mrs. Murphy). In general, these state laws contained stronger enforcement mechanisms than the federal legislation passed in that year.

Following procedures established to enforce the earlier fair-employment laws, the administrative agencies charged with enforcing the fair housing laws did so, for the most part, by responding to individual complaints rather than by seeking out discriminatory practices. When presented with a viable complaint (i.e., within the law’s coverage), the agency would conduct an investigation. If evidence of discrimination was found, the agency’s representatives would attempt to persuade the discriminatory party to comply with the law. If the discriminatory party refused to cooperate, a public hearing could be held, a cease and desist order and/or fine could be issued, court proceedings could be undertaken, and (if appropriate) a real estate agent’s license could be suspended. Of course, all of this would take time, and households attempting to move might not have been willing or able to wait for redress. Beyond their enforcement role, fair housing agencies often undertook broad educational campaigns and offered advice to community leaders and housing industry participants regarding residential integration.

The effectiveness of this approach in dealing with housing market discrimination or, more to the point, in improving blacks’ housing market outcomes, is unclear a priori. The anti-discrimination measures were weak in the sense that the agencies’ first step was always to seek “conciliation” rather than punishment. Thus, even if caught, there was no immediate penalty and perhaps little incentive to adjust discriminatory policies until confronted by the agency. Even so, the passage of the laws and the threat of sanctions against resistant builders, lenders, or real estate agents might have facilitated conciliation procedures once initiated, might have modified discriminatory behavior immediately (rendering complaints unnecessary), and might have provided a convenient excuse for those who wished to do business with blacks but felt constrained by community norms. Moreover, the speed with which some neighborhoods “tipped” from white to black might have amplified the effects from enforcement efforts. Finally, it is possible that the state agency’s educational campaigns contributed to changing discriminatory norms. Whether the fair housing laws actually contributed to the observed improvement in blacks’ housing market outcomes is discussed below.

In 1966 and 1967, Congress failed to enact federal fair housing legislation, and its doing so in 1968 surprised many observers (Congressional Quarterly Almanac 1968). Southern opposition to the law was strong, and therefore, attaining cloture on a filibuster in the Senate (then requiring a 2/3 majority of votes) was a key step in moving the legislation forward. The Senate finally passed the bill on March 11, 1968; the House passed the bill on April 10 despite opposition mobilized by the National Association of Real Estate Boards. All of this occurred against a background of extraordinary urban civil disturbances from the mid to late 1960s, including an outburst after Martin Luther King’s assassination on April 4.

The federal Fair Housing Act of 1968 initially exempted privately owned, single-family housing. The policy’s coverage was extended over the next two years, but the Department of Housing and Urban Development’s (HUD) enforcement powers remained severely circumscribed (Yinger 1999). The legislation allowed only informal, clandestine efforts at persuasion. If persuasion failed, the complainant was then free to sue for an injunction in federal court, but this was obviously cumbersome, costly, and time consuming. The federal law also specified that a state with its own fair housing law had initial jurisdiction over any complaints originating there. Thus, the original federal law was no stronger than, and in many instances weaker than, existing state legislation.

Fair Housing’s Impact and Extension

Since 1960, blacks’ average housing market outcomes have improved relative to whites’, at least according to broad and commonly referenced measures such as home ownership rates and property values. Moreover, in the 1960s middle- and upper-class black families moved to suburban neighborhoods in larger numbers than ever before, and the average level of residential segregation within cities began to decline around 1970 (Cutler, Glaeser, and Vigdor 1999). These developments are consistent with the presence of a significant fair housing policy effect, but they are far from a direct evaluation of the hypothesis that fair housing laws helped improve blacks’ housing market outcomes.

How could the fair housing laws have contributed to improvement in blacks’ housing outcomes? The laws were intended to lower barriers to blacks’ entry into predominantly white neighborhoods and new housing developments, and to curb discriminatory treatment of blacks seeking mortgages, thereby lowering the effective cost of housing and expanding minorities’ set of housing opportunities. If this mechanism worked as intended, one would expect blacks to increase their housing consumption relative to whites, all other things being equal. One might also expect to see more racial integration in neighborhoods, though in theory, this need not follow. Of course, given that the laws’ enforcement mechanisms were far from draconian and that discriminatory biases in housing markets were deeply rooted, it is possible that the laws had no detectable effect whatsoever.

Comparing similar states that happened to have different fair housing policies before federal legislation was passed, Collins (2004a) finds little statistical evidence to support the hypothesis that state-level fair housing laws made an economically significant contribution to African-Americans’ housing market outcomes in the 1960s. Others (e.g., Yinger 1998) have suggested that a substantial degree of housing market discrimination still exists, though almost certainly less than before the passage of fair housing laws. The difficult measurement problem is figuring out how much of the perceived decline in discrimination or improvement in blacks’ housing is attributable to the anti-discrimination laws and how much is attributable to more general changes in discriminatory sentiment and in the economic resources of African Americans.

Since 1968, the federal government has made several extensions to its original fair housing policy. Among the most important are the Fair Housing Assistance Program (1984), the Fair Housing Initiatives Program (1986), and amendments to the Fair Housing Act (1988). Separate but relevant legislation that may have had implications for minority home ownership includes the Home Mortgage Disclosure Act (1975, amended in 1989) and the Community Reinvestment Act (1977). Readers are referred to Galster (1999) and Yinger (1999) for further discussion of fair housing policy in contemporary housing markets.

References

Abrams, Charles. Forbidden Neighbors: A Study of Prejudice in Housing. New York: Harper & Brothers, 1955.

Collins, William J. “The Housing Market Impact of State-Level Anti-Discrimination Laws, 1960-1970.” Journal of Urban Economics 55, no. 3 (2004a): 534-564.

Collins, William J. “The Political Economy of Fair Housing Laws, 1950-1968.” Cambridge, MA: NBER Working Paper 10610 (2004b), available at http://www.nber.org/papers/w10610.

Collins, William J. and Robert A. Margo. “When Did Ghettos Go Bad? Residential Segregation and Socioeconomic Outcomes.” Economics Letters 69 (2000): 239-243.

Congressional Quarterly Almanac. “Congress Enacts Open Housing Legislation.” CQ Almanac 1968. Washington, DC: Congressional Quarterly News Features (1968): 152-168.

Cutler, David M. and Edward L. Glaeser. “Are Ghettos Good or Bad?” Quarterly Journal of Economics 112 (1997): 827-872.

Cutler, David M., Edward L. Glaeser, and Jacob L. Vigdor. “The Rise and Decline of the American Ghetto.” Journal of Political Economy 107 (1999): 455-506.

Galster, George C. “The Evolving Challenges of Fair Housing since 1968: Open Housing, Integration, and the Reduction of Ghettoization.” Cityscape 4 (1999): 123-138.

Housing and Home Finance Agency. Fair Housing Laws: Summaries and Text of State and Municipal Laws. Washington, DC: Government Printing Office, 1964.

Kain, John F. “Housing Segregation, Negro Employment, and Metropolitan Decentralization.” Quarterly Journal of Economics 82 (1968): 175-197.

Kain, John F. and John M. Quigley. Housing Markets and Racial Discrimination: A Microeconomic Analysis. New York: Columbia University Press, 1975.

Lockard, Duane. Toward Equal Opportunity: A Study of State and Local Antidiscrimination Laws. New York: Macmillan Company, 1968.

Massey, Douglas S. and Nancy A. Denton. American Apartheid: Segregation and the Making of the Underclass. Cambridge, MA: Harvard University Press, 1993.

Meyer, Stephen G. As Long As They Don’t Move Next Door: Segregation and Racial Conflict in American Neighborhoods. New York: Rowman & Littlefield, 2000.

Myrdal, Gunnar. An American Dilemma: The Negro Problem and Modern Democracy. New York: Harper & Row, 1962 (originally 1944).

Oliver, Melvin L. and Thomas M. Shapiro. Black Wealth/White Wealth: A New Perspective on Racial Inequality. New York: Routledge, 1995.

Yinger, John. “Housing Discrimination and Residential Segregation as Causes of Poverty.” In Understanding Poverty, edited by S.H. Danziger and R.H. Haveman, 359-391. Cambridge, MA: Harvard University Press, 2001.

Yinger, John. “Sustaining the Fair Housing Act.” Cityscape 4 (1999): 93-105.

Yinger, John. “Evidence on Discrimination in Consumer Markets.” Journal of Economic Perspectives 12 (1998): 23-40.

1. This essay draws heavily on Collins 2004a and 2004b.

Citation: Collins, William. “Fair Housing Laws”. EH.Net Encyclopedia, edited by Robert Whaples. February 10, 2008. URL http://eh.net/encyclopedia/fair-housing-laws/

The Euro and Its Antecedents

Jerry Mushin, Victoria University of Wellington

The establishment, in 1999, of the euro was not an isolated event. It was the latest installment in the continuing story of attempts to move towards economic and monetary integration in western Europe. Its relationship with developments since 1972, when the Bretton Woods system of fixed (but adjustable) exchange rates in terms of the United States dollar was collapsing, is of particular interest.

Political moves towards monetary cooperation in western Europe began at the end of the Second World War, but events before 1972 are beyond the scope of this article. Coffey and Presley (1971) have described and analyzed relevant events between 1945 and 1971.

The Snake

In May 1972, at the end of the Bretton Woods (adjustable-peg) system, many countries in western Europe attempted to stabilize their currencies in relation to each other’s currencies. The arrangements known as the Snake in the Tunnel (or, more frequently, as the Snake), which were set up by members of the European Economic Community (EEC), one of the forerunners of the European Union, lasted until 1979. Each member agreed to limit, by market intervention, the fluctuations of its currency’s exchange rate in terms of other members’ currencies. The maximum divergence between the strongest and the weakest currencies was 2.25%. The agreement meant that the French government, for example, would ensure that the value of the French franc would show very limited fluctuation in terms of the Italian lira or the Netherlands guilder, but that there would be no commitment to stabilize its fluctuations against the United States dollar, the Japanese yen, or other currencies outside the agreement.

This was a narrower objective than the aim of the adjustable-peg system, which was intended to stabilize the value of each currency in terms of the values of all other major currencies, but for which the amount of reserves held by governments had proved to be insufficient. It was felt that this limited objective could be achieved with the amount of reserves available to member governments.

The agreement also had a political dimension. Stable exchange rates are likely to encourage international trade, and it was hoped that the new exchange-rate regime would stimulate members’ trade within western Europe at the expense of their trade with the rest of the world. This was one of the objectives of the EEC from its inception.

Exchange rates within the group of currencies were to be managed by market intervention; member governments undertook to buy and sell their currencies in sufficiently large quantities to influence their exchange rates. There was an agreed maximum divergence between the strongest and weakest currencies. Exchange rates of the whole group of currencies fluctuated together against external denominators such as the United States dollar.

The Snake is generally regarded as a failure. Membership was very unstable; the United Kingdom and the Irish Republic withdrew after less than one month, and only the German Federal Republic remained a member for the whole of its existence. Other members withdrew and rejoined, and some did this several times. In addition, the political context of the Snake was not clearly defined. Sweden and Norway participated in the Snake although, at that time, neither of these countries was a member of the EEC and Sweden was not a candidate for admission.

The curious name of the Snake in the Tunnel comes from the appearance of exchange-rate graphs. In terms of a non-member currency, the value of each currency in the system could fluctuate but only within a narrow band that was also fluctuating. The trend of each exchange rate showed some resemblance to a snake inside the narrow confines of a tunnel.

European Monetary System

The Snake came to an end in 1979 and was replaced with the European Monetary System (EMS). The exchange-rate mechanism of the EMS had the same objectives as the Snake, but the procedure for allocating intervention responsibilities among member governments was more precisely specified.

The details of the EMS arrangements have been explained by Adams (1990). Membership of the EMS involved an obligation on each EMS-member government to undertake to stabilize its currency value with respect to the value of a basket of EMS-member currencies called the European Currency Unit (ECU). Each country’s currency had a weight in the ECU that was related to the importance of that country’s trade within the EEC. An autonomous shift in the external value of any EMS-member currency changed the value of the ECU and therefore imposed exchange-rate adjustment obligations on all members of the system. The magnitude of each of these obligations was related to the weight allocated to the currency experiencing the initial disturbance.

The effects of the EMS requirements on each individual member depended upon that country’s weight in the ECU. The system ensured that major members delegated to their smaller partners a greater proportion of their exchange-rate adjustment responsibilities than the less important members imposed on the dominant countries. The explanation for this lack of symmetry depends on the fact that a particular percentage shift in the external value of the currency of a major member of the EMS (with a high weight in the ECU) had a greater effect on the external value of the ECU than had the same percentage disturbance to the external value of the currency of a less important member. It therefore imposed greater exchange-rate adjustment responsibilities on the remaining members than did the same percentage shift applied to the external value of the less important currency. While each of the major members of the EMS could delegate to the remaining members a high proportion of its adjustment obligations, the same is not true for the smaller countries in the system. This burden was, however, seen by the smaller nations (including Denmark, Belgium, and Netherlands) as an acceptable price for exchange-rate stability with their main trading partners (including France and the German Federal Republic).
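
This asymmetry can be illustrated numerically. The following sketch is a simplified illustration of the basket arithmetic, not the official ECU formula: it treats a depreciation of one member currency as moving the ECU’s external value in proportion to that currency’s weight. The weights used for the German mark (0.33), the French franc (0.20), and the British pound (0.13) are those cited below; the residual weight assigned to “others” is a simplifying assumption.

    # Simplified illustration of ECU basket arithmetic (not the official formula).
    # Weights for DEM (0.33), FRF (0.20), and GBP (0.13) are those cited in the
    # text; the residual weight for "others" is a simplifying assumption.
    weights = {"DEM": 0.33, "FRF": 0.20, "GBP": 0.13, "others": 0.34}

    def ecu_shift(currency, depreciation):
        # Approximate proportional fall in the ECU's external value when one
        # member currency depreciates by `depreciation` (0.10 = 10 percent).
        return weights[currency] * depreciation

    # A 10 percent fall in the mark moves the ECU by about 3.3 percent, while
    # the same shock to the pound moves it by only 1.3 percent, so the other
    # members must absorb a larger adjustment in the first case.
    for code in ("DEM", "GBP"):
        print(code, round(100 * ecu_shift(code, 0.10), 1), "percent")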

The position of the Irish Republic, which joined the EMS in 1979 despite both the very low weight of its currency in the ECU and the absence of the UK, its dominant trading partner, appears to be anomalous. The explanation of this decision is that the Irish Republic was principally concerned about imported inflation, which derived from the rising prices of its British imports. The decision was based on the assumption that, once the rigid link between the two currencies was broken, inflation in the UK would lead to a fall in the value of the British pound relative to the value of the Irish Republic pound. However, purchasing power is not the only determinant of exchange rates, and the value of the British pound increased sharply in 1979, causing increased imported inflation in the Irish Republic. The appreciation of the British pound was probably caused principally by developments in the UK oil industry and by the monetarist style of UK macroeconomic policy.

Partly because it had different rules for different countries, the EMS had a more stable membership than had the Snake. The standard maximum exchange-rate fluctuation from its reference value that was permitted for each EMS currency was ±2.25%. However, there were wider bands (±6%) for weaker members (Italy from 1979, Spain from 1989, and the UK from 1990) and the Netherlands observed a band of ±1%. The system was also subject to frequent realignments of the parity grid. The Irish Republic joined the EMS in 1979 but the UK did not, thus ending the link between the British pound and the Irish Republic pound. The UK joined in 1990 but, as a result of substantial international capital flows, left in 1992. The bands were increased in width to ±15% in 1992.

Incentives to join the EMS were comparable to those that applied to the Snake and included the desire for stable exchange rates with a country’s principal trading partners and the desire to encourage trade within the group of EMS members rather than with countries in the rest of the world. Cohen (2003), in his analysis of monetary unions, has explained the advantages and disadvantages of trans-national monetary integration.

The UK decided not to participate in the exchange-rate mechanism of the EMS at its inception. It was influenced by the fact that the weight allocated to the British pound (0.13) in the definition of the ECU was insufficient to allow the UK government to delegate to other EMS members a large proportion of the exchange-rate stabilization responsibilities that it would acquire under EMS rules. The outcome of EMS membership for the UK in 1979 would therefore have been in marked contrast to the outcome for France (with an ECU weight of 0.20) and, especially, for the German Federal Republic (with an ECU weight of 0.33). The proportion of the UK’s exports that was, at that time, sold in EMS countries was low relative to the proportion of any other EMS member’s exports, and this was reflected in its ECU weight. As explained above, a given percentage shift in the external value of a heavily weighted currency moved the value of the ECU more than the same shift in a lightly weighted currency, and therefore imposed greater adjustment responsibilities on the remaining members; the pound’s low weight meant that the UK could delegate relatively little.

A second reason for the refusal of the UK to join the EMS in 1979 was that membership would not have led to greater stability of its exchange rates with respect to the currencies of its major trading partners, which were, at that time, outside the EMS group of countries.

An important reason for the British government’s continued refusal, for more than eleven years, to participate in the EMS was its concern about the loss of sovereignty that membership would imply. A floating exchange rate (even a managed floating exchange rate such as was operated by the UK government from 1972 to 1990) permits an independent monetary policy, but EMS obligations make this impossible. Monetarist views on the efficacy of restraining the rate of inflation by controlling the rate of growth of the money supply were dominant during the early years of the EMS, and an independent monetary policy was seen as being particularly significant.

By 1990, when the UK government decided to join the EMS, a number of economic conditions had changed. It is significant that the proportion of UK exports sold in EMS countries had risen markedly. Following substantial speculative selling of British currency in September 1992, however, the UK withdrew from the EMS. One of the causes of this was the substantial flow of short-term capital from the UK, where interest rates were relatively low, to Germany, which was implementing a very tight monetary policy and hence had very high interest rates. This illustrates that a common monetary policy is one of the necessary conditions for the operation of agreements, such as the EMS, that are intended to limit exchange-rate fluctuations.

The Euro

Despite the partial collapse of the EMS in 1992, a common currency, the euro, was introduced in 1999 by eleven of the fifteen members of the European Union, and a twelfth country joined the euro zone in 2001. From 1999, each national currency in this group had a rigidly fixed exchange rate with the euro (and, hence, with each other). Fixed exchange rates, in national currency units per euro, are listed in Table 1. In 2002, euro notes and coins replaced national currencies in these countries. The intention of the new currency arrangement is to reduce transactions costs and encourage economic integration. The Snake and the EMS can perhaps be regarded as transitional structures leading to the introduction of the euro, which is the single currency of a single integrated economy.

Table 1
Value of the Euro (in terms of national currencies)

Austria 13.7603
Belgium 40.3399
Finland 5.94573
France 6.55957
Germany 1.95583
Greece 340.750
Irish Republic 0.787564
Italy 1936.27
Luxembourg 40.3399
Netherlands 2.20371
Portugal 200.482
Spain 166.386

Source: European Central Bank

Of the members of the European Union, to which participation in this innovation was restricted, Denmark, Sweden, and the UK chose not to introduce the euro in place of their existing currencies. The countries that adopted the euro in 1999 were Austria, Belgium, France, Finland, Germany, Irish Republic, Italy, Luxembourg, Netherlands, Portugal, and Spain.

Greece, which adopted the euro in 2001, was initially excluded from the new currency arrangement because it had failed to satisfy the conditions described in the Treaty of Maastricht, 1991. The limits that the Treaty specified for five macroeconomic variables are listed in Table 2.

Table 2
Conditions for Euro Introduction (Treaty of Maastricht, 1991)

Inflation rate: at most 1.5 percentage points above the average of the three euro countries with the lowest rates
Long-term interest rates: at most 2.0 percentage points above the average of the three euro countries with the lowest rates
Exchange-rate stability: fluctuations within the EMS band for at least two years
Budget deficit/GDP ratio: at most 3%
Government debt/GDP ratio: at most 60%

Source: The Economist, May 31, 1997.
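
The conditions in Table 2 can be read as a set of pass/fail tests. The sketch below encodes them directly; the function name, argument names, and sample figures are hypothetical, and the thresholds are simply those listed in the table (in practice, the debt criterion was applied with some flexibility).

    # Minimal encoding of the Table 2 conditions (hypothetical names and data).
    def meets_maastricht(inflation, ref_inflation, long_rate, ref_long_rate,
                         years_in_ems_band, deficit_gdp, debt_gdp):
        # ref_inflation and ref_long_rate are the averages for the three
        # euro countries with the lowest rates, as described in Table 2.
        return (inflation <= ref_inflation + 1.5
                and long_rate <= ref_long_rate + 2.0
                and years_in_ems_band >= 2
                and deficit_gdp <= 3.0
                and debt_gdp <= 60.0)

    # Hypothetical candidate: 2.1% inflation against a 1.0% reference average,
    # 5.5% long rates against a 4.0% reference, three years in the EMS band,
    # a 2.5% deficit/GDP ratio, and a 58% debt/GDP ratio.
    print(meets_maastricht(2.1, 1.0, 5.5, 4.0, 3, 2.5, 58.0))  # True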

The euro is also used in countries that, before 1999, used currencies that it has replaced: Andorra (French franc and Spanish peseta), Kosovo (German mark), Monaco (French franc), Montenegro (German mark), San Marino (Italian lira), and Vatican (Italian lira). The euro is also the currency of French Guiana, Guadeloupe, Martinique, Mayotte, Réunion, and St Pierre-Miquelon that, as départements d’outre-mer, are constitutionally part of France.

The euro was adopted by Slovenia in 2007, by Cyprus (South) and Malta in 2008, by Slovakia in 2009, by Estonia in 2011, by Latvia in 2014, and by Lithuania in 2015. Table 3 shows the exchange rates between the euro and the currencies of these countries.

Table 3
Value of the Euro (in terms of national currencies)

Cyprus (South) 0.585274
Estonia 15.6466
Latvia 0.702804
Lithuania 3.4528
Malta 0.4293
Slovakia 30.126
Slovenia 239.64

Source: European Central Bank

Currencies whose exchange rates were, in 1998, pegged to currencies that have been replaced by the euro have had exchange rates defined in terms of the euro since its inception. The Communauté Financière Africaine (CFA) franc, which is used by Benin, Burkina Faso, Cameroon, Central African Republic, Chad, Congo Republic, Côte d’Ivoire, Equatorial Guinea, Gabon, Guinea-Bissau, Mali, Niger, Sénégal, and Togo, was defined in terms of the French franc until 1998, and is now pegged to the euro. The Comptoirs Français du Pacifique (CFP) franc, which is used in the three French territories in the south Pacific (Wallis and Futuna Islands, French Polynesia, and New Caledonia), was also defined in terms of the French franc and is now pegged to the euro. The Comoros franc has similarly moved from a French-franc peg to a euro peg. The Cape Verde escudo, which was pegged to the Portuguese escudo, is also now pegged to the euro. Bosnia-Herzegovina and Bulgaria, which previously operated currency-board arrangements with respect to the German mark, now fix the exchange rates of their currencies in terms of the euro. Botswana, Croatia, Czech Republic, Denmark, Macedonia, and São Tomé-Príncipe also peg their currencies to the euro. Additional countries that peg their currencies to a basket that includes the euro are Algeria, Belarus, Fiji, Iran, Kuwait, Libya, Morocco, Samoa (Western), Singapore, Syria, Tunisia, and Vanuatu. Romania and Switzerland, which do not operate fixed exchange-rate systems, occasionally intervene to smooth extreme fluctuations, in terms of the euro, of their exchange rates (European Central Bank, 2016).

The group of countries that use the euro or that have linked the values of their currencies to the euro might be called the “greater euro zone.” It is interesting that membership of this group of countries has been determined largely by historical accident. Its members exhibit a marked absence of macroeconomic commonality. Within this bloc, macroeconomic indicators, including the values of GDP and of GDP per person, have a wide range of values. The degree of financial integration with international markets also varies substantially in these countries. Countries that stabilize their exchange rates with respect to a basket of currencies that includes the euro have adjustment systems that are less closely related to its value. This weaker connection means that these countries should not be regarded as part of the greater euro zone.

The establishment of the euro is a remarkable development whose economic effects, especially in the long term, are uncertain. This type of exercise, involving the rigid fixing of certain exchange rates and then the replacement of a group of existing currencies, has rarely been undertaken in the recent past. Other than the introduction of the euro, and the much less significant case of the merger in 1990 of the former People’s Democratic Republic of Yemen (Aden) and the former Arab Republic of Yemen (Sana’a), the monetary union that accompanied the expansion of the German Federal Republic to incorporate the former German Democratic Republic in 1990 is the sole recent example. However, the very distinctive political situation of post-1945 Germany (and its economic consequences) make it difficult to draw relevant conclusions from this experience. The creation of the euro is especially noteworthy at a time when the majority, and an increasing proportion, of countries have chosen floating (or managed floating) exchange rates for their currencies. With the important exception of China, this includes most major economies. This statement should be treated with caution, however, because countries that claim to operate a managed floating exchange rate frequently aim, as described by Calvo and Reinhart (2002), to stabilize their currencies with respect to the United States dollar.

When the euro was established, it replaced national currencies. However, this is not the same as the process known as dollarization, in which a country adopts another country’s currency. For example, the United States dollar is the sole legal tender in Ecuador, El Salvador, Marshall Islands, Micronesia, Palau, Panama, Timor-Leste, and Zimbabwe. It is also the sole legal tender in the overseas possessions of the United States (American Samoa, Guam, Northern Mariana Islands, Puerto Rico, and U.S. Virgin Islands), in two British territories (Turks and Caicos Islands and British Virgin Islands) and in the Caribbean Netherlands. Like the countries that use the euro, a dollarized country cannot operate an independent monetary policy. A euro-using country will, however, have some input into the formation of monetary policy, whereas dollarized countries have none. In addition, unlike euro-using countries, dollarized countries probably receive none of the seigniorage that is derived from the issue of currency.

Prospects for the Euro

The expansion of the greater euro zone, which is likely to continue with the economic integration of the new members of the European Union, and with the probable admission of additional new members, has enhanced the importance of the euro. However, this expansion is unlikely to make the greater euro zone into a major currency bloc comparable to, for example, the Sterling Area even at the time of its collapse in 1972. Mushin (2012) has described the nature and role of the Sterling Area.

Mundell (2003) has predicted that the establishment of the euro will be the model for a new currency bloc in Asia. However, there is no evidence yet of any significant movement in this direction. Eichengreen et al. (1995) have argued that monetary unification in the emerging industrial economies of Asia is unlikely to occur. A feature of Mundell’s paper is that he assumes that the benefits of joining a currency area almost necessarily exceed the costs, but this remains unproven.

The creation of the euro will have, and might already have had, macroeconomic consequences for the countries that comprise the greater euro zone. Since 1999, the influences on the import prices and export prices of these countries have included the effects of monetary policy run by the European Central Bank (www.ecb.int), a non-elected supra-national institution that is directly accountable neither to individual national governments nor to individual national parliaments, and developments, including capital flows, in world financial markets. Neither of these can be relied upon to ensure stable prices at an acceptable level in price-taking economies. The consequences of the introduction of the euro might be severe in some parts of the greater euro zone, especially in the low-GDP economies. For example, unemployment might increase if exports cease to have competitive prices. Further, domestic macroeconomic policy is not independent of exchange-rate policy. One of the costs of joining a monetary union is the loss of monetary-policy independence.

Data on Exchange-rate Policies

The best source of data on exchange-rate policies is probably the International Monetary Fund (IMF) (see www.imf.org). Almost all countries of significant size are members of the IMF; notable exceptions are Cuba (since 1964), the Republic of China (Taiwan) (since 1981), and the People’s Democratic Republic of Korea (North Korea). The most significant IMF publications that contain exchange-rate data are International Financial Statistics and the Annual Report on Exchange Arrangements and Exchange Restrictions.

Since 2009, the IMF has allocated each country’s exchange rate policy to one of ten categories. Unfortunately, the definitions of these mean that the members of the greater euro zone are not easy to identify. In this taxonomy, the exchange rate systems of countries that are part of a monetary union are classified according to the arrangements that govern the joint currency. The exchange rate policies of the eleven countries that introduced the euro in 1999, Cyprus (South), Estonia, Greece, Latvia, Lithuania, Malta, Slovakia, and Slovenia are classified as “Free floating.” Kosovo, Montenegro, and San Marino have “No separate legal tender.” Bosnia-Herzegovina and Bulgaria have “Currency boards.” Cape Verde, Comoros, Denmark, São Tomé and Príncipe, and the fourteen African countries that use the CFA franc have “Conventional pegs.” Macedonia has a “Stabilized arrangement.” Croatia has a “Crawl-like arrangement.” Andorra, Monaco, Vatican, and the three territories in the south Pacific that use the CFP franc are not IMF members. Anderson, Habermeier, Kokenyne, and Veyrune (2009) explain and discuss the definitions of these categories and compare them to the definitions that were used by the International Monetary Fund until 2010. Information on the exchange-rate policy of each of its members is published by the International Monetary Fund (2016).

Other Monetary Unions in Europe

The establishment of the Snake, the EMS, and the euro have affected some of the other monetary unions in Europe. The monetary unions of Belgium-Luxembourg, of France-Monaco, and of Italy-Vatican-San Marino predate the Snake, survived within the EMS, and have now been absorbed into the euro zone. Unchanged by the introduction of the euro are the UK-Gibraltar-Guernsey-Isle of Man-Jersey monetary union (which is the remnant of the Sterling Area that also includes Falkland Islands and St. Helena), the Switzerland-Liechtenstein monetary union, and the use of the Turkish lira in Northern Cyprus.

The relationship between the currencies of the Irish Republic (previously the Irish Free State) and the UK is an interesting case study of the interaction of political and economic forces on the development of macroeconomic (including exchange-rate) policy. Despite the non-participation of the UK, the Irish Republic was a foundation member of the EMS. This ended the link between the British pound and the Irish Republic pound (also called the punt) that had existed since the establishment of the Irish currency following the partition of Ireland (1922), so that a step towards one monetary union destroyed another. Until 1979, the Irish Republic pound had a rigidly fixed exchange rate with the British pound, and each of the two banking systems cleared the other’s checks as if denominated in its own currency. These very close financial links meant that every policy decision of monetary importance in the UK coincided with an identical change in the Irish Republic, including the currency reforms of 1939 (US-dollar peg), 1949 (devaluation), 1967 (devaluation), 1971 (decimalization), 1972 (floating exchange rate), and 1972 (brief membership of the Snake). From 1979 until 1999, when the Irish Republic adopted the euro, there was a floating exchange rate between the British pound and the Irish Republic pound. South of the Irish border, the dominant political mood in the 1920s was the need to develop a distinct non-British national identity, but there were perceived to be good economic grounds for retaining a very close link with the British pound. By 1979, although political rhetoric still referred to the desire for a united Ireland, the economic situation had changed, and the decision to join the EMS without the membership of the UK meant that, for the first time, different currencies were used on each side of the Irish border. In both of these cases, political objectives were tempered by economic pressures.

Effects of the Global Financial Crisis

One of the ways of analyzing the significance of a new system is to observe the effects of circumstances that have not been predicted. The global financial crisis [GFC] that began in 2007 provides such an opportunity. In the UK and in the Irish Republic, whose business cycles are usually comparable, the problems that followed the GFC were similar in nature and in severity. In both of these countries, major banks (and therefore their depositors) were rescued from collapse by their governments. However, the macroeconomic outcomes have been different. The increase in the unemployment rate has been much greater in the Irish Republic than in the UK. The explanation for this is that an independent monetary policy is not possible in the Irish Republic, which is part of the euro zone. The UK, which does not use the euro, responded to the GFC by operating a very loose monetary policy (with a very low discount rate and large scale “quantitative easing”). The effects of this have been compounded by depreciation of the British pound. Although, partly because of the common language, labor is mobile between the UK and the Irish Republic, the unemployment rate in the Irish Republic remains high because its real exchange rate is high and its real interest rates are high. The effect of the GFC is that the Irish Republic now has an overvalued currency, which has made an inefficient economy more inefficient. Simultaneously, the more efficient economies in the euro zone (and some countries that are outside the euro zone, including the UK, whose currencies have depreciated) now have undervalued currencies, which have encouraged their economies to expand. This illustrates one of the consequences of membership of the euro zone. Had the GFC been predicted, the estimation of the economic benefits for the Irish Republic (and for Greece, Italy, Portugal, Spain, and other countries) would probably have been different. The political consequences for the more efficient countries in the euro zone, including Germany, might also be significant. At great cost, these countries have provided financial assistance to the weaker members of the euro zone, especially Greece.

Conclusion

The future role of the euro is uncertain. Especially in view of the British decision to withdraw from the European Union, even its survival is not guaranteed. It is clear, however, that the outcome will depend on both political and economic forces.

References:

Adams, J. J. “The Exchange-Rate Mechanism in the European Monetary System.” Bank of England Quarterly Bulletin 30, no. 4 (1990): 479-81.

Anderson, Harald, Karl Habermeier, Annamaria Kokenyne, and Romain Veyrune. Revised System for the Classification of Exchange Rate Arrangements, Washington DC: International Monetary Fund, 2009.

Calvo, Guillermo and Carmen Reinhart. “Fear of Floating.” Quarterly Journal of Economics 117, no. 2 (2002): 379-408.

Coffey, Peter and John Presley. European Monetary Integration. London: Macmillan Press, 1971.

Cohen, Benjamin. “Monetary Unions.” In Encyclopedia of Economic and Business History, edited by Robert Whaples, 2003. http://eh.net/encyclopedia/monetary-unions/

Eichengreen, Barry, James Tobin, and Charles Wyplosz. “Two Cases for Sand in the Wheels of International Finance.” Economic Journal 105, no. 1 (1995): 162-72.

European Central Bank. The International Role of the Euro. 2016.

International Monetary Fund. Annual Report of the Executive Board, 2016.

Mundell, Robert. “Prospects for an Asian Currency Area.” Journal of Asian Economics 14, no. 1 (2003): 1-10.

Mushin, Jerry. “The Sterling Area.” In Encyclopedia of Economic and Business History, edited by Robert Whaples, 2012. http://eh.net/encyclopedia/the-sterling-area/

Endnote:

Jerry Mushin can be reached at jerry.mushin1@outlook.com. This article includes material from some of the author’s publications:

Mushin, Jerry. “A Simulation of the European Monetary System.” Computer Education 35 (1980): 8-19.

Mushin, Jerry. “The Irish Pound: Recent Developments.” Atlantic Economic Journal 8, no. 4 (1980): 100-10.

Mushin, Jerry. “Exchange-Rate Adjustment in a Multi-Currency Monetary System.” Simulation 36, no. 5 (1981): 157-63.

Mushin, Jerry. “Non-Symmetry in the European Monetary System.” British Review of Economic Issues 8, no. 2 (1986): 85-89.

Mushin, Jerry. “Exchange-Rate Stability and the Euro.” New Zealand Banker 11, no. 4 (1999): 27-32.

Mushin, Jerry. “A Taxonomy of Fixed Exchange Rates.” Australian Stock Exchange Perspective 7, no. 2 (2001): 28-32.

Mushin, Jerry. “Exchange-Rate Policy and the Efficacy of Aggregate Demand Management.” The Business Economist 33, no. 2 (2002): 16-24.

Mushin, Jerry. Output and the Role of Money. New York, London and Singapore: World Scientific Publishing Company, 2002.

Mushin, Jerry. “The Deceptive Resilience of Fixed Exchange Rates.” Journal of Economics, Business and Law 6, no. 1 (2004): 1-27.

Mushin, Jerry. “The Uncertain Prospect of Asian Monetary Integration.” International Economics and Finance Journal 1, no. 1 (2006): 89-94.

Mushin, Jerry. “Increasing Stability in the Mix of Exchange Rate Policies.” Studies in Business and Economics 14, no. 1 (2008): 17-30.

Mushin, Jerry. “Predicting Monetary Unions.” International Journal of Economic Research 5, no. 1 (2008): 27-33.

Mushin, Jerry. Interest Rates, Prices, and the Economy. Jodhpur: Scientific Publishers (India), 2009.

Mushin, Jerry. “Infrequently Asked Questions on the Monetary Union of the Countries of the Gulf Cooperation Council.” Economics and Business Journal: Inquiries and Perspectives 3, no. 1 (2010): 1-12.

Mushin, Jerry. “Common Currencies: Economic and Political Causes and Consequences.” The Business Economist 42, no. 2 (2011): 19-26.

Mushin, Jerry. “Exchange Rates, Monetary Aggregates, and Inflation.” Bulletin of Political Economy 7, no. 1 (2013): 69-88.

Citation: Mushin, Jerry. “The Euro and Its Antecedents”. EH.Net Encyclopedia, edited by Robert Whaples. October 12, 2016. URL http://eh.net/encyclopedia/the-euro-and-its-antecedents/

Education and Economic Growth in Historical Perspective

David Mitch, University of Maryland Baltimore County

In his introduction to the Wealth of Nations, Adam Smith (1776, p. 1) states that the proportion between the annual produce of a nation and the number of people who are to consume that produce depends on “the skill, dexterity, and judgment with which its labour is generally applied.” In recent decades, analysts of economic productivity in the United States during the twentieth century have made allowance for Smith’s “skill, dexterity, and judgment” of the labor force under the rubric of labor force quality (Ho and Jorgenson 1999; Aaronson and Sullivan 2001; DeLong, Goldin, and Katz 2003). These studies have found that a variety of factors have influenced labor force quality in the U.S., including age structure and workforce experience, female labor force participation, and immigration. One of the most important determinants of labor force quality has been years of schooling completed by the labor force.

Data limitations complicate generalizing these findings to periods before the twentieth century and to geographical areas beyond the United States. However, the rise of modern economic growth over the last few centuries seems to roughly coincide with the rise of mass schooling throughout the world. The sustained growth in income per capita evidenced in much of the world over the past two to two and a half centuries is a marked divergence from previous tendencies. Kuznets (1966) used the phrase “modern economic growth” to describe this divergence and he placed its onset in the mid-eighteenth century. More recently, Maddison (2001) has placed the start of sustained economic growth in the early nineteenth century. Maddison (1995) estimates that per capita income between 1520 and 1992 increased some eight times for the world as a whole and up to seventeen times for certain regions. Popular schooling was not widespread anywhere in the world before 1600. By 1800, most of North America, Scandinavia, and Germany had achieved literacy rates well in excess of fifty percent. In France and England literacy rates were closer to fifty percent and school attendance before the age of ten was certainly widespread, if not yet the rule. It was not until later in the nineteenth century and the early twentieth century that Southern and Eastern Europe were to catch up with Western Europe and it was only the first half of the twentieth century that saw schooling become widespread through much of Asia and Latin America. Only later in the twentieth century did schooling begin to spread throughout Africa. The twentieth century has seen the spread of secondary and university education to much of the adult population in the United States and to a lesser extent in other developed countries.[2] However, correlation is not causation; rising income per capita may have contributed to rising levels of schooling, as well as schooling to income levels. Thus, the contribution of rising schooling to economic growth should be examined more directly.

Estimating the Contribution of the Rise of Mass Schooling to Economic Growth: A Growth Accounting Perspective

Growth accounting can be used to estimate the general bounds of the contribution the rise of schooling has made to economic growth over the past few centuries.[3] A key assumption of growth accounting is that factors of production are paid their social marginal products. Growth accounting starts with estimates of the growth of individual factors of production, as well as the shares of these factors in total output and estimates of the growth of total product. It then apportions the growth in output into that attributable to growth in each factor of production specified in the analysis and into that due to a residual that cannot otherwise be explained. Estimates of how much schooling has increased the productivity of individual workers, combined with estimates of the increase in schooling completed by the labor force, yield estimates of how much the increase in schooling has contributed to increasing output. A growth accounting approach offers the advantage that with basic estimates (or at least possible ranges) for trends in output, labor force, schooling attainment, and preferably capital stock and factor shares, it yields estimates of schooling’s contribution to economic growth. An important disadvantage is that it relies on indirect estimates at the micro level for how schooling influences productivity at the aggregate level, rather than on direct empirical evidence.[4]

Back-of-the-envelope estimates of increases in income per capita attributable to rising levels of education over a period of a few centuries can be obtained by considering possible ranges of levels of schooling increases as measured in average years of schooling along with possible ranges of rates of return per year of schooling, in terms of the percentage by which a year of schooling raises earnings and common ranges for labor’s share in national income. By using a Cobb-Douglas specification of the aggregate production function with two factors of production, labor and physical capital, one can arrive at the following equation for the ratio between final and initial national income per worker due to increases in average school years completed between the two time periods:

1) (Y/L)_1 / (Y/L)_0 = ((1 + r)^(S_1 - S_0))^α

where Y is output, L the labor force, r the percentage by which a year of schooling increases labor productivity, S the average years of schooling completed by the labor force, and α labor’s share in national income; the subscripts 0 and 1 denote the initial and final time periods over which the schooling changes occur.[5] This formulation is a partial equilibrium one, holding constant the level of physical capital. However, the level of physical capital should be expected to increase in response to improved labor force quality due to more schooling. A common specification of a growth model that allows for such responses of physical capital implies the following ratio between final and initial national income per worker (see Lord 2001, 99-100):

2) (Y/L)_1 / (Y/L)_0 = (1 + r)^(S_1 - S_0)

The bounds on increases in years of schooling can be placed at between zero and 16, that is, between a completely unschooled and presumably illiterate population and one in which a college education is universal. As bounds on the earnings return per year of schooling, one can employ Krueger and Lindahl’s (2001) survey of recent estimates of earnings functions, which finds returns ranging from 5 percent to 15 percent. The implications of varying these two parameters are reported in Tables 1A and 1B. Table 1A reports estimates based on the partial equilibrium specification of equation 1), holding constant the level of physical capital. Table 1B reports estimates allowing for a changing level of physical capital, as in equation 2). Labor’s share of income has been set at a commonly used value of 0.7 (see DeLong, Goldin and Katz 2003, 29; Maddison 1995, 255).

Table 1A
Increase in per Capita Income over a Base Level of 1 Attributable to Hypothetical Increases in Average Schooling Levels — Holding the Physical Capital Stock Constant

                                             Percent Increase in Earnings per Extra Year of Schooling
Increase in Average Years of Schooling       5%       10%      15%
1                                            1.035    1.07     1.10
3                                            1.11     1.22     1.34
6 – illiteracy to universal grammar school   1.23     1.49     1.80
12 – illiteracy to universal high school     1.51     2.23     3.23
16 – illiteracy to universal college         1.73     2.91     4.78

Table 1B
Increase in per Capita Income over a Base Level of 1 Attributable to Hypothetical Increases in Average Schooling Levels — Allowing for Steady-state Changes in the Physical Capital Stock

                                             Percent Increase in Earnings per Extra Year of Schooling
Increase in Average Years of Schooling       5%       10%      15%
1                                            1.05     1.10     1.15
3                                            1.16     1.33     1.52
6 – illiteracy to universal grammar school   1.34     1.77     2.31
12 – illiteracy to universal high school     1.79     3.14     5.35
16 – illiteracy to universal college         2.18     4.59     9.36
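
Since the entries in Tables 1A and 1B follow mechanically from equations 1) and 2), the arithmetic is easy to verify. The short Python sketch below recomputes both tables; the function names are illustrative conveniences only, not drawn from any cited study.

ALPHA = 0.7  # labor's share of national income, as in the tables above

def ratio_fixed_capital(delta_s, r, alpha=ALPHA):
    # Equation 1): income-per-worker ratio with the physical capital stock held constant
    return ((1 + r) ** delta_s) ** alpha

def ratio_steady_state(delta_s, r):
    # Equation 2): ratio allowing the capital stock to adjust in the steady state
    return (1 + r) ** delta_s

for label, fn in (("Table 1A", ratio_fixed_capital), ("Table 1B", ratio_steady_state)):
    print(label)
    for delta_s in (1, 3, 6, 12, 16):
        row = "  ".join(f"{fn(delta_s, r):5.2f}" for r in (0.05, 0.10, 0.15))
        print(f"  {delta_s:2d} years: {row}")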

The back-of-the-envelope calculations in Tables 1A and 1B make two simple points. First, schooling increases have the potential to explain a good deal of estimated long-term increases in per capita income. If the average member of an economy’s labor force embodies investments of twelve years of schooling, then at a moderate ten percent rate of return per year of schooling, and with no increase in the capital stock, rising schooling can account for at least 17 percent of Maddison’s eight-fold increase in per capita income (i.e. 1.23/7). Indeed, a 16-year schooling increase at a 15 percent return per year, allowing for steady-state capital stock increases, more than explains Maddison’s eight-fold increase (8.36/7). After all, if schooling has had substantial effects on the productivity of individual workers, if a sizable share of the labor force has experienced improvements in schooling completed, and if labor’s share of output is greater than half, then the contribution of rising schooling to increasing output should be large.

Second, the contribution of the schooling increases that have actually occurred historically to per capita income increases is more modest, accounting at best for about one fifth of Maddison’s eight-fold increase. Thus an increase in average years of schooling completed by the labor force of 6 years, roughly that entailed by the spread of universal grammar schooling, would account for 19 percent (1.31/7) of an eight-fold per capita output increase at a high 15 percent rate of return, allowing for steady-state changes in the physical capital stock (Table 1B). And at a low 5 percent return per year of schooling, the contribution would be only 5 percent of the increase (0.34/7). Making lower-level elementary education universal would entail increasing average years of schooling completed by the labor force by 1 to 3 years; in most circumstances this is not a trivial accomplishment as measured by the societal resources required. However, even at a high 15 percent per year return and allowing for steady-state changes in the capital stock (Table 1B), a 3-year increase in average years of schooling would account for only 7 percent (0.52/7) of Maddison’s eight-fold increase.

How do the above proposed bounds on schooling increases compare with possible increases in the physical capital stock? Kendrick (1993, 143) finds a somewhat larger growth rate in his estimated human capital stock than in the stock of non-human capital for the U.S. between 1929 and 1969, though for the sub-period 1929-48 he estimates a slightly higher growth rate for the non-human capital stock. In contrast, Maddison (1995, 35-37) estimates larger increases in the value of non-residential structures per worker and in the value of machinery and equipment per worker than in years of schooling per adult for the U.S. and the U.K. between 1820 and 1992. For the U.S., he estimates that the value of non-residential structures per worker rose by 21 times and the value of machinery and equipment per worker by 141 times, in comparison with a ten-fold increase in years of schooling per adult. For the U.K., his estimates indicate a 15-fold increase in the value of structures per worker and a 97-fold increase in the value of machinery and equipment per worker, in contrast with a seven-fold increase in average years of schooling. It should be noted that these estimates are based on cumulated investments in schooling to estimate human capital; that is, they are based on the costs incurred to produce human capital. Davies and Whalley (1991, 188-189) argue that the alternative approach of calculating the present value of future earnings premiums attributable to schooling and other forms of human capital yields substantially higher estimates of human capital, because it captures inframarginal returns above the costs accruing to human capital investments. For the growth accounting approach employed here, the cumulated investment or cost approach would seem the appropriate one. Are there more inherent bounds on the accumulation of human capital over time than on non-human capital? One limit on the accumulation of human capital is set by how much of one’s potential working life a worker is willing to sacrifice for purposes of improving education and future productivity. This can be compared with the corresponding limit on the willingness to sacrifice current consumption for wealth accumulation.

However, this discussion makes no explicit allowance for changes over time in the quality of schooling. Improvements in teacher training and recruitment, along with ongoing curriculum development among other factors, could lead to ongoing improvements in how much a year of school attendance raises the future productivity of the student. Woessmann (2002) and Hanushek and Kimko (2000) have recently argued for the importance of allowing for variation in school quality when estimating the impact of cross-national variation in human capital levels on economic growth. Woessmann (2002) suggests that allowing for improvements in the quality of schooling can remove the upper bounds on schooling investment that would be present if it were simply a matter of increasing the percentage of the population enrolled in school at given levels of quality. While there would seem to be inherent bounds on the proportion of one’s life that one is willing to spend in school, such bounds would not apply to increases in expenditures and other means of improving school quality.

Expenditures per pupil appear to have risen markedly over long periods of time. Thus, in the United States, expenditure per pupil in public elementary and secondary schools in constant 1989-90 dollars rose by over 6 times between 1923-24 and 1973-74 (National Center for Educational Statistics, 60). And in Victorian England, nominal expenditures per pupil in state subsidized schools more than doubled between 1870 and 1900, despite falling prices (Mitch 1982, 204). These figures do not control for the rising percentage of students enrolled in higher grade levels (presumably at higher expenditure per student), general rises in living standards affecting teachers’ salaries and other factors influencing the abilities of those recruited into teaching. Nevertheless, they suggest the possibility of sizable improvements over time in school quality.

It can be argued that allowance for improvements in school quality is implicitly made in the rate of return imputed per year of schooling completed on average by the labor force. Insofar as schools became more effective over time in transmitting knowledge and skills, the economic return per year of schooling should have increased correspondingly. Thus any attempt to allow for school quality in a growth accounting analysis should be careful to avoid double counting school quality in both school inputs and returns per year of schooling.

The benchmark for the impact of increases in average levels of schooling completed in Table 1 is Maddison’s estimate of changes in output per capita over the last two centuries. In fact, major increases in schooling levels have most commonly been compressed into intervals of several decades or less, rather than periods of a century or more. This implies that the contribution to output growth of improvements in labor force quality due to increases in schooling levels would have been concentrated in periods of marked improvement in schooling levels and would have been far more modest during periods of more sluggish increase in educational attainment. In order to gauge the impact of the time interval over which changes in schooling occur on growth rates of output, Table 2 provides the annual change in average years of schooling implied by some of the hypothetical changes in schooling attainment reported in Table 1, for various time periods.

Table 2

Annual Change in Average Years of Schooling per Adult Implied by Hypothetical Figures in Table 1

Total Increase in Average          Time period over which increase occurred
Years of Schooling per Adult       5 years   10 years   30 years   50 years   100 years
1                                  0.2       0.1        0.033      0.02       0.01
3                                  0.6       0.3        0.1        0.06       0.03
6                                  1.2       0.6        0.2        0.12       0.06
9                                  1.8       0.9        0.3        0.18       0.09

Table 3 translates these rates of schooling growth into growth rates of output using the partial equilibrium framework of equation 1), with labor’s share again set at 0.7. The contribution of schooling to growth rates of output and of output per capita can be calculated as labor’s share times the percentage by which a year of schooling raises earnings times the annual increase in average years of schooling (a calculation sketched in code after Table 3B below).

Table 3A
Contribution of Schooling to Annual Growth Rates of Output: Large Increases in Schooling

                              6-year rise in average schooling    9-year rise in average schooling
Length of time for increase   5% return    10% return             5% return    10% return
30 years                      0.7%         1.4%                   1.05%        2.1%
50 years                      0.42%        0.84%                  0.63%        1.26%

Table 3B
Contribution of Schooling to Annual Growth Rates of Output: Small to Modest Increases in Schooling

                              1-year rise in average schooling    3-year rise in average schooling
Length of time for increase   5% return    10% return             5% return    10% return
5 years                       0.7%         1.4%                   2.1%         4.2%
10 years                      0.35%        0.7%                   1.05%        2.1%
20 years                      0.175%       0.35%                  0.525%       1.05%
30 years                      0.12%        0.23%                  0.35%        0.7%
50 years                      0.07%        0.14%                  0.21%        0.42%
100 years                     0.035%       0.07%                  0.105%       0.21%
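
The calculations behind Tables 2, 3A and 3B reduce to a single line of arithmetic, sketched below in Python; all names are illustrative only.

alpha = 0.7  # labor's share of national income

def schooling_contribution(rise_in_years, period_in_years, r):
    # Annual growth contribution = labor's share x return per year of schooling
    #                              x annual increase in average years of schooling
    return alpha * r * (rise_in_years / period_in_years)

# Example: a 6-year rise in average schooling spread over 30 years at a 10% return
print(f"{schooling_contribution(6, 30, 0.10):.2%} per year")  # 1.40%, as in Table 3A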

The case of the U.S. in the twentieth century, as analyzed in DeLong, Goldin and Katz (2003), offers an example of how apparent limits, or at least resistance, to ongoing expansion of schooling have lowered the contribution of schooling to growth. They find that between World War I and the end of the century, improvements in labor quality attributable to schooling can account for about a quarter of the growth of output per capita in the U.S.; this is similar in magnitude to Denison’s (1962) estimates for the first part of this period. This era saw the mean years of schooling completed by age 35 increase from 7.4 years for an American born in 1875 to 14.1 years for an American born in 1975 (DeLong, Goldin and Katz 2003, 22). However, in the last two decades of the twentieth century the rate of increase of mean years of schooling completed leveled off and, correspondingly, the contribution of schooling to labor quality improvements fell by almost half.

Maddison (1995) has compiled estimates of the average years of schooling completed for a number of countries going back to 1820. It is indicative of the sparseness of estimates of schooling completed by the adult population that Maddison reports estimates for only three countries, the U.S., the U.K., and Japan, all the way back to 1820. Maddison’s figures come from other studies and their reliability warrants more critical scrutiny than can be accorded them here. Since systematic census evidence on adult educational attainment did not begin until the mid-twentieth century, estimates of labor force educational attainment prior to 1900 should be treated with some skepticism. Nevertheless, Maddison’s estimates can be used to give a sense of plausible changes in levels of schooling completed over the last century and a half. The average increases in years of schooling per year for various time periods implied by Maddison’s figures are reported in Table 4. Maddison constructed his figures by giving primary education a weight of 1, secondary education a weight of 1.4, and tertiary education a weight of 2, based on evidence on relative earnings for each level of education.

Table 4
Estimates of the Annual Change in Average Years of Schooling per Person aged 15-64 for Selected Countries and Time Periods

Country        1913-1973    1870-1973    1870-1913
U.S.           0.112        0.107        0.092
France         0.0783
Germany        0.053
Netherlands    0.064
U.K.           0.0473       0.0722       0.102
Japan          0.112        0.106        0.090

Source: Maddison (1995), 37, Table 2-3
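
Maddison’s weighting scheme is straightforward to apply. The Python sketch below computes a weighted years-of-schooling figure for a purely hypothetical attainment profile; the numbers are invented for illustration and are not Maddison’s.

# Maddison (1995) weights by level of education, based on relative earnings
WEIGHTS = {"primary": 1.0, "secondary": 1.4, "tertiary": 2.0}

def weighted_years(years_by_level):
    # Weighted average years of schooling per adult, Maddison-style
    return sum(WEIGHTS[level] * years for level, years in years_by_level.items())

# Hypothetical adult averaging 6 primary, 2 secondary and 0.5 tertiary years
print(weighted_years({"primary": 6.0, "secondary": 2.0, "tertiary": 0.5}))  # 9.8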

Table 5
Annual Growth Rates in GDP per Capita

Region 1820-70 1870-1913 1913-50 1950-73 1973-92
12 West European Countries 0.9 1.3 1.2 3.8 1.8
4 Western Offshoots 1.4 1.5 1.3 2.4 1.2
5 South European Countries n.a. 0.9 0.7 4.8 2.2
7 East European Countries n.a. 1.2 1.0 4.0 -0.8
7 Latin American Countries n.a. 1.5 1.9 2.4 0.4
11 Asian Countries 0.1 0.7 -0.2 3.1 3.5
10 African countries n.a. n.a. 1.0 1.8 -0.4

Source: Maddison (1995), 62-63, Table 3-2.

Comparing Tables 2 and 4, it can be observed that the estimated actual changes in years of schooling compiled by Maddison (as well as the average over 55 countries reported by Lichtenberg (1994) for the third quarter of the twentieth century) fall between a lower bound set by a 3-year increase in average schooling spread over a century and an upper bound set by a 6-year increase in average schooling spread over 50 years.

Equations 1) and 2) above assume that each year of schooling of a worker has the same impact on productivity. In fact, it has been common to find that the impact of schooling on productivity varies by level of education. While the rate of return as a percentage of costs tends to be higher for primary than for secondary schooling, which is in turn higher than for tertiary education, this reflects the far lower costs, especially lower foregone earnings, of primary schooling (Psacharopoulos and Patrinos 2004). The earnings premium per year of schooling tends to be higher at higher levels of education, and this earnings premium, rather than the rate of return as a percentage of costs, is the appropriate measure for assessing the contribution of rising schooling to growth (OECD 2001). Accordingly, growth accounting analyses commonly construct schooling indexes weighting years of schooling according to estimates of each year’s impact on earnings (see for example Maddison 1995; Denison 1962). DeLong, Goldin and Katz (2003) use chain-weighted indexes of returns for each level of schooling. A rough approximation of the effect of allowing for variation in economic impact by level of schooling in the analysis in Table 1 is simply to focus on the mid-range 10 percent rate of return as an approximate average of high, medium, and low returns.[6]

The U.S. is notable for rapid expansion in schooling attainment over the twentieth century at both the secondary and tertiary level, while in Europe widespread expansion has tended to focus on the primary and lower secondary level. These differences are evident in Denison’s estimates of the actual differences in educational distribution between the United States and a number of Western European countries in the mid-twentieth century (see Table 6).

Table 6

Percentage Distributions of the Male Labor Force by Years of Schooling Completed

Years of School Completed   United States 1957   France 1954   United Kingdom 1951   Italy 1961
0                           1.4                  0.3           0.2                   13.7
1-4                         5.7                  2.4           0.2                   26.1
5-6                         6.3                  19.2          0.8                   38.0
7                           5.8                  21.1          4.0                   4.2
8                           17.2                 27.8          27.2                  8.1
9                           6.3                  4.6           45.1                  0.7
10                          7.3                  4.1           8.4                   0.7
11                          6.0                  6.5           7.3                   0.6
12                          26.2                 5.4           2.5                   1.8
13-15                       8.3                  5.4           2.2                   3.0
16 or more                  9.5                  3.2           2.1                   3.1

Source: Denison (1967), 80, Table 8-1.

Some segments of the population are likely to have much greater enhancements of productivity from additional years of schooling than others. Insofar as the more able benefit more from schooling than the rest of the ability distribution, putting substantially greater relative emphasis on expansion of higher levels of schooling could considerably augment growth rates over a more egalitarian strategy. This result would follow from a substantially greater premium assigned to higher levels of education. However, some studies of education in developing countries have found that they allocate a disproportionate share of resources to tertiary schooling at the expense of primary schooling, reflecting efforts of elites to benefit their offspring. How far this has impeded economic growth would depend on the disparity in rates of return among levels of education, a point of some controversy in the economics of education literature (Birdsall 1996; Psacharopoulos 1996).

While allocating schooling disproportionately towards the more able in a society may have promoted growth, there would have been corresponding losses from groups systematically excluded from, or restricted in, their access to education by discrimination based on race, gender or religion (Margo 1990). These losses could be attributed in part to the failure to provide individuals of high ability in groups experiencing discrimination with sufficient education to properly utilize their talents. However, historians such as Ashton (1948, 15) have argued that the exclusion of non-Anglicans from English universities prior to the mid-nineteenth century channeled their talents into manufacturing and commerce.

Even if returns have been higher at some levels of education than others, a sustained and substantial increase in labor force quality would seem to require an egalitarian strategy of widespread increase in access to schooling. The U.S. saw rapid increases in access to secondary and tertiary schooling during the twentieth century, while increases in access in Europe were much more limited; the correspondingly greater role of schooling in accounting for economic growth in the U.S. than in Europe (see Denison 1967) points to the importance of an egalitarian strategy in sustaining ongoing increases in aggregate labor force quality.

One would expect an increase in the relative supply of more schooled labor to lead to a decline in the premium to schooling, other things equal. Some recent analyses of the contribution of schooling to growth have allowed for this by specifying a parametric relationship between the distribution of schooling in an economy’s labor force and its impact on output or on a hypothesized intermediary human capital factor (Bils and Klenow 2000).[7]

Direct empirical evidence on trends in the premium to schooling is helpful both to obviate reliance on a theoretical specification and to allow for factors, such as technical change, that may have offset the impact of the increasing supply of schooling. Goldin and Katz (2001) have developed evidence on trends in the premium to schooling over the twentieth century that has allowed them to adjust for these trends in estimating the contribution of schooling to economic growth (DeLong, Goldin and Katz 2003). They find that the premium to schooling fell markedly, roughly halving between 1910 and 1950. However, they also find that this decline was more than offset by an estimated increase of 2.9 years over the same period in the schooling completed by the average worker, so that on net schooling increases improved the productivity of the U.S. workforce. They estimate increases of 0.5 percent per year in labor productivity due to increased educational attainment between 1910 and 1950, relative to an average annual increase in labor productivity of 1.62 percent over the entire period 1915 to 2000. For the period since 1960, DeLong, Goldin and Katz find that the premium to education has increased while the rate of increase in educational attainment first rose and then declined. During this latter period, the increase in labor force quality has declined, as noted above, despite a widening premium to education, owing to the slowdown in the growth of educational attainment.
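
As a rough consistency check, the formula used for Table 3 comes close to the 0.5 percent figure when the 2.9-year rise is spread over the forty years 1910-1950. The 10 percent return per year of schooling in the snippet below is assumed purely for illustration; DeLong, Goldin and Katz work from their own estimated returns.

alpha = 0.7    # labor's share
r = 0.10       # return per year of schooling, assumed here for illustration
delta_s = 2.9  # rise in average years of schooling, 1910-1950
years = 40.0   # length of the period
print(f"{alpha * r * (delta_s / years):.2%} per year")  # ~0.51%, close to the reported 0.5%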

Classifying the Range of Possible Relationships between Schooling and Economic Growth

In generalizing beyond the twentieth-century U.S. experience, allowance should be made both for the role of influences other than education on economic growth and for the possibility that the impact of education on growth varies considerably with the historical situation. Indeed, to understand why and how education might contribute to economic growth over the range of historical experience, it is important to investigate the variation in the impact of education on growth that has occurred historically. In relating education to economic growth, one can distinguish four basic possibilities.

The first is stagnation in both educational attainment and output per head. Arguably, this was the most common situation throughout the world until 1750 and, even after that date, characterized Southern and Eastern Europe through the late nineteenth century, as well as most of Africa, Asia, and Latin America through the mid-twentieth century. The qualifier “arguably” is inserted here because this view almost surely makes inadequate allowance for improvements in the informal acquisition of skills through family transmission and direct experience, as well as through more formal non-schooling channels such as guild-sponsored apprenticeships, an aspect taken up further below. It also makes no allowance for possible long-term improvements in per capita income that took place prior to 1750 but have been inadequately documented. Still, focusing on formal schooling as the source of improvement in labor force quality, there is reason to think that this may have been a pervasive situation throughout much of human history.

The second situation is one in which income per capita rose despite stagnating education levels; factors other than improvements in educational attainment were generating economic growth. England during its industrial revolution, 1750 to 1840, is a notable instance in which some historians have argued that this situation prevailed. During this period, English schooling and literacy rates rose only slightly if at all, while income per capita appears to have risen. Literacy and schooling appear to have been of little use in newly created manufacturing occupations such as cotton spinning. Indeed, literacy rates and schooling actually appear to have declined in some of the most rapidly industrializing areas of England, such as Lancashire (Sanderson 1972; Nicholas and Nicholas 1992). Not all have concurred with this interpretation of the role of education in the English industrial revolution, and the result depends on how educational trends are measured and how education is specified as affecting output (see Laqueur; Crafts 1995; Mitch 1999). Moreover, this makes no allowance for the role of informal acquisition of skills. Boot (1995) argues that in the case of cotton spinners, informal skill acquisition with experience was substantial.

The simplest interpretation of this situation is that factors other than schooling, or human capital more generally, contributed to economic growth. The clearest non-human-capital explanatory factor would perhaps be physical capital accumulation; another might be foreign trade. However, if one turns to technological advance as a driving force, then this raises the possibility that human capital, at least broadly defined, was if not the underlying force then at least a central contributing factor in the industrial revolution. The argument for this possibility is that the improvements in knowledge and skills associated with technological advance are embodied in human agents and hence are forms of human capital. Recent work by Mokyr (2002) suggests this interpretation. Nevertheless, the British industrial revolution remains a prominent instance in which human capital, conventionally defined as schooling, stagnated in the presence of a notable upsurge in economic growth. A less extreme case is provided by the post-World War II European catch-up with the United States: Denison’s (1967) growth accounting analysis indicates that this catch-up occurred despite slower European increases in educational attainment, with other factors offsetting the difference. Historical instances such as the British industrial revolution call into question the common assumption that education is a necessary prerequisite for economic growth (see Mitch 1990).

The third situation is one in which rising educational attainment corresponds with rising rates of economic growth. This is the situation one would expect to prevail if education contributes to economic productivity and any negative factors are not sufficient to offset this influence. One sub-set of instances would be those in which very large and reasonably compressed increases in the educational attainment of the labor force occurred. One important example is the twentieth-century U.S., with the high school movement followed by increases in college attendance, as noted above. Others are certain East Asian economies since World War II, as documented in Young’s (1995) growth accounting analysis of the substantial contributions of their rising educational attainment to their rapid growth rates. Another sub-set of cases corresponds to more modest increases in schooling, as in countries experiencing schooling increases focused at the elementary level, such as much of Western Europe over the nineteenth century. The so-called literacy campaigns of the early and mid-twentieth century, as in the Soviet Union and Cuba (see Arnove and Graff, eds., 1987), with modest improvements in educational attainment compressed into just a few decades, could also be viewed as fitting into this sub-category. However, whether there were increases in output per capita corresponding to these more modest increases in educational attainment remains to be established.

The fourth situation is one in which economic growth has stagnated despite the presence of marked improvements in educational attainment. Possible examples of this situation would include the early rise of literacy in some Northern European areas, such as Scotland and Scandinavia, in the seventeenth and eighteenth centuries (see Houston 1988; Sandberg 1979) and some regions of Africa and Asia in the later twentieth century (see Pritchett 2001). One explanation of this situation is that it reflects instances in which any positive impact of educational attainment is small relative to other influences having an adverse impact. But one can also interpret it as reflecting situations in which incentive structures direct educated people into destructive and transfer activities inimical to economic growth (see North 1990; Baumol 1990; Murphy, Shleifer, and Vishny 1991).

Cross-country studies of the relationship between changes in schooling and growth since 1960 have yielded conflicting results which in itself could be interpreted as supporting the presence of some mix of the four situations just surveyed. A number of studies have found at best a weak relationship between changes in schooling and growth (Pritchett 2001; Bils and Klenow 2000); others have found a stronger relationship (Topel 1999). Much seems to depend on issues of measurement and on how the relationship between schooling and output is specified (Temple 2001b; Woessmann 2002, 2003).

The Determinants of Schooling

Whether education contributes to economic growth can be seen as depending on two factors, the extent to which educational levels improve over time and the impact of education on economic productivity. The first factor is a topic for extended discussion in its own right and no attempt will be made to consider it in depth here. Factors commonly considered include rising income per capita, distribution of political power, and cultural influences (Goldin 2001, Lindert 2004, Mariscal and Sokoloff 2000, Easterlin 1981; Mitch 2004). The issue of endogeneity of determination has often been raised with respect to the determinants of schooling. Thus, it is plausible that rising income contributes to rising levels of schooling and that the spread of mass education can influence the distribution of political power as well as the reverse. While these are important considerations, they are sufficiently complex to warrant extended attention in their own right.[8]

Influences on the Economic Impact of Schooling

Insofar as schooling improves general human intellectual capacities, it could be seen as having a universal impact irrespective of context. However, Rosenzweig (1995; 1999) has noted that even the general influence of education on individual productivity or adaptability depends on the complexity of the situation. He notes that for agricultural tasks primarily involving physical exertion, no difference in productivity is evident between workers according to education levels; in more complex allocative decisions, however, education does enhance performance. This could account for findings that literacy rates were low among cotton spinners in the British industrial revolution despite findings of substantial premiums to experience (Sanderson 1972; Boot 1995). However, other studies have found literacy to have a substantial positive impact on labor productivity in cotton textile manufacture in the U.S., Italy, and Japan (Bessen 2003; A’Hearn 1998; Saxonhouse 1977) and have suggested a connection between literacy and labor discipline.

A more macro influence is the changing sectoral composition of the economy. It is common to suggest that the service and manufacturing sectors have more functional uses for educated labor than the agricultural sector, and hence that the shift from agriculture to industry will lead to greater use of educated labor and in turn require a more educated labor force. However, there are no clear theoretical or empirical grounds for the claim that agriculture makes less use of educated labor than other sectors of the economy. In fact, farmers have often had relatively high literacy rates, and there are more obvious functional uses for education in agriculture, in keeping accounts and keeping up with technological developments, than in manufacturing. Nilsson et al (1999) argue that the process of enclosure in nineteenth-century Sweden, with the increased demands for reading and writing land transfer documents that this entailed, increased the value of literacy in the Swedish agrarian economy. The finding noted above that those in cotton textile occupations associated with early industrialization in Britain had relatively low literacy rates is one indication of the lack of any clear-cut ranking across broad economic sectors in the use of educated labor.

Changes in the organization of decision making within major sectors, as well as changes in the composition of production within sectors, are more likely to have had an impact on demands for educated labor. Thus, within agriculture, the extent of centralization or decentralization of decision making, that is, the extent to which farm work forces consisted of farmers with large numbers of hired workers or of large numbers of peasants each with scope for making allocative decisions, is likely to have affected the uses made of educated labor. Within manufacturing, a given country’s endowment of skilled relative to unskilled labor has been seen as influencing the extent to which openness to trade increases skill premiums, though this entails endogenous determination (Wood 1995).

Technological advance would have tended to boost the demand for more skilled and educated labor if technological advance and skills are complementary, as is often asserted.

However, there is no theoretical reason why technology and skills need be complementary; indeed, concepts of directed technological change or induced innovation suggest that in the presence of relatively high skill premiums, technological advance would be skill-saving rather than skill-using. Goldin and Katz (1998) have argued that the shift from the factory to continuous processing and batch production associated with the shift of power sources from steam to electricity in the early twentieth century led to rising technology-skill complementarity in U.S. manufacturing. It remains to be established how general this trend has been. It could be related to the distinction between the dominance in the nineteenth-century United States of extensive growth, due to the growth of factors of production such as labor and capital, and the increasing importance of intensive growth in the twentieth century. Intensive growth is often associated with technological advance and a presumed enhanced value for education (Abramovitz and David 2000). Some analysts have emphasized the importance of capital-skill complementarity. For example, Galor and Moav (2003) point to the level of the physical capital stock as a key influence on the return to human capital investment; they suggest that once physical capital accumulation surpassed a certain level, the positive impact of human capital accumulation on the return to physical capital became large enough that owners of physical capital came to support the rise of mass schooling. They cite schooling reform in early twentieth-century Britain as an example.

Even sharp declines in the premiums to schooling do not preclude a significant impact of education on economic growth. DeLong, Goldin and Katz’s (2003) growth accounting analysis for the twentieth century U.S. makes the point that even at modest positive returns to schooling on the order of 5 percent per year of schooling, with large enough increases in educational attainment, the contribution to growth can be substantial.

Human Capital

Economists have generalized the impact of schooling on labor force quality into the concept of human capital. Human capital refers to the investments that human beings make in themselves to enhance their economic productivity. These investments can take many forms, including not only schooling but also apprenticeship, a healthy diet, and exercise, among other possibilities. Some economists have even suggested that more amorphous societal factors such as trust, institutional tradition, and technological know-how and innovation can all be viewed as forms of human capital (Temple 2001a; Topel 1999; Mokyr 2002). Thus broadly defined, human capital would appear to be a prime candidate for explaining much of the difference across nations and over time in output and economic growth. However, gaining much insight into the actual magnitudes and channels of influence by which human capital might affect economic growth requires specifying both the nature and determinants of human capital and how human capital affects the aggregate production of an economy.

Much of the literature on human capital and growth makes the implicit assumption that some sort of numerical scale exists for human capital, even if multidimensional and even if unobservable. This in turn implies that it is meaningful to relate levels and changes of human capital to levels of income per capita and rates of economic growth. Given the multiplicity of factors that influence human knowledge and skill and in turn how these influence labor productivity, difficulties would seem likely to arise with attempts to measure aggregate human capital similar to those that have arisen with attempts to specify and measure the nature of human intelligence. Woessmann (2002, 2003) provides useful surveys of some of the issues involved in attempting to specify human capital at the aggregate level appropriate for relating it to economic growth.

One can distinguish between approaches to the measurement of human capital that focus on schooling, as in the discussion above, and those that take a broader view. Broad view approaches try to capture all investments that may have improved human productivity from whatever source, including not just schooling but other productivity enhancing investments, such as on-the-job training. The basic premise of broad view approaches is that for an aggregate economy, the income going to labor over and above what that labor would earn if it were paid the income of an unskilled worker can be viewed as human capital. This measure can be constructed in various ways including as a ratio using unskilled labor earnings as the denominator as in Mulligan and Sala-I-Martin (1997) or using the share of labor income not going as compensation for unskilled labor as in Crafts (1995) and Mitch (2004). Mulligan and Sala-I-Martin (2000) point to some of the major index number problems that can arise in using this approach to aggregate heterogeneous workers.

Crafts and Mitch find that for Britain during its late eighteenth and early nineteenth century industrial revolution, between one-sixth and one-fourth of income per capita can be attributed to human capital measured as the share of labor income not going as compensation for unskilled labor.
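
A minimal Python sketch of this broad-view measure follows; the figures are hypothetical and chosen only to land inside the one-sixth to one-fourth range that Crafts and Mitch report.

def human_capital_share(labor_income, workers, unskilled_wage, total_income):
    # Share of total income attributable to labor earnings above the unskilled wage
    unskilled_bill = workers * unskilled_wage
    return (labor_income - unskilled_bill) / total_income

# Hypothetical economy: labor earns 60 of 100 income units; paying all 50 workers
# the unskilled wage of 0.8 would cost 40, leaving a skill premium of 20, i.e. 1/5
print(human_capital_share(labor_income=60.0, workers=50, unskilled_wage=0.8,
                          total_income=100.0))  # 0.2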

One approach that has been taken recently to estimate the role of human capital differences in explaining international differences in income per capita is to consider changes in immigrant earnings between origin and destination countries, along with differences between immigrant and native workers in the destination country. Olson (1996) suggested that the large increase in earnings commonly observed when immigrants move from a low-income to a high-income country points to a small role for human capital in explaining the wide variation in per capita income across countries. Hendricks (2002) has used differences between immigrant and native earnings in the U.S. to estimate the contribution of otherwise unobserved skill differences to differences in income per capita across countries and finds that they account for only a small part of the latter. Hendricks’ approach raises the issue of whether there could be long-term increases in otherwise unobserved skills that could have contributed to economic growth.

The Informal Acquisition of Human Capital

One possible source of such skills is through the informal acquisition of human capital through on-the-job experience. Insofar as work has been common from early adolescence onwards, the issue arises of why the aggregate stock of skills acquired through experience would vary over time and thus influence rates of economic growth. Some types of on-the-job experience which contribute to economic productivity, such as apprenticeship, may entail an opportunity cost and aggregate trends in skill accumulation will be influenced by societal willingness to incur such opportunity costs.

Insofar as schooling continues through adolescence, it can interfere with the accumulation of workforce experience. DeLong, Goldin and Katz (2003) note the tradeoff between rising average years of schooling completed and decreasing years of labor force experience in influencing the quality of the U.S. labor force in the last half of the twentieth century. Connolly (2004) has found that informal experience played a relatively greater role in Southern economic growth than in other regions of the United States.

Hansen (1997) has also distinguished the academically-oriented secondary schooling the United States developed in the late nineteenth and early twentieth century from the vocationally-oriented schooling and apprenticeship system that Germany developed over the same time period. Goldin (2001) argues that in the United States the educational system developed general abilities suitable for the greater opportunities for geographical and occupational mobility that prevailed there, while specific vocational training was more suitable for the more restricted mobility opportunities in Germany.

Little evidence exists on whether long-term trends in informal opportunities for skill acquisition have influenced growth rates. However, Smith’s (1776) view of the importance of the division of labor in influencing productivity would suggest that the impact of trends in these opportunities may well have been quite sizable.

Externalities from Education

Economists commonly claim that education yields benefits to society over and above the impact on labor market productivity perceived by the person receiving the education. These benefits can include impacts on economic productivity, such as impacts on technological advance. They can also include non-labor-market benefits. Thus McMahon (2002, 11), in his assessment of the social benefits of education, includes not only direct effects on economic productivity but also impacts on a) population growth rates and health, b) democratization, political stability, and human rights, c) the environment, d) reduction of poverty and inequality, e) crime and drug use, and f) labor force participation. While these effects may appear to involve primarily non-market activity, and thus would not be reflected in national output measures and growth rates, factors such as political stability, democratization, population growth, and health have obvious consequences for prospects for long-term growth. However, allowance should be made for the simultaneous influence of the distribution of political power and of life expectancy on societal investments in schooling.

For the period since 1960, numerous studies have employed cross-country variation in various estimates of human capital and income per capita to estimate directly the impact of human capital on levels of income per capita and growth. A central goal of many such estimates is to see whether there are externalities to education, affecting output over and above the private returns estimated from micro data. The results have been conflicting, and this has been attributed not only to problems of measurement error but also to differences in the specification of human capital and its impact on growth. There does not appear to be strong evidence of large positive externalities to human capital (Temple 2001a). However, McMahon (2004) reports some empirical specifications that yield substantial indirect long-run effects.

For the period before 1960, limits on the availability of data on schooling and income have restricted the use of this empirical regression approach, so any discussion of the impact of externalities of education on production is considerably more conjectural. The central role of government, religious, and philanthropic agencies in the provision of schooling suggests the presence of externalities. Politicians and educators more frequently justified government and philanthropic provision of schooling by its impacts on religious and moral behavior than by any market failure resulting in sub-optimal provision of schooling from the standpoint of maximizing labor productivity. Thus Adam Smith, in his discussion of mass schooling in The Wealth of Nations, places more emphasis on its value to the state in enhancing orderliness and decency while reducing the propensity to popular superstition than on its immediate value in enhancing the economic productivity of the individual worker.

The Impact of the Level of Human Capital on Rates of Economic Growth

The approaches considered thus far relate changes in educational attainment of the labor force to changes in output per worker. An alternative, though not mutually exclusive, approach is to relate the level of educational attainment of an economy’s labor force to its rate of economic growth. The argument for doing so is that a high but unchanging level of educational attainment should contribute to growth by facilitating creativity, innovation and adaptation to change as well as facilitating the ongoing maintenance and improvement of skill in the workforce. Topel (1999) has argued that there may not be any fundamental difference between the two types of approach insofar as ongoing sources of productivity advance and adaptation to change could be viewed as reflecting ongoing improvements in human capital. Nevertheless, some empirical studies based on international data for the late twentieth century have found that a country’s level of educational attainment has a much stronger impact on its rate of economic growth than its rate of improvement in educational attainment (Benhabib and Spiegel 1994).

The paucity of data on schooling attainment has limited the empirical examination of the relationship between levels of human capital and economic growth for periods before the late twentieth century. However, Sandberg (1982) has argued, based on a descriptive comparison of economies in various categories, that those with high levels of schooling in 1850 subsequently experienced faster rates of economic growth. Some studies, such as O’Rourke and Williamson (1997) and Foreman-Peck and Lains (1999), have found that high levels of schooling and literacy have contributed to more rapid rates of convergence for European countries in the late nineteenth century and at the state level for the U.S. over the twentieth century (Connolly 2004).

Bowman and Anderson (1963), a much earlier study based on international evidence for the mid-twentieth century, can be interpreted in the spirit of relating levels of education to subsequent income growth. Their reading of the cross-country relationship between literacy rates and per capita income at mid-century was that a threshold of 40 percent adult literacy was required for a country to have a per capita income above 300 dollars (in 1955 prices). Some have ahistorically projected this literacy threshold back to earlier centuries, although the Bowman and Anderson proposal was intended to apply to mid-twentieth-century development patterns.

The mechanisms by which the level of schooling would influence the rate of economic growth are difficult to establish. One can distinguish two general possibilities. The first is that higher levels of educational attainment facilitate adaptation and responsiveness to change throughout the workforce. This would be especially important where a large percentage of workers are in decision-making positions, as in an economy composed largely of small farmers and other small enterprises. The finding of Foster and Rosenzweig (1996) for late twentieth-century India that the rate of return to schooling is higher during periods of more rapid technological advance in agriculture is consistent with this. Likewise, Nilsson et al (1999) find that literacy was important for nineteenth-century Swedish farmers in dealing with enclosure, an institutional change. The second possibility is that higher levels of educational attainment increase the potential pool from which an elite group responsible for innovation can be recruited. This could be viewed as applying specifically to scientific and technical innovation, as in Mokyr (2002) and Jones (2002), but also to technological and industrial leadership more generally (Nelson and Wright 1992) and to facilitating advancement in society by ability irrespective of social origins (Galor and Tsiddon 1997). Recently, Labuske and Baten (2004) have found that international rates of patenting are related to secondary enrollment rates.

Two issues have arisen in the recent theoretical literature on the relationship between the level of human capital and rates of economic growth. First, Lucas (1988), in an influential model of the impact of human capital on growth, specifies that the rate of growth of human capital depends on the initial level of human capital; in other words, parents’ and teachers’ human capital has a direct positive influence on the rate of growth of learners’ human capital. This specification allows for ongoing and unbounded growth of human capital and, through this, its ongoing contribution to economic growth. Such ongoing growth of human capital could occur through improvements in the quality of schooling or through enhanced learning from parents and other informal settings. While it might be plausible to suppose that improved education of teachers will enhance their effectiveness with learners, it seems less plausible to suppose that this enhanced effectiveness will increase without bound in proportion to initial levels of education (Lord 2001, 82).
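
In stylized form (the notation below is the standard textbook rendering of the Lucas model, not notation used elsewhere in this survey), the specification makes the growth of human capital proportional to its current level:

h'(t) = δ(1 − u)h(t), which implies h(t) = h(0)e^(δ(1 − u)t),

where u is the fraction of time devoted to production, 1 − u the fraction devoted to accumulating human capital, and δ a productivity parameter. Because h enters its own accumulation equation linearly, human capital, and with it output, can grow without bound at a constant rate, which is the feature questioned above.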

A second issue is that insofar as higher levels of human capital contribute to economic growth through increases in research and development and innovative activity more generally, one would expect the presence of scale effects: economies with larger populations, holding constant their level of human capital per person, should benefit from more overall innovative activity simply because they have more people engaged in it. Jones (1995) has pointed out that such scale effects seem implausible if one looks at the time-series relationship between rates of economic growth and the numbers engaged in innovative activity. In recent decades, the growth in the number of scientists, engineers, and others engaged in innovative activity has far outstripped the growth of productivity and other direct indicators of innovation’s impact. Thus, one should allow for diminishing returns in the relationship between levels of education and technological advance.

Thus, as with schooling externalities, considering the impact of levels of education on growth offers numerous channels of influence leaving the challenge for the historian of ascertaining their quantitative importance in the past.

Conclusion

This survey has considered some of the basic ways in which the rise of mass education has contributed to economic growth in recent centuries. Given their potential influence on labor productivity, levels of and changes in schooling, and in human capital more generally, have the potential to explain a large share of increases in per capita output over time. However, increases in mass schooling seem to explain a major share of economic growth only over relatively short periods, with a more modest impact over longer time horizons. In some situations, such as the United States in the twentieth century, improvements in the schooling of the labor force appear to have made substantial contributions to economic growth. Yet schooling should be seen as neither a necessary nor a sufficient condition for generating economic growth. Factors other than education can contribute to economic growth, and in their absence it is not clear that schooling by itself can do so. Moreover, there are likely limits on the extent to which average years of schooling of the labor force can expand, although improvement in the quality of schooling is not so obviously bounded. Perhaps the most obvious avenue through which education has contributed to economic growth is by raising the rate of technological change. But as has been noted, there are numerous other possible channels of influence, ranging from political stability and property rights to life expectancy and fertility. The diversity of these channels points to both the challenges and the opportunities in examining the historical connections between education and economic growth.

References

Aaronson, Daniel and Daniel Sullivan. “Growth in Worker Quality.” Economic Perspectives, Federal Reserve Bank of Chicago 25, no. 4 (2001): 53-74.

Abramovitz, Moses and Paul David. “American Macroeconomic Growth in the Era of Knowledge-Based Progress: The Long-Run Perspective.” In Cambridge Economic History of the United States, Vol. III, The Twentieth Century, edited by Stanley L. Engerman and Robert E. Gallman, 1-92. New York: Cambridge University Press, 2000.

A’Hearn, Brian. “Institutions, Externalities, and Economic Growth in Southern Italy: Evidence from the Cotton Textile Industry, 1861-1914.” Economic History Review 51, no. 4 (1998): 734-62.

Arnove, Robert F. and Harvey J. Graff, editors. National Literacy Campaigns: Historical and Comparative Perspectives. New York: Plenum Press, 1987.

Ashton, T.S. The Industrial Revolution, 1760-1830. Oxford: Oxford University Press, 1948.

Barro, Robert J. “Notes on Growth Accounting.” NBER Working Paper 6654, 1998.

Baumol, William. “Entrepreneurship: Productive, Unproductive, and Destructive.” Journal of Political Economy 98, no. 5, part 1 (1990): 893-921.

Benhabib, Jess and Mark M. Spiegel. “The Role of Human Capital in Economic Development: Evidence from Aggregate Cross-country Data.” Journal of Monetary Economics 34 (1994): 143-73.

Bessen, James. “Technology and Learning by Factory Workers: The Stretch-Out at Lowell, 1842.” Journal of Economic History 63, no. 1 (2003): 33-64.

Bils, Mark and Peter J. Klenow. “Does Schooling Cause Growth?” American Economic Review 90, no. 5 (2000): 1160-83.

Birdsall, Nancy. “Public Spending on Higher Education in Developing Countries: Too Much or Too Little?” Economics of Education Review 15, no. 4 (1996): 407-19.

Blaug, Mark. An Introduction to the Economics of Education. Harmondsworth, England: Penguin Books, 1970.

Boot, H.M. “How Skilled Were Lancashire Cotton Factory Workers in 1833?” Economic History Review 48, no. 2 (1995): 283-303.

Bowman, Mary Jean and C. Arnold Anderson. “Concerning the Role of Education in Development.” In Old Societies and New States: The Quest for Modernity in Africa and Asia, edited by Clifford Geertz. Glencoe, IL: Free Press, 1963.

Broadberry, Stephen. “Human Capital and Productivity Performance: Britain, the United States and Germany, 1870-1990.” In The Economic Future in Historical Perspective, edited by Paul A. David and Mark Thomas. Oxford: Oxford University Press, 2003.

Conlisk, John. “Comments” on Griliches. In Education, Income, and Human Capital, edited by W. Lee Hansen. New York: Columbia University Press, 1970.

Connolly, Michelle. “Human Capital and Growth in the Postbellum South: A Separate but Unequal Story.” Journal of Economic History 64, no.2 (2004): 363-99.

Crafts, Nicholas. “Exogenous or Endogenous Growth? The Industrial Revolution Reconsidered.” Journal of Economic History 55, no. 4 (1995): 745-72.

Davies, James and John Whalley. “Taxes and Capital Formation: How Important Is Human Capital?” In National Saving and Economic Performance, edited by B. Douglas Bernheim and John B. Shoven, 163-97. Chicago: University of Chicago Press, 1991.

DeLong, J. Bradford, Claudia Goldin and Lawrence F. Katz. “Sustaining U.S. Economic Growth.” In Agenda for the Nation, edited by Henry Aaron, James M. Lindsay, and Pietro S. Niyola, 17-60. Washington, D.C.: Brookings Institution Press, 2003.

Denison, Edward F. The Sources of Economic Growth in the United States and the Alternatives before Us. New York: Committee for Economic Development, 1962.

Denison, Edward F. Why Growth Rates Differ: Postwar Experience in Nine Western Countries. Washington, D.C.: Brookings Institution Press, 1967.

Easterlin, Richard. “Why Isn’t the Whole World Developed?” Journal of Economic History 41, no. 1 (1981): 1-19.

Foreman-Peck, James and Pedro Lains. “Economic Growth in the European Periphery, 1870-1914.” Paper presented at the Third Conference of the European Historical Economics Society, Lisbon, Portugal, 1999.

Foster, Andrew D. and Mark R. Rosenzweig. “Technical Change and Human-capital Returns and Investments: Evidence from the Green Revolution.” American Economic Review 86, no. 4 (1996): 931-53.

Galor, Oded and Daniel Tsiddon. “The Distribution of Human Capital, Technological Progress and Economic Growth.” Journal of Economic Growth 2, no. 1 (1997): 93-124.

Galor, Oded and Omer Moav. “Das Human Kapital.” Brown University Working Paper No. 2000-17, July 2003.

Goldin, Claudia. “The Human Capital Century and American Leadership: Virtues of the Past.” Journal of Economic History 61, no. 2 (2001): 263-92.

Goldin, Claudia and Lawrence F. Katz. “The Origins of Technology-Skill Complementarity.” Quarterly Journal of Economics 113, no. 3 (1998): 693-732.

Goldin, Claudia and Lawrence F. Katz. “Decreasing (and Then Increasing) Inequality in America: A Tale of Two Half-Centuries.” In The Causes and Consequences of Increasing Inequality, edited by Finis Welch, 37-82. Chicago: University of Chicago Press, 2001.

Graff, Harvey J. The Legacies of Literacy: Continuities and Contradictions in Western Culture and Society. Bloomington: Indiana University Press, 1987.

Griliches, Zvi. “Notes on the Role of Education in Production Functions and Growth Accounting.” In Education, Income, and Human Capital, edited by W. Lee Hansen. New York: Columbia University Press, 1970.

Hansen, Hal. “Caps and Gowns: Historical Reflections on the Institutions that Shaped Learning for and at Work in Germany and the United States, 1800-1945.” Ph.D. dissertation, University of Wisconsin, 1997.

Hanushek, Eric and Dennis D. Kimko. “Schooling, Labor-Force Quality, and the Growth of Nations.” American Economic Review 90, no. 3 (2000): 1184-1208.

Hendricks, Lutz. “How Important Is Human Capital for Development? Evidence from Immigrant Earnings.” American Economic Review 92, no. 1 (2002): 198-219.

Ho, Mun S. and Dale Jorgenson. “The Quality of the U.S. Work Force, 1948-1995.” Harvard University Working Paper, 1999.

Houston, R.A. Literacy in Early Modern Europe: Culture and Education, 1500-1800. London: Longman, 1988.

Jones, Charles. “R&D-Based Models of Economic Growth.” Journal of Political Economy 103, no. 4 (1995): 759-84.

Jones, Charles. “Sources of U.S. Economic Growth in a World of Ideas.” American Economic Review 92, no. 1 (2002): 220-39.

Jorgenson, Dale W. and Barbara M. Fraumeni. “The Accumulation of Human and Nonhuman Capital, 1948-84.” In The Measurement of Saving, Investment, and Wealth, edited by R. E. Lipsey and H. S. Tice. Chicago: University of Chicago Press, 1989.

Jorgenson, Dale W. and Zvi Griliches. “The Explanation of Productivity Change.” Review of Economic Studies 34, no. 3 (1967): 249-83.

Kendrick, John W. “How Much Does Capital Explain?” In Explaining Economic Growth: Essays in Honour of Angus Maddison, edited by Adam Szirmai, Bart van Ark and Dirk Pilat, 129-45. Amsterdam: North Holland, 1993.

Krueger, Alan B. and Mikael Lindahl. “Education for Growth: Why and for Whom?” Journal of Economic Literature 39, no. 4 (2001): 1101-36.

Krueger, Anne O. “Factor Endowments and per Capita Income Differences among Countries.” Economic Journal 78, no. 311 (1968): 641-59.

Kuznets, Simon. Modern Economic Growth: Rate, Structure and Spread. New Haven: Yale University Press, 1966.

Labuske, Kirsten and Joerg Baten. “Patenting Abroad and Human Capital Formation.” University of Tubingen Working Paper, 2004.

Laqueur, Thomas. “Debate: Literacy and Social Mobility in the Industrial Revolution in England.” Past and Present 64, no. 1 (1974): 96-107.

Lichtenberg, Frank R. “Have International Differences in Educational Attainments Narrowed?” In Convergence of Productivity: Cross-national Studies and Historical Evidence, edited by William J. Baumol, Richard R. Nelson, and Edward N. Wolff, 225-42. New York: Oxford University Press, 1994.

Lindert, Peter H. Growing Public: Social Spending and Economic Growth since the Eighteenth Century. Cambridge: Cambridge University Press, 2004.

Lord, William A. Household Dynamics: Economic Growth and Policy. New York: Oxford University Press, 2002.

Lucas, Robert E., Jr. “On the Mechanics of Economic Development.” Journal of Monetary Economics 22, no. 1 (1988): 3-42.

Maddison, Angus. Monitoring the World Economy, 1820-1992. Paris: OECD, 1995.

Maddison, Angus. The World Economy: A Millennial Perspective. Paris: OECD, 2001.

Margo, Robert A. Race and Schooling in the South, 1880-1950: An Economic History. Chicago: University of Chicago Press, 1990.

Mariscal, Elisa and Kenneth Sokoloff. “Schooling, Suffrage, and the Persistence of Inequality in the Americas, 1800-1945.” In Political Institutions and Economic Growth in Latin America: Essays in Policy, History, and Political Economy, edited by Stephen Haber, 159-218. Stanford: Hoover Institution Press, 2000.

Matthews, R.C.O., C. H. Feinstein, and J.C. Odling-Smee. British Economic Growth, 1856-1973. Stanford: Stanford University Press, 1982.

McMahon, Walter. Education and Development: Measuring the Social Benefits. Oxford: Oxford University Press, 2002.

Mitch, David. “The Spread of Literacy in Nineteenth-Century England.” Ph.D. dissertation, University of Chicago, 1982.

Mitch, David. “Education and Economic Growth: Another Axiom of Indispensability?” In Education and Economic Development since the Industrial Revolution, edited by Gabriel Tortella. Valencia: Generalitat Valencia, 1990. Reprinted in The Economic Value of Education: Studies in the Economics of Education, edited by Mark Blaug, 385-401. Cheltenham, UK: Edward Elgar, 1992.

Mitch, David. “The Role of Education and Skill in the British Industrial Revolution.” In The British Industrial Revolution: An Economic Perspective (second edition), edited by Joel Mokyr, 241-79. Boulder, CO: Westview Press, 1999.

Mitch, David. “Education and Skill of the British Labour Force.” In The Cambridge Economic History of Modern Britain, Vol. 1, Industrialization, 1700-1860, edited by Roderick Floud and Paul Johnson, 332-56. Cambridge: Cambridge University Press, 2004a.

Mitch, David. “School Finance.” In International Handbook on the Economics of Education, edited by Geraint Johnes and Jill Johnes, 260-97. Cheltenham, UK: Edward Elgar, 2004b.

Mokyr, Joel. The Gifts of Athena: Historical Origins of the Knowledge Economy. Princeton: Princeton University Press, 2002.

Mulligan, Casey B. and Xavier Sala-I-Martin. “A Labor Income-based Measure of the Value of Human Capital: An Application to the States of the United States.” Japan and the World Economy 9, no. 2 (1997): 159-91.

Mulligan, Casey B. and Xavier Sala-I-Martin. “Measuring Aggregate Human Capital.” Journal of Economic Growth 5, no. 3 (2002): 215-52.

Murphy, Kevin M., Andrei Shleifer, and Robert W. Vishny. “The Allocation of Talent: Implications for Growth.” Quarterly Journal of Economics 106, no. 2 (1991): 503-30.

National Center for Education Statistics. 120 Years of American Education: A Statistical Portrait. Washington, D.C.: U.S. Department of Education, Office of Educational Research and Improvement, 1993.

Nelson, Richard R. and Gavin Wright. “The Rise and Fall of American Technological Leadership: The Postwar Era in Historical Perspective.” Journal of Economic Literature 30, no. 4 (1992): 1931-64.

Nicholas, Stephen and Jacqueline Nicholas. “Male Literacy, ‘Deskilling’ and the Industrial Revolution.” Journal of Interdisciplinary History 23, no. 1 (1992): 1-18.

Nilsson, Anders, Lars Pettersson and Patrick Svensson. “Agrarian Transition and Literacy: The Case of Nineteenth-century Sweden.” European Review of Economic History 3, no. 1 (1999): 79-96.

North, Douglass C. Institutions, Institutional Change and Economic Performance. Cambridge: Cambridge University Press, 1990.

OECD. Education at a Glance: OECD Indicators. Paris: OECD, 2001.

Olson, Mancur, Jr. “Big Bills Left on the Sidewalk: Why Some Nations Are Rich and Others Poor.” Journal of Economic Perspectives 10, no. 2 (1996): 3-24.

O’Rourke, Kevin and Jeffrey G. Williamson. “Around the European Periphery, 1870-1913: Globalization, Schooling and Growth.” European Review of Economic History 1, no. 2 (1997): 153-90.

Pritchett, Lant. “Where Has All the Education Gone?” World Bank Economic Review 15, no. 3 (2001): 367-91.

Psacharopoulos, George. “The Contribution of Education to Economic Growth: International Comparisons.” In International Comparisons of Productivity and Causes of the Slowdown, edited by John W. Kendrick. Cambridge, MA: Ballinger Publishing, 1984.

Psacharopoulos, George. “Public Spending on Higher Education in Developing Countries: Too Much Rather than Too Little.” Economics of Education Review 15, no. 4 (1996): 421-22.

Psacharopoulos, George and Harry Anthony Patrinos. “Human Capital and Rates of Return.” In International Handbook on the Economics of Education, edited by Geraint Johnes and Jill Johnes, 1-57. Cheltenham, UK: Edward Elgar, 2004.

Rangazas, Peter. “Schooling and Economic Growth: A King-Rebelo Experiment with Human Capital.” Journal of Monetary Economics 46, no. 2 (2000): 397-416.

Rosenzweig, Mark. “Why Are There Returns to Schooling?” American Economic Review Papers and Proceedings 85, no. 2 (1995): 69-75.

Rosenzweig, Mark. “Schooling, Economic Growth and Aggregate Data.” In Development, Duality and the International Economic Regime, edited by Gary Saxonhouse and T.N. Srinivasan, 107-29. Ann Arbor: University of Michigan Press, 1997.

Sandberg, Lars. “The Case of the Impoverished Sophisticate: Human Capital and Swedish Economic Growth before World War I.” Journal of Economic History 39, no. 1 (1979): 225-41.

Sandberg, Lars. “Ignorance, Poverty and Economic Backwardness in the Early Stages of European Industrialization: Variations on Alexander Gerschenkron’s Grand Theme.” Journal of European Economic History 11 (1982): 675-98.

Sanderson, Michael. “Literacy and Social Mobility in the Industrial Revolution in England.” Past and Present 56 (1972): 75-104.

Saxonhouse, Gary. “Productivity Change and Labor Absorption in Japanese Cotton Spinning, 1891-1935.” Quarterly Journal of Economics 91, no. 2 (1977): 195-220.

Smith, Adam. An Inquiry into the Nature and Causes of the Wealth of Nations. Chicago: University of Chicago Press, [1776] 1976.

Temple, Jonathan. “Growth Effects of Education and Social Capital in the OECD Countries.” OECD Economic Studies 33, no. 1 (2001a): 58-96.

Temple, Jonathan. “Generalizations That Aren’t? Evidence on Education and Growth.” European Economic Review 45, no. 4-6 (2001b): 905-18.

Topel, Robert. “Labor Markets and Economic Growth.” In Handbook of Labor Economics, Volume 3, edited by Orley Ashenfelter and David Card, 2943-84. Amsterdam: Elsevier Science, 1999.

Woessmann, Ludger. Schooling and the Quality of Human Capital. Berlin: Springer, 2002.

Woessmann, Ludger. “Specifying Human Capital.” Journal of Economic Surveys 17, no. 3 (2003): 239-70.

Wood, Adrian. “How Trade Hurt Unskilled Workers.” Journal of Economic Perspectives 9, no. 3 (1995): 57-80.

Young, Alwyn. “The Tyranny of Numbers: Confronting the Statistical Realities of the East Asian Growth Experience.” Quarterly Journal of Economics 110, no. 3 (1995): 641-80.


[1] I have received helpful comments on this essay from Mac Boot, Claudia Goldin, Bill Lord, Lant Pritchett, Robert Whaples, and an anonymous referee. At an earlier stage in working through some of this material, I benefited from a quite useful conversation with Nick Crafts. However, I bear sole responsibility for remaining errors and shortcomings.

[2] For a detailed survey of trends in schooling in the early modern and modern period see Graff (1987).

[3] See Barro (1998) for a brief intellectual history of growth accounting.

[4] Blaug (1970) provides an accessible, detailed critique of the assumptions behind Denison’s growth accounting approach and Topel (1999) provides a further discussion of the problems of using a growth accounting approach to measure the contribution of education, especially those due to omitting social externalities.

[5] By using a Cobb-Douglas specification of the aggregate production function, one can derive the following expression for the ratio between final and initial national income per worker due to increases in average school years completed between the two time periods, t = 0 and t = 1.

Start with the aggregate production function specification:

Y = A K^{1-α} [(1+r)^S L]^α

Dividing through by L:

Y/L = A (K/L)^{1-α} [(1+r)^S L/L]^α

Y/L = A (K/L)^{1-α} [(1+r)^S]^α

Assume that the average years of schooling of the labor force is the only thing that changes between t = 0 and t = 1; that is, assume no change in the ratio of capital to labor between the two time periods. Then the ratio of income per worker in the later time period to that in the earlier time period will be:

(Y/L)_1 / (Y/L)_0 = ((1+r)^{S_1 - S_0})^α

where Y = output, A = a measure of the current state of technology, K = the physical capital stock, L = the labor force, r = the percent by which a year of schooling increases labor productivity, S = the average years of schooling completed by the labor force in each time period, α = labor's share in national income, and the subscripts 0 and 1 denote the initial and final time periods.

As noted above, this derivation is for a partial equilibrium change in years of schooling of the labor force, holding the physical capital stock constant. Allowing the physical capital stock to accumulate in response to schooling increases in a Solow-type model implies that the ratio of final to initial output per worker will be:

(Y/L)_1 / (Y/L)_0 = (1+r)^{S_1 - S_0}

For a derivation of this see Lord (2002, 99-100). Lord's derivation differs from the one here by specifying the technology parameter A as labor augmenting. Allowing for increases in A over time due to technical change would further increase the contribution of additional years of schooling to output per worker.
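The calculation is easy to reproduce. The short Python sketch below implements the two expressions above; it is offered only as an illustration, and the parameter values in the example (a 10 percent return per year of schooling, a rise from 0 to 16 years, a labor share of 0.7) are assumptions rather than figures from the text.

```python
# A minimal sketch of the growth-accounting ratio in footnote [5].
# Parameter values in the example are illustrative assumptions only.

def income_per_worker_ratio(r, s0, s1, alpha, capital_adjusts=False):
    """Ratio of final to initial output per worker when average years of
    schooling rise from s0 to s1.

    r               -- proportional productivity gain per year of schooling
    alpha           -- labor's share in national income
    capital_adjusts -- if True, let the capital stock respond as in the
                       Solow-type steady state (the footnote's second case)
    """
    base = (1 + r) ** (s1 - s0)
    return base if capital_adjusts else base ** alpha

# Example: 10 percent return per school year, schooling rising from 0 to 16
# years, labor share 0.7 (all assumed values).
print(income_per_worker_ratio(0.10, 0, 16, 0.7))                        # partial equilibrium
print(income_per_worker_ratio(0.10, 0, 16, 0.7, capital_adjusts=True))  # ~4.59, steady state
```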

[6] To take a specific example, suppose that in the steady-state case of Table 1B, a 5 percent earnings premium per year of schooling is assigned to the first 6 years of schooling (primary schooling), a 10 percent premium per year to the next 6 years (secondary schooling), and a 15 percent premium per year to the final 4 years (college). In that case, the impact on steady-state income per capita compared with no schooling at all would be (1.05)^6 × (1.10)^6 × (1.15)^4 = 4.15, compared with the 4.59 obtained in going from no schooling to universal college at a 10 percent rate of return for every year of school completed.
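The arithmetic in this footnote can be checked directly. A minimal Python verification, using the stage-specific premia assumed in the footnote:

```python
# Checking footnote [6]'s arithmetic: stage-specific earnings premia
# (5% primary, 10% secondary, 15% college) versus a flat 10% per year.
primary   = 1.05 ** 6   # six years of primary schooling
secondary = 1.10 ** 6   # six years of secondary schooling
college   = 1.15 ** 4   # four years of college

print(round(primary * secondary * college, 2))  # 4.15
print(round(1.10 ** 16, 2))                     # 4.59 (flat 10% for 16 years)
```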

[7] Denison’s standard growth accounting approach assumes that education is labor augmenting and, in particular, that there is an infinite elasticity of substitution between skilled and unskilled labor. This specification is conventional in growth accounting analysis. Another common way of entering education into an aggregate production function, however, is to specify human capital as a third factor of production alongside unskilled labor and physical capital. Insofar as this is done with a Cobb-Douglas specification, as is conventional, the implied elasticity of substitution between human capital and either unskilled labor or physical capital is unity. The complementarity between human capital and other inputs that this implies will tend to increase the contribution of human capital increases to economic growth by weakening the tendency for diminishing returns to set in. (For a fuller treatment of the considerations involved, see Griliches 1970, Conlisk 1970, and Broadberry 2003.) For an application of this approach in a historical growth accounting exercise, see Crafts (1995), who finds a fairly substantial contribution of human capital during the English industrial revolution. For a critique of Crafts’ estimates, see Mitch (1999).
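Written out explicitly, the two specifications contrasted here take the following forms. The first is the labor-augmenting specification of footnote [5]; the second is a standard textbook rendering of human capital as a third Cobb-Douglas factor, with H the human capital stock and β and γ assumed output elasticities (H, β, and γ are introduced here only for illustration and are not taken from the works cited):

```latex
% Labor-augmenting specification: schooling scales effective labor,
% implying infinite substitutability between skilled and unskilled labor.
Y = A\,K^{1-\alpha}\left[(1+r)^{S}L\right]^{\alpha}

% Human capital as a third factor: the Cobb-Douglas form implies unit
% elasticity of substitution between H and each of the other inputs.
Y = A\,K^{\beta}H^{\gamma}L^{1-\beta-\gamma}
```

In the second form, an increase in H raises the marginal products of both K and L, which is the complementarity effect noted above.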

[8] For an examination of long-run growth dynamics with schooling investments endogenously determined by transfer-constrained family decisions, see Lord (2002, 209-13) and Rangazas (2000). Lord and Rangazas find that allowing for the fact that families are credit constrained in making schooling investment decisions is consistent with the time path of interest rates in the U.S. between 1870 and 1970.

Citation: Mitch, David. “Education and Economic Growth in Historical Perspective”. EH.Net Encyclopedia, edited by Robert Whaples. July 26, 2005. URL http://eh.net/encyclopedia/education-and-economic-growth-in-historical-perspective/