
The Works Progress Administration

Jim Couch, University of North Alabama

Introduction: The Great Depression and the New Deal

The Great Depression stands as an event unique in American history due to both its length and severity. With the unprecedented economic collapse, the nation faced “an emergency more serious than war” (Higgs 1987, p. 159). The Depression was a time of tremendous suffering and at its worst, left a quarter of the workforce unemployed. During the twentieth century, the annual unemployment rate averaged double-digit levels in just eleven years. Ten of these occurred during the Great Depression.

A confused and hungry nation turned to the government for assistance. With the inauguration of Franklin Delano Roosevelt on March 4, 1933, the federal government’s response to the economic emergency was swift and massive. The explosion of legislation — which came to be collectively called the New Deal — was designed, at least in theory, to bring a halt to the human suffering and put the country on the road to recovery. The president promised relief, recovery and reform.

Although the Civil Works Administration (CWA), the Civilian Conservation Corps (CCC), and the National Recovery Administration (NRA) were all begun two years earlier, the Works Progress Administration (WPA) became the best known of the administration’s alphabet agencies. Indeed, for many the works program is synonymous with the entire New Deal. Roosevelt devoted more energy and more money to the WPA than to any other agency (Charles 1963, p. 220). The WPA would provide public employment for people who were out of work. The administration felt that the creation of make-work jobs for the jobless would restore the human spirit, but dignity came with a price tag — an appropriation of almost $5 billion was requested. From 1936 to 1939 expenditures totaled nearly $7 billion. Annual figures are given in Table 1.

Table 1
WPA Expenditures

Year Expenditure
1936 $1,295,459,010
1937 $1,879,493,595
1938 $1,463,694,664
1939 $2,125,009,386

Source: Office of Government Reports, Statistical Section, Federal Loans and Expenditures, Vol. II, Washington, D.C., 1940.

WPA Projects and Procedures

The legislation that created the WPA, the Emergency Relief Appropriation Act of 1935, sailed through the House, passing by a margin of 329 to 78, but bogged down in the Senate, where a vocal minority argued against the measure. Despite the opposition, the legislation passed in April of 1935.

Harry Hopkins headed the new organization. Hopkins became, “after Roosevelt, the most powerful man in the administration” (Reading 1972, pp. 16-17). All WPA administrators, whether assigned to Washington or to the agency’s state and local district offices, were employees of the federal government and all WPA workers’ wages were distributed directly from the U.S. Treasury (Kurzman 1974, p. 107). The WPA required the states to provide some of their own resources to finance projects but a specific match was never stipulated — a fact that would later become a source of contentious debate.

The agency prepared a “Guide to Eligibility of WPA Projects,” which was made available to the states. Nineteen types of potentially fundable activities were described, ranging from malaria control to recreational programs to street building (MacMahon, Millett and Ogden 1941, p. 308).

Hopkins and Roosevelt proposed that WPA compensation be based on a “security wage” which would be an hourly amount greater than the typical relief payment but less than that offered by private employers. The administration contended that it was misleading to evaluate the programs’ effects solely on the basis of wages paid — more important were earnings through continuous employment. Thus, wages were reported in monthly amounts.

Wages differed widely from region to region and from state to state. Senator Richard Russell of Georgia explained, “In the State of Tennessee the man who is working with a pick and shovel at 18 cents an hour is limited to $26 a month, and he must work 144 hours to earn $26. Whereas the man who is working in Pennsylvania has to work only 30 hours to earn $94, out of funds which are being paid out of the common Treasury of the United States” (U.S. House of Representatives 1938, p. 913). Recurring complaints of this nature led to adjustments in the wage rate that narrowed regional differentials so that pay more closely reflected the cost of living in each state.
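
To make the differential concrete, the implied hourly rates can be backed out of the figures Russell cites; the short sketch below simply restates the numbers from the quotation and introduces no additional data.

```python
# Implied hourly WPA earnings from Senator Russell's figures (a restatement of
# the quotation above, not independent data).
figures = {
    "Tennessee": {"monthly_earnings": 26, "hours_required": 144},
    "Pennsylvania": {"monthly_earnings": 94, "hours_required": 30},
}

for state, f in figures.items():
    hourly = f["monthly_earnings"] / f["hours_required"]
    print(f"{state}: ${hourly:.2f} per hour")

# Tennessee: $0.18 per hour (the 18 cents Russell mentions)
# Pennsylvania: $3.13 per hour, roughly seventeen times the Tennessee rate
```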

Robert Margo argues that federal relief programs like the WPA may have exacerbated the nation’s unemployment problem. He presents evidence indicating that the long-term unemployed on work relief were “not very responsive to improved economic conditions” while the long-term unemployed not on work relief “were responsive to improved economic conditions” (Margo 1991, p. 339). Many workers were afraid of the instability associated with a private-sector job and were reluctant to leave the WPA. As Margo explains, “By providing an alternative to the employment search (which many WPA workers perceived, correctly or not, to be fruitless), work relief may have lessened downward pressure on nominal wages” (p. 340). This lack of adjustment of the wage rate may have slowed the economy’s return to full employment.

The number of persons employed by the WPA is given in Figure 1. Gavin Wright points out that “WPA employment reached peaks in the fall of election years” (Wright 1974, p. 35).

Figure 1 – Number of Persons Employed by WPA
1936-1941
(in thousands)

Source: Wright (1974), p. 35.

The work done by the agency stands as its most visible legacy: almost every community in America has a park, bridge, or school constructed by the WPA. As of 1940, the WPA had erected 4,383 new school buildings and made repairs and additions to over 30,000 others. More than 130 hospitals were built and improvements made to another 1,670 (MacMahon, Millett and Ogden 1941, pp. 4-5). Nearly 9,000 miles of new storm drains and sanitary sewer lines were laid. The agency engaged in conservation work, planting 24 million trees (Office of Government Reports 1939, p. 80). The WPA built or refurbished over 2,500 sports stadiums around the country with a combined seating capacity of 6,000,000 (MacMahon, Millett and Ogden 1941, pp. 6-7).

Addressing the nation’s transportation needs accounted for much of the WPA’s work. By the summer of 1938, 280,000 miles of roads and streets had been paved or repaired and 29,000 bridges had been constructed. Over 150 new airfields and 280 miles of runway were built (Office of Government Reports 1939, p. 79).

Because Harry Hopkins believed that the work provided by the WPA should match the skills of the unemployed, artists were employed to paint murals in public buildings, sculptors created park and battlefield monuments, and actors and musicians were paid to perform. These white-collar programs did not escape criticism and the term “boondoggling” was added to the English language to describe government projects of dubious merit.

Work relief for the needy was the putative purpose of the WPA. Testifying before the Senate Special Committee to Investigate Unemployment and Relief in 1938, Corrington Gill — Assistant to WPA administrator Harry Hopkins — asserted, “Our regional representatives . . . are intimately in touch with the States and the conditions in the States” (U.S. Senate 1938, p. 51).

The Roosevelt administration, of course, asserted that dollars were allocated to where need was the greatest. Some observers at the time, however, were suspicious of what truly motivated the New Dealers.

The Distribution of WPA Funds

In 1939, Georgia Senator Richard Russell in a speech before the Senate compared the appropriation his state received with those received by Wisconsin, a state with similar land area and population but with far more resources. He was interrupted by Senator Ellison Smith of South Carolina:

Mr. Smith: I have been interested in the analysis the Senator has made of the wealth and population which showed that Wisconsin and Georgia were so nearly equal in those features. I wondered if the Senator had any way of ascertaining the political aspect in those two States.
Mr. Russell: Mr. President, I had not intended to touch upon any political aspects of this question.
Mr. Smith: Why not? The Senator knows that is all there is to it (U.S. House of Representatives 1939, p. 926).

Scholars have begun to examine the New Deal in this light, producing evidence supporting Senator Smith’s assertion that political considerations helped to shape the WPA.

An empirical analysis of New Deal spending priorities was made possible by Leonard Arrington’s discovery in 1969 of documents prepared by an obscure federal government agency. “Prepared in late 1939 by the Office of Government Reports for the use of Franklin Roosevelt during the presidential campaign of 1940, the 50-page reports — one for each state — give precise information on the activities and achievements of the various New Deal economic agencies” (Arrington 1969, p. 311).

Using this data source to investigate the relationship between WPA appropriations and state economic conditions makes the administration’s claim that dollars were allocated to where need was greatest difficult to support. Instead, the evidence supports a political motivation behind the pattern of expenditures. While the legislation that funded the WPA sailed through the House, a vocal minority in the Senate argued against the measure — a fact the Roosevelt administration did not forget. “Hopkins devoted considerable attention to his relations with Congress, particularly from 1935 on. While he continually ignored several Congressmen because of their obnoxious ways of opposing the New Deal . . . he gave special attention to Senators . . . who supported the work relief program” (Charles 1963, p. 162).

Empirical results confirm Charles’ assertion; WPA dollars flowed to states whose Senators voted in favor of the 1935 legislation. Likewise, if the state’s Senators opposed the measure, significantly fewer work relief dollars were distributed to the state.

The matching funds required to ‘buy’ WPA appropriations were not uniform from state to state. The Roosevelt administration argued that discretion over the size of the match would enable it to direct projects to the states with fewer resources. Senator Richard Russell of Georgia complained in a Senate speech, “the poorer states . . . are required to contribute more from their poverty toward sponsored projects than the wealthier states are” (Congressional Record 1939, p. 921). Senator Russell entered sponsor contributions from each state into the Congressional Record, and the data support his assertion. Citizens in relatively poor Tennessee were required to contribute 33.2 percent toward WPA projects while citizens in relatively rich Pennsylvania were required to contribute only 10.1 percent toward theirs. Empirical evidence supports the notion that by lowering the size of the match, Roosevelt was able to put more projects into states that were important to him politically (Couch and Smith 2000).
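
To see what the sponsor-contribution percentages imply in dollar terms, the sketch below applies the 33.2 and 10.1 percent rates from the text to a hypothetical project; the $100,000 project cost is invented for illustration only.

```python
# Hypothetical WPA project under the sponsor-contribution rates Senator Russell
# cited (33.2 percent for Tennessee, 10.1 percent for Pennsylvania).
project_cost = 100_000  # illustrative figure, not from the source

for state, sponsor_share in [("Tennessee", 0.332), ("Pennsylvania", 0.101)]:
    sponsor_dollars = project_cost * sponsor_share
    federal_dollars = project_cost - sponsor_dollars
    print(f"{state}: sponsor pays ${sponsor_dollars:,.0f}, "
          f"federal government pays ${federal_dollars:,.0f}")

# Tennessee: sponsor pays $33,200, federal government pays $66,800
# Pennsylvania: sponsor pays $10,100, federal government pays $89,900
```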

The WPA represented the largest program of its kind in American history. It put much-needed dollars into the hands of jobless millions and in the process contributed to the nation’s infrastructure. Despite this record of achievement, serious questions remain concerning whether the program’s money, projects, and jobs were distributed to those who were truly in need or instead to further the political aspirations of the Roosevelt administration.

References

Arrington, Leonard J. “The New Deal in the West: A Preliminary Statistical Inquiry.” Pacific Historical Review 38 (1969): 311-16.

Charles, Searle F. Minister of Relief: Harry Hopkins and the Depression. Syracuse: Syracuse University Press, 1963.

Congressional Record. Washington, DC: Government Printing Office, 1934 and 1939.

Couch, Jim F. and Lewis Smith. “New Deal Programs and State Matching Funds: Reconstruction or Re-election?” Unpublished manuscript, University of North Alabama, 2000.

Higgs, Robert. Crisis and Leviathan: Critical Episodes in the Growth of American Government. New York: Oxford University Press, 1987.

Kurzman, Paul A. Harry Hopkins and the New Deal. Fairlawn, NJ: R.E. Burdick, 1974.

MacMahon, Arthur, John Millett and Gladys Ogden. The Administration of Federal Work Relief. Chicago: Public Administration Service, 1941.

Margo, Robert A. “The Microeconomics of Depression Unemployment.” Journal of Economic History 51, no. 2 (1991): 333-41.

Office of Government Reports. Activities of Selected Federal Agencies, Report No. 7. Washington, DC: Office of Government Reports, 1939.

Reading, Don C. “A Statistical Analysis of New Deal Economic Programs in the Forty-eight States, 1933-1939.” Ph.D. dissertation, Utah State University, 1972.

US House of Representatives. Congressional Directory, Washington, DC: US Government Printing Office, 1938 and 1939.

US Senate, Special Committee to Investigate Unemployment and Relief (‘Byrnes Committee’). Unemployment and Relief: Hearings before a Special Committee to Investigate Unemployment and Relief, Washington, DC: US Government Printing Office, 1938.

Wright, Gavin. “The Political Economy of New Deal Spending: An Econometric Analysis.” Review of Economics and Statistics 56, no. 1 (1974): 30-38.

Suggestions for further reading:

Heckelman, Jac C., John C. Moorhouse, and Robert M. Whaples, editors. Public Choice Interpretations of American Economic History. Boston: Kluwer Academic Publishers, 2000.

Couch, Jim F. and William F. Shughart. The Political Economy of the New Deal. Edward Elgar, 1998.

Citation: Couch, Jim. “Works Progress Administration”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/the-works-progress-administration/

Workers’ Compensation

Price V. Fishback, University of Arizona

Workers’ compensation was one of the first social insurance programs adopted broadly throughout the United States. Under workers’ compensation, employers are required to ensure that workers injured in accidents arising “out of or in the course of employment” receive medical treatment and payments of up to roughly two-thirds of their wages to replace lost income. Workers’ compensation laws were originally adopted by most states between 1911 and 1920, and the programs continue to be administered by state governments today.

The Origins of Workers’ Compensation

The System of Negligence Liability

Prior to the introduction of workers’ compensation, workers injured on the job were compensated under a system of negligence liability. If the worker could show that the accident was caused by the employer’s negligence, the worker was entitled to full compensation for the damage he experienced. The employer was considered negligent if he failed to exercise due care. Even if the worker could show the employer had been negligent, the worker still might not receive any compensation if the employer could rely on one of three defenses: assumption of risk, the fellow-servant defense, and contributory negligence. The employer was no longer liable, respectively, if the worker knew of the danger and assumed the risk of the danger when accepting the job, if a fellow worker caused the accident, or if the worker’s own negligence contributed to the accident.

Compensation to Accident Victims before Workers’ Compensation

These common law rules were the ultimate guide for judges who adjudicated disputes between employers and workers. As in many civil situations, the vast majority of accident cases were settled long before they ever went to trial. The employers or their insurers typically offered settlements to injured workers. Various studies done by state employer liability commissions suggest that a substantial number of workers received no compensation for their accidents, which might have been expected if the employer’s negligence was not a cause of the accident. In samples of fatal accidents, about half the families of fatal accident victims received some payments for the loss of their loved ones. For those who received payments, the average payment was around one year’s income. There were a few cases where the accident victims and their families received substantial payments, but there were far more cases where no payment was made.

To some extent workers received compensation for accepting accident risk in the form of higher wages for more dangerous jobs. Workers had relatively limited opportunities to use these higher wages to buy accident or life insurance or to pay premiums into benefit societies. As a result, many workers and families tried to rely on savings to sustain them in the event of an accident. The problem they faced was that it would take quite a few years to save enough to cover the losses from an accident, and if they were unlucky enough to have an accident early on, they would quickly exhaust these savings. The system of negligence liability, although without the three defenses, continues to determine the nature of accident compensation in the railroad industry.

Adoption of Workers’ Compensation Laws in the 1910s

In the late nineteenth century a number of European countries began to introduce workers’ compensation in a variety of forms. Among industrial countries the U.S. was relatively slow to adopt the changes. The federal government generally considered social insurance and welfare to be the purview of the states, so workers’ compensation was adopted at the state and not the federal level. The federal government did lead the way in covering its own workforce under workers’ compensation with legislation passed in 1908. As shown in Table 1, the vast majority of states adopted workers’ compensation laws between 1911 and 1920. The last state to adopt was Mississippi in 1948.

Table 1
Characteristics of Workers’ Compensation Laws in the United States, 1910-1930

State   Year State Legislature First Enacted a General Law (a)   Method of Insurance (b)
New York 1910 (1913) (a) Competitive State (c)
California 1911 Competitive State (c)
Illinois 1911 Private
Kansas 1911 Private
Massachusetts 1911 Private
New Hampshire 1911 Private
New Jersey 1911 Private
Ohio 1911 State
Washington 1911 State
Wisconsin 1911 Private
Maryland (f) 1912 Competitive State
Michigan 1912 Competitive State
Rhode Island 1912 Private
Arizona 1913 Competitive State
Connecticut 1913 Private
Iowa 1913 Private
Minnesota 1913 Private
Nebraska 1913 Private
Nevada 1913 State
New York (f) 1913 Competitive State
Oregon 1913 State
Texas 1913 Private
West Virginia 1913 State
Louisiana 1914 Private
Kentucky 1914 (1916) (a) Private
Colorado 1915 Competitive State
Indiana 1915 Private
Maine 1915 Private
Montana (f) 1915 Competitive State
Oklahoma 1915 Private
Pennsylvania 1915 Competitive State
Vermont 1915 Private
Wyoming 1915 State
Delaware 1917 Private
Idaho 1917 Competitive State
New Mexico 1917 Private
South Dakota 1917 Private
Utah 1917 Competitive State
Virginia 1918 Private
Alabama 1919 Private
North Dakota 1919 State
Tennessee 1919 Private
Missouri 1919 (1926) (a) Private
Georgia 1920 Private
North Carolina 1929 Private
Florida 1935 Private
South Carolina 1935 Private
Arkansas 1939 Private
Mississippi 1948 Private

Source: Fishback and Kantor, 2000, pp. 103-4.

(a) Some general laws were enacted by legislatures but were declared unconstitutional. The years in which the law was permanently established are in parentheses. New York passed a compulsory law in 1910 and an elective law in 1910, but the compulsory law was declared unconstitutional, and the elective law saw little use. New York passed a compulsory law in 1913 after passing a constitutional amendment. The Kentucky law of 1914 was declared unconstitutional and was replaced by a law in 1916. The Missouri General Assembly passed a workers’ compensation law in 1919, but it failed to receive enough votes in a referendum in 1920. Another law passed in 1921 was defeated in a referendum in 1922, and an initiative on the ballot was again defeated in 1924. Missouri voters finally approved a workers’ compensation law in a 1926 referendum on a 1925 legislative act. Maryland (1902) and Montana (1909) passed earlier laws specific to miners that were declared unconstitutional.

(b) Competitive state insurance allowed employers to purchase their workers’ compensation insurance from either private insurance companies or the state. A monopoly state fund required employers to purchase their policies through the state’s fund. Most states also allowed firms to self-insure if they could meet certain financial solvency tests.

(c) California and New York established their competitive state funds in 1913.

(d) The initial laws in Ohio, Illinois, and California were elective. Ohio and California established compulsory laws in 1913; Illinois did so later.

(e) Illinois’ initial law was administered by the courts; the state switched to a commission in 1913.

(f) Employees had the option to collect compensation or sue for damages after an injury.

(g) Compulsory for the motor bus industry only.

(h) Compulsory for coal mining only.

Provisions of Workers’ Compensation Laws

The adoption of workers’ compensation led to substantial changes in the nature of workplace accident compensation. Compensation was no longer based on the worker showing that the employer was at fault, nor could compensation be denied because the worker’s own negligence contributed to the injury. An injured worker typically had to sustain an injury that lasted several days before he became eligible for wage replacement. Once eligible, he could expect to receive weekly payments of up to two-thirds of his wage while injured. These payments were often capped at a fixed amount per week; as a result, high-wage workers sometimes received payments that replaced a smaller percentage of their lost earnings. The families of workers killed in fatal accidents typically received burial expenses and a weekly payment of up to two-thirds of the wage, often subject to caps on the weekly payments and limits on the total amounts paid out.
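
A minimal sketch of the benefit rule just described (a two-thirds replacement rate subject to a weekly cap) appears below; the wages and the $15 cap are hypothetical, since actual replacement rates and caps varied by state and over time.

```python
def weekly_benefit(weekly_wage, replacement_rate=2/3, weekly_cap=None):
    """Wage-replacement benefit: a fraction of the wage, limited by a cap if one applies."""
    benefit = replacement_rate * weekly_wage
    if weekly_cap is not None:
        benefit = min(benefit, weekly_cap)
    return benefit

# Hypothetical workers under an illustrative $15-per-week cap.
for wage in (15, 24, 36):
    b = weekly_benefit(wage, weekly_cap=15)
    print(f"wage ${wage}/week -> benefit ${b:.2f} ({b / wage:.0%} of the wage)")

# The cap binds for the $36-per-week worker: the benefit replaces only about
# 42 percent of the wage, illustrating why high-wage workers sometimes
# received a smaller percentage of lost earnings.
```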

Gains to Workers from Workers’ Compensation Laws

Most workers appeared to benefit from the introduction of workers’ compensation. Comparisons of typical payments under negligence liability and under workers’ compensation suggest that a typical worker injured on the job was likely to receive more compensation under workers’ compensation. Partly this rise was due to the fact that all covered workers injured on the job were eligible for compensation, regardless of fault; partly it was due to higher average workers’ compensation payments when compared with the typical settlement under negligence liability. Studies of wages before and after the introduction of workers’ compensation show, however, that non-union workers’ wages were reduced by the introduction of workers’ compensation. In essence, non-union workers “bought” these improvements in their benefit levels. Even though workers may have paid for their benefits, they still seem to have been better off as a result of the introduction of workers’ compensation. Many workers had faced problems in purchasing accident insurance at the turn of the century. Workers’ compensation left them better insured and allowed many of them to spend some of the savings they had set aside in case of an accident.

Employers and Insurers Also Favor Workers’ Compensation

Employers were also active in pressing for workers’ compensation legislation, for a variety of reasons. Some were troubled by the uncertainties associated with the courts and juries applying negligence liability to accidents; some large awards by juries fueled these fears. Others were worried about state legislatures adopting legislation that would limit their defenses in liability suits. The negligence liability system had become an increasing source of friction between workers and employers. In the final analysis, employers were also able to pass many of the costs of the new workers’ compensation system back to workers in the form of lower wages. Finally, insurance companies also favored the introduction of workers’ compensation as long as the states did not try to establish their own insurance funds. Under the negligence liability system, insurers had not been selling much accident insurance to workers because of information problems in identifying who would be good and bad risks. The switch to workers’ compensation put more of the impetus for insurance on employers, and insurers found that they could more effectively solve these information problems when selling insurance to employers. As a result, insurance companies saw a rise in their business of insuring workplace accidents.

Ultimately, workers’ compensation was popular legislation. It was supported by the major interest groups (employers, workers, and insurers), each of whom anticipated gains from the legislation. Progressives and social reformers played some role in its adoption, but their efforts were not as important to the passage as is often surmised, because so many interest groups supported the legislation.

Interest Groups Battle over Specific Provisions

On the other hand, the various interest groups fought, sometimes bitterly, over the specific details of the legislation, including the generosity of benefit levels and whether or not the state would sell workers’ compensation insurance to employers. These battles over the details at times slowed the passage of the legislation. The benefit levels tended to be higher in states where there were more workers in unionized industry but lower in states where dangerous industries predominated. Reformers played a larger role on the details as they promoted higher benefits. In several states the insurance companies lost the battle over state insurance, most often in settings where the insurance industry had a limited presence and reformers had a strong presence. As seen in Table 1, several states established monopoly state insurance funds that prevented private companies from underwriting workers’ compensation insurance. Some other states established state insurance funds that would compete with private insurers.

Trends in Workers’ Compensation over the Past Century

Changes in Occupational Coverage

Since its introduction, workers’ compensation has gone through several changes. More classes of workers have been covered over time. When workers’ compensation was first introduced, several types of employment were exempted, including agricultural workers, domestic servants, many railroad workers in interstate commerce, and, in some states, workers in nonhazardous employments. Further, workers hired by employers with fewer than three to five workers (varying by state) have typically been exempt from the law. As seen in Table 2, by 1940 employees earning wages and salaries accounting for 75 percent of wage and salary disbursements were covered by workers’ compensation laws. When Mississippi adopted its law in 1948, the percentage rose to about 78 percent. Since that time, a decline in domestic service, railroad, and agricultural employment, as well as expansions of workers’ compensation coverage, has led to payroll coverage of about 92 percent.

Growth in Expenditures on Workers’ Compensation

Since 1939, real expenditures on workers’ compensation programs (in 1996 dollars) have grown at an average annual rate of 4.8 percent. The growth has been caused in part by the expansion in the types of workers covered, as described above. Another source of growth has been the expansion in the types of injuries and occupational diseases covered. Although workers’ compensation was originally established to insure workers against workplace accidents, the programs in most states were expanded to cover occupation-related diseases. Starting with California in 1915, states began expanding the coverage of workers’ compensation laws to include payments to workers disabled by occupational diseases. By 1939, 23 states covered at least some occupational diseases.1 As of July 1953, every state but Mississippi and Wyoming had at least some coverage for occupational diseases, and by the 1980s all states had some form of coverage. More recently, some states have begun to expand coverage to include compensation to persons suffering from work-related disabilities associated with psychological stress.
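
The 4.8 percent figure quoted above is a compound average annual growth rate. As a rough cross-check, a comparable rate can be computed from the benefit series in Table 2; the sketch below uses that table’s 1940 and 1995 values, so the result differs somewhat from the text’s 4.8 percent figure because the endpoints and underlying series are not identical.

```python
# Compound average annual growth of real WC benefits, using the 1940 and 1995
# entries from Table 2 (millions of 1996 dollars).
start_year, start_value = 1940, 2686
end_year, end_value = 1995, 44173

years = end_year - start_year
growth_rate = (end_value / start_value) ** (1 / years) - 1
print(f"Average annual growth, {start_year}-{end_year}: {growth_rate:.1%}")
# Roughly 5.2 percent per year, in the same range as the 4.8 percent figure
# cited in the text for the period beginning in 1939.
```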

Increased Benefit Levels

Another contributor to the growth in workers’ compensation spending has been an increase in benefit levels. The rise in benefits paid out has occurred even though workplace accident rates have declined since the beginning of the century. Workers’ compensation costs as a percentage of covered payroll (see Table 2) generally stayed around 1 percent until the late 1960s and early 1970s. Since then, these costs have risen along a strong upward trend to nearly 2.5 percent in 1990. The rise in compensation costs in Table 2 was driven in part by increased payments for benefits and medical coverage, as well as by the introduction of the Black Lung program for coal miners in 1969. The rise in benefits can also be explained in part by a series of amendments to state laws in the 1970s that sharply increased the weekly maximums that could be paid.

Table 2
Long-Term Trends in Workers’ Compensation Coverage and Costs

Columns: (1) Year; (2) Share of wage and salary payments to workers covered by WC, in percent; (3) WC benefits paid, in millions of 1996 dollars; (4) Cost of WC programs as a percent of covered payroll (a); (5) WC benefits as a percent of covered payroll (a); (6) Medical and hospital payments as a percent of wages and salaries covered by WC; (7) Disability payments as a percent of wages and salaries covered by WC; (8) Survivor payments as a percent of wages and salaries covered by WC.
1940 73.6 2686 1.2 0.7 0.27 0.36 0.09
1941 na 2839 na na na na na
1942 na 2859 na na na na na
1943 na 2862 na na na na na
1944 na 3047 na na na na na
1945 63.0 3148 na na 0.17 0.33 0.06
1946 71.4 2997 0.9 0.5 0.18 0.31 0.06
1947 74.3 3000 na na 0.17 0.31 0.05
1948 77.5 3090 1.0 0.5 0.17 0.29 0.05
1949 76.4 3296 1.0 0.6 0.18 0.32 0.05
1950 77.2 3532 0.9 0.5 0.18 0.32 0.05
1951 76.8 3815 0.9 0.5 0.18 0.32 0.05
1952 76.3 4132 0.9 0.6 0.18 0.33 0.05
1953 77.3 4387 1.0 0.6 0.18 0.32 0.05
1954 77.7 4503 1.0 0.6 0.20 0.33 0.05
1955 79.4 4641 0.9 0.6 0.19 0.31 0.04
1956 79.5 4909 0.9 0.6 0.19 0.32 0.04
1957 79.4 5017 0.9 0.6 0.19 0.32 0.04
1958 79.8 5121 0.9 0.6 0.20 0.34 0.05
1959 80.7 5485 0.9 0.6 0.20 0.33 0.05
1960 80.9 5789 0.9 0.6 0.20 0.34 0.05
1961 81.0 6074 1.0 0.6 0.20 0.35 0.05
1962 80.9 6494 1.0 0.6 0.21 0.36 0.05
1963 81.0 6822 1.0 0.6 0.21 0.37 0.05
1964 80.9 7251 1.0 0.6 0.21 0.37 0.05
1965 80.7 7565 1.0 0.6 0.21 0.37 0.05
1966 80.6 8107 1.0 0.6 0.21 0.36 0.05
1967 80.1 8608 1.1 0.6 0.22 0.38 0.05
1968 80.0 8956 1.1 0.6 0.22 0.37 0.04
1969 80.3 9471 1.1 0.6 0.22 0.37 0.04
1970 80.4 10348 1.1 0.7 0.24 0.40 0.05
1971 80.7 11557 1.1 0.7 0.24 0.44 0.08
1972 80.6 12620 1.1 0.7 0.24 0.46 0.09
1973 82.3 15000 1.2 0.7 0.26 0.51 0.12
1974 83.2 15641 1.2 0.8 0.28 0.53 0.11
1975 84.1 16344 1.3 0.8 0.30 0.57 0.11
1976 84.3 17724 1.5 0.9 0.32 0.59 0.11
1977 84.1 18930 1.7 0.9 0.32 0.61 0.11
1978 83.4 20094 1.9 0.9 0.32 0.63 0.10
1979 84.1 22822 2.0 1.0 0.34 0.69 0.12
1980 82.8 23733 2.0 1.1 0.35 0.74 0.12
1981 82.6 24010 1.9 1.1 0.36 0.74 0.11
1982 82.0 24668 1.8 1.2 0.39 0.76 0.11
1983 82.4 25383 1.7 1.2 0.41 0.75 0.11
1984 82.4 27416 1.7 1.2 0.42 0.77 0.11
1985 81.9 30003 1.8 1.3 0.46 0.81 0.10
1986 82.3 32531 2.0 1.4 0.50 0.83 0.10
1987 82.0 35094 2.1 1.4 0.54 0.86 0.09
1988 81.8 38159 2.2 1.5 0.58 0.88 0.08
1989 81.8 41067 2.3 1.6 0.63 0.91 0.08
1990 89.0 44037 2.4 1.7 0.62 0.87 0.08
1991 90.3 46981 2.4 1.8 0.66 0.92 0.08
1992 90.4 49802 2.4 1.9 0.68 0.90 0.07
1993 90.7 48141 2.4 1.8 0.63 0.84 0.07
1994 91.0 46376 2.3 1.7 0.58 0.86 0.07
1995 91.0 44173 2.1 1.6 0.54 0.79 0.06

Sources: 1939-1967, Alfred M. Skolnik and Daniel N. Price, “Another Look at Workmen’s Compensation,” in U.S. Social Security Administration, Social Security Bulletin 33 (October 1970), pp. 3-25; 1968-1986, U.S. Social Security Administration, Social Security Bulletin, Annual Statistical Supplement, 1994, Table 9.B1, p. 333; 1992-1993, Jack Schmulowitz, “Workers’ Compensation: Coverage, Benefits, and Costs, 1992-93,” Social Security Bulletin 58 (Summer 1995), pp. 51-57. For 1987 through 1998, National Academy of Social Insurance, “Workers’ Compensation: Benefits, Coverage and Costs, 1997-1998 New Estimates.” The publication is available at the National Academy of Social Science website: http://www.nasi.org/.

(a) The workers’ compensation series on costs as a percentage of the covered payroll (pvf.b.18.10) contains some employer contributions to the Black Lung program, while the benefits series (pvf.b.18.11) does not include benefits associated with the Black Lung program.

Expenditures on Medical Care, Disability and Survivors

Over time, and particularly during the 1980s and early 1990s, rising medical expenditures have been a prime contributor to rising costs. Expenditures on medical and hospital benefits rose from less than 0.2 percent of covered payroll to over 0.6 percent in the early 1990s. At that point employers and insurers began managing their health care costs more closely, which slowed the growth of workers’ compensation medical costs during the 1990s. Similarly, the disability benefits paid to replace lost earnings have risen sharply over time as reforms of workers’ compensation expanded the range of workplace injuries and diseases covered. Payments of replacement wages to disabled workers increased from 0.3 percent of wages and salaries covered by workers’ compensation to as high as 0.9 percent around 1990 (see Table 2). In contrast, the percentage of payroll spent on benefits to the survivors of fatal accident victims stayed relatively constant at below 0.1 percent from the 1940s through 1970 and again from the 1980s to the present (see Table 2). The upward surge in the percentage of payroll paid out to survivors between 1970 and 1973 was driven by the introduction of the federal Black Lung program. The impact of Black Lung was so dramatic because several years’ worth of survivors were all added to the system in the span of three years. Once the Black Lung program had stabilized, survivors’ benefits reached a steady state of about 0.1 percent of payroll and have declined in the 1990s.

Declining Injury and Illness Rates

The general rise in workers’ compensation benefits as a share of payroll should not necessarily be considered a sign that workplaces have become more dangerous. Workers’ compensation has increasingly provided benefits for a wide range of injuries and diseases for which compensation would not have been awarded earlier in the century. Data on occupational injury and illness rates show that the number of cases of injury and illness per 100 workers in the private sector has fallen by 32 percent since 1972, while the number of lost-workday cases has stayed roughly constant.

Trends in the Shares of Payments Made by Types of Insurers

Although the states establish the basic rules for compensation, employers can obtain insurance to cover their compensation responsibilities from several sources: private insurance carriers in the majority of states, government-sponsored insurance funds in roughly half of the states, or self-insurance, provided the employer can demonstrate sufficient resources to handle its benefit obligations. Between the end of World War II and 1970, the distribution of benefits paid by these various insurers stayed relatively constant (see Table 3). Private insurers paid roughly 62 percent of benefits, state and federal funds roughly 25 percent, and self-insurers about 12 to 15 percent. The introduction of the Black Lung benefit program in 1970 led to a sharp rise in the share paid by state and federal insurance funds, as a large number of workers not previously covered received federal coverage for black lung disease. Since 1973 the trend has been to return more of the insurance activity to private insurers, and many employers have increasingly self-insured.

Table 3
Shares of Workers’ Compensation Payments Made by Types of Insurer

Year Private Insurer Government Fund Self-Insurance
percent percent percent
1940 52.7 28.5 18.8
1941 55.0 26.5 18.6
1942 57.9 24.7 17.4
1943 60.3 22.9 16.7
1944 61.4 22.3 16.3
1945 61.9 22.2 15.9
1946 62.2 22.1 15.7
1947 62.1 22.6 15.2
1948 62.7 22.7 14.6
1949 62.4 23.3 14.3
1950 62.0 24.2 13.8
1951 62.7 24.0 13.3
1952 62.5 24.6 12.9
1953 62.3 25.0 12.7
1954 61.7 25.7 12.6
1955 61.5 26.0 12.6
1956 61.7 25.8 12.5
1957 62.2 25.5 12.2
1958 62.5 25.7 11.9
1959 62.2 26.1 11.7
1960 62.5 25.1 12.4
1961 61.9 25.3 12.8
1962 62.1 24.9 13.0
1963 62.4 24.5 13.1
1964 62.6 24.1 13.2
1965 62.0 24.5 13.5
1966 62.0 24.3 13.8
1967 62.2 23.9 13.8
1968 62.4 23.4 14.2
1969 62.3 23.0 14.7
1970 60.8 24.9 14.3
1971 56.3 30.8 12.9
1972 53.6 33.9 12.4
1973 49.3 39.1 11.6
1974 51.4 36.1 12.5
1975 51.9 35.2 12.9
1976 52.4 33.9 13.7
1977 53.6 31.9 14.5
1978 53.7 31.1 15.3
1979 51.2 33.4 15.4
1980 51.6 31.8 16.6
1981 52.3 30.5 17.2
1982 52.7 29.1 18.2
1983 52.7 28.8 18.5
1984 53.9 27.5 18.6
1985 55.5 25.9 18.6
1986 56.2 25.4 18.4
1987 56.6 24.8 18.6
1988 57.0 24.3 18.7
1989 58.0 23.2 18.7
1990 58.1 22.9 19.0
1991 58.1 23.0 18.8
1992 55.4 23.4 21.3
1993 53.2 23.3 23.4
1994 50.0 24.1 25.9
1995 48.8 25.4 25.9
1996 48.8 25.4 25.8
1997 50.8 24.9 24.3
1998 53.3 24.8 21.9

Sources: See Table 2.

The Moral Hazard Problem and Accident Compensation

The provision of accident compensation is potentially subject to moral hazard, a situation in which people reduce their prevention activities because compensation reduces their net losses from injury. Over the course of the century, two trends have contributed to the potential for greater moral hazard problems. First, the character of the most common injuries has changed. In the early 1900s the common workplace injuries were readily identifiable, as accidents leading to broken bones, lost body parts, and fatalities were far more common. The most common workers’ compensation injuries today are soft tissue injuries to the back and carpal tunnel syndrome in the wrists. These injuries are not so easy to diagnose effectively, which can lead to excess reporting of this type of injury. The second trend has been a rise in benefit levels as a share of after-tax income. Workers’ compensation payments are not taxed. When workers’ compensation programs were first introduced, the federal income tax was just being put into place; through 1940, less than 7 percent of households were subject to the income tax. Since World War II, however, income tax rates have been substantially higher. As a result, workers’ compensation benefits have been replacing a higher share of the after-tax wage. The absence of much taxation in the early 1900s meant that workers’ compensation benefits often replaced less than two-thirds of the after-tax wage, and weekly maximums on payments sometimes led to replacement of a substantially lower percentage. In the modern era, with greater taxation of wages, workers’ compensation benefits replace up to 90 percent of the after-tax wage in some states. Both the trend toward more soft-tissue injuries and the higher after-tax replacement rates have led to improvements in the compensation of injured workers, although there is evidence that workers pay for these improvements through lower wages (Moore and Viscusi 1990). On the other hand, these trends increase the risk of moral hazard, which in turn leads to higher insurance costs for employers and insurers. Employers and insurers have sought to limit moral hazard problems through closer monitoring of accident claims and the recovery process. The tension between improved accident compensation and moral hazard has been a constant source of conflict in debates over the proper level of compensation for workers.
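
A small numerical sketch can make the after-tax replacement argument concrete; the two-thirds replacement share follows the benefit rule described earlier, while the wage and tax rates below are purely illustrative assumptions.

```python
def after_tax_replacement(weekly_wage, benefit_share=2/3, income_tax_rate=0.0):
    """Untaxed WC benefit as a share of the worker's after-tax wage."""
    benefit = benefit_share * weekly_wage            # WC benefits are not taxed
    after_tax_wage = weekly_wage * (1 - income_tax_rate)
    return benefit / after_tax_wage

# Early 1900s: essentially no income tax, so the benefit replaces about 67
# percent of take-home pay.
print(f"no income tax: {after_tax_replacement(100, income_tax_rate=0.00):.0%}")
# Modern era: with an illustrative 25 percent tax rate, the same two-thirds
# rule replaces roughly 89 percent of the after-tax wage.
print(f"25% tax rate:  {after_tax_replacement(100, income_tax_rate=0.25):.0%}")
```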

Conclusion

Workers’ compensation is now one of the cornerstones of our network of social insurance programs. Although many of the modern social insurance programs were proposed at the state level during the 1910s, workers’ compensation was the only program to be widely adopted at the time. Unemployment insurance and old-age pension programs later joined the network through federal legislation in the 1930s. All of these programs have faced new challenges, as they have become a central feature of our economic terrain.

References

Aldrich, Mark. Safety First: Technology, Labor, and Business in the Building of American Work Safety, 1870-1939. Baltimore: Johns Hopkins University Press, 1997.

Fishback, Price V. and Shawn Everett Kantor. A Prelude to the Welfare State: The Origins of Workers’ Compensation. Chicago: University of Chicago Press, 2000.

Moore, Michael J., and W. Kip Viscusi. Compensation Mechanisms for Job Risks: Wages, Workers’ Compensation, and Product Liability. Princeton, NJ: Princeton University Press, 1990.

Data and descriptions of trends for workers’ compensation are available from the National Academy of Social Insurance website: http://www.nasi.org/. The NASI continues to publish annual updates. In addition, detailed descriptions of the benefit rules in each state are published annually by the U.S. Chamber of Commerce in Analysis of Workers’ Compensation Laws.

1 The states include California 1915, North Dakota 1925, Minnesota 1927, Connecticut 1930, Kentucky 1930, New York 1930, Illinois 1931, Missouri 1931, New Jersey 1931, Ohio 1931, Massachusetts 1932, Nebraska 1935, North Carolina 1935, Wisconsin 1935, West Virginia 1935, Rhode Island 1936, Delaware 1937, Indiana 1937, Michigan 1937, Pennsylvania 1937, Washington 1937, Idaho 1939 and Maryland 1939. Balkan 1998, p. 64.

Citation: Fishback, Price. “Workers’ Compensation”. EH.Net Encyclopedia, edited by Robert Whaples. March 26, 2008. URL http://eh.net/encyclopedia/workers-compensation/

Urban Decline (and Success) in the United States

Fred Smith and Sarah Allen, Davidson College

Introduction

Any discussion of urban decline must begin with a difficult task – defining what is meant by urban decline. Urban decline (or “urban decay”) is a term that evokes images of abandoned homes, vacant storefronts, and crumbling infrastructure, and if asked to name a city that has suffered urban decline, people often think of a city from the upper Midwest like Cleveland, Detroit, or Buffalo. Yet, while nearly every American has seen or experienced urban decline, the term is one that is descriptive and not easily quantifiable. Further complicating the story is this simple fact – metropolitan areas, like greater Detroit, may experience the symptoms of severe urban decline in one neighborhood while remaining economically robust in others. Indeed, the city of Detroit is a textbook case of urban decline, but many of the surrounding communities in metropolitan Detroit are thriving. An additional complication comes from the fact that modern American cities – cities like Dallas, Charlotte, and Phoenix – don’t look much like their early twentieth century counterparts. Phoenix of the early twenty-first century is an economically vibrant city, yet the urban core of Phoenix looks very, very different from the urban core found in “smaller” cities like Boston or San Francisco.[1] It is unlikely that a weekend visitor to downtown Phoenix would come away with the impression that Phoenix is a rapidly growing city, for downtown Phoenix does not contain the housing, shopping, or recreational venues that are found in downtown San Francisco or Boston.

There is no single variable that serves as a perfect measure of urban decline, but this article takes an in-depth look at the phenomenon by focusing on the best available measure of a city’s well-being: population. In order to provide a thorough understanding of urban decline, this article contains three additional sections. The next section employs data from a handful of sources to familiarize the reader with the location and severity of urban decline in the United States. Section three is dedicated to explaining the causes of urban decline in the U.S. Finally, the fourth section looks at the future of cities in the United States and provides some concluding remarks.

Urban Decline in the United States – Quantifying the Population Decline

Between 1950 and 2000 the population of the United States increased by approximately 130 million people, from roughly 151 million to 281 million. Despite the dramatic increase in population experienced by the country as a whole, different cities and states experienced radically different rates of growth. Table 1 shows the population figures for a handful of U.S. cities for the years 1950 to 2000. (It should be noted that these figures are population totals for the cities in the list, not for the associated metropolitan areas.)

Table 1: Population for Selected U.S. Cities, 1950-2000

City            1950        1960        1970        1980        1990        2000        % Change 1950-2000

New York        7,891,957   7,781,984   7,895,563   7,071,639   7,322,564   8,008,278      1.5
Philadelphia    2,071,605   2,002,512   1,949,996   1,688,210   1,585,577   1,517,550    -26.7
Boston            801,444     697,177     641,071     562,994     574,283     589,141    -26.5
Chicago         3,620,962   3,550,404   3,369,357   3,005,072   2,783,726   2,896,016    -20.0
Detroit         1,849,568   1,670,144   1,514,063   1,203,339   1,027,974     951,270    -48.6
Cleveland         914,808     876,050     750,879     573,822     505,616     478,403    -47.7
Kansas City       456,622     475,539     507,330     448,159     435,146     441,545     -3.3
Denver            415,786     493,887     514,678     492,365     467,610     554,636     33.4
Omaha             251,117     301,598     346,929     314,255     335,795     390,007     55.3
Los Angeles     1,970,358   2,479,015   2,811,801   2,966,850   3,485,398   3,694,820     87.5
San Francisco     775,357     740,316     715,674     678,974     723,959     776,733      0.2
Seattle           467,591     557,087     530,831     493,846     516,259     563,374     20.5
Houston           596,163     938,219   1,233,535   1,595,138   1,630,553   1,953,631    227.7
Dallas            434,462     679,684     844,401     904,078   1,006,877   1,188,580    173.6
Phoenix           106,818     439,170     584,303     789,704     983,403   1,321,045   1136.7
New Orleans       570,445     627,525     593,471     557,515     496,938     484,674    -15.0
Atlanta           331,314     487,455     495,039     425,022     394,017     416,474     25.7
Nashville         174,307     170,874     426,029     455,651     488,371     545,524    213.0
Washington        802,178     763,956     756,668     638,333     606,900     572,059    -28.7
Miami             249,276     291,688     334,859     346,865     358,548     362,470     45.4
Charlotte         134,042     201,564     241,178     314,447     395,934     540,828    303.5

Source: U.S. Census Bureau.

Several trends emerge from the data in Table 1. The cities in the table are clustered by region, and the cities at the top of the table, those from the Northeast and Midwest, either experience no significant population growth (New York) or suffer dramatic population losses (Detroit and Cleveland). Their experiences stand in stark contrast to those of the cities located in the South and West, found farther down the list. Phoenix, Houston, Dallas, Charlotte, and Nashville all experience triple-digit population increases during the five decades from 1950 to 2000. Figure 1 displays this information even more dramatically:

Figure 1: Percent Change in Population, 1950 – 2000

Source: U.S. Census Bureau.

While Table 1 and Figure 1 clearly display the population trends within these cities, they do not provide any information about what was happening to the metropolitan areas in which these cities are located. Table 2 fills this gap. (Please note – these metropolitan areas do not correspond directly to the metropolitan areas identified by the U.S. Census Bureau. Rather, Jordan Rappaport – an economist at the Kansas City Federal Reserve Bank – created these metropolitan areas for his 2005 article “The Shared Fortunes of Cities and Suburbs.”)

Table 2: Population of Selected Metropolitan Areas, 1950 to 2000

Metropolitan Area                  1950         1960         1970         2000         Percent Change 1950 to 2000

New York-Newark-Jersey City, NY    13,047,870   14,700,000   15,812,314   16,470,048    26.2
Philadelphia, PA                    3,658,905    4,175,988    4,525,928    4,580,167    25.2
Boston, MA                          3,065,344    3,357,607    3,708,710    4,001,752    30.5
Chicago-Gary, IL-IN                 5,612,248    6,805,362    7,606,101    8,573,111    52.8
Detroit, MI                         3,150,803    3,934,800    4,434,034    4,366,362    38.6
Cleveland, OH                       1,640,319    2,061,668    2,238,320    1,997,048    21.7
Kansas City, MO-KS                    972,458    1,232,336    1,414,503    1,843,064    89.5
Denver, CO                            619,774      937,677    1,242,027    2,414,649   289.6
Omaha, NE                             471,079      568,188      651,174      803,201    70.5
Los Angeles-Long Beach, CA          4,367,911    6,742,696    8,452,461   12,365,627   183.1
San Francisco-Oakland, CA           2,531,314    3,425,674    4,344,174    6,200,867   145.0
Seattle, WA                           920,296    1,191,389    1,523,601    2,575,027   179.8
Houston, TX                         1,021,876    1,527,092    2,121,829    4,540,723   344.4
Dallas, TX                            780,827    1,119,410    1,555,950    3,369,303   331.5
Phoenix, AZ                                NA      663,510      967,522    3,251,876   390.1*
New Orleans, LA                       754,856      969,326    1,124,397    1,316,510    74.4
Atlanta, GA                           914,214    1,224,368    1,659,080    3,879,784   324.4
Nashville, TN                         507,128      601,779      704,299    1,238,570   144.2
Washington, DC                      1,543,363    2,125,008    2,929,483    4,257,221   175.8
Miami, FL                             579,017    1,268,993    1,887,892    3,876,380   569.5
Charlotte, NC                         751,271      876,022    1,028,505    1,775,472   136.3

* The percentage change is for the period from 1960 to 2000.

Source: Rappaport; http://www.kc.frb.org/econres/staff/jmr.htm

Table 2 highlights several of the difficulties in conducting a meaningful discussion about urban decline. First, by glancing at the metro population figures for Cleveland and Detroit, it becomes clear that while these cities were experiencing severe urban decay, the suburbs surrounding them were not. The Detroit metropolitan area grew more rapidly than the Boston, Philadelphia, or New York metro areas, and even the Cleveland metro area experienced growth between 1950 and 2000. Next, we can see from Tables 1 and 2 that some of the cities experiencing dramatic growth between 1950 and 2000 did not enjoy similar increases in population at the metro level. The Phoenix, Charlotte, and Nashville metro areas experienced tremendous growth, but their metro growth rates were not nearly as large as their city growth rates. This raises an important question – did these cities experience tremendous growth rates because the population was growing rapidly or because the cities were annexing large amounts of land from the surrounding suburbs? Table 3 helps to answer this question. In Table 3, land area, measured in square miles, is provided for each of the cities initially listed in Table 1. The data in Table 3 clearly indicate that Nashville and Charlotte, as well as Dallas, Phoenix, and Houston, owe some of their growth to the expansion of their physical boundaries. Charlotte, Phoenix, and Nashville are particularly obvious examples of this phenomenon, for each city increased its physical footprint by over seven hundred percent between 1950 and 2000.
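
One way to see the annexation point is to combine the population figures in Table 1 with the land areas in Table 3 and compare growth in population with growth in land area. The sketch below does this for Phoenix and Charlotte using the 1950 and 2000 values reported in those tables; the calculation is straightforward and uses no data beyond the tables above.

```python
# Population (Table 1) and land area in square miles (Table 3), 1950 and 2000.
cities = {
    "Phoenix":   {"pop": (106_818, 1_321_045), "area": (17.1, 474.9)},
    "Charlotte": {"pop": (134_042, 540_828),   "area": (30.0, 242.3)},
}

for name, d in cities.items():
    pop_growth = d["pop"][1] / d["pop"][0] - 1
    area_growth = d["area"][1] / d["area"][0] - 1
    density_1950 = d["pop"][0] / d["area"][0]
    density_2000 = d["pop"][1] / d["area"][1]
    print(f"{name}: population +{pop_growth:.0%}, land area +{area_growth:.0%}, "
          f"density {density_1950:,.0f} -> {density_2000:,.0f} persons per sq. mile")

# Phoenix: population +1137%, land area +2677%, density 6,247 -> 2,782
# Charlotte: population +303%, land area +708%, density 4,468 -> 2,232
# In both cities land area grew faster than population, so much of the "growth"
# reflects annexation rather than rising density within the 1950 boundaries.
```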

Table 3: Land Area for Selected U.S. Cities, 1950 – 2000

(Land area in square miles)

City                1950    1960    1970    2000    Percent Change 1950 to 2000

New York, NY        315.1   300     299.7   303.3     -3.74
Philadelphia, PA    127.2   129     128.5   135.1      6.21
Boston, MA           47.8    46      46      48.4      1.26
Chicago, IL         207.5   222     222.6   227.1      9.45
Detroit, MI         139.6   138     138     138.8     -0.57
Cleveland, OH        75      76      75.9    77.6      3.47
Kansas City, MO      80.6   130     316.3   313.5    288.96
Denver, CO           66.8    68      95.2   153.4    129.64
Omaha, NE            40.7    48      76.6   115.7    184.28
Los Angeles, CA     450.9   455     463.7   469.1      4.04
San Francisco, CA    44.6    45      45.4    46.7      4.71
Seattle, WA          70.8    82      83.6    83.9     18.50
Houston, TX         160     321     433.9   579.4    262.13
Dallas, TX          112     254     265.6   342.5    205.80
Phoenix, AZ          17.1   187     247.9   474.9   2677.19
New Orleans, LA     199.4   205     197.1   180.6     -9.43
Atlanta, GA          36.9   136     131.5   131.7    256.91
Nashville, TN        22      29     507.8   473.3   2051.36
Washington, DC       61.4    61      61.4    61.4      0.00
Miami, FL            34.2    34      34.3    35.7      4.39
Charlotte, NC        30      64.8    76     242.3    707.67

Sources: Rappaport, http://www.kc.frb.org/econres/staff/jmr.htm; Gibson, Population of the 100 Largest Cities.

Taken together, Tables 1 through 3 paint a clear picture of what has happened in urban areas in the United States between 1950 and 2000: Cities in the Southern and Western U.S. have experienced relatively high rates of growth when they are compared to their neighbors in the Midwest and Northeast. And, as a consequence of this, central cities in the Midwest and Northeast have remained the same size or they have experienced moderate to severe urban decay. But, to complete this picture, it is worth considering some additional data. Table 4 presents regional population and housing data for the United States during the period from 1950 to 2000.

Table 4: Regional Population and Housing Data for the U.S., 1950 – 2000

                                                1950          1960          1970          1980          1990          2000

Population Density (persons per square mile)    50.9          50.7          57.4          64            70.3          79.6

Population by Region
  West                                          19,561,525    28,053,104    34,804,193    43,172,490    52,786,082    63,197,932
  South                                         47,197,088    54,973,113    62,795,367    75,372,362    85,445,930    100,236,820
  Midwest                                       44,460,762    51,619,139    56,571,663    58,865,670    59,668,632    64,392,776
  Northeast                                     39,477,986    44,677,819    49,040,703    49,135,283    50,809,229    53,594,378

Population by Region, Percent of Total
  West                                          13            15.6          17.1          19.1          21.2          22.5
  South                                         31.3          30.7          30.9          33.3          34.4          35.6
  Midwest                                       29.5          28.8          27.8          26            24            22.9
  Northeast                                     26.2          24.9          24.1          21.7          20.4          19

Population Living in non-Metropolitan Areas (millions)   66.2   65.9   63     57.1   56     55.4
Population Living in Metropolitan Areas (millions)       84.5   113.5  140.2  169.4  192.7  226
Percent in Suburbs in Metropolitan Area                  23.3   30.9   37.6   44.8   46.2   50
Percent in Central City in Metropolitan Area             32.8   32.3   31.4   30     31.3   30.3
Percent Living in the Ten Largest Cities                 14.4   12.1   10.8   9.2    8.8    8.5

Percentage Minority by Region (1980, 1990, and 2000 only)
  West        26.5   33.3   41.6
  South       25.7   28.2   34.2
  Midwest     12.5   14.2   18.6
  Northeast   16.6   20.6   26.6

Housing Units by Region
  West        6,532,785     9,557,505     12,031,802    17,082,919    20,895,221    24,378,020
  South       13,653,785    17,172,688    21,031,346    29,419,692    36,065,102    42,382,546
  Midwest     13,745,646    16,797,804    18,973,217    22,822,059    24,492,718    26,963,635
  Northeast   12,051,182    14,798,360    16,642,665    19,086,593    20,810,637    22,180,440

Source: Hobbs and Stoops (2002).

There are several items of particular interest in Table 4. Every region in the United States becomes more diverse between 1980 and 2000: in 1980 no region has a minority population greater than 26.5 percent, but by 2000 only the Midwest remains below that level. The U.S. population becomes increasingly urbanized over time, yet the percentage of Americans who live in central cities remains nearly constant. Thus, it is the growth in the number of Americans living in suburban communities that has fueled the dramatic increase in “urban” residents. This finding is reinforced by the figures for average population density for the United States as a whole, the numbers of Americans living in metropolitan versus non-metropolitan areas, and the percentage of Americans living in the ten largest cities.

Other Measures of Urban Decline

While the population decline documented in the first part of this section suggests that cities in the Northeast and Midwest experienced severe urban decline, anyone who has visited Detroit and Boston could tell you that urban decline has affected their downtowns in very different ways. The central city in Boston is, for the most part, economically vibrant: a visitor to Boston would find manicured public spaces as well as thriving retail, housing, and commercial sectors. Detroit’s downtown is still scarred by vacant office towers, abandoned retail space, and relatively little housing, and the city’s public spaces would not compare favorably to those of Boston. While the leaders of Detroit have made some needed improvements to the city’s downtown in the past several years, the central city remains a mere shadow of its former self. Thus, the loss of population experienced by Detroit and Boston does not tell the full story of how urban decline has affected these cities. Both have lost population, yet Detroit has lost a great deal more: it no longer possesses a well-functioning urban economy.

To date, there have been relatively few attempts to quantify the loss of economic vitality in cities afflicted by urban decay. This is due, in part, to the complexity of the problem. There are few reliable historical measures of economic activity available at the city level. However, economists and other social scientists are beginning to better understand the process and the consequences of severe urban decline.

Economists Edward Glaeser and Joseph Gyourko (2005) developed a model that thoroughly explains the process of urban decline. One of their principal insights is that the durable nature of housing means that the process of urban decline will not mirror the process of urban expansion. In a growing city, the demand for housing is met through the construction of new dwellings. When a city faces a reduction in economic productivity and the resulting reduction in the demand for labor, workers will begin to leave the city. Yet, when population in a city begins to decline, housing units do not magically disappear from the urban landscape. Thus, in Glaeser and Gyourko’s model a declining city is characterized by a stock of housing that interacts with a reduction in housing demand, producing a rapid reduction in the real price of housing. Empirical evidence supports the assertions made by the model, for in cities like Cleveland, Detroit, and Buffalo the real price of housing declined in the second half of the twentieth century. An important implication of the Glaeser and Gyourko model is that declining housing prices are likely to attract individuals who are poor and who have acquired relatively little human capital. The presence of these workers makes it difficult for a declining city – like Detroit – to reverse its economic decline, for it becomes relatively difficult to attract businesses that need workers with high levels of human capital.
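The asymmetry at the heart of the model can be illustrated with a minimal numerical sketch. The Python fragment below is my own stylized illustration of the durable-housing intuition, not Glaeser and Gyourko’s actual specification, and every number in it is made up: with a linear demand curve, builders add housing whenever price reaches construction cost, but when demand falls the stock cannot shrink, so the adjustment shows up almost entirely in price.

```python
# Stylized durable-housing sketch: quantities adjust upward through construction,
# but a negative demand shock lowers price because the stock cannot shrink.
def equilibrium(demand_intercept, slope=1.0, construction_cost=100.0, stock=0.0):
    """Return (price, housing stock) when builders supply new units only if P >= cost."""
    unconstrained_q = (demand_intercept - construction_cost) / slope
    if unconstrained_q > stock:          # growing city: build until price equals cost
        return construction_cost, unconstrained_q
    return demand_intercept - slope * stock, stock   # declining city: price falls

price, stock = equilibrium(demand_intercept=300.0)               # boom: price 100, stock 200
price, stock = equilibrium(demand_intercept=220.0, stock=stock)  # bust: price 20, stock 200
print(price, stock)
```

In the “bust” step the housing stock is unchanged while the price collapses well below construction cost, which is the mechanism that, in the model, attracts low-income households to declining cities.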

Complementing the theoretical work of Glaeser and Gyourko, Fred H. Smith (2003) used property values as a proxy for economic activity in order to quantify the urban decline experienced by Cleveland, Ohio. Smith found that the aggregate assessed value of the property in the downtown core of Cleveland fell from its peak of nearly $600 million in 1930 to a mere $45 million by 1980. (Both figures are expressed in 1980 dollars.) Economists William Collins and Robert Margo (2003, 2007) have also examined the impact of urban decline on property values. Their work focuses on how the value of owner-occupied housing declined in cities that experienced a race riot in the 1960s, and, in particular, on the gap in property values that developed between white-owned and black-owned homes. Nonetheless, a great deal of work still remains to be done before the magnitude of urban decay in the United States is fully understood.

What Caused Urban Decline in the United States?

Having examined the timing and the magnitude of the urban decline experienced by U.S. cities, it is now necessary to consider why these cities decayed. In the subsections that follow, each of the principal causes of urban decline is considered in turn.

Decentralizing Technologies

In “Sprawl and Urban Growth,” Edward Glaeser and Matthew Kahn (2001) assert that “while many factors may have helped the growth of sprawl, it ultimately has only one root cause: the automobile” (p. 2). Urban sprawl is simply a popular term for the decentralization of economic activity, one of the principal symptoms of urban decline. So it should come as no surprise that many of the forces that have caused urban sprawl are in fact the same forces that have driven the decline of central cities. As Glaeser and Kahn suggest, the list of causal forces must begin with the emergence of the automobile.

In order to maximize profit, firm owners must choose their locations carefully. Input prices and transportation costs (for inputs and outputs) vary across locations. Firm owners ultimately face two important decisions about location, and economic forces dictate the choices made in each instance. First, owners must decide in which city they will do business. Then, the firm owners must decide where the business should be located within the chosen city. In each case, transportation costs and input costs weigh heavily in the owners’ decision making. For example, a business owner whose firm will produce steel must consider the costs of transporting inputs (e.g. iron ore), the costs of transporting the output (steel), and the cost of other inputs in the production process (e.g. labor). For steel firms operating in the late nineteenth century these concerns were balanced by choosing locations in the Midwest, either on the Great Lakes (e.g. Cleveland) or on major rivers (e.g. Pittsburgh). Cleveland and Pittsburgh were cities with plentiful labor and relatively low transport costs for both inputs and the output. However, steel firm owners choosing Cleveland or Pittsburgh also had to choose a location within these cities. Not surprisingly, the owners chose locations that minimized transportation costs. In Cleveland, for example, the steel mills were built near the shore of Lake Erie and relatively close to the main rail terminal. This minimized the costs of getting iron ore from ships that had come to the city via Lake Erie, and it also provided easy access to water or rail transportation for shipping the finished product. The cost of choosing a site near the rail terminal and the city’s docks was not insignificant: land close to the city’s transportation hub was in high demand, and, therefore, relatively expensive. It would have been cheaper for firm owners to buy land on the periphery of these cities, but they chose not to do so because the costs associated with transporting inputs and outputs to and from the transportation hub would have outweighed the savings from buying cheaper land on the periphery of the city. Ultimately, it was the absence of cheap intra-city transport that compressed economic activity into the center of an urban area.

Yet, transportation costs and input prices have not simply varied across space; they’ve also changed over time. The introduction of the car and truck had a profound impact on transportation costs. In 1890, moving a ton of goods one mile cost 18.5 cents (measured in 2001 dollars). By 2003 the cost had fallen to 2.3 cents (measured in 2001 dollars) per ton-mile (Glaeser and Kahn 2001, p. 4). While the car and truck dramatically lowered transportation costs, they did not immediately affect firm owners’ choices about which city to choose as their base of operations. Rather, the immediate impact was felt in the choice of where within a city a firm should choose to locate. The intra-city truck made it easy for a firm to locate on the periphery of the city, where land was plentiful and relatively cheap. Returning to the example from the previous paragraph, the introduction of the intra-city truck allowed the owners of steel mills in Cleveland to build new plants on the periphery of the urban area where land was much cheaper (Encyclopedia of Cleveland History). Similarly, the car made it possible for residents to move away from the city center and out to the periphery of the city – or even to newly formed suburbs. (The suburbanization of the urban population had begun in the late nineteenth century when streetcar lines extended from the central city out to the periphery of the city or to communities surrounding the city; the automobile simply accelerated the process of decentralization.) The retail cost of a Ford Model T dropped considerably between 1910 and 1925 – from approximately $1850 to $470, measuring the prices in constant 1925 dollars (these values would be roughly $21,260 and $5400 in 2006 dollars), and the market responded accordingly. As Table 5 illustrates, the number of passenger car registrations increased dramatically during the twentieth century.

Table 5: Passenger Car Registrations in the United States, 1910-1980

Year Millions of Registered Vehicles
1910 0.5
1920 8.1
1930 23.0
1940 27.5
1950 40.4
1960 61.7
1970 89.2
1980 131.6

Source: Muller, p. 36.
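The magnitude of the cost declines cited above is easy to verify. The snippet below is a back-of-the-envelope check added here for illustration, using only the figures quoted in the text.

```python
# Percentage declines implied by the transport-cost and Model T figures above.
freight_1890, freight_later = 18.5, 2.3      # cents per ton-mile, in 2001 dollars
model_t_1910, model_t_1925 = 1850, 470       # dollars, in constant 1925 prices

print(f"Ton-mile freight cost fell by {1 - freight_later / freight_1890:.0%}")  # about 88%
print(f"Model T price fell by {1 - model_t_1925 / model_t_1910:.0%}")           # about 75%
```

Declines of this size, occurring within a few decades, are what made peripheral and suburban locations economically viable for both firms and households.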

While changes in transportation technology had a profound effect on firms’ and residents’ choices about where to locate within a given city, they also affected the choice of which city would be best for the firm or resident. Americans began demanding more and better roads to capitalize on the mobility made possible by the car. The automotive, construction, and tourism-related industries also lobbied state and federal governments to become heavily involved in funding road construction, a responsibility previously left to local governments. The landmark National Interstate and Defense Highway Act of 1956 signified a long-term commitment by the national government to unite the country through an extensive network of interstates, while also improving access between cities’ central business districts and outlying suburbs. As cars became affordable for the average American and paved roads became increasingly ubiquitous, not only did the suburban frontier open up to a rising proportion of the population; it became possible to live almost anywhere in the United States. (It is important to note, however, that the widespread availability of air conditioning was a critical factor in Americans’ willingness to move to the South and West.)

Another factor that opened up the rest of the United States for urban development was a change in the cost of obtaining energy. Obtaining abundant, cheap energy is a concern for firm owners and for households. Historical constraints on production and residential locations continued to fall away in the late nineteenth and early twentieth century as innovations in energy production began to take hold. One of the most important of these advances was the spread of the alternating-current electric grid, which further expanded firms’ choices regarding plant location and layout. Energy could be generated at any site and could travel long distances through thin copper wires. Over a fifty-year period from 1890 to 1940, the proportion of goods manufactured using electrical power soared from 0.1 percent to 85.6 percent (Nye 1990). With the complementary advancements in transportation, factories now had the option of locating outside of the city where they could capture savings from cheaper land. The flexibility of electrical power also offered factories new freedom in the spatial organization of production. Whereas steam engines had required a vertical system of organization in multi-level buildings, the AC grid made possible a form of production that permanently transformed the face of manufacturing – the assembly line (Nye 1990).

The Great Migration

Technological advances were not bound by urban limits; they also extended into rural America, where they had sweeping social and economic repercussions. Historically, the vast majority of African Americans had worked on Southern farms, first as slaves and then as sharecroppers. But progress in the mechanization of farming – particularly the development of the tractor and the mechanical cotton-picker – reduced the need for unskilled labor on farms. The dwindling need for farm laborers, coupled with continuing racial repression in the South, led hundreds of thousands of Southern African Americans to migrate North in search of new opportunities. The overall result was a dramatic shift in the spatial distribution of African Americans. In 1900, more than three-fourths of black Americans lived in rural areas, and all but a handful of rural blacks lived in the South. By 1960, 73 percent of blacks lived in urban areas, and the majority of urban blacks lived outside of the South (Cahill 1974).

Blacks had begun moving to Northern cities in large numbers at the onset of World War I, drawn by the lure of booming wartime industries. In the 1940s, Southern blacks began pouring into the industrial centers at more than triple the rate of the previous decade, bringing with them a legacy of poverty, poor education, and repression. The swell of impoverished and uneducated African Americans rarely received a friendly reception in Northern communities. Instead they frequently faced more of the treatment they had sought to escape (Groh 1972). Furthermore, the abundance of unskilled manufacturing jobs that had greeted the first waves of migrants had begun to dwindle. Manufacturing firms in the upper Midwest (the Rustbelt) faced increased competition from foreign firms, and many of the American firms that remained in business relocated to the suburbs or the Sunbelt to take advantage of cheap land. African Americans had difficulty accessing jobs at suburban locations, and the result for many was a “spatial mismatch” – they lived in the inner city, where employment opportunities were scarce, yet lacked the transportation that would allow them to commute to suburban jobs (Kain 1968). Institutionalized racism, which hindered blacks’ attempts to purchase real estate in the suburbs, as well as the proliferation of inner-city public housing projects, reinforced the spatial mismatch problem. High unemployment rates, high crime rates, and urban disturbances such as the race riots of the 1960s were obvious symptoms of the economic distress confronting inner-city African Americans. High crime rates and the race riots simply accelerated the demographic transformation of Northern cities. White city residents had once been “pulled” to the suburbs by the availability of cheap land and cheap transportation when the automobile became affordable; now white residents were being “pushed” by racism and the desire to escape the poverty and crime that had become common in the inner city. Indeed, by 2000 more than 80 percent of Detroit’s residents were African American – a stark contrast to 1950, when only 16 percent of the population was black.

The American City in the Twenty-First Century

Some believe that technology – specifically advances in information technology – will render the city obsolete in the twenty-first century. Urban economists find their arguments unpersuasive (Glaeser 1998). Recent history shows that the way we interact with one another has changed dramatically in a very short period of time. E-mail, cell phones, and text messages belonged to the world of science fiction as recently as 1980. Clearly, changes in information technology mean that it is no longer a requirement that we locate ourselves in close proximity to the people we want to interact with. Thus, one can understand the temptation to think that we will no longer need to live so close to one another in New York, San Francisco or Chicago. Ultimately, a person or a firm will only locate in a city if the benefits from being in the city outweigh the costs. What is missing from this analysis, though, is that people and firms locate in cities for reasons that are not immediately obvious.

Economists point to economies of agglomeration as one of the main reasons that firms will continue to choose urban locations over rural locations. Economies of agglomeration exist when a firm’s productivity is enhanced (or its cost of doing business is lowered) because it is located in a cluster of complementary firms or in a densely populated area. A classic example of an urban area that displays substantial economies of agglomeration is “Silicon Valley” (near San Jose, California). Firms choosing to locate in Silicon Valley benefit from several sources of economies of agglomeration, but two of the most easily understood are knowledge spillovers and labor pooling. Knowledge spillovers in Silicon Valley occur because individuals who work at “computer firms” (firms producing software, hardware, etc.) are likely to interact with one another on a regular basis. These interactions can be informal – playing together on a softball team, running into one another at a child’s soccer game, etc. – but they are still very meaningful because they promote the exchange of ideas. Exchanging ideas and information makes it possible for workers to (potentially) increase their productivity in their own jobs. Another example of economies of agglomeration in Silicon Valley is the labor pooling that occurs there. Because workers who are trained in computer-related fields know that computer firms are located in Silicon Valley, they are more likely to choose to live in or around Silicon Valley. Thus, firms operating in Silicon Valley have an abundant supply of labor in close proximity, and, similarly, workers enjoy the opportunities associated with having several firms that can make use of their skills in a small geographic area. The clustering of computer industry workers and firms allows firms to save money when they need to hire another worker, and it makes it easier for workers who need a job to find one.

In addition to economies of agglomeration, there are other economic forces that make the disappearance of the city unlikely. Another benefit that some individuals associate with urban living is the diversity of products and experiences available in a city. For example, in a large city like Chicago it is possible to find deep dish pizza, thin crust pizza, Italian food, Persian food, Greek food, Swedish food, Indian food, Chinese food… almost any type of food that you might imagine. Why is all of this food available in Chicago but not in a small town in southern Illinois? Economists answer this question using the concept of demand density. Lots of people like Chinese food, so it is not uncommon to find a Chinese restaurant in a small town. Fewer people, though, have been exposed to Persian cuisine. While it is quite likely that the average American would like Persian food if it were available, most Americans haven’t had the opportunity to try it. Hence, the average American is unlikely to demand much Persian food in a given time period. So, individuals who are interested in operating a Persian restaurant logically choose to operate in Chicago instead of a small town in southern Illinois. While each individual living in Chicago may not demand Persian food any more frequently than the individuals living in the small town, the presence of so many people in a relatively small area makes it possible for the Persian restaurant to operate and thrive. Moreover, exposure to Persian food may change people’s tastes and preferences. Over time, the amount of Persian food demanded (on average) by each inhabitant of the city may increase.

Individuals who value Persian food – or any of the other experiences that can only be found in a large city – will value the opportunity to live in a large city more than they will value the opportunity to live in a rural area. But the incredible diversity that a large city has to offer is a huge benefit to some individuals, not to everyone. Rural areas will continue to be populated as long as there are people who prefer the pleasures of low-density living. For these individuals, the pleasure of being able to walk in the woods or hike in the mountains may be more than enough compensation for living in a part of the country that doesn’t have a Persian restaurant.

As long as there are people (and firm owners) who believe that the benefits from locating in a city outweigh the costs, cities will continue to exist. The data shown above make it clear that Americans continue to value urban living. Indeed, the population figures for Chicago and New York suggest that in the 1990s more people were finding net benefits to living in very large cities. The rapid expansion of cities in the South and Southwest simply reinforces this idea. To be sure, the urban living experience in Charlotte is not the same as the urban living experience in Chicago or New York. So, while the urban cores of cities like Detroit and Cleveland are not likely to return to their former size anytime soon, and urban decline will continue to be a problem for these cities in the foreseeable future, it remains clear that Americans enjoy the benefits of urban living and that the American city will continue to thrive.

References

Cahill, Edward E. “Migration and the Decline of the Black Population in Rural and Non-Metropolitan Areas.” Phylon 35, no. 3, (1974): 284-92.

Casadesus-Masanell, Ramon. “Ford’s Model-T: Pricing over the Product Life Cycle.” ABANTE – Studies in Business Management 1, no. 2 (1998): 143-65.

Chudacoff, Howard and Judith Smith. The Evolution of American Urban Society, fifth edition. Upper Saddle River, NJ: Prentice Hall, 2000.

Collins, William and Robert Margo. “The Economic Aftermath of the 1960s Riots in American Cities: Evidence from Property Values.” Journal of Economic History 67, no. 4 (2007): 849-83.

Collins, William and Robert Margo. “Race and the Value of Owner-Occupied Housing, 1940-1990.” Regional Science and Urban Economics 33, no. 3 (2003): 255-86.

Cutler, David et al. “The Rise and Decline of the American Ghetto.” Journal of Political Economy 107, no. 3 (1999): 455-506.

Frey, William and Alden Speare, Jr. Regional and Metropolitan Growth and Decline in the United States. New York: Russell Sage Foundation, 1988.

Gibson, Campbell. “Population of the 100 Largest Cities and Other Urban Places in the United States: 1790 to 1990.” Population Division Working Paper, no. 27, U.S. Bureau of the Census, June 1998. Accessed at: http://www.census.gov/population/www/documentation/twps0027.html

Glaeser, Edward. “Are Cities Dying?” Journal of Economic Perspectives 12, no. 2 (1998): 139-60.

Glaeser, Edward and Joseph Gyourko. “Urban Decline and Durable Housing.” Journal of Political Economy 113, no. 2 (2005): 345-75.

Glaeser, Edward and Matthew Kahn. “Decentralized Employment and the Transformation of the American City.” Brookings-Wharton Papers on Urban Affairs, 2001.

Glaeser, Edward and Janet Kohlhase. “Cities, Regions, and the Decline of Transport Costs.” NBER Working Paper Series, National Bureau of Economic Research, 2003.

Glaeser, Edward and Albert Saiz. “The Rise of the Skilled City.” Brookings-Wharton Papers on Urban Affairs, 2004.

Glaeser, Edward and Jesse Shapiro. “Urban Growth in the 1990s: Is City Living Back?” Journal of Regional Science 43, no. 1 (2003): 139-65.

Groh, George. The Black Migration: The Journey to Urban America. New York: Weybright and Talley, 1972.

Gutfreund, Owen D. Twentieth Century Sprawl: Highways and the Reshaping of the American Landscape. Oxford: Oxford University Press, 2004.

Hanson, Susan, ed. The Geography of Urban Transportation. New York: Guilford Press, 1986.

Hobbs, Frank and Nicole Stoops. Demographic Trends in the Twentieth Century: Census 2000 Special Reports. Washington, DC: U.S. Census Bureau, 2002.

Kim, Sukkoo. “Urban Development in the United States, 1690-1990.” NBER Working Paper Series, National Bureau of Economic Research, 1999.

Mieszkowski, Peter and Edwin Mills. “The Causes of Metropolitan Suburbanization.” Journal of Economic Perspectives 7, no. 3 (1993): 135-47.

Muller, Peter. “Transportation and Urban Form: Stages in the Spatial Evolution of the American Metropolis.” In The Geography of Urban Transportation, edited by Susan Hanson. New York: Guilford Press, 1986.

Nye, David. Electrifying America: Social Meanings of a New Technology, 1880-1940. Cambridge, MA: MIT Press, 1990.

Nye, David. Consuming Power: A Social History of American Energies. Cambridge, MA: MIT Press, 1998.

Rae, Douglas. City: Urbanism and Its End. New Haven: Yale University Press, 2003.

Rappaport, Jordan. “U.S. Urban Decline and Growth, 1950 to 2000.” Economic Review: Federal Reserve Bank of Kansas City, no. 3, 2003: 15-44.

Rodwin, Lloyd and Hidehiko Sazanami, eds. Deindustrialization and Regional Economic Transformation: The Experience of the United States. Boston: Unwin Hyman, 1989.

Smith, Fred H. “Decaying at the Core: Urban Decline in Cleveland, Ohio.” Research in Economic History 21 (2003): 135-84.

Stanback, Thomas M. Jr. and Thierry J. Noyelle. Cities in Transition: Changing Job Structures in Atlanta, Denver, Buffalo, Phoenix, Columbus (Ohio), Nashville, Charlotte. Totowa, NJ: Allanheld, Osmun, 1982.

Van Tassel, David D. and John J. Grabowski, eds. The Encyclopedia of Cleveland History. Bloomington: Indiana University Press, 1996. Available at http://ech.case.edu/


[1] Reporting the size of a “city” should be done with care. In day-to-day usage, many Americans might talk about the size (population) of Boston and assert that Boston is a larger city than Phoenix. Strictly speaking, this is not true. The 2000 Census reports that the population of Boston was 589,000 while Phoenix had a population of 1.3 million. However, the Boston metropolitan area contained 4.4 million inhabitants in 2000 – substantially more than the 3.3 million residents of the Phoenix metropolitan area.

Citation: Smith, Fred and Sarah Allen. “Urban Decline (and Success), US”. EH.Net Encyclopedia, edited by Robert Whaples. June 5, 2008. URL http://eh.net/encyclopedia/urban-decline-and-success-in-the-united-states/

Sweden – Economic Growth and Structural Change, 1800-2000

Lennart Schön, Lund University

This article presents an overview of Swedish economic growth performance internationally and statistically and an account of major trends in Swedish economic development during the nineteenth and twentieth centuries.1

Modern economic growth in Sweden took off in the middle of the nineteenth century and in international comparative terms Sweden has been rather successful during the past 150 years. This is largely thanks to the transformation of the economy and society from agrarian to industrial. Sweden is a small economy that has been open to foreign influences and highly dependent upon the world economy. Thus, successive structural changes have put their imprint upon modern economic growth.

Swedish Growth in International Perspective

The century-long period from the 1870s to the 1970s comprises the most successful part of Swedish industrialization and growth. On a per capita basis the Japanese economy performed equally well (see Table 1). The neighboring Scandinavian countries also grew rapidly, but at a somewhat slower rate than Sweden. Sweden clearly outpaced growth in the rest of industrial Europe and in the U.S. Growth in the entire world economy, as measured by Maddison, was even slower.

Table 1 Annual Economic Growth Rates per Capita in Industrial Nations and the World Economy, 1871-2005

Year Sweden Rest of Nordic Countries Rest of Western Europe United States Japan World Economy
1871/1875-1971/1975 2.4 2.0 1.7 1.8 2.4 1.5
1971/1975-2001/2005 1.7 2.2 1.9 2.0 2.2 1.6

Note: Rest of Nordic countries = Denmark, Finland and Norway. Rest of Western Europe = Austria, Belgium, Britain, France, Germany, Italy, the Netherlands, and Switzerland.

Source: Maddison (2006); Krantz/Schön (forthcoming 2007); World Bank, World Development Indicator 2000; Groningen Growth and Development Centre, www.ggdc.com.

The Swedish advance in a global perspective is illustrated in Figure 1. In the mid-nineteenth century the Swedish average income level was close to the average global level (as measured by Maddison). In a European perspective Sweden was a rather poor country. By the 1970s, however, the Swedish income level was more than three times higher than the global average and among the highest in Europe.

Figure 1
Swedish GDP per Capita in Relation to World GDP per Capita, 1870-2004
(Nine-year moving averages)

Sources: Maddison (2006); Krantz/Schön (forthcoming 2007).

Note. The annual variation in world production between Maddison’s benchmarks 1870, 1913 and 1950 is estimated from his supply of annual country series.
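As a sketch of how a series like that in Figure 1 can be constructed (the numbers below are invented placeholders; the actual data come from Maddison and from Krantz/Schön), one divides the Swedish GDP per capita series by the world series and smooths the ratio with a nine-year centered moving average:

```python
# Hypothetical illustration of the Figure 1 computation: ratio of Swedish to world
# GDP per capita, smoothed with a nine-year centered moving average.
import pandas as pd

years = range(1870, 1881)  # placeholder years; the real series runs 1870-2004
sweden = pd.Series([1200, 1230, 1260, 1300, 1330, 1360, 1400, 1450, 1500, 1560, 1620], index=years)
world = pd.Series([870, 875, 880, 890, 895, 900, 910, 915, 920, 930, 940], index=years)

ratio = sweden / world
smoothed = ratio.rolling(window=9, center=True).mean()
print(smoothed.dropna())   # values exist only where a full nine-year window is available
```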

To some extent this was a catch-up story. Sweden was able to take advantage of technological and organizational advances made in Western Europe and North America. Furthermore, resource-rich Scandinavian countries such as Sweden and Finland had been rather disadvantaged as long as agriculture was the main source of income. The shift to industry expanded the resource base, and industrial development – directed both to a growing domestic market and, even more, to a widening world market – became the main lever of growth from the late nineteenth century.

Catch-up is not the whole story, though. In many industrial areas Swedish companies took a position at the technological frontier from an early point in time. Thus, in certain sectors there was also forging ahead,2 quickening the pace of structural change in the industrializing economy. Furthermore, during a century of fairly rapid growth new conditions have arisen that have required profound adaptation and a renewal of entrepreneurial activity as well as of economic policies.

The slowdown in Swedish growth from the 1970s may be considered in this perspective. While in most other countries growth from the 1970s fell only in relation to growth rates in the golden post-war age, Swedish growth fell clearly below the historical long run growth trend. It also fell to a very low level internationally. The 1970s certainly meant the end of a number of successful growth trajectories in the industrial society. At the same time new growth forces appeared with the electronic revolution, as well as with the advance of a more service-based economy. It may be the case that this structural change hit the Swedish economy harder than most other economies, at least among the industrial capitalist economies. Sweden was forced into a transformation of its industrial economy and of its political economy in the 1970s and the 1980s that was more profound than in most other Western economies.

A Statistical Overview, 1800-2000

Swedish economic development since 1800 may be divided into six periods with different growth trends, as well as different compositions of growth forces.

Table 2 Annual Growth Rates in per Capita Production, Total Investments, Foreign Trade and Population in Sweden, 1800-2000

Period Per capita GDP Investments Foreign Trade Population
1800-1840 0.6 0.3 0.7 0.8
1840-1870 1.2 3.0 4.6 1.0
1870-1910 1.7 3.0 3.3 0.6
1910-1950 2.2 4.2 2.0 0.5
1950-1975 3.6 5.5 6.5 0.6
1975-2000 1.4 2.1 4.3 0.4
1800-2000 1.9 3.4 3.8 0.7

Source: Krantz/Schön (forthcoming 2007).

In the first decades of the nineteenth century the agricultural sector dominated and growth was slow in all aspects but population. Still, there was per capita growth, though to some extent this was a recovery from the low levels during the Napoleonic Wars. The acceleration during the next period, around the mid-nineteenth century, is marked in all respects. Investments and foreign trade became very dynamic ingredients with the onset of industrialization, and they were to remain so during the following periods as well. Up to the 1970s per capita growth rates increased in each successive period. In an international perspective it is most notable that per capita growth rates increased even in the interwar period, despite the slowdown in foreign trade. The interwar period is crucial for the long run relative success of Swedish economic growth. The decisive culmination in the post-war period, with high growth rates in investments and in foreign trade, stands out, as does the deceleration in all aspects in the late twentieth century.

An analysis in a traditional growth accounting framework gives a long term pattern with certain periodic similarities (see Table 3). Thus, total factor productivity growth increased over time up to the 1970s, only to decrease to its long run level in the last decades. This deceleration in productivity growth may be looked upon either as a failure of the “Swedish Model” to accommodate new growth forces or as another case of the “productivity paradox” associated with the information technology revolution.3
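The decomposition underlying Table 3 can be written out explicitly. As a sketch of the standard growth-accounting framework (the exact factor shares used by Krantz and Schön are not reported here, so α below is a generic capital share):

```latex
\[
\frac{\Delta Y}{Y} \;=\; \alpha\,\frac{\Delta K}{K} \;+\; (1-\alpha)\,\frac{\Delta L}{L} \;+\; \frac{\Delta A}{A}
\]
```

where Y is output, K capital, L labor, α the capital share of income and ΔA/A total factor productivity growth. The relative contributions in Table 3 are each term expressed as a share of GDP growth; for 1950-1975, for example, TFP growth of 2.1 percent against GDP growth of roughly 4.3 percent (Table 4) corresponds to the 48 percent TFP share shown.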

Table 3 Total Factor Productivity (TFP) Growth and Relative Contribution of Capital, Labor and TFP to GDP Growth in Sweden, 1840-2000

Period TFP Growth (% per year) Capital (% of GDP growth) Labor (% of GDP growth) TFP (% of GDP growth)
1840-1870 0.4 55 27 18
1870-1910 0.7 50 18 32
1910-1950 1.0 39 24 37
1950-1975 2.1 45 7 48
1975-2000 1.0 44 1 55
1840-2000 1.1 45 16 39

Source: See Table 2.

In terms of contribution to overall growth, TFP increased its share in every period. The TFP share was low in the 1840s but there was a very marked increase with the onset of modern industrialization from the 1870s. In relative terms TFP reached its highest level so far from the 1970s, indicating an increasing role for human capital, technology and knowledge in economic growth. The role of capital accumulation was markedly more pronounced in early industrialization, with the build-up of a modern infrastructure and with urbanization, but capital still retained much of its importance during the twentieth century. Thus its contribution to growth during the post-war Golden Age was significant, with very high levels of material investment. At the same time TFP growth culminated with positive structural shifts, as well as increased knowledge intensity complementary to the investments. Labor has, in quantitative terms, progressively reduced its role in economic growth. One should observe, however, the relatively large importance of labor in Swedish economic growth during the interwar period. This was largely due to demographic factors and to the employment situation, which are commented upon further below.

In the first decades of the nineteenth century, growth was still led by the primary production of agriculture, accompanied by services and transport. Secondary production in manufacturing and building was, on the contrary, very stagnant. From the 1840s the industrial sector accelerated, increasingly supported by transport and communications, as well as by private services. The sectoral shift from agriculture to industry became more pronounced at the turn of the twentieth century, when industry and transportation boomed while agricultural growth decelerated into subsequent stagnation. In the post-war period the volume of services, both private and public, increased strongly, although still not outpacing industry. From the 1970s the focus shifted to private services and to transport and communications, indicating fundamentally new prerequisites for growth.

Table 4 Growth Rates of Industrial Sectors, 1800-2000

Period Agriculture Industry and Handicraft Transport and Communications Building Private Services Public Services GDP
1800-1840 1.5 0.3 1.1 -0.1 1.4 1.5 1.3
1840-1870 2.1 3.7 1.8 2.4 2.7 0.8 2.3
1870-1910 1.0 5.0 3.9 1.3 2.7 1.0 2.3
1910-1950 0.0 3.5 4.9 1.4 2.2 2.2 2.7
1950-1975 0.4 5.1 4.4 3.8 4.3 4.0 4.3
1975-2000 -0.4 1.9 2.6 -0.8 2.2 0.2 1.8
1800-2000 0.9 3.8 3.7 1.8 2.7 1.7 2.6

Source: See Table 2.

Note: Private services are exclusive of dwelling services.

Growth and Transformation in the Agricultural Society of the Early Nineteenth Century

During the first half of the nineteenth century the agricultural sector and the rural society dominated the Swedish economy. Thus, more than three-quarters of the population were occupied in agriculture while roughly 90 percent lived in the countryside. Many non-agrarian activities such as the iron industry, the saw mill industry and many crafts as well as domestic, religious and military services were performed in rural areas. Although growth was slow, a number of structural and institutional changes occurred that paved the way for future modernization.

Most important was the transformation of agriculture. From the late eighteenth century, commercialization of the primary sector intensified. Particularly during the Napoleonic Wars, the domestic market for foodstuffs widened. The population increase, in combination with the temporary decrease in imports, stimulated enclosures and reclamation of land, the introduction of new crops and new methods and, above all, a greater degree of market orientation. In the decades after the war the traditional Swedish trade deficit in grain even shifted to a trade surplus, with increasing exports of oats, primarily to Britain.

Concomitant with the agricultural transformation were a number of infrastructural and institutional changes. Domestic transportation costs were reduced through investments in canals and roads. Trade of agricultural goods was liberalized, reducing transaction costs and integrating the domestic market even further. Trading companies became more effective in attracting agricultural surpluses for more distant markets. In support of the agricultural sector new means of information were introduced by, for example, agricultural societies that published periodicals on innovative methods and on market trends. Mortgage societies were established to supply agriculture with long term capital for investments that in turn intensified the commercialization of production.

All these elements meant a profound institutional change in the sense that the price mechanism became much more effective in directing human behavior. Furthermore, a greater interest was instilled in information and in its main instrument, literacy. Traditionally, popular literacy had been upheld by the church and was mainly devoted to knowledge of the primary Lutheran texts. In the new economic environment, literacy was secularized and transformed into a more functional literacy, marked by the advent of schools for public education in the 1840s.

The Breakthrough of Modern Economic Growth in the Mid-nineteenth Century

In the decades around the middle of the nineteenth century new dynamic forces appeared that accelerated growth. Most notably, foreign trade expanded by leaps and bounds in the 1850s and 1860s. With new export sectors, industrial investments increased. Furthermore, railways became the most prominent component of a new infrastructure, and with their construction a new element in Swedish growth was introduced: heavy capital imports.

The upswing in industrial growth in Western Europe during the 1850s, in combination with demand induced by the Crimean War, led to a particularly strong expansion in Swedish exports, with sharp price increases for three staple goods – bar iron, wood and oats. Charcoal-based Swedish bar iron had been the traditional export good and had completely dominated Swedish exports until the mid-nineteenth century. Bar iron met, however, increasingly strong competition from the British and continental iron and steel industries, and Swedish exports had stagnated in the first half of the nineteenth century. The upswing in international demand, following the diffusion of industrialization and railway construction, gave an impetus to the modernization of Swedish steel production in the following decades.

The saw mill industry was a genuinely new export industry that grew dramatically in the 1850s and 1860s. Up until this time, the vast forests of Sweden had been regarded mainly as a fuel resource for the iron industry, for household heating and for local residential construction. With sharp price increases on the Western European market from the 1840s and 1850s, the resources of the sparsely populated northern part of Sweden suddenly became valuable. A formidable explosion of saw mill construction at the mouths of the rivers along the northern coastline followed. Within a few decades Swedish merchants, as well as Norwegian, German, British and Dutch merchants, became saw mill owners running large-scale capitalist enterprises at the fringe of European civilization.

Less dramatic but equally important was the sudden expansion of Swedish oat exports. The market for oats appeared mainly in Britain, where the rapid growth of urban centers increased the fleet of horses used for short-distance transportation. Swedish oats became an important energy resource during the decades around the mid-nineteenth century. In Sweden this had a special significance since oats could be cultivated on rather barren and marginal soils, and Sweden was richly endowed with such soils. Thus, the market for oats, with strongly increasing prices, further stimulated the commercialization of agriculture and the diffusion of new methods. The effect was reinforced because oats grown for the market substituted for local flax production – which also thrived on barren soils – while domestic linen was increasingly supplanted by factory-produced cotton goods.

The Swedish economy was able to respond to the impetus from Western Europe during these decades, to diffuse the new influences through the economy and to integrate them very successfully into its development. The barriers to change seem to have been weak. This is partly explained by the prior transformation of agriculture and the evolution of market institutions in the rural economy. People reacted to the price mechanism. New social classes of commercial peasants, capitalists and wage laborers had emerged in an era of domestic market expansion, increased regional specialization and population growth.

The composition of export goods also contributed to a wide diffusion of participation in, and income from, the export expansion. Iron, wood and oats spread the gains both regionally and socially. The value of previously marginal resources, such as soils in the south and forests in the north, was inflated. The technology was simple and labor intensive in industry, forestry, agriculture and transportation. The demand for unskilled labor increased strongly, which was to put an imprint upon Swedish wage development in the second half of the nineteenth century. Commercial houses and industrial companies made profits, but export income was distributed to many segments of the population.

The integration of the Swedish economy was further reinforced through initiatives taken by the State. The parliamentary decision in the 1850s to construct the railway trunk lines meant, first, more direct involvement by the State in the development of a modern infrastructure and, second, new principles of finance, since the State had to rely upon capital imports. At the same time markets for goods, labor and capital were liberalized, and integration both within Sweden and with the world market deepened. The Swedish adoption of the Gold Standard in 1873 put a final stamp on this institutional development.

A Second Industrial Revolution around 1900

In the late nineteenth century, particularly in the 1880s, international competition became fiercer for agriculture and the early industrial branches. The integration of world markets led to falling prices and stagnation in the demand for Swedish staple goods such as iron, sawn wood and oats. Profits were squeezed and expansion thwarted. On the other hand, new markets arose. Increasing wages intensified mechanization both in agriculture and in industry, and demand increased for more sophisticated machinery. At the same time consumer demand shifted towards better foodstuffs – such as milk, butter and meat – and towards more highly fabricated industrial goods.

The decades around the turn of the twentieth century meant a profound structural change in the composition of Swedish industrial expansion that was crucial for long term growth. New and more sophisticated enterprises were founded and expanded particularly from the 1890s, in the upswing after the Baring Crisis.

The new enterprises were closely related to the so called Second Industrial Revolution in which scientific knowledge and more complex engineering skills were main components. The electrical motor became especially important in Sweden. A new development block was created around this innovation that combined engineering skills in companies such as ASEA (later ABB) with a large demand in energy-intensive processes and with the large supply of hydropower in Sweden.4 Financing the rapid development of this large block engaged commercial banks, knitting closer ties between financial capital and industry. The State, once again, engaged itself in infrastructural development in support of electrification, still resorting to heavy capital imports.

A number of innovative industries were founded in this period – all related to increased demand for mechanization and engineering skills. Companies such as AGA, ASEA, Ericsson, Separator (AlfaLaval) and SKF have been labeled “enterprises of genius,” and all are associated with renowned inventors and innovators. This was, of course, not an entirely Swedish phenomenon. These branches developed simultaneously on the Continent, particularly in nearby Germany, and in the U.S. Knowledge and innovative stimulus were diffused among these economies. The question is rather why this new development became so strong in Sweden that, within a relatively short period of time, new industries were able to supplant old resource-based industries as the main driving forces of industrialization.

Traditions of engineering skill were certainly important, developed in old heavy industrial branches such as the iron and steel industries and stimulated further by State initiatives such as railway construction or, more directly, the founding of the Royal Institute of Technology. But, apart from that, economic development in the second half of the nineteenth century fundamentally changed relative factor prices and the profitability of allocating resources to different lines of production.

The relative increase in the wages of unskilled labor had been stimulated by the composition of early exports in Sweden. This was much reinforced by two components in the further development – emigration and capital imports.

Within approximately the same period, 1850-1910, the Swedish economy received a huge amount of capital, mainly from Germany and France, while delivering an equally huge amount of labor, primarily to the U.S. Thus, Swedish relative factor prices changed dramatically. Swedish interest rates remained at rather high levels compared to leading European countries until 1910, due to a continuously large demand for capital in Sweden, but relative wages rose persistently (see Table 5). As in the rest of Scandinavia, wage increases were much stronger than GDP growth in Sweden, indicating a shift in income distribution in favor of labor, particularly unskilled labor, during this period of increased world market integration.

Table 5 Annual Increase in Real Wages of Unskilled Labor and Annual GDP Growth per Capita, 1870-1910

Country Annual real wage increase, 1870-1910 Annual GDP growth per capita, 1870-1910
Sweden 2.8 1.7
Denmark and Norway 2.6 1.3
France, Germany and Great Britain 1.1 1.2
United States 1.1 1.6

Sources: Wages from Williamson (1995); GDP growth see Table 1.

Relative profitability fell in traditional industries, which exploited rich natural resources and cheap labor, while more sophisticated industries were favored. But the causality runs both ways. Had this structural shift with the growth of new and more profitable industries not occurred, the Swedish economy would not have been able to sustain the wage increase.5

Accelerated Growth in the War-stricken Period, 1910-1950

The most notable feature of long term Swedish growth is the acceleration in growth rates during the period 1910-1950, which in Europe at large was full of problems and catastrophes.6 Thus, Swedish per capita production grew at 2.2 percent annually while growth in the rest of Scandinavia was somewhat below 2 percent and in the rest of Europe hovered at 1 percent. The Swedish acceleration was based mainly on three pillars.

First, the structure created at the end of the nineteenth century was very viable, with considerable long term growth potential. It consisted of new industries and new infrastructures that involved industrialists and financial capitalists, as well as public sector support. It also involved industries meeting a relatively strong demand in war times, as well as in the interwar period, both domestically and abroad.

Second, the First World War meant an immense financial bonus to the Swedish market. A huge export surplus at inflated prices during the war led to the domestication of the Swedish national debt. This in turn further capitalized the Swedish financial market, lowering interest rates and facilitating further innovative activity in industry. A domestic money market arose that provided the State with new instruments for economic policy, which were to become important for the implementation of the new social democratic “Keynesian” policies of the 1930s.

Third, demographic development favored the Swedish economy in this period. The share of the economically active age group, 15-64, grew substantially. This was due partly to the fact that prior emigration had reduced the size of the cohorts that would now have become old-age pensioners. Comparatively low mortality among young people during the 1910s, as well as the end of mass emigration, further enhanced the share of the active population. Both the labor market and domestic demand were stimulated, in particular during the 1930s, when the household-forming age group of 25-30 year olds increased.

The augmented labor supply would have increased unemployment had it not been combined with the richer supply of capital and innovative industrial development that met elastic demand both domestically and in Europe.

Thus, a richer supply of both capital and labor stimulated the domestic market in a period when international market integration deteriorated. Above all it stimulated the development of mass production of consumption goods based upon the innovations of the Second Industrial Revolution. Significant new enterprises that emanated from the interwar period – such as Volvo, SAAB, Electrolux, Tetra Pak and IKEA – were very much related to the new logic of the industrial society.

The Golden Age of Growth, 1950-1975

The Swedish economy was clearly part of the European Golden Age of growth, although Swedish acceleration from the 1950s was less pronounced than in the rest of Western Europe, which to a much larger extent had been plagued by wars and crises.7 The Swedish post-war period was characterized primarily by two phenomena – the full fruition of development blocks based upon the great innovations of the late nineteenth century (the electrical motor and the combustion engine) and the cementation of the “Swedish Model” for the welfare state. These two phenomena were highly complementary.

The Swedish Model had basically two components. One was a greater public responsibility for social security and for the creation and preservation of human capital. This led to a rapid increase in the supply of public services in the realms of education, health and children’s day care, as well as to increases in social security programs and in public saving for pension transfers. The consequence was high taxation. The other component was the regulation of labor and capital markets. This was the most ingenious part of the model, constructed to sustain growth in the industrial society and to increase equality in combination with the social security program and taxation.

The labor market program was the result of negotiations between the trade unions and the employers’ organization. It was labeled “solidaristic wage policy” and had two elements. One was to achieve equal wages for equal work, regardless of individual companies’ ability to pay. The other was to raise the wage level in low-paid areas and thus to compress the wage distribution. The aim of the program was actually to increase the speed of structural rationalization of industries and to eliminate less productive companies and branches. Labor should be transferred to the most productive export-oriented sectors. At the same time income should be distributed more equally. A drawback of the solidaristic wage policy from an egalitarian point of view was that profits soared in the productive sectors since wage increases were held back. However, capital market regulations hindered high profits from being converted into very high incomes for shareholders. Profits were taxed lightly if they were converted into further investment within the company (the timing of the use of these funds was controlled by the State as part of its stabilization policy) but heavily if distributed to shareholders. The result was that investment within existing profitable companies was supported and actually subsidized, while the mobility of capital dwindled and activity on the stock market fell.

As long as the export sectors grew, the program worked well.8 Companies founded in the late nineteenth century and in the interwar period developed into successful multinationals in engineering with machinery, auto industries and shipbuilding, as well as in resource-based industries of steel and paper. The expansion of the export sector was the main force behind the high growth rates and the productivity increases but the sector was strongly supported by public investments or publicly subsidized investments in infrastructure and residential construction.

Hence, during the Golden Age of growth the development blocks around electrification and motorization matured in a broad modernization of the society, in which mass consumption and mass production were supported by social programs, by investment programs and by labor market policy.

Crisis and Restructuring from the 1970s

In the 1970s and early 1980s a number of industries – such as steel works, pulp and paper, shipbuilding, and mechanical engineering – ran into crisis. New global competition, changing consumer behavior and profound innovative renewal, especially in microelectronics, made some of the industrial pillars of the Swedish Model crumble. At the same time the disadvantages of the old model became more apparent. It put obstacles in the way of flexibility and entrepreneurial initiative, and it reduced individual incentives for mobility. Thus, while the Swedish Model did foster the rationalization of existing industries well adapted to the post-war period, it did not support a more profound transformation of the economy.

One should not exaggerate the obstacles to transformation, though. The Swedish economy was still very open in the market for goods and many services, and the pressure to transform increased rapidly. During the 1980s a far-reaching structural change within industry as well as in economic policy took place, engaging both private and public actors. Shipbuilding was almost completely discontinued, pulp industries were integrated into modernized paper works, the steel industry was concentrated and specialized, and the mechanical engineering was digitalized. New and more knowledge-intensive growth industries appeared in the 1980s, such as IT-based telecommunication, pharmaceutical industries, and biotechnology, as well as new service industries.

During the 1980s some of the constituent components of the Swedish Model were weakened or eliminated. Centralized negotiations and the solidaristic wage policy disappeared. Regulations in the capital market were dismantled under the pressure of increasing international capital flows, simultaneously with a forceful revival of the stock market. The expansion of public sector services came to an end and the taxation system was reformed, with a reduction of marginal tax rates. Thus, Swedish economic policy and the welfare system became more closely adapted to the European mainstream, which facilitated Sweden’s application for membership and its final entry into the European Union in 1995.

It is also clear that the period from the 1970s to the early twenty-first century comprises two distinct growth trends, before and after 1990 respectively. During the 1970s and 1980s, growth in Sweden was very slow and marked by the great structural problems that the Swedish economy had to cope with. The slow growth prior to 1990 does not signify stagnation in a real sense, but rather the transformation of industrial structures and the reformulation of economic policy, which did not immediately result in a speed-up of growth but rather in imbalances and bottlenecks that took years to eliminate. From the 1990s up to 2005 Swedish growth accelerated quite forcefully in comparison with most Western economies.9 Thus, the 1980s may be considered a Swedish case of “the productivity paradox,” with innovative renewal but with a delayed acceleration of productivity and growth from the 1990s – although a delayed productivity effect of more profound transformation and radical innovative behavior is not paradoxical.

Table 6 Annual Growth Rates per Capita, 1971-2005

Period Sweden Rest of Nordic Countries Rest of Western Europe United States World Economy
1971/1975-1991/1995 1.2 2.1 1.8 1.6 1.4
1991/1995-2001/2005 2.4 2.5 1.7 2.1 2.1

Sources: See Table 1.

The recent acceleration in growth may also indicate that some of the basic traits from early industrialization still pertain to the Swedish economy – an international attitude in a small open economy fosters transformation and adaptation of human skills to new circumstances as a major force behind long term growth.

References

Abramovitz, Moses. “Catching Up, Forging Ahead and Falling Behind.” Journal of Economic History 46, no. 2 (1986): 385-406.

Dahmén, Erik. “Development Blocks in Industrial Economics.” Scandinavian Economic History Review 36 (1988): 3-14.

David, Paul A. “The Dynamo and the Computer: An Historical Perspective on the Modern Productivity Paradox.” American Economic Review 80, no. 2 (1990): 355-61.

Eichengreen, Barry. “Institutions and Economic Growth: Europe after World War II.” In Economic Growth in Europe since 1945, edited by Nicholas Crafts and Gianni Toniolo. New York: Cambridge University Press, 1996.

Krantz, Olle and Lennart Schön. Swedish Historical National Accounts, 1800-2000. Lund: Almqvist and Wiksell International (forthcoming, 2007).

Maddison, Angus. The World Economy, Volumes 1 and 2. Paris: OECD, 2006.

Schön, Lennart. “Development Blocks and Transformation Pressure in a Macro-Economic Perspective: A Model of Long-Cyclical Change.” Skandinaviska Enskilda Banken Quarterly Review 20, no. 3-4 (1991): 67-76.

Schön, Lennart. “External and Internal Factors in Swedish Industrialization.” Scandinavian Economic History Review 45, no. 3 (1997): 209-223.

Schön, Lennart. En modern svensk ekonomisk historia: Tillväxt och omvandling under två sekel (A Modern Swedish Economic History: Growth and Transformation in Two Centuries). Stockholm: SNS, 2000.

Schön, Lennart. “Total Factor Productivity in Swedish Manufacturing in the Period 1870-2000.” In Exploring Economic Growth: Essays in Measurement and Analysis: A Festschrift for Riitta Hjerppe on Her Sixtieth Birthday, edited by S. Heikkinen and J.L. van Zanden. Amsterdam: Aksant, 2004.

Schön, Lennart. “Swedish Industrialization 1870-1930 and the Heckscher-Ohlin Theory.” In Eli Heckscher, International Trade, and Economic History, edited by Ronald Findlay et al. Cambridge, MA: MIT Press, 2006.

Svennilson, Ingvar. Growth and Stagnation in the European Economy. Geneva: United Nations Economic Commission for Europe, 1954.

Temin, Peter. “The Golden Age of European Growth Reconsidered.” European Review of Economic History 6, no. 1 (2002): 3-22.

Williamson, Jeffrey G. “The Evolution of Global Labor Markets since 1830: Background Evidence and Hypotheses.” Explorations in Economic History 32, no. 2 (1995): 141-96.

Citation: Schön, Lennart. “Sweden – Economic Growth and Structural Change, 1800-2000”. EH.Net Encyclopedia, edited by Robert Whaples. February 10, 2008. URL http://eh.net/encyclopedia/sweden-economic-growth-and-structural-change-1800-2000/

A History of the Standard of Living in the United States

Richard H. Steckel, Ohio State University

Methods of Measuring the Standard of Living

During many years of teaching, I have introduced the topic of the standard of living by asking students to pretend that they would be born again to unknown (random) parents in a country they could choose based on three of its characteristics. The list put forward in the classroom invariably includes many of the categories usually suggested by scholars who have studied the standard of living over the centuries: access to material goods and services; health; socio-economic fluidity; education; inequality; the extent of political and religious freedom; and climate. Thus, there is little disagreement among people, whether newcomers or professionals, on the relevant categories of social performance.

Components and Weights

Significant differences of opinion emerge, both among students and research specialists, on the precise measures to be used within each category and on the weights or relative importance that should be attached to each. There are numerous ways to measure health, for example, with some approaches emphasizing length of life while others give high priority to morbidity (illness or disability) or to other aspects of the quality of life while living (e.g., physical fitness). Conceivably one might attempt comparisons using all feasible measures, but this is expensive and time-consuming, and in any event many good measures within categories are highly correlated.

Weighting the various components is the most contentious issue in any attempt to summarize the standard of living, or otherwise compress diverse measures into a single number. Some people give high priority to income, for example, while others claim that health is most important. Economists and other social scientists recognize that tastes or preferences are individualistic and diverse, and following this logic to the extreme, one might argue that all interpersonal comparisons are invalid. On the other hand, there are general tendencies in preferences. Every class that I have taught has emphasized the importance of income and health, and for this reason I discuss historical evidence on these measures.

Material Aspects of the Standard of Living

Gross Domestic Product

The most widely used measure of the material standard of living is Gross Domestic Product (GDP) per capita, adjusted for changes in the price level (inflation or deflation). This measure, real GDP per capita, reflects only economic activities that flow through markets, omitting productive endeavors unrecorded in market exchanges, such as preparing meals at home or maintenance done by the homeowner. It ignores the work effort required to produce income and does not consider conditions surrounding the work environment, which might affect health and safety. Crime, pollution, and congestion, which many people consider important to their quality of life, are also excluded from GDP. Moreover, technological change, relative prices and tastes affect the course of GDP and the products and services that it includes, which creates what economists call an “index number” problem that is not readily solvable. Nevertheless most economists believe that real GDP per capita does summarize or otherwise quantify important aspects of the average availability of goods and services.

Time Trends in Real GDP per Capita

Table 1 shows the course of the material standard of living in the United States from 1820 to 1998. Over this period of 178 years real GDP per capita increased 21.7 fold, or an average of 1.73 percent per year. Although the evidence available to estimate GDP directly is meager, this rate of increase was probably many times higher than experienced during the colonial period. This conclusion is justified by considering the implications of extrapolating the level observed in 1820 ($1,257) backward in time at the growth rate measured since 1820 (1.73 percent). Under this supposition, real per capita GDP would have doubled every forty years (halved every forty years going backward in time) and so by the mid 1700s there would have been insufficient income to support life. Because the cheapest diet able to sustain good health would have cost nearly $500 per year, the tentative assumption of modern economic growth contradicts what actually happened. Moreover, historical evidence suggests that important ingredients of modern economic growth, such as technological change and human and physical capital, accumulated relatively slowly during the colonial period.

Table 1: GDP per Capita in the United States

Year GDP per capita (a) Annual growth rate from previous period (%)
1820 1,257
1870 2,445 1.34
1913 5,301 1.82
1950 9,561 1.61
1973 16,689 2.45
1990 23,214 1.94
1998 27,331 2.04

a. Measured in 1990 international dollars.

Source: Maddison (2001), Tables A-1c and A-1d.
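
As a rough cross-check on the arithmetic described above, the following minimal Python sketch uses the values in Table 1; the function name, rounding choices, and the subsistence comparison are illustrative rather than taken from the source.

```python
# Illustrative sketch: compound-growth arithmetic behind Table 1
# (GDP per capita in 1990 international dollars, copied from the table).

gdp_per_capita = {1820: 1257, 1870: 2445, 1913: 5301, 1950: 9561,
                  1973: 16689, 1990: 23214, 1998: 27331}

def annual_growth_rate(y0, y1):
    """Average annual compound growth rate between years y0 and y1, in percent."""
    v0, v1 = gdp_per_capita[y0], gdp_per_capita[y1]
    return 100 * ((v1 / v0) ** (1 / (y1 - y0)) - 1)

print(round(gdp_per_capita[1998] / gdp_per_capita[1820], 1))  # ~21.7-fold increase, 1820-1998
print(round(annual_growth_rate(1820, 1998), 2))               # ~1.74 percent per year (the text reports 1.73)
print(round(annual_growth_rate(1870, 1913), 2))               # ~1.82, matching the table

# Backward extrapolation: at roughly 1.73 percent per year, income halves about
# every forty years going back in time, dropping below a ~$500 annual diet
# before the mid-1700s.
for year in (1780, 1760, 1740):
    print(year, round(1257 / 1.0173 ** (1820 - year)))        # ~633, ~449, ~319
```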

Cycles in Real GDP per Capita

Although real GDP per capita is given for only seven dates in Table 1, it is apparent that economic progress has been uneven over time. If annual or quarterly data were given, they would show that business cycles have been a major feature of the economic landscape since industrialization began in the 1820s. By far the worst downturn in U.S. history occurred during the Great Depression of the 1930s, when real per capita GDP declined by approximately one-third and the unemployment rate reached 25 percent.

Regional Differences

The aggregate numbers also disguise regional differences in the standard of living. In 1840 personal income per capita was twice as high in the Northeast as in the North Central States. Regional divergence increased after the Civil War when the South Atlantic became the nation’s poorest region, attaining a level only one-third of that in the Northeast. Regional convergence occurred in the twentieth century and industrialization in the South significantly improved the region’s economic standing after World War II.

Health and the Standard of Living

Life Expectancy

Two measures of health are widely used in economic history: life expectancy at birth (or average length of life) and average height, which measures nutritional conditions during the growing years. Table 2 shows that life expectancy approximately doubled over the past century and a half, reaching 76.7 years in 1998. If depressions and recessions have adversely affected the material standard of living, epidemics have been a major cause of sudden declines in health in the past. Fluctuations during the nineteenth century are evident from the table, but as a rule growth rates in health have been considerably less volatile than those for GDP, particularly during the twentieth century.

Table 2: Life Expectancy at Birth in the United States

Year Life Expectancy
1850 38.3
1860 41.8
1870 44.0
1880 39.4
1890 45.2
1900 47.8
1910 53.1
1920 54.1
1930 59.7
1940 62.9
1950 68.2
1960 69.7
1970 70.8
1980 73.7
1990 75.4
1998 76.7

Source: Haines (2002)

Childhood mortality greatly affects life expectancy, which was low in the mid-1800s largely because mortality rates were very high for this age group. For example, roughly one child in five born alive in 1850 did not survive to age one, but today the infant mortality rate is under one percent. The past century and a half witnessed a significant shift in deaths from early childhood to old age. At the same time, the major causes of death have shifted from infectious diseases originating with germs or microorganisms to degenerative processes that are affected by life-style choices such as diet, smoking and exercise.

The largest gains were concentrated in the first half of the twentieth century, when life expectancy increased from 47.8 years in 1900 to 68.2 years in 1950. Factors behind the growing longevity include the ascent of the germ theory of disease, programs of public health and personal hygiene, better medical technology, higher incomes, better diets, more education, and the emergence of health insurance.

Explanations of Increases in Life Expectancy

Numerous important medical developments contributed to improving health. The research of Pasteur and Koch was particularly influential in leading to acceptance of the germ theory in the late 1800s. Prior to their work, many diseases were thought to have arisen from miasmas or vapors created by rotting vegetation. Thus, swamps were accurately viewed as unhealthy, but not because they were home to mosquitoes and malaria. The germ theory gave public health measures a sound scientific basis, and shortly thereafter cities began cost-effective measures to remove garbage, purify water supplies, and process sewage. The notion that “cleanliness is next to Godliness” also emerged in the home, where bathing and the washing of clothes, dishes, and floors became routine.

The discovery of Salvarsan in 1910 provided the first antimicrobial drug (used against syphilis) that was effective in altering the course of a disease. This was an important medical event, but broad-spectrum antibiotics were not available until the middle of the century. The most famous of these early drugs was penicillin, which was not manufactured in large quantities until the 1940s. Much of the gain in life expectancy was attained before chemotherapy and a host of other medical technologies were widely available. A cornerstone of improving health from the late 1800s to the middle of the twentieth century was therefore prevention of disease by reducing exposure to pathogens. Also important were improvements in immune systems created by better diets and by vaccination against diseases such as smallpox and diphtheria.

Heights

In the past quarter century, historians have increasingly used average heights to assess health aspects of the standard of living. Average height is a good proxy for the nutritional status of a population because height at a particular age reflects an individual’s history of net nutrition, or diet minus claims on the diet made by work (or physical activity) and disease. The growth of poorly nourished children may cease, and repeated bouts of biological stress — whether from food deprivation, hard work, or disease — often lead to stunting, or a reduction in adult height. The average heights of children and of adults in countries around the world are highly correlated with their life expectancy at birth and with the log of per capita GDP in the country where they live.

This interpretation of average height has led to its use in studying the health of slaves, health inequality, living standards during industrialization, and trends in mortality. The first important results in the “new anthropometric history” dealt with the nutrition and health of American slaves as determined from stature recorded for identification purposes on slave manifests required in the coastwise slave trade. The subject of slave health has been a contentious issue among historians, in part because vital statistics and nutrition information were never systematically collected for slaves (or for the vast majority of the American population in the mid-nineteenth century, for that matter). Yet the height data showed that slave children were astonishingly small and malnourished, while working slaves were remarkably well fed. Adolescent slaves grew rapidly as teenagers and were reasonably well off in nutritional aspects of health.

Time Trends in Average Height

Table 3 shows the time pattern in height of native-born American men obtained in historical periods from military muster rolls, and for men and women in recent decades from the National Health and Nutrition Examination Surveys. This historical trend is notable for the tall stature during the colonial period, the mid-nineteenth century decline, and the surge in heights of the past century. Comparisons of average heights from military organizations in Europe show that Americans were taller by two to three inches. Behind this achievement were a relatively good diet, little exposure to epidemic disease, and relative equality in the distribution of wealth. Americans could choose their foods from the best of European and Western Hemisphere plants and animals, and this dietary diversity combined with favorable weather meant that Americans never had to contend with harvest failures. Thus, even the poor were reasonably well fed in colonial America.

Table 3: Average Height of Native-Born American Men and Women by Year of Birth

Year Men (cm) Women (cm) Men (inches) Women (inches)
1710 171.5 n.a. 67.5 n.a.
1720 171.8 n.a. 67.6 n.a.
1730 172.1 n.a. 67.8 n.a.
1740 172.1 n.a. 67.8 n.a.
1750 172.2 n.a. 67.8 n.a.
1760 172.3 n.a. 67.8 n.a.
1770 172.8 n.a. 68.0 n.a.
1780 173.2 n.a. 68.2 n.a.
1790 172.9 n.a. 68.1 n.a.
1800 172.9 n.a. 68.1 n.a.
1810 173.0 n.a. 68.1 n.a.
1820 172.9 n.a. 68.1 n.a.
1830 173.5 n.a. 68.3 n.a.
1840 172.2 n.a. 67.8 n.a.
1850 171.1 n.a. 67.4 n.a.
1860 170.6 n.a. 67.2 n.a.
1870 171.2 n.a. 67.4 n.a.
1880 169.5 n.a. 66.7 n.a.
1890 169.1 n.a. 66.6 n.a.
1900 170.0 n.a. 66.9 n.a.
1910 172.1 n.a. 67.8 n.a.
1920 173.1 n.a. 68.1 n.a.
1930 175.8 162.6 69.2 64.0
1940 176.7 163.1 69.6 64.2
1950 177.3 163.1 69.8 64.2
1960 177.9 164.2 70.0 64.6
1970 177.4 163.6 69.8 64.4

n.a.: not available (heights of women are reported only for birth years 1930 and later).

Source: Steckel (2002) and sources therein.

Explaining Height Cycles

Loss of stature began in the second quarter of the nineteenth century when the transportation revolution of canals, steamboats and railways brought people into greater contact with diseases. The rise of public schools meant that children were newly exposed to major diseases such as whooping cough, diphtheria, and scarlet fever. Food prices also rose during the 1830s, and growing inequality in the distribution of income or wealth accompanied industrialization. Business depressions, which were most hazardous for the health of those who were already poor, also emerged with industrialization. The Civil War of the 1860s and its troop movements further spread disease and disrupted food production and distribution. A large volume of immigration also brought new varieties of disease to the United States at a time when urbanization brought a growing proportion of the population into closer contact with contagious diseases. Estimates of life expectancy among adults at ages 20, 30 and 50, which were assembled from family histories, also declined in the middle of the nineteenth century.

Rapid Increases in Heights in the First Half of the Twentieth Century

In the twentieth century, heights grew most rapidly for those born between 1910 and 1950, an era when public health and personal hygiene measures took vigorous hold, incomes rose rapidly, and congestion in housing declined. The latter part of the era also witnessed a larger share of income or wealth going to the lower portion of the distribution, implying that the incomes of the less well-off were rising relatively rapidly. Note that most of the rise in heights occurred before modern antibiotics were available, which means that disease prevention, rather than the ability to alter a disease’s course after onset, was the most important basis of improving health. The growing control that humans have exercised over their environment, particularly increased food supply and reduced exposure to disease, may be leading to biological (but not genetic) evolution of humans with more durable vital organ systems, larger body size, and later onset of chronic diseases.

Recent Stagnation

Between the middle of the twentieth century and the present, however, the average heights of American men have stagnated, increasing by only a small fraction of an inch over the past half century. Table 3 refers to the native born, so recent increases in immigration cannot account for the stagnation. In the absence of other information, one might be tempted to suppose that environmental conditions for growth are so good that most Americans have simply reached their genetic potential for growth. Yet heights and life expectancy have continued to grow in Europe, which shares the genetic stock from which most Americans descend. By the 1970s several American health indicators had fallen behind those in Norway, Sweden, the Netherlands, and Denmark. While American heights were essentially flat after the 1970s, heights continued to grow significantly in Europe. Dutch men are now the tallest, averaging six feet, about two inches more than American men. Lagging heights raise questions about the adequacy of health care and life-style choices in America. As discussed below, it is doubtful that a lack of resources committed to health care is the problem, because America invests far more in health care than the Netherlands. Greater inequality and less access to health care could be important factors in the difference. But access to health care, whether limited by low income or lack of insurance coverage, is not the only issue — health insurance coverage must be used regularly and wisely. In this regard, Dutch mothers are known for regular pre- and post-natal checkups, which are important for early childhood health.

Note that significant differences in health and the quality of life follow from these height patterns. The comparisons are not part of an odd contest that emphasizes height, nor is big per se assumed to be beautiful. Instead, we know that on average, stunted growth has functional implications for longevity, cognitive development, and work capacity. Children who fail to grow adequately are often sick, suffer learning impairments and have a lower quality of life. Growth failure in childhood has a long reach into adulthood because individuals whose growth has been stunted are at greater risk of death from heart disease, diabetes, and some types of cancer. Therefore it is important to know why Americans are falling behind.

International Comparisons

Per capita GDP

Table 4 places American economic performance in perspective relative to other countries. In 1820 the United States was fifth in world rankings, falling roughly thirty percent below the leaders (United Kingdom and the Netherlands), but still two-to-three times better off than the poorest sections of the globe. It is notable that in 1820 the richest country (the Netherlands at $1,821) was approximately 4.4 times better off than the poorest (Africa at $418) but by 1950 the ratio of richest-to-poorest had widened to 21.8 ($9,561 in the United States versus $439 in China), which is roughly the level it is today (in 1998, it was $27,331 in the United States versus $1,368 in Africa). These calculations understate the growing disparity in the material standard of living because several African countries today fall significantly below the average, whereas it is unlikely that they did so in 1820 because GDP for the continent as a whole was close to the level of subsistence.

Table 4: GDP per Capita by Country and Year (1990 International $)

Country 1820 1870 1913 1950 1973 1998 Ratio 1998 to 1820
Austria 1,218 1,863 3,465 3,706 11,235 18,905 15.5
Belgium 1,319 2,697 4,220 5,462 12,170 19,442 14.7
Denmark 1,274 2,003 3,912 6,946 13,945 22,123 17.4
Finland 781 1,140 2,111 4,253 11,085 18,324 23.5
France 1,230 1,876 3,485 5,270 13,123 19,558 15.9
Germany 1,058 1,821 3,648 3,881 11,966 17,799 16.8
Italy 1,117 1,499 2,564 3,502 10,643 17,759 15.9
Netherlands 1,821 2,753 4,049 5,996 13,082 20,224 11.1
Norway 1,104 1,432 2,501 5,463 11,246 23,660 21.4
Sweden 1,198 1,664 3,096 6,738 13,493 18,685 15.6
Switzerland 1,280 2,202 4,266 9,064 18,204 21,367 16.7
United Kingdom 1,707 3,191 4,921 6,907 12,022 18,714 11.0
Portugal 963 997 1,244 2,069 7,343 12,929 13.4
Spain 1,063 1,376 2,255 2,397 8,739 14,227 13.4
United States 1,257 2,445 5,301 9,561 16,689 27,331 21.7
Mexico 759 674 1,732 2,365 4,845 6,655 8.8
Japan 669 737 1,387 1,926 11,439 20,413 30.5
China 600 530 552 439 839 3,117 5.2
India 533 533 673 619 853 1,746 3.3
Africa 418 444 585 852 1,365 1,368 3.3
World 667 867 1,510 2,114 4,104 5,709 8.6
Ratio of richest to poorest 4.4 7.2 8.9 20.6 21.7 20.0

Source: Maddison (2001), Table B-21.
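
The derived figures in Table 4 (the final column and the bottom row) follow directly from the country entries. A minimal Python sketch, using a few values copied from the table and otherwise illustrative names, shows the arithmetic.

```python
# Illustrative sketch: derived ratios from Table 4
# (GDP per capita in 1990 international dollars, copied from the table).
# Only a subset of countries is included; it happens to contain the richest
# and poorest entries for the two years shown.

gdp_1820 = {"United States": 1257, "Netherlands": 1821, "Japan": 669, "Africa": 418}
gdp_1998 = {"United States": 27331, "Netherlands": 20224, "Japan": 20413, "Africa": 1368}

# Final column: ratio of 1998 income to 1820 income for each country.
for country in gdp_1820:
    print(country, round(gdp_1998[country] / gdp_1820[country], 1))
# United States ~21.7, Netherlands ~11.1, Japan ~30.5, Africa ~3.3

# Bottom row: ratio of the richest to the poorest entry in a given year.
print(round(max(gdp_1820.values()) / min(gdp_1820.values()), 1))   # 1820: ~4.4
print(round(max(gdp_1998.values()) / min(gdp_1998.values()), 1))   # 1998: ~20.0
```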

It is clear that the poorer countries are better off today than they were in 1820 (3.3-fold increases in both Africa and India). But the countries that are now rich grew at a much faster rate. The last column of Table 4 shows that Japan realized the most spectacular gain, climbing from approximately the world average in 1820 to fifth richest today, with a more than thirty-fold increase in real per capita GDP. All countries that are rich today had rapid increases in their material standard of living, realizing more than ten-fold increases since 1820. The underlying reasons for this diversity of economic success are a central question in the field of economic history.

Life Expectancy

Table 5 shows that disparities in life expectancy have been much less than those in per capita GDP. In 1820 all countries were bunched in the range of 21 to 41 years, with Germany at the top and India at the bottom, giving a ratio of less than 2 to 1. It is doubtful that any country or region has had a life expectancy below 20 years for long periods of time because death rates would have exceeded any plausible upper limit for birth rates, leading to population implosion. The twentieth century witnessed a compression in life expectancies across countries, with the ratio of levels in 1999 being 1.56 (81 in Japan versus 52 in Africa). Japan has also been a spectacular performer in health, increasing life expectancy from 34 years in 1820 to 81 years in 1999. Among poor unhealthy countries, health aspects of the standard of living have improved more rapidly than the material standard of living relative to the world average. Because many public health measures are cheap and effective, it has been easier to extend life than it has been to promote material prosperity, which has numerous complicated causes.

Table 5: Life Expectancy at Birth by Country and Year

Country 1820 1900 1950 1999
France 37 47 65 78
Germany 41 47 67 77
Italy 30 43 66 78
Netherlands 32 52 72 78
Spain 28 35 62 78
Sweden 39 56 70 79
United Kingdom 40 50 69 77
United States 39 47 68 77
Japan 34 44 61 81
Russia 28 32 65 67
Brazil 27 36 45 67
Mexico n.a. 33 50 72
China n.a. 24 41 71
India 21 24 32 60
Africa 23 24 38 52
World 26 31 49 66

n.a.: not available.

Source: Maddison (2001), Table 1-5a.

Height Comparisons

Figure 1 compares stature in the United States and the United Kingdom. Americans were very tall by global standards in the early nineteenth century as a result of their rich and varied diets, low population density, and relative equality of wealth. Unlike other countries that have been studied (France, the Netherlands, Sweden, Germany, Japan and Australia), both the U.S. and the U.K. suffered significant height declines during industrialization (as defined primarily by the achievement of modern economic growth) in the nineteenth century. (Note, however, that the amount and timing of the height decline in the U.K. have been the subject of a lively debate in the Economic History Review involving Roderick Floud, Kenneth Wachter and John Komlos; only the Floud-Wachter figures are given here.)

Source: Steckel (2002, Figure 12) and Floud, Wachter and Gregory (1990, table 4.8).

One may speculate that the timing of the declines shown in Figure 1 is more coincidental than emblematic of similar causal factors operating across the two countries. While it is possible that growing trade and commerce spread disease, as in the United States, it is more likely that a major culprit in the U.K. was rapid urbanization and the associated increase in exposure to disease. This conclusion is reached by noting that urban-born men were substantially shorter than the rural-born, and that between the periods 1800-1830 and 1830-1870 the share of the British population living in urban areas leaped from 38.7 to 54.1 percent.

References

Easterlin, Richard A. “Regional Income Trends, 1840-1950.” In The Reinterpretation of American Economic History, edited by Robert William Fogel and Stanley L. Engerman. New York: Harper and Row, 1971.

Engerman, Stanley L. “The Standard of Living Debate in International Perspective: Measures and Indicators.” In Health and Welfare during Industrialization, edited by Richard H. Steckel and Roderick Floud. Chicago: University of Chicago Press, 1997.

Floud, Roderick, Kenneth W. Wachter and Annabel S. Gregory. Height, Health, and History: Nutritional Status in the United Kingdom, 1750-1980. Cambridge: Cambridge University Press, 1990.

Haines, Michael. “Vital Statistics.” In Historical Statistics of the United States: Millennial Edition, edited by Susan Carter, Scott Gartner, Michael Haines, Alan Olmstead, Richard Sutch, and Gavin Wright. New York: Cambridge University Press, forthcoming, 2002.

Komlos, John. “Shrinking in a Growing Economy? The Mystery of Physical Stature during the Industrial Revolution.” Journal of Economic History 58, no. 3 (1998): 779-802.

Maddison, Angus. The World Economy: A Millennial Perspective. Paris: OECD, 2001.

Meeker, Edward. “Medicine and Public Health.” In Encyclopedia of American Economic History, edited by Glenn Porter. New York: Scribner, 1980.

Pope, Clayne L. “Adult Mortality in America before 1900: A View from Family Histories.” In Strategic Factors in Nineteenth-Century American Economic History: A Volume to Honor Robert W. Fogel, edited by Claudia Goldin and Hugh Rockoff. Chicago: University of Chicago Press, 1992.

Porter, Roy, editor. The Cambridge Illustrated History of Medicine. Cambridge: Cambridge University Press, 1996.

Steckel, Richard H. “Health, Nutrition and Physical Well-Being.” In Historical Statistics of the United States: Millennial Edition, edited by Susan Carter, Scott Gartner, Michael Haines, Alan Olmstead, Richard Sutch, and Gavin Wright. New York: Cambridge University Press, forthcoming, 2002.

Steckel, Richard H. “Industrialization and Health in Historical Perspective.” In Poverty, Inequality and Health, edited by David Leon and Gill Walt. Oxford: Oxford University Press, 2000.

Steckel, Richard H. “Strategic Ideas in the Rise of the New Anthropometric History and Their Implications for Interdisciplinary Research.” Journal of Economic History 58, no. 3 (1998): 803-21.

Steckel, Richard H. “Stature and the Standard of Living.” Journal of Economic Literature 33, no. 4 (1995): 1903-1940.

Steckel, Richard H. “A Peculiar Population: The Nutrition, Health, and Mortality of American Slaves from Childhood to Maturity.” Journal of Economic History 46, no. 3 (1986): 721-41.

Steckel, Richard H. and Roderick Floud, editors. Health and Welfare during Industrialization. Chicago: University of Chicago Press, 1997.

Citation: Steckel, Richard. “A History of the Standard of Living in the United States”. EH.Net Encyclopedia, edited by Robert Whaples. July 21, 2002. URL http://eh.net/encyclopedia/a-history-of-the-standard-of-living-in-the-united-states/

Reconstruction Finance Corporation

James Butkiewicz, University of Delaware

Introduction

The Reconstruction Finance Corporation (RFC) was established during the Hoover administration with the primary objective of providing liquidity to, and restoring confidence in, the banking system. The banking system experienced extensive pressure during the economic contraction of 1929-1933. During the contraction period, many banks had to suspend business operations and most of these ultimately failed. A number of these suspensions occurred during banking panics, when large numbers of depositors rushed to convert their deposits to cash from fear their bank might fail. Since this period was prior to the establishment of federal deposit insurance, bank depositors lost part or all of their deposits when their bank failed.

During its first thirteen months of operation, the RFC’s primary activity was to make loans to banks and financial institutions. During President Roosevelt’s New Deal, the RFC’s powers were expanded significantly. At various times, the RFC purchased bank preferred stock; made loans to assist agriculture, housing, exports, business, and governments, as well as for disaster relief; and even purchased gold at the President’s direction in order to change the market price of gold. The scope of RFC activities was expanded further immediately before and during World War II. The RFC established or purchased, and funded, eight corporations that made important contributions to the war effort. After the war, the RFC’s activities were limited primarily to making loans to business. RFC lending ended in 1953, and the corporation ceased operations in 1957, when all remaining assets were transferred to other government agencies.

The Genesis of the Reconstruction Finance Corporation

The difficulties experienced by the American banking system were one of the defining characteristics of the Great Contraction of 1929-1933. During this period, the American banking system comprised a very large number of banks. At the end of December 1929, there were 24,633 banks in the United States. The vast majority of these banks were small, serving small towns and rural communities. These small banks were particularly susceptible to local economic difficulties, which could result in failure of the bank.

The Federal Reserve and Small Banks

The Federal Reserve System was created in 1913 to address the problem of periodic banking crises. The Fed had the ability to act as a lender of last resort, providing funds to banks during crises. While nationally chartered banks were required to join the Fed, state-chartered banks could join the Fed at their discretion. Most state-chartered banks chose not to join the Federal Reserve System. The majority of the small banks in rural communities were not Fed members. Thus, during crises, these banks were unable to seek assistance from the Fed, and the Fed felt no obligation to engage in a general expansion of credit to assist nonmember banks.

How Banking Panics Develop

At this time there was no federal deposit insurance system, so bank customers generally lost part or all of their deposits when their bank failed. Fear of failure sometimes caused people to panic. In a panic, bank customers attempt to immediately withdraw their funds. While banks hold enough cash for normal operations, they use most of their deposited funds to make loans and purchase interest-earning assets. In a panic, banks are forced to attempt to rapidly convert these assets to cash. Frequently, they are forced to sell assets at a loss to obtain cash quickly, or may be unable to sell assets at all. As losses accumulate, or cash reserves dwindle, a bank becomes unable to pay all depositors, and must suspend operations. During this period, most banks that suspended operations declared bankruptcy. Bank suspensions and failures may incite panic in adjacent communities or regions. This spread of panic, or contagion, can result in a large number of bank failures. Not only do customers lose some or all of their deposits, but also people become wary of banks in general. A widespread withdrawal of bank deposits reduces the amount of money and credit in society. This monetary contraction can contribute to a recession or depression.

Bank failures were a common event throughout the 1920s. In any year, it was normal for several hundred banks to fail. In 1930, the number of failures increased substantially. Failures and contagious panics occurred repeatedly during the contraction years. President Hoover recognized that the banking system required assistance. However, the President also believed that this assistance, like charity, should come from the private sector rather than the government, if at all possible.

The National Credit Corporation

To this end, Hoover encouraged a number of major banks to form the National Credit Corporation (NCC), to lend money to other banks experiencing difficulties. The NCC was announced on October 13, 1931, and began operations on November 11, 1931. However, the banks in the NCC were not enthusiastic about this endeavor, and made loans very reluctantly, requiring that borrowing banks pledge their best assets as collateral, or security for the loan. Hoover quickly recognized that the NCC would not provide the necessary relief to the troubled banking system.

RFC Approved, January 1932

Eugene Meyer, Governor of the Federal Reserve Board, convinced the President that a public agency was needed to make loans to troubled banks. On December 7, 1931, a bill was introduced to establish the Reconstruction Finance Corporation. The legislation was approved on January 22, 1932, and the RFC opened for business on February 2, 1932.

The original legislation authorized the RFC’s existence for a ten-year period. However, Presidential approval was required to operate beyond January 1, 1933, and Congressional approval was required for lending authority to continue beyond January 1, 1934. Subsequent legislation extended the life of the RFC and added many additional responsibilities and authorities.

The RFC was funded through the United States Treasury. The Treasury provided $500 million of capital to the RFC, and the RFC was authorized to borrow an additional $1.5 billion from the Treasury. The Treasury, in turn, sold bonds to the public to fund the RFC. Over time, this borrowing authority was increased manyfold. Subsequently, the RFC was authorized to sell securities directly to the public to obtain funds. However, most RFC funding was obtained by borrowing from the Treasury. During its years of existence, the RFC borrowed $51.3 billion from the Treasury, and $3.1 billion from the public.

The RFC During the Hoover Administration

RFC Authorized to Lend to Banks and Others

The original legislation authorized the RFC to make loans to banks and other financial institutions, to railroads, and for crop loans. While the original objective of the RFC was to help banks, railroads were assisted because many banks owned railroad bonds, which had declined in value, because the railroads themselves had suffered from a decline in their business. If railroads recovered, their bonds would increase in value. This increase, or appreciation, of bond prices would improve the financial condition of banks holding these bonds.

Through legislation approved on July 21, 1932, the RFC was authorized to make loans for self-liquidating public works projects, and to states to provide relief and work relief to needy and unemployed people. This legislation also required that the RFC report to Congress, on a monthly basis, the identity of all new borrowers of RFC funds.

RFC Undercut by Requirement That It Publish Names of Banks Receiving Loans

From its inception through Franklin Roosevelt’s inauguration on March 4, 1933, the RFC primarily made loans to financial institutions. During the first months following the establishment of the RFC, bank failures and currency holdings outside of banks both declined. However, several loans aroused political and public controversy, which was the reason the July 21, 1932 legislation included the provision that the identity of banks receiving RFC loans from that date forward be reported to Congress. The Speaker of the House of Representatives, John Nance Garner, ordered that the identity of the borrowing banks be made public. The publication of the identity of banks receiving RFC loans, which began in August 1932, reduced the effectiveness of RFC lending. Bankers became reluctant to borrow from the RFC, fearing that public revelation of an RFC loan would cause depositors to fear the bank was in danger of failing, and possibly start a panic. Legislation passed in January 1933 required that the RFC publish a list of all loans made from its inception through July 21, 1932, the effective date for the publication of new loan recipients.

RFC, Politics and Bank Failure in February and March 1933

In mid-February 1933, banking difficulties developed in Detroit, Michigan. The RFC was willing to make a loan to the troubled bank, the Union Guardian Trust, to avoid a crisis. The bank was one of Henry Ford’s banks, and Ford had deposits of $7 million in this particular bank. Michigan Senator James Couzens demanded that Henry Ford subordinate his deposits in the troubled bank as a condition of the loan. If Ford agreed, he would risk losing all of his deposits before any other depositor lost a penny. Ford and Couzens had once been partners in the automotive business, but had become bitter rivals. Ford refused to agree to Couzens’ demand, even though failure to save the bank might start a panic in Detroit. When the negotiations failed, the governor of Michigan declared a statewide bank holiday. In spite of the RFC’s willingness to assist the Union Guardian Trust, the crisis could not be averted.

The crisis in Michigan resulted in a spread of panic, first to adjacent states, but ultimately throughout the nation. By the day of Roosevelt’s inauguration, March 4, all states had declared bank holidays or had restricted the withdrawal of bank deposits for cash. As one of his first acts as president, on March 5 President Roosevelt announced to the nation that he was declaring a nationwide bank holiday. Almost all financial institutions in the nation were closed for business during the following week. The RFC lending program failed to prevent the worst financial crisis in American history.

Criticisms of the RFC

The effectiveness of RFC lending to March 1933 was limited in several respects. The RFC required banks to pledge assets as collateral for RFC loans. A criticism of the RFC was that it often took a bank’s best loan assets as collateral. Thus, the liquidity provided came at a steep price to banks. Also, the publicity given to new loan recipients beginning in August 1932 and the general controversy surrounding RFC lending probably discouraged banks from borrowing. In September and November 1932, the amount of outstanding RFC loans to banks and trust companies decreased, as repayments exceeded new lending.

The RFC in the New Deal

FDR Sees Advantages in Using the RFC

President Roosevelt inherited the RFC. He and his colleagues, as well as Congress, found the independence and flexibility of the RFC to be particularly useful. The RFC was an executive agency with the ability to obtain funding through the Treasury outside of the normal legislative process. Thus, the RFC could be used to finance a variety of favored projects and programs without obtaining legislative approval. RFC lending did not count toward budgetary expenditures, so the expansion of the role and influence of the government through the RFC was not reflected in the federal budget.

RFC Given the Authority to Buy Bank Stock

The first task was to stabilize the banking system. On March 9, 1933, the Emergency Banking Act was approved as law. This legislation and a subsequent amendment improved the RFC’s ability to assist banks by giving it the authority to purchase bank preferred stock, capital notes and debentures (bonds), and to make loans using bank preferred stock as collateral. While banks were initially reluctant, the RFC encouraged banks to issue preferred stock for it to purchase. This provision of capital funds to banks strengthened the financial position of many banks. Banks could use the new capital funds to expand their lending, and did not have to pledge their best assets as collateral. The RFC purchased $782 million of bank preferred stock from 4,202 individual banks, and $343 million of capital notes and debentures from 2,910 individual banks and trust companies. In sum, the RFC assisted almost 6,800 banks. Most of these purchases occurred in the years 1933 through 1935.

The preferred stock purchase program did have controversial aspects. RFC officials at times exercised their authority as shareholders to reduce the salaries of senior bank officers, and on occasion insisted upon a change of bank management. However, the infusion of new capital into the banking system, and the establishment of the Federal Deposit Insurance Corporation to insure bank depositors against loss, stabilized the financial system. In the years following 1933, bank failures declined to very low levels.

RFC’s Assistance to Farmers

Throughout the New Deal years, the RFC’s assistance to farmers was second only to its assistance to bankers. RFC lending to agricultural financing institutions totaled $2.5 billion. Over half, $1.6 billion, went to its subsidiary, the Commodity Credit Corporation. The Commodity Credit Corporation was incorporated in Delaware in 1933, and operated by the RFC for six years. In 1939, control of the Commodity Credit Corporation was transferred to the Department of Agriculture, where it remains today.

Commodity Credit Corporation

The agricultural sector was hit particularly hard by depression, drought, and the introduction of the tractor, displacing many small and tenant farmers. The primary New Deal program for farmers was the Agricultural Adjustment Act. Its objective was to reverse the decline of product prices and farm incomes experienced since 1920. The Commodity Credit Corporation contributed to this objective by purchasing selected agricultural products at guaranteed prices, typically above the prevailing market price. Thus, the CCC purchases established a guaranteed minimum price for these farm products.

The RFC also funded the Electric Home and Farm Authority, a program designed to enable low- and moderate-income households to purchase gas and electric appliances. This program would create demand for electricity in rural areas, such as the area served by the new Tennessee Valley Authority. Providing electricity to rural areas was the objective of the Rural Electrification Program.

Decline in Bank Lending Concerns RFC and New Deal Officials

After 1933, bank assets and bank deposits both increased. However, banks changed their asset allocation dramatically during the recovery years. Prior to the depression, banks primarily made loans, and purchased some securities, such as U.S. Treasury securities. During the recovery years, banks primarily purchased securities, which involved less risk. Whether due to concerns over safety, or because potential borrowers had weakened financial positions due to the depression, bank lending did not recover, as indicated by the data in Table 1.

The relative decline in bank lending was a major concern for RFC officials and the New Dealers, who felt that lack of lending by banks was hindering economic recovery. The sentiment within the Roosevelt administration was that the problem was banks’ unwillingness to lend. They viewed the lending by the Commodity Credit Corporation and the Electric Home and Farm Authority, as well as reports from members of Congress, as evidence that there was unsatisfied business loan demand.

TABLE 1
Year Bank Loans and Investments ($ millions) Bank Loans ($ millions) Bank Net Deposits ($ millions) Loans as a Percentage of Loans and Investments Loans as a Percentage of Net Deposits
1921 39895 28927 30129 73% 96%
1922 39837 27627 31803 69% 87%
1923 43613 30272 34359 69% 88%
1924 45067 31409 36660 70% 86%
1925 48709 33729 40349 69% 84%
1926 51474 36035 42114 70% 86%
1927 53645 37208 43489 69% 86%
1928 57683 39507 44911 68% 88%
1929 58899 41581 45058 71% 92%
1930 58556 40497 45586 69% 89%
1931 55267 35285 41841 64% 84%
1932 46310 27888 32166 60% 87%
1933 40305 22243 28468 55% 78%
1934 42552 21306 32184 50% 66%
1935 44347 20213 35662 46% 57%
1936 48412 20636 41027 43% 50%
1937 49565 22410 42765 45% 52%
1938 47212 20982 41752 44% 50%
1939 49616 21320 45557 43% 47%
1940 51336 22340 49951 44% 45%

Source: Banking and Monetary Statistics, 1914-1941.
Net Deposits are total deposits less interbank deposits.
All data are for the last business day of June in each year.

RFC Provides Credit to Business

Due to the failure of bank lending to return to pre-Depression levels, the role of the RFC expanded to include the provision of credit to business. RFC support was deemed as essential for the success of the National Recovery Administration, the New Deal program designed to promote industrial recovery. To support the NRA, legislation passed in 1934 authorized the RFC and the Federal Reserve System to make working capital loans to businesses. However, direct lending to businesses did not become an important RFC activity until 1938, when President Roosevelt encouraged expanding business lending in response to the recession of 1937-38.

RFC Mortgage Company

During the depression, many families and individuals were unable to make their mortgage payments, and had their homes repossessed. Another New Deal goal was to provide more funding for mortgages, to avoid the displacement of homeowners. In June 1934, the National Housing Act provided for the establishment of the Federal Housing Administration (FHA). The FHA would insure mortgage lenders against loss, and FHA mortgages required a smaller percentage down payment than was customary at that time, thus making it easier to purchase a house. In 1935, the RFC Mortgage Company was established to buy and sell FHA-insured mortgages.

RFC and Fannie Mae

Financial institutions were reluctant to purchase FHA mortgages, so in 1938 the President requested that the RFC establish a national mortgage association, the Federal National Mortgage Association, or Fannie Mae. Fannie Mae was originally funded by the RFC to create a market for FHA and later Veterans Administration (VA) mortgages. The RFC Mortgage Company was absorbed by the RFC in 1947. When the RFC was closed, its remaining mortgage assets were transferred to Fannie Mae. Fannie Mae evolved into a private corporation. During its existence, the RFC provided $1.8 billion of loans and capital to its mortgage subsidiaries.

RFC and Export-Import Bank

President Roosevelt sought to encourage trade with the Soviet Union. To promote this trade, the Export-Import Bank was established in 1934. The RFC provided capital, and later loans, to the Ex-Im Bank. Interest in loans to support trade was so strong that a second Ex-Im Bank was created a month after the first to fund trade with other foreign nations. These two banks were merged in 1936, with the authority to make loans to encourage exports in general. The RFC provided $201 million of capital and loans to the Ex-Im Banks.

Other RFC activities during this period included lending to federal relief agencies such as the Public Works Administration and the Works Progress Administration, disaster loans, and loans to state and local governments.

RFC Pushes Up the Price of Gold, Devaluing the Dollar

Evidence of the flexibility afforded through the RFC was President Roosevelt’s use of the agency to affect the market price of gold. The President wanted to reduce the gold value of the dollar, then fixed by the official gold price of $20.67 per ounce. As the dollar price of gold increased, the dollar exchange rate would fall relative to currencies that had a fixed gold price. A fall in the value of the dollar makes exports cheaper and imports more expensive. In an economy with high levels of unemployment, a decline in imports and an increase in exports would increase domestic employment.

The goal of the RFC purchases was to increase the market price of gold. During October 1933 the RFC began purchasing gold at a price of $31.36 per ounce. The price was gradually increased to over $34 per ounce. The RFC price set a floor for the price of gold. In January 1934, the new official dollar price of gold was fixed at $35.00 per ounce, reducing the gold content of the dollar to about 59 percent of its former value, a devaluation of roughly 41 percent.
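
A minimal Python sketch, using only the two official gold prices given above, makes the devaluation arithmetic explicit; the interpretation of the figures as changes in the dollar’s gold content is spelled out in the comments and is an assumption of this illustration.

```python
# Illustrative sketch: the 1934 devaluation arithmetic.
old_price = 20.67   # dollars per ounce of gold before 1934
new_price = 35.00   # official price fixed in January 1934

gold_content_ratio = old_price / new_price     # ~0.59: the dollar retains ~59% of its gold content
devaluation = 1 - gold_content_ratio           # ~0.41: a devaluation of roughly 41 percent
gold_price_rise = new_price / old_price - 1    # ~0.69: the dollar price of gold rose about 69 percent

print(round(gold_content_ratio, 2), round(devaluation, 2), round(gold_price_rise, 2))
# 0.59 0.41 0.69
```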

Twice President Roosevelt instructed Jesse Jones, the chairman of the RFC, to stop lending, as he intended to close the RFC. The first occasion was in October 1937, and the second was in early 1940. The recession of 1937-38 caused Roosevelt to authorize the resumption of RFC lending in early 1938. The German invasion of France and the Low Countries gave the RFC new life on the second occasion.

The RFC in World War II

In 1940 the scope of RFC activities increased significantly, as the United States began preparing to assist its allies, and for possible direct involvement in the war. The RFC’s wartime activities were conducted in cooperation with other government agencies involved in the war effort. For its part, the RFC established seven new corporations, and purchased an existing corporation. The eight RFC wartime subsidiaries are listed in Table 2, below.

Table 2
RFC Wartime Subsidiaries
Metals Reserve Company
Rubber Reserve Company
Defense Plant Corporation
Defense Supplies Corporation
War Damage Corporation
U.S. Commercial Company
Rubber Development Corporation
Petroleum Reserve Corporation (later War Assets Corporation)

Source: Final Report of the Reconstruction Finance Corporation

Development of Materials Cut Off By the War

The RFC subsidiary corporations assisted the war effort as needed. These corporations were involved in funding the development of synthetic rubber, construction and operation of a tin smelter, and establishment of abaca (Manila hemp) plantations in Central America. Both natural rubber and abaca (used to produce rope products) were produced primarily in south Asia, which came under Japanese control. Thus, these programs encouraged the development of alternative sources of supply of these essential materials. Synthetic rubber, which was not produced in the United States prior to the war, quickly became the primary source of rubber in the post-war years.

Other War-Related Activities

Other war-related activities included financing plant conversion and construction for the production of military and essential goods, dealing in and stockpiling strategic materials, purchasing materials to reduce the supply available to enemy nations, administering war damage insurance programs, and financing the construction of oil pipelines from Texas to New Jersey to free tankers for other uses.

During its existence, RFC management made discretionary loans and investments of $38.5 billion, of which $33.3 billion was actually disbursed. Of this total, $20.9 billion was disbursed to the RFC’s wartime subsidiaries. From 1941 through 1945, the RFC authorized over $2 billion of loans and investments each year, with a peak of over $6 billion authorized in 1943. The magnitude of RFC lending had increased substantially during the war. Most lending to wartime subsidiaries ended in 1945, and all such lending ended in 1948.

The Final Years of the RFC, 1946-1953

After the war, RFC lending decreased dramatically. In the postwar years, only in 1949 was over $1 billion authorized. Through 1950, most of this lending was directed toward businesses and mortgages. On September 7, 1950, Fannie Mae was transferred to the Housing and Home Finance Agency. During its last three years, almost all RFC loans were to businesses, including loans authorized under the Defense Production Act.

Eisenhower Terminates the RFC

President Eisenhower was inaugurated in 1953, and shortly thereafter legislation was passed terminating the RFC. The original RFC legislation authorized operations for one year of a possible ten-year existence, giving the President the option of extending its operation for a second year without Congressional approval. The RFC survived much longer, continuing to provide credit for both the New Deal and World War II. Now, the RFC would finally be closed.

Small Business Administration

However, there was concern that the end of RFC business loans would hurt small businesses. Thus, the Small Business Administration (SBA) was created in 1953 to continue the program of lending to small businesses, as well as providing training programs for entrepreneurs. The disaster loan program was also transferred to the SBA.

Through legislation passed on July 30, 1953, RFC lending authority ended on September 28, 1953. The RFC continued to collect on its loans and investments through June 30, 1957, at which time all remaining assets were transferred to other government agencies. At the time the liquidation act was passed, the RFC’s production of synthetic rubber, tin, and abaca remained in operation. Synthetic rubber operations were sold or leased to private industry. The tin and abaca programs were ultimately transferred to the General Services Administration.

Successors of the RFC

Three government agencies and one private corporation that were related to the RFC continue today. The Small Business Administration was established to continue lending to small businesses. The Commodity Credit Corporation continues to provide assistance to farmers. The Export-Import Bank continues to provide loans to promote exports. Fannie Mae became a private corporation in 1968. Today it is the most important source of mortgage funds in the nation, and has become one of the largest corporations in the country. Its stock is traded on the New York Stock Exchange under the symbol FNM.

Economic Analysis of the RFC

Role of a Lender of Last Resort

The American central bank, the Federal Reserve System, was created to be a lender of last resort. A lender of last resort exists to provide liquidity to banks during crises. The famous British central banker, Walter Bagehot, advised, “…in a panic the holders of the ultimate Bank reserve (whether one bank or many) should lend to all that bring good securities quickly, freely, and readily. By that policy they allay a panic…”

However, the Fed was not an effective lender of last resort during the depression years. Many of the banks experiencing problems during the depression years were not members of the Federal Reserve System, and thus could not borrow from the Fed. The Fed was reluctant to assist troubled banks, and banks also feared that borrowing from the Fed might weaken depositors’ confidence.

President Hoover hoped to restore stability and confidence in the banking system by creating the Reconstruction Finance Corporation. The RFC made collateralized loans to banks. Many scholars argue that initially RFC lending did provide relief. These observations are based on the decline in bank suspensions and public currency holdings in the months immediately following the creation of the RFC in February 1932. These data are presented in Table 3.

Table 3
Month (1932) Currency in Millions of Dollars Number of Bank Suspensions
January 4896 342
February 4824 119
March 4743 45
April 4751 74
May 4746 82
June 4959 151
July 5048 132
August 4988 85
September 4941 67
October 4863 102
November 4842 93
December 4830 161

Data sources: Currency – Friedman and Schwartz (1963)
Bank suspensions – Board of Governors (1937)

Bank suspensions occur when banks cannot open for normal business operations due to financial problems. Most bank suspensions ended in failure of the bank. Currency held by the public can be an indicator of public confidence in banks. As confidence declines, members of the public convert deposits to currency, and vice versa.

The banking situation deteriorated in June 1932 when a crisis developed in and around Chicago. Both Friedman and Schwartz (1963) and Jones (1951) assert that an RFC loan to a key bank helped to end the crisis, even though the bank subsequently failed.

The Debate over the Impact of the RFC

Two studies of RFC lending have come to differing conclusions. Butkiewicz (1995) examines the effect of RFC lending on bank suspensions and finds that lending reduced suspensions in the months prior to publication of the identities of loan recipients. He further argues that publication of the identities of banks receiving loans discouraged banks from borrowing. As noted above, RFC loans to banks declined in the two months after publication began. Mason (2001) examines the impact of lending on a sample of Illinois banks and finds that those receiving RFC loans were increasingly likely to fail. Thus, the limited scholarly evidence on the impact of RFC lending is conflicting.

Critics of RFC lending to banks argue that the RFC took the banks’ best assets as collateral, thereby reducing bank liquidity. Also, RFC lending requirements were initially very stringent. After the financial collapse in March 1933, the RFC was authorized to provide banks with capital through preferred stock and bond purchases. This change, along with the creation of the Federal Deposit Insurance System, stabilized the banking system.

Economic and Noneconomic Rationales for an Agency Like the RFC

Beginning in 1933, the RFC became more directly involved in the allocation of credit throughout the economy. There are several reasons why a government agency might actively participate in the allocation of liquid capital funds: market failure, externalities, and noneconomic considerations.

A market failure occurs if private markets fail to allocate resources efficiently. For example, small business owners complain that markets do not provide enough loans at reasonable interest rates, a so-called “credit gap”. However, small business loans are riskier than loans to large corporations. Higher interest rates compensate for the greater risk involved in lending to small businesses. Thus, the case for a market failure is not compelling. However, small business loans remain politically popular.
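A stylized illustration of why riskier loans carry higher rates (the numbers here are hypothetical, not drawn from the article): suppose a lender can earn 5 percent on a safe loan, while comparable small-business loans default with probability 3 percent and recover nothing in default. To earn the same expected return, the lender must charge a rate $i$ satisfying

$$(1 - 0.03)(1 + i) = 1.05 \quad\Rightarrow\quad i \approx 8.2\ \text{percent}.$$

The resulting spread of roughly three percentage points compensates for expected losses rather than signaling a market failure.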

An externality exists when the benefits to society are greater than the benefits to the individuals involved. For example, loans to troubled banks may prevent a financial crisis. Purchases of bank capital may also help stabilize the financial system. Preventing a financial crisis, and the recession or depression that might follow, provides benefits to society beyond those received by bank depositors and shareholders. Similarly, encouraging home ownership may create a more stable society. This argument is often used to justify government provision of funds to the mortgage market.

While wars are often fought over economic issues, and wars have economic consequences, a nation may become involved in a war for noneconomic reasons. Thus, the RFC’s wartime programs were motivated as much by political considerations as by economic ones, if not more so.

The RFC was a federal credit agency. The first federal credit agency was established in 1917. However, federal credit programs were relatively limited until the advent of the RFC. Many RFC lending programs were targeted to help specific sectors of the economy. A number of these activities were controversial, as are some federal credit programs today. Three important government agencies and one private corporation that descended from the RFC still operate today. All have important effects on the allocation of credit in our economy.

Criticisms of Governmental Credit Programs

Critics of federal credit programs cite several problems. One is that these programs subsidize certain activities, which may result in overproduction and misallocation of resources. For example, small businesses can obtain funds through the SBA at lower interest rates than are available through banks. This interest rate differential is a subsidy to small business borrowers. Crop loans and price supports result in overproduction of agricultural products. In general, federal credit programs reallocate capital resources to favored activities.

Finally, federal credit programs, including the RFC, are not funded as part of the normal budget process. They obtain funds through the Treasury, or their own borrowings are assumed to have the guarantee of the federal government. Thus, their borrowing is based on the creditworthiness of the federal government, not their own activities. These “off-budget” activities increase the scope of federal involvement in the economy while avoiding the normal budgetary decisions of the President and Congress. Also, these lending programs involve risk. Default on a significant number of these loans might require the federal government to bail out the affected agency. Taxpayers would bear the cost of a bailout.

Any analysis of market failures, externalities, or federal programs should involve a comparison of costs and benefits. However, precise measurement of costs and benefits in these cases is often difficult. Supporters value the benefits very highly, while opponents argue that the costs are excessive.

Conclusion

The RFC was created to assist banks during the Great Depression. It experienced some, albeit limited, success in this activity. However, the RFC’s authority to borrow directly from the Treasury outside the normal budget process proved very attractive to President Roosevelt and his advisors. Throughout the New Deal, the RFC was used to finance a vast array of favored activities. During World War II, RFC lending to its subsidiary corporations was an essential component of the war effort. It was the largest and most important federal credit program of its time. Even after the RFC was closed, some of its lending activities continued through agencies and corporations that it had first established or funded. These descendant organizations, especially Fannie Mae, play a very important role in the allocation of credit in the American economy. The legacy of the RFC continues, long after it ceased to exist.

 

Data Sources

Banking data are from Banking and Monetary Statistics, 1914-1941, Board of Governors of the Federal Reserve System, 1943.

RFC data are from Final Report on the Reconstruction Finance Corporation, Secretary of the Treasury, 1959.

Currency data are from The Monetary History of the United States, 1867-1960, Friedman and Schwartz, 1963.

Bank suspension data are from Federal Reserve Bulletin, Board of Governors, September 1937.

References

Bagehot, Walter. Lombard Street: A Description of the Money Market. New York: Scribner, Armstrong & Co., 1873.

Board of Governors of the Federal Reserve System. Banking and Monetary Statistics, 1914-1941. Washington, DC, 1943.

Board of Governors of the Federal Reserve System. Federal Reserve Bulletin. September 1937.

Bremer, Cornelius D. American Bank Failures. New York: AMS Press, 1968.

Butkiewicz, James L. “The Impact of a Lender of Last Resort during the Great Depression: The Case of the Reconstruction Finance Corporation.” Explorations in Economic History 32, no. 2 (1995): 197-216.

Butkiewicz, James L. “The Reconstruction Finance Corporation, the Gold Standard, and the Banking Panic of 1933.” Southern Economic Journal 66, no. 2 (1999): 271-93.

Chandler, Lester V. America’s Greatest Depression, 1929-1941. New York: Harper and Row, 1970.

Friedman, Milton, and Anna J. Schwartz. The Monetary History of the United States, 1867-1960. Princeton, NJ: Princeton University Press, 1963.

Jones, Jesse H. Fifty Billion Dollars: My Thirteen Years with the RFC, 1932-1945. New York: Macmillan Co., 1951.

Keehn, Richard H., and Gene Smiley. “U.S. Bank Failures, 1932-1933: A Provisional Analysis.” Essays in Economic and Business History 6 (1988): 136-56.

Keehn, Richard H., and Gene Smiley. “U.S. Bank Failures, 1932-33: Additional Evidence on Regional Patterns, Timing, and the Role of the Reconstruction Finance Corporation.” Essays in Economic and Business History 11 (1993): 131-45.

Kennedy, Susan E. The Banking Crisis of 1933. Lexington, KY: University of Kentucky Press, 1973.

Mason, Joseph R. “Do Lender of Last Resort Policies Matter? The Effects of Reconstruction Finance Corporation Assistance to Banks During the Great Depression.” Journal of Financial Services Research 20, no. 1 (2001): 77-95.

Nadler, Marcus, and Jules L. Bogen. The Banking Crisis: The End of an Epoch. New York, NY: Arno Press, 1980.

Olson, James S. Herbert Hoover and the Reconstruction Finance Corporation. Ames, IA: Iowa State University Press, 1977.

Olson, James S. Saving Capitalism: The Reconstruction Finance Corporation in the New Deal, 1933-1940. Princeton, NJ: Princeton University Press, 1988.

Saulnier, R. J., Harold G. Halcrow, and Neil H. Jacoby. Federal Lending and Loan Insurance. Princeton, NJ: Princeton University Press, 1958.

Schlesinger, Jr., Arthur M. The Age of Roosevelt: The Coming of the New Deal. Cambridge, MA: Riverside Press, 1957.

Secretary of the Treasury. Final Report on the Reconstruction Finance Corporation. Washington, DC: United States Government Printing Office, 1959.

Sprinkel, Beryl Wayne. “Economic Consequences of the Operations of the Reconstruction Finance Corporation.” Journal of Business of the University of Chicago 25, no. 4 (1952): 211-24.

Sullivan, L. Prelude to Panic: The Story of the Bank Holiday. Washington, DC: Statesman Press, 1936.

Trescott, Paul B. “Bank Failures, Interest Rates, and the Great Currency Outflow in the United States, 1929-1933.” Research in Economic History 11 (1988): 49-80.

Upham, Cyril B., and Edwin Lamke. Closed and Distressed Banks: A Study in Public Administration. Washington, DC: Brookings Institution, 1934.

Wicker, Elmus. The Banking Panics of the Great Depression. Cambridge: Cambridge University Press, 1996.

Web Links

Commodity Credit Corporation

http://www.fsa.usda.gov/pas/publications/facts/html/ccc99.htm

Ex-Im Bank http://www.exim.gov/history.html

Fannie Mae http://www.fanniemae.com/company/history.html

Small Business Administration http://www.sba.gov/aboutsba/sbahistory.doc

Citation: Butkiewicz, James. “Reconstruction Finance Corporation”. EH.Net Encyclopedia, edited by Robert Whaples. July 19, 2002. URL http://eh.net/encyclopedia/reconstruction-finance-corporation/

English Poor Laws

George Boyer, Cornell University

A compulsory system of poor relief was instituted in England during the reign of Elizabeth I. Although the role played by poor relief was significantly modified by the Poor Law Amendment Act of 1834, the Crusade Against Outrelief of the 1870s, and the adoption of various social insurance programs in the early twentieth century, the Poor Law continued to assist the poor until it was replaced by the welfare state in 1948. For nearly three centuries, the Poor Law constituted “a welfare state in miniature,” relieving the elderly, widows, children, the sick, the disabled, and the unemployed and underemployed (Blaug 1964). This essay will outline the changing role played by the Poor Law, focusing on the eighteenth and nineteenth centuries.

The Origins of the Poor Law

While legislation dealing with vagrants and beggars dates back to the fourteenth century, perhaps the first English poor law legislation was enacted in 1536, instructing each parish to undertake voluntary weekly collections to assist the “impotent” poor. The parish had been the basic unit of local government since at least the fourteenth century, although Parliament imposed few if any civic functions on parishes before the sixteenth century. Parliament adopted several other statutes relating to the poor in the next sixty years, culminating with the Acts of 1597-98 and 1601 (43 Eliz. I c. 2), which established a compulsory system of poor relief that was administered and financed at the parish (local) level. These Acts laid the groundwork for the system of poor relief up to the adoption of the Poor Law Amendment Act in 1834. Relief was to be administered by a group of overseers, who were to assess a compulsory property tax, known as the poor rate, to assist those within the parish “having no means to maintain them.” The poor were divided into three groups: able-bodied adults, children, and the old or non-able-bodied (impotent). The overseers were instructed to put the able-bodied to work, to give apprenticeships to poor children, and to provide “competent sums of money” to relieve the impotent.

Deteriorating economic conditions and loss of traditional forms of charity in the 1500s

The Elizabethan Poor Law was adopted largely in response to a serious deterioration in economic circumstances, combined with a decline in more traditional forms of charitable assistance. Sixteenth century England experienced rapid inflation, caused by rapid population growth, the debasement of the coinage in 1526 and 1544-46, and the inflow of American silver. Grain prices more than tripled from 1490-1509 to 1550-69, and then increased by an additional 73 percent from 1550-69 to 1590-1609. The prices of other commodities increased nearly as rapidly — the Phelps Brown and Hopkins price index rose by 391 percent from 1495-1504 to 1595-1604. Nominal wages increased at a much slower rate than did prices; as a result, real wages of agricultural and building laborers and of skilled craftsmen declined by about 60 percent over the course of the sixteenth century. This decline in purchasing power led to severe hardship for a large share of the population. Conditions were especially bad in 1595-98, when four consecutive poor harvests led to famine conditions. At the same time that the number of workers living in poverty increased, the supply of charitable assistance declined. The dissolution of the monasteries in 1536-40, followed by the dissolution of religious guilds, fraternities, almshouses, and hospitals in 1545-49, “destroyed much of the institutional fabric which had provided charity for the poor in the past” (Slack 1990). Given the circumstances, the Acts of 1597-98 and 1601 can be seen as an attempt by Parliament both to prevent starvation and to control public order.
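The arithmetic behind these figures can be made explicit (an illustrative back-of-the-envelope calculation, not one reported in the sources cited): a 391 percent rise means the price level in 1595-1604 stood at roughly 4.9 times its 1495-1504 level, so a fall in real wages of about 60 percent implies that nominal wages rose by only about a factor of two over the century:

$$\frac{W_{1600}/P_{1600}}{W_{1500}/P_{1500}} \approx 0.4 \quad\Rightarrow\quad \frac{W_{1600}}{W_{1500}} \approx 0.4 \times 4.91 \approx 2.$$

In other words, wages roughly doubled while prices nearly quintupled.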

The Poor Law, 1601-1750

It is difficult to determine how quickly parishes implemented the Poor Law. Paul Slack (1990) contends that in 1660 a third or more of parishes regularly were collecting poor rates, and that by 1700 poor rates were universal. The Board of Trade estimated that in 1696 expenditures on poor relief totaled £400,000 (see Table 1), slightly less than 1 percent of national income. No official statistics exist for this period concerning the number of persons relieved or the demographic characteristics of those relieved, but it is possible to get some idea of the makeup of the “pauper host” from local studies undertaken by historians. These suggest that, during the seventeenth century, the bulk of relief recipients were elderly, orphans, or widows with young children. In the first half of the century, orphans and lone-parent children made up a particularly large share of the relief rolls, while by the late seventeenth century in many parishes a majority of those collecting regular weekly “pensions” were aged sixty or older. Female pensioners outnumbered males by as much as three to one (Smith 1996). On average, the payment of weekly pensions made up about two-thirds of relief spending in the late seventeenth and early eighteenth centuries; the remainder went to casual benefits, often to able-bodied males in need of short-term relief because of sickness or unemployment.

Settlement Act of 1662

One of the issues that arose in the administration of relief was that of entitlement: did everyone within a parish have a legal right to relief? Parliament addressed this question in the Settlement Act of 1662, which formalized the notion that each person had a parish of settlement, and which gave parishes the right to remove within forty days of arrival any newcomer deemed “likely to be chargeable” as well as any non-settled applicant for relief. While Adam Smith, and some historians, argued that the Settlement Law put a serious brake on labor mobility, available evidence suggests that parishes used it selectively, to keep out economically undesirable migrants such as single women, older workers, and men with large families.

Relief expenditures increased sharply in the first half of the eighteenth century, as can be seen in Table 1. Nominal expenditures increased by 72 percent from 1696 to 1748-50 despite the fact that prices were falling and population was growing slowly; real expenditures per capita increased by 84 percent. A large part of this rise was due to increasing pension benefits, especially for the elderly. Some areas also experienced an increase in the number of able-bodied relief recipients. In an attempt to deter some of the poor from applying for relief, Parliament in 1723 adopted the Workhouse Test Act, which empowered parishes to deny relief to any applicant who refused to enter a workhouse. While many parishes established workhouses as a result of the Act, these were often short-lived, and the vast majority of paupers continued to receive outdoor relief (that is, relief in their own homes).

The Poor Law, 1750-1834

The period from 1750 to 1820 witnessed an explosion in relief expenditures. Real per capita expenditures more than doubled from 1748-50 to 1803, and remained at a high level until the Poor Law was amended in 1834 (see Table 1). Relief expenditures increased from 1.0% of GDP in 1748-50 to a peak of 2.7% of GDP in 1818-20 (Lindert 1998). The demographic characteristics of the pauper host changed considerably in the late eighteenth and early nineteenth centuries, especially in the rural south and east of England. There was a sharp increase in numbers receiving casual benefits, as opposed to regular weekly pensions. The age distribution of those on relief became younger — the share of paupers who were prime-aged (20-59) increased significantly, and the share aged 60 and over declined. Finally, the share of relief recipients in the south and east who were male increased from about a third in 1760 to nearly two-thirds in 1820. In the north and west there also were shifts toward prime-age males and casual relief, but the magnitude of these changes was far smaller than elsewhere (King 2000).

Gilbert’s Act and the Removal Act

There were two major pieces of legislation during this period. Gilbert’s Act (1782) empowered parishes to join together to form unions for the purpose of relieving their poor. The Act stated that only the impotent poor should be relieved in workhouses; the able-bodied should either be found work or granted outdoor relief. To a large extent, Gilbert’s Act simply legitimized the policies of a large number of parishes that found outdoor relief both less expensive and more humane than workhouse relief. The other major piece of legislation was the Removal Act of 1795, which amended the Settlement Law so that no non-settled person could be removed from a parish unless he or she applied for relief.

Speenhamland System and other forms of poor relief

During this period, relief for the able-bodied took various forms, the most important of which were: allowances-in-aid-of-wages (the so-called Speenhamland system), child allowances for laborers with large families, and payments to seasonally unemployed agricultural laborers. The system of allowances-in-aid-of-wages was adopted by magistrates and parish overseers throughout large parts of southern England to assist the poor during crisis periods. The most famous allowance scale, though by no means the first, was that adopted by Berkshire magistrates at Speenhamland on May 6, 1795. Under the allowance system, a household head (whether employed or unemployed) was guaranteed a minimum weekly income, the level of which was determined by the price of bread and by the size of his or her family. Such scales typically were instituted only during years of high food prices, such as 1795-96 and 1800-01, and removed when prices declined. Child allowance payments were widespread in the rural south and east, which suggests that laborers’ wages were too low to support large families. The typical parish paid a small weekly sum to laborers with four or more children under age 10 or 12. Seasonal unemployment had been a problem for agricultural laborers long before 1750, but the extent of seasonality increased in the second half of the eighteenth century as farmers in southern and eastern England responded to the sharp increase in grain prices by increasing their specialization in grain production. The increase in seasonal unemployment, combined with the decline in other sources of income, forced many agricultural laborers to apply for poor relief during the winter.

Regional differences in relief expenditures and recipients

Table 2 reports data for fifteen counties located throughout England on per capita relief expenditures for the years ending in March 1783-85, 1803, 1812, and 1831, and on relief recipients in 1802-03. Per capita expenditures were higher on average in agricultural counties than in more industrial counties, and were especially high in the grain-producing southern counties — Oxford, Berkshire, Essex, Suffolk, and Sussex. The share of the population receiving poor relief in 1802-03 varied significantly across counties, being 15 to 23 percent in the grain-producing south and less than 10 percent in the north. The demographic characteristics of those relieved also differed across regions. In particular, the share of relief recipients who were elderly or disabled was higher in the north and west than it was in the south; by implication, the share that were able-bodied was higher in the south and east than elsewhere. Economic historians typically have concluded that these regional differences in relief expenditures and numbers on relief were caused by differences in economic circumstances; that is, poverty was more of a problem in the agricultural south and east than it was in the pastoral southwest or in the more industrial north (Blaug 1963; Boyer 1990). More recently, King (2000) has argued that the regional differences in poor relief were determined not by economic structure but rather by “very different welfare cultures on the part of both the poor and the poor law administrators.”

Causes of the Increase in Relief to Able-bodied Males

What caused the increase in the number of able-bodied males on relief? In the second half of the eighteenth century, a large share of rural households in southern England suffered significant declines in real income. County-level cross-sectional data suggest that, on average, real wages for day laborers in agriculture declined by 19 percent from 1767-70 to 1795 in fifteen southern grain-producing counties, then remained roughly constant from 1795 to 1824, before increasing to a level in 1832 about 10 percent above that of 1770 (Bowley 1898). Farm-level time-series data yield a similar result — real wages in the southeast declined by 13 percent from 1770-79 to 1800-09, and remained low until the 1820s (Clark 2001).

Enclosures

Some historians contend that the Parliamentary enclosure movement, and the plowing over of commons and waste land, reduced the access of rural households to land for growing food, grazing animals, and gathering fuel, and led to the immiseration of large numbers of agricultural laborers and their families (Hammond and Hammond 1911; Humphries 1990). More recent research, however, suggests that only a relatively small share of agricultural laborers had common rights, and that there was little open access common land in southeastern England by 1750 (Shaw-Taylor 2001; Clark and Clark 2001). Thus, the Hammonds and Humphries probably overstated the effect of late eighteenth-century enclosures on agricultural laborers’ living standards, although those laborers who had common rights must have been hurt by enclosures.

Declining cottage industry

Finally, in some parts of the south and east, women and children were employed in wool spinning, lace making, straw plaiting, and other cottage industries. Employment opportunities in wool spinning, the largest cottage industry, declined in the late eighteenth century, and employment in the other cottage industries declined in the early nineteenth century (Pinchbeck 1930; Boyer 1990). The decline of cottage industry reduced the ability of women and children to contribute to household income. This, in combination with the decline in agricultural laborers’ wage rates and, in some villages, the loss of common rights, caused the incomes of many rural households in southern England to fall dangerously close to subsistence by 1795.

North and Midlands

The situation was different in the north and midlands. The real wages of day laborers in agriculture remained roughly constant from 1770 to 1810, and then increased sharply, so that by the 1820s wages were about 50 percent higher than they were in 1770 (Clark 2001). Moreover, while some parts of the north and midlands experienced a decline in cottage industry, in Lancashire and the West Riding of Yorkshire the concentration of textile production led to increased employment opportunities for women and children.

The Political Economy of the Poor Law, 1795-1834

A comparison of English poor relief with poor relief on the European continent reveals a puzzle: from 1795 to 1834 relief expenditures per capita, and expenditures as a share of national product, were significantly higher in England than on the continent. However, differences in spending between England and the continent were relatively small before 1795 and after 1834 (Lindert 1998). Simple economic explanations cannot account for the different patterns of English and continental relief.

Labor-hiring farmers take advantage of the poor relief system

The increase in relief spending in the late-eighteenth and early-nineteenth centuries was partly a result of politically-dominant farmers taking advantage of the poor relief system to shift some of their labor costs onto other taxpayers (Boyer 1990). Most rural parish vestries were dominated by labor-hiring farmers as a result of “the principle of weighting the right to vote according to the amount of property occupied,” introduced by Gilbert’s Act (1782), and extended in 1818 by the Parish Vestry Act (Brundage 1978). Relief expenditures were financed by a tax levied on all parishioners whose property value exceeded some minimum level. A typical rural parish’s taxpayers can be divided into two groups: labor-hiring farmers and non-labor-hiring taxpayers (family farmers, shopkeepers, and artisans). In grain-producing areas, where there were large seasonal variations in the demand for labor, labor-hiring farmers anxious to secure an adequate peak season labor force were able to reduce costs by laying off unneeded workers during slack seasons and having them collect poor relief. Large farmers used their political power to tailor the administration of poor relief so as to lower their labor costs. Thus, some share of the increase in relief spending in the early nineteenth century represented a subsidy to labor-hiring farmers rather than a transfer from farmers and other taxpayers to agricultural laborers and their families. In pasture farming areas, where the demand for labor was fairly constant over the year, it was not in farmers’ interests to shed labor during the winter, and the number of able-bodied laborers receiving casual relief was smaller. The Poor Law Amendment Act of 1834 reduced the political power of labor-hiring farmers, which helps to account for the decline in relief expenditures after that date.

The New Poor Law, 1834-70

The increase in spending on poor relief in the late eighteenth and early nineteenth centuries, combined with the attacks on the Poor Laws by Thomas Malthus and other political economists and the agricultural laborers’ revolt of 1830-31 (the Captain Swing riots), led the government in 1832 to appoint the Royal Commission to Investigate the Poor Laws. The Commission published its report, written by Nassau Senior and Edwin Chadwick, in March 1834. The report, described by historian R. H. Tawney (1926) as “brilliant, influential and wildly unhistorical,” called for sweeping reforms of the Poor Law, including the grouping of parishes into Poor Law unions, the abolition of outdoor relief for the able-bodied and their families, and the appointment of a centralized Poor Law Commission to direct the administration of poor relief. Soon after the report was published Parliament adopted the Poor Law Amendment Act of 1834, which implemented some of the report’s recommendations and left others, like the regulation of outdoor relief, to the three newly appointed Poor Law Commissioners.

By 1839 the vast majority of rural parishes had been grouped into poor law unions, and most of these had built or were building workhouses. On the other hand, the Commission met with strong opposition when it attempted in 1837 to set up unions in the industrial north, and the implementation of the New Poor Law was delayed in several industrial cities. In an attempt to regulate the granting of relief to able-bodied males, the Commission, and its replacement in 1847, the Poor Law Board, issued several orders to selected Poor Law Unions. The Outdoor Labour Test Order of 1842, sent to unions without workhouses or where the workhouse test was deemed unenforceable, stated that able-bodied males could be given outdoor relief only if they were set to work by the union. The Outdoor Relief Prohibitory Order of 1844 prohibited outdoor relief for both able-bodied males and females except on account of sickness or “sudden and urgent necessity.” The Outdoor Relief Regulation Order of 1852 extended the labor test for those relieved outside of workhouses.

Historical debate about the effect of the New Poor Law

Historians do not agree on the effect of the New Poor Law on the local administration of relief. Some contend that the orders regulating outdoor relief largely were evaded by both rural and urban unions, many of whom continued to grant outdoor relief to unemployed and underemployed males (Rose 1970; Digby 1975). Others point to the falling numbers of able-bodied males receiving relief in the national statistics and the widespread construction of union workhouses, and conclude that the New Poor Law succeeded in abolishing outdoor relief for the able-bodied by 1850 (Williams 1981). A recent study by Lees (1998) found that in three London parishes and six provincial towns in the years around 1850 large numbers of prime-age males continued to apply for relief, and that a majority of those assisted were granted outdoor relief. The Poor Law also played an important role in assisting the unemployed in industrial cities during the cyclical downturns of 1841-42 and 1847-48 and the Lancashire cotton famine of 1862-65 (Boot 1990; Boyer 1997). There is no doubt, however, that spending on poor relief declined after 1834 (see Table 1). Real per capita relief expenditures fell by 43 percent from 1831 to 1841, and increased slowly thereafter.

Beginning in 1840, data on the number of persons receiving poor relief are available for two days a year, January 1 and July 1; the “official” estimates in Table 1 of the annual number relieved were constructed as the average of the number relieved on these two dates. Studies conducted by Poor Law administrators indicate that the number recorded in the day counts was less than half the number assisted during the year. Lees’s “revised” estimates of annual relief recipients (see Table 1) assume that the ratio of actual to counted paupers was 2.24 for 1850-1900 and 2.15 for 1905-14; these suggest that from 1850 to 1870 about 10 percent of the population was assisted by the Poor Law each year. Given the temporary nature of most spells of relief, over a three-year period as much as 25 percent of the population made use of the Poor Law (Lees 1998).
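As a quick check on this adjustment, using figures already reported in Table 1: the official count for 1851 is 941 thousand, and applying Lees’s ratio gives

$$941 \times 2.24 \approx 2{,}108 \text{ thousand},$$

which is the revised figure shown for 1851.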

The Crusade Against Outrelief

In the 1870s Poor Law unions throughout England and Wales curtailed outdoor relief for all types of paupers. This change in policy, known as the Crusade Against Outrelief, was not a result of new government regulations, although it was encouraged by the newly formed Local Government Board (LGB). The Board was aided in convincing the public of the need for reform by the propaganda of the Charity Organization Society (COS), founded in 1869. The LGB and the COS maintained that the ready availability of outdoor relief destroyed the self-reliance of the poor. The COS went on to argue that the shift from outdoor to workhouse relief would significantly reduce the demand for assistance, since most applicants would refuse to enter workhouses, and therefore reduce Poor Law expenditures. A policy that promised to raise the morals of the poor and reduce taxes was hard for most Poor Law unions to resist (MacKinnon 1987).

The effect of the Crusade can be seen in Table 1. The deterrent effect associated with the workhouse led to a sharp fall in numbers on relief — from 1871 to 1876, the number of paupers receiving outdoor relief fell by 33 percent. The share of paupers relieved in workhouses increased from 12-15 percent in 1841-71 to 22 percent in 1880, and it continued to rise to 35 percent in 1911. The extent of the crusade varied considerably across poor law unions. Urban unions typically relieved a much larger share of their paupers in workhouses than did rural unions, but there were significant differences in practice across cities. In 1893, over 70 percent of the paupers in Liverpool, Manchester, Birmingham, and in many London Poor Law unions received indoor relief; however, in Leeds, Bradford, Newcastle, Nottingham and several other industrial and mining cities the majority of paupers continued to receive outdoor relief (Booth 1894).

Change in the attitude of the poor toward relief

The last third of the nineteenth century also witnessed a change in the attitude of the poor towards relief. Prior to 1870, a large share of the working class regarded access to public relief as an entitlement, although they rejected the workhouse as a form of relief. Their opinions changed over time, however, and by the end of the century most workers viewed poor relief as stigmatizing (Lees 1998). This change in perceptions led many poor people to go to great lengths to avoid applying for relief, and available evidence suggests that there were large differences between poverty rates and pauperism rates in late Victorian Britain. For example, in York in 1900, 3,451 persons received poor relief at some point during the year, less than half of the 7,230 persons estimated by Rowntree to be living in primary poverty.

The Declining Role of the Poor Law, 1870-1914

Increased availability of alternative sources of assistance

The share of the population on relief fell sharply from 1871 to 1876, and then continued to decline, at a much slower pace, until 1914. Real per capita relief expenditures increased from 1876 to 1914, largely because the Poor Law provided increasing amounts of medical care for the poor. Otherwise, the role played by the Poor Law declined over this period, due in large part to an increase in the availability of alternative sources of assistance. There was a sharp increase in the second half of the nineteenth century in the membership of friendly societies — mutual help associations providing sickness, accident, and death benefits, and sometimes old age (superannuation) benefits — and of trade unions providing mutual insurance policies. The benefits provided workers and their families with some protection against income loss, and few who belonged to friendly societies or unions providing “friendly” benefits ever needed to apply to the Poor Law for assistance.

Work relief

Local governments continued to assist unemployed males after 1870, but typically not through the Poor Law. Beginning with the Chamberlain Circular in 1886 the Local Government Board encouraged cities to set up work relief projects when unemployment was high. The circular stated that “it is not desirable that the working classes should be familiarised with Poor Law relief,” and that the work provided should “not involve the stigma of pauperism.” In 1905 Parliament adopted the Unemployed Workman Act, which established in all large cities distress committees to provide temporary employment to workers who were unemployed because of a “dislocation of trade.”

Liberal welfare reforms, 1906-1911

Between 1906 and 1911 Parliament passed several pieces of social welfare legislation collectively known as the Liberal welfare reforms. These laws provided free meals and medical inspections (later treatment) for needy school children (1906, 1907, 1912) and weekly pensions for poor persons over age 70 (1908), and established national sickness and unemployment insurance (1911). The Liberal reforms purposely reduced the role played by poor relief, and paved the way for the abolition of the Poor Law.

The Last Years of the Poor Law

During the interwar period the Poor Law served as a residual safety net, assisting those who fell through the cracks of the existing social insurance policies. The high unemployment of 1921-38 led to a sharp increase in numbers on relief. The official count of relief recipients rose from 748,000 in 1914 to 1,449,000 in 1922; the number relieved averaged 1,379,800 from 1922 to 1938. A large share of those on relief were unemployed workers and their dependents, especially in 1922-26. Despite the extension of unemployment insurance in 1920 to virtually all workers except the self-employed and those in agriculture or domestic service, there still were large numbers who either did not qualify for unemployment benefits or who had exhausted their benefits, and many of them turned to the Poor Law for assistance. The vast majority were given outdoor relief; from 1921 to 1923 the number of outdoor relief recipients increased by 1,051,000 while the number receiving indoor relief increased by 21,000.

The Poor Law becomes redundant and is repealed

Despite the important role played by poor relief during the interwar period, the government continued to adopt policies that bypassed the Poor Law and left it “to die by attrition and surgical removals of essential organs” (Lees 1998). The Local Government Act of 1929 abolished the Poor Law unions, and transferred the administration of poor relief to the counties and county boroughs. In 1934 the responsibility for assisting those unemployed who were outside the unemployment insurance system was transferred from the Poor Law to the Unemployment Assistance Board. Finally, from 1945 to 1948, Parliament adopted a series of laws that together formed the basis for the welfare state, and made the Poor Law redundant. The National Assistance Act of 1948 officially repealed all existing Poor Law legislation, and replaced the Poor Law with the National Assistance Board to act as a residual relief agency.

Table 1
Relief Expenditures and Numbers on Relief, 1696-1936

Columns: (1) Year; (2) Expenditures on relief (£000s); (3) Real expenditures per capita (1803 = 100); (4) Expenditures as a share of GDP, percent (Slack); (5) Expenditures as a share of GDP, percent (Lindert); (6) Number relieved, official count (000s); (7) Share of population relieved, official (%); (8) Number relieved, Lees’s revised estimate (000s); (9) Share of population relieved, Lees (%); (10) Share of paupers relieved indoors (%). A dash (—) indicates that no figure is available.

1696 400 24.9 0.8 — — — — — —
1748-50 690 45.8 1.0 0.99 — — — — —
1776 1,530 64.0 1.6 1.59 — — — — —
1783-85 2,004 75.6 2.0 1.75 — — — — —
1803 4,268 100.0 1.9 2.15 1,041 11.4 — — 8.0
1813 6,656 91.8 — 2.58 — — — — —
1818 7,871 116.8 — — — — — — —
1821 6,959 113.6 — 2.66 — — — — —
1826 5,929 91.8 — — — — — — —
1831 6,799 107.9 — 2.00 — — — — —
1836 4,718 81.1 — — — — — — —
1841 4,761 61.8 — 1.12 1,299 8.3 2,910 18.5 14.8
1846 4,954 69.4 — — 1,332 8.0 2,984 17.8 15.0
1851 4,963 67.8 — 1.07 941 5.3 2,108 11.9 12.1
1856 6,004 62.0 — — 917 4.9 2,054 10.9 13.6
1861 5,779 60.0 — 0.86 884 4.4 1,980 9.9 13.2
1866 6,440 65.0 — — 916 4.3 2,052 9.7 13.7
1871 7,887 73.3 — — 1,037 4.6 2,323 10.3 14.2
1876 7,336 62.8 — — 749 3.1 1,678 7.0 18.1
1881 8,102 69.1 — 0.70 791 3.1 1,772 6.9 22.3
1886 8,296 72.0 — — 781 2.9 1,749 6.4 23.2
1891 8,643 72.3 — — 760 2.6 1,702 5.9 24.0
1896 10,216 84.7 — — 816 2.7 1,828 6.0 25.9
1901 11,549 84.7 — — 777 2.4 1,671 5.2 29.2
1906 14,036 96.9 — — 892 2.6 1,918 5.6 31.1
1911 15,023 93.6 — — 886 2.5 1,905 5.3 35.1
1921 31,925 75.3 — — 627 1.7 — — 35.7
1926 40,083 128.3 — — 1,331 3.4 — — 17.7
1931 38,561 133.9 — — 1,090 2.7 — — 21.5
1936 44,379 165.7 — — 1,472 3.6 — — 12.6

Notes: Relief expenditure data are for the year ended on March 25. In calculating real per capita expenditures, I used cost of living and population data for the previous year.
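A minimal sketch of the construction described in this note (the notation is mine; the underlying cost-of-living and population series are the author’s and are not reproduced here):

$$\text{Real expenditure per capita}_t = 100 \times \frac{E_t / (P_{t-1} N_{t-1})}{E_{1803} / (P_{1802} N_{1802})},$$

where $E_t$ is nominal relief spending in the year ending March 25 of year $t$, $P_{t-1}$ is the cost-of-living index for the previous year, and $N_{t-1}$ is the population in the previous year, with the index normalized to 100 in 1803.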

Table 2
County-level Poor Relief Data, 1783-1831

Columns: (1) County; (2)-(5) Per capita relief spending, in shillings, for 1783-85, 1802-03, 1812, and 1831; (6) Percent of population relieved, 1802-03; (7) Percent of relief recipients over 60 or disabled, 1802-03; (8) Share of land in arable farming (%), c. 1836; (9) Percent of population employed in agriculture, 1821.
North
Durham 2.78 6.50 9.92 6.83 9.3 22.8 54.9 20.5
Northumberland 2.81 6.67 7.92 6.25 8.8 32.2 46.5 26.8
Lancashire 3.48 4.42 7.42 4.42 6.7 15.0 27.1 11.2
West Riding 2.91 6.50 9.92 5.58 9.3 18.1 30.0 19.6
Midlands
Stafford 4.30 6.92 8.50 6.50 9.1 17.2 44.8 26.6
Nottingham 3.42 6.33 10.83 6.50 6.8 17.3 na 35.4
Warwick 6.70 11.25 13.33 9.58 13.3 13.7 47.5 27.9
Southeast
Oxford 7.07 16.17 24.83 16.92 19.4 13.2 55.8 55.4
Berkshire 8.65 15.08 27.08 15.75 20.0 12.7 58.5 53.3
Essex 9.10 12.08 24.58 17.17 16.4 12.7 72.4 55.7
Suffolk 7.35 11.42 19.33 18.33 16.6 11.4 70.3 55.9
Sussex 11.52 22.58 33.08 19.33 22.6 8.7 43.8 50.3
Southwest
Devon 5.53 7.25 11.42 9.00 12.3 23.1 22.5 40.8
Somerset 5.24 8.92 12.25 8.83 12.0 20.8 24.4 42.8
Cornwall 3.62 5.83 9.42 6.67 6.6 31.0 23.8 37.7
England & Wales 4.06 8.92 12.75 10.08 11.4 16.0 48.0 33.0

References

Blaug, Mark. “The Myth of the Old Poor Law and the Making of the New.” Journal of Economic History 23 (1963): 151-84.

Blaug, Mark. “The Poor Law Report Re-examined.” Journal of Economic History (1964) 24: 229-45.

Boot, H. M. “Unemployment and Poor Law Relief in Manchester, 1845-50.” Social History 15 (1990): 217-28.

Booth, Charles. The Aged Poor in England and Wales. London: MacMillan, 1894.

Boyer, George R. “Poor Relief, Informal Assistance, and Short Time during the Lancashire Cotton Famine.” Explorations in Economic History 34 (1997): 56-76.

Boyer, George R. An Economic History of the English Poor Law, 1750-1850. Cambridge: Cambridge University Press, 1990.

Brundage, Anthony. The Making of the New Poor Law. New Brunswick, N.J.: Rutgers University Press, 1978.

Clark, Gregory. “Farm Wages and Living Standards in the Industrial Revolution: England, 1670-1869.” Economic History Review, 2nd series 54 (2001): 477-505.

Clark, Gregory and Anthony Clark. “Common Rights to Land in England, 1475-1839.” Journal of Economic History 61 (2001): 1009-36.

Digby, Anne. “The Labour Market and the Continuity of Social Policy after 1834: The Case of the Eastern Counties.” Economic History Review, 2nd series 28 (1975): 69-83.

Eastwood, David. Governing Rural England: Tradition and Transformation in Local Government, 1780-1840. Oxford: Clarendon Press, 1994.

Fraser, Derek, editor. The New Poor Law in the Nineteenth Century. London: Macmillan, 1976.

Hammond, J. L. and Barbara Hammond. The Village Labourer, 1760-1832. London: Longmans, Green, and Co., 1911.

Hampson, E. M. The Treatment of Poverty in Cambridgeshire, 1597-1834. Cambridge: Cambridge University Press, 1934

Humphries, Jane. “Enclosures, Common Rights, and Women: The Proletarianization of Families in the Late Eighteenth and Early Nineteenth Centuries.” Journal of Economic History 50 (1990): 17-42.

King, Steven. Poverty and Welfare in England, 1700-1850: A Regional Perspective. Manchester: Manchester University Press, 2000.

Lees, Lynn Hollen. The Solidarities of Strangers: The English Poor Laws and the People, 1770-1948. Cambridge: Cambridge University Press, 1998.

Lindert, Peter H. “Poor Relief before the Welfare State: Britain versus the Continent, 1780-1880.” European Review of Economic History 2 (1998): 101-40.

MacKinnon, Mary. “English Poor Law Policy and the Crusade Against Outrelief.” Journal of Economic History 47 (1987): 603-25.

Marshall, J. D. The Old Poor Law, 1795-1834. 2nd edition. London: Macmillan, 1985.

Pinchbeck, Ivy. Women Workers and the Industrial Revolution, 1750-1850. London: Routledge, 1930.

Pound, John. Poverty and Vagrancy in Tudor England, 2nd edition. London: Longmans, 1986.

Rose, Michael E. “The New Poor Law in an Industrial Area.” In The Industrial Revolution, edited by R.M. Hartwell. Oxford: Oxford University Press, 1970.

Rose, Michael E. The English Poor Law, 1780-1930. Newton Abbot: David & Charles, 1971.

Shaw-Taylor, Leigh. “Parliamentary Enclosure and the Emergence of an English Agricultural Proletariat.” Journal of Economic History 61 (2001): 640-62.

Slack, Paul. Poverty and Policy in Tudor and Stuart England. London: Longmans, 1988.

Slack, Paul. The English Poor Law, 1531-1782. London: Macmillan, 1990.

Smith, Richard. “Charity, Self-interest and Welfare: Reflections from Demographic and Family History.” In Charity, Self-Interest and Welfare in the English Past, edited by Martin Daunton. New York: St. Martin’s, 1996.

Sokoll, Thomas. Household and Family among the Poor: The Case of Two Essex Communities in the Late Eighteenth and Early Nineteenth Centuries. Bochum: Universitätsverlag Brockmeyer, 1993.

Solar, Peter M. “Poor Relief and English Economic Development before the Industrial Revolution.” Economic History Review, 2nd series 48 (1995): 1-22.

Tawney, R. H. Religion and the Rise of Capitalism: A Historical Study. London: J. Murray, 1926.

Webb, Sidney and Beatrice Webb. English Poor Law History. Part I: The Old Poor Law. London: Longmans, 1927.

Williams, Karel. From Pauperism to Poverty. London: Routledge, 1981.

Citation: Boyer, George. “English Poor Laws”. EH.Net Encyclopedia, edited by Robert Whaples. May 7, 2002. URL http://eh.net/encyclopedia/english-poor-laws/

The National Recovery Administration

Barbara Alexander, Charles River Associates

This article outlines the history of the National Recovery Administration, one of the most important and controversial agencies in Roosevelt’s New Deal. It discusses the agency’s “codes of fair competition” under which antitrust law exemptions could be granted in exchange for adoption of minimum wages, problems some industries encountered in their subsequent attempts to fix prices under the codes, and the macroeconomic effects of the program.

The early New Deal suspension of antitrust law under the National Recovery Administration (NRA) is surely one of the oddest episodes in American economic history. In its two-year life, the NRA oversaw the development of so-called “codes of fair competition” covering the larger part of the business landscape.1 The NRA generally is thought to have represented a political exchange whereby business gave up some of its rights over employees in exchange for permission to form cartels.2 Typically, labor is taken to have gotten the better part of the bargain: the union movement extended its new powers after the Supreme Court struck down the NRA in 1935, while the business community faced a newly aggressive FTC by the end of the 1930s. While this characterization may be true in broad outline, close examination of the NRA reveals that matters may be somewhat more complicated than is suggested by the interpretation of the program as a win for labor contrasted with a missed opportunity for business.

Recent evaluations of the NRA have wended their way back to themes sounded during the early 1930s, in particular the interrelationships between the so-called “trade practice” or cartelization provisions of the program and the grant of enhanced bargaining power to trade unions.3 On the microeconomic side, allowing unions to bargain for industry-wide wages may have facilitated cartelization in some industries. Meanwhile, macroeconomists have suggested that the Act and its progeny, especially labor measures such as the National Labor Relations Act, may bear more responsibility for the length and severity of the Great Depression than has been recognized heretofore.4 If this thesis holds up to closer scrutiny, the era may come to be seen as a primary example of the potential macroeconomic costs of shifts in political and economic power.

Kickoff Campaign and Blanket Codes

The NRA began operations in a burst of “ballyhoo” during the summer of 1933.5 The agency was formed upon passage of the National Industrial Recovery Act (NIRA) in mid-June. A kick-off campaign of parades and press events succeeded in getting over 2 million employers to sign a preliminary “blanket code” known as the “President’s Re-Employment Agreement” (PRA). Signatories of the PRA pledged to pay minimum wages ranging from around $12 to $15 per 40-hour week, depending on size of town. Some 16 million workers were covered, out of a non-farm labor force of some 25 million. “Share-the-work” provisions called for limits of 35 to 40 hours per week for most employees.6

NRA Codes

Over the next year and a half, the blanket code was superseded by over 500 codes negotiated for individual industries. The NIRA provided that: “Upon the application to the President by one or more trade or industrial associations or groups, the President may approve a code or codes of fair competition for the trade or industry.”7 The carrot held out to induce participation was enticing: “any code … and any action complying with the provisions thereof . . . shall be exempt from the provisions of the antitrust laws of the United States.”8 Representatives of trade associations overran Washington, and by the time the NRA was abolished, hundreds of codes covering over three-quarters of private, non-farm employment had been approved.9 Code signatories were supposed to be allowed to use the NRA “Blue Eagle” as a symbol that “we do our part” only as long as they remained in compliance with code provisions.10

Disputes Arise

Almost 80 percent of the codes had provisions that were directed at establishment of price floors.11 The Act did not specifically authorize businesses to fix prices, and indeed it specified that ” . . .codes are not designed to promote monopolies.”12 However, it is an understatement to say that there was never any consensus among firms, industries and NRA officials as to precisely what was to be allowed as part of an acceptable code. Arguments about exactly what the NIRA allowed, and how the NRA should implement the Act began during its drafting and continued unabated throughout its life. The arguments extended from the level of general principles to the smallest details of policy, unsurprising given the complete dependence of appropriate regulatory design on precise regulatory objectives, which here were embroiled in dispute from start to finish.

To choose just one out of many examples of such disputes: There was a debate within the NRA as to whether “code authorities” (industry governing bodies) should be allowed to use industry-wide or “representative” cost data to define a price floor based on “lowest reasonable cost.” Most economists would understand this type of rule as a device that would facilitate monopoly pricing. However, a charitable interpretation of the views of administration proponents is that they had some sort of “soft competition” in mind. That is, they wished to develop and allow the use of mechanisms that would extend to more fragmented industries a type of peaceful coexistence more commonly associated with oligopoly. Those NRA supporters of the representative-cost-based price floor imagined that a range of prices would emerge if such a floor were to be set, whereas detractors believed that “the minimum would become the maximum,” that is, the floor would simply be a cartel price, constraining competition across all firms in an industry.13

Price Floors

While a rule allowing emergency price floors based on “lowest reasonable cost” was eventually approved, there was no coherent NRA program behind it.14 Indeed, the NRA and code authorities often operated at cross-purposes. At the same time that some officials of the NRA arguably took actions to promote softened competition, some in industry tried to implement measures more likely to support hard-core cartels, even when they thereby reduced the chance of soft competition should collusion fail. For example, with the partial support of the NRA, many code authorities moved to standardize products, shutting off product differentiation as an arena of potential rivalry, in spite of its role as one of the strongest mechanisms that might soften price competition.15 Of course if one is looking to run a naked price-fixing scheme, it is helpful to eliminate product differentiation as an avenue for cost-raising, profit-eroding rivalry. An industry push for standardization can thus be seen as a way of supporting hard-core cartelization, while less enthusiasm on the part of some administration officials may have reflected an understanding, however intuitive, that socially more desirable soft competition required that avenues for product differentiation be left open.

National Recovery Review Board

According to some critical observers then and later, the codes did lead to an unsurprising sort of “golden age” of cartelization. The National Recovery Review Board, led by an outraged Clarence Darrow (of Scopes “monkey trial” fame) concluded in May of 1934 that “in certain industries monopolistic practices existed.”16 While there are legitimate examples of every variety of cartelization occurring under the NRA, many contemporaneous and subsequent assessments of Darrow’s work dismiss the Board’s “analysis” as hopelessly biased. Thus although its conclusions are interesting as a matter of political economy, it is far from clear that the Board carried out any dispassionate inventory of conditions across industries, much less a real weighing of evidence.17

Compliance Crisis

In contrast to Darrow’s perspective, other commentators focus on the “compliance crisis” that erupted within a few months of passage of the NIRA.18 Many industries were faced with “chiselers” who refused to respect code pricing rules. Firms that attempted to uphold code prices in the face of defection lost both market share and respect for the NRA.

NRA state compliance offices had recorded over 30,000 “trade practice” complaints by early 1935.19 However, the compliance program was characterized by “a marked timidity on the part of NRA enforcement officials.”20 This timidity was fatal to the program, since monopoly pricing can easily be more damaging than is the most bare-knuckled competition to a firm that attempts it without parallel action from its competitors. NRA hesitancy came about as a result of doubts about whether a vigorous enforcement effort would withstand constitutional challenge, a not-unrelated lack of support from the Department of Justice, public antipathy for enforcement actions aimed at forcing sellers to charge higher prices, and unabating internal NRA disputes about the advisability of the price-fixing core of the trade practice program.21 Consequently, by mid-1934, firms disinclined to respect code pricing rules were ignoring them. By that point then, contrary to the initial expectations of many code signatories, the new antitrust regime represented only permission to form voluntary cartelization agreements, not the advent of government-enforced cartels. Even there, participants had to be discreet, so as not to run afoul of the antimonopoly language of the Act.

It is still far from clear how much market power was conferred by the NRA’s loosening of antitrust constraints. Of course, modern observers of the alternating successes and failures of cartels such as OPEC will not be surprised that the NRA program led to mixed results. In the absence of government enforcement, the program simply amounted to de facto legalization of self-enforcing cartels. With respect to the ease of collusion, economic theory is clear only on the point that self-enforceability is an open question; self-interest may lead to either breakdown of agreements or success at sustaining them.
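A standard way to state this point formally (a textbook repeated-game condition, not something drawn from the NRA literature itself): a cartel agreement is self-enforcing only if, for each firm, the discounted value of continued cooperation exceeds the one-period gain from undercutting plus the discounted value of the competitive outcome that follows a breakdown,

$$\frac{\pi^c}{1-\delta} \;\ge\; \pi^d + \frac{\delta\,\pi^n}{1-\delta},$$

where $\pi^c$ is per-period collusive profit, $\pi^d$ the one-period profit from defection, $\pi^n$ the per-period profit once collusion collapses, and $\delta$ the discount factor. Whether the inequality holds depends on industry conditions, which is why self-interest alone can either sustain or destroy such agreements.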

Conflicts between Large and Small Firms

Some part of the difficulties encountered by NRA cartels may have had roots in a progressive mandate to offer special protection to the “little guy.” The NIRA had specified that acceptable codes of fair competition must not “eliminate or oppress small enterprises,”22 and that “any organization availing itself of the benefits of this title shall be truly representative of the trade or industry . . . Any organization violating … shall cease to be entitled to the benefits of this title.”23 Majority rule provisions were exceedingly common in codes, and were most likely a reflection of this statutory mandate. The concern for small enterprise had strong progressive roots.24 Justice Brandeis’s well-known antipathy for large-scale enterprise and concentration of economic power reflected a widespread and long-standing debate about the legitimate goals of the American experiment.

In addition to evaluating monopolization under the codes, the Darrow board had been charged with assessing the impact of the NRA on small business. Its conclusion was that “in certain industries small enterprises were oppressed.” Again however, as with his review of monopolization, Darrow may have seen only what he was predisposed to see. A number of NRA “code histories” detail conflicts within industries in which small, higher-cost producers sought to use majority rule provisions to support pricing at levels above those desired by larger, lower-cost producers. In the absence of effective enforcement from the government, such prices were doomed to break down, triggering repeated price wars in some industries.25

By 1935, there was understandable bitterness about what many businesses viewed as the lost promise of the NRA. Undoubtedly, the bitterness was exacerbated by the fact that the NRA wanted higher wages while failing to deliver the tools needed for effective cartelization. However, it is not entirely clear that everyone in the business community felt that the labor provisions of the Act were undesirable.26

Labor and Employment Issues

By their nature, market economies give rise to surplus-eroding rivalry among those who would be better off collectively if they could only act in concert. NRA codes of fair competition, specifying agreements on pricing and terms of employment, arose from a perceived confluence of interests among representatives of “business,” “labor,” and “the public” in muting that rivalry. Many proponents of the NIRA held that competitive pressures on business had led to downward pressure on wages, which in turn caused low consumption, leading to greater pressure on business, and so on. Allowing workers to organize and bargain collectively, while their employers pledged to one another not to sell below cost, was identified as a way to arrest harmful deflationary forces. Knowledge that one’s rivals would also be forced to pay “code wages” had some potential for aiding cartel survival. Thus the rationale for NRA wage supports at the microeconomic level potentially dovetailed with the macroeconomic theory by which higher wages were held to support higher consumption and, in turn, higher prices.

Labor provisions of the NIRA appeared in Section 7: “. . . employees shall have the right to organize and bargain collectively through representatives of their own choosing … employers shall comply with the maximum hours of labor, minimum rates of pay, and other conditions of employment…” 27 Each “code of fair competition” had to include labor provisions acceptable to the National Recovery Administration, developed during a process of negotiations, hearings, and review. Thus in order to obtain the shield against antitrust prosecution for their “trade practices” offered by an approved code, significant concessions to workers had to be made.

The NRA is generally judged to have been a success for labor and a miserable failure for business. However, evaluation is complicated to the extent that labor could not have achieved gains with respect to collective bargaining rights over wages and working conditions, had those rights not been more or less willingly granted by employers operating under the belief that stabilization of labor costs would facilitate cartelization. The labor provisions may have indeed helped some industries as well as helping workers, and for firms in such industries, the NRA cannot have been judged a failure. Moreover, while some businesses may have found the Act beneficial, because labor cost stability or freedom to negotiate with rivals enhanced their ability to cooperate on price, it is not entirely obvious that workers as a class gained as much as is sometimes contended.

The NRA did help solidify new and important norms regarding child labor, maximum hours, and other conditions of employment; it will never be known if the same progress could have been made had not industry been more or less hornswoggled into giving ground, using the antitrust laws as bait. Whatever the long-term effects of the NRA on worker welfare, the short-term gains for labor associated with higher wages were questionable. While those workers who managed to stay employed throughout the nineteen thirties benefited from higher wages, to the extent that workers were also consumers, and often unemployed consumers at that, or even potential entrepreneurs, they may have been better off without the NRA.

The issue is far from settled. Ben Bernanke and Martin Parkinson examine the economic growth that occurred during the New Deal in spite of higher wages and suggest “part of the answer may be that the higher wages ‘paid for themselves’ through increased productivity of labor. Probably more important, though, is the observation that with imperfectly competitive product markets, output depends on aggregate demand as well as the real wage. Maybe Herbert Hoover and Henry Ford were right: Higher real wages may have paid for themselves in the broader sense that their positive effect on aggregate demand compensated for their tendency to raise cost.”28 However, Christina Romer establishes a close connection between NRA programs and the failure of wages and prices to adjust to high unemployment levels. In her view, “By preventing the large negative deviations of output from trend in the mid-1930s from exerting deflationary pressure, [the NRA] prevented the economy’s self-correction mechanism from working.” 29

Aftermath of the Supreme Court’s Ruling in the Schechter Case

The Supreme Court struck down the NRA on May 27, 1935; the case was a dispute over violations of labor provisions of the “Live Poultry Code” allegedly perpetrated by the Schechter Poultry Corporation. The Court held the code to be invalid on grounds of “attempted delegation of legislative power and the attempted regulation of intrastate transactions which affect interstate commerce only indirectly.”30 There were to be no more grand bargains between business and labor under the New Deal.

Riven by divergent agendas rooted in industry- and firm-specific technology and demand, “business” was never able to speak with even the tenuous degree of unity achieved by workers. Following the abortive attempt to get the government to enforce cartels, firms and industries went their own ways, using a variety of strategies to enhance their situations. A number of sectors did succeed in getting passage of “little NRAs” with mechanisms tailored to mute competition in their particular circumstances. These mechanisms included the Robinson-Patman Act, aimed at strengthening traditional retailers against the ability of chain stores to buy at lower prices; the Guffey Acts, in which high-cost bituminous coal operators and coal miners sought protection from the competition of lower-cost operators; and the Motor Carrier Act, in which high-cost incumbent truckers obtained protection against new entrants.31

Ongoing macroeconomic analysis suggests that the general public interest may have been poorly served by the experiment of the NRA. As with many macroeconomic theories, the validity of the underconsumption scenario put forth in support of the program depended on the strength and timing of its various mechanisms. Increasingly it appears that the NRA set off inflationary forces thought by some to be desirable at the time, but that in fact had depressing effects on demand for labor and on output. Pure monopolistic deadweight losses probably were less important than higher wage costs (although there has not been any close examination of inefficiencies that may have resulted from the NRA’s attempt to protect small higher-cost producers). The strength of any mitigating effects on aggregate demand remains to be established.

1 Leverett Lyon, P. Homan, L. Lorwin, G. Terborgh, C. Dearing, L. Marshall, The National Recovery Administration: An Analysis and Appraisal, Washington: Brookings Institution, 1935, p. 313, footnote 9.

2 See, for example, Charles Frederick Roos, NRA Economic Planning, Colorado Springs: Cowles Commission, 1935, p. 343.

3 See, for example, Colin Gordon, New Deals: Business, Labor, and Politics in America, 1920-1935, New York: Cambridge University Press, 1993, especially chapter 5.

4 Christina D. Romer, “Why Did Prices Rise in the 1930s?” Journal of Economic History 59, no. 1 (1999): 167-199; Michael Weinstein, Recovery and Redistribution under the NIRA, Amsterdam: North Holland, 1980; and Harold L. Cole and Lee E. Ohanian, “New Deal Policies and the Persistence of the Great Depression,” Working Paper 597, Federal Reserve Bank of Minneapolis, February 2001. But also see Ben Bernanke and Martin Parkinson, “Unemployment, Inflation and Wages in the American Depression: Are There Lessons for Europe?” American Economic Review: Papers and Proceedings 79, no. 2 (1989): 210-214.

5 See, for example, Donald Brand, Corporatism and the Rule of Law: A Study of the National Recovery Administration, Ithaca: Cornell University Press, 1988, p. 94.

6 See, for example, Roos, op. cit., pp. 77, 92.

7 Section 3(a) of The National Industrial Recovery Act, reprinted at p. 478 of Roos, op. cit.

8 Section 5 of The National Industrial Recovery Act, reprinted at p. 483 of Roos, op. cit. Note though, that the legal status of actions taken during the NRA era was never clear; Roos points out that “…President Roosevelt signed an executive order on January 20, 1934, providing that any complainant of monopolistic practices … could press it before the Federal Trade Commission or request the assistance of the Department of Justice. And, on the same date, Donald Richberg issued a supplementary statement which said that the provisions of the anti-trust laws were still in effect and that the NRA would not tolerate monopolistic practices.” (Roos, op. cit. p. 376.)

9 Lyon, op. cit., p. 307, cited at p. 52 in Cole and Ohanian, op. cit.

10 Roos, op. cit., p. 75; and Blackwell Smith, My Imprint on the Sands of Time: The Life of a New Dealer, Vantage Press, New York, p. 109.

11 Lyon, op. cit., p. 570.

12 Section 3(a)(2) of The National Industrial Recovery Act, op. cit.

13 Roos, op. cit., at pp. 254-259. Charles Roos comments that “Leon Henderson and Blackwell Smith, in particular, became intrigued with a notion that competition could be set up within limits and that in this way wide price variations tending to demoralize an industry could be prevented.”

14 Lyon, et al., op. cit., p. 605.

15 Smith, Assistant Counsel of the NRA (per Roos, op. cit., p. 254), has the following to say about standardization: “One of the more controversial subjects, which we didn’t get into too deeply, except to draw guidelines, was standardization.” Smith goes on to discuss the obvious need to standardize rail track gauges, plumbing fittings, and the like, but concludes, “Industry on the whole wanted more standardization than we could go with.” (Blackwell Smith, op. cit., pp. 106-7.) One must not go overboard looking for coherence among the various positions espoused by NRA administrators; along these lines it is worth remembering Smith’s statement some 60 years later: “Business’s reaction to my policy [Smith was speaking generally here of his collective proposals] to some extent was hostile. They wished that the codes were not as strict as I wanted them to be. Also, there was criticism from the liberal/labor side to the effect that the codes were more in favor of business than they should have been. I said, ‘We are guided by a squealometer. We tune policy until the squeals are the same pitch from both sides.’” (Smith, op. cit., p. 108.)

16 Quoted at p. 378 of Roos, op. cit.

17 Brand, op. cit. at pp. 159-60 cites in agreement extremely critical conclusions by Roos (op. cit. at p. 409) and Arthur Schlesinger, The Age of Roosevelt: The Coming of the New Deal, Boston: Houghton Mifflin, 1959, p. 133.

18 Roos acknowledges a breakdown by spring of 1934: “By March, 1934 something was urgently needed to encourage industry to observe code provisions; business support for the NRA had decreased materially and serious compliance difficulties had arisen.” (Roos, op. cit., at p. 318.) Brand dates the start of the compliance crisis much earlier, in the fall of 1933. (Brand, op. cit., p. 103.)

19 Lyon, op. cit., p. 264.

20 Lyon, op. cit., p. 268.

21 Lyon, op. cit., pp. 268-272. See also Peter H. Irons, The New Deal Lawyers, Princeton: Princeton University Press, 1982.

22 Section 3(a)(2) of The National Industrial Recovery Act, op. cit.

23 Section 6(b) of The National Industrial Recovery Act, op. cit.

24 Brand, op. cit.

25 Barbara Alexander and Gary D. Libecap, “The Effect of Cost Heterogeneity in the Success and Failure of the New Deal’s Agricultural and Industrial Programs,” Explorations in Economic History, 37 (2000), pp. 370-400.

26 Gordon, op. cit.

27 Section 7 of The National Industrial Recovery Act, reprinted at pp. 484-5 of Roos, op. cit.

28 Bernanke and Parkinson, op. cit., p. 214.

29 Romer, op. cit., p. 197.

30 Supreme Court of the United States, Nos. 854 and 864, October term, 1934, (decision issued May 27, 1935). Reprinted in Roos, op. cit., p. 580.

31 Ellis W. Hawley, The New Deal and the Problem of Monopoly: A Study in Economic Ambivalence, Princeton: Princeton University Press, 1966.

Citation: Alexander, Barbara. “National Recovery Administration”. EH.Net Encyclopedia, edited by Robert Whaples. August 14, 2001. URL http://eh.net/encyclopedia/the-national-recovery-administration/

The Economic History of Norway

Ola Honningdal Grytten, Norwegian School of Economics and Business Administration

Overview

Norway, with its population of 4.6 million on the northern flank of Europe, is today one of the wealthiest nations in the world, measured both by GDP per capita and by capital stock. On the United Nations Human Development Index, Norway has been among the top three countries for several years, and in some years the very top nation. Huge stocks of natural resources combined with a skilled labor force and the adoption of new technology made Norway a prosperous country during the nineteenth and twentieth centuries.

Table 1 shows rates of growth in the Norwegian economy from 1830 to the present using inflation-adjusted gross domestic product (GDP). This article splits the economic history of Norway into two major phases — before and after the nation gained its independence in 1814.

Table 1
Phases of Growth in the Real Gross Domestic Product of Norway, 1830-2003

(annual growth rates as percentages)

Year GDP GDP per capita
1830-1843 1.91 0.86
1843-1875 2.68 1.59
1875-1914 2.02 1.21
1914-1945 2.28 1.55
1945-1973 4.73 3.81
1973-2003 3.28 2.79
1830-2003 2.83 2.00

Source: Grytten (2004b)
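(The source does not state how the phase averages are computed; a minimal reading, assuming they are compound annual rates between the benchmark years, would be

\[ g = \left( \frac{Y_T}{Y_0} \right)^{1/T} - 1, \]

where \(Y_0\) and \(Y_T\) are real GDP, or GDP per capita, in the first and last year of a phase of length \(T\) years.)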

Before Independence

The Norwegian economy was traditionally based on local farming communities combined with other industries, chiefly fishing, hunting, wood and timber, along with a merchant fleet engaged in both domestic and international trade. Due to topography and climatic conditions the communities in the North and the West were more dependent on fish and foreign trade than the communities in the south and east, which relied mainly on agriculture. Prior to independence, agricultural output, fish catches and wars were decisive for the waves in the economy. This is reflected in Figure 1, which reports a consumer price index for Norway from 1516 to the present.

The peaks in this figure mark the sixteenth-century Price Revolution (1530s to 1590s), the Thirty Years War (1618-1648), the Great Nordic War (1700-1721), the Napoleonic Wars (1800-1815), the only period of hyperinflation in Norway — World War I (1914-1918) — and the stagflation period, i.e. high rates of inflation combined with a slowdown in production, in the 1970s and early 1980s.

Figure 1
Consumer Price Index for Norway, 1516-2003 (1850 = 100).

Source: Grytten (2004a)

During the last decades of the eighteenth century the Norwegian economy bloomed along with a first era of liberalism. Foreign trade in fish and timber had already been important for the Norwegian economy for centuries, and now the merchant fleet was growing rapidly. Bergen, located on the west coast, was the major city, with a Hanseatic office and one of the Nordic countries’ largest ports for domestic and foreign trade.

When Norway gained its independence from Denmark in 1814, after a close union lasting 417 years, it was a typical egalitarian country with a high degree of self-sufficiency in agriculture, fisheries and hunting. According to the population censuses of 1801 and 1815, more than ninety percent of the population of 0.9 million lived in rural areas, mostly on small farms.

After Independence (1814)

Figure 2 shows the annual development of GDP by expenditure (in fixed 2000 prices) from 1830 to 2003. The series reveals fairly steady growth, with few large fluctuations. However, economic growth as a more or less continuous process started in the 1840s. We can also conclude that the growth process slowed down during the last three decades of the nineteenth century. The years 1914-1945 were more volatile than any other period in question, while growth was impressive and steady until the mid 1970s and slower from then on.

Figure 2
Gross Domestic Product for Norway by Expenditure Category
(in 2000 Norwegian Kroner)

Source: Grytten (2004b)

Stagnation and Institution Building, 1814-1843

The newborn state lacked its own institutions, industrial entrepreneurs and domestic capital. However, due to its huge stocks of natural resources and its geographical closeness to the sea and to the United Kingdom, the new state, linked to Sweden in a loose royal union, seized its opportunities after some decades. By 1870 it had become a relatively wealthy nation. Measured by GDP per capita, Norway was well above the European average, in the middle of the West European countries, and in fact well above Sweden.

During the first decades after its independence from Denmark, the new state struggled with the international recession after the Napoleonic wars, deflationary monetary policy, and protectionism from the UK.

The Central Bank of Norway was founded in 1816, and a national currency, the spesidaler, pegged to silver, was introduced. The daler depreciated heavily during the first troubled years of recession in the 1820s.

The Great Boom, 1843-1875

After the Norwegian spesidaler regained its par value against silver in 1842, Norway saw a period of significant economic growth up to the mid 1870s. This impressive growth was matched by only a few other countries. The growth process was largely driven by high productivity growth in agriculture and the success of the foreign sector. The adoption of new structures and technology, along with the substitution of livestock for arable production, made labor productivity in agriculture increase by about 150 percent between 1835 and 1910. Exports of timber, fish and, in particular, maritime services achieved high growth rates. In fact, Norway became a major power in shipping services during this period, accounting for about seven percent of the world merchant fleet in 1875. Norwegian sailing vessels carried goods all over the world at low freight rates.

The success of the Norwegian foreign sector can be explained by a number of factors. Liberalization of world trade and high international demand secured a market for Norwegian goods and services. In addition, Norway had vast stocks of fish and timber along with maritime skills. According to recent calculations, GDP per capita grew at an annual rate of 1.6 percent from 1843 to 1876, well above the European average. At the same time the annual growth rate of Norwegian exports was 4.8 percent. The first modern large-scale manufacturing industry in Norway emerged in the 1840s, when textile plants and mechanized industry were established. A second wave of industrialization took place in the 1860s and 1870s. Following the rapid productivity growth in agriculture, the food processing and dairy industries also showed high growth in this period.

During this great boom, capital was imported mainly from Britain, but also from Sweden, Denmark and Germany, the four most important Norwegian trading partners at the time. In 1536 the King of Denmark and Norway had chosen the Lutheran faith as the state religion. As a consequence of the Reformation, reading became compulsory, and Norway thus acquired a generally skilled and independent labor force. The constitution of 1814 also cleared the way for liberalism and democracy. The puritan revivals of the nineteenth century created a business environment that fostered entrepreneurship, domestic capital formation and a productive labor force. In the western and southern parts of the country these puritan movements are still strong, both in daily life and within business.

Relative Stagnation with Industrialization, 1875-1914

Norway’s economy was hit hard during the “depression” from the mid 1870s to the early 1890s. GDP stagnated, particularly during the 1880s, and prices fell until 1896. This stagnation is mirrored in the large-scale emigration from Norway to North America in the 1880s. At its peak in 1882 as many as 28,804 persons, 1.5 percent of the population, left the country. All in all, 250,000 emigrated in the period 1879-1893, equal to 60 percent of the birth surplus. Only Ireland had higher emigration rates than Norway between 1836 and 1930, when 860,000 Norwegians left the country.

The long slowdown can largely be explained by Norway’s dependence on the international economy and in particular on the United Kingdom, which experienced slower economic growth than the other major economies of the time. As a result of the international slowdown, Norwegian exports contracted in some years and expanded in others. A second reason for the slowdown in Norway was the introduction of the international gold standard. Norway adopted gold in January 1874, and due to the trade deficit, lack of gold and lack of capital, the country experienced a huge contraction in gold reserves and in the money stock. The deflationary effect strangled the economy. Going onto the gold standard also caused the Norwegian currency, the krone, to appreciate, as gold became relatively more expensive compared to silver. A third explanation of Norway’s economic problems in the 1880s is the transformation from sailing to steam vessels. By 1875 Norway had the fourth largest merchant fleet in the world. However, due to lack of capital and technological skills, the transformation from sail to steam was slow. Norwegian ship owners found a niche in cheap second-hand sailing vessels. However, their market was diminishing, and when the Norwegian steam fleet finally surpassed the sailing fleet in 1907, Norway was no longer a major maritime power.

A short boom occurred from the early 1890s to 1899. Then a crash in the Norwegian building industry led to a major financial crash and stagnation in GDP per capita from 1900 to 1905. Thus from the middle of the 1870s until 1905 Norway performed relatively poorly. Measured in GDP per capita, Norway, like Britain, experienced a significant stagnation relative to most western economies.

After 1905, when Norway gained full independence from Sweden, a heavy wave of industrialization took place. In the 1890s the fish preserving and cellulose and paper industries had started to grow rapidly. From 1905, when Norsk Hydro was established, manufacturing industry connected to hydroelectric power took off. It is argued, quite convincingly, that if there was an industrial breakthrough in Norway, it must have taken place during the years 1905-1920. However, the primary sector, with its labor-intensive agriculture and increasingly capital-intensive fisheries, was still the biggest sector.

Crises and Growth, 1914-1945

Officially Norway was neutral during World War I. In economic terms, however, the government clearly took the side of the British and their allies. Through several treaties Norway gave privileges to the allied powers, which protected the Norwegian merchant fleet. During the war’s first years, Norwegian ship owners profited from the war, and the economy boomed. From 1917, when Germany declared unrestricted submarine warfare, Norway took heavy losses. A recession replaced the boom.

Norway suspended gold redemption in August 1914, and due to inflationary monetary policy during the war and in the first couple of years afterward, demand was very high. When the war came to an end this excess demand was met by a positive shift in supply. Thus Norway, like other Western countries, experienced a significant boom in the economy from the spring of 1919 to the early autumn of 1920. The boom was followed by high inflation, trade deficits, currency depreciation and an overheated economy.

The international postwar recession, beginning in the autumn of 1920, hit Norway more severely than most other countries. In 1921 GDP per capita fell by eleven percent, a decline exceeded only by that of the United Kingdom. There are two major reasons for the devastating effect of the post-war recession. In the first place, as a small open economy, Norway was more sensitive to international recessions than most other countries. This was particularly the case because the recession hit the country’s most important trading partners, the United Kingdom and Sweden, so hard. Secondly, the combination of a strong and mostly pro-cyclical inflationary monetary policy from 1914 to 1920 and a hard deflationary policy thereafter made the crisis worse (Figure 3).

Figure 3
Money Aggregates for Norway, 1910-1930

Source: Klovland (2004a)

In fact, Norway pursued a long, though not consistently applied, deflationary monetary policy aimed at restoring the par value of the krone (NOK) up to May 1928. In consequence, another recession hit the economy during the middle of the 1920s. Hence, Norway was one of the worst performers in the western world in the 1920s. This can best be seen in the number of bankruptcies, a huge financial crisis and mass unemployment. Bank losses amounted to seven percent of GDP in 1923. Total unemployment rose from about one percent in 1919 to more than eight percent in 1926 and 1927. In manufacturing it reached more than 18 percent in the same years.

Despite a rapid boom and success within the whaling industry and shipping services, the country never saw a convincing recovery before the Great Depression hit Europe in the late summer of 1930. The worst year for Norway was 1931, when GDP per capita fell by 8.4 percent. This, however, was not only due to the international crisis, but also to a massive and violent labor conflict that year. According to the implicit GDP deflator, prices fell by more than 63 percent from 1920 to 1933.

All in all, however, the depression of the 1930s was milder and shorter in Norway than in most western countries. This was partly due to the deflationary monetary policy of the 1920s, which had forced Norwegian companies to become more efficient in order to survive. Probably more important, however, was that Norway left gold as early as September 27, 1931, only a week after the United Kingdom. The countries that left gold early, and thereby could employ a more inflationary monetary policy, were the best performers in the 1930s. Among them were Norway and its most important trading partners, the United Kingdom and Sweden.

During the recovery period, Norway in particular saw growth in manufacturing output, exports and import substitution. This can to a large extent be explained by currency depreciation. Also, when the international merchant fleet contracted during the drop in international trade, the Norwegian fleet grew rapidly, as Norwegian ship owners were pioneers in the transformation from steam to diesel engines, tramp to line freights and into a new expanding niche: oil tankers.

The primary sector was still the largest in the economy during the interwar years. Both fisheries and agriculture struggled with overproduction problems, however. These were dealt with by introducing market controls and cartels, partly controlled by the industries themselves and partly by the government.

The business cycle reached its bottom in late 1932. Despite relatively rapid recovery and significant growth in both GDP and employment, unemployment stayed high, reaching 10-11 percent on an annual basis from 1931 to 1933 (Figure 4).

Figure 4
Unemployment Rate and Public Relief Work
as a Percent of the Work Force, 1919-1939

Source: Hodne and Grytten (2002)

The standard of living deteriorated in the primary sector, among those employed in domestic services, and for the underemployed and unemployed and their households. However, due to the strong deflation, which made consumer prices fall by more than 50 percent from the autumn of 1920 to the summer of 1933, employees in manufacturing, construction and crafts experienced an increase in real wages. Unemployment stayed persistently high due to huge growth in the labor supply, a result of immigration restrictions imposed by the North American countries from the 1920s onwards.

Denmark and Norway were both victims of a German surprise attack on April 9, 1940. After two months of fighting, the Allied troops in Norway surrendered on June 7, and the Norwegian royal family and government escaped to Britain.

From then until the end of the war there were two Norwegian economies: the domestic, German-controlled economy and the foreign, Norwegian- and Allied-controlled economy. The foreign economy was primarily based on the huge Norwegian merchant fleet, which was again among the biggest in the world, accounting for more than seven percent of total world tonnage. Ninety percent of this floating capital escaped the Germans. The ships were united into one state-controlled company, Nortraship, which earned money to finance the foreign economy. The domestic economy, however, struggled with a significant fall in production, inflationary pressure and the rationing of important goods, which three million Norwegians had to share with the 400,000 Germans occupying the country.

Economic Planning and Growth, 1945-1973

After the war the challenge was to reconstruct the economy and re-establish political and economic order. The Labor Party, in office from 1935, seized the opportunity to establish strict social democratic rule, with a growing public sector and widespread centralized economic planning. Norway at first declined the U.S. offer of financial aid after the war. However, due to a lack of hard currency, it accepted the Marshall aid program. Receiving 400 million dollars from 1948 to 1952, Norway was one of the biggest recipients per capita.

As part of the reconstruction efforts Norway joined the Bretton Woods system, GATT, the IMF and the World Bank. Norway also chose to become a member of NATO and the United Nations. In 1960 the country became a founding member of the European Free Trade Association (EFTA). In 1958 Norway had made the krone convertible to the U.S. dollar, as many other western countries did with their currencies.

The years from 1950 to 1973 are often called the golden era of the Norwegian economy. GDP per capita grew at an annual rate of 3.3 percent. Foreign trade grew even faster, unemployment barely existed and the inflation rate was stable. This has often been attributed to the large public sector and sound economic planning. The Nordic model, with its huge public sector, has been called a success in this period. A closer look, however, reveals that the Norwegian growth rate in this period was lower than that of most western nations. The same is true for Sweden and Denmark. The Nordic model delivered social security and evenly distributed wealth, but it did not necessarily deliver very high economic growth.

Figure 5
Public Sector as a Percent of GDP, 1900-1990

Source: Hodne and Grytten (2002)

Petroleum Economy and Neoliberalism, 1973 to the Present

After the collapse of the Bretton Woods system (between August 1971 and March 1973) and the oil price shock of autumn 1973, most developed economies went into a period of prolonged recession and slow growth. In 1969 Phillips Petroleum had discovered petroleum resources at the Ekofisk field, which was defined as part of the Norwegian continental shelf. This enabled Norway to run a countercyclical fiscal policy during the stagflation period of the 1970s. Thus, economic growth was higher and unemployment lower than in most other western countries. However, since the countercyclical policy focused on industry and company subsidies, Norwegian firms soon learned to adapt to policy makers rather than to markets. Hence, there were weak incentives to keep productivity and business structure in step with changes in international markets.

Norway lost significant competitiveness, and large-scale deindustrialization took place, despite efforts to save manufacturing industry. Another reason for deindustrialization was the huge growth of the profitable petroleum sector. Persistently high oil prices from autumn 1973 to the end of 1985 pushed labor costs upward, through spillover effects from high wages in the petroleum sector. High labor costs made the Norwegian foreign sector less competitive. Thus, Norway saw deindustrialization at a more rapid pace than most of her largest trading partners. Due to the petroleum sector, however, Norway experienced high growth rates in the last three decades of the twentieth century, bringing Norway to the top of the world GDP per capita list at the dawn of the new millennium. Nevertheless, Norway had economic problems both in the eighties and in the nineties.

In 1981 a conservative government replaced Labor, which had been in power for most of the post-war period. Norway had already joined the international wave of credit liberalization, and the new government gave further impetus to this policy. However, alongside the credit liberalization, parliament still ran a policy that prevented market forces from setting interest rates. Instead they were set by politicians, in contradiction to the credit liberalization policy. The level of interest rates was an important part of the political contest for power, and thus rates were set significantly below the market level. In consequence, a substantial credit boom was created in the early 1980s, and it continued to the late spring of 1986. As a result, Norway had monetary expansion and an artificial boom, which created an overheated economy. When oil prices fell dramatically from December 1985 onwards, the trade surplus suddenly turned into a huge deficit (Figure 6).

Figure 6
North Sea Oil Prices and Norway’s Trade Balance, 1975-2000

Source: Statistics Norway

The conservative-center government was forced to adopt a tighter fiscal policy, which the new Labor government pursued from May 1986. Interest rates were kept persistently high as the government now tried to run a credible fixed-currency policy. In the summer of 1990 the Norwegian krone was officially pegged to the ECU. When the international wave of currency speculation reached Norway in the autumn of 1992, the central bank finally had to suspend the fixed exchange rate and later devalue.

As a consequence of these years of monetary expansion and subsequent contraction, most western countries experienced financial crises. The crisis was relatively severe in Norway. House prices slid, consumers could not pay their bills, and bankruptcies and unemployment reached new heights. The state took over most of the larger commercial banks to avoid a total financial collapse.

After the suspension of the ECU peg and the following devaluation, Norway enjoyed growth until 1998, due to optimism, an international boom and high petroleum prices. Then the Asian financial crisis rattled the Norwegian stock market. At the same time petroleum prices fell rapidly, due to internal problems among the OPEC countries. Hence, the krone depreciated. The fixed exchange rate policy had to be abandoned and the government adopted inflation targeting. Along with the changes in monetary policy, the center coalition government was also able to pursue a tighter fiscal policy. At the same time interest rates were high. As a result, Norway escaped the overheating of 1993-1997 without any devastating effects. Today the country has a strong and sound economy.

The petroleum sector is still very important in Norway. In this respect the historical tradition of raw material dependency has enjoyed a renaissance. Unlike in many other countries rich in raw materials, natural resources have helped make Norway one of the most prosperous economies in the world. Important factors in Norway’s ability to turn resource abundance into economic prosperity are an educated work force, the adoption of advanced technology used in other leading countries, stable and reliable institutions, and democratic rule.

References

Basberg, Bjørn L. Handelsflåten i krig: Nortraship: Konkurrent og alliert. Oslo: Grøndahl and Dreyer, 1992.

Bergh, Tore Hanisch, Even Lange and Helge Pharo. Growth and Development. Oslo: NUPI, 1979.

Brautaset, Camilla. “Norwegian Exports, 1830-1865: In Perspective of Historical National Accounts.” Ph.D. dissertation. Norwegian School of Economics and Business Administration, 2002.

Bruland, Kristine. British Technology and European Industrialization. Cambridge: Cambridge University Press, 1989.

Danielsen, Rolf, Ståle Dyrvik, Tore Grønlie, Knut Helle and Edgar Hovland. Norway: A History from the Vikings to Our Own Times. Oslo: Scandinavian University Press, 1995.

Eitrheim, Øyvind, Jan T. Klovland and Jan F. Qvigstad, editors. Historical Monetary Statistics for Norway, 1819-2003. Oslo: Norges Banks skriftserie/Occasional Papers, no. 35, 2004.

Hanisch, Tore Jørgen. “Om virkninger av paripolitikken.” Historisk tidsskrift 58, no. 3 (1979): 223-238.

Hanisch, Tore Jørgen, Espen Søilen and Gunhild Ecklund. Norsk økonomisk politikk i det 20. århundre. Verdivalg i en åpen økonomi. Kristiansand: Høyskoleforlaget, 1999.

Grytten, Ola Honningdal. “A Norwegian Consumer Price Index 1819-1913 in a Scandinavian Perspective.” European Review of Economic History 8, no.1 (2004): 61-79.

Grytten, Ola Honningdal. “A Consumer Price Index for Norway, 1516-2003.” Norges Bank: Occasional Papers, no. 1 (2004a): 47-98.

Grytten, Ola Honningdal. “The Gross Domestic Product for Norway, 1830-2003.” Norges Bank: Occasional Papers, no. 1 (2004b): 241-288.

Hodne, Fritz. An Economic History of Norway, 1815-1970. Trondheim: Tapir, 1975.

Hodne, Fritz. The Norwegian Economy, 1920-1980. London: Croom Helm and St. Martin’s, 1983.

Hodne, Fritz and Ola Honningdal Grytten. Norsk økonomi i det 19. århundre. Bergen: Fagbokforlaget, 2000.

Hodne, Fritz and Ola Honningdal Grytten. Norsk økonomi i det 20. århundre. Bergen: Fagbokforlaget, 2002.

Klovland, Jan Tore. “Monetary Policy and Business Cycles in the Interwar Years: The Scandinavian Experience.” European Review of Economic History 2, no. 2 (1998):

Klovland, Jan Tore. “Monetary Aggregates in Norway, 1819-2003.” Norges Bank: Occasional Papers, no. 1 (2004a): 181-240.

Klovland, Jan Tore. “Historical Exchange Rate Data, 1819-2003”. Norges Bank: Occasional Papers, no. 1 (2004b): 289-328.

Lange, Even, editor. Teknologi i virksomhet. Verkstedsindustri i Norge etter 1840. Oslo: Ad Notam Forlag, 1989.

Nordvik, Helge W. “Finanspolitikken og den offentlige sektors rolle i norsk økonomi i mellomkrigstiden”. Historisk tidsskrift 58, no. 3 (1979): 239-268.

Sejersted, Francis. Demokratisk kapitalisme. Oslo: Universitetsforlaget, 1993.

Søilen, Espen. “Fra frischianisme til keynesianisme? En studie av norsk økonomisk politikk i lys av økonomisk teori, 1945-1980.” Ph.D. dissertation. Bergen: Norwegian School of Economics and Business Administration, 1998.

Citation: Grytten, Ola. “The Economic History of Norway”. EH.Net Encyclopedia, edited by Robert Whaples. March 16, 2008. URL http://eh.net/encyclopedia/the-economic-history-of-norway/

An Economic History of New Zealand in the Nineteenth and Twentieth Centuries

John Singleton, Victoria University of Wellington, New Zealand

Living standards in New Zealand were among the highest in the world between the late nineteenth century and the 1960s. But New Zealand’s economic growth was very sluggish between 1950 and the early 1990s, and most Western European countries, as well as several in East Asia, overtook New Zealand in terms of real per capita income. By the early 2000s, New Zealand’s GDP per capita was in the bottom half of the developed world.

Table 1:
Per capita GDP in New Zealand
compared with the United States and Australia
(in 1990 international dollars)

Year US Australia New Zealand NZ as % of US NZ as % of Australia
1840 1588 1374 400 25 29
1900 4091 4013 4298 105 107
1950 9561 7412 8456 88 114
2000 28129 21540 16010 57 74

Source: Angus Maddison, The World Economy: Historical Statistics. Paris: OECD, 2003, pp. 85-7.
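(The last two columns simply express New Zealand’s per capita GDP as a share of the US and Australian figures in the same row; for example, for 2000,

\[ \frac{16010}{28129} \approx 0.57 = 57\%, \qquad \frac{16010}{21540} \approx 0.74 = 74\%. \])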

Over the second half of the twentieth century, argue Greasley and Oxley (1999), New Zealand seemed in some respects to have more in common with Latin American countries than with other advanced western nations. In addition to its snail-like growth rate, New Zealand followed highly protectionist economic policies between 1938 and the 1980s. (In absolute terms, however, New Zealanders continued to be much better off than their Latin American counterparts.) Maddison (1991) put New Zealand in a middle-income group of countries, including the former Czechoslovakia, Hungary, Portugal, and Spain.

Origins and Development to 1914

When Europeans (mainly Britons) started to arrive in Aotearoa (New Zealand) in the early nineteenth century, they encountered a tribal society. Maori tribes made a living from agriculture, fishing, and hunting. Internal trade was conducted on the basis of gift exchange. Maori did not hold to the Western concept of exclusive property rights in land. The idea that land could be bought and sold was alien to them. Most early European residents were not permanent settlers. They were short-term male visitors involved in extractive activities such as sealing, whaling, and forestry. They traded with Maori for food, sexual services, and other supplies.

Growing contact between Maori and the British was difficult to manage. In 1840 the British Crown and some Maori signed the Treaty of Waitangi. The treaty, though subject to various interpretations, to some extent regularized the relationship between Maori and Europeans (or Pakeha). At roughly the same time, the first wave of settlers arrived from England to set up colonies including Wellington and Christchurch. Settlers were looking for a better life than they could obtain in overcrowded and class-ridden England. They wished to build a rural and largely self-sufficient society.

For some time, only the Crown was permitted to purchase land from Maori. This land was then either resold or leased to settlers. Many Maori felt – and many still feel – that they were forced to give up land, effectively at gunpoint, in return for a pittance. Perhaps they did not always grasp that land, once sold, was lost forever. Conflict over land led to intermittent warfare between Maori and settlers, especially in the 1860s. There was brutality on both sides, but the Europeans on the whole showed more restraint in New Zealand than in North America, Australia, or Southern Africa.

Maori actually required less land in the nineteenth century because their numbers were falling, possibly by half between the late eighteenth and late nineteenth centuries. By the 1860s, Maori were outnumbered by British settlers. The introduction of European diseases, alcohol, and guns contributed to the decline in population. Increased mobility and contact between tribes may also have spread disease. The Maori population did not begin to recover until the twentieth century.

Gold was discovered in several parts of New Zealand (including Thames and Otago) in the mid-nineteenth century, but the introduction of sheep farming in the 1850s gave a more enduring boost to the economy. Australian and New Zealand wool was in high demand in the textile mills of Yorkshire. Sheep farming necessitated the clearing of native forests and the planting of grasslands, which changed the appearance of large tracts of New Zealand. This work was expensive, and easy access to the London capital market was critical. Economic relations between New Zealand and Britain were strong, and remained so until the 1970s.

Between the mid-1870s and mid-1890s, New Zealand was adversely affected by weak export prices, and in some years there was net emigration. But wool prices recovered in the 1890s, just as new exports – meat and dairy produce – were coming to prominence. Until the advent of refrigeration in the early 1880s, New Zealand did not export meat and dairy produce. After the introduction of refrigeration, however, New Zealand foodstuffs found their way onto the dinner tables of working-class families in Britain, though not those of the middle and upper classes, who could afford fresh produce.

In comparative terms, the New Zealand economy was in its heyday in the two decades before 1914. New Zealand (though not its Maori shadow, Aotearoa) was a wealthy, dynamic, and egalitarian society. The total population in 1914 was slightly above one million. Exports consisted almost entirely of land-intensive pastoral commodities. Manufactures loomed large in New Zealand’s imports. High labor costs, and the absence of scale economies in the tiny domestic market, hindered industrialization, though there was some processing of export commodities and imports.

War, Depression and Recovery, 1914-38

World War One disrupted agricultural production in Europe, and created a robust demand for New Zealand’s primary exports. Encouraged by high export prices, New Zealand farmers borrowed and invested heavily between 1914 and 1920. Land changed hands at very high prices. Unfortunately, the early twenties brought the start of a prolonged slump in international commodity markets. Many farmers struggled to service and repay their debts.

The global economic downturn, beginning in 1929-30, was transmitted to New Zealand by the collapse in commodity prices on the London market. Farmers bore the brunt of the depression. At the trough, in 1931-32, net farm income was negative. Declining commodity prices increased the already onerous burden of servicing and repaying farm mortgages. Meat freezing works, woolen mills, and dairy factories were caught in the spiral of decline. Farmers had less to spend in the towns. Unemployment rose, and some of the urban jobless drifted back to the family farm. The burden of external debt, the bulk of which was in sterling, rose dramatically relative to export receipts. But a protracted balance of payments crisis was avoided, since the demand for imports fell sharply in response to the drop in incomes. The depression was not as serious in New Zealand as in many industrial countries. Prices were more flexible in the primary sector and in small business than in modern, capital-intensive industry. Nevertheless, the experience of depression profoundly affected New Zealanders’ attitudes towards the international economy for decades to come.

At first, there was no reason to expect that the downturn in 1929-30 was the prelude to the worst slump in history. As tax and customs revenue fell, the government trimmed expenditure in an attempt to balance the budget. Only in 1931 was the severity of the crisis realized. Further cuts were made in public spending. The government intervened in the labor market, securing an order for an all-round reduction in wages. It pressured and then forced the banks to reduce interest rates. The government sought to maintain confidence and restore prosperity by helping farms and other businesses to lower costs. But these policies did not lead to recovery.

Several factors contributed to the recovery that commenced in 1933-34. The New Zealand pound was devalued by 14 percent against sterling in January 1933. As most exports were sold for sterling, which was then converted into New Zealand pounds, the income of farmers was boosted at a stroke of the pen. Devaluation increased the money supply. Once economic actors, including the banks, were convinced that the devaluation was permanent, there was an increase in confidence and in lending. Other developments played their part. World commodity prices stabilized, and then began to pick up. Pastoral output and productivity continued to rise. The 1932 Ottawa Agreements on imperial trade strengthened New Zealand’s position in the British market at the expense of non-empire competitors such as Argentina, and prefigured an increase in the New Zealand tariff on non-empire manufactures. As was the case elsewhere, the recovery in New Zealand was not the product of a coherent economic strategy. When beneficial policies were adopted it was as much by accident as by design.

Once underway, however, New Zealand’s recovery was comparatively rapid and persisted over the second half of the thirties. A Labour government, elected towards the end of 1935, nationalized the central bank (the Reserve Bank of New Zealand). The government instructed the Reserve Bank to create advances in support of its agricultural marketing and state housing schemes. It became easier to obtain borrowed funds.

An Insulated Economy, 1938-1984

A balance of payments crisis in 1938-39 was met by the introduction of administrative restrictions on imports. Labour had not been prepared to deflate or devalue – the former would have increased unemployment, while the latter would have raised working class living costs. Although intended as a temporary expedient, the direct control of imports became a distinctive feature of New Zealand economic policy until the mid-1980s.

The doctrine of “insulationism” was expounded during the 1940s. Full employment was now the main priority. In the light of disappointing interwar experience, there were doubts about the ability of the pastoral sector to provide sufficient work for New Zealand’s growing population. There was a desire to create more industrial jobs, even though there seemed no prospect of achieving scale economies within such a small country. Uncertainty about export receipts, the need to maintain a high level of domestic demand, and the competitive weakness of the manufacturing sector, appeared to justify the retention of quantitative import controls.

After 1945, many Western countries retained controls over current account transactions for several years. When these controls were relaxed and then abolished in the fifties and early sixties, the anomalous nature of New Zealand’s position became more visible. Although successive governments intended to liberalize, in practice they achieved little, except with respect to trade with Australia.

The collapse of the Korean War commodity boom, in the early 1950s, marked an unfortunate turning point in New Zealand’s economic history. International conditions were unpropitious for the pastoral sector in the second half of the twentieth century. Despite the aspirations of GATT, the United States, Western Europe and Japan restricted agricultural imports, especially of temperate foodstuffs, subsidized their own farmers and, in the case of the Americans and the Europeans, dumped their surpluses in third markets. The British market, which remained open until 1973, when the United Kingdom was absorbed into the EEC, was too small to satisfy New Zealand. Moreover, even the British resorted to agricultural subsidies. Compared with the price of industrial goods, the price of agricultural produce tended to weaken over the long term.

Insulation was a boon to manufacturers, and New Zealand developed a highly diversified industrial structure. But competition was ineffectual, and firms were able to pass cost increases on to the consumer. Import barriers induced many British, American, and Australian multinationals to establish plants in New Zealand. The protected industrial economy did have some benefits. It created jobs – there was full employment until the 1970s – and it increased the stock of technical and managerial skills. But consumers and farmers were deprived of access to cheaper – and often better quality – imported goods. Their interests and welfare were neglected. Competing demand from protected industries also raised the costs of farm inputs, including labor power, and thus reduced the competitiveness of New Zealand’s key export sector.

By the early 1960s, policy makers had realized that New Zealand was falling behind in the race for greater prosperity. The British food market was under threat, as the Macmillan government began a lengthy campaign to enter the protectionist EEC. New Zealand began to look for other economic partners, and the most obvious candidate was Australia. In 1901, New Zealand had declined to join the new federation of Australian colonies. Thus it had been excluded from the Australian common market. After lengthy negotiations, a partial New Zealand-Australia Free Trade Agreement (NAFTA) was signed in 1965. Despite initial misgivings, many New Zealand firms found that they could compete in the Australian market, where tariffs against imports from the rest of the world remained quite high. But this had little bearing on their ability to compete with European, Asian, and North American firms. NAFTA was given renewed impetus by the Closer Economic Relations (CER) agreement of 1983.

Between 1973 and 1984, New Zealand governments were overwhelmed by a group of inter-related economic crises, including two serious supply shocks (the oil crises), rising inflation, and increasing unemployment. Robert Muldoon, the National Party (conservative) prime minister between 1975 and 1984, pursued increasingly erratic macroeconomic policies. He tightened government control over the economy in the early eighties. There were dramatic fluctuations in inflation and in economic growth. In desperation, Muldoon imposed a wage and price freeze in 1982-84. He also mounted a program of large-scale investments, including the expansion of a steel works, and the construction of chemical plants and an oil refinery. By means of these investments, he hoped to reduce the import bill and secure a durable improvement in the balance of payments. But the “Think Big” strategy failed – the projects were inadequately costed, and inherently risky. Although Muldoon’s intention had been to stabilize the economy, his policies had the opposite effect.

Economic Reform, 1984-2000

Muldoon’s policies were discredited, and in 1984 the Labour Party came to power. All other economic strategies having failed, Labour resolved to deregulate and restore the market process. (This seemed very odd at the time.) Within a week of the election, virtually all controls over interest rates had been abolished. Financial markets were deregulated, and, in March 1985, the New Zealand dollar was floated. Other changes followed, including the sale of public sector trading organizations, the reduction of tariffs and the elimination of import licensing. However, reform of the labor market was not completed until the early 1990s, by which time National (this time without Muldoon or his policies) was back in office.

Once credit was no longer rationed, there was a large increase in private sector borrowing, and a boom in asset prices. Numerous speculative investment and property companies were set up in the mid-eighties. New Zealand’s banks, which were not used to managing risk in a deregulated environment, scrambled to lend to speculators in an effort not to miss out on big profits. Many of these ventures turned sour, especially after the 1987 share market crash. Banks were forced to reduce their lending, to the detriment of sound as well as unsound borrowers.

Tight monetary policy and financial deregulation led to rising interest rates after 1984. The New Zealand dollar appreciated strongly. Farmers bore the initial brunt of high borrowing costs and a rising real exchange rate. Manufactured imports also became more competitive, and many inefficient firms were forced to close. Unemployment rose in the late eighties and early nineties. The early 1990s were marked by an international recession, which was particularly painful in New Zealand, not least because of the high hopes raised by the post-1984 reforms.

An economic recovery began towards the end of 1991. Apart from a brief interruption in 1998, strong growth persisted for the remainder of the decade. Confidence gradually returned to the business sector, and unemployment began to recede. After a lengthy lag, the economic reforms appeared to be paying off for the majority of the population.

Large structural changes took place after 1984. Factors of production moved out of the protected manufacturing sector and were drawn into services. Tourism boomed as the relative cost of international travel fell. The face of the primary sector also changed, and the wine industry began to penetrate world markets. Not all manufacturers struggled, however. Some firms adapted to the new environment and became more export-oriented. For instance, a small engineering company, Scott Technology, became a world leader in equipment for the manufacture of refrigerators and washing machines.

Annual inflation was reduced to low single digits by the early nineties. Price stability was locked in through the 1989 Reserve Bank Act. This legislation gave the central bank operational autonomy, while compelling it to focus on the achievement and maintenance of price stability rather than other macroeconomic objectives. The Reserve Bank of New Zealand was the first central bank in the world to adopt a regime of inflation targeting. The 1994 Fiscal Responsibility Act committed governments to sound finance and the reduction of public debt.

By 2000, New Zealand’s population was approaching four million. Overall, the reforms of the eighties and nineties were responsible for creating a more competitive economy. New Zealand’s economic decline relative to the rest of the OECD was halted, though it was not reversed. In the nineties, New Zealand enjoyed faster economic growth than either Germany or Japan, an outcome that would have been inconceivable a few years earlier. But many New Zealanders were not satisfied. In particular, they were galled that their closest neighbor, Australia, was growing even faster. Australia, however, was an inherently much wealthier country with massive mineral deposits.

Assessment

Several explanations have been offered for New Zealand’s relatively poor economic performance during the twentieth century.

Wool, meat, and dairy produce were the foundations of New Zealand’s prosperity in Victorian and Edwardian times. After 1920, however, international market conditions were generally unfavorable to pastoral exports. New Zealand had the wrong comparative advantage to enjoy rapid growth in the twentieth century.

Attempts to diversify were only partially successful. High labor costs and the small size of the domestic market hindered the efficient production of standardized labor-intensive goods (e.g. garments) and standardized capital-intensive goods (e.g. autos). New Zealand might have specialized in customized and skill-intensive manufactures, but the policy environment was not conducive to the promotion of excellence in niche markets. Between 1938 and the 1980s, Latin American-style trade policies fostered the growth of a ramshackle manufacturing sector. Only in the late eighties did New Zealand decisively reject this regime.

Geographical and geological factors also worked to New Zealand’s disadvantage. Australia drew ahead of New Zealand in the 1960s, following the discovery of large mineral deposits for which there was a big market in Japan. Staple theory suggests that developing countries may industrialize successfully by processing their own primary products, instead of by exporting them in a raw state. Canada had coal and minerals, and became a significant industrial power. But New Zealand’s staples of wool, meat and dairy produce offered limited downstream potential.

Canada also took advantage of its proximity to the U.S. market, and of access to U.S. capital and technology. American-style institutions in the labor market, business, education and government became popular in Canada. New Zealand and Australia relied on arguably inferior British-style institutions. New Zealand was a long way from the world’s economic powerhouses, and it was difficult for its firms to establish and maintain contact with potential customers and collaborators in Europe, North America, or Asia.

Clearly, New Zealand’s problems were not all of its own making. The elimination of agricultural protectionism in the northern hemisphere would have given a huge boost to the New Zealand economy. On the other hand, in the period between the late 1930s and the mid-1980s, New Zealand followed inward-looking economic policies that hindered economic efficiency and flexibility.


Citation: Singleton, John. “New Zealand in the Nineteenth and Twentieth Centuries”. EH.Net Encyclopedia, edited by Robert Whaples. February 10, 2008. URL http://eh.net/encyclopedia/an-economic-history-of-new-zealand-in-the-nineteenth-and-twentieth-centuries/